This is the fourth installment of my series on Big Data and privacy, focused on Firmin DeBrabander’s Life After Privacy. Part 1 was a summary of DeBrabander’s book. Part 2 criticized his “victim blaming” approach to the subject (scare quotes mine). Part 3 criticized what I termed his “counsels of despair” with respect to pushing back on Big Data.
I had promised, at the end of Part 3, to discuss examples of successful activism vis-à-vis Big Data. But on second thought, it seems better to defer the case studies until the end of the series, treating them as a set of appendices to the main argument. So in this installment, I’ll continue with my main argument against DeBrabander, focusing on the last of his three “counsels of despair,” which I call Sour Grapes:
Sour Grapes: Because we lack a good philosophical account of the nature and value of the privacy we’ve given up, we lack a defensible motivation to fight very hard to get it back.
As the preceding formulation makes clear, Sour Grapes is really transitional between a counsel of despair and a change of subject. In making his argument, DeBrabander canvasses the philosophical and legal literature in search of a serviceable definition of privacy, and a defensible account of its value. Finding neither, he reaches the conclusion that there’s none to be had. That, in turn, becomes the rationale for his suggestion that we change the subject. Instead of focusing on privacy, we ought to focus elsewhere; instead of defending privacy, we ought to defend other things.
I don’t think DeBrabander’s survey of the literature is comprehensive or charitable enough to justify the dismissal he offers, and I happen to disagree with many, if not most, of the strictly philosophical criticisms he makes about the value of privacy. But I want to save these deeper philosophical issues for later, focusing in this post on his overall argumentative strategy.
Suppose for argument’s sake that DeBrabander is right: the analyses of and arguments for privacy in the philosophical literature all fail. Does it follow that we should change the subject and focus elsewhere? I don’t think so.
One response to DeBrabander might be, “No, we shouldn’t change the subject; we should try harder to come up with better analyses of and arguments for privacy, and take it from there.” I don’t disagree with that, but I have a different response to offer.
Suppose that we’ve done the best we can as far as philosophical accounts of privacy, and come up short. Regardless, we have ample reason to regard Big Data’s infringements on our privacy as a threat to us, and ample motivation to push back. All we need to know is that they’re threatening infringements, not why. We don’t need a deep philosophical account of privacy to come to this conclusion, valuable as that might be. We just need a bit of backbone, and a bit of common sense.
Consider a different but related subject, private property rights. I happen to know the literature on private property better than I know the literature on privacy. So I can say with some confidence that no one has yet produced an absolutely foolproof philosophical defense of the right to private property, one that traces that right to its deep foundations, that answers all relevant objections, that deals with all of the hard cases (or even not-so-hard cases), and that identifies the truth-conditions for all relevant propositions involved in justifying or applying the relevant norms. And yet, no one, not even Marx (much less Rawls), has proposed the wholesale abolition of private property.
Barring the adoption of a radically revisionary epistemology, the absence of a deep philosophical justification for private property rights has some, but relatively limited, implications for practice. It rules out views that fetishize or absolutize private property rights, exalting them over all other normative considerations. It also rules out dogmatic views that entail that the poor have to die in droves lest the holdings of the super-rich be affected in the least jot or tittle. And it may have other implications of this sort. But it doesn’t entail that in ordinary cases involving ordinary people, when confronted with a thief, you’re obliged to let him steal from you at will.
We may not have a comprehensive account or a foolproof defense of private property rights, but we know enough to know that private property serves a genuine human need for boundaries and control. The basic rationale is something like this: We need space of our own, and things of our own, to exercise the autonomy that’s central to human agency. We’re not hive creatures like ants and bees, or herd animals like cattle, that can dispense with enforceable boundaries, or share literally all of our life’s possessions in common. We have a need for enforceable norms that distinguish mine from thine, and given the imperatives of enforcement, a correlative need for some way of marking and operationalizing these norms.*
If what’s mine belongs to me by right, then when you intrude on it, I have, at the least, a right to ask you to leave. If you don’t leave, I have the right to push you out. If you absolutely refuse to get the message, I have the right to employ more severe measures to the same end. Yes, there are or have been societies where even this thin commitment to private property is (or has been) regularly flouted, but then, there are (or have been) societies where murder, rape, and slavery have been regular happenings, too. A theory of private property need not provide a philosophical answer to relativism of that sort in order to assume that a thief needs to be treated as a criminal, and met with something more forcible than a furrowed brow. It just needs to accept a generic account of the normative separateness of persons–the account common, say, both to a Rawls and a Nozick–and take that assumption as a first principle for further inquiry. Justifying the assumption involves a separate inquiry altogether.
So it is, I would say, when it comes to privacy. As in the case of private property, we have a commonsense notion of what privacy is, and a thin, generic account of its value. Like private property (and in much the same way), privacy serves a need to preserve and safeguard the separateness of persons. The details are no doubt contestable, but the fact itself is clear enough. So is the threat to privacy posed by Big Data. We now have ample documentation, in books like Zuboff’s Age of Surveillance Capitalism, Schneier’s Data and Goliath, Snowden’s Permanent Record, and Gellman’s Dark Mirror, of the scale and depth of Big Data’s intrusions into our lives. Other authors have documented the threats we face from foreign or underworld actors.** So it’s hard to say that we’re safe enough to change the subject and move on. The threat we face is as big as any we’re ever likely to face.
The books I’ve highlighted above are, I realize, controversial ones, so let me find a relatively neutral way of putting the point. The simplest way of summarizing the “ample documentation” I have in mind is to realize that short of going deep into the wilderness sans electronic devices of any kind, there is literally nothing you can do to escape surveillance in the modern world, no matter how intimate or private the activity you’re engaged in. And even that wilderness escape would be temporary, as your absence itself is something that’s being monitored.
Whether you’re defecating, having sex, watching ASMR, talking to yourself, mourning a loss, writing in your ‘private’ journal, watching TV, changing clothes, surfing the Internet, praying to God, talking to your therapist, or just sitting idly in an otherwise unoccupied room, it’s entirely possible that you’re being spied on in a fully granular way by electronic means. It’s probable that some of us–many of us, even millions of us–are being spied on in at least a semi-granular way, with strong incentives on the government’s part to move from semi-granular to more-than-that. There are 1.9 million people on the FBI’s Terrorist Watch List alone. But there are far more watch lists out there than that, and far more forms of surveillance than being on a watch list.
What’s certain is that if someone wanted to surveil every aspect of your life, it’s far more likely that they would be able to do so and get away with it than that you would be able to stop them, or hold them accountable for it, even once you figured out that they were doing it. And you probably wouldn’t.
It’s easy at this point to hurl charges of “paranoia,” to take refuge in appeals to technical mumbo-jumbo, or to offer soothing re-assurances about institutional “checks and balances,” legal safeguards, or the wonders of strong passwords, end-to-end encryption, and VPNs. A sensible person, we’re told, should trust the system, trust the experts, and acknowledge that the probability of real harm due to surveillance or data breach is very low. On the contrary, we should be focusing instead on the bounty that Big Data has brought us. Think about the unprecedented benefits to be reaped from the unprecedented access to data now at our fingertips. For millennia, our practical deliberations about the world were based on little more than folk wisdom–mere hunches and intuitions. Big Data is to ordinary life what Galileo and Newton were to physics: the means of hitting escape velocity to a genuinely scientific, quantitatively rigorous view of the world. Data replaces ignorance. Big Data replaces Big Ignorance. Who but a Know Nothing could complain?
It’s hard to dispute that Big Data has brought us benefits. But on closer inspection, many of the supposed benefits have been less a matter of deliverables than of Big Hype and Big Promises (see Tenner’s Efficiency Paradox on this). And whatever the benefits, the dangers are there. No matter what re-assurances anyone offers, or what hairsplitting technicalities are thrown up to obscure the facts, it’s an undeniable fact that Big Data is designed to spy on the entire population of the wired world in a fully granular way. It is fully equipped to get full access to everything it deems “relevant” to any aim it regards as in the “public interest”–all of your data, all of your images, every form of real-time access to your life that can be gotten by webcam or recording device or whatever. Whereas if you reveal “too much” of its secret doings by your conception of “relevance” or “the public interest”–no matter how intuitively plausible–you become a “traitor.” (Bear in mind that my definition of “Big Data” includes “foreign actors.” It includes everyone engaged in the Big Data enterprise, no matter who or where they are.)
It’s not an accident that your devices don’t have to be on, or plugged in, to become tracking devices. Nor is it an accident that your car, microwave, TV, or refrigerator is among those devices. Nor is it an accident that the agencies tasked with protecting your digital security deliberately subject you to the risk of malicious hacking, surveillance, and malware (so-called “0-day vulnerabilities”), so as to increase their access to their adversaries, which they then equate with yours. Nor is it an accident that gigantic data breaches happen all the time–everywhere from the Pentagon to your local hospital–and are then either covered up or minimized, so as to ease the way back to business-as-usual. There are no technical limits on total surveillance, and no reliable ways of avoiding it or ensuring in any given case that it isn’t happening. It’s unlikely, of course, that everyone is being spied on in a fine-grained way at the same time. But there is no way, in any given case, to determine that a given person (or group, where the “group” might number in the millions) is not being spied on in a fully granular way right now. No one has a place of sanctuary, and no one can claim immunity, in principle, from being a target.
So the issue is not whether Big Data can surveil us in a totalitarian way (it obviously can) or whether it does (if it’s done skillfully enough, the targets won’t know), but whether it would. This latter question reduces to whether it has a motivation to do so, and/or whether any genuine obstacles stand in the way of its doing so. This is not a quantitative matter, to be answered by “drilling down” into The Data. It’s primarily a moral question, one to which the sciences of data analytics and cybersecurity have little or nothing to contribute in the way of an answer. Given what Big Data can do, what attitude should we adopt with respect to what it would do?
Attitudes here range from fideism on the one hand, to paranoia on the other. You could repose the kind of trust in Big Data that theists repose in the God of Abraham: nothing to worry about. Or you could distrust Big Data on the model of Paranoid Personality Disorder: everything to worry about. Or you could find the mean between these extremes. Unsurprisingly, it’s easier to make fun of the extremes than to find the mean between them.
Here’s a first stab at finding it: In some cases, we have good-enough reason for trusting Big Data, or proceeding as though we do, if only because suspicion would lead us to paralysis and make us worse off than otherwise. In those cases, trust becomes the default for lack of better alternatives. I can’t, for instance, use my computer or phone at all if I assume that I’m under constant granular surveillance no matter what precautions I take. Nor can I go to school, get a job, purchase insurance, get credit, buy a house, or go to the hospital on paranoid assumptions. Once I take appropriate precautions in these cases (and many others), I have to leave the matter there rather than obsess over the possibility that I’m being surveilled. Things are different, of course, if I have a specific reason to think that I in particular am being surveilled, but for now, let me consider the modal rather than the outlying case. It’s possible but unlikely that the modal individual is being specifically targeted for surveillance. It would indeed be paranoid to proceed as though Big Data were literally out to get everyone.
Notice, however, that what I’ve just given is mostly a pragmatic, not an evidential argument against paranoia (or something close to it). The argument doesn’t say that we altogether lack reason to be paranoid. It doesn’t deny that you, the reader, could be under full-scale surveillance right now. It just says that if you are the modal individual, it’s unlikely that you are under full surveillance. So whether you are or you aren’t, take relevant precautions and live your life: paranoia is bound to be worse for you than surveillance. The claim here is not so much that Big Data is trustworthy, but that we have no choice but to depend on it.
The rule I’ve defended so far is: if you have no choice but to rely on Big Data, then take all due precautions, and rely on it. But that’s a pretty grim way of going about your life. What about the parts of life where you have a choice? Reading this blog, after all, is one of them. Should you avoid it for fear of being surveilled?
At this point, it might help to consider, not the worst-case scenario but a garden-variety one on the pessimistic side. The scenario I’m about to generate is one derived partly from reading up on the subject, but mostly from working in Big Data, and generalizing what I see on the job to what might be the case elsewhere. I can’t prove that my generalizations hold good for all of Big Data, or even for x% of it. My claim is simply that my scenario has a certain plausibility to it, and if so, has certain practical implications for the present inquiry.
Suppose you’re trying to figure out whether Big Data might surveil you, and if so, to what extent, and if so, what to do about it. You might think that as a matter of procedure, you ought first to inquire whether Big Data has a motivation to surveil you, and only then to look into the obstacles it faces to doing so, given this motivation.
But in my experience, that’s not how Big Data works–not in government, and not in corporate life. Big Data doesn’t start with a determinate reason for engaging in some determinate sort of data harvesting, and then deal with the obstacles to engaging in it as they arise. It starts by eliminating as many obstacles as stand in the way of its getting unlimited data, and then stops (or really, pauses) when the process of eliminating these obstacles becomes too costly for the moment. With the obstacles out of the way, it then gathers all the data it can, given the existing constraints on collection, bracketing any determinate reason for gathering any particular tranche of data. Once it gathers the data, it hits “Run” or “Enter,” and the data “flows into” whatever location it’s supposed to go. Then it sits there until someone figures out what to do with it.
Put it this way: If I’m given the task of managing, say, the revenue cycle of a given hospital, I wouldn’t, in asking the hospital for data, ask it for some very circumscribed dataset tailored to some very specific purpose. I would ask for all the data they can give me, then figure out what I can do with it. And of course “what I can do with it” doesn’t really mean me in particular, on any particular time horizon, relative to any known extraction, ingestion, or processing capability. It’s a completely open-ended project. It means: give me all the data you can, so that someone, somewhere down the line, possibly well after I’ve left the company, can conjure up some way of monetizing what you’ve given us, given the capabilities we then have for doing so. Do I know what that is? No. Nor do I need to. It’s all just standard business practice. First you ask for the sky, then you figure out what the sky is for.
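To make the pattern concrete, here’s a minimal sketch of the workflow I’m describing, in Python. Everything in it is hypothetical–the stub classes, the table names, the toy “data lake”–and no real pipeline is this simple. But the logic is faithful to the practice: enumerate whatever the source will give you, extract all of it, and park it.

```python
# A caricature of "collect everything, decide later" harvesting.
# The classes and table names are hypothetical stand-ins, not any
# vendor's actual API.

class Source:
    """Stand-in for, say, a hospital's source system."""
    def __init__(self, data):
        self.data = data                # {table_name: rows}

    def list_tables(self):
        return list(self.data)          # enumerate ALL available tables

    def extract(self, table):
        return self.data[table]        # every row; no purpose-specific filter

def harvest_everything(source, landing_zone):
    # No scoped request tailored to a known use; take whatever is offered.
    for table in source.list_tables():
        landing_zone[table] = source.extract(table)
    # The data now sits there until someone figures out what to do with it.

if __name__ == "__main__":
    hospital = Source({
        "patients": [{"id": 1}],
        "charges": [{"id": 1, "amount": 100}],
        "notes": [{"id": 1, "text": "..."}],    # even free-text notes
    })
    lake = {}
    harvest_everything(hospital, lake)
    print(sorted(lake))    # ['charges', 'notes', 'patients']
```

The tell, such as it is, lies in what the sketch lacks: there’s no parameter anywhere for the purpose of the collection.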
The extravagance of this enterprise is justified by two assumptions. (1) The value of some proper subset of any dataset will be transparent from the outset. You just look at the data, and it jumps out at you. (2) The value of the rest of the data may not be obvious, but–the assumption goes–there’s surely something that can be done with it at some point, whenever and whatever that might be. The “something” just has to be dreamed up, and people get paid good money to do just that. (I don’t, but “people” do.) In that respect, data harvesting and data mining are driven less by the cold, clammy hand of reason than by the wildest flights of visionary fantasy. Don’t think Homo economicus. Think “Minority Report.”
If this is right, we don’t really need to ask about Big Data’s “motivations” for total surveillance. The motivation is built right into the nature of the thing itself. Big Data is designed for total surveillance because the whole point of creating it is to engage in total surveillance. It simply is a mechanism or system for total surveillance, and unless otherwise constrained, naturally tends in that direction. If it’s to be constrained, the constraints have to be exogenous to Big Data itself. Data ingestion tools often have built-in processing limits, but I’ve never encountered one that asked, “Does running this process really satisfy the demands of justice?”
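By way of illustration, consider the knobs a typical ingestion job exposes. The configuration below is hypothetical–no real tool uses exactly this schema–but the moral is accurate to my experience: every built-in constraint is operational, and none is normative.

```python
# Hypothetical settings for an ingestion job (not any real tool's schema).
# Note what each built-in limit protects, and what none of them protects.
ingestion_config = {
    "max_rows_per_batch": 500_000,   # protects the pipeline's throughput
    "timeout_seconds": 3600,         # protects the cluster's schedule
    "max_retries": 3,                # protects the job's completion rate
    "max_parallel_workers": 16,      # protects the compute budget
    # Conspicuously absent: any key like "is_this_collection_justified".
}
```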
Our question then becomes how effective or reliable these exogenous constraints are. Two complementary constraints stand out: norms and trust. Big Data is subject to certain norms, moral, customary, and legal. If followed (one is initially inclined to assume), these constraints are pretty constraining. Given full or even partial compliance with them, total surveillance is rendered impossible. That seems comforting enough. It certainly rules out all-out paranoia.
Of course, the norms in question have to be put into practice by people–moral agents. Call them “stewards.” The norms are only as good as these stewards’ adherence to them. But (one is also inclined to assume) we trust (or ought to trust) these data stewards to follow most of the most important norms most of the time, or at least when it’s most important to do so. Perfect compliance and total trust are impossible. Only a paranoid demands such perfection from the world. But partial trust in partial compliance with most of the most important norms at most of the most important times doesn’t seem all that bad. Does it?
We now find ourselves in a genuinely odd predicament. Big Data was supposed to liberate us from innumerate hunches and gut-based intuitions. It was supposed to supply us with the hard, cold quantitative data that would inform our deliberations and make them “scientific.” It was supposed to rescue us from our primitive reliance on merely anecdotal information about identifiable people, and give us access to the impersonal realms of Pure Data. We now want to know whether we can trust Big Data not to eat us alive, but ironically, we lack the data to answer our question.
To get data on the trustworthy character of Big Data–meaning, the non-digital, flesh-and-blood people who run and staff it–we would have to erect a system of surveillance over Big Data itself. But as mere laypersons without technical expertise, we can’t be expected to know how to do this. The alternative to doing it ourselves would be to outsource our problem to Big Data. In other words, we could ask Big Data to apply the methods of Big Data to itself, with the goal of revealing to us that it, Big Data, was worthy of our trust. If we did this, of course, we’d first have to trust Big Data to do what we asked it to do. But the whole problem is that we lack the data to get the enterprise off the ground. We can’t trust Big Data to give us the data that tells us that Big Data deserves our trust. And so, it seems, for all the bells and whistles of norms, laws, checks, balances, and institutions, we have no reason to trust Big Data at all.
In theory, the constraints on Big Data’s crossing any problematic boundaries are there–written down in black and white. There are laws, regulations, norms, standard operating procedures, penalties, and so on. But “black and white” is not where the human beings in Management manage things or the ones in Operations operate things. So in practice, we can’t be sure anyone is really observing those constraints. We can hope. We can pray. We can gamble. To be fair, on a good day, they probably are observing some of those constraints. But not every day is a good one. There’s really no way to know. Big Data, then, turns out to be a twenty-first century version of Aristotle’s God, but with a twist. Aristotle’s God was a Prime Mover Unmoved. Big Data is an Unvalidated Validator Seeking Validation. Ironic that an enterprise so wedded to data should rely so heavily on faith.
Figuring out whether to trust the Big Data god is so difficult a task that you might wonder about the rationale for it. Why bother? Suppose we know that we’re unlikely to be surveilled in a fully granular way, but can’t figure out how granular the surveillance ends up being. Can’t we leave matters there? Where, you might wonder, is the harm in being surveilled? The readers of this blog are no doubt wholesome, clean-living folk. Surely there are no criminals or perverts lurking here. If you have nothing to hide, why hide? If you have nothing to hide, why demand the right to lurk in the shadows alongside the terrorists, criminals, scam-artists, bullies, fascists, bigots, pedophiles, human traffickers, and traitors of the Dark Web? They deserve to be surveilled. It benefits us when they are. So what if we’re surveilled in the bargain? Maybe that’s just the price of our online way of life. Or so it’s tempting to think.
There are, I think, at least two major dangers of mass surveillance, one relatively concrete, the other more abstract.
The concrete danger comes from the risks of real harm you incur as a matter of data leakage. The more of your data is exposed, the more of it is out there for malicious eyes to see. The more that’s true, the more the malicious minds behind those eyes can use your data to destroy your life. When that happens, expect no help or sympathy from anyone. You’ll be left alone to re-build what others have taken from you, as the ever-smiling, positivity-besotted champions of Big Data look the other way and “move on.” Every omelette requires its broken eggs, after all, and every victim of a data breach is one of them. As the victim of a major data breach myself, I have lots to say about being one, which I’ll discuss in a separate post.
Granted, some parts of Big Data (e.g., the national security establishment) exist in theory to protect you against threats that come from other parts of it (e.g., malign foreign actors). But huge parts of Big Data, including many domestic actors, have no such aim. They exist to make money, and if exposing your data is a risk that their business model requires, they’re only too happy to have you assume that risk for them. What difference does it make to them if your Social Security number is getting gang-banged in the Cloud? It’s not as though, stripped of your data and your identity, you pose much of a threat to the guilty parties. How much of a threat does someone pose if they’re incapable of validating their own existence?
The abstract problem is harder to convey. It’s a truism that we act differently when we’re being observed as compared with when we’re not, and differently under conditions of uncertainty than of knowledge. What if we’re always uncertain whether or not we’re being observed? Then whoever has put us in that situation is, in a deep sense, in control of us. They literally control how we act. As a result, we lose the capacity to act as we would if we knew with confidence that we were not being observed. Try to talk, write, go to the bathroom, change your clothes, have sex, sleep, or just sit in undisturbed repose, in the knowledge that no matter where you go, it’s possible you’re being watched by some third party not explicitly or consensually in the room with you.
The experience is a lot like being a theist, except that belief in theism is usually a matter of voluntary choice, and also except that the self-appointed deities observing you fall well short of possessing any of the divine attributes, like omnibenevolence. To be under constant observation is more like living in what Carl Sagan called a “demon-haunted world,” or what Descartes imagined as the world of the Evil Demon–except that in this case, the demons are real, and well-remunerated.***
You needn’t share my cynicism about Big Data to come to a functionally similar conclusion. You just have to wonder whether anyone has the right to change your behavior by subjecting you to perpetual uncertainty about the extent to which you’re under observation. Isn’t Big Data taking up an implicitly God-like vantage on us, and enacting what Sartre famously called “the desire to be God”? I’d say so. Of course, Sartre’s criticisms were offered in Sartre’s famously undifferentiated way: according to Sartre, just about everyone suffers from the desire to be God. But Sartre never met Big Data. Maybe we all have a God complex, but Big Data’s desire to be God comes a lot closer to satisfying that desire than any of the rest of us do.
For those of us who write, whether as a hobby or as a profession, this means that in principle, there is no such thing as a private thought once that thought is written down.**** Most people nowadays write on computers, not on paper. It’s nearly impossible to purchase what’s called an “air-gapped” computer, meaning one that’s never been hooked up to the Internet, and can function properly without being hooked up. To ask for an air-gapped computer is immediately to raise alarms: who but a criminal would want such a computer? In every case where I’ve made the inquiry, I’ve been made to feel like a pedophile intent on hiding child pornography from law enforcement. Merely to raise the question, then, is to invite surveillance.
If a computer is hooked up to the Internet, it can be surveilled. Indeed, if a computer is hooked up to the Internet, it’s being surveilled. The question is not whether you’re being surveilled online, but when, by whom, and for what. In the nature of the case, the latter questions are unanswerable. And it would be reckless to assume that the answers are entirely benign: a look in your spam filter, or at the cookies in your browser, should disprove that. But if a surveillance attempt is successful, it’s no more likely to be benign than the ones caught in your spam filter. It’s just less likely to be detected at all.
What this means is that the concept of a genuinely private journal or private writing or private written thought is now obsolete. If you turn on an electronic device to write something, you can be surveilled. If your program has an auto-save, it won’t do to save the file on a thumb drive. It’s still saved to the hard drive, which is accessible via the Internet. Same if you save to the hard drive. Same if you save to the Cloud. If some day, you cross an international border, and some border official decides that you can’t cross unless you give him access to your private thoughts, what recourse would you have? The more organized you are, the more accessible your “private” thoughts would be with a few mouse clicks. His incuriosity or boredom would be your only defense.
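The mechanics are easy to sketch. The toy Python below is purely illustrative–the paths and function names are my inventions, not any real editor’s internals–but real word processors behave analogously: a timer writes recovery copies to a fixed spot on the internal disk, no matter where you choose to save the “real” file.

```python
import os
import tempfile

# Illustrative only: a toy version of an editor's auto-save. The paths
# and names are hypothetical; the behavior mirrors real word processors.
RECOVERY_DIR = os.path.join(tempfile.gettempdir(), "editor_recovery")

def autosave(document_text: str, doc_id: str) -> str:
    """Runs on a timer, whether or not the user ever hits 'Save'."""
    os.makedirs(RECOVERY_DIR, exist_ok=True)
    recovery_path = os.path.join(RECOVERY_DIR, f"{doc_id}.autosave")
    with open(recovery_path, "w") as f:    # lands on the internal disk
        f.write(document_text)
    return recovery_path

def save_as(document_text: str, user_chosen_path: str) -> None:
    """The 'Save As' the user sees; the path might be a thumb drive."""
    with open(user_chosen_path, "w") as f:
        f.write(document_text)
    # Whatever path the user picked, the recovery copy from autosave()
    # is already sitting on the hard drive, reachable by anything that
    # can reach the machine.
```

And auto-save is only one mechanism; swap files, search indexing, and automatic backups multiply the copies further.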
No matter how tech savvy you are, or think you are, it’s impossible to evade all detection all the time. And the more tech savvy you are, the more you indicate by the dumb moves you don’t make that you’re not the average user. That fact marks you out not only as someone with something to hide, but as someone with an awareness of the fact that it needs hiding, whatever it is. Unless you perpetually operate under a veil of anonymity and secrecy, you’ll eventually indicate to someone, in general terms at least, what it is that you’re hiding. And that will tip off whoever it is that can profit from getting past your defenses. You may be tech savvy enough to evade detection for a while, but there are plenty of people out there who are tech savvy enough to beat your defenses, or at least try. The game has no clear terminus. And unfortunately, it isn’t a game.
So what’s the mean between fideism and paranoia? I can find no pat formula to capture it. If you want it in a single word, it’s vigilance. If you want a phrase, it’s vigilance tempered by a sense of moral complexity. If you want more than that, then for now, I can only refer you back to the two dozen paragraphs you’ve just read.
Should you rely on Big Data in cases that go beyond the sheer necessities of life? Yes, if you take strong precautions, have contingency plans for what might go wrong, and lack reason to think that you are the target of a concerted, individualized surveillance campaign. But even so, you should actively look for ways to limit your exposure.
And why is Sour Grapes wrong? Because it gets Big Data wrong. We have undeniable reason for pushing back on Big Data because it has awesome powers, is constrained from abusing them only in a purely formal way, is run by morally flawed mortals with ordinary vices, puts us at substantial risk (which it then covers up), and adopts a God-like posture toward us without having God’s omnibenevolence. None of the good it does us can entirely offset or explain away these harms.
Yes, it’d be nice to have a theory that conceptualizes all of this in a neat and tidy way, but it’s more important to have the right weapons to protect one’s space and drive intruders out of it. The question is how to fashion them, not whether we need them.
DeBrabander writes as though privacy were a lost cause, and as though the construction of an Arendt-inspired collectivist political order were somehow more feasible than the defense of privacy against Big Data. I don’t see why. No particular political goal that DeBrabander is likely to favor–gun control, health care reform, the restoration of labor through the resurrection of union power–is any more or less utopian than the task of reining in Big Data. Indeed, I don’t see how anyone could construct the Arendtian order DeBrabander favors until they’d first secured a measure of privacy. Even collectivist groups have to exclude those hostile to their aspirations in order to have the space to deliberate and act in a productive way. No one can function in an atmosphere of indiscriminate inclusion and total exposure. Contrary to DeBrabander, unless we draw some lines against Big Data and defend them, all bets are off for any higher political aspirations, Arendtian or otherwise. Privacy is a pre-condition of deliberation and action, not an afterthought or effect.
In my next installment, I’ll look at one of DeBrabander’s philosophical arguments regarding the value of privacy. In later posts, I’ll get into the weeds on more practical and technical issues.
*Aristotle’s defense of private property strikes me as preserving the essential insight that distinguishes a commitment to private property from a commitment to communism. See Robert Mayhew’s Aristotle’s Criticism of Plato’s Republic (Rowman and Littlefield, 1997), ch. 5 and Appendix.
**See, for instance, John P. Carlin’s Dawn of the Code War, and Nicole Perlroth’s This Is How They Tell Me the World Ends.
***Zuboff aside, most of the authors who discuss Big Data adopt what strikes me as ridiculously over-charitable assumptions about the motivations of those in charge of it, something I chalk up to over-familiarity with the principals. Precisely because these writers have or have had great access to the principals (including Edward Snowden, in Permanent Record), they have a tendency to avoid the ascription of the most nefarious possible motives to those in charge. In this respect, they remind me of cultural anthropologists who make excuses for the brutalities of the cultures they study. Though this is really a topic of its own, I think it’s clear that “demonic” is a perfectly accurate description of the motivations and behavior of many of those in charge of Big Data.
****Leo Strauss’s “Persecution and the Art of Writing” is insightful on this, except that Strauss’s essay, first published in 1941, needs updating. Strauss writes as though authorial persecution applied only to the elite capable of dangerous philosophical thinking. But that presupposes either that philosophy is all that matters, or that philosophy is all that might be considered dangerous. Given the nature of contemporary surveillance, we’re all in the situation Strauss describes, and all in the position of having to write exoterically. Algospeak is just one symptom of this. See Leo Strauss, “Persecution and the Art of Writing,” reprinted in Persecution and the Art of Writing (Chicago, 1958).
Here’s a piece on privacy you might find interesting. I can’t promise I haven’t shared it with you before, given the mushiness of my memory:
Click to access MR1-1-S16-RODMAN.pdf
You had sent it to me, but I’m glad you re-sent it, since I (a) misplaced and forgot about the one you sent, and (b) didn’t have time to read it then, but do now.
As it happens, DeBrabander is partial to Tocqueville, and quotes frequently from him in his book. I actually have Democracy in America in line to be read within the next few months; I’ve read bits and pieces of it, but not the whole thing. So, perfect timing.