Happy Halloween 2017

I’m reblogging this post I did in 2014 and 2015, modified after taking a year off in 2016.

Halloween has, for as long as I can remember, been the only holiday I’ve ever been able to take seriously or celebrate wholeheartedly. As a nominal Muslim, I fast during Ramadan, but Ramadan isn’t really a holiday, and unfortunately, none of the Muslim holidays (the Eids) are seasonal, seasonality being an essential property of a real holiday. In fact, generally speaking, Muslims have trouble figuring out when exactly their holidays are supposed to take place–another liability of being a member of that faith.

Having spent a decade in a Jewish household, I have some affection for some of the Jewish holidays–Yom Kippur and Passover, though not Hanukkah or Purim–but always with the mild alienation that accompanies the knowledge that a holiday is not one’s own: it’s hard to be inducted into a holiday tradition in your late 20s, as I was.

I like the general ambience of Christmastime, at least in the NY/NJ Metro Area, but unfortunately, once you take the Christ out of Christmas, you take much of the meaning out of it as well, Christmas without Midnight Mass being an anemic affair, and Midnight Mass without Christ being close to a contradiction in terms. Not being a Christian, I find it hard to put Christ back into Christmas, mostly because he’s not mine to put anywhere in the first place. (Same with Easter.)

Diwali I just don’t get.

Republican Islamophobia: A Response

This is a much belated response to Peter Saint-Andre and Michael Young on Republican Islamophobia, from my post of January 5. Given its length, I’ve decided to make a new post of my response rather than try to insert it into the combox.

Looking over the whole exchange, I can’t help thinking that the point I made in my original post has gotten lost in a thicket of meta-issues orthogonal to what I said in the original post. I don’t dispute that the issues that Peter and Michael have brought up are worth discussing, but I still think that they bypass what I actually said.


Killing in the Name Of: Jason Brennan on Abortion and Self-Defense (1 of 2)

Jason Brennan put up a post a few weeks back on abortion and self-defense (Nov. 30), written in the wake of the Planned Parenthood attack in Colorado Springs (Nov. 29). The point he makes is simple, and the argument he offers is, very narrowly construed, sound. But construe the conclusion slightly differently than he does, and the argument misses the point in an obvious way.

The claim in short is that if you think that abortion is murder, and its victims are innocent, you have the right to defend the innocent by force. If the force in question requires killing those who perform abortions, so be it. Brennan invokes a lot of “common law” reasoning to bolster the plausibility of the conditional*, but the appeal to common law is a dialectical fifth wheel that does no real work here. He’s just assuming what we all assume–that you can kill a killer.  After some thought-experimental invocations of superheroes, we reach the conclusion that if you believe that abortion is murder, it would be permissible for you to go around killing abortion providers.  Here’s the conclusion of the argument, put in the mouth of the would-be fetus defender:

“I will, if necessary (if there are no equally effective non-lethal means), kill any would-be child murders to stop them from killing children.” Again, this seems heroic, not wrongful.

Note the parenthetical. What we have here is a conditional claim whose antecedent involves another conditional. Let me re-phrase it slightly, without loss of authorial intention, but with a little gain in clarity:

If necessary, and if there are no equally effective non-lethal means, then kill those whom it’s necessary to kill in order to stop the killing.

Lots of modal claims going on there. Let’s rephrase once again:

If necessary, kill those it’s necessary to kill in order to stop the killing, but if it’s not necessary, do not do so.
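To make the nested structure fully explicit, we can put the rephrased principle in rough propositional form. The notation here is mine, not Brennan’s: write N for “lethal force is necessary (there are no equally effective non-lethal means)” and K for “it is permissible to kill the would-be killers.” Then the principle amounts to something like:

```latex
% N: lethal force is necessary (no equally effective non-lethal means exist)
% K: it is permissible to kill the would-be killers
(N \rightarrow K) \land (\lnot N \rightarrow \lnot K)
% equivalently, the permission stands or falls with the necessity clause:
K \leftrightarrow N
```

On this sketch, the permission is wholly hostage to the necessity clause–which is why everything turns on the question pressed below: what, exactly, does “necessary” mean here?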

What does “necessary” really mean here? I take it that “necessary” means “necessary for bringing about some end.” But the end is not plausibly construed as “bringing abortions down to zero, full stop, by all available means, regardless of any other normative considerations.” The end in question is some complex goal, e.g., a just society or the common good or whatever, where superordinate higher-order features of the goal regulate subordinate features, including strategies for achieving this or that political outcome.

So the anti-abortionist’s ultimate goal is not plausibly described as “do what’s necessary to stop the killing.” It’s “do what’s necessary to bring about the common good, stopping the killing in a way that’s compatible with bringing about the common good.” I’m pro-choice, but it seems to me that anti-abortionists (or pro-lifers or whatever we call them) are entitled to a plausible conception of post bellum considerations, no matter how militant they are about ending abortion. They don’t just want to end abortion, full stop. They want to live in a just society without abortion, and it may not be possible to do that if you try to end abortion by killing people. In any case, the two things–stop the killing and live in a just society without abortion–are not the same thing.

Suppose that abortion really is murder. In that case, killing abortionists would be one obvious means of stopping abortions, but killing would also likely have seriously adverse consequences. It might increase hostility toward anti-abortionists to the point of instigating widespread persecution against them. It might even start a civil war. Further, it’s easier in talk than in practice to kill all and only the “right” people during a terrorist/vigilante campaign. Once the killing begins, the enterprise of killing is often overcome by some terrorist/vigilante equivalent of the fog of war, and the wrong people get killed with amazing frequency. Any of those outcomes could obtain, and any of them might end up being worse for the anti-abortion cause (much worse) than not killing abortion providers.

It’s hard to be precise about expected outcomes of this sort, so people reasonably disagree about them. Some people think that a campaign of killing would, all in, be good for the anti-abortion cause. Others disagree. Obviously, both the complexity of the calculations and the possibility of disagreement about them might help explain why even fervent anti-abortionists have a (disjunctive) principled reason for not going around killing abortionists. They may think that doing so is self-defeating, or that it might very well end up being self-defeating and is not worth risking, as long as there are relatively peaceful (or at least orderly) political means for achieving the same ends with less collateral damage.

In recent times, the history of the abortion controversy begins with a deceptively liberating case from the pro-choice perspective (Roe v. Wade) and proceeds from there to a series of restrictions on the original Roe v. Wade framework, so that abortion, though nominally legal in the U.S., is in many ways embattled and under siege. In other words, opponents of abortion rights have done a pretty creditable job of subverting the right to abortion by purely legal means. Of course, abortions do still take place, and on the anti-abortion view, those abortions are murder. But the question is whether a campaign of vigilante killing would have purchased more for them than the political-judicial campaign they’ve actually enacted. Hardly as obvious as Brennan’s argument suggests.

It’s an open question whether anti-abortionists could, by purely legal means, do a better job of subverting abortion rights than they could by killing abortionists. The United States ended slavery by warfare in 1865; Brazil ended slavery without warfare in 1888. Anti-abortionists could in principle plump for a Brazilian approach to the abolition of abortion on the grounds that while that approach would take longer, it might prove more counter-factually stable than a faster-acting but more violent approach. Arguably, violence would be counter-productive and self-defeating, possibly catastrophically so.

Since it makes no sense to enact a self-defeating strategy, and it’s highly risky to enact what could be (catastrophically) self-defeating, anti-abortionists need not worry that Brennan’s argument pushes them into wanton murder. Contrary to Brennan, “the” issue involved in the abortion debate is not just the moral status of abortion (though I agree that that’s the fundamental issue) but what to do about the fact that abortion is a complex issue that elicits widespread disagreement. In other words, the philosophical issue is not just the theoretical one of whether or not abortion is murder, but the practical one of what to do about the fact that certain ways of disagreeing about it are potentially murderous.

Now consider Brennan’s list of would-be objections to his argument:

There are a number of objections to this line of reasoning, including:

  1. It’s wrong to engage in vigilante justice.
  2. Batman must allow people to murder children because he has a duty to obey the law, and the law permits child murder.
  3. Batman must not kill the child-killers, but must instead only use peaceful means.
  4. Batman must not kill the child-killers, because it probably won’t work and won’t save any lives.
  5. Batman must not kill the child-killers, because they mean well and don’t think they’re doing anything wrong.
  6. Batman must not kill the child-killers, because the claim that “killing six-year-olds is wrongful murder” is controversial among reasonable people.
  7. Batman must not kill the child-killers, because the government or others might retaliate and do even worse things.

I think these objections are either implausible (e.g., 2 is absurd), or are at best mere elaborations of the necessity proviso of defense killing. (E.g., #4.)

Putting aside (4), Brennan is right to say that these are pretty pointless objections. Objection (4) is where the action is. (Construed a certain way, [4] might well entail [1]: vigilante justice might be wrong because it’s likely to be ineffective, and it’s irresponsible to engage in a political strategy that might very well backfire. But I think Brennan intends [1] to mean that vigilante justice is deontically wrong qua violation of the law, full stop. So I’ll ignore it.)

Brennan dismisses (4) as “at best a mere elaboration…of the necessity proviso of defense killing.” Well, that’s one way of putting things, and not a literally false one, I suppose. But it’s very misleading: a “mere elaboration” of a proviso can also explain why the proviso cannot be enacted under foreseeable conditions, and (4) does just that. In other words, what Brennan calls “at best a mere elaboration” ends up explaining why, once we leave the thought-experimental laboratory, his suggestion makes no sense in the real political world where it’s supposed to have application.

Digression: the same sort of “elaboration” is the strategy behind what’s come to be called “contingent pacifism” in the just war literature; contingent pacifism is the strategy of justifying de facto pacifism by construing just war provisos in such a way that they can almost never be satisfied in the real world. This literature suggests that depending on how one construes its claims, just war theory (and its doctrine of necessity) can lead either to very hawkish policy prescriptions or to pacifism. But if the same theory leads different theorists to contrary outcomes with respect to the same issue, the differences between the different applications of the theory–the contingencies in question–can hardly be philosophically trivial. If my version of a doctrine leads me to wage war, and your version of the same doctrine prohibits you from ever going to war, it makes no sense to say, “Don’t worry, we’re agreeing on the theory; we just disagree on the contingencies.” In this case, the disagreement on the contingencies could mean the difference between a decade of war and a decade of peace. Conceptualizing that difference is a paradigmatically philosophical task.

Back to abortion: Not killing abortionists because you could get arrested, and/or because it would undermine the anti-abortionist cause, and/or because the collateral damage would be too high, and/or because it could start a civil war are not trivial considerations, whether “morally” or “practically.” From the first-person perspective of an agent deciding what to do–not what to write in a blog post–these are all considerations of paramount importance. They make the difference between going ahead and killing someone and deciding not to. So a reader could grant 99.9999% of Brennan’s argument in principle, but still think that the remaining 0.0001% makes a crucial and theoretically significant difference to political practice. And he might insist that Brennan’s way of rendering the argument reveals a blind spot in his thinking about the relation between theory and practice.

I’d put the latter issue like this: Taken as an academic exercise, with all qualifications duly noted, and abstracting entirely from what would be necessary to enact his advice in practice, Brennan’s argument is perfectly sound. Taken as real-world political advice, however, and factoring in all relevant considerations–including prudential considerations about expected consequences–Brennan’s advice is myopic and insane. It seems to me that when the theoretical version of a prescriptive argument ends up sound, but the practical version of it is insane, we’re obliged to think harder about the relation between arguments, theory, and practice.

At a minimum, I think we’re obliged to note the huge gap that obtains between theoretical prescriptions and practical ones. It sounds oxymoronic, but it isn’t. A theoretical prescription is a prescription offered ex hypothesi, as an exercise in deontic logic, without pretending to guide real-life practice: it notes a normative entailment; it doesn’t claim to tell people what to do. A practical prescription is a prescription intended to guide practice, all things considered; it doesn’t just note an entailment, but tells us, all in, what to do.** Put differently, there is a huge difference between saying, “Your views entail that you should go out and kill people–but don’t actually do that, for God’s sake, I’m only pointing out where your views lead!” and saying, “Your views entail that you should go out and kill people–and if that’s where your views lead, so be it. So get your gun and hop to it!” Brennan is saying the former (I think), but you could be excused for interpreting him as saying the latter. The lesson here is paradox-like but not paradoxical:  A prescriptive argument can be sound and yet defective as advice.

The underlying disagreement here, it seems to me, is a version of Hobbes versus Aristotle on prudence. Aristotle takes phronesis (‘prudence’) to be an intellectual virtue that guides individual, first-personal decisions. Despite its practical, individualized, contextualized, consequence-sensitive, first-personal nature, Aristotle insists that phronesis is a legitimate object of philosophical inquiry and a legitimate source of knowledge (Nicomachean Ethics, VI.5-13). A view like this puts a certain premium on the nuts and bolts of deliberation, from acceptance of the premises that motivate an action down to the details of what ultimately produces the action in the real world. On an Aristotelian view, what’s philosophically interesting is not just the abstract schema that the agent accepts but how the agent translates that schema into the particularities of a particular action. “Translating a schema into the particularities of a particular action” is the work of phronesis.

Hobbes denies that prudence so conceived has any significant epistemic value (Leviathan, IV.46.1-6):

… we are not to account as any part thereof, that originall knowledge called Experience, in which consisteth Prudence: Because it is not attained by Reasoning, but found as well in Brute Beasts, as in Man; and is but a Memory of successions of events in times past, wherein the omission of every little circumstance altering the effect, frustrateth the expectation of the most Prudent: whereas nothing is produced by Reasoning aright, but generall, eternall, and immutable Truth.

Prudence, in short, is unscientific. It yields contingent, changeable, contextualized truths, neither important enough nor counterfactually stable enough nor wide enough in scope to count as genuine philosophical knowledge. How the agent translates an abstract schema into action is philosophically uninteresting. What matters is the schema–the model–itself. From this perspective, an inquiry into what the agent is, all things considered, to do seems too fine-grained, variable, and messy to be a genuinely philosophical or genuinely worthwhile activity.

Contemporary Hobbesians (as I’m thinking of them) prize thought-experimentation and social science at the expense of mere first-hand experience, and at the expense of an account of the requirements of first-personal deliberation (i.e., prudence). First-personal agents disappear from view, as do their deliberations and deliberative needs. From this perspective, the mere prudence required for intelligent political action is unworthy of philosophical inquiry. Anarchist Hobbesians have a plausible-looking rationale for this insistence: on their view, politics is an unworthy occupation, so it stands to reason that the epistemic virtues it requires are themselves unworthy of sustained reflection.***

As I see it, one of the most valuable contributions of neo-Aristotelian theorizing (in the Nussbaumian mode) is to put social science and thought-experimentation in their place, and insist on the first-personal perspective of the agent and her deliberations–along with history, psychology, and common sense. On a view like this, it isn’t enough to know that if abortion is murder, and self-defense is justified, you can infer that defensive killing would be justified to save fetuses from murder. You need to know whether, even if that argument is sound, you should actually be out killing people. If so, you need to know whom to kill, when and how; how to prevent predictable disasters that arise when you start killing people; and how the killing enterprise fits into the larger aim of achieving the common good. That sounds like “mere strategy” to some people, but on an Aristotelian view, it’s precisely the kind of knowledge that the just and wise agent has, and that the political philosopher studies in order to grasp the nature of justice and wisdom.

Anyway, thought experiments and social science are of some, but relatively little value here. Eventually, thought experiments run out of prescriptive steam for the obvious reason that life isn’t an experiment. Social science runs out of useful things to say because we can’t do experiments on novel courses of action that no one has yet tried–but we can’t refuse to do novel things because there’s no existing social scientific literature about them, either. A virtue like phronesis is indispensable here, both for deliberative agents and for theorists theorizing about what such agents do. If you’re going to do something–e.g., engage in political action–you have to know how to do it, and the only way to know how to do something is to have done it (or have rehearsed doing something as much like it as possible). You need the kind of knowledge that Hobbes denigrates and that our neo-Hobbesians ignore. 

Bottom line: even if you think abortion is murder, don’t do what Jason Brennan tells you. (PS: It’s not really relevant to my argument, but in case you’re wondering, I’m pro-choice on the abortion issue. I believe in abortion on demand from the moment of conception until birth, with some moral reservations about late abortion, while rejecting legal restrictions on it.)

*I corrected this sentence. It originally said, “antecedent of the conditional,” but what I meant was that Brennan invokes common law to bolster the plausibility of the conditional as such.

**I reworded the latter clause after posting. The previous version (which I’ve now forgotten) was wordier and somewhat unclear.

***”Anarchist Hobbesian” may sound like a contradiction in terms, but I don’t think it is. It could mean (a) an anarchist whose meta-philosophical views map onto Hobbes’s and/or (b) an anarchist whose account of political authority maps onto Hobbes’s, but who infers on that basis that no states have authority.

Jason Brennan and Phillip Magness: A Request for Disclosure

Considering the number of times Jason Brennan has alluded, in the context of public discussion, to his once having worked at GEICO, I think it’s only fair that he disclose the following for public consumption:

  1. When did he work at GEICO, and at what location?
  2. What was his title while working there?
  3. What was his salary?
  4. Did he work there through a temp agency, or was he hired directly by GEICO itself?

If the GEICO job is important enough to bring up that many times, it’s worth clarifying the details by way of answers to the preceding questions.

A similar query is in order for Phillip Magness, who’s also been very autobiographically assertive on the subject. The article linked to in the preceding sentence alludes to 1.5 years spent as a full-time adjunct (I’m presuming that “1.5 years” refers to the period 2008-2010, corresponding to the position of Lecturer at American University on his CV), then invites us to do some “arithmetic” about the income he claims to have earned during that period, and how he managed to live on it while being otherwise productive.

That’s fine, but Magness’s CV indicates that he received three grants during roughly the same period (2007, 2009, 2011). I regard the 2007 and 2011 grants as potentially relevant even though they strictly speaking fall outside of the 2008-2010 period. To be blunt, a year and a half of adjunct work cushioned by three grants is not quite as impressive as the unadorned version of Magness’s apologia pro vita sua might lead one to believe.

Three questions for Magness, then:

  1. What was the cumulative monetary value of those three grants?
  2. Does his CV exhaustively list all of his income sources for the relevant years (meaning 2007-2011)?
  3. Did he, during those years (2007-2011), live in a household with someone earning an additional income?

All three questions strike me as relevant to evaluating the story Magness tells.

One problem with both sides in the adjunct debate is that the most assertive people in it seem more interested in parading selective recountings of their valor or misfortunes than in documenting their claims in a way that demonstrates the credibility of what they’re saying to neutral or skeptical readers. If people are going to start going autobiographical in the Great Adjunct Debate–whether they’re adjuncts recounting their minimum-wage woes, or academic stars recounting their Horatio Alger stories–I think they owe us fuller disclosures than any of them have been making about the stories they tell us. Brennan and Magness clearly think of themselves as exemplars for the rest of the profession. How about exemplifying some disclosure about those stories you’ve been telling?

Postscript, 11 pm: I’m satisfied with Brennan’s answer, but on second thought, I have to say I’m not just puzzled but mystified by the autobiographical claims Magness has made in his increasingly famous essay, “The Myth of the Minimum Wage Adjunct.”

As someone who spent the last ~1.5 years of grad school as a so-called “full time adjunct,” constituting my only real source of income at the time, I can state first hand that it will not make you wealthy.

So he was an adjunct for 1.5 years, during which time adjuncting was his “only real source of income.” I take it that the word “real” implies that there was some other, secondary source of income. I’m curious what it was.

Later he tells us,

I can also speak to this first hand as it is something I learned to do quickly during my own period as a full-time adjunct ca. 2008-2009. I was not anything close to well off during this period of my career, but with a little basic time management I not only met my teaching obligations but I (1) finished a dissertation, (2) wrote several peer reviewed articles, (3) composed a substantial part of an academic press monograph, and (4) found more permanent employment.

The problem is, his CV lists a Doctoral Research Grant from George Mason University for the year 2009. I can see how the grant might not literally have overlapped with the adjuncting: if he started adjuncting in January 2008, and continued through fall 2008 and then spring 2009, that would be 1.5 years of adjuncting; he could then have gotten the research grant for the latter half of 2009. But I’m speculating. I think we’re entitled to hear the explanation directly from him.

Literal overlap or not, he cannot, on this basis, claim to “speak to this first hand,” where “this” refers to the experience of the average full-time long-term adjunct–which is what the discussion at BHL was about. One and a half years of adjuncting sandwiched between two grants, along with some undisclosed secondary income source, is not long term adjuncting in any sense relevant to the ongoing controversy. And we don’t even know what he did during the summer of 2008, when he was a “so-called ‘full time adjunct’.” According to Magness, adjuncts don’t teach during the summer months (point 5 of his enumerated points), from which it seems to follow that he didn’t. So did he simply go without income during the summer, or is that when the non-real income source kicked in? If so, what was the source? The answer surely has some bearing on the relationship between his personal experiences and the predicament of the long-term adjunct.

Whatever the answers, we’re left with a mystery in Magness’s account that’s worth clearing up. He wants us to believe that he knows what it’s like to be a long-term adjunct, but the story he’s telling is consistent with saying this:

I was a so-called full time adjunct during 2008-9. Of course, I got a grant in 2007, then one in 2009, and I wasn’t an adjunct during the summer of 2008. During the summer, I got a real job–a real job, albeit with an unreal income. Meanwhile, I had established a relationship with the Institute for Humane Studies, which eventually gave me an administrative job as Academic Program Director, a job I cheerfully hold while suggesting all over Twitter that the university’s problems could be solved if only we eliminated all of those useless administrators on the payroll. I realize that very, very, very few long-term adjuncts could get such a job, precisely because it’s sui generis, and I am now the person who holds it. And yet, I won’t hesitate to lecture long-term adjuncts about what bad time managers they are.

Say it ain’t so, Phil.

David Potts on the Dunning-Kruger Effect

It’s a little known fact that some of PoT’s most avid and engaged readers lurk behind the scenes, being too bashful to log onto the site and call attention to themselves by writing for public consumption. What they do instead is read what the rest of us extroverts write, and send expert commentary to my email inbox. I implore some of these people to say their piece on the site itself, but they couldn’t, possibly. They’re too private for the unsavory paparazzi lifestyle associated with blogging.

About a month ago, I posted an entry here inspired–if you want to call it that–by a BHL post on graduate school. Part of the post consisted of a rant of mine partly concerning this comment by Jason Brennan, directed at a commenter named Val.

Val, I bet you just think you’re smart because of the Dunning-Kruger effect.

Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.

My rant focused on Brennan’s conflation of psychiatry and clinical psychology in the second sentence (along with the belligerent stupidity of the claim made about clinical psychology), but a few weeks ago, a friend of mine–David Potts–sent me an interesting email about the Dunning-Kruger effect mentioned in the first sentence. David happens to have doctorates in both philosophy and cognitive psychology, both from the University of Illinois at Chicago; he currently teaches philosophy at the City College of San Francisco. In any case, when David talks, I tend to listen.

After justifiably taking issue with my handwaving (and totally uninformed) quasi-criticisms of Jonathan Haidt in the just-mentioned post, David had this to say about the Dunning-Kruger effect (excerpted below, and reproduced with David’s permission). I’ll try to get my hands on the papers to which David refers, and link to them when I get the chance. I’ve edited the comment very slightly for clarity. I think I’m sufficiently competent to do that, but who knows?

First, about the Dunning-Kruger effect. I had never heard of it, which got my attention because I don’t like there to be things of this kind I’ve never heard of. So I got their paper and a follow-up paper and read them. But I was not much impressed by what I read. How is Dunning-Kruger different from the well-established better-than-average effect? For one thing, [Dunning-Kruger] show — interestingly — that the better-than-average effect is not a constant increment of real performance. That is, it’s not the case that, at all levels of competence, people think they’re, say, 20% better than they really are. Rather, everybody thinks they’re literally above average, no matter how incompetent they are. This is different from, say, knowledge miscalibration. Knowledge miscalibration really is a matter of overestimating one’s chances of being right in one’s beliefs by 20% or so. (That is, people who estimate their chances of being right about some belief at 80% actually turn out to be right on average 60% of the time; estimates of 90% correspond to actually being right 70% of the time, etc.) But in the cases that Kruger and Dunning investigate, nearly everybody thinks they’re in the vicinity of the 66th percentile of performance, no matter what their real performance. So that’s interesting.

But that is not the way Dunning and Kruger themselves interpret the importance of their findings. What they take themselves to have shown is that incompetent people have a greater discrepancy between their self-estimates and their actual performance because, being incompetent, they are simply unable to judge good performance. If your grasp of English grammar is poor, you will lack the ability to tell whether your performance on a grammar test is good or bad. You won’t know how good you are — or how good anyone else is for that matter — because of your lack of competence in the domain. Lacking any real knowledge of how good you are, you just assume you’re pretty good. On this basis, they predict that incompetent people will very greatly overestimate their own competence in any domain where the skill required to perform is the same as the skill required to evaluate the performance. (Thus, they do not suppose that, for example, incompetent violin players will fail to recognize their incompetence.)

The trouble I have with this is that it is not well supported by the data. What their data really show, it seems to me, is that in the domains they investigate, nobody is very well able to recognize their own competence level. The plot of people’s estimates of their own abilities (both comparative and absolute) against measured ability does slope gently upwards, but very gently, usually a 15%–25% increase despite an 80% increase in real (comparative) ability level. The highly competent do seem to be reasonably well able to predict their own raw test scores, but they do not seem to realize their own relative level of competence particularly well. They consistently rate their own relative performances below actuality. For example, in one experiment people did a series of logic problems based on the Wason 4-card task. Participants who were actually in the 90th percentile of performance thought they would be in about the 75th percentile. In another study, of performance on a grammar test, people who performed at the 89th percentile judged that they would be in the 70th. Then they got to look at other participants’ test papers and evaluate them (according to their own understanding). This raised their self-estimates, but only to the 80th percentile.

It is true that poor performers do not recognize how badly they are doing in absolute terms. But the discrepancy is not nearly as great as the discrepancy with regard to comparative performance. In the logic study, after doing the problem set and giving their estimates of their own performance, people were taught the correct way to do the problems. This caused the poor performers to revise their estimates of their own raw scores to essentially correct estimates. But they still thought their percentile rankings compared to others were more than double what they really were. (They did revise these estimates down substantially, but not enough.)

I think Dunning and Kruger have latched onto a logical argument for the unrecognizability of own-incompetence in certain domains, and that they are letting that insight, rather than their measurements, drive their interpretation of the data. No doubt if the knowledge of a domain necessary to perform well is also essential to evaluating performance in that domain — one’s own or anyone else’s — then poor performers will be poor judges. This almost has to be right. But the effect seems small insofar as it is attributable to the logical point Dunning and Kruger focus on. The bulk of their findings seems to be attributable, not to metacognitive blindness, but to social blindness to relative performance on tasks where fast, unambiguous feedback is in short supply. In domains where fast, abundant, clear feedback is lacking (driving ability, leadership potential, job prospects, English grammar, logic), nobody really knows very well how they compare with others. So they rate themselves average, or rather — since people don’t want to think they’re merely average — a little above average. And this goes for the competent (who accordingly rate themselves lower than they should) as well as the incompetent.
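The weak-feedback alternative can be made concrete with a toy model. This is my own sketch, not anything from Kruger and Dunning; the anchor and weight values are made up, chosen only to mimic the general shape of the published plots. If everyone’s self-estimate is mostly an “a little above average” anchor plus a weak signal of true standing, the bottom quartile overestimates wildly and the top quartile underestimates — with no metacognitive deficit anywhere in the model:

```python
# Toy model (hypothetical numbers): self-estimated percentile is a
# strong "a little above average" anchor plus a weak signal of one's
# true percentile standing.

ANCHOR = 65      # the "a little above average" default (assumed)
SIGNAL = 0.20    # weight given to one's true standing (assumed)

def self_estimate(true_percentile):
    """Estimated percentile = weak signal of truth + strong anchor."""
    return SIGNAL * true_percentile + (1 - SIGNAL) * ANCHOR

for true in (10, 35, 60, 90):   # rough quartile midpoints
    est = self_estimate(true)
    direction = "over" if est > true else "under"
    print(f"true {true:2d}th pct -> estimates {est:4.1f}th pct ({direction}estimates)")
```

On these made-up parameters, the 90th-percentile performer estimates the 70th percentile and the 10th-percentile performer estimates the 54th: a single gentle slope reproduces both halves of the pattern at once.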

My low opinion of the Dunning-Kruger effect seems to be shared by others. I have on my shelf six psychology books published after Kruger and Dunning’s paper became common coin, which thoroughly review the heuristics and biases literature, four of which I’ve read cover to cover, and only two of them make any mention of this paper at all. One cites it together with two other, unrelated papers merely as finding support for the better-than-average effect, and the other cites it as showing that even the very worst performers nevertheless tend to rate themselves as above average. In other words, none of these books discusses the Dunning-Kruger effect as such.

But if the Dunning-Kruger effect isn’t of much value as psychology, it’s great for insulting people! Which is no doubt why it is well known on the Internet.

I didn’t know any of that, and thought it would better serve PoT’s readers to have it on the site than moldering in my inbox.
PS. I’ve been having trouble with the paragraph spacing function in this post, as I sometimes do, so apologies for that. I don’t know how to fix it; when I do, it seems fixed, and then the problem spontaneously recurs. (I guess I’m an incompetent editor after all.)
Postscript, December 20, 2015: More on the Dunning-Kruger effect (ht: Slate Star Codex).

From Assurance Contracts to “Compulsory” Voting

Jason Brennan has a series of posts up at BHL on compulsory voting. One of his arguments against compulsory voting is what he calls the Assurance Argument:

The Assurance Argument

  1. Low turnout occurs because citizens lack assurance other similar citizens will vote.

  2. Compulsory voting solves this assurance problem.

  3. If 1 and 2, then compulsory voting is justified.

  4. Therefore, compulsory voting is justified.

I’ve sketched a version of the Assurance Argument here at PoT that’s immune to Brennan’s criticisms. It doesn’t exactly correspond to Brennan’s version of the Assurance Argument above, but I think it’s close enough in form to be worth discussing in the same breath.

I have yet to set it out formally, but my version of the Assurance Argument turns on the idea of an assurance contract to vote. The basic idea is this: Take a context in which low voter turnout is a bad thing you justifiably want to remedy. Find a population apt to vote in a single direction as a unified voting bloc. Make sure that what they’re voting for not only promotes their interests, but in doing so, promotes the common good. Then come up with a mechanism for generating and enforcing an assurance contract that gets that population to vote the relevant way. If you work with the right population, pursue the right aims, and fashion the right contract, my view is that you can generate a binding obligation to vote in the population, and in doing so, solve the assurance problem that Brennan treats as essentially insuperable.

Given the preceding context, premise (1) of Brennan’s version is fine as is, but the rest has to be modified as follows: In premise (2), substitute “an assurance contract” for “compulsory voting.” In (3) and (4), substitute “enforced contract remedies” for “compulsory voting” (and change the grammar). With that in place, you have a version of the Assurance Argument that comes as close as possible to an argument for “compulsory voting” without quite crossing the line into literal compulsion.

The general idea is that in any political context in which you can induce people to form an assurance contract to vote, you can “compel” them to vote, or else exact a penalty for failure to vote. That sounds implausible if you’re talking about American elections, but there are other contexts in which it’s feasible.

During the intifadas, Palestinian politics involved mass action where compliance was universally expected, and non-compliance was severely penalized (sometimes by death). The point is that in cases like this, we’re talking about a political culture that involves a strongly solidaristic ethic, where structures are in place for mass collective action.

Imagine that West Bank Palestinians somehow acquired the right to vote in Israeli elections (or East Jerusalemite Palestinians just decided to exercise their pre-existing right to vote), and that the mass action in question turned from coercive uprising-related activity to electoral politics. My claim is: If you can induce near-compliance with the dictates of an uprising (as you can), you can induce explicit consensual compliance with an assurance contract involving a promise to vote in an election. If you can do that, you can compel compliance with the contract.

More specifically: Imagine an electronic caucus–like a MOOC–in which everyone in a given population is expected, due to social pressure, to log on and decide on a course of electoral action. Everyone who logs on then becomes part of a (potential) assurance contract. The numbers are tallied, and if they’re sufficient to tip the election, the contract is considered valid, and people are expected to vote accordingly. If not, the caucus dissolves. (In other words, what I’m calling a caucus really has the function of a caucus plus a census plus an assurance contract.)

Suppose that the numbers are there to tip the election. Then everyone is expected to vote as specified in the contract. Suppose that the contract calls for x votes for a certain candidate/slate/policy. If x votes show up in the election results, fine. But if fewer do, it follows that there were free riders who reneged on the contract. In that case, it becomes a matter of finding out who they are, so as to exact a penalty for non-compliance. Now suppose that the balloting is open, not secret. If so, then if (say) Khawaja failed to vote for the agreed-to candidate, and there’s no secret ballot, someone will squeal on him when the Free Rider Commission makes its inquiry. Under such conditions, I suspect that there will be very few free riders.
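The caucus-tally-enforce mechanism just described can be sketched in a few lines. All names, numbers, and the tipping threshold below are hypothetical illustrations of the logic, not a claim about any actual electoral system:

```python
# Toy model of the electronic-caucus assurance contract.
# Phase 1 (caucus): collect pledges; the contract is valid only if the
# pledged bloc is large enough to tip the election; otherwise dissolve.
# Phase 2 (enforcement): with open (non-secret) balloting, any pledger
# whose vote is absent from the record reneged on the contract.

def form_contract(pledges, tipping_threshold):
    """Return the set of contracted voters, or None if the caucus
    dissolves for want of numbers."""
    pledged = set(pledges)
    return pledged if len(pledged) >= tipping_threshold else None

def free_riders(contract, open_ballots):
    """Pledgers whose votes are missing from the open ballot record."""
    return contract - set(open_ballots)

# Hypothetical case: five pledgers, four votes needed to tip the election.
contract = form_contract(["A", "B", "C", "D", "E"], tipping_threshold=4)
if contract is not None:                       # the contract binds
    reneged = free_riders(contract, open_ballots=["A", "B", "D"])
    print(sorted(reneged))                     # -> ['C', 'E']
```

Note that the enforcement step is computable only because the balloting is open; under a secret ballot the contract would be unenforceable, which is why the scenario above stipulates open balloting.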

If you can pull all that off, you can “compel” votes that tip the scales of the election. The obstacles to pulling it off are psychological rather than conceptual. If the right psychological dispositions were in place–if Palestinians regarded elections the way they regard uprisings, and the Israelis allowed them to organize politically, and allowed them to vote, etc.–you could generate an electoral assurance contract mechanism involving (a) numbers large enough to affect an election but (b) small enough to organize and hold compliant to the terms of the contract. This only seems implausible to Americans because we live in a huge, highly impersonal, individualistic, diverse, and cosmopolitan society where such a contract seems like a mere thought experiment. If you live in a smaller scale society with a different political ethos, however, it’s within the realm of nomological possibility.

The point I’m making isn’t so much about Israelis and Palestinians as about assurance contracts and elections. Even if the preceding doesn’t literally apply to the Palestinian case, my point is, if you can find a case that satisfies the description I’ve just given, you can run some version of an assurance argument on it. It’s an empirical question whether you can generate or discover such a case. I’m not a political scientist, and don’t know the literature very well, but as an armchair consideration, I don’t find my empirical assumptions implausible, and they merely have to be possible to get the argument off the ground. Maybe Brennan discusses the relevant empirical issues somewhere (he’s written a great deal that I haven’t read), but he doesn’t do so in The Ethics of Voting or in “The Right to a Competent Electorate,” which I have read.

There are lots of details to work out here, but once you grasp the principle involved, the sketchiness of the proposal is not an objection to the basic idea. At any rate, my argument is immune to what Brennan calls the Burden of Proof and the Worse Government arguments.

Here’s the Burden of Proof Argument:

The Burden of Proof Argument

  1. Because compulsory voting is compulsory, it is presumed unjust in the absence of a compelling justification.

  2. A large number of purported arguments for compulsory voting fail.

  3. There are no remaining plausible arguments that we know of.

  4. If 1-3, then, probably, compulsory voting is unjust.

  5. Therefore, probably, compulsory voting is unjust.

As a response to my argument, the BP argument fails at premise (1), which simply doesn’t apply: unlike compulsory voting in the literal sense, my assurance contract idea involves no initiatory compulsion, and no special burden of proof is required to hold someone to a contract to which they’re explicitly a party.

Here’s the Worse Government Argument:

 The Worse Government Argument

  1. The typical and median citizen who abstains (under voluntary voting) is more ignorant, misinformed, and irrational about politics than the typical and median citizen who votes.

  2. If so, then if we force everyone to vote, the electorate as a whole will then become more ignorant, misinformed, and irrational about politics. Both the median and modal voter will be more ignorant, misinformed, and irrational about politics.

  3. If so, in light of the influence voters have on policy, then compulsory voting will lead [to] at least slightly more incompetent and lower quality government.

  4. It is (at least presumptively) unjust to impose more incompetent and lower quality government.

  5. Therefore, compulsory voting is (at least presumptively) unjust.

This argument fails at premise (1) as well. As far as I can tell, premise (1) implicitly makes a claim about the median American voter. But I’m not talking about American voters; I’m talking about non-American ones. Unless the claims of (1) generalize to the voters I have in mind, the WG argument involves an ignoratio elenchi against my proposal.

If anyone can cite studies that show that, say, Israeli Arab voters are misinformed, ignorant, or irrational when they vote for the United Arab List, I’d like to see it. If anyone can cite studies that show that East Jerusalemite Palestinians would be misinformed, ignorant, or irrational to vote for (candidates that favor) more housing permits, I’d like to see that, too. But I’m skeptical.

*I changed the title of the post after posting.

Psychology, Psychiatry, and Moral Philosophy: An Open Thread

I’ve been working on and thinking about issues at the intersection of psychology, psychiatry, and moral philosophy lately, so this (partly but not entirely edifying) discussion-thread at BHL caught my eye. I thought I’d reproduce it here, comment on it, and then just leave the comments open indefinitely for thoughts on the matter.

The discussion arises in the context of a post by Jason Brennan on whether one should go to grad school. I don’t particularly like the self-congratulatory tone of the post, but don’t disagree with the advice he gives. Early on in the post, he addresses a frequently-asked question and offers up an answer:

I like reading and discussing economics or political philosophy. It’s my hobby. Should I go to grad school? You can do all these things without getting a Ph.D. You won’t be as good at it, but you can read and discuss economics while holding down a job as an insurance agent, a lawyer, or a consultant. You might be able to maintain your hobby while making a lot more money.

It’s not very adeptly or tactfully put, but on the whole, I agree with Brennan. His point is not that a non-PhD. cannot in principle be as good as PhDs at philosophy. His point is that the generalization holds as a rule: generally speaking, and given current economic and institutional realities, you need a PhD to excel at philosophy. There are some notable exceptions to that rule, of course. Some of the most brilliant and successful academic philosophers got into the profession back in the day when a PhD was considered unnecessary (e.g., Alasdair MacIntyre, Colin McGinn, Saul Kripke), but no one holds not having a PhD against them. Coming the other way around, I know  non-academics out there (without PhDs) who can hold their own–and then some–with many PhD philosophers. But I think such people are the exception, not the rule. Ultimately, one has to commit the fallacy of accident to deny the truth of what Brennan is saying. We can recognize that exceptional cases exist while acknowledging the truth of the rule he’s identified.

Perhaps Brennan should have qualified what he said to accommodate the exceptional cases, but I also think it’s clear he had a very different sort of case in mind–e.g., the middle manager who wants to do philosophy on the side. I think Brennan is correct to think that such a person will tend not to be as good at philosophy as the PhD philosopher from a top-20 school (Arizona, Princeton, Rutgers, Oxford, Pittsburgh, etc.) who is herself working at an R1 school and (therefore) doing philosophy all day. (And most would come out and admit it.) The more invested you are in your day job, the heavier its demands. But the heavier its demands, the fewer resources you have to devote to philosophy. Given the (very) heavy demands of doing good philosophy, having fewer resources means, all things equal, you won’t do it as well as someone with more resources at her disposal. As someone who spent nine years temping and adjuncting before finding a full-time academic position, I don’t find that controversial.

It’s not much different than the situation of the guy who spends eight hours a day working assiduously on his guitar chops versus the guy who noodles a bit on his prized Gibson SG after a long day at work. The first guy might make it in the music business, if he’s lucky and other things come together; the second guy may do a gig of AC/DC covers at the local bar (if they let him in), but can’t expect to headline Met Life Stadium (capacity: 88,000), or for that matter, headline the local equivalent of the Wellmont Theater (capacity: 1,200). (Again, I should know.)

The conversation took a different (and actually, more interesting) direction after an intervention by someone named Val, a psychiatrist, who jumped in with this comment just below. Responding to the Brennan passage quoted above, he or she had this to say (sorry for the pronoun ambiguity, but “Val” could be either male or female):

Rubbish and simple minded navel-gazing. Except for the unique subspecialty of a Ph.D tenured research professor (“I’m the foremost expert on La Rochefoucauld’s writing of the year 1678!”), anyone who puts in the time and is clever can speak on intellectual issues with equal footing. You can certainly be “as good at it” in whatever interests you.

I’m a psychiatrist attached to a large research university and spend most of my day as a clinician. The philosophy professors who have careers focusing on ethics, political philosophy, or Scholasticism are barely on equal footing with the well-read clinicians who have been reading the epistemology of science for the last 25 years.

I think Val’s comment talks somewhat past Brennan’s. Yes, “anyone who puts in time” can speak with equal footing, but Brennan’s point is that if you have a day job, the better the job, the less time you’ll have to put in. The worse the job, the less sense it makes to do philosophy rather than get yourself a better job (and then do philosophy, in which case, it’s back to the first option). There are exceptions to this rule, too, but as a rule, it holds. Val’s situation is unique, and escapes Brennan’s point, but doesn’t generalize to the cases Brennan is discussing–the majority of cases.

Unfortunately, Brennan, given an opportunity to re-direct the conversation, only had this to say:

Val, I bet you just think you’re smart because of the Dunning-Kruger effect.

Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.

It’s a somewhat cryptic–and actually pretty stupid–response. The first sentence is just a particularly abusive instance of poisoning the well. The second sentence suggests that Brennan is under the impression that Val is a clinical psych(ologist). In other words, his implicit reasoning is:

You must be one of those dumb people who’ve opted to work in clinical psychology. Your GRE scores were probably too low to work in a difficult field, like philosophy, economics, or cognitive psychology. Your MCAT scores were probably too low to get you into a good medical school, or to get you in at all. So you opted for the easy way out–clinical psychology. And given that, you must think you’re particularly smart because you’re operating under the Dunning-Kruger effect. Being a victim of that effect, you’ve taken umbrage at my suggestions, but that’s because the effect has deluded you.

One problem here is that Val is a psychiatrist with an MD. So the GRE is irrelevant to his/her situation, and he/she obviously did well enough on the MCATs to get into med school, get an MD, go into practice, and get attached to a research university.

A second problem is that even if there were a documented correlation between low GRE/MCAT scores and the choice of clinical psychology as a profession, it wouldn’t follow that clinical psychology was “easy.” The more obvious inference would be that neither the GRE nor the MCAT was designed to test skill or aptitude in clinical psychology. A little Howard Gardner might have gone a long way here.

Personal experience might help, too. Brennan often likes to talk about his, so here’s a bit of mine. I spent part of grad school writing GRE questions for the Educational Testing Service (ETS), so I have a fairly good sense of what’s involved in designing them, including what they test and what they don’t test. There’s a lot that they don’t test, and a lot in them, methodologically and substantively, that is highly debatable, regardless of what ETS’s in-house psychometricians will tell you. Keith Stanovich’s work is relevant here.

It’s a great irony, by the way, that a large number of the item writers for the GRE (and personnel at ETS generally) are people who, by Brennan’s standards, are academic failures–i.e., grad students, often at Rutgers, Princeton, Temple, or Penn, who’ll never get a tenure track R1 job, or grad students (Rutgers, Princeton, Temple, Penn) who never finished their programs. So lots of Brennanite “failures” end up being the gate-keepers for the Brennanite “winners.” Something similar is true of the PRAXIS exam: I wrote items for PRAXIS at a time when, as a doctoral student without a teaching certificate, I was writing exam questions for a profession I wasn’t permitted to enter–and the questions I wrote were for an exam involving the very credential I lacked for purposes of entry!

A bit of advice, then: Brennan tells people who might want to go to grad school, but shouldn’t, to get a job at GEICO. I would say, instead: get a job at ETS. I worked there as a part-timer for almost six years before I got a full time academic position. It was a good place to work. Not my first preference, but still.

Incidentally, if I were Jerry Springer, at this point I would say that one important lesson we learn here is not to accuse someone of being a victim of the Dunning-Kruger effect, accuse him/her of bombing the GRE, and misread what he/she wrote all in the same comment.

Anyway, back to Val’s comment. I sort of agreed, sort of disagreed. So here’s what I said:

I’m a PhD philosopher working on a master’s degree in counseling psych. I spend a fair bit of time discussing philosophy vs clinical psychology and/or psychiatry with people in those fields. I see where you’re coming from, but don’t agree with you (not that I agree with Brennan’s comment below*).

An enormous amount of the literature in both clinical psychology and psychiatry strikes me as methodologically weak and substantively trivial. (Much of it also makes huge, unwitting assumptions about difficult issues in the philosophy of mind.) The clinical work that (good) psychiatrists do gives them practical experience that philosophers don’t typically have (fair enough), but it’s very narrow and doesn’t equip them with the resources to think about bread-and-butter philosophical issues. In any case, for many psychiatrists, “clinical work” nowadays means “medication management,” not therapy. I don’t see how expertise at managing a dosing schedule gives a person insight into the foundations of ethics. I’m willing to hear the argument, but off hand, I don’t see it.

That’s not to say that there aren’t brilliant philosopher-psychiatrists out there (e.g., Jonathan Lear, Richard Chessick…Sigmund Freud), i.e., people with excellent philosophical skills who have capitalized on their clinical work. I’d also be willing to say that they have insight and understanding that most philosophers in the field lack. But that’s a far cry from the claims you’re making.

One look at Brennan’s derisive comment below* should tell you that if you were looking for intelligent engagement with your arguments, you’ve come to the wrong place. If you’re interested in discussing the issues, feel free to come by my blog or contact me privately (contact info at the blog). I sometimes blog on issues at the intersection of philosophy and psychology in the broad sense (that includes psychiatry), and wouldn’t mind batting this one around. We’re mostly philosophers, but there are some psychologists and psychiatrists lurking in the “audience.” You might find it fruitful to have a conversation with us. And rest assured, we won’t ask you about your MCAT score or reduce your arguments to a diagnosis.

Val saw what I wrote and had this to say:

Irfan – I agree with a good deal of what you have said. An enormous amount of psychology and psychiatry research is indeed methodologically weak. As the saying goes, nearly of all of psychology research is trivial if true, and if attempting to show something non-trivial, is impossible to convincingly demonstrate. My experience as well has been that most psychologists and psychiatrists are grossly ignorant of the surrounding philosophical issues.  However, there are plenty of psychiatrists that I work with who are keenly aware of the epistemic problems of the assumptions inherent in modern psychiatry and are well read in the psychiatrist-philosophers, (Jung, Jaspers, Freud…Popper is also popular. Human Action was recently under discussion in the geriatrics department). …

I agree with that, of course. I also think it goes the other way. Most philosophers are grossly ignorant of psychology and psychiatry, but it’s unclear to me (one year into a psychology program) how much of a debility that turns out to be. If so much psychology research is trivial, what leverage does one get out of relying on it to do moral or political philosophy? Some, I think, but it’s difficult to articulate what it is.

Same issue from a different direction: as a journal editor and conference organizer, I read dozens of manuscripts in ethics and political philosophy from authors who are trying (sometimes trying too hard) to showcase their familiarity with cutting edge work and cutting edge ways of doing philosophy. A large proportion of this work showcases the latest work in psychology. Decades ago, Robert Nozick told us that either we work within Rawls’s system, or explain why not. Now the same is implicitly being said of Jonathan Haidt. It is, one might say, a haidtful state of affairs.

Much of this psycho-philosophical experiment-mongering strikes me, frankly, as trivial, and if you dig hard enough, you find in many cases that philosophers tend, subtly (or not so subtly) to overstate, distort, and cherry pick research findings from psychology to make them less trivial than they are.

The truth is, by comparison with the intuition-mongering philosophy literature, the psychological literature tends to be very, very equivocal. Here’s a random example that I just happened to read yesterday: Daniel Wegner and Sophia Zanakos, “Chronic Thought Suppression,” Journal of Personality, 62:4 (December 1994). The abstract says:

We conducted several tests of the idea that an inclination toward thought suppression is associated with obsessive thinking and emotional reactivity….[Our measure of thought suppression] was found to correlate with measures of obsessional thinking and depressive and anxious affect, to predict signs of clinical obsession among individuals prone toward obsessional thinking, to predict failure of electrodermal responses to habituate among people having emotional thoughts.

Then you read the article and the qualifications start coming: “Throughout this article, we have tried to caution that our interpretations of these results are not the only possible interpretations at this time” (p. 636).

It’s one of dozens of examples I could have used, from cognitive to clinical to political psychology. I’m not faulting the authors. My point is: psychology findings do not easily lend themselves for use as “inductive backing” for some controversial claim in ethics or political philosophy. They just aren’t written that way, or with that purpose in mind. But that’s the way philosophers often use them, at least in my experience. The psychology research of the philosophers is a lot like the God of the philosophers: not the original article. Philosophers seem wedded to the psychology of journal abstracts, not journal text–to unqualified thesis statements, not to the thesis-death-by-a-thousand-qualifications-followed-by-recommendations-for-more-grant-funding-and-research that one typically finds in the text. The jury is still out for me, but I often find myself wondering how useful all this psychology-mongering really is for philosophy.

Of course, then I read hand-waving, flat-footed philosophy that resolutely ignores the empirical literature, and I swing the other way. It also helps to read classic texts–Aristotle, Aquinas, Hobbes, Locke, Freud–and see how much they got wrong, empirically speaking. (Just think of what passes for biology or cultural anthropology in any one of these writers.) I just got finished reading Calvin Hall’s Primer of Freudian Psychology, published in 1954. One doesn’t think of 1954 as being that long ago–the Eisenhower Administration wasn’t ancient history–but the author has the nerve (so to speak) to assert that asthma, arthritis, and ulcers are psycho-somatic effects of ego defense mechanisms (pp. 85-87). Primal repressions, we’re told, arise in Lamarckian fashion via the “racial history of mankind” (p. 85). I guess sometimes pseudo-science is just pseudo-science. So I’d be the last to trash appeals to hard fact as a constraint on normative theorizing.

Val again:

I’ve often thought that psychiatry rewards the philosophically minded more than any other specialty. General medicine, for instance, largely reduces to this model: is the blood sugar >6%? If yes, implement algorithm given to you by the Joint Commission. Pattern recognition and memorization required, but not a lot of analysis.

In psychiatry, if a patient complains of depression, you have to say, what does depression mean to this patient? Is depression even real? How can I judge this patient as having depression when there are no absolute standards? How will I know if his depression is responding to treatment? Why is the treatment even working? What caused the depression? Why do some develop depression in similar circumstances but not others? Good clinicians conceptualize patients in such a manner, and this is how they are discussed at conferences. Poor psychiatrists uncritically push pills.

MIT press released a very good collection last year, Classifying Psychopathology, for sale on the shelves in the medical school book shop. I doubt very much a well read psychiatrist wouldn’t be “as good” (to use Brennan’s silly words) at discussing the contents as a Ph.D philosopher who specialized in ethics.

I agree with most (or a lot) of that, but notice that the context of Val’s comment is psychopathology. Yes, within that context, psychiatrists have a lot of challenging, important philosophical work to do. But the context is itself very narrow. You can master all that there is to know about psychopathology, whether psychiatrically or philosophically (or both), and still be light-years away from dealing with issues that are central to ethics.

Anyway, there’s a lot to think about and respond to there. To keep this post within reasonable length, I’ll post any further thoughts I have in the combox. But I figure that some of PoT’s lurking readers may have things to say–there are some psychologists and at least one psychiatrist out there, along with a few non-psychiatrist MDs–so I’ll just leave this open for comment.

*Brennan’s comment was below mine when I first wrote. As of March 9, 2015, Brennan’s response to Val no longer bears his name, and is attributed instead to an anonymous “Guest.” The same is true of a few other comments of his in that discussion.

Newsflash: Pakistani Taliban Kill Lots of Innocent Children (Sardonic Edition)

Here are my three favorite commentaries on the Pakistani Taliban’s recent attack on a school in Peshawar:

KABUL: The Afghan Taliban have condemned a raid on a school in Peshawar that left 141 dead in the country’s bloodiest ever terror attack, saying killing innocent children was against Islam.

Survivors said militants gunned down children as young as 12 during the eight-hour onslaught in Peshawar, which the Tehreek-i-Taliban Pakistan (TTP) said was revenge for the ongoing North Waziristan operation.

“The Islamic Emirate of Afghanistan has always condemned the killing of children and innocent people at every juncture,” the Afghan Taliban, which often target civilians, said in a statement released late Tuesday.

“The intentional killing of innocent people, women and children goes against the principles of Islam and every Islamic government and movement must adhere to this fundamental essence.”

“The Islamic Emirate of Afghanistan (the official name of the Taliban) expresses its condolences over the incident and mourns with the families of killed children.”

The Afghan Taliban are a jihadist group loosely affiliated to the Pakistan Taliban, with both pledging allegiance to Mullah Omar.

That’s from “Afghan Taliban Condemn Peshawar School Attack,” in Karachi’s Dawn.

Here’s another great one, for those who know a bit about Pakistani politics. It’s from Imran Khan, leader of Pakistan’s Tehrik-e-Insaf political party.

“I have never seen an atrocity like this in my entire life…I cannot even comprehend how someone could kill children like this,” he said.

“If someone killed my children like this, I would seek to avenge it as well,” Imran said.

Yes, terrorist attacks are really unprecedented for the Pakistani Taliban. I mean, who ever heard of the Pakistani Taliban killing innocent people? In Pakistan, no less? Has Imran sahib informed the Royal Society?

Then there’s this gem:

Obama terrorizes and murders innocent Pakistani citizens.

That’s supposed to be a commentary on drone warfare against the Pakistani Taliban. I’ve italicized the word of interest. Here is what I find interesting about it.

Suppose that the U.S. packed up its drones tomorrow and left South Asia for good. What does the author think should happen next? Broadly speaking, there are only two options. Either the Pakistani military fights the Taliban or not.

(1) Suppose they fight the Taliban. Suppose they choose to do so by means of the least destructive method available to them–drones. (Actually, drones are not quite ‘available’ to Pakistan right now, but imagine that they were.) Suppose that these drones kill “innocent Pakistani civilians” as a side-effect of the attempt to fight the Taliban. Would Nawaz Sharif then be as guilty of “murder” as Obama has been alleged to be? Or do you have to be an American drone operator to satisfy that description?

(2) Suppose that the Pakistani military chooses not to fight the Taliban, on the grounds that doing so would lead to the deaths of “innocent Pakistani civilians” (as it surely would). Suppose that the Taliban then murder Pakistani civilians with impunity for the next seven or eight years, as they’ve done for the last eight. In fact, imagine that the Taliban ratchet up their killings on the grounds that it’s easier to kill people when the army that’s supposed to be protecting them refuses to do so. Would the author be willing to accept those consequences as an implication of his fastidious strictures on drone warfare?

While I’m on this subject, let me ask one last set of questions. The Taliban are non-state actors–a kind of terrorist NGO. They are, in other words, de facto anarchists. According to anarchist theory, “the state” lacks legitimacy. So imagine we decide to get rid of it.

Now imagine, further, that “we” are Pakistanis. (Yes, I realize that my thought-experiment is starting to strain credulity at this point.) Let’s imagine, then, that “we” Pakistanis abolish the Pakistani state tomorrow. I assume that the Taliban would not be deterred from further depredations by this act.

So here is my question, intended for anarcho-capitalists: In what sense would Pakistanis be better off without a state than with one in facing the Taliban? And how should they fight the Taliban? Whatever the method, it must meet two specifications: (1) it must not involve the assistance of a state, and (2) it must not lead to the deaths of any innocent third parties. In this season of miracles, that surely can’t be too much to ask.

Postscript, December 18, 2014: More coverage of Peshawar. A poignant passage from a story from this morning’s New York Times, “Horror Paralyzes Pakistan After Methodical Slaughter“:

Some mourners expressed frustration at the apparent impotence of their own security forces. “What is this army for?” shouted one man at the city’s main Lady Reading hospital, where he had come to collect the body of his grandson.

“Where are their atom bombs and airplanes now?” he said. “They were of no use if they cannot protect us from death in our daily lives.”

Better questions could scarcely be asked, and truer words could scarcely be uttered. But we’re talking about armed forces that have begun every war they’ve fought, and lost every war they’ve begun: they’re guilty of genocide (East Pakistan, 1971) and willing to start nuclear war with India over uninhabitable chunks of ice (Siachen Glacier), but incapable of grasping the fact that their deals with the devil have surrendered the entire northwest of the country to totalitarian psychopaths bent on mass murder in the name of God. Pakistanis should never forget that the partition of the subcontinent was intended to give the Muslims of the subcontinent a safe haven from religious persecution by Hindus. Somehow, it never occurred to them that “they” might persecute “themselves.” Call it another grim chapter in the annals of that supposedly impossible phenomenon–“reverse discrimination.”

Meanwhile, from the same article:

Back at the deserted Army Public School, snipers perched on the rooftops, watching for a potential follow-up attack. In the nearby tribal belt, the Pakistani Army mounted fresh airstrikes.

Were they merely “fresh airstrikes” or were they mass murder? Would they have been mass murder if carried out by drones more precise than the airstrikes? I renew the question.

I find it interesting that in the English language press, at any rate, a lot of Pakistani commentary has taken the form of anguished questions. This column by Sameer Khosa in Lahore’s Nation consists of almost nothing but questions until this passage at the end:

Let us finally put an end to the criminally dishonest nature of our conversation on the Taliban, and on the national security challenge as a whole. Because now, we have seen its cost and it is unbearable.

Carry these children in your heart always. Let their innocence be the antidote to the lies that are peddled to us. Let their curiosity about the world remind us to ask anyone who has a one-sentence-long solution to this problem how they propose it will end. Let us fight in their name. Let their gravestones say: tell us now that this is not our war. Tell us now that this is not personal.

The problem is, this is what Pakistanis always say after a Taliban atrocity, only to forget it until next year’s atrocity. I’m not criticizing Khosa; I’m criticizing his audience. What he’s saying is undeniably true. So is what these people are saying. And these two.  The problem is that it’s been true for years. Remember what happened in Peshawar last year? It was Malala before that, and the massacre of the Shias of Derra Adam Khel before that, and the Geo TV station before that, and the Bajaur market before that, and the attack on the shrine of Data Ganj Baksh before that, and the one on the Ahmadi mosques in Lahore before that, and the assassination of Benazir before that. How many “before thats” does a rational person need before he figures out “we have a problem, and we have to solve it”? (Here’s a list of TTP attacks.) Unfortunately, what Khurram Hussain is saying is true, too.

Anyway, I can’t help continuing the semi-sardonic theme of the original post. So, a few quotations in that vein:

Khursheed Shah says terrorism is national issue

Speaking to media representatives after attending the MPC, Opposition Leader in the National Assembly Syed Khursheed Shah said there is a complete consensus among political parties of the country on the terrorism issue.

He expressed his resolve to stand shoulder to shoulder with the armed forces in their ongoing fight against terror. Shah also urged the media to play a proactive role in eradicating terrorists from the country.

The PPP leader said that even Israeli state does not carry out such atrocities on Palestinians like the terrorist did to young kids yesterday at the school in Peshawar.

That’s from Dawn, “No distinction now between good and bad Taliban: Nawaz.” I mean, if they’re worse than Israelis, then we really have to fight them. Incidentally, the U.S. just normalized relations with Cuba. Any chance of Pakistan doing the same with Israel sometime soon?

I certainly wouldn’t go quite as far as Sherry Rahman does here, but I see her point, and it’s a nice counter-narrative to those handwaving claims one hears about the virginal innocence of the Taliban’s clean-handed apologists and sympathizers:

PESHAWAR: Pakistan Peoples Party (PPP) leader Sherry Rehman said Wednesday that if anyone engaged in the apologist narrative when it comes to terrorism and terrorist attacks, they would be considered as terrorists and allies of the terrorists.

Time has come for a decision and anyone who presents justification for acts of terrorism will be regarded as a traitor.

“Whoever is a friend of the terrorists is a traitor,” Rehman said addressing media representatives in Peshawar.

Rehman urged that the people of Khyber Pakhtunkhwa will not remain the victims and instead become those who will lead the war against terrorists.

Of course, taken literally, Rahman’s policy would require locking up large chunks of Pakistan’s judiciary. But I don’t think Rahman quite means what she’s saying–at least not as stated. It’s still the heat of the moment.

I leave you, finally, with a Word Press Editor’s Pick for 2014, written in October by Mehreen Kasana, a Pakistani graduate student at a school in Brooklyn.

On my way to class, I take the Q train to Manhattan and sit down next to an old white man who recoils a noticeable bit. I assume it’s because I smell odd to him, which doesn’t make sense because I took a shower in the morning. Maybe I’m sitting too liberally the way men do on public transit with their legs a mile apart, I think to myself. That also doesn’t apply since I have my legs crossed. After a few seconds of inspecting any potential offence caused, I realize that it has nothing to do with an imaginary odor or physical space but with the keffiyeh around my neck that my friend gifted me (the Palestinian scarf – an apparently controversial piece of cloth). It is an increasingly cold October in NYC. Sam Harris may not have told you but we Muslims need our homeostasis at a healthy level. While our bodies regulate our internal fanatic temperatures to remain stable, sometimes it gets a little too chilly so we pull out those diabolical scarves and wrap them around our diabolical necks and diabolically say, “Holy shit. It is cold today, Abdullah.” To which Abdullah replies, “Wallah. My ass is freezing.”

Reading her, you’d think that the act of wearing a keffiyeh in Brooklyn or Manhattan was a wildly rare and transgressive occurrence. It isn’t. But let me add one more “maybe” to the list: maybe this is the kind of thing that happens occasionally, that the author could very well be imagining, that doesn’t matter much even if it happened, and that is best ignored rather than inflated into the occasion of a self-pitying drama of grievance stretching back to Hiroshima, the Raj, and the Atlantic slave trade.

See if you have the discipline to make it through the whole thing. Kasana doesn’t want to apologize for Muslim atrocities. That’s fine. I don’t think she should, and have said as much in the past. But try as hard as you can to make coherent sense of her claim that there is no distinction to be made between good and bad Muslims. And feel free to enlist the help of the Mahmood Mamdani article she links to in her post to do so. Yes, I realize that she’s rejecting the “binary opposition” of Good and Bad Muslim within a specific narrative. But at the end of the day, what does she think is left of the ordinary distinction between good and bad Muslims? Should we throw it out? I don’t know a single Muslim who thinks so. Try to make sense of what just happened in Peshawar while ignoring the distinction, and reflect on the results. Hard to do. So why should any non-Muslim apologize for making use of it? No apology, so to speak.

The Politics of Voting: Four Suggestions

I’ve been thinking a lot lately about voting. I have Jason Brennan to thank for having stimulated me to sustained thought on the subject, via his much-acclaimed book, The Ethics of Voting. As I’ve said before, I agree with Brennan’s thesis in a general way, but the more I think about the details of his argument, the less plausible I find them. (I find his arguments for voter disenfranchisement downright hopeless.) Here’s a link to the 2013 Reason Papers symposium on Brennan’s book, and here’s a link to an earlier critique at PoT of Brennan’s account of character-based voting.

I’ll have more to say about Brennan’s arguments as I find the time to write about them. Meanwhile, here are four quick thoughts on voting, three of them relevant to American elections, the fourth to Israeli elections. In each case, it seems to me that the wrong issues are being discussed–when they’re being discussed at all–and that we ought to change the terms of debate. Only the last of the four topics is relevant to Brennan’s work.

(I) Felon Disenfranchisement
There’s been a lot of talk in the past few years about felon disenfranchisement: felons in the U.S. (perhaps elsewhere, but I don’t know) are deprived of the right to vote. Here’s a fairly typical piece from The New York Times criticizing felon disenfranchisement as racist.

I find discussion of this topic confused. There are at least three different issues involved here; each needs to be distinguished from the others and discussed on its own terms.

(1) A first issue is: given an ideal definition of “felony,” and a well-functioning criminal justice system, should felons be permitted to vote, or should they be deprived of that right as an inherent part of their punishment?

My answer is, “they should be deprived of the right to vote.” I endorse a debt-based conception of punishment according to which, when we interact with someone, we owe him or her (at a minimum) respect for their rights. When someone violates those rights, he incurs a debt to the victim–a debt consisting of compensation for the lost value of the exercise of the victim’s right, among many other things. Punishment, in my view, ought to consist of repayment of that debt. If the debt can’t be paid in full–and for a variety of reasons, it may be impossible to do so–offenders can permissibly be deprived of those goods that would count as ill-gotten gains from crime.

Some simple examples: If you rob me, your voting to dispose of my income without having compensated me for the commission of the crime counts as an ill-gotten gain. Since you’re not entitled to such a gain, you can be disenfranchised. If you kidnap me, what you’ve done is illicitly to try to “govern” my actions by brute force. If I survive, you owe me compensation for your trying to rule me in this way. But voting is a case of ruling me, as well. So ruling me by the ballot counts as an ill-gotten gain (or would) until you’ve paid off the debt you incurred by kidnapping me. And so, once again, you can be disenfranchised until you do.

Suppose that the repayment-requirements on such debts are prohibitively high–high enough that they can’t typically be paid in full by anyone, regardless of how wealthy they are. On my view, government should in such cases have the authority to deprive offenders indefinitely of the right to vote. If you (the offender) can’t compensate me (the victim) for what you’ve done to me, you don’t have the right indirectly (i.e., by voting) to decide the disposition of goods that belong to me. And that, in effect, justifies the disenfranchisement policy we currently have. (For a somewhat similar view of punishment, see the work of Daniel McDermott, who defends what he calls a debt-based conception of retributivism. I’m not sure where McDermott stands on felon disenfranchisement, however.)

Suppose now that we think of government, on Lockean grounds, as a kind of mutual-defense pact for the protection of rights. In that case, any attack on the rights of any member of the pact is an attack on the rights of every party to the pact. By implication, a debt owed to the victim is simultaneously a debt owed to every party to the pact. If the debt in question cannot be discharged in full–and if the crime is serious enough, it probably can’t be–then the parties to the pact can permissibly deny the offender access to ill-gotten goods in lieu of full payment of the debt.

This resolves the old problem of the “missing beneficiary.” For example: if you murder me, you incur a debt to me for having done so. Of course, being dead, I’m not around to collect the debt. In that case, you owe a debt to the rights-respecting members of my society–via their agent, government. Now suppose that you can’t pay the debt in full. In that case, they can deprive you of certain categories of goods on my behalf as well as theirs. One good they can deprive you of is the right to vote: after all, your having that right would give you the right to dispose of the income that they have earned while you still owe them compensation for the right you’ve violated. And you’re not entitled to that. (Incidentally, even if I’m physically unavailable to collect a debt for having been murdered or wrongfully killed, I could during my lifetime have set up an escrow account as an insurance policy in the event of my murder/wrongful death. In that case, an offender might still be obliged to compensate me posthumously, with the proceeds going to my heirs or to the state, as my will or lack of one implies.)

Something similar would apply to rape, to assault and battery, to drunk driving, and to plenty of other recognizable felonies. In short, I don’t see why, as long as we define “felony” properly, felons should be allowed to vote. The debts they’ve incurred to the rest of us are sufficiently high that we needn’t worry so much about whether they have the right to govern us, or dispose of our income. They don’t. I don’t mean to suggest that we have no obligations toward them. I just mean that access to the ballot isn’t one of them.

(2) Second issue: is the operative definition of “felony” in the U.S. a good one? Does it, on moral grounds, include and exclude the appropriate items?

I’d say: “no” and “no.” This issue is the one that, in my view, actually gives rise to the felon disenfranchisement controversy. The real problem, it seems to me, is that we’ve made felons of people who shouldn’t be felons, and in consequence of that, have deprived people of the right to vote who should have it. If doing so has adverse racial consequences, my suggestion is: redefine “felon” more narrowly, so as to exclude certain categories of crimes from the list of felonies. If we do, I suspect that the “felon disenfranchisement” problem (insofar as it is a problem) either disappears or is greatly reduced in scope.

(3) Third issue: regardless of the definition of “felony,” is the U.S. criminal justice system systematically and unjustifiably biased against certain populations or sub-populations?

My answer: “probably.” No matter how we define “felon,” there will probably be residual problems in our criminal justice system, some of them with adverse racial consequences–some of them just plain old unjust–and those problems need to be addressed. But the resolution of those problems is not facilitated by the enfranchisement of felons. Convicted murderers, rapists, batterers, and drunk drivers have no distinctive insight into the rights and wrongs of criminal procedure. Nor does it make much sense to bank on the possibility that some small fraction of those convicted felons might be innocent (I’m sure some are), and might impart the wisdom of innocence to us via the ballot. The probabilities of that happening are tiny enough to render the venture as a whole quixotic.

The bottom line is that instead of crusading for voting rights for murderers, kidnappers, rapists, robbers, etc., we ought to be redefining “felony” and actively reforming the defects of our criminal justice system. Felon enfranchisement is just a distraction from those far more important tasks.

(II) Voter ID laws
Now consider voter ID laws. Here’s a usefully balanced article, also from the Times, suggesting that voter ID laws, while problematic, do not have the large-scale effects that some have alleged of them.

The standard argument for voter ID laws is that they pre-empt or minimize voter fraud. The standard argument against them asserts that there is little evidence of voter fraud in the U.S., that voter ID laws have racist effects, and that contrary to their proponents’ rhetoric, voter ID laws are covertly there to produce racist effects.

Once again, however, all this seems to me a distraction from the real issue. To see why, consider the tacit implication of the arguments against voter ID laws. Why, according to those arguments, are voter ID laws unfair? Spelled out, the answer is that large numbers of Americans lack the means to obtain photo IDs for themselves. Lacking access to photo IDs, they can’t meet the requirements of voter ID laws, and are de facto disenfranchised by them.

Suppose ex hypothesi that that’s true, and pause on it for a moment. Voting aside, isn’t that precisely the problem in need of discussion and rectification? How is it that large numbers of people in a first world country do not have access to the means of self-identification? Even if we do away with voter ID laws, the underlying problem remains in place. In other words, even if you don’t need an ID to vote, you need it for other things. How are people without IDs expected to open bank accounts, visit the doctor, or travel by plane–or get driver’s licenses, library cards, discount cards, or government benefits, etc.? Either they’re to do without these things because they lack ID, or they need access to these things, and must therefore obtain access to IDs. I would opt for the latter option, but no matter how you slice it, the issue is not voter IDs, but access to IDs as such. 

The scarce-access-to-IDs situation seems to me a good argument for having some equivalent of a national identity card in just the way and for just the same reasons that so many other countries have them. Here’s a case for them, from the Washington Post.

I agree with the reasons the Post gives for having them, but I’d give one more. It’s been argued by critics of social contract theory since Hume* that express-consent theories of government do not or cannot work because we never in fact consent expressly to government. I suppose that that’s partly true, at least for natural-born citizens; we don’t consent to government in the way that we consent, say, to the terms of a credit card. But I see national ID laws as a chance to respond to that problem. Why not structure the task of getting a national ID so that the act of getting one either requires express consent to the government issuing the card, or requires explicit non-consent? If you consent, you get an ID card, and with it, the benefits and burdens of “membership” in the polity. If you refuse consent, you don’t get a card, and can be denied the benefits of membership while being spared the burdens.

There are, to be sure, lots of complications here, many of them entangled in debates about immigration and immigration policy. I can’t settle those here. I would just say that it seems to me that the mechanism I describe is possible, and that its existence would rebut Hume-type arguments against consent, and solve some other practical problems as well. At the very least, focusing on our ID problem–which has significant adverse effects on people’s living their lives–beats focusing on a voter ID problem that seems not to have any significant effects on voting.

(III) Low voter turnout
Now consider the low voter turnout issue. The problem here is supposed to be that relatively few voters show up to vote. In partisan terms, that means that Democrats fare badly in the elections (which, of course, matters more to Democrats than to others). In more general terms, it means that our democracy is not as “robust” as it could be. Personally, I happen to think it means that the ballot choices we’re typically offered aren’t worth voting for, whether for or against. Here’s a website, FairVote.org, devoted to discussion of the issue. Once again, however, it seems to me that much of the discussion there and elsewhere is focused on the wrong things.

Suppose that we want to increase voter participation. (There are reasons not to want to, having to do with wrongful voting and voter incompetence, but set them aside.)  In that case, I’d offer two proposals:

(1) Put a “None of the Above” option on the ballot, so that voters can vote against all the (other) options on the ballot. As things currently stand, you can write “NOTA” as a “write in” on the ballot (I regularly do), but few people realize this, and most people surmise, correctly, that write-ins are meaningless. (I’ve encountered poll workers unaware of the fact that NOTA is a write-in possibility.) But if “NOTA” were on the ballot, it would be at least as significant as any other option on the ballot, and all those disgruntled voters who don’t vote because they dislike all the options might now vote in order to express that view.

(2) Move Election Day from Tuesday to the weekend. Yes, a small minority of mostly religious voters might be inconvenienced by that move (if so, they can use absentee ballots), but as it stands, huge numbers of working people are inconvenienced by Election Day’s having to compete with the workday. Change the day, and I suspect you’d increase voter turnout.

(IV) Voting and the right to complain
Let me move now from American to Israeli elections, or more precisely, elections in Jerusalem. When I visited Israel/Palestine in 2013, I was both surprised and dismayed to discover that while East Jerusalemite Palestinians have the right to vote in Jerusalem’s municipal elections (though not in Israeli national elections), they almost unanimously refuse to exercise that right, even though their exercising it would substantially change the political landscape of Jerusalem, and benefit them. The argument I heard from Palestinians was that voting would legitimize Israel, which they refuse to do. Sadly, the few Palestinians who offered to run for municipal office, or to vote for pro-Palestinian candidates or causes, were widely regarded by other Palestinians as traitors to the Palestinian cause.

I find that a self-defeating and incoherent set of attitudes. East Jerusalemite Palestinians widely accept–and demand–government benefits from Israel, so it makes no sense for them to refuse to exercise political rights that are on offer from Israel, especially if the refusal to exercise those rights merely disempowers those who refuse to exercise them. The fact is, the budget for government services in East Jerusalem is in the hands of non-Palestinian Israelis, as are decisions bearing on the protection of Palestinian rights. As things currently stand, decisions on both sets of issues are made in ways that ignore or violate Palestinian rights. I would argue that respect for one’s rights is essential to one’s well-being. As it happens, the only efficacious way of ensuring respect for Palestinian rights in Jerusalem is to make changes to the budget and policies of the Jerusalem municipal authority. And the only efficacious way of changing the budget and policies of that authority is to vote to change them. So the options are: vote to defend your rights, or acquiesce in their violation and the consequent diminution of your well-being.

Suppose that we each have a self-regarding moral obligation to promote our well-being (insofar as doing so is open to us). If so, given the preceding facts, Palestinians ought to vote. If “ought-hood” is sufficient for “duty” or “obligation,” then eligible Palestinian voters have a moral obligation to vote. Contrary to a recent argument of Jason Brennan’s, then, the case of East Jerusalemite Palestinians seems a picture-perfect example of the old saw that if you don’t vote, you shouldn’t complain–or more precisely, if you don’t vote, you shouldn’t complain about the things that voting would have improved, and that only voting can improve, at least for the foreseeable future. If you’re going to be taxed, and you’re going to be regulated, it makes no sense to stand by as your tax money is spent by everyone but you on everything but what matters to you. It likewise makes no sense to stand by as you are regulated to death by the people who are spending your money, as your rights go violated or ignored. Voting is in effect an act of self-defense, and self-defense is a moral obligation.

The obstacle here is supposed to be that it is not instrumentally rational for individual voters to vote, because individual votes cannot change the outcome of an election (or more precisely, cannot change the outcome of a sufficiently large election–a qualification that is sometimes relevant but often ignored in discussions of “voting,” as though all voting were large-scale voting). But if you know anything about Palestinian political culture, I think you’ll see that this objection is spurious. There is no need to worry about the efficacy or utility of individual votes qua individual if the voters in question don’t conceive of their votes in those terms in the first place. If voters naturally conceive of themselves as members of a solidaristic group, and can coordinate their efforts in a given direction as a group–and have a strong reason to do so, and might well be inclined to do so–then the unit of concern is not the utility of individual votes, but the votes of voting blocs qua blocs whose members self-consciously act in concert.

I realize I’m describing an idealized case, but my point is, it’s a possible case. In fact, it’s more possible and plausible than half of the thought-experiments that clog the philosophical literature. (By the way, there is no contradiction between seeing yourself as an individual with an individual obligation to promote your well-being, and seeing yourself, qua voter, as part of a voting bloc. Membership in the bloc could precisely be what promotes your individual well-being, so that your individual well-being is what dictates membership and a solidaristic self-conception in the first place.)

Now suppose that Palestinians** get their act together, ditching the nationalist and Islamist rhetoric that has retarded their progress for decades. They come to see voting as an act of both collective and individual self-defense. They also see the defense of their rights as a contribution to the common good (which includes Israelis). Suppose (perhaps improbably but not impossibly) that the Israelis do not interfere significantly with Palestinians’ voting en masse.

Suppose further that Palestinians think of voting by analogy with having an intifada. In other words, as with the first intifada in the 1980s, they organize their efforts to vote strategically*** as a single unified voting bloc: they caucus, organize, and promise one another to vote for pro-Palestinian policies. Suppose that it is relatively obvious what these policies should be, and what the votes for these policies should be. Suppose, further, that voters are well-informed. Now suppose that a large number of Palestinians enter these caucuses voluntarily, and through caucusing, manage to ascertain (by mechanisms internal to the caucuses) that there are enough Palestinian votes among them to tip the scales of a given Jerusalem election. If so, each Palestinian voter could regard himself or herself as part of an assurance contract with all other Palestinian voters. And if so, each voter would have an obligation (to the others and to him or herself) to vote in the way he or she had promised in the contract.

My argument here is essentially that if you can organize a mass uprising–an intifada–you can organize a mass voter campaign. Further, if an intifada involves the implicit equivalent of an assurance contract (as it does), you can in principle model a mass voter campaign on an intifada, and turn the campaign into an activity that involves an actual assurance contract. But if contracts bind, an electoral assurance contract yields a duty to vote. So under certain nomologically possible conditions, there can be a duty to vote, and given this duty, it can be irrational to complain about unfair or harmful political policies if you don’t vote.
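For readers who like their mechanisms spelled out: the assurance-contract logic of the preceding paragraphs can be put in schematic form. The sketch below is a toy model, not anything from Brennan or the voting literature; the numbers and function names are entirely hypothetical, chosen only to illustrate how a tipping threshold turns conditional pledges into an actual duty to vote.

```python
# Toy model of an electoral assurance contract (all figures hypothetical).
# Each voter pledges conditionally: "I will vote if and only if enough
# others pledge to tip the election." The caucus tallies pledges; the
# contract binds only once the tipping threshold is reached.

def contract_binds(pledges: int, threshold: int) -> bool:
    """The contract activates only when pledges reach the tipping point."""
    return pledges >= threshold

def votes_cast(pledges: int, threshold: int) -> int:
    """If the contract binds, every pledger is obliged to vote and does;
    below the threshold, no one's conditional pledge is triggered."""
    return pledges if contract_binds(pledges, threshold) else 0

# Suppose caucusing reveals 60,000 pledged voters against a hypothetical
# 50,000-vote tipping point for a Jerusalem municipal election:
print(votes_cast(60_000, 50_000))  # contract binds: all 60,000 vote
print(votes_cast(40_000, 50_000))  # short of threshold: no duty triggered
```

The point of the model is that the unit of efficacy is the bloc, not the individual ballot: no single pledge changes the outcome, but each pledge is a necessary part of a collectively sufficient act, which is exactly the structure an intifada already has.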

I can’t work through all the details here, but take a look at Brennan’s argument in light of the preceding. Either my East Jerusalem case is a counter-example to his thesis, or it’s a defeater for it. In the first case, it refutes the thesis as stated. In the second case, it suggests that the thesis is highly misleading as stated. Given that, my argument requires that Brennan qualify his claims about the ethics of voting in ways that take more explicit stock of cases like the East Jerusalem one–something that would substantially change the “flavor” of his theory.

I realize that Brennan has an explicit discussion of strategic voting in his book (The Ethics of Voting, pp. 129-33), and that the discussion includes a “strategic voting clause” (p. 131), but I think almost all of what he says talks past what I’m saying here. What he doesn’t discuss, either in the book or in the article I’ve linked to, is the possibility that you could have a duty to vote in cases like the East Jerusalem one, that your vote would matter in those cases, and that you’d have no right to complain if you didn’t vote. (See the notes below for a comment on “strategic voting.”)

While you’re looking at Brennan’s arguments, read his discussion of “the moral disenfranchisement of poor minorities” in The Ethics of Voting, pp. 105-7. I find the discussion very inadequate even on its own terms, but for present purposes it’s worth noting how narrow it is. Like so many American writers, in writing about “minorities,” Brennan structures his discussion around black-white relations in the U.S., assuming somehow that what he says about that will generalize elsewhere–everywhere. It doesn’t. In particular, he assumes that “poor minorities [will] overwhelmingly qualify as bad voters” by his criteria, and offers some rather handwaving suggestions about how they’re to handle–or how he would think about handling–their disenfranchisement.

What he doesn’t consider is the possibility that the issues in contention in a given election may sometimes be entirely straightforward and require nothing in the way of the social scientific “credentials” he regards as necessary conditions for eligibility to vote. Putting aside the American case, I think this is patently obvious in non-American ones, like that of East Jerusalem. It takes no special social scientific wisdom to figure out that your interests, your rights, and the common good are better promoted by someone who stands for fairness than by someone who makes no secret of wanting to subvert your interests, violate your rights, and exclude you from the common good. If Brennan’s epistemic elite hasn’t figured that out, frankly, they have a lot to learn.

I’m hoping to spend the summer of 2015 in East Jerusalem teaching at Al Quds University. While I’m there, I intend to make the case for what I call rights-based strategic voting by Palestinians in Israeli elections. Feel free to hit me with objections in the combox if you disagree with the sketch I’ve just given of it. I may well be hit with more than that while I’m there, and I’d like to start my preparations now.

*Actually, Hume concedes, almost parenthetically, that consent is a possible basis for political legitimacy: “I only pretend [aver] that it has very seldom had place in any degree, and never almost in its full extent” (paragraph 20). But that claim is entirely compatible with consent’s coming to be the basis of political legitimacy in the future by concerted effort aiming to bring it about. Considered as an argument against Locke on consent, what Hume says in “Of the Original Contract” strikes me as a series of ignorationes elenchi.

**For brevity, I use the word “Palestinian” throughout, but I don’t really mean to be restricting that to ethnic Palestinians. I’m using “Palestinian” as shorthand for those who would actively organize for and act on behalf of Palestinian rights in East Jerusalem. The bulk of those people would most likely be ethnic Palestinians, but not all of them would. It’s just too cumbersome to be explicit about this in every sentence.

***I’m using the term “strategic” in its colloquial, not its technical sense. In its technical sense, “strategic voting” is voting for candidates or policies that are contrary to one’s sincere preference, in the hopes that doing so will realize some preferred outcome. In the colloquial sense, “strategic” voting is simply voting to bring about some end by means of a collectively-adopted political strategy for bringing the end about. I happen to think that the technical concept of “strategic voting” is a confused and equivocal one, but that doesn’t matter. My scenario makes no reference to insincerity on voters’ part.

Postscript, Nov. 30, 2014 (relevant to proposal I, felon disenfranchisement): This blog post, at Slate Star Codex, is well worth reading on race and criminal justice in the United States. It complicates the picture, but I don’t think it changes anything I said about felon disenfranchisement. Hat-tip: Kate Herrick.

Postscript, April 5, 2015 (relevant to proposal IV, voting and the right to complain): Useful background on the political situation in East Jerusalem, from the London Review of Books.

Postscript, December 25, 2015 (relevant to proposal II, voter ID laws): An interesting article in The New York Times about Mayor de Blasio’s “New York ID” program and the obstacles to success it’s facing at area banks. All things considered, the program seems a step in the right direction.

My famous friends: name-dropping without (much) shame

A couple of days ago, I wrote a post dedicated in part to discussing the work of people I either don’t know, or barely know at all. Today’s post is just the opposite: a name-dropping attempt to bask vicariously in the glory of others’ accomplishments, simply because they happen to be friends or relatives of mine. There’s no credit like unearned credit! I’m going to bold everyone’s name below, just to make this post look more like the gossip column that it is.

My friend William Dale is Associate Professor of Medicine at Pritzker School of Medicine at the University of Chicago. (He has a half dozen other titles, but never mind.) He seems to make it into The New York Times every other day for his work on geriatrics, but here’s the latest, about the connections between his work and the National Social, Life, Health, and Aging Project at Chicago. And yes, that’s him in the header photo of their page.

I’m not sure I know Jose Duarte well enough to call us “friends,” but we have hung out a bit, so I’ll gloss over the niceties. Jose has been making waves for his research, with Jonathan Haidt, on the political biases of research in social psychology. Here’s a piece in The New Yorker about his most recent publication. And here’s a link to the paper itself, “Political Diversity Will Improve Social Psychological Research.”

My friend Stephen Hicks is celebrating the tenth anniversary of the publication of his 2004 book, Explaining Postmodernism: Skepticism and Socialism from Rousseau to Foucault. It’s gone through God-knows-how-many printings, and at least five translations that I know of, with more on the way. (I’d like to put in a vote for an Urdu translation, by the way.) I’d like to think that I made some tiny contribution to the success of the book; as co-managing editor of Reason Papers, I happened to edit (all right, co-edit) one of the longer and more positive reviews of the book. But obviously, I couldn’t have done that unless Stephen had written the book (and Steven Sanders had written the review!) in the first place.

Finally, on the Famous Friend Front, my buddy Chris Sciabarra is featured in a piece on Ayn Rand in New York Magazine, improbably titled, “Ayn Rand, Girl Power Icon.” Amusingly, the piece opens with Chris’s professed puzzlement about the phenomenon, and only gets better from there.

I mentioned famous relatives. Did I tell you that my cousin Khawaja Saad Rafiq is the Minister of Railways for the Islamic Republic of Pakistan? I only mention that because here’s a piece featuring Saad bhai in the Pakistan Observer. In it, he takes issue with Jason Brennan’s thesis in The Ethics of Voting. According to Dr. Brennan, we have no duty to vote, but according to cousin Saad, the “Country Can Only Make Progress Thru the Power of Vote.” Well, Saad bhai doesn’t quite mention Dr. Brennan by name, but the implicit spirit of contention is there. I actually think that a conversation between Saad bhai and Dr. Brennan on voting would be a hilariously instructive affair for all parties. In fact, I offer in advance to serve as interpreter to overcome the language barrier* for the conversation. I rather doubt that the event will ever happen, but as a thought-experiment, I think it has a lot to recommend it.

*PS, I kind of think that language would be the least of the barriers involved. Cf. Bernard Williams on real and notional confrontations, Ethics and the Limits of Philosophy, pp. 160ff.

Postscript, December 19, 2014: Amazingly, within a few weeks of my issuing a call for an Urdu translation of Explaining Postmodernism, Stephen Hicks has announced a forthcoming Hindi translation. Behold the power of PoT.