Character-Based Voting and Leadership Effects (1 of 3)

Here’s yet another post from my project on character-based voting (CBV). It’s the first of three posts on CBV and leadership effects, and one of many on CBV.

As I’ve said in previous posts, “character-based voting” is voting for or against a political candidate on the basis of what the voter takes to be his traits of character. That contrasts with “policy-based voting,” which is voting for a candidate based on the expected consequences of the policies the voter expects the candidate to pass.

Continue reading

Character-Based Voting and the Policy of Truth

For the past six months or so, I’ve been working on a project on what I call “character-based voting” (CBV), construed as voting for a political candidate based on her traits of character, as contrasted with “policy-based voting” (PBV) which is voting for a political candidate based on the expected consequences of the candidate’s expected policies.

It’s a rough and in some contexts problematic distinction, but clear enough to work with. There’s a clear enough distinction to be drawn between voting for a candidate because you regard her as more honest than her rival, and voting for a candidate because you expect her to enact policies X1…Xn, which have expected consequences C1…Cn, which you regard as net favorable, but which you don’t expect her rival to enact. My modest claim is that CBV can in principle be justified, and has its place. Continue reading

Moral Grandstanding and Character-Based Voting

I’ve recently been teaching Justin Tosi and Brandon Warmke’s paper, “Moral Grandstanding”–and simultaneously been working on a paper on Jason Brennan’s critique of character-based voting–and happened to see an interesting connection between the two. So this post harks back to, and ties together, two topics we’ve recently been discussing here at PoT–Michael’s recent post on grandstanding, and mine on character-based voting.

Suppose, as per Brennan’s argument in The Ethics of Voting, that character-based voting is justified insofar as character functions either as a proxy for the policies that a candidate might enact once elected, or more generally, for the quality of governance he might be expected to engage in. Now suppose that character-based voting sometimes is justified on those grounds, so that character sometimes does function as a proxy variable for predictions about a candidate’s capacities for good governance in the future. Continue reading

Character-Based Voting: The Case of Joseph P. Ganim

This story, about the current gubernatorial campaign in Connecticut, offers a near-perfect exemplification of the criticism that I’ve made in the past of Jason Brennan’s critique (in The Ethics of Voting) of character-based voting. “Character-based voting” is a vote for or against a candidate based primarily on considerations concerning the candidate’s moral character, as contrasted with considerations concerning the policy positions he promises to adopt (or can reliably be predicted to adopt). Brennan argues (or more precisely, asserts without argument) that character-based voting is only legitimate insofar as it functions as a proxy for predictions about policy, adding (or half-adding) that it usually doesn’t.

One of my objections to Brennan’s claim is that it assumes without argument that future-oriented considerations are the only ones that matter to deliberations about how to vote for political candidates. But (I suggest) elected office comes with rewards, and it’s plausible to think that considerations of moral desert are relevant to the distribution of rewards. Moral desert is a past-oriented consideration. Absent an explicit discussion of the role of moral desert in voting, and an argument that it’s somehow outweighed, defeated, or made irrelevant by future-oriented considerations, the role of moral desert can’t be dismissed. Since moral desert can’t be dismissed, a candidate’s past can’t be dismissed, insofar as it reveals relevant considerations of moral character. But if that’s right, the case for character-based voting is stronger than Brennan makes it out to be.    Continue reading

Happy Halloween 2017

I’m reblogging this post I did in 2014 and 2015, modified after taking a year off in 2016.

Halloween has, for as long as I can remember, been the only holiday I’ve ever been able to take seriously or wholeheartedly celebrate. As a nominal Muslim, I fast during Ramadan, but Ramadan isn’t really a holiday, and unfortunately, none of the Muslim holidays (the Eids) are seasonal, seasonality being an essential property of a real holiday. In fact, generally speaking, Muslims have trouble figuring out when exactly their holidays are supposed to take place–another liability of being a member of that faith.

Having spent a decade in a Jewish household, I have some affection for some of the Jewish holidays–Yom Kippur and Passover, though not Hannukah or Purim–but always with the mild alienation that accompanies the knowledge that a holiday is not one’s own: it’s hard to be inducted into a holiday tradition in your late 20s, as I was.

I like the general ambience of Christmastime, at least in the NY/NJ Metro Area, but unfortunately, once you take the Christ out of Christmas, you take much of the meaning out of it as well, Christmas without Midnight Mass being an anemic affair, and Midnight Mass without Christ being close to a contradiction in terms. Not being a Christian, I find it hard to put Christ back into Christmas, mostly because he’s not mine to put anywhere in the first place. (Same with Easter.)

Diwali I just don’t get. Continue reading

Republican Islamophobia: A Response

This is a much belated response to Peter Saint-Andre and Michael Young on Republican Islamophobia, from my post of January 5. Given its length, I’ve decided to make a new post of my response rather than try to insert it into the combox.

Looking over the whole exchange, I can’t help thinking that the point I made in my original post has gotten lost in a thicket of meta-issues orthogonal to what I said in the original post. I don’t dispute that the issues that Peter and Michael have brought up are worth discussing, but I still think that they bypass what I actually said.

Continue reading

Killing in the Name Of: Jason Brennan on Abortion and Self-Defense (1 of 2)

Jason Brennan put up a post a few weeks back on abortion and self-defense (Nov. 30), written in the wake of the Planned Parenthood attack in Colorado Springs (Nov. 29). The point he makes is simple, and the argument he offers is, very narrowly construed, sound. But construe the conclusion slightly differently than he does, and the argument misses the point in an obvious way.

The claim in short is that if you think that abortion is murder, and its victims are innocent, you have the right to defend the innocent by force. If the force in question requires killing those who perform abortions, so be it. Brennan invokes a lot of “common law” reasoning to bolster the plausibility of the conditional*, but the appeal to common law is a dialectical fifth wheel that does no real work here. He’s just assuming what we all assume–that you can kill a killer.  After some thought-experimental invocations of superheroes, we reach the conclusion that if you believe that abortion is murder, it would be permissible for you to go around killing abortion providers.  Here’s the conclusion of the argument, put in the mouth of the would-be fetus defender:

“I will, if necessary (if there are no equally effective non-lethal means), kill any would-be child murders to stop them from killing children.” Again, this seems heroic, not wrongful.

Note the parenthetical. What we have here is a conditional claim whose antecedent involves another conditional. Let me re-phrase it slightly, without loss of authorial intention, but with a little gain in clarity:

If necessary, and if there are no equally effective non-lethal means, then kill those whom it’s necessary to kill in order to stop the killing.

Lots of modal claims going on there. Let’s rephrase once again:

If necessary, kill those it’s necessary to kill in order to stop the killing, but if it’s not necessary, do not do so.

What does “necessary” really mean here? I take it that “necessary” means “necessary for bringing about some end.” But the end is not plausibly construed as “bringing abortions down to zero, full stop, by all available means, regardless of any other normative considerations.” The end in question is some complex goal, e.g., a just society or the common good or whatever, where superordinate higher-order features of the goal regulate subordinate features, including strategies for achieving this or that political outcome.

So the anti-abortionist’s ultimate goal is not plausibly described as “do what’s necessary to stop the killing.” It’s “do what’s necessary to bring about the common good, stopping the killing in a way that’s compatible with bringing about the common good.” I’m pro-choice, but it seems to me that anti-abortionists (or pro-lifers or whatever we call them) are entitled to a plausible conception of post bellum considerations, no matter how militant they are about ending abortion. They don’t just want to end abortion, full stop. They want to live in a just society without abortion, and it may not be possible to do that if you try to end abortion by killing people. In any case, the two things–stop the killing and live in a just society without abortion–are not the same thing.

Suppose that abortion really is murder. In that case, killing abortionists would be one obvious means of stopping abortions, but killing would also likely have seriously adverse consequences. It might increase hostility toward anti-abortionists to the point of instigating widespread persecution against them. It might even start a civil war. Further, it’s easier in talk than in practice to kill all and only the “right” people during a terrorist/vigilante campaign. Once the killing begins, the enterprise of killing is often overcome by some terrorist/vigilante equivalent of the fog of war, and the wrong people get killed with amazing frequency. Any of those outcomes could obtain, and any of them might end up being worse for the anti-abortion cause (much worse) than not killing abortion providers.

It’s hard to be precise about expected outcomes of this sort, so people reasonably disagree about them. Some people think that a campaign of killing would, all in, be good for the anti-abortion cause. Others disagree. Obviously, both the complexity of the calculations and the possibility of disagreement about them might help explain why even fervent anti-abortionists have a (disjunctive) principled reason for not going around killing abortionists. They may either think that doing so is self-defeating, or they might think that doing so might very well end up being self-defeating, and not worth risking, as long as there are relatively peaceful (or at least orderly) political means for achieving the same ends with less collateral damage.

In recent times, the history of the abortion controversy begins with a deceptively liberating case from the pro-choice perspective (Roe v. Wade) and proceeds from there through a series of judicial restrictions on the protections Roe originally established, so that abortion, though nominally legal in the U.S., is in many ways embattled and under siege. In other words, opponents of abortion rights have done a pretty creditable job of subverting the right to abortion by purely legal means. Of course, abortions do still take place, and on the anti-abortion view, those abortions are murder. But the question is whether a campaign of vigilante killing would have purchased more for them than the political-judicial campaign they’ve actually enacted. The answer is hardly as obvious as Brennan’s argument suggests.

It’s an open question whether anti-abortionists could, by purely legal means, do a better job of subverting abortion rights than they could by killing abortionists. The United States ended slavery by warfare in 1865; Brazil ended slavery without warfare in 1888. Anti-abortionists could in principle plump for a Brazilian approach to the abolition of abortion on the grounds that while that approach would take longer, it might prove more counter-factually stable than a faster-acting but more violent approach. Arguably, violence would be counter-productive and self-defeating, possibly catastrophically so.

Since it makes no sense to enact a self-defeating strategy, and it’s highly risky to enact what could be (catastrophically) self-defeating, anti-abortionists need not worry that Brennan’s argument pushes them into wanton murder. Contrary to Brennan, “the” issue involved in the abortion debate is not just the moral status of abortion (though I agree that that’s the fundamental issue) but what to do about the fact that abortion is a complex issue that elicits widespread disagreement. In other words, the philosophical issue is not just the theoretical one of whether or not abortion is murder, but the practical one of what to do about the fact that certain ways of disagreeing about it are potentially murderous.

Now consider Brennan’s list of would-be objections to his argument:

There are a number of objections to this line of reasoning, including:

  1. It’s wrong to engage in vigilante justice.
  2. Batman must allow people to murder children because he has a duty to obey the law, and the law permits child murder.
  3. Batman must not kill the child-killers, but must instead only use peaceful means.
  4. Batman must not kill the child-killers, because it probably won’t work and won’t save any lives.
  5. Batman must not kill the child-killers, because they mean well and don’t think they’re doing anything wrong.
  6. Batman must not kill the child-killers, because the claim that “killing six-year-olds is wrongful murder” is controversial among reasonable people.
  7. Batman must not kill the child-killers, because the government or others might retaliate and do even worse things.

I think these objections are either implausible (e.g., 2 is absurd), or are at best mere elaborations of the necessity proviso of defense killing. (E.g., #4.)

Putting aside (4), Brennan is right to say that these are pretty pointless objections. Objection (4) is where the action is. (Construed a certain way, [4] might well entail [1]: vigilante justice might be wrong because it’s likely to be ineffective, and it’s irresponsible to engage in a political strategy that might very well backfire. But I think Brennan intends [1] to mean that vigilante justice is deontically wrong qua violation of the law, full stop. So I’ll ignore it.)

Brennan dismisses (4) as “at best a mere elaboration…of the necessity proviso of defense killing.” Well, that’s one way of putting things, and not a literally false one, I suppose. But it’s very misleading: a “mere elaboration” of a proviso can also explain why the proviso cannot be enacted under foreseeable conditions, and (4) does just that. In other words, what Brennan calls “at best a mere elaboration” ends up explaining why, once we leave the thought-experimental laboratory, his suggestion makes no sense in the real political world where it’s supposed to have application.

Digression: the same sort of “elaboration” is the strategy behind what’s come to be called “contingent pacifism” in the just war literature; contingent pacifism is the strategy of justifying de facto pacifism by construing just war provisos in such a way that they can almost never be satisfied in the real world. This literature suggests that depending on how one construes its claims, just war theory (and its doctrine of necessity) can lead either to very hawkish policy prescriptions or to pacifism. But if the same theory leads different theorists to contrary outcomes with respect to the same issue, the differences between the different applications of the theory–the contingencies in question–can hardly be philosophically trivial. If my version of a doctrine leads me to wage war, and your version of the same doctrine prohibits you from ever going to war, it makes no sense to say, “Don’t worry, we’re agreeing on the theory; we just disagree on the contingencies.” In this case, the disagreement on the contingencies could mean the difference between a decade of war and a decade of peace. Conceptualizing that difference is a paradigmatically philosophical task.

Back to abortion: Not killing abortionists because you could get arrested, and/or because it would undermine the anti-abortionist cause, and/or because the collateral damage would be too high, and/or because it could start a civil war are not trivial considerations, whether “morally” or “practically.” From the first person perspective of an agent deciding what to do–not what to write in a blog post–these are all considerations of paramount importance. They make the difference between going ahead and killing someone and deciding not to. So a reader could grant 99.9999% of Brennan’s argument in principle, but still think that the 0.0001% remainder makes a crucial and theoretically significant difference to political practice. And he might insist that Brennan’s way of rendering the argument reveals a blind spot in his thinking about the relation between theory and practice.

I’d put the latter issue like this: Taken as an academic exercise, with all qualifications duly noted, and abstracting entirely from what would be necessary to enact his advice in practice, Brennan’s argument is perfectly sound. Taken as real-world political advice, however, and factoring in all relevant considerations–including prudential considerations about expected consequences–Brennan’s advice is myopic and insane. It seems to me that when the theoretical version of a prescriptive argument ends up sound, but the practical version of it is insane, we’re obliged to think harder about the relation between arguments, theory, and practice.

At a minimum, I think we’re obliged to note the huge gap that obtains between theoretical prescriptions and practical ones. It sounds oxymoronic, but it isn’t. A theoretical prescription is a prescription offered ex hypothesi, as an exercise in deontic logic, without pretending to guide real-life practice: it notes a normative entailment; it doesn’t claim to tell people what to do. A practical prescription is a prescription intended to guide practice, all things considered; it doesn’t just note an entailment, but tells us, all in, what to do.** Put differently, there is a huge difference between saying, “Your views entail that you should go out and kill people–but don’t actually do that, for God’s sake, I’m only pointing out where your views lead!” and saying, “Your views entail that you should go out and kill people–and if that’s where your views lead, so be it. So get your gun and hop to it!” Brennan is saying the former (I think), but you could be excused for interpreting him as saying the latter. The lesson here is paradox-like but not paradoxical:  A prescriptive argument can be sound and yet defective as advice.

The underlying disagreement here, it seems to me, is a version of Hobbes versus Aristotle on prudence. Aristotle takes phronesis (‘prudence’) to be an intellectual virtue that guides individual, first-personal decisions. Despite its practical, individualized, contextualized, consequence-sensitive, first-personal nature, Aristotle insists that phronesis is a legitimate object of philosophical inquiry and a legitimate source of knowledge (Nicomachean Ethics, VI.5-13). A view like this puts a certain premium on the nuts and bolts of deliberation, from acceptance of the premises that motivate an action down to the details of what ultimately produces the action in the real world. On an Aristotelian view, what’s philosophically interesting is not just the abstract schema that the agent accepts but how the agent translates that schema into the particularities of a particular action. “Translating a schema into the particularities of a particular action” is the work of phronesis.

Hobbes denies that prudence so conceived has any significant epistemic value (Leviathan, IV.46.1-6):

… we are not to account as any part thereof, that originall knowledge called Experience, in which consisteth Prudence: Because it is not attained by Reasoning, but found as well in Brute Beasts, as in Man; and is but a Memory of successions of events in times past, wherein the omission of every little circumstance altering the effect, frustrateth the expectation of the most Prudent: whereas nothing is produced by Reasoning aright, but generall, eternall, and immutable Truth.

Prudence, in short, is unscientific. It yields contingent, changeable, contextualized truths, neither important enough nor counterfactually stable enough nor wide enough in scope to count as genuine philosophical knowledge. How the agent translates an abstract schema into action is philosophically uninteresting. What matters is the schema–the model– itself. From this perspective, an inquiry into what the agent is, all things considered, to do seems too fine-grained, variable, and messy to be a genuinely philosophical or genuinely worthwhile activity.

Contemporary Hobbesians (as I’m thinking of them) prize thought-experimentation and social science at the expense of mere first-hand experience, and at the expense of an account of the requirements of first-personal deliberation (i.e., prudence). First-personal agents disappear from view, as do their deliberations and deliberative needs. From this perspective, the mere prudence required for intelligent political action is unworthy of philosophical inquiry. Anarchist Hobbesians have a plausible-looking rationale for this insistence: on their view, politics is an unworthy occupation, so it stands to reason that the epistemic virtues it requires are themselves unworthy of sustained reflection.***

As I see it, one of the most valuable contributions of neo-Aristotelian theorizing (in the Nussbaumian mode) is to put social science and thought-experimentation in their place, and insist on the first-personal perspective of the agent and her deliberations–along with history, psychology, and common sense. On a view like this, it isn’t enough to know that if abortion is murder, and self-defense is justified, you can infer that defensive killing would be justified to save fetuses from murder. You need to know whether, even if that argument is sound, you should actually be out killing people. If so, you need to know whom to kill, when and how; how to prevent predictable disasters that arise when you start killing people; and how the killing enterprise fits into the larger aim of achieving the common good. That sounds like “mere strategy” to some people, but on an Aristotelian view, it’s precisely the kind of knowledge that the just and wise agent has, and that the political philosopher studies in order to grasp the nature of justice and wisdom.

Anyway, thought experiments and social science are of some, but relatively little value here. Eventually, thought experiments run out of prescriptive steam for the obvious reason that life isn’t an experiment. Social science runs out of useful things to say because we can’t do experiments on novel courses of action that no one has yet tried–but we can’t refuse to do novel things because there’s no existing social scientific literature about them, either. A virtue like phronesis is indispensable here, both for deliberative agents and for theorists theorizing about what such agents do. If you’re going to do something–e.g., engage in political action–you have to know how to do it, and the only way to know how to do something is to have done it (or have rehearsed doing something as much like it as possible). You need the kind of knowledge that Hobbes denigrates and that our neo-Hobbesians ignore. 

Bottom line: even if you think abortion is murder, don’t do what Jason Brennan tells you. (PS: It’s not really relevant to my argument, but in case you’re wondering, I’m pro-choice on the abortion issue. I believe in abortion on demand from the moment of conception until birth, with some moral reservations about late abortion, while rejecting legal restrictions on it.)

*I corrected this sentence. It originally said, “antecedent of the conditional,” but what I meant was that Brennan invokes common law to bolster the plausibility of the conditional as such.

**I reworded the latter clause after posting. The previous version (which I’ve now forgotten) was wordier and somewhat unclear.

***”Anarchist Hobbesian” may sound like a contradiction in terms, but I don’t think it is. It could mean (a) an anarchist whose meta-philosophical views map onto Hobbes’s and/or (b) an anarchist whose account of political authority maps onto Hobbes’s, but who infers on that basis that no states have authority.

Jason Brennan and Phillip Magness: A Request for Disclosure

Considering the number of times Jason Brennan has alluded, in the context of public discussion, to his once having worked at GEICO, I think it’s only fair that he disclose the following for public consumption:

  1. When did he work at GEICO, and at what location?
  2. What was his title while working there?
  3. What was his salary?
  4. Did he work there through a temp agency, or was he hired directly by GEICO itself?

If the GEICO job is important enough to bring up that many times, it’s worth clarifying the details by way of answers to the preceding questions.

A similar query is in order for Phillip Magness, who’s also been very autobiographically assertive on the subject. The article linked to in the preceding sentence alludes to 1.5 years spent as a full-time adjunct (I’m presuming that “1.5 years” refers to the period 2008-2010, corresponding to the position of Lecturer at American University on his CV), then invites us to do some “arithmetic” about the income he claims to have earned during that period, and how he managed to live on it while being otherwise productive.

That’s fine, but Magness’s CV indicates that he received three grants during roughly the same period (2007, 2009, 2011). I regard the 2007 and 2011 grants as potentially relevant even though they strictly speaking fall outside of the 2008-2010 period. To be blunt, a year and a half of adjunct work cushioned by three grants is not quite as impressive as the impression one might get by reading the unadorned version of Magness’s apologia pro vita sua.

Three questions for Magness, then:

  1. What was the cumulative monetary value of those three grants?
  2. Does his CV exhaustively list all of his income sources for the relevant years (meaning 2007-2011)?
  3. Did he, during those years (2007-2011), live in a household with someone earning an additional income?

All three questions strike me as relevant to evaluating the story Magness tells.

One problem with both sides in the adjunct debate is that the most assertive people in it seem more interested in parading selective recountings of their valor or misfortunes than in documenting their claims in a way that demonstrates the credibility of what they’re saying to neutral or skeptical readers. If people are going to start going autobiographical in the Great Adjunct Debate–whether they’re adjuncts recounting their minimum-wage woes, or academic stars recounting their Horatio Alger stories–I think they owe us fuller disclosures than any of them have been making about the stories they tell us. Brennan and Magness clearly think of themselves as exemplars for the rest of the profession. How about exemplifying some disclosure about those stories you’ve been telling?

Postscript, 11 pm: I’m satisfied with Brennan’s answer, but on second thought, I have to say I’m not just puzzled but mystified by the autobiographical claims Magness has made in his increasingly famous essay, “The Myth of the Minimum Wage Adjunct.”

As someone who spent the last ~1.5 years of grad school as a so-called “full time adjunct,” constituting my only real source of income at the time, I can state first hand that it will not make you wealthy.

So he was an adjunct for 1.5 years, during which time adjuncting was his “only real source of income.” I take it that the word “real” implies that there was some other, secondary source of income. I’m curious what it was.

Later he tells us,

I can also speak to this first hand as it is something I learned to do quickly during my own period as a full-time adjunct ca. 2008-2009. I was not anything close to well off during this period of my career, but with a little basic time management I not only met my teaching obligations but I (1) finished a dissertation, (2) wrote several peer reviewed articles, (3) composed a substantial part of an academic press monograph, and (4) found more permanent employment.

The problem is, his CV lists a Doctoral Research Grant from George Mason University for the year 2009. I can see how the grant might not literally have overlapped with the adjuncting: if he started adjuncting in January 2008, and continued through fall 2008 and then spring 2009, that would be 1.5 years of adjuncting; he could then have gotten the research grant for the latter half of 2009. But I’m speculating. I think we’re entitled to hear the explanation directly from him.

Literal overlap or not, he cannot, on this basis, claim to “speak to this first hand,” where “this” refers to the experience of the average full-time long-term adjunct–which is what the discussion at BHL was about. One and a half years of adjuncting sandwiched between two grants, along with some undisclosed secondary income source, is not long term adjuncting in any sense relevant to the ongoing controversy. And we don’t even know what he did during the summer of 2008, when he was a “so-called ‘full time adjunct’.” According to Magness, adjuncts don’t teach during the summer months (point 5 of his enumerated points), from which it seems to follow that he didn’t. So did he simply go without income during the summer, or is that when the non-real income source kicked in? If so, what was the source? The answer surely has some bearing on the relationship between his personal experiences and the predicament of the long-term adjunct.

Whatever the answers, we’re left with a mystery in Magness’s account that’s worth clearing up. He wants us to believe that he knows what it’s like to be a long-term adjunct, but the story he’s telling is consistent with saying this:

I was a so-called full time adjunct during 2008-9. Of course, I got a grant in 2007, then one in 2009, and I wasn’t an adjunct during the summer of 2008. During the summer, I got a real job–a real job, albeit with an unreal income. Meanwhile, I had established a relationship with the Institute for Humane Studies, which eventually gave me an administrative job as Academic Program Director, a job I cheerfully hold while suggesting all over Twitter that the university’s problems could be solved if only we eliminated all of those useless administrators on the payroll. I realize that very, very, very few long-term adjuncts could get such a job, precisely because it’s sui generis, and I am now the person who holds it. And yet, I won’t hesitate to lecture long-term adjuncts about what bad time managers they are.

Say it ain’t so, Phil.

David Potts on the Dunning-Kruger Effect

It’s a little known fact that some of PoT’s most avid and engaged readers lurk behind the scenes, being too bashful to log onto the site and call attention to themselves by writing for public consumption. What they do instead is read what the rest of us extroverts write, and send expert commentary to my email inbox. I implore some of these people to say their piece on the site itself, but they couldn’t, possibly. They’re too private for the unsavory paparazzi lifestyle associated with blogging.

About a month ago, I posted an entry here inspired–if you want to call it that–by a BHL post on graduate school. Part of the post consisted of a rant of mine partly concerning this comment by Jason Brennan, directed at a commenter named Val.

Val, I bet you just think you’re smart because of the Dunning-Kruger effect.

Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.

My rant focused on Brennan’s conflation of psychiatry and clinical psychology in the second sentence (along with the belligerent stupidity of the claim made about clinical psychology), but a few weeks ago, a friend of mine–David Potts–sent me an interesting email about the Dunning-Kruger effect mentioned in the first sentence. David happens to have doctorates in philosophy and cognitive psychology, both from the University of Illinois at Chicago; he currently teaches philosophy at the City College of San Francisco. In any case, when David talks, I tend to listen.

After justifiably taking issue with my handwaving (and totally uninformed) quasi-criticisms of Jonathan Haidt in the just-mentioned post, David had this to say about the Dunning-Kruger effect (excerpted below, and reproduced with David’s permission). I’ll try to get my hands on the papers to which David refers, and link to them when I get the chance. I’ve edited the comment very slightly for clarity. I think I’m sufficiently competent to do that, but who knows?

First, about the Dunning-Kruger effect. I had never heard of it, which got my attention because I don’t like there to be things of this kind I’ve never heard of. So I got their paper and a follow-up paper and read them. But I was not much impressed by what I read. How is Dunning-Kruger different from the well-established better-than-average effect? For one thing, [Dunning-Kruger] show — interestingly — that the better-than-average effect is not a constant increment of real performance. That is, it’s not the case that, at all levels of competence, people think they’re, say, 20% better than they really are. Rather, everybody thinks they’re literally above average, no matter how incompetent they are. This is different from, say, knowledge miscalibration. Knowledge miscalibration really is a matter of overestimating one’s chances of being right in one’s beliefs by 20% or so. (That is, people who estimate their chances of being right about some belief at 80% actually turn out to be right on average 60% of the time; estimates of 90% correspond to actually being right 70% of the time, etc.) But in the cases that Kruger and Dunning investigate, nearly everybody thinks they’re in the vicinity of the 66th percentile of performance, no matter what their real performance. So that’s interesting.

But that is not the way Dunning and Kruger themselves interpret the importance of their findings. What they take themselves to have shown is that incompetent people have a greater discrepancy between their self-estimates and their actual performance because, being incompetent, they are simply unable to judge good performance. If your grasp of English grammar is poor, you will lack the ability to tell whether your performance on a grammar test is good or bad. You won’t know how good you are — or how good anyone else is for that matter — because of your lack of competence in the domain. Lacking any real knowledge of how good you are, you just assume you’re pretty good. On this basis, they predict that incompetent people will very greatly overestimate their own competence in any domain where the skill required to perform is the same as the skill required to evaluate the performance. (Thus, they do not suppose that, for example, incompetent violin players will fail to recognize their incompetence.)

The trouble I have with this is that it is not well supported by the data. What their data really show, it seems to me, is that in the domains they investigate, nobody is very well able to recognize their own competence level. The plot of people’s estimates of their own abilities (both comparative and absolute) against measured ability does slope gently upwards, but very gently, usually a 15%–25% increase despite an 80% increase in real (comparative) ability level. The highly competent do seem to be reasonably well able to predict their own raw test scores, but they do not seem to realize their own relative level of competence particularly well. They consistently rate their own relative performances below actuality. For example, in one experiment people did a series of logic problems based on the Wason 4-card task. Participants who were actually in the 90th percentile of performance thought they would be in about the 75th percentile. In another study, of performance on a grammar test, people who performed at the 89th percentile judged that they would be in the 70th. Then they got to look at other participants’ test papers and evaluate them (according to their own understanding). This raised their self-estimates, but only to the 80th percentile.

It is true that poor performers do not recognize how badly they are doing in absolute terms. But the discrepancy is not nearly as great as the discrepancy with regard to comparative performance. In the logic study, after doing the problem set and giving their estimates of their own performance, people were taught the correct way to do the problems. This caused the poor performers to revise their estimates of their own raw scores to essentially correct estimates. But they still thought their percentile rankings compared to others were more than double what they really were. (They did revise these estimates down substantially, but not enough.)

I think Dunning and Kruger have latched onto a logical argument for the unrecognizability of own-incompetence in certain domains and that they are letting that insight drive their research rather than measurements. No doubt if the knowledge of a domain necessary to perform well is also essential to evaluating performance in that domain — one’s own or anyone else’s — then poor performers will be poor judges. This almost has to be right. But the effect seems small insofar as it is attributable to the logical point Dunning and Kruger focus on. The bulk of their findings seems to be attributable, not to metacognitive blindness, but to social blindness to relative performance on tasks where fast, unambiguous feedback is in short supply. In domains where fast, abundant, clear feedback is lacking (driving ability, leadership potential, job prospects, English grammar, logic), nobody really knows very well how they compare with others. So they rate themselves average, or rather — since people don’t want to think they’re merely average — a little above average. And this goes for the competent (who accordingly rate themselves lower than they should) as well as the incompetent.

My low opinion of the Dunning-Kruger effect seems to be shared by others. I have on my shelf six psychology books published after Kruger and Dunning’s paper became common coin, which thoroughly review the heuristics and biases literature, four of which I’ve read cover to cover, and only two of them make any mention of this paper at all. One cites it together with two other, unrelated papers merely as finding support for the better-than-average effect, and the other cites it as showing that even the very worst performers nevertheless tend to rate themselves as above average. In other words, none of these books makes any mention at all of the Dunning-Kruger effect.

But if the Dunning-Kruger effect isn’t of much value as psychology, it’s great for insulting people! Which is no doubt why it is well known on the Internet.

I didn’t know any of that, and thought it would better serve PoT’s readers to have it on the site than moldering in my inbox.
PS. I’ve been having trouble with the paragraph spacing function in this post, as I sometimes do, so apologies for that. I don’t know how to fix it; whenever I seem to have fixed it, the problem spontaneously recurs. (I guess I’m an incompetent editor after all.)
Postscript, December 20, 2015: More on the Dunning-Kruger effect (ht: Slate Star Codex).

From Assurance Contracts to “Compulsory” Voting

Jason Brennan has a series of posts up at BHL on compulsory voting. One of his arguments against compulsory voting is what he calls the Assurance Argument:

The Assurance Argument

  1. Low turnout occurs because citizens lack assurance other similar citizens will vote.

  2. Compulsory voting solves this assurance problem.

  3. If 1 and 2, then compulsory voting is justified.

  4. Therefore, compulsory voting is justified.

I’ve sketched a version of the Assurance Argument here at PoT that’s immune to Brennan’s criticisms. It doesn’t exactly correspond to Brennan’s version of the Assurance Argument above, but I think it’s close enough in form to be worth discussing in the same breath.

I have yet to set it out formally, but my version of the Assurance Argument turns on the idea of an assurance contract to vote. The basic idea is this: Take a context in which low voter turnout is a bad thing you justifiably want to remedy. Find a population apt to vote in a single direction as a unified voting bloc. Make sure that what they’re voting for not only promotes their interests, but in doing so, promotes the common good. Then come up with a mechanism for generating and enforcing an assurance contract that gets that population to vote the relevant way. If you work with the right population, pursue the right aims, and fashion the right contract, my view is that you can generate a binding obligation to vote in the population, and in doing so, solve the assurance problem that Brennan treats as essentially insuperable.

Given the preceding context, premise (1) of Brennan’s version is fine as is, but the rest has to be modified as follows: In premise (2), substitute “an assurance contract” for “compulsory voting.” In (3) and (4), substitute “enforced contract remedies” for “compulsory voting” (and change the grammar). With that in place, you have a version of the Assurance Argument that comes as close as possible to an argument for “compulsory voting” without quite crossing the line into literal compulsion.

The general idea is that in any political context in which you can induce people to form an assurance contract to vote, you can “compel” them to vote, or else exact a penalty for failure to vote. That sounds implausible if you’re talking about American elections, but there are other contexts in which it’s feasible.

During the intifadas, Palestinian politics involved mass action where compliance was universally expected, and non-compliance was severely penalized (sometimes by death). The point is that in cases like this, we’re talking about a political culture that involves a strongly solidaristic ethic, where structures are in place for mass collective action.

Imagine that West Bank Palestinians somehow acquired the right to vote in Israeli elections (or East Jerusalemite Palestinians just decided to exercise their pre-existing right to vote), and that the mass action in question turned from coercive uprising-related activity to electoral politics. My claim is: If you can induce near-compliance with the dictates of an uprising (as you can), you can induce explicit consensual compliance with an assurance contract involving a promise to vote in an election. If you can do that, you can compel compliance with the contract.

More specifically: Imagine an electronic caucus–like a MOOC–in which everyone in a given population is expected, due to social pressure, to log on and decide on a course of electoral action. Everyone who logs on then becomes part of a (potential) assurance contract. The numbers are tallied, and if they’re sufficient to tip the election, the contract is considered valid, and people are expected to vote accordingly. If not, the caucus dissolves. (In other words, what I’m calling a caucus really has the function of a caucus plus a census plus an assurance contract.)

Suppose that the numbers are there to tip the election. Then everyone is expected to vote as specified in the contract. Suppose that the contract calls for x votes for a certain candidate/slate/policy. If x votes show up in the election results, fine. But if fewer do, it follows that there were free riders who reneged on the contract. In that case, it becomes a matter of finding out who they are, so as to exact a penalty for non-compliance. Now suppose that the balloting is open, not secret. If so, then if (say) Khawaja failed to vote for the agreed-to candidate, and there’s no secret ballot, someone will squeal on him when the Free Rider Commission makes its inquiry. Under such conditions, I suspect that there will be very few free riders.
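The caucus mechanism just described–log on, tally the numbers, validate the contract only if they suffice to tip the election, then check the open ballots against the contract roster–can be sketched in a few lines of code. Everything below (the names, the threshold, the two functions) is a hypothetical illustration of the logic, not a description of any actual system:

```python
# A minimal sketch of the caucus-plus-census-plus-assurance-contract
# mechanism. All names, numbers, and thresholds are hypothetical.

def run_caucus(participants, votes_needed_to_tip):
    """Phase 1: everyone who logs on is a party to a potential contract.
    The contract is valid only if the caucus is large enough to tip the
    election; otherwise the caucus dissolves and no one is bound."""
    if len(participants) < votes_needed_to_tip:
        return None  # caucus dissolves; no contract, no obligations
    return set(participants)  # the roster of contractually bound voters

def find_free_riders(contract, ballots_cast):
    """Phase 2: with open (non-secret) balloting, compare the ballots
    actually cast against the contract roster to identify defectors."""
    if contract is None:
        return set()  # no valid contract, so no one reneged
    return contract - set(ballots_cast)

# Hypothetical example: five caucus-goers, four votes needed to tip.
contract = run_caucus(["A", "B", "C", "D", "E"], votes_needed_to_tip=4)
defectors = find_free_riders(contract, ballots_cast=["A", "B", "C", "E"])
print(sorted(defectors))  # prints ['D']: D reneged and faces the penalty
```

The design choice worth noting is that the open ballot does all the enforcement work: once the roster and the cast ballots are both public, identifying free riders is a simple set difference, which is why the mechanism depends on abandoning the secret ballot.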

If you can pull all that off, you can “compel” votes that tip the scales of the election. The obstacles to pulling it off are psychological rather than conceptual. If the right psychological dispositions were in place–if Palestinians regarded elections the way they regard uprisings, and the Israelis allowed them to organize politically, and allowed them to vote, etc.–you could generate an electoral assurance contract mechanism involving (a) numbers large enough to affect an election but (b) small enough to organize and hold compliant to the terms of the contract. This only seems implausible to Americans because we live in a huge, highly impersonal, individualistic, diverse, and cosmopolitan society where such a contract seems like a mere thought experiment. If you live in a smaller scale society with a different political ethos, however, it’s within the realm of nomological possibility.

The point I’m making isn’t so much about Israelis and Palestinians as about assurance contracts and elections. Even if the preceding doesn’t literally apply to the Palestinian case, my point is, if you can find a case that satisfies the description I’ve just given, you can run some version of an assurance argument on it. It’s an empirical question whether you can generate or discover such a case. I’m not a political scientist, and don’t know the literature very well, but as an armchair consideration, I don’t find my empirical assumptions implausible, and they merely have to be possible to get the argument off the ground. Maybe Brennan discusses the relevant empirical issues somewhere (he’s written a great deal that I haven’t read), but he doesn’t do so in The Ethics of Voting or in “The Right to a Competent Electorate,” which I have read.

There are lots of details to work out here, but once you grasp the principle involved, the sketchiness of the proposal is not an objection to the basic idea. At any rate, my argument is immune to what Brennan calls the Burden of Proof and the Worse Government arguments.

Here’s the Burden of Proof Argument:

The Burden of Proof Argument

  1. Because compulsory voting is compulsory, it is presumed unjust in the absence of a compelling justification.

  2. A large number of purported arguments for compulsory voting fail.

  3. There are no remaining plausible arguments that we know of.

  4. If 1-3, then, probably, compulsory voting is unjust.

  5. Therefore, probably, compulsory voting is unjust.

As a response to my argument, the BP argument fails at premise (1): that premise doesn’t apply here because, unlike compulsory voting in the literal sense, my assurance contract idea involves no initiatory compulsion, and no special burden of proof is required to hold someone to a contract to which they’re explicitly a party.

Here’s the Worse Government Argument:

 The Worse Government Argument

  1. The typical and median citizen who abstains (under voluntary voting) is more ignorant, misinformed, and irrational about politics than the typical and median citizen who votes.

  2. If so, then if we force everyone to vote, the electorate as a whole will then become more ignorant, misinformed, and irrational about politics. Both the median and modal voter will be more ignorant, misinformed, and irrational about politics.

  3. If so, in light of the influence voters have on policy, then compulsory voting will lead [to] at least slightly more incompetent and lower quality government.

  4. It is (at least presumptively) unjust to impose more incompetent and lower quality government.

  5. Therefore, compulsory voting is (at least presumptively) unjust.

This argument fails at premise (1) as well. As far as I can tell, premise (1) implicitly makes a claim about the median American voter. But I’m not talking about American voters; I’m talking about non-American ones. Unless the claims of (1) generalize to the voters I have in mind, the WG argument involves an ignoratio elenchi against my proposal.

If anyone can cite studies that show that, say, Israeli Arab voters are misinformed, ignorant, or irrational when they vote for the United Arab List, I’d like to see it. If anyone can cite studies that show that East Jerusalemite Palestinians would be misinformed, ignorant, or irrational to vote for (candidates that favor) more housing permits, I’d like to see that, too. But I’m skeptical.

*I changed the title of the post after posting.