From the Nicomachean Ethics to the Grant Study

[Here as promised is a first draft of the paper I’ll be giving this Saturday at the annual conference of the Association for Core Texts and Courses in Plymouth Harbor, Massachusetts. Papers for the conference are supposed to be short, non-technical treatments of a core text or two appropriate for undergraduate teaching, along with a rationale for teaching them. This year’s theme is the relation between the arts and sciences in undergraduate education. Comments are welcome, though I probably won’t see them until next week. I’ll add hyperlinks next week as well. This discussion was quite helpful to me in thinking things through.]


David Potts on the Dunning-Kruger Effect

It’s a little-known fact that some of PoT’s most avid and engaged readers lurk behind the scenes, too bashful to log onto the site and call attention to themselves by writing for public consumption. What they do instead is read what the rest of us extroverts write and send expert commentary to my email inbox. I’ve implored some of these people to say their piece on the site itself, but they couldn’t possibly: they’re too private for the unsavory paparazzi lifestyle associated with blogging.

About a month ago, I posted an entry here inspired–if you want to call it that–by a BHL post on graduate school. Part of the post consisted of a rant of mine partly concerning this comment by Jason Brennan, directed at a commenter named Val.

Val, I bet you just think you’re smart because of the Dunning-Kruger effect.

Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.

My rant focused on Brennan’s conflation of psychiatry and clinical psychology in the second sentence (along with the belligerent stupidity of the claim made about clinical psychology), but a few weeks ago, a friend of mine–David Potts–sent me an interesting email about the Dunning-Kruger effect mentioned in the first sentence. David happens to have doctorates in philosophy and in cognitive psychology, both from the University of Illinois at Chicago; he currently teaches philosophy at the City College of San Francisco. In any case, when David talks, I tend to listen.

After justifiably taking issue with my handwaving (and totally uninformed) quasi-criticisms of Jonathan Haidt in the just-mentioned post, David had this to say about the Dunning-Kruger effect (excerpted below, and reproduced with David’s permission). I’ll try to get my hands on the papers to which David refers, and link to them when I get the chance. I’ve edited the comment very slightly for clarity. I think I’m sufficiently competent to do that, but who knows?

First, about the Dunning-Kruger effect. I had never heard of it, which got my attention because I don’t like there to be things of this kind I’ve never heard of. So I got their paper and a follow-up paper and read them. But I was not much impressed by what I read. How is Dunning-Kruger different from the well-established better-than-average effect? For one thing, [Dunning-Kruger] show — interestingly — that the better-than-average effect is not a constant increment of real performance. That is, it’s not the case that, at all levels of competence, people think they’re, say, 20% better than they really are. Rather, everybody thinks they’re literally above average, no matter how incompetent they are. This is different from, say, knowledge miscalibration. Knowledge miscalibration really is a matter of overestimating one’s chances of being right in one’s beliefs by 20% or so. (That is, people who estimate their chances of being right about some belief at 80% actually turn out to be right on average 60% of the time; estimates of 90% correspond to actually being right 70% of the time, etc.) But in the cases that Kruger and Dunning investigate, nearly everybody thinks they’re in the vicinity of the 66th percentile of performance, no matter what their real performance. So that’s interesting.

But that is not the way Dunning and Kruger themselves interpret the importance of their findings. What they take themselves to have shown is that incompetent people have a greater discrepancy between their self-estimates and their actual performance because, being incompetent, they are simply unable to judge good performance. If your grasp of English grammar is poor, you will lack the ability to tell whether your performance on a grammar test is good or bad. You won’t know how good you are — or how good anyone else is for that matter — because of your lack of competence in the domain. Lacking any real knowledge of how good you are, you just assume you’re pretty good. On this basis, they predict that incompetent people will very greatly overestimate their own competence in any domain where the skill required to perform is the same as the skill required to evaluate the performance. (Thus, they do not suppose that, for example, incompetent violin players will fail to recognize their incompetence.)

The trouble I have with this is that it is not well supported by the data. What their data really show, it seems to me, is that in the domains they investigate, nobody is very well able to recognize their own competence level. The plot of people’s estimates of their own abilities (both comparative and absolute) against measured ability does slope gently upwards, but very gently, usually a 15%–25% increase despite an 80% increase in real (comparative) ability level. The highly competent do seem to be reasonably well able to predict their own raw test scores, but they do not seem to realize their own relative level of competence particularly well. They consistently rate their own relative performances below actuality. For example, in one experiment people did a series of logic problems based on the Wason 4-card task. Participants who were actually in the 90th percentile of performance thought they would be in about the 75th percentile. In another study, of performance on a grammar test, people who performed at the 89th percentile judged that they would be in the 70th. Then they got to look at other participants’ test papers and evaluate them (according to their own understanding). This raised their self-estimates, but only to the 80th percentile.

It is true that poor performers do not recognize how badly they are doing in absolute terms. But the discrepancy is not nearly as great as the discrepancy with regard to comparative performance. In the logic study, after doing the problem set and giving their estimates of their own performance, people were taught the correct way to do the problems. This caused the poor performers to revise their estimates of their own raw scores to essentially correct values. But they still thought their percentile rankings compared to others were more than double what they really were. (They did revise these estimates down substantially, but not enough.)

I think Dunning and Kruger have latched onto a logical argument for the unrecognizability of own-incompetence in certain domains, and that they are letting that insight, rather than the measurements, drive their research. No doubt if the knowledge of a domain necessary to perform well is also essential to evaluating performance in that domain — one’s own or anyone else’s — then poor performers will be poor judges. This almost has to be right. But the effect seems small insofar as it is attributable to the logical point Dunning and Kruger focus on. The bulk of their findings seems to be attributable, not to metacognitive blindness, but to social blindness to relative performance on tasks where fast, unambiguous feedback is in short supply. In domains where fast, abundant, clear feedback is lacking (driving ability, leadership potential, job prospects, English grammar, logic), nobody really knows very well how they compare with others. So they rate themselves average, or rather — since people don’t want to think they’re merely average — a little above average. And this goes for the competent (who accordingly rate themselves lower than they should) as well as the incompetent.

My low opinion of the Dunning-Kruger effect seems to be shared by others. I have on my shelf six psychology books, published after Kruger and Dunning’s paper became common coin, that thoroughly review the heuristics and biases literature; I’ve read four of them cover to cover, and only two of the six make any mention of this paper at all. One cites it, together with two other, unrelated papers, merely as finding support for the better-than-average effect, and the other cites it as showing that even the very worst performers nevertheless tend to rate themselves as above average. In other words, none of these books makes any mention at all of the Dunning-Kruger effect as such.

But if the Dunning-Kruger effect isn’t of much value as psychology, it’s great for insulting people! Which is no doubt why it is well known on the Internet.

I didn’t know any of that, and thought it would better serve PoT’s readers to have it on the site than to leave it moldering in my inbox.
PS. I’ve been having trouble with the paragraph spacing function in this post, as I sometimes do, so apologies for that. I don’t know how to fix it; when I do, it seems fixed, and then the problem spontaneously recurs. (I guess I’m an incompetent editor after all.)
Postscript, December 20, 2015: More on the Dunning-Kruger effect (ht: Slate Star Codex).

Psychology, Psychiatry, and Moral Philosophy: An Open Thread

I’ve been working on and thinking about issues at the intersection of psychology, psychiatry, and moral philosophy lately, so this (partly but not entirely edifying) discussion-thread at BHL caught my eye. I thought I’d reproduce it here, comment on it, and then just leave the comments open indefinitely for thoughts on the matter.

The discussion arises in the context of a post by Jason Brennan on whether one should go to grad school. I don’t particularly like the self-congratulatory tone of the post, but don’t disagree with the advice he gives. Early in the post, he addresses a frequently asked question and offers up an answer:

I like reading and discussing economics or political philosophy. It’s my hobby. Should I go to grad school? You can do all these things without getting a Ph.D. You won’t be as good at it, but you can read and discuss economics while holding down a job as an insurance agent, a lawyer, or a consultant. You might be able to maintain your hobby while making a lot more money.

It’s not very adeptly or tactfully put, but on the whole, I agree with Brennan. His point is not that a non-PhD cannot in principle be as good as PhDs at philosophy. His point is that the generalization holds as a rule: generally speaking, and given current economic and institutional realities, you need a PhD to excel at philosophy. There are some notable exceptions to that rule, of course. Some of the most brilliant and successful academic philosophers got into the profession back in the day when a PhD was considered unnecessary (e.g., Alasdair MacIntyre, Colin McGinn, Saul Kripke), but no one holds not having a PhD against them. Coming the other way around, I know non-academics out there (without PhDs) who can hold their own–and then some–with many PhD philosophers. But I think such people are the exception, not the rule. Ultimately, one has to commit the fallacy of accident to deny the truth of what Brennan is saying. We can recognize that exceptional cases exist while acknowledging the truth of the rule he’s identified.

Perhaps Brennan should have qualified what he said to accommodate the exceptional cases, but I also think it’s clear he had a very different sort of case in mind–e.g., the middle manager who wants to do philosophy on the side. I think Brennan is correct to think that such a person will tend not to be as good at philosophy as the PhD philosopher from a top-20 school (Arizona, Princeton, Rutgers, Oxford, Pittsburgh, etc.) who is herself working at an R1 school and (therefore) doing philosophy all day. (And most would come out and admit it.) The more invested you are in your day job, the heavier its demands. But the heavier its demands, the fewer resources you have to devote to philosophy. Given the (very) heavy demands of doing good philosophy, having fewer resources means, all things equal, you won’t do it as well as someone with more resources at her disposal. As someone who spent nine years temping and adjuncting before finding a full-time academic position, I don’t find that claim controversial.

It’s not much different from the situation of the guy who spends eight hours a day working assiduously on his guitar chops versus the guy who noodles a bit on his prized Gibson SG after a long day at work. The first guy might make it in the music business, if he’s lucky and other things come together; the second guy may do a gig of AC/DC covers at the local bar (if they let him in), but can’t expect to headline MetLife Stadium (capacity: 88,000), or for that matter, headline the local equivalent of the Wellmont Theater (capacity: 1,200). (Again, I should know.)

The conversation took a different (and, actually, more interesting) direction after an intervention by someone named Val, a psychiatrist, who jumped in just below. Responding to the Brennan passage quoted above, he or she had this to say (sorry for the pronoun ambiguity, but “Val” could be either male or female):

Rubbish and simple minded navel-gazing. Except for the unique subspecialty of a Ph.D tenured research professor (“I’m the foremost expert on La Rochefoucauld’s writing of the year 1678!”), anyone who puts in the time and is clever can speak on intellectual issues with equal footing. You can certainly be “as good at it” in whatever interests you.

I’m a psychiatrist attached to a large research university and spend most of my day as a clinician. The philosophy professors who have careers focusing on ethics, political philosophy, or Scholasticism are barely on equal footing with the well-read clinicians who have been reading the epistemology of science for the last 25 years.

I think Val’s comment talks somewhat past Brennan’s. Yes, “anyone who puts in time” can speak with equal footing, but Brennan’s point is that if you have a day job, the better the job, the less time you’ll have to put in. The worse the job, the less sense it makes to do philosophy rather than get yourself a better job (and then do philosophy, in which case, it’s back to the first option). There are exceptions to this rule, too, but as a rule, it holds. Val’s situation is unique, and escapes Brennan’s point, but doesn’t generalize to the cases Brennan is discussing–the majority of cases.

Unfortunately, Brennan, given an opportunity to re-direct the conversation, only had this to say:

Val, I bet you just think you’re smart because of the Dunning-Kruger effect.

Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.

It’s a somewhat cryptic–and actually pretty stupid–response. The first sentence is just a particularly abusive instance of poisoning the well. The second sentence suggests that Brennan is under the impression that Val is a clinical psych(ologist). In other words, his implicit reasoning is:

You must be one of those dumb people who’ve opted to work in clinical psychology. Your GRE scores were probably too low to work in a difficult field, like philosophy, economics, or cognitive psychology. Your MCAT scores were probably too low to get you into a good medical school, or to get you in at all. So you opted for the easy way out–clinical psychology. And given that, you must think you’re particularly smart because you’re operating under the Dunning-Kruger effect. Being a victim of that effect, you’ve taken umbrage at my suggestions, but that’s because the effect has deluded you.

One problem here is that Val is a psychiatrist with an MD. So the GRE is irrelevant to his/her situation, and he/she obviously did well enough on the MCATs to get into med school, get an MD, go into practice, and get attached to a research university.

A second problem is that even if there were a documented correlation between low GRE/MCAT scores and the choice of clinical psychology as a profession, it wouldn’t follow that clinical psychology was “easy.” The more obvious inference would be that neither the GRE nor the MCAT was designed to test skill or aptitude in clinical psychology. A little Howard Gardner might have gone a long way here.

Personal experience might help, too. Brennan often likes to talk about his, so here’s a bit of mine. I spent part of grad school writing GRE questions for the Educational Testing Service (ETS), so I have a fairly good sense of what’s involved in designing them, including what they test and what they don’t test. There’s a lot that they don’t test, and a lot in them, methodologically and substantively, that is highly debatable, regardless of what ETS’s in-house psychometricians will tell you. Keith Stanovich’s work is relevant here.

It’s a great irony, by the way, that a large number of the item writers for the GRE (and personnel at ETS generally) are people who, by Brennan’s standards, are academic failures–i.e., grad students, often at Rutgers, Princeton, Temple, or Penn, who’ll never get a tenure-track R1 job, or grad students (Rutgers, Princeton, Temple, Penn) who never finished their programs. So lots of Brennanite “failures” end up being the gate-keepers for the Brennanite “winners.” Something similar is true of the PRAXIS exam: as a doctoral student without a teaching certificate, I wrote items for an exam governing entry into a profession I wasn’t permitted to enter–an exam for the very credential I lacked!

A bit of advice, then: Brennan tells people who might want to go to grad school, but shouldn’t, to get a job at GEICO. I would say, instead: get a job at ETS. I worked there as a part-timer for almost six years before I got a full-time academic position. It was a good place to work. Not my first preference, but still.

Incidentally, if I were Jerry Springer, at this point I would say that one important lesson we learn here is not to accuse someone of being a victim of the Dunning-Kruger effect, accuse him/her of bombing the GRE, and misread what he/she wrote, all in the same comment.

Anyway, back to Val’s comment. I sort of agreed, sort of disagreed. So here’s what I said:

I’m a PhD philosopher working on a master’s degree in counseling psych. I spend a fair bit of time discussing philosophy vs clinical psychology and/or psychiatry with people in those fields. I see where you’re coming from, but don’t agree with you (not that I agree with Brennan’s comment below*).

An enormous amount of the literature in both clinical psychology and psychiatry strikes me as methodologically weak and substantively trivial. (Much of it also makes huge, unwitting assumptions about difficult issues in the philosophy of mind.) The clinical work that (good) psychiatrists do gives them practical experience that philosophers don’t typically have (fair enough), but it’s very narrow and doesn’t equip them with the resources to think about bread-and-butter philosophical issues. In any case, for many psychiatrists, “clinical work” nowadays means “medication management,” not therapy. I don’t see how expertise at managing a dosing schedule gives a person insight into the foundations of ethics. I’m willing to hear the argument, but off hand, I don’t see it.

That’s not to say that there aren’t brilliant philosopher-psychiatrists out there (e.g., Jonathan Lear, Richard Chessick…Sigmund Freud), i.e., people with excellent philosophical skills who have capitalized on their clinical work. I’d also be willing to say that they have insight and understanding that most philosophers in the field lack. But that’s a far cry from the claims you’re making.

One look at Brennan’s derisive comment below* should tell you that if you were looking for intelligent engagement with your arguments, you’ve come to the wrong place. If you’re interested in discussing the issues, feel free to come by my blog or contact me privately (contact info at the blog). I sometimes blog on issues at the intersection of philosophy and psychology in the broad sense (that includes psychiatry), and wouldn’t mind batting this one around. We’re mostly philosophers, but there are some psychologists and psychiatrists lurking in the “audience.” You might find it fruitful to have a conversation with us. And rest assured, we won’t ask you about your MCAT score or reduce your arguments to a diagnosis.

Val saw what I wrote and had this to say:

Irfan – I agree with a good deal of what you have said. An enormous amount of psychology and psychiatry research is indeed methodologically weak. As the saying goes, nearly all of psychology research is trivial if true, and, if attempting to show something non-trivial, impossible to convincingly demonstrate. My experience as well has been that most psychologists and psychiatrists are grossly ignorant of the surrounding philosophical issues. However, there are plenty of psychiatrists that I work with who are keenly aware of the epistemic problems of the assumptions inherent in modern psychiatry and are well read in the psychiatrist-philosophers (Jung, Jaspers, Freud…Popper is also popular. Human Action was recently under discussion in the geriatrics department). …

I agree with that, of course. I also think it goes the other way. Most philosophers are grossly ignorant of psychology and psychiatry, but it’s unclear to me (one year into a psychology program) how much of a debility that turns out to be. If so much psychology research is trivial, what leverage does one get out of relying on it to do moral or political philosophy? Some, I think, but it’s difficult to articulate what it is.

Same issue from a different direction: as a journal editor and conference organizer, I read dozens of manuscripts in ethics and political philosophy from authors who are trying (sometimes trying too hard) to showcase their familiarity with cutting-edge work and cutting-edge ways of doing philosophy. A large proportion of this work showcases the latest work in psychology. Decades ago, Robert Nozick told us that we must either work within Rawls’s system or explain why not. Now the same is implicitly being said of Jonathan Haidt. It is, one might say, a haidtful state of affairs.

Much of this psycho-philosophical experiment-mongering strikes me, frankly, as trivial, and if you dig hard enough, you find in many cases that philosophers tend, subtly (or not so subtly), to overstate, distort, and cherry-pick research findings from psychology to make them less trivial than they are.

The truth is, by comparison with the intuition-mongering philosophy literature, the psychological literature tends to be very, very equivocal. Here’s a random example that I just happened to read yesterday: Daniel Wegner and Sophia Zanakos, “Chronic Thought Suppression,” Journal of Personality 62:4 (December 1994). The abstract says:

We conducted several tests of the idea that an inclination toward thought suppression is associated with obsessive thinking and emotional reactivity….[Our measure of thought suppression] was found to correlate with measures of obsessional thinking and depressive and anxious affect, to predict signs of clinical obsession among individuals prone toward obsessional thinking, to predict failure of electrodermal responses to habituate among people having emotional thoughts.

Then you read the article and the qualifications start coming: “Throughout this article, we have tried to caution that our interpretations of these results are not the only possible interpretations at this time” (p. 636).

It’s one of dozens of examples I could have used, from cognitive to clinical to political psychology. I’m not faulting the authors. My point is: psychology findings do not easily lend themselves for use as “inductive backing” for some controversial claim in ethics or political philosophy. They just aren’t written that way, or with that purpose in mind. But that’s the way philosophers often use them, at least in my experience. The psychology research of the philosophers is a lot like the God of the philosophers: not the original article. Philosophers seem wedded to the psychology of journal abstracts, not journal text–to unqualified thesis statements, not to the thesis-death-by-a-thousand-qualifications-followed-by-recommendations-for-more-grant-funding-and-research that one typically finds in the text. The jury is still out for me, but I often find myself wondering how useful all this psychology-mongering really is for philosophy.

Of course, then I read hand-waving, flat-footed philosophy that resolutely ignores the empirical literature, and I swing the other way. It also helps to read classic texts–Aristotle, Aquinas, Hobbes, Locke, Freud–and see how much they got wrong, empirically speaking. (Just think of what passes for biology or cultural anthropology in any one of these writers.) I just finished reading Calvin Hall’s A Primer of Freudian Psychology, published in 1954. One doesn’t think of 1954 as being that long ago–the Eisenhower Administration wasn’t ancient history–but the author has the nerve (so to speak) to assert that asthma, arthritis, and ulcers are psychosomatic effects of ego defense mechanisms (pp. 85-87). Primal repressions, we’re told, arise in Lamarckian fashion via the “racial history of mankind” (p. 85). I guess sometimes pseudo-science is just pseudo-science. So I’d be the last to trash appeals to hard fact as a constraint on normative theorizing.

Val again:

I’ve often thought that psychiatry rewards the philosophically minded more than any other specialty. General medicine, for instance, largely reduces to this model: is the blood sugar >6%? If yes, implement algorithm given to you by the Joint Commission. Pattern recognition and memorization required, but not a lot of analysis.

In psychiatry, if a patient complains of depression, you have to say, what does depression mean to this patient? Is depression even real? How can I judge this patient as having depression when there are no absolute standards? How will I know if his depression is responding to treatment? Why is the treatment even working? What caused the depression? Why do some develop depression in similar circumstances but not others? Good clinicians conceptualize patients in such a manner, and this is how they are discussed at conferences. Poor psychiatrists uncritically push pills.

MIT Press released a very good collection last year, Classifying Psychopathology, for sale on the shelves in the medical school book shop. I doubt very much a well-read psychiatrist wouldn’t be “as good” (to use Brennan’s silly words) at discussing the contents as a Ph.D philosopher who specialized in ethics.

I agree with most (or a lot) of that, but notice that the context of Val’s comment is psychopathology. Yes, within that context, psychiatrists have a lot of challenging, important philosophical work to do. But the context is itself very narrow. You can master all that there is to know about psychopathology, whether psychiatrically or philosophically (or both), and still be light-years away from dealing with issues that are central to ethics.

Anyway, there’s a lot to think about and respond to there. To keep this post within reasonable length, I’ll post any further thoughts I have in the combox. But I figure that some of PoT’s lurking readers may have things to say–there are some psychologists and at least one psychiatrist out there, along with a few non-psychiatrist MDs–so I’ll just leave this open for comment.

*Brennan’s comment was below mine when I first wrote. As of March 9, 2015, Brennan’s response to Val no longer bears his name, and is attributed instead to an anonymous “Guest.” The same is true of a few other comments of his in that discussion.

Addictions, Cravings, and Compulsions: Challenging the Frankfurtian Model (with two postscripts)

Readers of this blog know, or may remember, that yours truly was, briefly, a drug addict. It was actually a rather interesting experience to undergo, philosophically speaking, and one of the things I did while going through it was to read up on the philosophical and psychological literature on addiction, and to compare what I read there with my own six-month experience of addiction. I have a folder full of journal entries on the subject–at least a hundred pages or so–and some day I’d like to get some of that material out there into “the literature.”

A basic problem with the literature, as I see it, is that very few of the people writing in it either are, or have ever been, addicts, and their lack of first-hand experience distorts much of what they write on the subject.* Their definitions of “addiction” are far too narrow to cover the varieties of addiction (even to cover the varieties of specifically pharmacological addiction, setting aside the supposed behavioral varieties, e.g. sex addiction, shopping addiction, etc.). And by my lights, they’re far too timid about considering the possibility that addicts are responsible for having become addicts, and are capable of choice as addicts.

But one particularly problematic assumption, ubiquitous in both the philosophical and psychological literature, is the claim that addiction necessarily involves a craving for the addictive substance. The paradigm example of this assumption is the celebrated discussion of addiction in Harry Frankfurt’s famous paper, “Freedom of the Will and the Concept of a Person” (originally published in the Journal of Philosophy, 68:1 [Jan. 1971], reprinted in The Importance of What We Care About [1988]). It’s in many ways a very insightful paper, and like a lot of people, I’ve been heavily influenced by it. Reading Frankfurt while I was an addict, however, I couldn’t help thinking that he’d generated a conception of “addiction” designed specifically to clarify the thought-experiments in the essay, regardless of whether any of it bore any relation to the real-world phenomenon of addiction.

Whether it’s explicitly cited or not, the Frankfurtian conception of addiction plays an outsize role in the literature on addiction. And it’s not hard to see why. Suppose that you’ve never been an addict, but are interested in the topic. Suppose that you don’t know any addicts, either. How do you know what it’s like to be one? As it happens, you can’t really get a visualizable “picture” of addiction by reading social scientific or psychiatric studies of addiction in peer reviewed journals, by reading the “substance abuse” chapter of a textbook of abnormal psychology, by consulting the newest version of DSM, or by reading either the philosophical or psychological literature on “addiction science.” Nor will it help to attend lectures of this sort. The preceding sources will give you important facts about addiction, and teach you how to logic-chop some important distinctions. They’ll give you some important vocabulary, as well, and introduce you to the various “models” of addiction. But they won’t tell you what it’s like to be an addict, and like it or not (so to speak), the first-person perspective is crucial for understanding what it is to be one.

Enter Frankfurt: Frankfurt gives his readers a vivid “picture” of what it’s like to be an addict. Though it’s a third-personal account, it’s detailed enough to enable a non-addict to imagine what it would be like to be a (Frankfurtian) addict from the first-person perspective. And clearly, it would suck: a Frankfurtian addict is someone with an irresistible first-order craving for a pharmacologically-addictive substance. Either he resists this first-order craving at the second-order level, or not, and different implications follow in each case. Frankfurt never mentions by name what addictive substance he has in mind, but I get the impression that he’s discussing a stereotypical case of either heroin or cocaine addiction (or perhaps alcoholism).

As I say, it’s an interesting discussion, but I find the picture it paints of the addict very misleading. In particular, I don’t think there’s good reason to think that cravings are either necessary or sufficient for addiction.

To see this, consider a somewhat stylized, thought-experimental version of my own case of addiction. Imagine a very strict Kantian who goes to the doctor with some medical complaint. Our Kantian takes his doctor to be a reliable authority on medical matters, and regards following his doctor’s orders as a matter of duty to self. Further, our Kantian discharges his duties to self from the motive of duty. In other words, if the doctor tells him to do something, he does it because it’s his duty (to self), whether or not he wants to.

So our Kantian goes to the doctor, and the doctor gives him strict orders to take a certain medication, X. As it happens, X is an addictive, psychotropic medication. Suppose that our patient has a temperamental hostility to the idea of taking any drug for any reason. So he really doesn’t want to take X. But he feels duty-bound to do so, under the doctor’s orders. So he grudgingly fills the prescription and grudgingly takes X. Within a few weeks, he becomes addicted to it, but doesn’t know that he is. He might in principle continue like this for years, never grasping that every dose he takes pushes him further and further into addiction.

So here is the situation:

  • Our Kantian is ex hypothesi addicted to X;
  • He keeps taking X, thereby reinforcing his addiction to X;
  • He would suffer intense withdrawal if he stopped taking X;
  • Despite not wanting to take X, he continues to take X, but only from the motive of duty.

I take it to be obvious that you cannot have a craving for a substance that you do not want to take, and you cannot have a craving for a substance that you only take from the motive of duty. And yet you can clearly be addicted to such a substance, at least in the pharmacological sense of being physically dependent on it. If that’s right, craving for X is not a necessary condition of addiction to X. You can be addicted to X and not know it, hence not crave it. You can be addicted to X and not want to take it, but take it from the motive of duty–hence not crave it.

Reflecting a bit on my own experience, I’m willing to admit that there’s a slight complication here. (The phenomenology of addiction defies neat philosophical claims.) Even in the case of the Kantian addict, I think it’s possible that though our Kantian doesn’t want to take X, and takes it from the motive of duty, the pharmacological/physiological effects of addiction can alter his personality so that he’s in some sense psychologically compelled to take X without craving it.

This is an odd thought (and phenomenon), and I would have dismissed the possibility out of hand had I not experienced it myself. Think of it like this. Suppose that our Kantian takes X from the motive of duty and only for that reason. He doesn’t like taking X, wishes he didn’t have to, doesn’t want to. But dutiful Kantian that he is, he takes it. Suppose he takes it every night at precisely 10 pm. As 10 pm approaches, he might find himself in the grips of some very odd internal states. He might, for instance, develop an anxious compulsion to take X, or an uneasy, anxious feeling about the idea of not taking X. He would thus find himself in the odd state of taking X from the motive of duty, not wanting to take it, but anxiously feeling compelled to take it, and averse to the idea of not taking it–all at the same time. I actually felt like that fairly often.

Related is the possibility that if our addict fails to take X promptly at 10 (and is sufficiently addicted to it), he either senses or subconsciously anticipates the onset of withdrawal symptoms, and develops a vague (but powerful) psychological compulsion to hurry up and take it. (“Hurry up, please, it’s time….”) Remember that, ex hypothesi, our Kantian neither knows that he’s addicted nor knows that withdrawal is an issue. My point is that the physiology of withdrawal can make its presence felt in his appetitive states despite his ignorance.

Some might be tempted to call this physiologically-induced appetitive presence a “craving,” but it doesn’t feel, phenomenologically, like anything I would call a craving. In retrospect, I think of it as a classic case of chronic, pharmacologically-induced anxiety.  I’m inclined to think that in a Kantian, this anxiety would manifest itself as a specifically deontic compulsion: the compulsion to take the drug would not be experienced, phenomenologically, as a “craving” for it, but as a very urgent, anxious imperative to the effect that X must be taken. (“Hurry up, please, it’s time….”) But an imperative or an anxiety is not a craving in the ordinary understanding of that term, even if it produces a compulsion to do something. (I’m not a Kantian, but the picture of the Kantian agent I’ve painted here approximates my own experience of addiction. One feature of addiction is that it alters your personality so that you find yourself doing things that would otherwise be “out of character,” and yet weren’t produced ex nihilo, either.)

I suppose you could reintroduce the idea of craving here by claiming that our Kantian has a craving for the substance under the guise of a “craving” for doing his duty from the motive of duty, but even if that is a coherent thought (I’m not sure it is), it’s so distant from anything that either Frankfurt or the rest of the literature describes as a “craving” that we’d have to revise our understanding of “craving” to be able to use it this way.

So while I want to insist that cravings are not a necessary condition for addiction, I’m willing to accommodate some version of the phenomenon that the Frankfurtian picture ascribes to addiction: addictions involve compulsive or anxious behavior, but compulsions are not accurately described as “cravings.” (It’s essential to my account that in large part, the compulsion or anxiety has a pharmacological etiology. Of course the pharmacological etiology could itself have a psychological one.)

I think it’s obvious that cravings are not sufficient for addictions. We crave many things, but it’s an abuse of language to say that we’re addicted to them. I crave knowledge, but I can’t be said to be addicted to it in the way that I was addicted to Ambien. I once had a three-year-long craving to listen to a single album (AC/DC’s Black Ice): I listened to it several times a week for three solid years. But that wasn’t an addiction in the relevant sense, either. I’m very skeptical of the extension of the concept of “addiction” to behavioral contexts without a pharmacological component, e.g., sex addiction, porn addiction, shopping addiction, etc. In my view, “addiction” is a specifically pharmacological concept involving the ingestion of a physical substance and a neurobiological mechanism that produces physical dependence on the substance.

A final observation: I get the sense that the addiction literature has not fully taken on board the possibility that prescription drugs are, like “illicit” drugs, highly addictive, psychotropic substances.** The literature, then, seems fixated on addictions to alcohol, heroin, cocaine, cigarettes, and the like, and has much less to say about FDA-approved drugs–neuroleptics, anti-depressants, stimulants (including caffeine), benzodiazepines, SSRIs, and so on. That seems to me a massive omission. If anything, it’s the latter category that needs more sustained philosophical attention than the former. I hope to give it some more attention in future posts here.

*A notable exception to this rule is Owen Flanagan of Duke University. See Flanagan’s “What Is It Like to Be an Addict?” in Jeffrey Poland and George Graham (eds.), Addiction and Responsibility.

**Flanagan is, once again, an exception to the general rule. See the preceding note.

Postscript, March 2, 2015: A simpler and more obvious counter-example to the “craving conception” of addiction just hit me. Suppose that someone is addicted to a psychotropic medication X, and simply forgets to take it at the appointed time. Surely forgetting to take X is incompatible with craving X. QED.

Anyone who doubts the supposition (that psychotropic medications are addictive) can check the Physicians’ Desk Reference or Peter Breggin’s Psychiatric Drug Withdrawal for clinical information, or Robert Whitaker’s Anatomy of an Epidemic for narrative/anecdotal accounts.

Obviously, an even simpler counter-example to the craving conception of addiction is the (to me, obvious) phenomenological fact that people can be addicted to psychotropic drugs, experience no craving for the drug whatsoever, and willfully “go off their meds” when they decide for whatever reason to do so. The example in the post is, after all, just an elaborate way of saying that.

According to Jon Elster, “All addictive behaviors seem to go together with some form of craving. The idea of craving–the most important explanatory concept in the study of addiction–is complex” (Jon Elster, Strong Feelings: Emotion, Addiction, and Human Behavior, p. 62). I agree that the concept of craving is complex, but the rest of Elster’s claim–an axiom of the literature on addiction–seems hopelessly wrong to me. It either ignores the possibility (and reality) of iatrogenically-induced addiction to psychotropic medication, or else consigns it to a different, and ultimately marginal, conception of addiction that plays almost no role in the sexiest, most prestigious books and journals. The literature doesn’t yet seem to have taken seriously the possibility that doctors can impose addictions on unwilling and unwitting patients. The very definition of “addiction” manages to get doctors off the hook, so to speak, and to blame the victims.

For another couple of examples of the craving assumption, check out Merle Spriggs’s “Autonomy and Addiction” (PDF), especially pp. 6-7, along with the reference to Morse (n. 42).

Postscript, September 28, 2015: I’ve been in the market for a therapist lately. To find the right one, I made an initial list of seven who seemed suitable, drawn mostly from the overlap between the Psychology Today “Find a Therapist” listing and the one for my insurance carrier. One turned out not to be available, one never responded (not the first time), and the conduct and demeanor of a third struck me as off-putting and unprofessional.

So I made appointments with the remaining four, three of whom turned out to be excellent, but one of whom, a PsyD (for whatever that’s worth), struck me, frankly, as a hack. Within short order, Dr. Hack had driven the intake session down (what seemed to me) an irrelevant byroad, and had decided to conduct an aggressive interrogation designed to uncover my flaws as a person. The “flaws” tumbled out, one after another, all based on inferences that no human being could legitimately have made about a stranger within twenty or thirty minutes of meeting him.

It didn’t take Dr. Hack long to conclude that I was clinically depressed and needed to go on an anti-depressant. My affect, Dr. Hack informed me, was “flat,” and that flatness was an infallible indication of depression. It hadn’t occurred to Dr. Hack that perhaps the “flatness” of my affect was a response to the flatness of his personality. When I protested that I didn’t think I was depressed (at all)–didn’t feel depressed, didn’t meet the clinical criteria of depression–I was abruptly told that that was precisely how depression manifested itself in men (as opposed to women): men denied their depression in bouts of irritation and rage; women “stayed in bed all day.” The latter had become the societal stereotype of depression, Dr. Hack informed me, but since atypical depression is still depression, I’d have to accept a diagnosis of depression, whether I liked it or not. And that meant going on an anti-depressant as a condition of working with Dr. Hack, too. Dr. Hack magnanimously allowed that he wasn’t qualified to tell me precisely which anti-depressant at which dose; that was a job for a psychiatrist. But the bottom line was: no anti-depressant, no therapy.

That made things easy, since I had no intention either of going on an anti-depressant or of working with Dr. Hack. Bottom line: I unloaded my co-pay and got the hell out of there.

I tell the story because I think it says something about the therapy profession today, as well as about its relationship to psychotropic medications.

For one thing, I think therapists suffer from a real problem of professionalism. Even when they get the PsyD, a supposedly practical doctorate, some of them don’t seem to learn the basics of professional etiquette. Going back to one of the therapists I called before I met Dr. Hack: it’s not kosher to ignore a legitimate query regarding professional services you’ve advertised. You may not want a certain client, even based on the message they leave on your voice mail, but it’s not legitimate to ignore them as though they’d never called you at all.

Therapists like to think of themselves as “health care practitioners,” but don’t seem to have grasped that behavior like that is flatly unacceptable in a health care profession. Incidentally, for a profession so eager to regulate the rest of the world, it’s amazing how proprietary they can be about their supposed right to refuse service (or refuse to contact potential clients) on the basis of whims and hunches about X’s “sounding like” the proverbial “problem client.” In conversation outside of clinical contexts, I’ve heard therapists tell me, sotto voce, “Oh, I stay the hell away from clients like those.” Fine: you have the right to stay away from a certain kind of client. You don’t have the moral right to delete a legitimate query from an unwanted client without further ado.

A second aspect of the same problem: the rush to clinical judgment. As a rule, no therapist can (legitimately) give a DSM-5 diagnosis within thirty minutes of the first intake session. Maybe there are clinical geniuses out there–and/or sufficiently simple cases–that are exceptions to that rule, but otherwise, it seems to me a pretty clear rule.

A corollary of the rule is that you shouldn’t be reaching for the prescription pad before the first session is even halfway done. Yes, there are some obvious exceptions to that rule, but the exceptions don’t find their way that often to the average therapy office.

Further implication: prescription is a medical judgment. That means that if you’re going to prescribe a psychotropic medication, you’d better have done a history and physical on your client/patient in the medical sense. If you don’t know how to do a history/physical–and most therapists don’t–then you have no business talking about prescriptions. By “talking about prescriptions,” I mean: saying anything that asserts or implies that the client needs a prescription for some psychotropic medication. At best, a non-MD has the professional right to refer the client out to an MD, but that’s it. Otherwise, my view is that they should keep their mouths shut on the subject.

One more implication: Given the way graduate programs in psychology are currently structured, no PsyD (qua PsyD) ever has any business talking about prescriptions. Maybe someday, PsyD’s and Ph.D’s will be educated so as to know what they’re doing when it comes to psychopharmacology–my friend Ray Raad has made some interesting arguments for that–but that day hasn’t arrived yet, and won’t arrive anytime soon. Until then, I’d prescribe silence.

The mental health professions have expanded the concepts of “mental illness” and “addiction” far beyond what those terms mean in ordinary discourse. Maybe we ought to consider medicalizing the overprescription of psychotropic medications by mental health care practitioners. I’d be interested to see the profession’s reaction to the proposal that overprescription is itself a mental illness or an addiction. At that point, it seems to me, the old adage “physician, heal thyself” would come to have new and revolutionary meaning. A thought for DSM-6.

From Martin Anderson to Charlie Hebdo and back

I woke up yesterday morning, looked at the obituaries, and resolved that before the day was done, I had to say something about the passing of Martin Anderson, described in The New York Times’s obit merely as “a conservative economist who helped shape American economic policy in the 1980s as a top adviser to President Ronald Reagan.” According to the Times, Anderson “died on Saturday at his home in Portola Valley, Calif. He was 78.”

By day’s end, however, Martin Anderson’s peaceful death at 78 had come to seem like an irrelevancy, hijacked and displaced by the Charlie Hebdo attack in Paris–a sick and sad replay of recent events in Peshawar and Ottawa, among other places. This video is eloquent testimony to the dignity of Paris–of France–in the wake of the attacks. (Nicholas Kristof properly points out–in an otherwise soporific column in this morning’s Times–that more people were killed in a suicide bombing in Yemen on the same day as the Charlie Hebdo attacks than were killed in Paris. It’s an interesting question why “we” are more focused on Paris than on Yemen, but “we” are.)

It always feels a bit corny to insert a flag into a blog post, but after the anti-French stupidities expressed over the last decade in the U.S., I think we kind of owe it to them.

[Image: the flag of France]

The truth is, I never particularly liked the Muhammad cartoons for which Charlie Hebdo became famous, and for which its staff has now been targeted. I found them tasteless, pointless, unsubtle, and unfunny. But in the wake of the attacks, the slogan du jour–Je Suis Charlie–happens to be true. We have the right to be discursively tasteless, pointless, unsubtle, and unfunny. No one has the right to kill us, or even to lay a finger on us, for it. And we each have to fight, or at least struggle, for that right. Those of us who don’t fight the enemy directly, with weapons, at least have a responsibility to declare our opposition to that enemy, and in so doing to stand in solidarity with its victims–thereby making ourselves a target for its attacks.

In other words, we have to do from afar what the people of Paris have been doing in the streets of their city. We have to stand up–and stand out. That’s what flags are for. Ironically, the word jihad captures exactly the right nuances here, denoting a form of struggle that combines elements of violent fighting and non-violent resistance. What we need against the jihad of the fanatics is a counter-jihad of our own, one open both to Muslims and to non-Muslims–to anyone who stands to become a victim.

The attackers in the Charlie Hebdo case appear to be “homegrown” French Muslims, members of France’s alienated underclass. In a strange way, then, the obituary for Charlie Hebdo bears an indirect connection to the one for Martin Anderson. The connection is supplied by the fact that both concern the causes of violence in an alienated underclass (where “causes” includes the agency of the attackers themselves).

Unfortunately, The Times’s obituary focuses on Anderson’s years in the Reagan Administration, making only cursory reference to his first book and masterpiece, The Federal Bulldozer (1964/1966).

An expert on welfare and relations between state and federal governments, Mr. Anderson published his first book, “The Federal Bulldozer: A Critical Analysis of Urban Renewal, 1949-1962,” in 1964. Years later he became a crucial architect of Reagan’s New Federalism — the handing over control of government programs to the states.

The passage conceals more than it reveals about the book. The first sentence seems to suggest that The Federal Bulldozer is fundamentally about welfare policy or state-federal relations. It isn’t. The second sentence suggests that The Federal Bulldozer provides the blueprint for Reagan’s New Federalism. It doesn’t. The author of the obituary seems to have sandwiched an allusion to the book, more or less at random, between facts about Anderson that she felt obliged to cram into the obituary, whether or not doing so made for accurate or coherent reading.

In my view, The Federal Bulldozer deserves canonical status up there with Michael Harrington’s The Other America, Rachel Carson’s Silent Spring, Richard Kluger’s Simple Justice, and Jane Jacobs’s The Death and Life of Great American Cities. All five are must-read texts, especially for Americans: original, path-breaking, and interdisciplinary discussions of social issues that permanently affected the way we think about those issues.

The Federal Bulldozer is an unsparing critique of “urban renewal.” Whether you agree with Anderson’s conclusions or not, you can’t ignore the facts he puts on the table: he lays bare in exacting detail what happens when a government decides to “renew” a city by brute force, displacing its inhabitants and violating their rights in the name of “progress.” You don’t have to be a fan of the Reagan presidency to appreciate its claims; in fact, it helps not to be one. You just have to think that there are limits to what the state can do to “improve” the lives of its citizens, especially when “improve” is such a contentious idea, and the intended improvements “improve” some people’s prospects at the expense of others’. There’s a book waiting to be written about why it is that intelligent libertarians like Anderson have so often felt the need to make common cause with conservative Republicans like Nixon and Reagan, on the assumption that the Nixons and Reagans of the world are the closest approximations of liberty and justice to be found in American life. But until that book is written, feel free to ignore the Republican politics of Anderson’s later years to read what he had to say about urban renewal in The Federal Bulldozer.

Here’s a passage from the Introduction to the Paperback Edition of the book:

The question that we should have asked in 1949, when the federal urban renewal program started, is long overdue now: Is it right to deliberately hurt people, to push around those who are least able to defend themselves, to spend billions of dollars of the taxpayers’ money, so that some people might be able to enjoy a prettier city?

That answer is your own, and for those whose morals permit them to answer yes, there is another question: Has any city been ‘renewed’?

Here the answer is no. The federal urban renewal program has been, and continues to be, a thundering failure–with one important exception: it has exhibited an amazing talent for continued growth. (pp. vii-viii)

Here’s an excerpt from the book’s penultimate paragraph:

The personal costs of the program are difficult to evaluate. Hundreds of thousands of people have been forcibly evicted from their homes in the past and it will not be long before the number passes the million mark. The indications are that these people have not been helped in any significant way. Their incomes remain the same, they are still discriminated against, and their social characteristics remain essentially unchanged. …On balance, the federal urban renewal program has accomplished little in the past and it appears doubtful if it will accomplish much in the future. This raises a serious question: On what grounds does the federal government justify continuing and expanding the present program?

It is recommended that the federal urban renewal program be repealed now. (p. 230)

Three years after Anderson wrote that, race riots broke out across the country. According to the Kerner Commission’s report, urban renewal played a major role in producing those riots (though, for the record, the Kerner Commission ultimately came out in favor of an “expanded and reoriented” form of urban renewal):

Urban renewal projects, which were intended to clear slums and replace them with low-cost housing, in fact resulted in a reduction of 2,000 housing units [in northern New Jersey]. On one area, designated for urban renewal six years before, no work had been done, and it remained as blighted in 1967 as it had been in 1961. Ramshackle houses deteriorated, no repairs were made, yet people continued to inhabit them. “Planners make plans and then simply tell people what they are going to do,” Negroes [sic] complained in their growing opposition to such projects. (p. 70)

I don’t mean to imply that the depredations of urban renewal justify or even excuse rioting (whether in Newark in 1967 or Ferguson in 2014), much less that similar conditions in France excuse or justify the attacks on Charlie Hebdo. (For a good discussion of rioting, applicable both to 1967 and to 2014, see Jonathan Bean’s classic article, “Burn, Baby, Burn,” in The Independent Review.) I just mean to draw attention to a correlation: where you have the conditions that create a permanent or semi-permanent underclass, you can expect spectacular violence, even if the violence has its proximate causes in a lunatic ideology and/or the idiosyncratic psychoses of individual criminals and psychopaths. (And even if some of the attackers are rich.) If you gather such people into an army, they become the Taliban, Al Qaeda, or ISIS. If you concentrate them in certain underclass neighborhoods, and treat them badly enough for long enough, they become rioters. If you disperse them, they become the sort of terrorists we’ve seen in Peshawar, Ottawa, and Sydney. You can find people like that anywhere, even where the going is good (just think of school shootings in the U.S.). But you can practically guarantee that they’ll act out if the going stays bad for long enough–as it has for decades in France.

I’m the last one to deny that Islam has a role to play in the explanation of Islamist terrorism. I think most educated people have by now been able to grasp that it does. But it’s worth remembering that even a paradigmatically Islamist coup like the 2007 Jamia Hafsa siege in Pakistan began, as so many such disputes do, with a clash over land: it began when the Capital Development Authority of Islamabad asserted the right to demolish mosques in the name of the Pakistani equivalent of urban renewal (thereby implying that the state owned the mosques and could demolish them at will). Islam is an idea, but we can’t understand the role an idea plays in the physical world unless we grasp how it relates back to the physical world.

That’s what our anti-Islamist ideologists have failed to do. They are instinctive Hegelians: as far as they’re concerned, the Idea of Islam enacts itself as world spirit and somehow induces Muslims to kill in the name of God. But which Muslims, and why them? The implicit answer seems to be an appeal to concomitant variation: more Islam means more violence; hence the more Muslim a person is, the more prone to violence he’ll be. But this explanation founders on an obvious fact: some Muslims are very devout, but disinclined to violence; others are very violent but disinclined to devotion. We can either accommodate this fact at face value (“the face value interpretation”), or re-interpret it so that the non-violently devout are less Muslim than the violently non-devout (“the revisionist interpretation”). There are plausible arguments to be made either way, but I think the face value interpretation is more plausible than the revisionist one.

The face value interpretation suggests the need to get our minds around the other part of the explanation for terrorism. Supposing that Islam has a role to play in the explanation of Islamist terrorism, why does it play that role for some Muslims but not for others? What is it that differentiates the religious terrorist from the religiously devout non-terrorist? In the first case, my hypothesis is that religion serves to intensify a sense of alienation; in the second, religion serves to confirm a sense of belonging. The first leads to violence; the second does not.

That leaves us with a different set of questions, however. Why do some people fasten on the alienation-inducing elements of religion, and others focus on the elements that confirm a sense of belonging?

I don’t have a full answer to the question. Part of the answer, I’d speculate, is that in the nature of things, religion lacks the conceptual resources to differentiate successfully between the two elements, and all of the Abrahamic religions are a relatively seamless blend of both alienation- and solidarity-inducing elements (including elements that alienate believers from reality itself). Every religion requires some commitment to fideism, and fideism undercuts the conceptual resources you need to make the relevant distinction. As a lapsed Muslim, I can identify the features of Islam that still appeal to me (and many do), and reject the features that don’t (and many don’t). I’m free to do that because I don’t regard any part of the faith as binding on me. But I couldn’t do that if the whole faith were binding on me. If it were, I’d have to find a way to accommodate every genuine element of the faith. And I don’t think that can be done in a way that allows for a clean distinction between alienation- and solidarity-inducing elements of Islam (or any Abrahamic religion).

The other part of the answer is that where you find a propensity to religiously-induced alienation, you invariably find state-driven socio-political dysfunction. State-driven socio-political dysfunction is dysfunction driven by coercion. It’s much easier to induce a sense of alienation in someone if you take from him–or people like him–his sense of control over his physical (or economic) environment. Unemployment will do it (I’m assuming that a significant aspect of unemployment is explained by state policy). So will some equivalent of urban renewal, or some form of over-regimentation or over-regulation (recall that the Arab Spring began as a response to over-regulation). Add a long history of state-sponsored coercion to the mix–whether in the form of Jim Crow or the colonization of Algeria and its aftermath–and you induce a stronger sense of alienation, especially, I think, if the state-sponsored coercion in some sense represents the democratic will of the dominant majority. Add widespread racism to the mix, and the toxicity increases.

So far I’ve focused on factors external to the persons in question. But those are far from exhaustive. Some people subjected to those conditions become criminals or terrorists, but some don’t. What distinguishes the two groups?

Here, it seems to me, one needs to appeal either to straightforwardly moral or to psychiatric predicates–the predicates that, in my view, do the most explanatory work, even if they do so in the context of background factors of the sort I’ve been describing above. There is the sheer psychopathology of a certain kind of male who refuses to accept responsibility for his life, who egregiously fails to negotiate life’s relatively ordinary trials and tribulations, who lashes out at others for his own perceived (and often accurate) sense of inadequacy and failure, and who feels an abiding sense of humiliation and shame over that failure. There’s also the role played by a confused discourse in which one side offers caricatures and cartoons of its adversary, and the other side responds to those caricatures with a tribalized sense of grievance and resentment.

That may well be the tip of the iceberg, but “that,” it seems to me, is the set of beliefs and circumstances that unites rioters and terrorists, and serves as a partial explanation of their otherwise unintelligible violence. People feel the need to lash out when they feel out of control, and they feel out of control when policies external to them rob them of control, when they themselves act in ways that subvert their own autonomy, or when both things happen nearly simultaneously. The result is the need for a fantasy life that seems to restore a sense of control, and religion is the perfect source of the most violent fantasies as well as of the prospect of apparent control. (So are secular ideologies, including Marxism, libertarianism, and Objectivism–a topic for a different post.)

In short, a terrorist is a control freak who’s out of control. The most dangerous thing you can do is to laugh at such a person, which is what Charlie Hebdo did. The result, as Charlie Hebdo themselves predicted, was mass murder.

Mr. Charbonnier, like the other Charlie Hebdo journalists, published under his pen name, Charb.  His last published cartoon appeared in Wednesday’s issue, a haunting image of an armed and cross-eyed militant with the words, “Still no attacks in France,” and the retort: “Wait! We have until the end of January to offer our wishes.”

No one deserves to die for predicting his own death. There is no way to justify initiating force of any kind in response to a speech act that doesn’t itself initiate force. So there can be no justifying or rationalizing what happened yesterday in Paris.

But we owe it to ourselves to come up with a better way of conceptualizing Islamist violence than the one Charlie Hebdo offered. We can’t negotiate our way out of the quagmire by means of cartoons, caricatures, and derision. As Spinoza puts it, “I have taken great care not to deride, bewail, or execrate human actions, but to understand them” (Political Treatise, I.4). That doesn’t mean we shouldn’t deride, bewail, or execrate terrorism. It just means that understanding comes first. It also means that if we lack understanding, we have to seek it. And the truth is that a decade and a half after 9/11, we do lack it. But we still have time. And God knows, there’s no shortage of data to work with.

Postscript (added later). Here’s a thoughtful and insightful interview on Charlie Hebdo with Jacob Levy, of McGill (the link goes to a 3:08 interview at BBC’s The World). I’ve commented there on the inconsistencies and hypocrisies involved in French attitudes on free speech. I’m no expert on French politics, but I’ve discussed the politics of Islam in France in this longish essay in Reason Papers (esp. pp. 174-75 and 179-81).

PS 2: An excellent piece by Hussein Ibish at Book Forum, “The False Piety of the Hebdo Hoodlums.”

PS 3, January 9, 2015: (apologies for the problems with paragraph spacing; this happens from time to time, but I don’t know how to fix it):

Events are taking place in France faster than I can keep up with them here. Meanwhile, David Brooks aptly reminds us that he’s not Charlie Hebdo, not that anyone would have thought that he was. One passage in his column particularly cries out for comment:

Public reaction to the attack in Paris has revealed that there are a lot of people who are quick to lionize those who offend the views of Islamist terrorists in France but who are a lot less tolerant toward those who offend their own views at home. …

Americans may laud Charlie Hebdo for being brave enough to publish cartoons ridiculing the Prophet Muhammad, but, if Ayaan Hirsi Ali is invited to campus, there are often calls to deny her a podium.

Six months a slave: my bout with Ambien addiction (Part 2 of “Psychiatric Medications: Promise or Peril?”)

I’d like to get back to summarizing the presentations at last week’s Felician symposium on psychiatric medications, but two things before I do:

First, I’m happy to report that all four presenters have agreed to write their presentations up for a symposium to appear in Reason Papers. The written version of the symposium will probably be published in the journal sometime in early 2016.

Second, an anecdote.

I’ve mentioned Robert Whitaker’s work here several times before. He’s the author of Mad in America and Anatomy of an Epidemic; he’s also a contributor to the website Mad in America. I happened to notice Marcia Angell’s review of Anatomy back in 2011 when it (the review) came out, but had no direct interest in the topic at the time, and more or less filed it away for future reference. I eventually managed to develop a direct and personal interest in the topic, and in the interests of disclosure—and the amusement of telling the story—I may as well explain how it came about.

The long and short of it is that in 2013, I became a psychotropic drug addict myself. The addiction came about through the good intentions but serious errors of my medical practitioners, and, as far as I’m concerned, it counts as a significant (though ultimately not medically serious) case of iatrogenic injury. The experience soured me for a while on the medical profession (including pharmacists), and especially on psychiatry and Big Pharma. I have a less bitter and less intense attitude now, but still have to confess to a residual resentment at all involved for what I went through. The benign residue of that resentment, however, is curiosity. I wonder what happened to me, and why. Hence the interest in the topic itself.

Anyway, here’s my story. After several straight months of insomnia and depression following a divorce, I asked my primary care physician for something to help me sleep. The something turned out to be Ambien. My doctor prescribed ninety 12.5 mg controlled-release pills, which—in compliance with the directions on the bottle—I took “daily as needed” until I ran out (and then got some more).

Around day 25, the medication started to lose its original effect of knocking me out within about ten minutes of taking it.

Around day 40, I had begun regularly to lose track of how many pills I was taking on a given night, and started to double and even triple up on the 12.5 mg/day dosage. Having done that a few times, and having realized how insane it was, I then abruptly decided to stop taking the pills altogether, thereby inducing a relatively severe and totally unexpected withdrawal reaction (which I misinterpreted as the effects of extreme sleep deprivation). In the process, I almost crashed my car a few times, suffered two physical collapses on campus, and scared the hell out of a lot of people, including friends, family, students, colleagues, several nuns, a security guard, and an administrator or two. Colleagues had to call 911 for both of my collapses after finding me semi-conscious and on the ground. I found it scary, and judging from the looks on the faces of the first responders, and the way the cops encircled me and kept their hands on their weapons, they seemed pretty frightened, as well. (There’s no telling what harm a semi-conscious philosophy professor might do to a group of armed law enforcement officers. “I don’t really know where my hands are, but don’t shoot!”)

On the one occasion when I was taken to the ER (I refused treatment “against medical advice” on the other occasion–correctly, I still believe), no one seemed interested in hearing about my Ambien issues. They duly noted it in their chart, then promptly ignored the issue and moved on. The ER doctor diagnosed me as having “vertigo,” prescribed an anti-vertigo medication, gave me an IV with saline solution, and left it at that. In retaliation for his refusal to listen to what I had to say about Ambien, I lied to him and told him after a few hours in the ER that I was fit to drive home. I guess he believed me, and then cheerfully discharged me; I less cheerfully drove home (or at least in the direction of my home) and then nearly crashed my car into a diner. (Having missed the diner, I decided to stop and have a meal there: I mean, if you don’t wreck the diner while driving past it, you might as well stop and have the hot open-faced turkey sandwich to celebrate your good fortune. Insanity never tasted so good.)

I eventually got home, but still had to fill the anti-vertigo prescription. I didn’t trust myself to drive to the pharmacy, but didn’t trust myself to walk there, either: vertigo is no respecter of modes of locomotion. I ended up staggering there somehow, only to discover that I had lost the anti-vertigo prescription somewhere between my apartment and the pharmacy. Out of options, I staggered back home, reframing the loss of the anti-vert prescription as a defiant refusal to comply with medical orders, and settling on the ground to have my vertigo in a safe place. That’ll show that ER doc.

I lay there awhile, let the vertigo wash over me a bit, then popped another 12.5 mg CR Ambien, settling soon enough into another four refreshing hours of non-REM sleep. By 2 am, I was wide awake, reading Jorge Luis Borges (on insomnia), and waiting for the sun to come back up so that I could start yet another vertiginous and sleep-deprived day teaching ethics, critical thinking, and aesthetics to students who seemed not to notice that anything was amiss. (Conveniently, I had managed to collapse after class had ended. None of my students saw the collapse happen; I lay on the ground for an hour before I was discovered by the instructor who needed to use the classroom after me.) At that hour, being “wide awake” for the forty-fifth night in a row didn’t feel anything like being in a Katy Perry video. It felt like being in a madhouse of my own making.

Somewhere around day 85, it began to dawn on me that I was addicted to Ambien and had to find a way to get off. (What, you ask, did I do between day 45 and day 85? I followed the directions on the bottle, that’s what. I popped those pills “as necessary,” supplying my own personal criterion of “necessity.”)

No one—not my physician, not my pharmacist—had ever informed me that any of this was likely or possible. In fact, my pharmacist insisted that Ambien was harmless, that no one ever got addicted to it, that one could safely be on it for years, and that when the time came to get off years hence, I could safely make that decision at will.

Not really. Getting off the medication was a bit of a drag. I started the taper around my hundredth day on the medication. The taper protocol–a 12.5 mg reduction per week, that is, one abrupt weekly drop from 12.5 mg a night to 0–gave me intense nightmares, paranoia, and hallucinations, among them a particularly wild psychotic episode in which I believed that my brain was being devoured by pink, L-shaped worms.* I also had unbelievably vivid, detailed, apparently true-to-life dreams of home invasions, of unknown intruders coming into my house and maliciously leaving all the lights on (while, in the dream, I was alone in my apartment tapering from Ambien), and (my favorite) of being asked by an ex-girlfriend to lead an eager and willing army of small children to overthrow the U.S. government. (I woke up before we did any harm.)**

A physician I eventually consulted to help supervise the taper described my taper protocol as “an act of self-punishment,” and put me on a more gradual one. Unfortunately, before it was all over, I had yet another episode that put me in the ER. This time, I had to call 911 myself, only to discover that the paramedics sent to rescue me had gotten lost on the way to my apartment. (In other words, my local EMS had failed to pull off what Papa John’s routinely accomplishes. Is it the tips?) As I saw them circling my apartment complex without ever quite finding their way to my building, I was forced to leave my apartment in the middle of what was supposed to be a medical emergency to guide them to their intended destination. When I did, one of them blamed their inability to find me on the complexity of my apartment complex. The other one blamed it on her addiction to Xanax. Et tu, Paramedic? Anyway, there’s nothing like honesty.

When I told her that I myself was suffering side-effects from Ambien withdrawal, Xanax Girl blurted out, “Ambien? Shit, I was going to switch to that tomorrow. They gave me the prescription for it, and I’m pretty sick of this Xanax–but you know, maybe I won’t now. You’re fucked up, honey. I don’t want to end up like that.” I told her she had a point. She thanked me for the advice, then got me into the ambulance, and nearly managed to crash it into a barbershop before we got to the ER. (No, we didn’t stop for a haircut.) I’ll never forget the crazed, anxiety-ridden look on her face. I felt protective of her. She seemed worse off than me.

This time the ER doctor listened to my anti-Ambien rant, then nodded sagely and said, “Yeah, but Ambien is nothing. You should see Klonopin withdrawal. Now that’s some shit! I’ve seen people vomiting for hours from that. I mean, not to make light of what you’re going through right now.” Not at all.

Another saline drip. A few questions about my fitness to leave the ER. Some informative sheets of paper on the perils of Ambien dependency. Then, discharge. My friend Mike picked me up, and we decided to get pizza (pizza cures everything). Unfortunately, despite the pizza, the symptoms came back that night, but I couldn’t bear to call 911 again. I got through it somehow, mostly by forcing myself to stay awake.

All in all, the withdrawal lasted 71 miserable days. Once I got off Ambien, however, my sleep patterns returned to normal. The irony was that the Ambien had done almost nothing to help me sleep, which is what it had been prescribed to do. I suffered eight consecutive months of insomnia, six of them on Ambien–less than four hours of sleep a night for about 250 nights. Bad as the insomnia was, however, the experience as a whole convinced me that Ambien was a lot worse than the condition it had been prescribed to correct. It also gave renewed meaning to a line from Ozzy Osbourne’s “Flyin’ High Again”: I really should have kept my feet on the ground, and waited for the sun to appear. Better insomnia than addiction. And the experience primed me for Robert Whitaker’s anti-medication message.

Though it’s obviously not Whitaker’s fault, it was probably a mistake on my part to have read his book during withdrawal from a psychiatric medication: I learned a lot from the book, but the experience of reading it at the time almost certainly ramped up my sense of paranoia, and probably fed my nightmares and hallucinations. (On the other hand, I have to admit that the nightmares and hallucinations gave me new and distinctive insight into Descartes’ Meditations, so I guess I made epistemological lemonade of the psychotropic lemons I’d been served. Call it a contribution to positive psychology.) Even under the best of circumstances, it’s difficult to read and contemplate Whitaker’s thesis without suffering mental disturbance of some sort.

A year or so after my Ambien ordeal, I’d like to think that I’ve achieved some measure of objectivity.

*Postscript, December 14, 2014: I forgot to mention the episode where I hallucinated that demons had entered my brain via my eyes, roosting in my eyelids. I blame the lapse of memory on my Ambien use, but hey, according to the experts, Ambien improves memory, so don’t listen to me.

**Postscript, February 9, 2015: I just happened to discover a music video that’s a picture-perfect depiction of an Ambien withdrawal nightmare–“Big Bad Wolf” by In This Moment. Just fall asleep after hours (or days or weeks) of insomnia, draw out the wolf-piggie dialogue depicted here for a few hours, and repeat every night for a few months–and you’ll get the idea.

Psychiatric Medications: Promise or Peril? (Part 1)

About twenty years ago, Robert Nozick published a brilliant paper, “Socratic Puzzles,” intended to address the apparent paradox of Socrates’s avowal of ignorance:

Socrates claims he does not know the answers to the questions he puts, and that if he is superior in wisdom this lies only in the fact that, unlike others, he is aware that he does not know. Yet he does have doctrines he recurs to…and he shows great confidence in these judgments. …Is this supremely confident Socrates merely being ironic when he elsewhere denies that he knows? How are we to understand what Gregory Vlastos terms ‘Socrates’ central paradox’, his profession of ignorance? (“Socratic Puzzles,” in Socratic Puzzles, p. 145).

I won’t try to summarize Nozick’s (to my mind successful) resolution of the Socratic paradox. I’ll just cut to the chase regarding its payoff:

Inquiry arises because of puzzlement, John Dewey said. People who are quite confident of the truth of their very extensive views are unlikely to engage in probing inquiry about these matters. The first step for Socrates, then, must be to show these others that they need to think about these matters, that is, to show them that what they already are thinking (or unthinkingly assuming) is quite definitely wrong. (“Socratic Puzzles,” p. 153).

And more:

Socrates has doctrines but what he teaches is not a doctrine but a method of inquiry….He teaches the method of inquiry by involving others in it, by exhibiting it. Their job is to catch on, and to go on. (“Socratic Puzzles,” p. 154)

And yet more:

Socrates shows something more: the kind of person that such sustained inquiry produces. It is not his method alone that teaches us but rather that method (and those doctrines it has led him to) as embodied in Socrates. (“Socratic Puzzles,” p. 154).

That’s a long preface to a discussion about psychiatry, but it seems to me the best entrée into a discussion of the Felician Institute event that I organized this past Saturday, “Psychiatric Medications: Promise or Peril?” The upshot, ironically enough, was a collective but highly instructive profession of ignorance by the four presenters invited to address the symposium. Whatever their “doctrinal” disagreements, all four presenters agreed–in some way, at some level–with this proposition (my words, not theirs):

Despite the ubiquity of the use of psychiatric medications in the United States (and perhaps the First World generally), we really have no clear idea what we are doing when we use them, with what consequences, or with what rationale. What’s clear is that we’re widely overusing them with highly problematic consequences.

They may not have put it that way (though I think one or two did), but I think all four were committed to the claim. When you consider what’s at stake—the mental health not just of the present generation but of future ones, of children, the elderly, and everyone in between—that’s a fairly sobering thought.

The “profession of ignorance” involved here was not the helpless or hapless “I don’t know” of the unprepared student or the ignorant layperson coming to the issue for the first time. It was a profession of ignorance by people in one way or another professionally involved in the field of mental health—as a science reporter and activist (Robert Whitaker), as a psychiatrist in private practice (Ray Raad), as a counseling psychologist and professor of counseling (Peter Economou), and as a philosopher of psychiatry and patient (Christian Perring). And the audience they were addressing was also, to a large degree, professionally involved in mental health, consisting in large part of students from Felician’s Master’s Program in Counseling Psychology. It was a Socratic profession of ignorance—a profession of ignorance of the sort possible to people with deep knowledge of a subject, and something important to say about it.

I’m very pressed for time, given the end of the semester, but what I’d like to do over the next few days is to summarize what the presenters did say, and perhaps invite some further discussion from both the panelists and the audience to add to or correct what I’ve missed. Obviously, any reader of the blog is invited to comment as well.

A summary of the event is perhaps in order. It began with a remarkably personal and candid introduction by Dr. Anne Prisco, our College president, on the dilemmas she’s faced as a mother, confronting the issue of whether or not to medicate one of her sons for what might have been (but might not have been) a case of ADHD. She decided not to: better that he should underperform, her reasoning went, than that he should become dependent on stimulants. That deep skepticism about the use of psychiatric medications set the agenda and tone of the rest of the conversation (with some significant provisos and caveats offered by Ray Raad, the only psychiatrist on the panel, and probably the only psychiatrist in the room).

The first of the two panels featured a 45-minute talk by Robert Whitaker, and centered on the thesis of Whitaker’s controversial (and prize-winning) 2010 book, Anatomy of an Epidemic, which is highly critical of the use of psychiatric medications. Whitaker’s talk was followed by a 25-minute commentary by Ray Raad, a psychiatrist in private practice in New York City. Raad agreed in a very general way with Whitaker’s argument, but disputed many of the specifics, with interesting (and still debatable) implications for Whitaker’s thesis. What followed was a relatively brief but very interesting discussion. I can’t quite remember the details anymore, so perhaps other participants can fill them in when I manage to write up a summary of the panel itself.

The second of the two panels featured two thirty-minute presentations. The first, by Peter Economou, sketched a “middle of the road” approach to psychiatric or psychological treatment, combining cognitive-behavioral therapy with the judicious use of medications. Peter’s was perhaps the most skeptical, theoretically eclectic, and overtly Socratic of the four presentations: he actually just came out and said, “The truth is, we know what works in this or that context, but ultimately, we have no idea why it works or what we’re doing.” Christian Perring came at the issue by considering the “epistemic difficulties” faced by consumers of mental health services in confronting the conflicting claims of “psychiatric expertise.” The talk was tellingly and instructively inconclusive: considering the nature of the epistemic difficulties, it’s not entirely clear what potential patients should do, or what “informed consent” means under such conditions of uncertainty. We had a nice (meaning: contentious) hour-long discussion after that, which I’ll try to reconstruct at some point if I can.

After that, of course, we had a reception in which participants self-medicated with the widely-used psychotropic substance known as “alcohol.” (The event was fueled by self-medication via that other widely-used psychotropic substance, “caffeine.”)

More to come, as I manage to get to it.

(Thanks to George Abaunza for the NPR link on medicating the elderly.)

Postscript, December 10, 2014: An interesting article in today’s New York Times, about the use of ketamine (“Special K”), a dissociative anesthetic with hallucinogenic effects, for depression.

Reminder: “Psychiatric Medications: Promise or Peril?” Fall 2014 Felician Symposium

Here’s a reminder, for those of you in the New York/New Jersey Metro Area, of our upcoming symposium, “Psychiatric Medications: Promise or Peril? An Interdisciplinary Discussion.” The symposium is the third annual one sponsored by the Felician Institute for Ethics and Public Affairs, and is co-sponsored by the Felician College Department of Psychology and Felician’s Graduate Program in Counseling Psychology. It takes place Saturday, December 6, between 1 and 5 pm in the Castleview Room on the Rutherford, New Jersey campus of Felician College. The Castleview Room is located on the second floor of the Student Union Center on the Rutherford campus. (The GPS address is 223 Montross Ave., Rutherford, NJ, 07070.)

The topic is timely enough as it is, but has been made particularly so by recent coverage of the issue in The New York Times, among other places. Check out this article on psychiatric drug use in children, as well as these follow-up letters on the same article. This review of Yochi Dreazen’s The Invisible Front discusses the use of psychiatric drugs for PTSD in returning veterans. Also worth checking out is Alan Schwarz’s controversial series on ADHD in The New York Times, which you can find by scrolling backward on his dedicated page at their website. Likewise worth checking out (and more supportive of the use of medications) are guest posts at the Times by Richard Friedman of Weill Cornell Medical College.

I’ve only scratched the surface of the popular literature on psychiatry, but I’ve found the work of Peter Breggin, Gary Greenberg, and Peter Kramer illuminating in addressing the important background issues. (For whatever it’s worth, despite his reputation among libertarians, I have generally not found the work of Thomas Szasz particularly helpful. And despite her reputation among mainstream readers, I have very mixed feelings about the work of Kay Redfield Jamison.)

Here’s the line-up of presenters at the Felician event:

Raymond Raad replaces Cheryl Kennedy of Rutgers New Jersey Medical School, who unexpectedly had to cancel. I’m very grateful to Ray (who lurks on PoT) for doing the event on such short notice.

Whitaker’s work features prominently in a much-discussed two-part review by Marcia Angell in The New York Review of Books; for another view of Whitaker’s work, check out this highly critical review by E. Fuller Torrey, along with Whitaker’s response.

If you’re interested in issues at the intersection of philosophy, psychiatry, and psychology, and don’t know Christian Perring’s Metapsychology Online Reviews, you probably need to head there ASAP (see link above). [Added later: Perring is the author of the entry for “Mental Illness” for the Stanford Encyclopedia of Philosophy, the main reference work in the field.]

Peter Economou not only has the distinction of having founded a Counseling and Wellness Center in New Jersey (see link above), but of being on the New Jersey State Board of Psychological Examiners (aka “the licensing board”)–and of being my academic advisor in the counseling program at Felician.

Hope to see some of you at the symposium.

P.S. More grist for the mill: Though much of it is behind a paywall, I just happened to notice this piece by Mitchell Feinberg, “On the Moral Use of ‘Smart Drugs,’” in The Objective Standard. Perhaps readers who subscribe to TOS can tell us what Feinberg says. Meanwhile, neurophilosopher Patricia Churchland weighs in on the controversy in her recent book, Touching a Nerve: The Self as Brain:

To the degree that I am optimistic, it is because there are scientific discoveries that obviously and unequivocally have been used to make life better–such as polio and smallpox vaccines; such as Prozac and lithium; such as hand washing by surgeons and the use of local anesthetics by dentists….(p. 23)

It does seem generally true that as we come to understand that a particular problem, such as PMS or extreme shyness, has a biological basis, we find relief–relief that our own bad character is not, after all, the cause and relief because causality presents a possible chance for change. If we are lucky and current science has moved along to understand some of the causal details, interventions to ameliorate may emerge. Even if a medical intervention is not available, sometimes just knowing the biological nature of the condition permits us to work around, or work with, what cannot be fixed. For some problems, such as bipolar disorder and chronic depression, medical progress has been greater than for other problems, such as schizophrenia and the various forms of dementia. As more is unraveled about the complex details of these conditions, effective interventions will likely be found. The slow dawning of deep ideas about the brain and the causes of neurological dysfunction has lifted us from the cruel labeling of demonic possession or witchery. (p. 31)

I take it that Churchland takes her neurophilosophical eliminativism about mind to prescribe support for the pro-medication (“promise”) side of the debate? If she doesn’t intend that, it’s not clear to me what she is saying. (Of course, it’s not clear to me how eliminativists can have intentions, either, but never mind.)

Postscript 2, November 30, 2014: Some excellent posts on psychiatric medications, care of Scott Alexander at Slate Star Codex: SSRI’s, More Than You Ever Wanted to Know, and Such Crazy Feelings About Crazymeds.

Philosophy, Psychiatry, Psychology: Some Resources and Announcements

I have no way of knowing where the readers of this blog live, but I know that some of you have an interest in issues at the intersection of philosophy, psychiatry and psychology. So, in one way or another, this post is for you.

(1) On Saturday, December 6 (1-5 pm), the Felician Institute for Ethics and Public Affairs will be holding its third annual fall symposium in the Castle View Room of Felician’s Rutherford, New Jersey campus (located on the second floor of the Student Union Building).* This year’s topic is “Psychiatric Medications: Promise or Peril? An Interdisciplinary Discussion.” The symposium will feature four speakers:

Whitaker’s work featured prominently in a much-discussed two-part review by Marcia Angell in The New York Review of Books; for another view of Whitaker’s work, check out this highly critical review by E. Fuller Torrey, along with Whitaker’s response.

I’ll be moderating one session; the other will be moderated by Ruvanee Vilhauer, Professor of Psychology at Felician and until recently, chair of the Psychology Department here. It should be an exciting afternoon, so if you’re in the area and interested, I hope you’ll consider attending. Thanks to Jacob Lindenthal of Rutgers New Jersey Medical School (NJMS) for his advice in putting the event together. Thanks also to Dr. Lindenthal for putting together the Mini-Med School event that I attended this past spring at NJMS, and which, in part, provided the inspiration for the Felician event.

The event is free and open to the public. Refreshments will be provided.

P.S. The papers from the 2012 symposium, on Robert Talisse’s Democracy and Moral Conflict, were just published in Reason Papers and Essays in Philosophy. The papers from last year’s symposium, on Christine Vitrano’s The Nature and Value of Happiness, will be published in Reason Papers in 2015.

(2) Other metro-area conference announcements:

  • On Sunday, November 2, the Northeast Counties Association of Psychologists will be presenting a lecture by Kenneth Frank, “Practicing Psychotherapy Integration: Can Neuroscience Help?” at the Cresskill Senior Center in Cresskill, New Jersey. Details here. I’ll be there along with a few PoT people (so to speak), so if you’re in the area, stop by (though there’s a fee). Thanks to Peter Economou for the suggestion.
  • The Association for the Advancement of Philosophy and Psychiatry has been around since 1989, but for some reason I only just noticed its existence (obviously a case of narcissistic personality disorder). It’s jam-packed with valuable resources. Their last conference was in New York; their next conference has yet to be announced. Christian Perring heads the New York-area chapter (small world!).

(3) On a related note, as a fledgling counseling student, I was recently obliged to buy my personal copy of DSM-5, the Fifth Edition of the Diagnostic and Statistical Manual of Mental Disorders. Meanwhile, to make sense of it, I’ve been making my way through Gary Greenberg’s The Book of Woe: The DSM and the Unmaking of Psychiatry. I’ve only gotten about 80 or 90 pages into Greenberg’s book (it’s about 400 pages long), but it’s a great read so far. Greenberg is a psychologist with an anti-psychiatry ax to grind; he’s also a great writer and a clear thinker who knows how and when to raise the relevant philosophical issues. The book raises some important questions not just about psychiatry per se, but about the logic of classification and the axiology of health and disease. I recently read and enjoyed Greenberg’s Manufacturing Depression: The Secret History of a Modern Disease, but I happen to like Book of Woe better. Highly recommended, for whatever that’s worth.

*The location was changed on October 22, 2014. It had previously been scheduled for a location on the Lodi campus.