I’ve been working on and thinking about issues at the intersection of psychology, psychiatry, and moral philosophy lately, so this (partly but not entirely edifying) discussion-thread at BHL caught my eye. I thought I’d reproduce it here, comment on it, and then just leave the comments open indefinitely for thoughts on the matter.
The discussion arises in the context of a post by Jason Brennan on whether one should go to grad school. I don’t particularly like the self-congratulatory tone of the post, but don’t disagree with the advice he gives. Early on in the post, he addresses a frequently-asked question and offers up an answer:
I like reading and discussing economics or political philosophy. It’s my hobby. Should I go to grad school? You can do all these things without getting a Ph.D. You won’t be as good at it, but you can read and discuss economics while holding down a job as an insurance agent, a lawyer, or a consultant. You might be able to maintain your hobby while making a lot more money.
It’s not very adeptly or tactfully put, but on the whole, I agree with Brennan. His point is not that non-PhDs cannot in principle be as good at philosophy as PhDs. His point is that the generalization holds as a rule: generally speaking, and given current economic and institutional realities, you need a PhD to excel at philosophy. There are some notable exceptions to that rule, of course. Some of the most brilliant and successful academic philosophers got into the profession back in the day when a PhD was considered unnecessary (e.g., Alasdair MacIntyre, Colin McGinn, Saul Kripke), but no one holds not having a PhD against them. Coming the other way around, I know non-academics out there (without PhDs) who can hold their own–and then some–with many PhD philosophers. But I think such people are the exception, not the rule. Ultimately, one has to commit the fallacy of accident to deny the truth of what Brennan is saying. We can recognize that exceptional cases exist while acknowledging the truth of the rule he’s identified.
Perhaps Brennan should have qualified what he said to accommodate the exceptional cases, but I also think it’s clear he had a very different sort of case in mind–e.g., the middle manager who wants to do philosophy on the side. I think Brennan is correct to think that such a person will tend not to be as good at philosophy as the PhD philosopher from a top-20 school (Arizona, Princeton, Rutgers, Oxford, Pittsburgh, etc.) who is herself working at an R1 school and (therefore) doing philosophy all day. (And most would come out and admit it.) The more invested you are in your day job, the heavier its demands. But the heavier its demands, the fewer resources you have to devote to philosophy. Given the (very) heavy demands of doing good philosophy, having fewer resources means, all things equal, you won’t do it as well as someone with more resources at her disposal. As someone who spent nine years temping and adjuncting before finding a full-time academic position, that doesn’t seem controversial to me.
It’s not much different from the situation of the guy who spends eight hours a day working assiduously on his guitar chops versus the guy who noodles a bit on his prized Gibson SG after a long day at work. The first guy might make it in the music business, if he’s lucky and other things come together; the second guy may do a gig of AC/DC covers at the local bar (if they let him in), but can’t expect to headline Met Life Stadium (capacity: 88,000), or for that matter, headline the local equivalent of the Wellmont Theater (capacity: 1,200). (Again, I should know.)
The conversation took a different (and actually, more interesting) direction after an intervention by someone named Val, a psychiatrist, who jumped in with this comment just below. Responding to the Brennan passage quoted above, he or she had this to say (sorry for the pronoun ambiguity, but “Val” could be either male or female):
Rubbish and simple minded navel-gazing. Except for the unique subspecialty of a Ph.D tenured research professor (“I’m the foremost expert on La Rochefoucauld’s writing of the year 1678!”), anyone who puts in the time and is clever can speak on intellectual issues with equal footing. You can certainly be “as good at it” in whatever interests you.
I’m a psychiatrist attached to a large research university and spend most of my day as a clinician. The philosophy professors who have careers focusing on ethics, political philosophy, or Scholasticism are barely on equal footing with the well-read clinicians who have been reading the epistemology of science for the last 25 years.
I think Val’s comment talks somewhat past Brennan’s. Yes, “anyone who puts in time” can speak with equal footing, but Brennan’s point is that if you have a day job, the better the job, the less time you’ll have to put in. The worse the job, the less sense it makes to do philosophy rather than get yourself a better job (and then do philosophy, in which case, it’s back to the first option). There are exceptions to this rule, too, but as a rule, it holds. Val’s situation is unique, and escapes Brennan’s point, but doesn’t generalize to the cases Brennan is discussing–the majority of cases.
Unfortunately, Brennan, given an opportunity to re-direct the conversation, only had this to say:
Val, I bet you just think you’re smart because of the Dunning-Kruger effect.
Clinical psych is easy as pie. It’s what people with bad GRE or MCAT scores do.
It’s a somewhat cryptic–and actually pretty stupid–response. The first sentence is just a particularly abusive instance of poisoning the well. The second sentence suggests that Brennan is under the impression that Val is a clinical psych(ologist). In other words, his implicit reasoning is:
You must be one of those dumb people who’ve opted to work in clinical psychology. Your GRE scores were probably too low to work in a difficult field, like philosophy, economics, or cognitive psychology. Your MCAT scores were probably too low to get you into a good medical school, or to get you in at all. So you opted for the easy way out–clinical psychology. And given that, you must think you’re particularly smart because you’re operating under the Dunning-Kruger effect. Being a victim of that effect, you’ve taken umbrage at my suggestions, but that’s because the effect has deluded you.
One problem here is that Val is a psychiatrist with an MD. So the GRE is irrelevant to his/her situation, and he/she obviously did well enough on the MCATs to get into med school, get an MD, go into practice, and get attached to a research university.
A second problem is that even if there were a documented correlation between low GRE/MCAT scores and the choice of clinical psychology as a profession, it wouldn’t follow that clinical psychology was “easy.” The more obvious inference would be that neither the GRE nor the MCAT was designed to test skill or aptitude in clinical psychology. A little Howard Gardner might have gone a long way here.
Personal experience might help, too. Brennan often likes to talk about his, so here’s a bit of mine. I spent part of grad school writing GRE questions for the Educational Testing Service (ETS), so I have a fairly good sense of what’s involved in designing them, including what they test and what they don’t test. There’s a lot that they don’t test, and a lot in them, methodologically and substantively, that is highly debatable, regardless of what ETS’s in-house psychometricians will tell you. Keith Stanovich’s work is relevant here.
It’s a great irony, by the way, that a large number of the item writers for the GRE (and personnel at ETS generally) are people who, by Brennan’s standards, are academic failures–i.e., grad students, often at Rutgers, Princeton, Temple, or Penn, who’ll never get a tenure track R1 job, or grad students (Rutgers, Princeton, Temple, Penn) who never finished their programs. So lots of Brennanite “failures” end up being the gate-keepers for the Brennanite “winners.” Something similar is true of the PRAXIS exam: as a doctoral student without a teaching certificate, I wrote questions for the very exam that confers the credential I lacked, gatekeeping a profession I wasn’t permitted to enter.
A bit of advice, then: Brennan tells people who might want to go to grad school, but shouldn’t, to get a job at GEICO. I would say, instead: get a job at ETS. I worked there as a part-timer for almost six years before I got a full time academic position. It was a good place to work. Not my first preference, but still.
Incidentally, if I were Jerry Springer, at this point I would say that one important lesson we learn here is not to accuse someone of being a victim of the Dunning-Kruger effect, accuse him/her of bombing the GRE, and misread what he/she wrote all in the same comment.
Anyway, back to Val’s comment. I sort of agreed, sort of disagreed. So here’s what I said:
I’m a PhD philosopher working on a master’s degree in counseling psych. I spend a fair bit of time discussing philosophy vs clinical psychology and/or psychiatry with people in those fields. I see where you’re coming from, but don’t agree with you (not that I agree with Brennan’s comment below*).
An enormous amount of the literature in both clinical psychology and psychiatry strikes me as methodologically weak and substantively trivial. (Much of it also makes huge, unwitting assumptions about difficult issues in the philosophy of mind.) The clinical work that (good) psychiatrists do gives them practical experience that philosophers don’t typically have (fair enough), but it’s very narrow and doesn’t equip them with the resources to think about bread-and-butter philosophical issues. In any case, for many psychiatrists, “clinical work” nowadays means “medication management,” not therapy. I don’t see how expertise at managing a dosing schedule gives a person insight into the foundations of ethics. I’m willing to hear the argument, but offhand, I don’t see it.
That’s not to say that there aren’t brilliant philosopher-psychiatrists out there (e.g., Jonathan Lear, Richard Chessick…Sigmund Freud), i.e., people with excellent philosophical skills who have capitalized on their clinical work. I’d also be willing to say that they have insight and understanding that most philosophers in the field lack. But that’s a far cry from the claims you’re making.
One look at Brennan’s derisive comment below* should tell you that if you were looking for intelligent engagement with your arguments, you’ve come to the wrong place. If you’re interested in discussing the issues, feel free to come by my blog or contact me privately (contact info at the blog). I sometimes blog on issues at the intersection of philosophy and psychology in the broad sense (that includes psychiatry), and wouldn’t mind batting this one around. We’re mostly philosophers, but there are some psychologists and psychiatrists lurking in the “audience.” You might find it fruitful to have a conversation with us. And rest assured, we won’t ask you about your MCAT score or reduce your arguments to a diagnosis.
Val saw what I wrote and had this to say:
Irfan – I agree with a good deal of what you have said. An enormous amount of psychology and psychiatry research is indeed methodologically weak. As the saying goes, nearly of all of psychology research is trivial if true, and if attempting to show something non-trivial, is impossible to convincingly demonstrate. My experience as well has been that most psychologists and psychiatrists are grossly ignorant of the surrounding philosophical issues. However, there are plenty of psychiatrists that I work with who are keenly aware of the epistemic problems of the assumptions inherent in modern psychiatry and are well read in the psychiatrist-philosophers, (Jung, Jaspers, Freud…Popper is also popular. Human Action was recently under discussion in the geriatrics department). …
I agree with that, of course. I also think it goes the other way. Most philosophers are grossly ignorant of psychology and psychiatry, but it’s unclear to me (one year into a psychology program) how much of a debility that turns out to be. If so much psychology research is trivial, what leverage does one get out of relying on it to do moral or political philosophy? Some, I think, but it’s difficult to articulate what it is.
Same issue from a different direction: as a journal editor and conference organizer, I read dozens of manuscripts in ethics and political philosophy from authors who are trying (sometimes trying too hard) to showcase their familiarity with cutting edge work and cutting edge ways of doing philosophy. A large proportion of this work showcases the latest work in psychology. Decades ago, Robert Nozick told us that either we work within Rawls’s system, or explain why not. Now the same is implicitly being said of Jonathan Haidt. It is, one might say, a haidtful state of affairs.
Much of this psycho-philosophical experiment-mongering strikes me, frankly, as trivial, and if you dig hard enough, you find in many cases that philosophers tend, subtly (or not so subtly) to overstate, distort, and cherry pick research findings from psychology to make them less trivial than they are.
The truth is, by comparison with the intuition-mongering philosophy literature, the psychological literature tends to be very, very equivocal. Here’s a random example that I just happened to read yesterday: Daniel Wegner and Sophia Zanakos, “Chronic Thought Suppression,” Journal of Personality 62:4 (December 1994). The abstract says:
We conducted several tests of the idea that an inclination toward thought suppression is associated with obsessive thinking and emotional reactivity….[Our measure of thought suppression] was found to correlate with measures of obsessional thinking and depressive and anxious affect, to predict signs of clinical obsession among individuals prone toward obsessional thinking, to predict failure of electrodermal responses to habituate among people having emotional thoughts.
Then you read the article and the qualifications start coming: “Throughout this article, we have tried to caution that our interpretations of these results are not the only possible interpretations at this time” (p. 636).
It’s one of dozens of examples I could have used, from cognitive to clinical to political psychology. I’m not faulting the authors. My point is: psychology findings do not easily lend themselves to use as “inductive backing” for some controversial claim in ethics or political philosophy. They just aren’t written that way, or with that purpose in mind. But that’s the way philosophers often use them, at least in my experience. The psychology research of the philosophers is a lot like the God of the philosophers: not the original article. Philosophers seem wedded to the psychology of journal abstracts, not journal text–to unqualified thesis statements, not to the thesis-death-by-a-thousand-qualifications-followed-by-recommendations-for-more-grant-funding-and-research that one typically finds in the text. The jury is still out for me, but I often find myself wondering how useful all this psychology-mongering really is for philosophy.
Of course, then I read hand-waving, flat-footed philosophy that resolutely ignores the empirical literature, and I swing the other way. It also helps to read classic texts–Aristotle, Aquinas, Hobbes, Locke, Freud–and see how much they got wrong, empirically speaking. (Just think of what passes for biology or cultural anthropology in any one of these writers.) I just got finished reading Calvin Hall’s Primer of Freudian Psychology, published in 1954. One doesn’t think of 1954 as being that long ago–the Eisenhower Administration wasn’t ancient history–but the author has the nerve (so to speak) to assert that asthma, arthritis, and ulcers are psycho-somatic effects of ego defense mechanisms (pp. 85-87). Primal repressions, we’re told, arise in Lamarckian fashion via the “racial history of mankind” (p. 85). I guess sometimes pseudo-science is just pseudo-science. So I’d be the last to trash appeals to hard fact as a constraint on normative theorizing.
I’ve often thought that psychiatry rewards the philosophically minded more than any other specialty. General medicine, for instance, largely reduces to this model: is the blood sugar >6%? If yes, implement algorithm given to you by the Joint Commission. Pattern recognition and memorization required, but not a lot of analysis.
In psychiatry, if a patient complains of depression, you have to say, what does depression mean to this patient? Is depression even real? How can I judge this patient as having depression when there are no absolute standards? How will I know if his depression is responding to treatment? Why is the treatment even working? What caused the depression? Why do some develop depression in similar circumstances but not others? Good clinicians conceptualize patients in such a manner, and this is how they are discussed at conferences. Poor psychiatrists uncritically push pills.
MIT press released a very good collection last year, Classifying Psychopathology, for sale on the shelves in the medical school book shop. I doubt very much a well read psychiatrist wouldn’t be “as good” (to use Brennan’s silly words) at discussing the contents as a Ph.D philosopher who specialized in ethics.
I agree with most (or a lot) of that, but notice that the context of Val’s comment is psychopathology. Yes, within that context, psychiatrists have a lot of challenging, important philosophical work to do. But the context is itself very narrow. You can master all that there is to know about psychopathology, whether psychiatrically or philosophically (or both), and still be light-years away from dealing with issues that are central to ethics.
Anyway, there’s a lot to think about and respond to there. To keep this post within reasonable length, I’ll post any further thoughts I have in the combox. But I figure that some of PoT’s lurking readers may have things to say–there are some psychologists and at least one psychiatrist out there, along with a few non-psychiatrist MDs–so I’ll just leave this open for comment.
*Brennan’s comment was below mine when I first wrote. As of March 9, 2015, Brennan’s response to Val no longer bears his name, and is attributed instead to an anonymous “Guest.” The same is true of a few other comments of his in that discussion.
Brennan is, as so often, unnecessarily nasty, but I take his point about graduate school making you better at philosophy to be unobjectionable: you’ll be a better philosopher if you go to graduate school because you’ll be able to devote considerably more time and energy to it and you’ll be spending a great deal of time with other philosophers, many of whom are more experienced than you. By contrast, working as a consultant or a lawyer or at GEICO will not only decrease the amount of time and energy you have for philosophy; it will also likely decrease your contact time with other philosophers. None of this implies that if you went to graduate school, you are a better philosopher than anyone who hasn’t. That’s likely, but far from necessary, whatever Brennan thinks about it.
I’m not sure, though, why Brennan puts as much weight on money as he does (I mean, I’m hardly surprised when a libertarian tilts in the direction of reducing everything to economics, but Brennan is supposed to be a bleeding heart libertarian!). Why should it be silly for someone to spend 5-7 years earning a Ph.D. in philosophy and then turn around and get a job at GEICO? Sure, she could perhaps make more money at GEICO in those 5-7 years, but what exactly would be the problem if her plan was instead to earn the PhD and then move into some other field, whether insurance or bartending or teaching yoga or working in a hotel lobby? It’s natural, of course, for those of us who love our subjects enough to get PhDs to want to continue working in an academic environment. But the assumption I encounter almost everywhere is that if you don’t continue on in academia after the PhD, then the PhD is a waste of time. I have a very difficult time understanding that attitude. It’s true that I would prefer to spend the rest of my life devoting myself to reading, writing, teaching, and thinking about Greek and Latin philosophy and literature. But if I end up having to pack up my books and go get a job doing something else, then I’ll hardly be inclined to think that all the years I’ve spent doing what I most love to do were a waste.
When people ask me about graduate school, the advice I give them is always the same: imagine that you go, spend 5-7 years working very hard to earn your degree, and then can’t find work in the field; if you think you would then regret going to graduate school, don’t go; if you wouldn’t regret it, do it.
I have no fixed opinions about the relative merits of psychiatrists and philosophers, except that I’m glad people do both of these things and that it seems obvious that each could benefit the other.
This hadn’t occurred to me when I wrote, but is very true:
I agree with part of your implicit criticism of Brennan: he’s too fixated on money. But I think the fixation on money is part and parcel of an implicit commitment to a sort of social scientific version of operationalism: to be is to be a variable measurable by our best social sciences, economics and/or cognitive psychology. I doubt that the commitment would be put that way, explicitly, but it explains the fixation not only on income but on standardized test scores. It’s a theme not only in Brennan’s polemics on this subject, but in Bryan Caplan’s and Charles Murray’s critiques of higher education.
I’m comfortable with the advice you give at the end, if only because it’s the approach I myself took to graduate school and what came after. But I’d have trouble giving the advice to someone just starting out, because I think most people starting out would regard it as self-evident that if you got the degree but didn’t get a job in the field, you would regret it–how could you not?
You’re right to single out the underlying assumption–if you spend 5-7 years in academia and don’t get a job, you’ve wasted your time. I actually find this belief relatively easy to understand. Suppose you go into grad school wanting both the terminal degree and “the glorious job that goes with it.” You get the degree in 5-7 years, and then go on the market. You’re then in a low income holding pattern for years. The rest of your life is on hold: you can’t plan the future, can’t start a family, you hesitate to get married. You then face a dilemma: persist or abandon?
If you persist, the pattern continues for years. If you abandon, many people would feel a loss of self-esteem at having failed at their chosen endeavor. Now it becomes time to get a new job–but you’re in your 30s. If you’re inclined to focus on where your peers are in your 30s (status-wise, or prestige-wise, or income-wise), what weighs on you is the fact that you’re at entry level and they’re “ahead” of you. And then there may be student loans to pay off. Or medical bills. Or whatever. I’m not saying that these attitudes are justified, but they’re easy enough to understand.
Though I understand the attitude, I just think it’s a wrongheaded way of looking at the world. This is one place where I think Ayn Rand really did get things right–in her contrast between Howard Roark and Peter Keating in The Fountainhead. Roark arranges his life so as simultaneously to live his dreams, to achieve his dreams, to maintain his integrity, to discharge his “ordinary” responsibilities, and to live without regrets or rancor despite going through difficult times. It’s a high standard to aspire to, but it beats a life of keeping up with the Joneses.
Oh, I quite agree that most people considering graduate school would initially regard it as self-evident that they’d regret it if they spent all that time working for the degree and then had to give it up. Usually when I’ve given this advice it’s either put people off graduate school or led them to indulge in exceptionalist fantasy (“Sure, but I’m exceptional, and I’ll go to Harvard, so…”). But I really don’t think anybody who would actually regret it has any business going to graduate school in the humanities; I’m actually inclined to punch anybody who feels that way and does. To be clear, the regret I reject isn’t regret that one doesn’t get to keep on going; it’s regret that one ever went to graduate school in the first place. If we’re talking about graduate programs outside of the humanities, I might change my tune, but I honestly don’t think anybody who would only value spending 5-7 years devoted to studying a humanistic subject on the condition that they’d have a successful academic career afterwards deserves to be successful. Unfortunately, there is no cosmic justice.
But I think you started this thread to talk about philosophy and psychiatry, so let’s get some psychiatric comments going!
Wow, what a refreshingly hard-core view. I like it. The distinction between the two forms of regret does the trick. It’s funny that both you and Brennan want fewer people to go to grad school but have different litmus tests for keeping the wrong sort out. I prefer yours.
I wasn’t entirely clear what I meant by “open thread”: I was inviting comments either on the philosophy/psych issues raised by my exchange with Val, or the original grad school/career issues raised by Brennan. My post was a kind of weird hybrid covering both topics.
I was tempted to run this past you (David R) privately, but it may be of general interest: In the original post, I said it was hard to articulate what leverage one gets from doing philosophy in the light of a study of psychology, but I guess one minimal thing is just an interesting and thought-provoking change of perspective. I went back and re-read Nicomachean Ethics VII after prolonged exposure to both Freud and a standard textbook of psychopathology, and started to see things and ask questions that had never previously occurred to me. I’m not sufficiently up to date in the Aristotle literature anymore to know whether they’re addressed there, but I’d be curious to know. Here’s a very small sample.
(1) NE VII.2, 1146a30-b3: Aristotle is laying out endoxa and discussing the intemperate vs. the akratic person. The intemperate person, he tells us, is (or is thought to be) “easier to cure” of his problem (Irwin’s tr) than the akratic because there is a paradoxical sense in which the intemperate is more amenable to rational persuasion than the akratic. It’s an interesting question whether that’s right, but to me the more interesting question is: what does Aristotle mean by “cure”? Was there an ancient Greek analogue to psychotherapy? Or is he referring to some form of criminal punishment? Or both?
(2) NE VII.5, 1148b25: Aristotle is discussing “bestial” states of character. He gives some examples, then says (Irwin’s tr): “These states are bestial. Other states result from attacks of disease and in some cases from fits of madness…” But the examples of the “other states” seem indistinguishable, pathologically, from the bestial states. Why the distinction? And what is the distinction? Oddly, the one set of states gets a name, but the second does not.
It’s interesting that 1148b25-30 basically anticipates modern theories of trauma. I don’t have the Greek text with me, so I don’t know whether A. uses the word “trauma,” but he’s basically picked out the etiology of “personality disorders.” There seems to be some room there for responsibility for character even if you’ve been the victim of childhood sexual trauma, 1148b30ff.
(3) NE VII.5 1149a7-12 discusses what we would now call phobias, and the contrast with modern psychopathology is interesting. Aristotle describes people afflicted with phobias as analogous to cowards, but has no problem using the term “coward” for them with that proviso. Modern psychopathology goes out of its way (possibly too far out of its way) to avoid the stigma involved in such moralized language, stressing specifically biological predispositions to phobia instead. From Aristotle’s perspective, modern psychiatry is “soft”; from the modern perspective, Aristotle’s approach is stigmatizing and moralistic. But it seems to me that there are strengths and weaknesses on both sides, and it’s an interesting question who’s right.
I could go on (I haven’t even gotten to the Aristotle-Freud connection), but you get the idea–just one small point of intersection in one part of one text.
I know I’m flogging a dead horse by revisiting this topic in a new key, but I can’t get over the sheer myopia and wrongheadedness of some of Brennan’s career advice. I teach at a small liberal arts college without a philosophy major, so I don’t have the opportunity to advise very many up-and-coming philosophy grad students, but if this is what people at R1 universities are telling their charges, they have to expect some pushback from the rest of us, because they’re just reproducing and institutionalizing their own prejudices and dogmas about the future shape of the field. Someone asks Brennan this:
Here’s his answer:
The question to ask the questioner, it seems to me, is: First, why are you assuming that the place to seek a post-BA degree is the institution where you earned your BA? Your university may not offer a post-BA degree in philosophy, but why restrict your search that way? There may be a reason, but Brennan’s answer doesn’t even address the issue.
Second question: you say that you’re interested in the philosophical aspects of literature. Fair enough: what aspects of what literature? The interest has to be specified. If the goal is put in a totally generic form, there’s no determinate advice to give because the goal isn’t sufficiently determinate to generate any.
If the questioner can’t answer that, he has something to think about. But suppose he’d had an answer. In that case, Brennan’s answer neither engages the question being asked, nor makes any coherent sense.
Suppose the questioner had (somewhat unrealistically) said: I want to study the ethical aspects of nineteenth century British literature. Or even better: I want to study how utilitarianism plays out in parallel form in nineteenth century philosophy and literature. Philosophy: from Bentham through the Mills to Sidgwick. Literature: from Jane Austen through the Brontes, the Romantic poets, through George Eliot, etc. That’s a research program to last a lifetime, or at least a career. It’s also a research program for which there is some demand in journals in philosophy (e.g., Philosophy and Literature) and English (e.g., Victorian Studies Journal). It’s precisely not “continental philosophy,” but it’s obvious why an MA in English would facilitate that research program. And it’s hardly an obscure research program, or an unimportant one. I certainly don’t think it’s any less important than one focused on the ethics of voting or libertarianism.
Now look at Brennan’s advice. “Better to get an MA in math instead”? Really? To understand all those multi-variable equations in Middlemarch? Prima facie, the advice is just a non sequitur. But more subtly, it’s just an offhand dismissal of the very idea of a research program of the sort I’ve just described. The program I’ve just described is not based on economics. It’s not based on cognitive psychology. It doesn’t involve statistics. So evidently, according to Brennan, it’s not worth discussing, or imagining.
“I suspect most philosophers have a negative view of English departments…” Well, in the 1950s, most philosophers had “negative impressions” of Jews. So if the year was 1953, and your name was Irving Finkelstein, would Brennan’s advice be, “Better change that name to Ian Faherty. You don’t want the people reading your application to get the wrong idea…”? What goes unchallenged here is whether philosophers should be prejudiced against anyone with an MA in English. But if you were trying to get at the connection between, say, Mill and George Eliot, wouldn’t such a degree make perfect sense?
While we’re on the subject: you can’t write about nineteenth century British culture while being ignorant of British imperialism. But how much would you learn about the relation between imperialism and either philosophy or literature while getting an MA in either math or philosophy? Could it be that philosophers actually have something to learn from the people doing post-colonial literature in the English Department?
Brennan tells the questioner that “the way” philosophers approach “philosophical texts” is “very different” from what’s taught in Departments of English. But that’s irrelevant to the question being asked. The question was not about “philosophical texts” but philosophical approaches to literary texts. There is no such thing as “the way” in which philosophers are obliged to approach Shakespeare, Blake, Charlotte Bronte, or E.M. Forster. Nor is there even an agreed-on set of questions that philosophers are obliged to ask about these texts (or an agreed-on canon of texts). Brennan’s answer to the student not only ignores the actual question asked, but de-legitimizes the aspiration behind it, out of what seems to me a desire not just to describe but to institutionalize uniformity and conventionalism. That attitude expresses what’s wrong with philosophy today, not what we should be reproducing into the indefinite future.
I think you might be getting too much critical mileage out of Brennan’s interpretation of the question (I haven’t looked at the original post, so I’m assuming that what you quote here is the whole question). I’d agree that it isn’t obvious that what the questioner is asking is whether it would make sense to do an M.A. in English on philosophical literature with the intention of going on to do a Ph.D. in philosophy. But it isn’t crazy either, and of course Brennan prefaces his whole response with the supposition that that’s what he’s asking. So even if Brennan’s interpretation of the question is objectionable, he isn’t presenting his answer as an answer to the question interpreted in the alternative ways you propose. So insofar as your objections to his answer presuppose the alternative interpretation, I don’t think they’re relevant. For what it’s worth, I’m not so sure that Brennan’s interpretation is all that odd; it’s at least no more odd than some of what you propose in its place. First, I think it’s probably just assumed that the student’s current institutional options are limited; otherwise the question makes very little sense. Second, if the student were really interested in philosophical aspects of some particular literature, then it wouldn’t make much sense for her to be asking whether an English M.A. is a reasonable alternative to a graduate degree in philosophy, since hardly any graduate programs in philosophy specialize in that sort of thing, and even where some do the more sensible question would then be whether philosophy or English is the better choice, not whether English would be a good substitute. The more natural interpretation of the question, it seems to me, is roughly: what I really want to do is philosophy, but I can’t; would doing an M.A. in English focusing on philosophical literature be a decent alternative? 
Where I think Brennan’s interpretation might actually be objectionable is that it assumes that the student is proposing the alternative for purely instrumental reasons; she wants to use the M.A. as a stepping stone to a philosophy Ph.D. program. But she might just as well be asking about whether the English M.A. would be intrinsically rewarding given her philosophical interests. So I’d agree that Brennan’s interpretation is in keeping with what sometimes appears to be Brennan’s general attitude that only Ph.D’s in philosophy (or maybe economics or some other social science fields) are really worthy human beings. But given his interpretation, I don’t think his advice is at all bad. If the student’s interest in the English M.A. really is just as a way to move toward a philosophy Ph.D., then unless she’s interested in Continental philosophy it’s relatively unlikely that it will get her anywhere. Taking dominant attitudes among the philosophers who are likely to be on the admissions committee into account doesn’t strike me as a problem. Not only does Brennan explicitly disclaim any endorsement of the judgment — so that we have to be uncharitable to him, I think, to claim that he is simply endorsing the dominant prejudices of his colleagues — but it’s not nearly so analogous to the anti-Jewish prejudice you compare it to. For one thing, if there were a prevalent anti-Jewish prejudice in philosophy, wouldn’t it be right to inform an inquiring Jewish student about it? More importantly, though, being Jewish isn’t something that the student is considering as a means to a separate end; advising someone to abandon her religious identity or commitments in order to get a Ph.D. in philosophy is just not analogous to advising her not to pursue some optional course of action in order to get a Ph.D. in philosophy.
Your assessment of Brennan’s attitudes toward the profession might be right on, but I think you may be letting your dislike of his attitudes drive you toward an unreasonably harsh reaction to this particular Q&A.
Here’s a belated response to that, but I did want to respond. We’re disagreeing to some degree, but also talking past one another in some respects.
First, a minor concession: I was probably getting too much mileage out of too short a comment of Brennan’s, and probably was slightly over-stating my claims. But on the whole, I think what I said was correct. (And, yes, I did quote the whole exchange.)
Let me clear up a misunderstanding right at the outset:
I didn’t find Brennan’s interpretation of the question objectionable; I was interpreting it just the way that he did. In other words, after noting the unclarity of the question as stated, I went on to interpret the question as though the questioner were asking: does it make sense to do an MA in English in order to study philosophically-oriented literature, and subsequently do a PhD in philosophy, using the MA either as a credential or as an acquired skill-set for the PhD? I continue to think that it does, and regard Brennan’s advice to this questioner (“get an MA in math”) as both insulting and wrongheaded. It’s worth noting that the only approximately action-guiding advice Brennan gives is: if you’re doing continental, then maybe get an MA in English, but probably not; or else get an MA in math. But every element of that advice is wrong, both in the letter and in the spirit.
I may be over-stating my objections to Brennan, but in charity to him, you’ve overlooked the most ludicrous element of his “advice.” He is talking to a student interested in literature, and his advice is: abandon that interest out of deference to the supposed prejudices of your mentors/masters; adopt totally different interests (supposedly) likely to appeal to their ill-conceived sense of “rigor”; and spend a few years on it. It’s as though someone asked how best to study Rousseau in grad school and the response was, “Forget Rousseau; learn some math, and study somebody important, like Frege.” That’s not advice given to the questioner, but an expression of derision for the question itself. Brennan shows no capacity even to grasp what the questioner is asking. He’s asking about philosophical approaches to literature, not the analysis of philosophical texts per se.
I think the concept of “intrinsic reward” is a bit of a red herring here. A person could be genuinely interested in getting an MA in English but still want to use the MA instrumentally to get a PhD in philosophy. Instrumentality doesn’t entail lack of genuine interest. You could just as well regard getting the MA as constitutive of the person’s graduate career. Learning Greek is instrumental to specializing in ancient philosophy, but it doesn’t follow that that fact undermines the love one might have for getting those irregular verbs right.
I just disagree with this:
I don’t see why. Continental philosophers aren’t the only ones interested in literature; that’s why my example specified British literature. And an MA in English is of obvious utility for understanding British literature: you learn things in an MA English program that no one ever discusses in philosophy, e.g., the relation between British literature and political history, and the specific jargon of literary analysis. Contrary to Brennan’s off-hand claim that there is some canonical philosopher’s way of reading a text, there isn’t: philosophers could learn a lot (a lot more than they realize) by reading literary theory as taught by departments of literature, and I was surprised to discover–by browsing Philosophy and Literature–that they increasingly seem to be doing so.
On the last part:
He’s not just suggesting that the questioner take dominant attitudes into account; he’s suggesting that the questioner appease those attitudes at the price of treating his (the questioner’s) actual intellectual interests as a dispensable frill. There’s no other way to interpret Brennan’s “math” advice except as a way of saying: “I see that you’re interested in literature, but don’t be.” Imagine having said that to, say, Martha Nussbaum in 1980.
As regards the anti-Jewish analogy: Brennan’s advice is not analogous merely to informing a Jew that there’s anti-Semitism in the academy. It’s analogous to telling a Jew to change his Jewish-sounding name in response to anti-Semitism. There’s no need to assume that being-Jewish is a particularly powerful identity. You could be a nominal Jew with no particular identification with Judaism or Jewish causes, etc. Still, if you had a Jewish-sounding name, and there was anti-Jewish prejudice in the academy, the question would arise: should you change the name or not? In one sense, changing your name might well be regarded as a mere inconvenience, necessary for upward advancement. From another perspective, however, it’s a form of appeasement, and that’s what I find objectionable about it. You’re supposedly entering a field devoted to reason. Why should the price of entry be appeasement of irrationality? Is that price worth paying? I think the answer is “no,” even if you’re just a nominal Jew with a name like “Irving Finkelstein.” You can’t be expected to change your name to get a job (or the right job).
I think the analogy holds to the topic at hand. A person’s aspiration to study literature is more obviously central to his identity than Irving Finkelstein’s nominal commitment to Judaism. But I don’t think even a nominal Jew should change his name to appease anti-Semites. By parity of reasoning, I don’t think the would-be literature student should get an MA in math to appease people who have a low (but uninformed, prejudicial) view of literature programs. And appeasement is what’s being demanded here. Yes, I realize that Brennan says that he is not endorsing the prejudices, but despite not endorsing them, the advice he ends up giving implicitly re-affirms their legitimacy and suggests that we ought to appease them. None of his advice takes seriously the aspiration to study literature in a philosophical context.
Hmm, alright. If we suppose that literature is among the student’s main philosophical interests, then I’ll agree that Brennan’s advice is at best useless and objectionably demeaning. I didn’t read the question that way — I didn’t interpret the student’s interests as focused importantly on literature already — but on re-reading the question I can see that it’s at least equally sensible to read it that way. If it’s read that way, then your analogy to anti-semitic prejudice makes sense. Is it really so clear, though, that Brennan reads it that way? I’m not convinced of that.
Interesting all. Here’s my take on each.
1. “Easier to cure” here is euiatoteros, from the root that gives us the -iatry of psychiatry and the like. Aristotle does sometimes seem to assume a conception of punishment that is at least partly rehabilitative, but his language for that is kolasis, and I don’t see any reason to import that notion here, especially since what he says is that the intemperate person would seem easier to cure because he could be persuaded to change his mind. So I think the notion is simply that the intemperate person, but not the akratic person, acts under the rule of reason, and hence can improve simply by changing his beliefs, whereas the akratic person has to train his emotional and appetitive side to be responsive to reason, which requires habituation and is therefore more difficult to achieve, especially in a short amount of time. But I don’t think Aristotle accepts this view any more than he accepts the view that he considers immediately above, that foolishness combined with akrasia is a virtue since the foolish person’s akrasia leads him to act contrary to his foolish beliefs and therefore to do the right thing. He’ll reject that view because virtue requires acting rightly for the right reasons from a settled and harmonious disposition to so choose; he can reject this proposal because the intemperate person’s emotional and appetitive side won’t simply adapt without incident to a dramatic shift in beliefs. In fact, though, in VII.8, he rejects the idea on the grounds that the intemperate person is actually harder to persuade and is more incurable than the akratic person, because vice prevents us from being persuaded of the correct views. It’s not clear to me how strongly we should take this claim. 
I’m inclined to see it, as I’m inclined to see many other claims in Aristotle, as a claim about the full-blown case, not as a claim that applies equally to any and every case whatsoever — so we might say that to the extent that someone is vicious, he cannot be persuaded to adopt views that oppose his vices, but this allows for many people to be somewhat vicious, or unstably vicious, and hence to some extent persuadable. As for your question about psychotherapy, I think the answer for the classical period is yes and no; the medical analogy is pervasive in Aristotle and Plato, but there’s no real evidence of anything like therapeutic practices, whereas of course in Hellenistic ethics the Stoics, Epicureans, and even the Skeptics envision philosophy itself as therapeutic and devise arguments and certain discursive practices designed to root out false beliefs and stabilize our grasp of true ones (the best general books on this topic are Nussbaum’s Therapy of Desire and Pierre Hadot’s What is Ancient Philosophy? and Philosophy as a Way of Life).
2. I don’t think there is supposed to be an exclusive distinction between ‘bestial’ and other states here, though there may be some distinctions within the category of the bestial. On the one hand, the Greek can be read as something like, “These states are bestial, and so are those which come about because of diseases…etc.,” so that we’re just adding to the list. On the other hand, though the initial examples are certainly more extreme and more akin to the behavior of wild animals, what all of the examples share is that they are beyond the limits of vice, presumably because they are in some way or other beyond the limits of the voluntary. It isn’t that we can’t overcome them, be overcome by them, or embrace them, but that the flaw is not fundamentally an expression or product of voluntariness. That much is admittedly somewhat speculative, since Aristotle doesn’t come out and say it, but it’s pretty apparent, I think, and not at all my original contribution. I think the important distinctions within the class of the bestial are between (a) traits that people have by nature (in the born-with-it-and-can’t-help-it sense, not the teleological sense), some of which are (a.i) isolated deformities in this or that person, others of which are (a.ii) inherited among people who live in climatically inhospitable conditions — some of these distant “barbarians,” as Aristotle thinks; (b) diseases — traits that are acquired in much the way that a cold is, and which tend to be temporary; ‘madness’ is only one example of this; and (c) habits — traits formed by habituation but evidently fixed in our behavior in a way that sets them beyond the voluntary in a way that the habituated dispositions of the virtues and the vices aren’t — it might be worth keeping in mind that virtues and vices are “prohairetic states,” states that make an impact on our choices; presumably the habit of biting one’s nails is not connected with voluntariness and choice in the right way. 
I’m not sure what else we need to say in order to maintain these distinctions, or whether Aristotle is entitled to them. But there is a fine treatment of them in Howard Curzer’s Aristotle and the Virtues, which deals with many of the parts of the Ethics that scholars tend to ignore. He explicitly connects them to our thoughts about mental disorders; he argues that genuine alcoholism, for example, should be thought of as a kind of theriōdes, and that the problem drinking that many people find themselves with is more akin to a form of intemperance. (Aristotle does not use the word trauma here, but he does describe some of the habits in question as the result of subjection to hubris — he is thinking of male homosexual desire there, but he seems to intend the example to be generalizable).
3. I think Aristotle — and more so, even a fairly “fundamentalist” contemporary Aristotelian — can have it both ways here. In one sense, Aristotle is precisely not being “hard” rather than “soft,” because the bestial conditions are beyond the scope of responsibility. It’s not that we can’t be responsible for anything that we do in connection with some such condition that we have, but that we can’t be justly blamed for the condition and its unavoidable consequences. So if I’m a bestial coward, then I’m not to be condemned in the same way as a vicious coward is; I am, instead, properly the object of sympathetic understanding or pity (NE III.1 1109b31-32). So the impulse to avoid stigma is one that Aristotle can recognize as tracking an important truth. What he wouldn’t do is refuse to acknowledge that these conditions are in any way bad, or attempt to avoid stigma by insisting that these are just different ways of being and not disorders. So perhaps we can say that he can be soft, but he can’t melt. What he can certainly do is emphasize the praiseworthiness of people who act admirably in their efforts to live with such disorders. Here we would do well to apply some features of his political theory to ethics: just as there can be good and admirable features of constitutions that fail to achieve the best possible, especially when the cause of that failure lies outside the control of citizens or legislators, so too there can be good and admirable features of people who fail to achieve the best possible, especially when the cause of their failure lies outside their control. Curzer is good on these points, too.
I’m not so sure about psychiatry, but I certainly think that the study of Aristotle — and ancient ethics more generally — benefits from interaction with psychology. I’ve toyed with the idea of developing a freshman seminar style course that would study Aristotle and the Roman Stoics alongside contemporary work in positive psychology and cognitive behavioral therapy, but I haven’t been able to convince anybody yet that I’m qualified to teach such a thing. I got the idea in part from reading Jules Evans’ Philosophy for Life and Other Dangerous Situations, which is not bad so far as self-help books go.
Awesomely informative, thanks. On the envisioned freshman seminar, personally, I would dispense with positive psychology and CBT and read the older, early 20th century classics of psychodynamic theory–Sigmund and Anna Freud, Jung, Adler, etc. Freud’s The Ego and the Id is a must-read. To make CBT intelligible, it seems to me that you have to cover the “B” element by taking a detour through Skinner. You may want to do that–Beyond Freedom and Dignity or Walden II work as undergraduate texts. But if I were doing it, I’d heap on the Freud. Jonathan Lear’s work is a helpful lead-in.
Having slept on it, and on second thought, I now find part of what you say in response to (2) and (3) a bit puzzling.
On (2): I’m “disabled” here by the fact that I’m at home and my Loeb is at the office. But in Irwin’s translation, NE VII.5 1148b25-30 goes like this:
I’ve deleted Irwin’s bracketed interpolations, added two of my own, and added the letters to mark passages.
The problem I have with your interpretation is that I don’t see that the A-examples are more extreme than the B-examples. Both are more extreme than the C-examples–unless by “more extreme” you mean that the A-examples are heritable tendencies in whole “races” of people (your a.ii), whereas the B-examples are merely isolated deformities in this or that disordered individual (your a.i) that don’t speak to the disordered tendencies of their race/ethnicity. In that sense “more extreme” would be “more of a deviation from what is kata physin,” in which case I see what you mean. A whole “race” of disordered people is a more extreme deviation from nature than a handful of deviations among normal people even if the handful are as deviant as the average members (or even most extreme members) of the deviant “race.”
I don’t have a text to prove it, but I have to think that Aristotle has a conception of a heritable trait–a madness–passed on through a deviant family within a normal race, as in the Greek tragedies. But I guess the real lesson here is that Aristotle is not all that interested in this issue, hence not all that interested in fine-grained distinctions between etiologies of bestiality. The focus of NE VII is akrasia, not madness. And NE is a manual for politicians, not a treatise in the way we think of them, much less the ancient Greek DSM.
On (3): I take your point, of course, but I still think it would be problematic to describe phobias as analogues to cowardice, e.g., to describe what we now call “anxiety disorders” as “cowardice disorders” or even “analogical cowardice disorders,” or to have a diagnostic category of “phobic cowardice.” That would be like conceiving of depression by analogy with laziness–“lazy melancholic disorder.”
By the way, not even DSM would dispute that all of the above are disorders. You have to travel well out of the mainstream to encounter people who regard a disorder as “just another way of being” (though once you do travel, you will encounter them). If anything, we’ve gone to the extreme of regarding every eccentricity as a “disorder.” But I think Aristotle is very vague on, and contemporary psychiatry is very dogmatic about, moral responsibility for having a disorder. Aristotle just doesn’t bother (very much) with working out how the voluntary applies to deviant cases, e.g., how much responsibility the victim of hubris has for the structure of his (is it ever her?) personality. Contemporary psychiatry/clinical psychology seems to me to operate (for the most part) on straightforwardly deterministic assumptions. You can read a whole textbook of abnormal psych and never encounter a single reference to free will or moral responsibility. There may be tacit acceptance of the idea that change is up to the agent, but there’s virtually no discussion of the possibility that the etiology of the disorder came about through the agency of the agent.
I take it that the initial examples are more extreme because they are settled dispositions rather than temporary states. It’s not that the acts they lead to are further from the mean or something, but that settled brutishness is a more extreme sort of disorder than “attacks of disease” and “fits of madness.” But I don’t think Aristotle puts any emphasis on the notion that the ones are more extreme than the others.
I think what Aristotle says about brutish cowardice is, in effect, that in a way it is cowardice and in a way it isn’t. The linguistic intuitions you point to support the sense in which it isn’t, but others would support the way in which it is. Suppose I have a phobic disorder and am uncontrollably terrified of snakes. You awake one morning to find that a snake has infiltrated your apartment. You initially think about calling me for help catching it or luring it outside (suppose I’m your neighbor, and you know I’m home). You might then think, “Ah, but Dave is petrified of snakes, and there’s no way he’ll be able to muster the courage to help me out. I’d better call animal control.” I think it’s a contingent fact of our contemporary usage that we find it inappropriate to describe me as a “coward” in this situation; we have no problem describing me as unable to muster the courage, and we’d rightly apply most of the same descriptions to me and someone who we would happily describe as a coward. To the extent that our terms unavoidably connote blameworthiness, then we’re justified in not describing people as “brutish cowards” or “cowards by analogy” or what not. And I think Aristotle can accommodate that perfectly well. But I can’t see anything wrong with saying, in an explanatory context, “Dave’s snake phobia is cowardice in a way and in a way it isn’t; he does tend to make and act on the same assessments of snake-related danger that a coward does, but it’s totally outside of his control; he didn’t get that way by choice, and there’s not much he can do now to keep himself from feeling and acting that way.”
I agree that Aristotle doesn’t say enough about voluntariness and responsibility with regard to brutishness, and that one reason for that is that he’s mainly interested in distinguishing it from akrasia and giving us an account of that. But I do think that he says enough for us to conclude that he regards brutishness as lying outside the scope of voluntary action and responsibility.
I see your point on issue (2), but I can’t accept what you’re saying on (3). But I guess this reproduces our ongoing disagreement about “the moral” in a new guise.
The problem with any analogical usage of moral terms is that it’s not clear what part of the analogy is being brought along for the ride. In the case of trait ascriptions like courage/cowardice, it seems to me that there’s a fundamental difference between coming to have the trait because you the agent brought it about that you acquired it, and coming to have the trait because some phenomenon beyond your control saddled you with it. It’s often difficult to draw the distinction, or know how it applies, but that’s why analogical trait ascriptions are so misleading. They gloss over the difficulty.
In the paradigm case, the ascription to someone of a vice is an ascription of culpability and presupposes moral responsibility. In fact, I think it presupposes some account of human mental life such that you can spell out what the agent has to do to make himself culpable. If we don’t know the etiology, we ex hypothesi can’t spell that out.
The problem with analogical usage is that it glosses over the latter incapacity. On the one hand, we can’t spell out the etiology of the trait; on the other hand, we’re using language that conveys the impression that we can. Even if you (very artificially) added a proviso to the contrary (“He’s a coward, not that I’m blaming him for it”), the proviso just induces cognitive dissonance, like a judge’s telling a jury to disregard what they’ve just heard after they’ve just heard it. Moral predicates are paradigmatically used for contexts of praise and blame. If you’re operating in a context where praise and blame isn’t appropriate, I think you need a new vocabulary altogether.
If we don’t understand how phobias work (i.e., how they arise), it’s inappropriate either to say that a person who has one is a phobic coward or that they can’t muster up the courage to face the phobia-inducing phenomenon. Those are really just alternative ways of saying the same thing. It’s a contingent fact about our usage that we tend not to pay attention to whether we’re talking about what’s in the agent’s control and what isn’t.
What’s wrong with the quotation at the end of your second paragraph is two-fold. For one thing, I think it would just be clearer and more economical to say: Dave’s snake phobia is a clinical disorder (if it is). That makes it transparent that it’s not in his control. We can add, if we like, that a disorder resembles the analogous moral phenomenon, but the fundamental thing to be clear on is that it isn’t a moral phenomenon. Second, it’s not explanatory. If we know that Dave’s phobia is definitely out of his control, we should be able to say something about what makes it so–but then, that’s what we should be talking about, in a vocabulary appropriate to what it is. We’re then faced with the puzzle of how something totally out of your control can give you attitudes that are so much like (moral) cowardice.
We may discover that people are in some sense responsible for their phobias (or some people are responsible for some phobias, or some aspects of some phobias, etc. etc) But in such cases, we don’t need analogical usage like “phobic cowardice.” In those cases, phobias would just straightforwardly be a form of cowardice, full stop.
So my bottom line here is that Aristotle’s use of analogy just muddies the waters. Take a case of phobia. Suppose there is zero culpability involved, just pure clinical pathology. In that case, we need medical terminology, and we should avoid moral terminology altogether. Suppose that there is some culpability involved, but it’s very mitigated or attenuated, and very different from the sort of culpability we encounter in the paradigm case (very different from a pure lapse of integrity). In that case, we have moral cowardice of a non-paradigmatic form, but it’s still moral cowardice. The concept of “phobic cowardice” applied indifferently to both sorts of case seems to me a confusing hybrid of moral and medical thinking.
I’m trying to reply to your March 10, 11:36 AM post, but I’m not sure it will show up in the right place. If not, adjust accordingly.
I don’t want to defend Aristotle’s usage as a model for how we should label these things in our own context, in part for just the reasons you emphasize; because our moral terms typically do imply responsibility, it’s more misleading than clarifying to use these terms. I don’t think Aristotle himself should be faulted for his way of putting the point, though, in part because when he was writing there simply was no recognized distinction of the requisite moral/medical type when it came to psychology, and unless I’m wildly misreading him, he’s in fact trying to draw a distinction very much like the one we make. But I do think it would be a mistake even for us to insist that there is no analogy between these traits. Phobias are dispositions to assess certain features of the world as terrifying when they really aren’t dangerous (or not that dangerous) and to engage in avoidance behavior that is not warranted by the real danger. If there’s not a real similarity there with cowardice, then I don’t know what will satisfy your criteria for similarity. Phobias also legitimately enter into our assessment of people in ways that vices do: we rightly judge that someone with a phobia will not be able to perform well in a situation that triggers that phobia, just as someone with the vice of cowardice will not be able to do so, and we rightly judge that someone with a phobia is worse off than someone without it, just as someone with the vice of cowardice is worse off than someone without it. Of course, the same is true of someone born without legs; but phobias and the like further resemble vices because they affect our action not simply qua bodily movement but qua expressive of choice; phobias either distort our choices, prevent us from acting in accordance with them, or present considerable challenges to acting in accordance with them. If not for these similarities, we wouldn’t face the problem of distinguishing the blameworthy from the non-blameworthy. 
So I can agree with you that the importance of that distinction justifies our refusing to label the phobia as a kind of cowardice, but the considerations that lead Aristotle to label it that way seem to me to be roughly right.
Another way to think about the “brutish coward” label is that “brutish” in that expression might function as an alienans adjective; just like glass diamonds aren’t really diamonds and plastic flowers aren’t really flowers, brutish cowardice isn’t really cowardice; but it objectively resembles cowardice in ways that lead people to treat it that way.
On a quick re-read, I’m less satisfied with Curzer’s account than I thought I was. I think he’s basically right, but as I see it he puts too much emphasis on the extreme character of the ‘bestial’ states — which he translates, perhaps more happily, as ‘brutishness’ — whether in the degree of the desire or in the unusual objects of desire. To my mind, neither of these will explain why these states don’t count as vices. But Curzer comes around eventually to the view that part of what distinguishes brutishness is not only that it is incorrigible (Curzer thinks vice is incorrigible too), but that brutish states are not formed in the way that virtues and vices are, even when they are formed through habituation. Here is his most general summary of brutishness:
Curzer is misleading when he says that people become vicious by choice, since if he means “choice” to stand in for Aristotle’s prohairesis, then he’s got Aristotle wrong — virtue and vice are formed by voluntary actions, but not all voluntary actions are “chosen” in Aristotle’s special sense — selected by rational deliberation on the basis of desires informed by beliefs about what is good. Choice in this sense is necessary for virtue and vice, but voluntary action is what shapes our character; this is fairly clear from NE III.5. Given this qualification, it’s also misleading to say that a disposition counts as brutish if it has been formed by “socialization into a corrupt society,” since socialization comes in a variety of forms; one of those forms is the inculcation of practices and beliefs, and when these are importantly mistaken, so will be the character of people brought up to live that way. The kind of habituation that Aristotle has in mind when he discusses brutishness is of a different sort — as, for example, the kinds of dispositions that might result from being subjected to hubris from childhood.
I’m still not sure that leaves us with a clean account of the difference between vices and brutishness, but it’s a start, and Curzer is at least right to think that the concept of brutishness overlaps to a great extent with our concept of a personality disorder or mental illness.
I basically agree with you on this one, but I see the rationale for Curzer’s writing as he does. Correct, “choice” cannot stand in for prohairesis for the reasons you give. But if character is formed by voluntary actions, we need a word for the mental act involved in voluntary action, and especially (though not exclusively) on metaphysical-libertarian assumptions, “choice” is the right English word for the job. I think Irwin uses “decision” for prohairesis, in which case “choice” might be the term to use for what is merely voluntary and metaphysically up to the agent, where that includes the formation of vicious character. In other words, if vicious people are responsible for becoming vicious, we need to isolate the part of the etiology of their becoming vicious that’s up to them, and “choice” does the right work. I’ll use “choice” in that way in what follows.
That said, for the reasons you give, Curzer’s list of brutishness-causing etiologies is misleading. Socialization can involve, and probably does involve, choice, whether in a corrupt or a non-corrupt society. I’d go so far as to say that it involves choice in a totalitarian dictatorship. Nor is it obvious that all trauma invariably produces brutishness sans choice: the relationship between trauma and character formation is not as simple as, say, the relation between violent contact with a bluntly applied force and tissue damage. Even being subjected to hubris from childhood doesn’t produce character traits all by itself. Choice could be involved. What this suggests to me is not only that Curzer’s account is imprecise but that Aristotle’s category of the “brutish” is problematically coarse-grained or vague. It’s not clear what part of the state is produced by the agent (if any) and what part is not, or how. (Not that I’m blaming Aristotle! He’s just “guilty” of non-moral imprecision. 🙂 ).
Quasi-technical point: Here I have to take back something I said (3/7 at 1:47 pm). Curzer is misusing the term “personality disorder” (as I did in the earlier comment). It’s a tempting mistake to have made, but the term “personality disorder” has a very specific definition in abnormal psychology, and now that I reflect on it, I think it’s a mistake to read the concept into NE VII (or NE at all). What Aristotle has done is to pick out pathological behaviors central to (what we would later call) a “personality disorder.” That’s not the same as anticipating the concept of a “personality disorder,” as Curzer says, and I said.
There’s a mare’s nest of problems here that all advise against importing the concept of “personality disorders” too easily into Aristotle scholarship. For one thing, the behaviors under discussion do not necessarily arise within the context of a personality disorder (exhibiting them is neither necessary nor sufficient for having a personality disorder). For another, the contemporary definition of “personality disorder” is itself problematic:* it’s (really) unclear whether personality disorders involve culpability or not, and the standard definition is very imprecise and typically interpreted in a relativist fashion. The complications here really need to be worked out in a series of papers or a book devoted to the task. I can’t do justice to them here beyond what I’ve just said.
*The definition: A personality disorder is “an enduring pattern of inner experience and behavior that deviates markedly from the expectations of the individual’s culture, is pervasive and inflexible, has the onset in adolescence or early adulthood, is stable over time, and leads to distress and impairment” (DSM-IV-TR, p. 685).