Student Papers Written via Translation Software: A Conundrum?

I’m curious what PoT readers think of this.

I just got out of a faculty meeting in which one of the faculty from the Dept of Communications gave a short presentation on a problem he said he’s recently encountered. The problem is as follows. Some of our students come from non-English-speaking households. Obviously, they can speak English in a functional way, but they feel more comfortable writing in their native languages, especially if they have to do so for credit. When given writing assignments for class, some students in this situation will write the paper in their native language (say, Spanish), translate the paper into English via an online translation app (e.g., Google Translate), and then submit the translated version of the paper for credit without disclosing what they’ve done.

For present purposes, let’s set aside the possibility that the original non-English paper was plagiarized, and just assume that the student wrote the original paper herself. Question: Is the practice of submitting a translated version of one’s work problematic? If so, why?

I think so. Here are three or four reasons why, just off the top of my head:

(1) First, though we don’t typically make the point explicit, the presumption is that special cases aside, English is the medium of instruction in American colleges and universities. It’s fundamental to the mission of a college or university that, whatever else we may be trying to do, we’re trying to develop and improve our students’ capacities for reading, writing, and speaking Standard English. Reflexive or habitual use of a translation app defeats that purpose, defeats our efforts, is incompatible with the mission of the institution (and with common sense), and is therefore unacceptable. Many of my colleagues seemed uncomfortable with putting things this way, but I think it’s pedagogical bedrock.

(2) Second, translations via translation devices are a crap shoot–a fact that has two separate and problematic implications.

(a) For one thing, they’re a crap shoot in the sense of being very imperfect devices for their intended purpose. The program I know best is Google Translate. Google Translate can be a very useful app if you know what you’re doing with it–typically, if you know a bit about both the input and the output languages you’re dealing with. In fact, there are times when I’m left breathless by what it can do: feed it an idiomatic bit of English, and it spits back an idiomatic bit of Spanish or Urdu. But the fact remains that many translations via Google Translate sound like dreck, even when the translations involve relatively proximate languages, like Spanish and English. And once you move from proximate to remote languages, mere dreck can devolve into outright gibberish (see below). The biggest problem is that the more you need a translation program, the less likely it is that you’ll know when your translation has crossed the Gibberish Rubicon. But the less you need a translation program, the less you should be relying on it in a paper for credit in a college course.

Students who use translation programs tend not to notice these facts. They also tend to be rather insensitive to the strain caused by reading their “works in translation.”  Consider a rather extreme case, admittedly involving problems that go beyond the one my colleague was raising, but vividly illustrating the strain in question.

When I taught political philosophy at Al Quds University in Palestine this past summer, some of my students plagiarized papers on, say, Aristotle’s Politics or Machiavelli’s Prince by going online, finding English language articles on the topic of a given paper, copying and pasting the articles into a Word file, pasting the Word file into Google Translate for an Arabic translation, and then submitting the gruesome results to my translator, whose job it was to…translate it back into English for me. The psychological effects of having to grade this stuff quickly began to rival the effects of life under the Israeli military occupation. My translator, frankly, lost it. “I don’t understand a fucking word in this paper! Machiavelli had a pet lion? The lion was friends with a fox? What the fuck are they talking about, The Prince–or The Zookeeper?”

مرحبا بكم في الغابة! Benvenuto nella giungla! Welcome to the Jungle! 

We had to keep at this for a couple of stressful hours before we finally detected the covert role of Google Translate in l’affaire–sort of like The Da Vinci Code, but with Machiavelli, in Palestine, a day before the grades were due. (Il Principe, Chapter 8: “Concerning those who have tried to obtain a passing grade in the class by villainy.”) I know this thought-experiment has a “what is it like to be a bat” quality to it, but try to imagine what it’s like to spend hours trying to “read” (=hear) really bad political philosophy papers, submitted in Arabic for translation into English, based on plagiarized material posted online in English, in the hands of students reading Arabic translations of English translations of, say, the Greek of Aristotle’s Politics or the Italian of Machiavelli’s Prince.

Sound like fun? 

Suono come divertimento?  

 مذاق کی طرح لگ رہا ہے؟

(b) Computerized translations are also a crap shoot in a subtly different sense. A student who uses (say) Google Translate because he lacks confidence in, or mastery of, English is in effect gambling with his own work when he uses it. Precisely because he doesn’t know what the translating device is doing, he can’t understand the relationship between what he puts into the device and what comes out. Once he clicks “translate,” what happens next may as well be magic. Asked to comment on “his own” work–e.g., “why did you put things that way?”–he’ll be in the paradoxical situation of having nothing useful to say. The implication, I think, is that such a person is not in the relevant sense the author of the work he’s submitting–which makes what he’s doing unacceptably close to plagiarism.

Here’s an admittedly extreme example. It doesn’t involve a translation device, but it gets the point across: When I taught at John Jay College in New York, I remember once assigning a paper on ethics (I forget the topic) to students in an Intro Philosophy class. One Spanish-speaking student visited me during office hours, describing her difficulties in getting started with the paper. Appreciating her difficulty (or so I thought), I naively referred her to the College’s Writing Center for help.

A few days later, I got a completed version of the paper: clear, cogent, on-topic, and in perfectly grammatical English. Unfortunately, the paper had been written in its entirety by an English-speaking tutor at the Writing Center, who affixed a long letter to the paper, explaining that he had indeed written the paper “on behalf of” the student, but in doing so, had “based” the paper “in its entirety” on the student’s “ideas.” As he rather loftily put it (invoking his expertise as a Writing Tutor del pueblo), a Spanish-speaking student could hardly be expected to write a philosophy paper in English–implying that it was unreasonable of me to have made the demand of her. I don’t even remember how I ended up handling the matter; I could hardly fail the student for following the lead of the tutor, and could hardly file an Honor Code violation against an employee of the College. I think I probably ended up giving her an A-.

Incidentally, the preceding facts help differentiate the use of something like Google Translate from, say, the use of spell check. A spell check function can flag apparently misspelled words, but can’t unilaterally correct them. Ultimately, the user of the spell checker has to know the right spelling of a given word to know what to do with the wrong one once it’s flagged. The program can give suggestions, but the user has to know what to do with them. In any case, the scope of a spell check function is much narrower than that of a translation device: it extends to the small percentage of words that are misspelled, not to every word in the file.

(3) This third reason is vaguer, and perhaps reducible to the preceding three, but it seems to me that there is simply something dishonest about failing to disclose one’s reliance on an electronic device that plays so large a role in generating the work one submits.

Can anyone think of other reasons? Am I being too one-sided here? Comments invited in the combox. In English, por favor.

7 thoughts on “Student Papers Written via Translation Software: A Conundrum?”

  1. I am sensitive to the problems that non-native speakers of English face in writing papers in English, for at least two reasons. First, I have had several very intelligent students who were not native speakers of English who could simply not produce quality papers in English. Second, despite my ability to read four languages other than English — and two of them ancient, at that — I could not produce a quality paper in any of those languages, at least not without an extraordinary amount of time and effort (and probably not even then). I am therefore not inclined to underestimate the difficulties involved; in fact, I think students who are not native English speakers but can nonetheless produce good papers in English are simply amazing, if for no other reason than that I know that I wouldn’t be able to do the equivalent without years and years of trying and failing to do so.

Nonetheless, two considerations seem overwhelming to me. First, I have had students who were not native speakers of English who write at least as well as their peers, and often better. Admittedly, in almost all of these cases a mixture of native talent and fortunate educational circumstances in youth accounts for why these students can do this. Still, they can do it. More importantly, I have had a number of these students. Hence the ability is not some sort of extremely unusual miracle of nature, but something that intelligent students given the right set of opportunities and instruction can achieve. Second, anyone who is a student at an Anglophone institution can be reasonably expected to know English well enough to read the assignments adequately, understand lectures and discussion adequately, and contribute to discussion adequately, whether in class or one-on-one in office hours. It therefore seems hardly unreasonable to expect the same students to produce tolerably good English papers as well. To maintain that anyone who cannot do so simply has no business being a student in an Anglophone college or university is not to downplay the achievement of gaining proficiency in a second language; I know full well that I could not succeed as a student in a German, French, Latin, or Ancient Greek college or university, and I am humbled when I meet with non-native English speakers who can do this in English. For all that, the fact remains that if I were to insist that I should be allowed to write in English or to have Google Translate render my English into French, German, Latin, or Greek, I would be insisting on a policy that would be unfair to my fellow native English speakers who can speak and write fluent German, French, Latin, or Ancient Greek; I would in fact be insisting that despite my inability, I should be rewarded equally with people of ability. I find it hard to see how this makes sense.

    All that said, I apply very different standards to native and non-native speakers of English when I grade papers. I do this in part because it seems to me that it is in fact a greater achievement for a non-native speaker of English to produce something that is clear, precise, and well expressed than for a native speaker to do the same. I offer what I have been led to believe is an unusually detailed level of comments and suggestions on student writing, because I think one of my main tasks is to help students become better writers. But writing in another language and using Google Translate does not make anyone become a better writer. In fact, it directly prevents them from becoming better writers. Applying somewhat looser standards for non-native speakers is, I think, entirely fair; but someone who writes a paper in his native language and uses Google Translate has simply not written an English paper. I would consider this cheating and would be strongly inclined to give a 0 to any paper that I could prove was produced in this way.


• I’d wanted to respond to the practical suggestion in the last paragraph of your comment, but haven’t had the chance until now. I draw the distinction somewhat differently–not quite native vs. non-native speaker of English, but those who intend to live in Anglophone countries vs. those who don’t. The latter distinction turns out, in practice, to be one between students who intend to live in the United States (native English speakers or not) and exchange students from foreign countries where English is not an official language (e.g., Japan, Korea, Italy, Poland in my case).

The distinction between native and non-native speakers of English is often very difficult to apply in practice. Suppose that you have 100 students, and all that you know about their backgrounds comes from your roster (name, address, email, student ID #). In that case, your best guess about whether a given student is a native English speaker will come from (a) their accent, and (b) other peripheral facts about them that lead you to believe that they’re not native English speakers. But this is very imprecise, and potentially problematic. Once you try to apply it to a large enough population of students, I think you’ll find that some non-native speakers are already entirely fluent; meanwhile, some students lacking fluency either are native speakers, or, while not native speakers, lack fluency in English for reasons unrelated to that fact.

      For instance, take a student of Dominican ancestry (with a Dominican name) who speaks English with an accent and has trouble writing a paper. It will often be unclear whether she’s having trouble because she’s a native Spanish speaker, or simply because she has trouble writing papers. I discovered this in a sort of unforgettable way one semester when I had a Spanish-speaking student who was having a lot of trouble getting through Mill’s On Liberty. So I came up with the bright idea of assigning On Liberty in Spanish. Turns out he had exactly the same trouble understanding Mill in Spanish as he had in English. The problem was one of understanding Mill, not understanding Mill-in-English. (I hadn’t quite figured out whether he’d write the paper in English or in Spanish, but his trouble with Mill-in-Spanish made the problem moot, so I just dropped it and went back to English.)

      Anyway, all of this puts one in an odd quandary. Suppose you have a native-born African American student who’s having trouble reading or writing your assignments. Should you treat African American Vernacular English as a language in its own right, and therefore treat speakers of AAVE as non-native speakers? Many people would say “yes,” but I disagree. Once you do, there are no limits on the “non-native speakers” you’ll find on your rosters. African-American Vernacular English is just one species of a whole genus of vernacular forms of English. (I don’t mean to suggest that such a student can use Google Translate to solve her problem. I just mean that the instructor faces a quandary in knowing how to deal with the situation.)

      Now suppose you have a native-born Dominican student who comes from a household (and/or neighborhood) where Spanish (or Spanglish) is the dominant language, but who speaks English “when necessary,” and has been doing so since childhood. Is such a student a native English speaker or a native Spanish speaker? It’s unclear.

      Something similar applies if you teach Native Americans on a reservation (actually this case is a hybrid of the preceding two).

      I bypass all of these difficulties by (trying to) treat all Americans, ceteris paribus, as being linguistically on par. If you’re an American, and you intend to live and work in the US, my presumption is that you have to learn how to read, write, and speak Standard Written English (or some approximation thereof). That’s what I expect you to learn, and that’s the standard I employ in grading you.

Since a grade is not a measure of a person’s moral worth, it doesn’t bother me that the distribution of grades so conceived reflects proficiency, effort, and luck all at once. A grade is a measure of a student’s proficiency. Proficiency is often (partly) a matter of luck (it’s path-dependent on luck). So, many A students will get A’s because they’re proficient, and because their capacity for proficiency is partly a matter of luck. Mutatis mutandis, the same will be true of the F students, and of everyone in-between. I don’t try to discount for luck in either case.

      This never goes over well, but if one repeatedly stresses that grades measure non-moral proficiency and not moral worth, there should be no reason for complaint. The complaints arise from the fact that most students are encouraged to think that an A in ethics (or anything else) means that they are morally virtuous human beings, whereas an F means that they are morally bad. Actually, both claims are wrong. An A in ethics could mean that a student put a lot of effort into the class, but it could also just mean that a given student has a high IQ and can do A-level work without much effort. Likewise, mutatis mutandis, with F’s. The etiology is not particularly relevant to the grade.

I add the ceteris paribus proviso above because I don’t think grades are (or should be) entirely luck-insensitive. Legally, I have to make accommodations for students with disabilities under Section 504 of the Rehabilitation Act and the Americans with Disabilities Act. And certain excuses, mitigating circumstances, etc. can legitimately affect a grade. (Of course, one huge circumstance is where I’m teaching: I don’t have the same expectations of Felician students as I had of Princeton students.) But given an institution, and given certain exceptions, I tailor grades to expressed proficiency, not effort or luck.

      The students who get a free pass from me are students who don’t intend to remain in the United States or in any other English-speaking country. Felician’s exchange students tend to be from Japan, South Korea, Italy, and Poland. The Europeans tend not to have trouble with English, but the East Asians do. In a certain way, such students get away with pedagogical murder in my classes: I cut them enormous slack. That’s partly because their English is not in the ballpark of adequate, but also because one could hardly expect it to be: these students learned English in ESL classes, and don’t really have the opportunity to use it back home. But the relevant point is that such students have no significant use for English back home. So I see little point in demanding proficiency in a skill that is ultimately a kind of frill for them. Anyway, they’re going back home in a few months (so that there’s no time to bring their English skills up to par anyway). So it’s unreasonable to hold them to the same standard as the American students, and there’s no point in doing so anyway.

There are a few outlying cases as well, e.g., African students (from, say, East Africa) who speak East African versions of English (I don’t know the correct linguistic term for these), and are going back to East Africa, to speak English plus this, that, and another language. I find these cases hard to deal with, partly because I don’t know anything about the circumstances of life in East Africa. But they’re rare.

      I make heavy weather of this not just to highlight the pedagogical point, but for two further reasons. One is that I don’t think there’s any entirely adequate way of dealing with the problem at issue here–the problem being, how does one fairly assign grades to students, given that their command of the language is shaped by such disparate causal factors? Each approach involves troublesome trade-offs.

      The other is that I think this is one context that illustrates the legitimacy and utility of the concept of “the moral.” I don’t mean to be denying that students can, in principle, be praised or blamed for the effort they put (or don’t put) into getting good grades. But that praise or blame is (as I see it) ancillary to the task of the instructor qua grader. In other words, grading is not primarily about praising or blaming (or any other sort of specifically moral appraisal). It’s about appraising and reporting on a person’s essentially non-moral mastery of some episteme or techne. There is no intrinsic connection between being an A student and being a morally good person, and none, either, between being an F student and being a morally bad person. The task of passing moral judgment on a student’s academic efforts requires knowledge of a sort that I doubt any instructor could have, and has almost nothing to do with the grades anyone gets (or ideally ought to get).

      The interesting thing about the faculty meeting that gave rise to the discussion at hand was that it polarized the moralizers as against the proficiency-mongers about grades. The moralizers wanted to insist that a grade is a kind of moral gold star intended to commend the student for her efforts (less often to shame the “bad” student for the failure of her efforts, but that’s implicit in the view, whether the moralizers like it or not). The proficiency-mongers denied just that. On our view, a grade doesn’t commend; it reports an appraisal. I think the theoretical difference makes a significant pedagogical difference in practice.

      I don’t think there’s much of a literature on this, but there should be.


      • You’re right that there’s an important distinction between people who are non-native speakers and people who have actually been living in a non-English-speaking country until attending college. I don’t think it makes much difference whether they intend to stay or not — after all, they might change their minds either way. What matters to me is whether they have had the opportunity and instruction necessary to learn English well enough to be expected to write it well. But I’d apply a similar sort of consideration to grading native speakers with widely divergent backgrounds, too; I think it is abundantly reasonable for me to have higher expectations of my students at Dartmouth and Rice than I would if I were teaching community college, for example. What happens if we’re in a very mixed environment, as I was when I taught at the University of Texas? Well, then I sacrifice sensitivity to individuals to preserve the fairness of grading everyone by the same standard. But I might still be willing to make some exceptions in cases where it’s clear that the particular student has been educationally disadvantaged in a way that puts him or her in roughly the position of a non-native speaker who has only been living in an Anglophone country for a few years. I’ve never had a student like that, and maybe I’d change my mind if faced with making the decision. The only similar cases I’ve had have been with students who suffer from genuine cognitive disabilities.

        You may still disagree, but it might be worth noting that what I have in mind here is not grading a paper on a wholly different standard, and I’m not thinking about cases in which the English is extremely poor. I’m thinking of giving less weight to clarity and elegance of expression, not of giving it no weight at all and not of lowering my standards on grammatical correctness, word choice, etc. And I’m not thinking at all about the many other things that I assess when I grade a paper, such as the quality of its argument, its organization, etc. It might also matter that I usually have extraordinarily high standards for papers; I can’t remember anymore how many students have told me that the grade they received from me was the lowest they’d ever received for a paper.

More generally, though, I’m not quite on board with your notion of the grade as a simple assessment. I agree that morality has little or nothing to do with it, but if I were to think of grading simply in terms of a student’s technical mastery of the subject, then I would have to give low grades to virtually all the students I teach, because none of them are anything close to masters at writing philosophy, literary criticism, or history. It is, of course, absurd to expect students, whether freshmen or seniors, to achieve that level of mastery. But that’s why I don’t think we can think of grading simply as an assessment of proficiency. The assessment needs to be sensitive to the students’ level of knowledge and skill. I think of grades instead as assessments of achievement, and the assessment has to be relative to the level of achievement that I can reasonably expect from the students. I don’t think it’s reasonable to expect my students who grew up in Korea and only moved to the U.S. two years ago to write English as well as it is reasonable for me to expect my students who are native speakers to write. It’s got nothing to do with assessing their characters and everything to do with refusing to subject people to standards that they cannot meet.


        • I’m not entirely sure we’re disagreeing, or disagreeing that much. I think the problem we’re running into here is that there are so many different variables involved that no one has a fool-proof way of optimizing on all of the relevant values without sacrifice of something important. That’s why I’m baffled by facile claims like this:

          Now, the general rule is that one should spend no more than 1 hour of prepping/grading outside of class per class. Many faculty do in fact spend more time, but this is bad time management.

          So don’t worry; even if we botch this, we can console ourselves with the knowledge that someone has cookie-cutter answers to all our questions.

          But let me back up a bit. Let me just try to list what seem to be the fundamental questions or issues.

          1. Does “ought implies can” apply to the classroom? If so, how?

          (a) Should the grading system be constructed so that literally everyone registered for the class can get an A?
          (b) Or…so that everyone registered for the class can pass?
          (c-d, e-f) Or…so that everyone of normal (or “ordinary”) abilities…can get an A? Or…pass?

          Part of “the grading system” is how one weights the factors involved in grading a particular assignment. (For simplicity’s sake, let’s put aside what these factors are.) Every written assignment has to meet a certain standard of intelligibility, grammatical correctness, etc. But

          (g-h) Does the standard vary if one comes to think that a particular student either literally cannot meet the standard, or cannot reasonably be expected to meet it, except at exorbitantly high cost? (Set aside what counts as a cost.)

          2. How much relativization (and/or what kind of relativization) is appropriate when it comes to grading? Generally, I guess we’re relativizing vis-a-vis populations based on samples, where we’re inferring something about the relevant capacities of the population based on some sample.

          (a) There’s relativization to school (e.g., elite vs. junior colleges vs. intermediate cases).
          (b) Relativization to course level at a school (100, 200, etc.)
          (c) Relativization to type of course at a school (general education vs. oriented to a major), though this tends to overlap with course level, or at least ought to.
          (d) Relativization to the existing skills and capacities of individuals in a given class.
          (e) Relativization to how well the instructor thinks he’s taught the class on this particular occasion. (One can’t hold students responsible for one’s own pedagogical lapses. If they do poorly because one has taught them poorly, one has to adjust for that. And believe me, that happens.)

          As a general matter, I think one thing that gives the impression of a general disagreement is that you say that you have extraordinarily high standards for papers. I don’t. If that’s the case, then that difference will be a confounding factor in any conversation we have, even if we agree on all the general principles involved. We might agree on the general principles, but systematically disagree about cases. It’d be a bit like a case where two guitarists get together to jam, each guitarist insisting that his guitar is “in tune” (with itself), but ignoring the fact that neither guitar is “in tune” with the other (or “in tune” haplos). It would sound like a mess. “But I thought you said you were in tune!” “I am in tune. Are you in tune?” “Of course I am. My A on the 6th string sounds just like my open A on the 5th…” “Mine too!” etc.

          Re (1): I think some version of “ought implies can” has to apply to grading. So I’m agreeing with your first comment against your second. You can’t expect people to satisfy standards they can’t meet. So I plump for (b): every class has to be structured so that every registered student can pass. It is no injustice that a given student lacks the literal capacities to get an A (or even a B, or…possibly even a C?). But it would be an injustice if you discovered that you’d pitched the material so high that a given student literally could not pass the class, no matter how hard or sincerely she tried. Prima facie, that indicates that something is wrong, either with the class or with one’s teaching.

          I don’t think a class of that nature could be justified by invoking proficiency or mastery (and that wasn’t what I meant when I invoked those terms). The relevant sort of proficiency or mastery is not proficiency or mastery of the subject-matter or discipline as such, but proficiency or mastery of the material presented in the class, where the selection of material for a given class presupposes some version of “ought implies can.” When I teach, say, ethics, the grade is not a report or assessment of students’ proficiency at or mastery of ethics tout court, but of ethics-as-taught-in-Khawaja’s-Phil-250, with all of the appropriate relativizations already built into that presentation of the class.

          That’s why I reject this:

          More generally, though, I’m not quite on board with your notion of the grade as a simple assessment. I agree that morality has little or nothing to do with it, but if I were to think of grading simply in terms of a student’s technical mastery of the subject, then I would have to give low grades to virtually all the students I teach, because none of them are anything close to masters at writing philosophy, literary criticism, or history. It is, of course, absurd to expect students, whether freshmen or seniors, to achieve that level of mastery. But that’s why I don’t think we can think of grading simply as an assessment of proficiency.

          I think the correct inference is not that we shouldn’t think of grading simply as an assessment of proficiency, but that “proficiency” is equivocal in the sense I’ve described. In an undergraduate setting, what it involves is not mastery of a subject per se, but mastery of material presented in a given course, where the material presented is explicitly intended to convey some aspect of the subject, relativized to a particular pedagogical context.

Moving to issue (2), I think (d) presents the hardest (and most relevant) set of issues. One problem is that it’s very difficult to gauge the existing skills and knowledge of a given student. Another: You may have too many students to know them very well. Third: Some students are hard to get to know. Fourth: Students have a vested interest in pretending to suffer from liabilities that may not exist, or in exaggerating the ones they have. Fifth: Considerations of political correctness deter or obviate certain relevant sorts of inquiries…

          You’re Pakistani, right? So how much English do you know, anyway? Oh, you’re Indian. Well, what’s the difference? Oh, sorry: I mean there’s a big difference! Yes, yes of course, I know Pakistanis learn English, but you have this accent, so I was just wondering…No, it’s not a bad accent! Look, all I’m really trying to say is that you can’t write for shit, and I’m trying to figure out if it’s your national origin or that you just can’t write…Work with me here, Muhammad.

          Etc. In the language-related cases, a lot of my difficulty is straightforwardly epistemic. I’m just not sure how to discover what needs to be known.

          But insofar as one can get the relevant information, I guess my principle is: ensure that anyone, whatever their capacities, can pass. After that, try to treat everyone, ceteris paribus, by the same standard of (appropriately relativized) competence/proficiency. I don’t think I can cash out the ceteris paribus clause in a combox. It would take a paper to do that–and probably make a good paper (if I had the time to write it). But I wouldn’t make this particular exception:

          I might still be willing to make some exceptions in cases where it’s clear that the particular student has been educationally disadvantaged in a way that puts him or her in roughly the position of a non-native speaker who has only been living in an Anglophone country for a few years.

          I think a student like that is a candidate for extra “coaching,” but not for an exception to the grading standards. In other words, the student may deserve something more to compensate for the educational disadvantage, but the compensation should not take the form of an exception to grading standards applied to everyone. It should take some other form, like individualized tutoring in office hours.

          It’s worth remembering that what makes something a “clear case” is itself a matter of luck. “Cases where it’s clear” are cases that you the instructor happened to discover (or had thrust on you). But the larger the population, the greater the likelihood of cases of exactly the same sort that you never discovered. Maybe a given student was just too shy to be discovered. Or maybe some other factor was at work. So if the motivation for lowering the standards is to compensate for luck, it actually ends up penalizing the timid student who didn’t get the break you gave this particular student. My view is: keep the grade the same, but make it known that you will work with anyone who is willing to be helped, being sensitive to the fact that the disadvantaged may need more help and not ask for it. Once you make that known, however, it becomes the responsibility of anyone needing help to get it, even if they have to overcome their natural shyness to ask for it. If they don’t, they may well be unlucky (perhaps they’re just saddled with a timid temperament), and your grade will reflect that bad luck, but less through your doing than if you made a grading exception.

          “Genuine cognitive difficulties” are almost a topic of their own. Suffice it to say that I’m uncomfortable with the way in which ADA and Section 504 cases are treated, especially for conditions like ADHD. (Dyslexia, by contrast, is much easier to diagnose.) It just seems too easy for anyone to claim to have just about any condition and get special accommodations for it. I’m sufficiently suspicious of the moral integrity of psychiatrists to doubt the findings of a supposedly objective “clinician.” Which is not to say that such students shouldn’t be accommodated. It’s just to say that a lot goes on in dealing with such cases that should give one pause.

          OK, out of time.


          • Well, that certainly shows how complicated the questions are, and I don’t think you even managed to get into all the complications! I’m especially puzzled about how we should understand the “can,” and your points here about genuine cognitive difficulties only begin to scratch the surface. On that particular topic, I’m troubled by my own experience of having students to whom I was officially required to give extra time and accommodation despite the apparent lack of any disabilities at all, but I’m also troubled by my experience of not being able to give accommodations to students who seemed pretty clearly to have something going on but weren’t, for whatever reason, able to secure the requisite paperwork from Student Accessibility Services or the equivalent. Maybe my judgment was mistaken in both cases — I’m certainly not qualified to diagnose these things — but it’s hard not to think that there are potential problems of unfairness in both directions.

            I can accept your clarified account of grading as assessment of proficiency. I of course suspected that you had to have something like that in mind, and though I’m still inclined to think that the considerations you’ve noted show that proficiency as such isn’t the fundamental thing, I don’t think we have any substantive disagreements about it.

            Here’s a case of relativity in standards that I wonder what you’d think about. I am currently team-teaching a 300-level philosophy course. One of the students is a freshman who has never written a philosophy paper before (why the student was permitted to take the course in the first place is a different story). The paper was not especially great; I initially gave it a B-. After discussion with my fellow instructor, however, we agreed to raise it to a B. One reason that I found acceptable was that a number of the problems I had noted were not the sort of thing that one could reasonably expect a freshman who has never written a philosophy paper before to avoid. Certainly I would not have regarded the logical errors as excusable in a paper by a student with two or three philosophy classes under his belt. Was it fair to put less weight on them because of the student’s inexperience? I really don’t know what to think. I accepted it, and so obviously I’m inclined to think that it’s fair. But I might be wrong about that.

            It would be silly of me not to point out that my thoughts on this topic are also shaped by the fact that in no semester in which I’ve graded papers have I had nearly so many students as you do. Having fewer classes with fewer students makes it much easier for me to talk with them outside of class and get to know them, and also to spend more time considering just how to weigh various factors in grading. That doesn’t eliminate the problems you’ve been talking about, but it does alleviate some of them.


      • Well, ok, that last formulation is in need of qualification. To some extent the standard of assessment cannot change simply because meeting it is beyond the control of the person being assessed; I would not, for example, give students with severe learning disabilities higher grades simply because of their disabilities — I’d give them reasonable accommodations to help them achieve the higher grade despite their disability, or I’d encourage them not to take my class. Things are also different depending on the subject matter. I once taught two semesters of ancient Greek to a student with pretty serious dyslexia, and there is no question whatsoever that she was simply not capable of doing Greek well enough to earn an A (happily, she acknowledged that and set herself the goal of getting a C, which she accomplished despite the severe challenges — her grade didn’t reflect an assessment of her character, but her earning the grade certainly reflects her admirable character). But even in Greek, where the standards are typically wholly objective — a given form either is or is not a third person plural pluperfect middle/passive indicative verb — the standards have to be sensitive to students’ background knowledge; I do not ask students to be able to translate the Apology after a mere ten weeks of Greek, and I do not, say, deduct points when a first-year student translates an inceptive aorist with an English simple past, which I would do in a third- or fourth-year reading course. Writing, I think, raises very different questions of assessment, and I think it’s more reasonable to shift the criteria in that case.

