This semester I’m trying something new in my writing classes: eliminating the interference of “writing enhancement” software, along with all other potential sources of noise between the students’ brains and the page, from my take-home essays. As an ESL teacher, I need to maintain the validity of the “grammar” scores on the writing assignments that I give (assuming that I need grammar scores at all), and of course I need to know that whatever students turn in is a product of their own thought processes. To that end, I’m changing the planning and drafting processes and part of my grading rubrics.
For comparison, the writing process that I used to use looked like this:
Outline: 5 points in the “homework” grade category
Draft 1: 1 point of the total essay score
typed or handwritten at home
Draft 2: 9 points of the total essay score
Gets detailed feedback on content, structure, and grammar from me
Final draft: 90 points of the total essay score
And a typical rubric for the final draft looked like this (initially adapted from a few coworkers’ rubrics):
The essay has a well-focused thesis.
The writer supports this thesis in the body paragraphs.
Sources are utilized well and integrated into the argument.
Total for content
The introduction paragraph(s) captures the reader’s attention and introduces the sources and background enough so that the thesis is understandable to an unknown reader.
The body paragraphs show clear and effective organization, and have clear idea progression and relationship between paragraphs. The point of each body paragraph is always clear.
The concluding paragraph readdresses the thesis nicely, does not exactly repeat it, and gives the reader a reason to care.
Total for organization
The essay has sophisticated, well-chosen sentence structures. The language errors do not interfere with communication. In particular, there should be no errors with noun clauses, comma splices/run-on sentences, hedging, or hypotheticals.
Total for grammar/language
Effective use of MLA format including a Work Cited Page.
Total for format & writing process.
The problems with this approach were 1) a lot more feedback was given than was actually used for revision, 2) the first draft scores (out of 1) were consistently very predictive of final course grades but were worth very little on their own, and 3) I could not tell when the grammar scores I was giving were valid and when I was basically giving Google Translate an A.
Outside of our classrooms, an arms race is being waged between smarter and harder-to-detect ways of generating papers through AI on one side and software designed to detect plagiarism on the other. Copying and pasting still happens (and is the easiest to catch, even without Turnitin.com), but a minimally savvy plagiarist can direct a writing “assistant” to generate an essay (Google “generate essays” for examples) or a summary, as I found on a recent podcast episode. At least for the moment, automatic plagiarism-checking software doesn’t catch AI-generated text, whether it comes from Google Translate or Ultron. An add-on to Chrome called Draftback can play back each keystroke in the creation of an essay (or any other Google Doc), potentially catching copying and pasting from AI sources (copied and pasted text appears all at once, as opposed to one letter at a time), but it can’t tell who’s sitting in the chair typing text that is entered manually. When I see grammaticality, vocabulary, and idiomaticity that are conspicuously improved, I have no way of knowing whether they come from hard work and scrupulous proofreading or from the magic of smartphones:
I thought English/ESL departments might be among the first to notice the black box of take-home writing, but others are even closer to the cliff’s edge. The transition described in this post was also partly spurred by a conversation in the adjunct work room at one of my community colleges in California, in which a Philosophy professor decried the amount of plagiarism going on in his and others’ classes and told me he had it on good authority that UCLA’s Philosophy department no longer gave take-home writing at all. There are, after all, several hundred years’ worth of plagiarizable text on Plato’s Cave.
At my new job I’ve had the chance to talk to a few professors in different departments, and when it comes up, I’m often surprised at how large a portion of their writing assignments has also moved from students’ homes on the weekend to labs on campus during class hours. The reasons stated are usually a combination of wanting to help the students build good writing habits more actively and also simply having no ability to trust what you are getting when an assignment leaves your classroom doors. Some have also said that they dislike the for-profit model of services like Turnitin and Unicheck as well as the message of distrust that they send to students, preferring to keep writing to class hours where at least the pretense of benevolent watchfulness instead of red-pen-policing can be maintained.
I realized that there was a way to kill all of these birds with one stone as well as emphasize the “ideas” part of essays by radically changing my writing process.
The new process looks like this for a non-research essay based on a book or article:
Outline: 10 points of the total essay score
Peer review and instructor feedback on the outline
Done in Google Classroom
Many activities to build robust outlines before Draft 1
Draft 1: 30 points of the total essay score
In-class in a computer lab with only the outline and one page of notes (the outline has whatever quotes they’ve chosen to use)
Typed into the same document as the outline with no other websites or software allowed
Peer review and instructor feedback
Grammar feedback is only on the first 2 paragraphs, and after that only in the form of the COCA tag
All other feedback is on higher-order issues
Draft 2: 60 points of the total essay score
Revised at home and turned in
Accompanied by separate grammar assignments based on Draft 1
What has changed is that:
weights for all 3 parts of the writing process are distributed more equally
only Draft 1 has a grammar score
Draft 2 has grammar assignments in place of a grammar score
There are only 2 drafts
Both Drafts 1 and 2 have most of their points given to Content, a bit less for Structure, and a tiny bit for Format/Mechanics. Overall, compared to my old writing process and rubric, more time and more points are given to Content.
(I should also point out that I’m working with a shorter time limit now than I used to – 7-week terms instead of 16-week semesters. Still, I think the important parts don’t suffer much from the eliding of one draft.)
The grammar assignments that I give now in place of a grammar score for Draft 2 are all COCA-derived, and students use my COCA tags in their Draft 1 to know what to look up. This was actually the topic of a talk I gave at ITESOL last month (titled “Using COCA to Simplify Your Correction Codes”), and even if I find reasons to change the 2-draft model outlined above, I will almost certainly be keeping COCA in place of grammar on my rubrics. The assignments are short but open-ended in both the problem (something from their Draft 1) and the solutions.
In addition to changing the process, I try to have prompts that discourage ghostwriting or copying – a combination of new or unusual source texts (Digital Minimalism being a recent example), personalization (the DM essay required screenshots from the students’ own smartphones), and just topics that students want to write about (again, smartphones).
A lot of my former and current colleagues have described moving to a “studio” model of teaching academic writing – lab co-reqs at my last community college, 5-unit plus-sized courses at my current one. Who knows how the proliferation of text-generating technology will affect the “academic essay” in future writing classes?
I joke about this in class, but it’s probably not too far off that we’ll be asking students to turn off their retinal implants before doing anything in class (or generating class content by AI ourselves).
After not posting for a full month, I have a post that has been on the back burner for at least a year as an idea and half a year as a draft. It definitely falls under the “somebody should have already done this, and nobody has, so I will” category of research, like my THE/hensachi comparison that continues to be the most-read blog post I’ve ever written. In this case, I’m taking another look at a type of word that has interested me for a long time: participial adjectives, adjectives formed from the present or past participles of verbs, like interest/interesting/interested.
The relationship between verbs and adjectives, lightly questioned
It’s tempting to explain both the meanings and grammar of participial adjectives with reference to the verbs that form their bases. The question is, in the mind of a fluent speaker in 2019, are verbs still the bases of participial adjectives? My intuition is that they aren’t, that adjectives like “interesting” enter the lexicon of a typical speaker long before the verb “interest”, and only after much experience of similar words and/or explicit teaching does the relationship between the two become clear and productive.
If my intuition is correct and these are adjectives first and verb derivatives only after some reflection, there are implications for usage and teaching.
On usage, as came up recently in a Twitter discussion with @LinguisticsGirl, the closeness of the relationship between past participial adjectives (e.g. “interested”) to the passive voice of verbs like “interest” has implications for the meanings and grammar of these words.
On meaning, because a passive verb phrase (e.g. “is eaten”) has a patient (the subject) and an implied but sometimes unspecified agent (the object of the preposition “by”), an adjective based on that verb phrase could be thought to also have a patient and an agent. That is, if speakers are actively aware of the relationship between the passive verb phrase “be interested” (where “interested” is a verb) and the participial adjective “interested”, they may believe that the adjective “interested” also has an implied agent, the one who “interests” the subject. To illustrate:
Music bores Sam.
Here, “bore” is a verb with an agent (“music”) and a patient (“Sam”).
Sam is bored by music.
Here, the same relationship between agent and patient is rendered with a passive verb phrase: a be-verb and the past participle of “bore”, plus the optional prepositional phrase indicating the agent.
Sam is very bored.
And a be-verb plus adjective. Does the average reader imagine that there must be an agent causing Sam’s current state, as they probably would if the sentence were “Sam is eaten”? To use the example that I used on Twitter, does one assume that “broken rocks” must have been broken by someone or something, or is “broken” just how the rocks are, with no implied cause?
On grammar, we already know that participial adjectives have a variety of prepositions instead of the expected “by” denoting the… let’s just call it the quasi-agent.
This seems to be evidence of the looseness of the relationship between participial adjectives and the passive verb phrases that they resemble. Clearly, at the very least, participial adjectives have some options for prepositions that passive verb phrases don’t. It is tempting to think that the number of possible prepositions after a given participial adjective is related to its prevalence in corpora as an adjective vs. as a verb. More on that at the bottom.
The implications of the relationship between participial adjectives and verbs for teaching lie in the approach one would take if the relationship were strong or weak. If most fluent speakers keep the relationships between verbs like “disturb” and adjectives like “disturbing” active in their minds and use both with similar meanings and at similar rates, it could be more advantageous to teach the verbs along with the rules for generating adjectives, as the rules could be counted on to be fairly regular, productive, and useful. On the other hand, if speakers keep “disturb” and “disturbing” separate in their minds and use them at very different rates and with different meanings, it could be more useful to downplay the relationship between the two and focus on statistical fluency and input, encouraging students to see them simply as separate words as input dictates.
The verbs, totally listed
Below, I have some contributions to our understanding of participial adjectives to make. First, here’s a big glob of data, and a bit of explanation afterward.
[Table: verb base · verb 3s hits · adj – pres hits · adj – past hits · v/total, one row per word]
All of this data came from the iWeb corpus over the spring and summer of 2019, basically built up over time whenever I had a spare 20 minutes or so to look up some words. In most cases, I just thought of a word that I noticed was both a verb and a participial adjective and did the search right away. There was no method to how I settled on words to search for.
The columns are:
Verb base: Exactly what you think it is.
verb 3s: the number of hits for that verb with its 3rd person singular “s” attached and the verb.3SG tag _v?z*, e.g. dismays_v?z*
adj – pres: the number of hits for that verb in its present participle form and the adj.ALL tag _j*, e.g. dismaying_j*
adj – past: the number of hits for that verb in its past participle form and the adj.ALL tag _j*, e.g. dismayed_j*
v/total: the number of verb 3s hits divided by the number of hits in all 3 categories, i.e. the % of hits that were verbs rather than adjectives.
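For concreteness, the v/total calculation can be sketched in a few lines of Python. The hit counts below are placeholders, not actual iWeb numbers; only the three search categories come from the description above:

```python
# Placeholder hit counts for one word; real values would come from iWeb
# searches like dismays_v?z*, dismaying_j*, and dismayed_j*.
hits = {"verb_3s": 100, "adj_pres": 800, "adj_past": 2500}

def v_total_ratio(h):
    """Share of all hits that are verb (3rd person singular) uses."""
    total = h["verb_3s"] + h["adj_pres"] + h["adj_past"]
    return h["verb_3s"] / total

print(round(v_total_ratio(hits), 3))  # 0.029: mostly used as an adjective
```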
The reasons that I searched for these categories were related to the limitations of the concordancer. The iWeb corpus (along with COCA and the other BYU-hosted corpora) reliably confuses grammatical categories, for example returning this sentence as an example of interest_v* (“interest” as a verb):
…James and Vugo is that they really have drivers best interest in hand.
I found that the verb form least likely to result in a misclassification of this kind was the third person singular, i.e. “interests” or “dismays”. Note that this reduces but does not eliminate misclassifications (try searching for interests_v* yourself to see). Likewise, interesting_j* and interested_j* reduce but do not eliminate misclassifications of these words as adjectives – it is possible, as is the entire premise of this post, that readers both human and computer get confused as to which uses of “am interested” are passive verb phrases and which ones are adjectives. It is probably less likely that the concordancer gets confused about present participle adjectives, as I certainly have never heard a sentence like “it is interesting me”, but for some verbs like “terrify” confusion is still possible – “terrifying me” does occur in iWeb 68 times.
Results, partly discussed
As a result of the accommodations to the limitations of the concordancer described above, the results have to be taken as ballpark estimates of the relative frequencies of the words in question as verbs or adjectives. The high ranking of “dismay” above gives us a sense that the adjectives “dismaying” and “dismayed” are more common than the verb “dismay”, and that “dismayed” is more common than “dismaying”, but it’s still not clear exactly how much more common in either case.
In the list above, the words are listed in order of v/total. That is, the highest ratios of adjectives are at the top of the list, and the lowest are at the bottom.
The data gives some support to the idea that at least for certain participial adjectives, their uses as adjectives far outnumber their uses as verbs. These participial adjectives include conversation and coursebook staples like “amazing” and “embarrassed”, but also some oddballs like “hearten” and “enthrall”. At the low end are words like “consider”, whose main life is still as a verb and which are only rarely used as adjectives (e.g. “in my considered opinion”), and last-minute addition “trigger”, which at the moment has no present participle adjective hits, though that is sure to change in the coming years.
It’s hard to extrapolate this data to answer the question, “how close is the relationship between ‘disturb’ and ‘disturbing’ in the average speaker’s brain?”, but it certainly seems compatible with my hypothesis that at least in the case of words like “amazing”, the adjective is capable of surviving on its own without analogy to the verb “amaze”. It stands to reason that a word that outnumbers another word 133:1 in frequency, as “amazing” does with “amazes”, probably can afford to pay its own rent, so to speak. And yes, I am using obtuse metaphors as a way of avoiding questions of psycholinguistics or neurolinguistics which I have absolutely no right to pretend to be able to answer.
The iWeb corpus and other corpora are less useful for semantic analysis, but it seems to me that many of the words high on the list here have gaps between the meanings of the verbs and their related adjectives – “disturb” doesn’t have all the same nuances as “disturbing” or “disturbed”, and “amaze” certainly doesn’t have the Kardashian-like connotations of “amazing”.
(At this point in the post, I vanished for at least 15 minutes unfruitfully searching for a clip of Dong Nguyen from Kimmy Schmidt saying “amaaazing”.)
In closing, the reader is invited to take from the data what lessons they will. I humbly suggest that one lesson that is not compatible with the data is that for all participial adjectives, the relationships between the adjectives and the verbs that they are based on are obvious and productive.
The same data, differently manipulated
For kicks, here is the same list, but in order of ratio of present participle adjectives to all adjectives:
[Table: word · adj – pres hits · adj – past hits, ordered by present-participle share]
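The reordering is by the share of a word’s adjective hits that are present-participle forms, a simple ratio; the counts below are invented for illustration:

```python
def pres_adj_share(adj_pres, adj_past):
    """Fraction of all adjective hits that are present-participle ("-ing") forms."""
    return adj_pres / (adj_pres + adj_past)

# Invented counts: a word used mostly as an "-ing" adjective vs. one
# used almost exclusively as an "-ed" adjective
print(pres_adj_share(300, 100))  # 0.75
print(pres_adj_share(50, 950))   # 0.05
```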
According to this list, these sentences should sound extremely wrong to you:
“The news was just flooring.”
“Critics are highly rating of that movie.”
“The President was totally rationalizing of his behavior.”
Again, I will mostly leave the implications to you, but I count this as at least compatible with the idea of letting input address at least the less common ones and only explicitly teaching the most common/most equally distributed.
Preposition options, negatively correlated
Last, to test my dropped breadcrumb from earlier about non-“by” prepositional complements, I added up all the hits for all prepositions following the word in its past participle form, but without a verb or adjective tag, i.e. dismayed _i*. I then divided the number of hits for “by” by the total number of preposition hits, giving me a sense of how often the preposition following the past participle of that verb is “by”. For verbs that are interpreted only as verbs and never as adjectives, we would expect a higher number, because “Salads are eaten by yoga practitioners” but not “Salads are eaten of yoga practitioners” grammatically describes the relationship between the patient and agent for “eaten”. On the other hand, we expect a bit of noise in these results, as “Salads are eaten at restaurants” remains possible, as does “Salads were eaten up“. Indeed, only 33% of prepositions following “eaten” are “by”, although “by” is indeed the top hit.
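The “by” share is just the “by” hit count over the total preposition hits. A sketch, where the individual counts for “eaten” are invented but chosen to match the 33% figure mentioned above:

```python
def by_share(prep_hits):
    """Share of preposition hits (e.g. from a search like `eaten _i*`)
    accounted for by "by"."""
    return prep_hits.get("by", 0) / sum(prep_hits.values())

# Invented counts totaling 100, reproducing the 33% figure for "eaten"
eaten = {"by": 33, "at": 20, "in": 15, "with": 12, "on": 10, "up": 10}
print(round(by_share(eaten), 2))  # 0.33
```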
The following are (is?) a random group of 11 words from the earlier list that I did the above search for:
[Table: word · % of prep results that are “by”]
Interestingly, the share of “by” among prepositions following a given word correlated with its ratio of hits as adjective vs. verb at -0.65. That is, the more often a word was used as an adjective rather than a verb, the more often it had prepositions other than “by” following it. Verbs marked with an asterisk had prepositions other than “by” as their top hit (“bored” had two prepositions above “by”, “of” and “with”).
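The -0.65 figure is an ordinary Pearson correlation between two per-word ratios. A minimal sketch with invented paired values (these will not reproduce the exact figure from the data above):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented values: per-word adjective share vs. share of following
# prepositions that are "by"
adj_share = [0.95, 0.80, 0.60, 0.40, 0.20]
by_share = [0.10, 0.30, 0.35, 0.60, 0.80]
print(round(pearson(adj_share, by_share), 2))  # strongly negative
```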
Again, this speaks to the possibility that in the minds of most fluent speakers, these participial adjectives are not explicitly or actively related to the verbs that etymologically form their bases. It stands to reason, although it isn’t proved here, that other links, including the implied existence of an agent and the semantic relationship between the passive voice and the past participle adjective, are also less close than some casual linguists, language teachers, and coursebooks seem to assume.
Did I mention that my university has a half-term break right now? Don’t expect another post like this until at least December.
Addendum, just added
On the “close relationship” between participial adjectives and verbs, some readers have pointed out that I could have been more precise in what I meant. Here, I hope to flesh out some of the various ways that the two could be “related” without, again, treading too hard on territory outside my expertise with phrases like “instantiated in the brain” or “sharing an entry in the mental lexicon”.
I can think of 3 ways that these verbs and adjectives might be semantically related: number of meanings, state/action, and degree.
On meaning specifically, consider these three definitions from dictionary.com:
verb (used with object)
to interrupt the quiet, rest, peace, or order of; unsettle.
to interfere with; interrupt; hinder:
Please do not disturb me when I’m working.
to interfere with the arrangement, order, or harmony of; disarrange:
to disturb the papers on her desk.
to perplex; trouble:
to be disturbed by strange behavior.
adjective (“disturbing”)
upsetting or disquieting; dismaying:
a disturbing increase in the crime rate.
adjective (“disturbed”)
marked by symptoms of mental illness:
a disturbed personality.
Notice that only one of the meanings listed for the verb is similar to that of the present participle adjective, and none are similar to that of the past participle adjective (except metaphorically). Even a grammatically ambiguous sentence can be interpreted as clearly a verb or clearly an adjective based on meaning:
The “do not disturb” sign is out, but clearly we’re being disturbed.
His collection of loose toenails is disturbing.
He’s clearly disturbed, judging by his interest in feet.
Based on these examples, “disturb” has a loose relationship to its participial adjectives. If you do a search similar to the one I did above, but for “amaze”, you will see that some verbs and participial adjectives retain very similar meanings. I don’t have any quantitative way to refer to this, but let’s just say the fewer meanings that are the same or similar, the less close the relationship.
Perception as a state or action is usually more of a difference between verbs and adjectives as grammatical categories, but my verbs are mostly stative; that is, they refer to a state of being rather than a discrete action, and therefore collocate more with adverbs of intensity than with adverbs of frequency, just like adjectives. However, for at least some of the verbs above, there will be an option for an action rather than a state meaning:
He disturbs me at work every day, and he is disturbing me right now. (action)
Your lack of faith disturbs me. (state, verb)
Your lack of faith is disturbing. (state, adjective)
Clearly, the action meaning is unavailable for the adjectives. What this means for “closeness” of verbs and adjectives is that if a verb has a possible meaning as an action verb, it could be said to be less close to its participial adjectives, which naturally don’t.
Last, for degree, adjectives unlike verbs are usually perceived as gradable – attributing some quality to nouns to varying degrees, as specified by adverbs like “a little” or “very”. There are exceptions like “unique” (at least according to some) or “freezing”, but the key area of interest for us is the extent to which verbs share these qualities with their participial adjectives, regardless of what those particular qualities are. For example, the verb “amaze” seems to have the same ungradability as its adjectives “amazing” and “amazed”:
It absolutely amazes me.
I am absolutely amazed.
It’s absolutely amazing.
But “compel” seems not to be as gradable, or not gradable in the same ways, as “compelling” or “compelled”:
Δ It doesn’t compel me very much.
It’s not very compelling.
Δ I’m not very compelled.
Curiously, “compel” as a verb and “compelled” as an adjective seem less gradable than “compelling” as an adjective, perhaps because interpretation of “compel” is so closely tied to the completion of the verb that it usually takes as an infinitive complement. That is, if I “compel” you to wash the dishes, you almost definitely wash the dishes, but if I’m just “compelling” in general, my status as “compelling” doesn’t have a binary on-off status tied to the completion of anything in particular.
I believe that when verbs and adjectives differ in their ability to be seen as gradable or in degrees, they can be said to have a more distant relationship. When they are the same in these respects, their relationship can be described as “close”.
Any other forms of “closeness” will have to wait for another day.
Addendum, added again
Here are some charts showing the relative frequencies of the verb (with the caveats above), the present participle adjective, and the past participle adjective for the top 20 most frequent words in the list (as of this update, at least).
Since I’m at it, I thought I’d provide a bit of the opposite of what I did in my last addendum – signs of “distance” between verbs and the participial adjectives that come from them.
Date of first use
First, not every participial adjective in this list has a unique dictionary entry at all – devastated, for example, appears in neither dictionary.com nor etymonline.com, although its partners devastating and devastate do. Of those that do, often the first recorded use with a particular meaning is noted, for example “Meaning “dejected, lowered in spirits” is from 1620s.” for depressed from etymonline.com. An older first use as an adjective, particularly with a distinct meaning, could speak to a meaning as distinct as ice and cream have to ice-cream (1744).
Age of first use
This is opening an issue that begs for actual data that I don’t have, but if it could be shown that people begin using amaze and amazing at different ages, it could also speak to greater “distance” between these words. On the other hand, if both begin appearing in speech at about the same time, one could simply be a true morphological derivation of the other, formed by rules analogous to a wug test. I believe we are seeing this process of derivation in real time with the birth of the adjective triggered. If future generations of children start using sentences like “He was so triggered” years before they say “The video triggered him”, we can assume that these are distinct words, not just morphologically derived variations on the verb.
Charts, very framed
Last, here are two super handy charts for you to print, frame, and finally replace that picture of your niece with:
Shortly after my acquiescent post on the constant rejection one faces applying for full-time ESL jobs, I got an email curiously positive in nature and free of formulaic boilerplate. I had gotten so used to rejection that I almost didn’t comprehend it at first – but it was an invitation to interview, something I had gotten just a few times in the years since my MA. And after that first interview on Skype, I got another such email from the same place, inviting me for a campus visit. When the date came in late May, after I made sure my grading for the weekend was already done, I boarded a plane at John Wayne Airport at 4 AM and spent the whole day in a state besides the one that I have lived in since returning to the US in 2016.
Now, I was breathing such rarefied air at this point that I felt zero pressure to succeed, happy to plant my flag at the “second interview” stage before what I assumed would be a quick descent back down to solid adjunct ground. This was a Monday. I had classes again at my usual schools on Tuesday and plenty of proctoring and grading to do after that to help push the entire episode into the past tense – I was already imagining the conversations I would have in the break room at all the same schools next semester about the time I came this close to getting a full-time position.
But as a call a few days later informed me, I did get it, and very soon after this post goes up, I’ll be starting my first classes there.
By crazy when-it-rains-it-pours coincidence, this was the 2nd full-time job offer I took this year – although the first was a contract only for the summer. That job, which just ended, has given me a bit of a sneak preview of my life as a full-time teacher in a context other than Californian community colleges. I thought I would share a bit of my reflections here, both as a document of my thoughts for myself and as a guide for other adjuncts hoping to do something similar.
Adjunct Goodbyes and Full-time Goodbyes
I’m excited about my new job, but I do have a few regrets about leaving the colleges where I teach now. One of those regrets is that I did many things for the last time at my main schools without realizing they were the last times. I had my last norming meeting (and I enjoy those), my last walk with a student between the classroom and the lab to show them where it is, and my last unexpectedly long pause while the projector warms up, all without knowing that I would never do those things there again. I saw a bunch of people in passing in a hallway or copy room and said some simple words of greeting or an inside joke not realizing that those were the last times I’d be doing that with those people. Not to strike too melodramatic a tone, but for the most part these were the first workplace acquaintances I made in California, and they witnessed my whole process of getting my feet wet, asking silly or obvious questions really politely (“Sorry if this is obvious to everyone here but me, but what is an SLO?”). I will probably like my new coworkers – teachers are usually nice – but they won’t be my first coworkers in the US. I have a lot of words of thanks to go around, but I won’t be specific here. If we spent more than one microwave’s cooking time together, I appreciated it.
There are a few students who had let me know that they wanted to sign up for my fall classes with whom I’m not holding up my end of the bargain. This makes me feel a bit guilty, as does the fact that I won’t be able to wave or chat to former students that I see around campus, but both of these are a bit of an unnatural extension of the teacher-student relationship, which formally has a lifespan of one semester. The same goes for quite a few “single-serving friends” I made in break and copy rooms, for whom the loss isn’t of a deep friendship but just the potential for a longer one of whatever quality it was for 30 minutes a week while we both ate Amy’s frozen burritos. I got some kind words from my now-former coworkers, but of course the definition of an “adjunct” is something inessential to the major workings of whatever it’s part of. At any school with adjuncts, some portion of instructors and students will have the experience of suddenly not having a colleague or teacher on campus anymore every semester. I suppose part of my newbieness that never wore off was expecting to know when that time was coming for me.
(OK, I will single out for thanks 4 people whose initials are G.P., R.B., C.C., and B.W. who saw me at my most newbieish and imparted some very important and well-timed advice. Shucks, also my most frequent collaborators H.L. and D.P.. Also all my SIs.)
On the other hand, at my full-time summer job, we all knew pretty well from at least mid-June that I would be gone, and the program exists solely so that students matriculate out of it and into another program. The goodbyes here had pomp and ritual and lots of tears. People act differently when they know things are ending, and the entire last day of work was dedicated to ceremonial closing of the program, complete with thank-you cards being exchanged, speeches, skits, musical performances by every combination of students and teachers, and a lovely banquet to top it off. It was the best way to conclude a summer program and my time in California, with some really excellent people.
The lesson here, I guess, is to know as much as possible when you’re heading into a round of goodbyes.
I spent some time talking about performativity with a content-based class this summer, in both the linguistic “I now pronounce you man and wife” sense and the Butlerian “gender is created through its performance” sense. I didn’t anticipate finding the principle illustrated in the responses to two mass shootings in the days after our class ended, in the usual round of “thoughts and prayers” (sometimes in those words exactly and sometimes in other words, as the original phrasing has become a bit of a cliché) being offered for the victims.
(To be clear, although this post is about language, I think the news and the banal responses are horrifying. This is a topic for a separate post, but you can always count on an ESL teacher not to buy arguments based on national exceptionalism – they seem more ridiculous the more of them you encounter.)
I mean this literally. I got a Fitbit last year, and during the spring semester, I tracked how many steps I took during 5 class sessions of each of the 3 courses that I taught and averaged the counts.
My classes were a content-based IEP class with 13 students, a mixed-skills intermediate-level credit community college ESL class with 21 students, and an advanced ESL writing class with 25 students.
Averaged across those 5 class sessions, the number of steps per session for each class was:
Content-based IEP: 236
Intermediate CC: 626
Adv. writing CC: 440
Of course, since the class sessions were of different lengths, it makes sense to divide the number of steps by the number of minutes in which I had to take them.
Steps per minute of class time, including breaks:
Content-based IEP: 2.63 steps per minute
Intermediate CC: 2.78 steps per minute
Adv. writing CC: 1.96 steps per minute
Last, because higher numbers of students might feasibly require the teacher to move more and farther around the classroom, here are the steps per minute further divided by the numbers of enrolled students:
Content-based IEP: 0.20 steps per minute per enrolled student
Intermediate CC: 0.13 steps per minute per enrolled student
Adv. writing CC: 0.08 steps per minute per enrolled student
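For anyone who wants to check the arithmetic, the per-student figures are just the steps-per-minute numbers divided by enrollment. A minimal sketch, using the numbers reported above (variable names are mine):

```python
# Steps per minute (from the tables above) divided by enrollment
# reproduces the per-student figures.
classes = {
    "Content-based IEP": (2.63, 13),
    "Intermediate CC": (2.78, 21),
    "Adv. writing CC": (1.96, 25),
}

for name, (steps_per_min, students) in classes.items():
    per_student = steps_per_min / students
    print(f"{name}: {per_student:.2f} steps per minute per enrolled student")
# → 0.20, 0.13, and 0.08, matching the list above
```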
What does this tell me?
I tended to walk around more, all other things being equal, in the content-based class. I attribute this to the type of work they typically did – small group discussions in which I would move from group to group and either guide the discussion, participate as an equal, or just listen. The other two classes, at community college, usually involved at least some “lecturing”, standing relatively still or sitting at the computer and typing notes projected onto a screen.
I think my classes could benefit from structuring more lessons around small group work rather than lectures to begin with. As it turns out, a further benefit might be that it helps me reach my fitness goals.
As with the same class last semester, and as happens to me often, I have been spurred to blog by an unusual utterance by a student, or should I say an utterance which in its non-target-likeness highlights an interesting linguistic phenomenon.
Some verbs, like “know”, say something about the mind of the subject of the sentence as well as the mind of the sentence’s speaker. That is, if Kim says, “Eva knows that 3 students will fail the class”, not only Eva but also Kim believes that the proposition “3 students will fail the class” is true. If Kim believes that Eva is wrong about those 3 students, she will probably choose a different verb, like “believe” or “think”, because if Kim says “Eva thinks that 3 students will fail the class”, she avoids giving the impression that she agrees with Eva.
(It’s an interesting question how many clauses deep these verbs have to be before the speaker is no longer presumed to agree with the proposition. For example, if Laura thinks that Kim believes that Eva knows that 3 students will fail the class, is it implied that Laura agrees? Does the factivity of “know” leap out of its clause and infect every person in the sentence, or does one non-factive verb break the chain? I tend to think that if Laura heard a sentence like “Eva knows that 3 students will fail”, but thinks she’s wrong, she’ll change the verb to a non-factive one in relaying that information to someone else.)
As you can see from my aside, these verbs are called factive. In short, they imply that the content of the noun clause that follows is factual. “Know” is one of these, as are “understand”, “realize”, “prove”, and “remember”.
The error that inspired this post involved the opposite: a verb used to imply that the content of the noun clause that follows is false. “Deny”, “disbelieve”, and “doubt” all mean that the subject believes or says that the proposition that follows is false. These words, unlike factive verbs, don’t presuppose that the speaker agrees. When a newspaper says, “Dems doubt that Trump will leave willingly”, the newspaper isn’t taking the position that they are right about him; it is simply relaying the Dems’ state of mind.
(Confusingly for Japanese learners of English, the Japanese equivalent of “doubt”, 疑う (utagau), implies that the subject has a sneaking suspicion that the proposition is true, rather than false as in English. Another strike against grammar-translation.)
The error that I saw used a factive verb with a negative prefix and was followed by a noun clause that the writer intended to say was false. It was something like “Many people misunderstand that the earth is flat”. The writer, as I understood it, was trying to say that many people believe that the earth is flat, but they are wrong. This left me sitting and re-reading the sentence for a few minutes as I tried to figure out just what seemed so strange about it. I did my customary COCA search and found a relative lack of noun clauses after “misunderstand” compared to “understand”, validating some of my intuition, but it didn’t give me an answer as to why.
One factor that occurred to me is that “deny”, “disbelieve”, and “doubt” still leave the proposition standing on its own two feet epistemologically. They don’t bring up the proposition and in the same breath invalidate it – they just say that the subject disagrees with it. It is still free to exist as a proposition and be believed by other subjects. It seemed perverse to me that “misunderstand” would have a noun clause following it that was presupposed even by the speaker to be false.
As I was typing this, though, I remembered “disprove”, which shares with “misunderstand” a factive root and a negative prefix. To my understanding, “disprove” is a true unfactive – if I say “Einstein disproved that matter and energy are distinct”, I am also stating my agreement with Einstein. If we accept the premise that some propositions are true and others are false, the above sentence can only be true if the proposition contained in it (“matter and energy are distinct”) is false. Therefore, the combination of negative prefix with factive verb to mean “the noun clause following this verb is definitely not true” cannot be the source of the strangeness of “misunderstand that…”
Another factor may be that unlike “deny”, “disbelieve”, and “doubt”, and even “disprove”, the speaker’s and the subject’s opinions of the truth of the proposition in “misunderstand” are different. When “Trump disbelieves that” his approval ratings are low, Trump believes that the proposition is false, and the speaker doesn’t take a position on it. When “Einstein disproves that” matter and energy are distinct, Einstein and the speaker agree. However, in my student’s usage of “misunderstand”, the speaker and the subject definitely disagree. “Trump misunderstands that millions of illegals voted”, in my student’s usage, means that Trump believes it, but he is wrong. In my limited exploration of this issue, this is the only case where the speaker uses a verb to imply both that the speaker believes the proposition and that the proposition is false.
Perhaps for an unfactive verb to make sense, as “disprove” does, it has to say not only that the proposition is false, but that the subject is right that the proposition is false. Anything else is uncromulent.
This course is a bit of a chimera – ostensibly a prerequisite for transfer-level writing, but in practice very similar to free adult education courses. Students are quite open about this, and the extraneousness of the course in light of the growing AESL program is part of the reason that it will no longer be offered in the fall (in addition to a law passed in CA mandating that community colleges move ESL students up to transfer-level writing within 3 years). On the other hand, it’s the course that I’ve taught at this school the longest, and I have a sentimental attachment to it. In light of that, it might not be all that useful to comb over my curriculum for areas of potential improvement, but I still want to see what I did right and what I did wrong.
This class is one level below transfer, which is kind of a big deal within ESL – students who pass this class are supposed to be able to hang with native-speaking teenagers in English 100 (Writing 1, English 1A, Humanities 101, or whatever your university calls it). This is the first time I’ve taught this class, and in doing my usual round of end-of-the-semester spreadsheets I’m mostly interested in what kinds of homework assignments predict overall grades, which in turn are (presumably) a good measure of English reading and writing ability. This will help me choose assignments that are really worth assigning in coming semesters and weight them appropriately.
It’s no surprise that in a writing class, writing assignments end up composing a large part of the final grade. But which writing assignments are the best predictors of the scores of all the others? To figure this out, I normalized each score (to a 0–1 scale, so that assignments don’t look more predictive just because they were worth a lot of points), added them all up for each student, and checked how strongly each individual assignment correlated with the total. The assignment with the highest correlation has the most predictive power for writing scores overall. This assignment turned out to be……….
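The normalize-sum-correlate procedure can be sketched in a few lines of Python. The gradebook here is invented for illustration (assignment names, point values, and scores are not my real data), and I use a hand-rolled Pearson correlation to keep the sketch dependency-free:

```python
# Normalize each assignment's scores to 0-1, sum per student,
# then correlate each assignment with the total (Pearson's r).
from statistics import mean, stdev

# Hypothetical gradebook: assignment -> (max points, per-student scores)
gradebook = {
    "essay1": (100, [85, 92, 70, 60, 95]),
    "essay2": (50, [40, 48, 30, 25, 49]),
    "summary": (10, [9, 10, 6, 5, 8]),
}

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Divide by max points so every assignment lives on the same 0-1 scale
normalized = {a: [s / mx for s in scores] for a, (mx, scores) in gradebook.items()}
# Per-student totals: sum each student's normalized scores across assignments
totals = [sum(col) for col in zip(*normalized.values())]

for a, scores in normalized.items():
    print(f"{a}: r = {pearson(scores, totals):.2f}")
```

The assignment printing the highest r is the best single predictor of overall writing scores in this scheme.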
(For part 1 or part 2 of this series, scroll waaaaay down to 2016.)
We had something of a popularity contest in the US in 2016 between a very comfortable public speaker and a slightly stiff one. Depending on one’s prior feelings or biases, the former may have looked either charismatic or puffed up, and the latter may have looked duplicitous or booksmart.
For a casual viewer, it could sometimes seem that the comfortable speaker simply knew his stuff better, which resulted in his greater comfort communicating that knowledge to large numbers of people. He projected confidence, which encouraged trust. For people not actually listening to the words he used, it was easy and tempting to consider the self-assured speaker a more experienced, able leader, who had earned his confidence through ability and experience. He didn’t choose his words carefully, but his ease on stage seemed as if it might have come from years of being tested and winning. The careful speaker always seemed to have to work a little too hard to find words that sounded right, and therefore felt dishonest – or worse, scheming – to many.
For people who were listening to (or reading) the content of the message rather than the delivery, it was practically irresistible to come to the opposite conclusion: that the stiff, careful speaker chose her words to reflect her nuanced, well-informed thoughts, which naturally didn’t come pouring forth like a river but in precisely measured portions. Meanwhile, the confident speaker’s spell was thoroughly broken on the page. Instead of a freewheeling and charming salesman, his words seemed like those of a buggy machine translator working with Nike slogans in Armenian.
Throughout the campaign and to the present day, it has been a constant joke that President Trump’s speech patterns reflect a lazy and uneducated mind. And while it may be true that he is lazy and uneducated (as opposed to unschooled), the evidence for this is not to be found in his basic speech patterns. As language teachers (and everyone reading this is probably a language teacher), we shouldn’t condone criticism of him or anyone else that is based on the premise that verbal performance is a reliable measure of intellect.
It is a truth that is especially evident to language teachers that the sophistication of one’s thoughts and the sophistication of one’s verbal ability can differ widely. There are people who have chunks of academic circumlocution constantly at the ready to bring to bear on topics that they have no particular expertise in. There are also people whose words never quite build a substantial enough bridge for their weighty ideas to cross. Our entire occupation is based on the mismatch between our students’ intellects and their communication abilities. If one reliably predicted the other, we wouldn’t need language as a separate subject at all. This is particularly true in ELT (my field), but all language teachers, from speech pathologists to teachers of college creative writing courses, know that sophisticated thoughts are no guarantee of sophisticated expressive ability.
It’s also important to keep in mind that abstract linguistic competence doesn’t always manifest in perfect form in real-world situations. There can be quite a bit of “noise” between the language that exists in a person’s head and what escapes from their mouth in a high-pressure situation like an interview on 60 Minutes or an address that will be heard by millions. The presence of a threat, the need to present oneself a particular way to particular people, a time limit, or conversely, great self-confidence can disrupt or enhance linguistic performance. As language teachers, we have workarounds and accommodations for the phenomenon of performance not always matching competence – reducing the number of observers, trying to gather a sample for evaluation unobtrusively, allowing students with anxiety disorders to skip certain portions of the test, etc. It should be no surprise to us that a politician’s verbal performance isn’t a reliable measure of their linguistic competence, or of course that their linguistic competence isn’t a reliable measure of their intelligence.
Some criticisms (that is, almost all criticisms) of the current President are valid and if anything understated. But we should know better than to attack him for his way of talking. Obviously, this goes 10x for his wife, who seems to be, like him, far too small a person for their historical moment, but is also unfairly criticized for just sounding strange.
Again – there is plenty of other evidence that Trump is incurious and ignorant. There’s no need to insult most of our students by implication just to make that point.
Since I started teaching community college ESL, I’ve set aside at least one class period in all my writing classes to teach students how to use COCA and the other BYU corpora, but I struggled for a long time to incorporate them in an intuitive way into my intermediate multi-skill classes. I think their utility is clear, but the interface (computer literacy can be a problem) and the baseline metalinguistic knowledge necessary just to use them have thus far stopped me from making them a regular feature. I do, however, have one activity that uses corpora (either COCA or iWeb) that is reliably entertaining and useful for classes of any level. I call it Corpus Family Feud.
Like the real Family Feud (a TV game show, for those of you outside the US and non-fans of SNL), the point is for participants to guess the most common answers to a survey question. Unlike the real Family Feud, the questions are specifically concerned with language use, and the “survey” is of corpus data rather than of 100 people reached randomly by phone.
Also like the real Family Feud, it’s the studio’s (i.e., the teacher’s) job to prepare the questions and collate the survey answers beforehand, and then reveal them to the participants after they have made guesses.
The basic steps are:
Before class, prepare sentences with one or more blanks, and then find the most common words that fill those blanks according to corpus data. 3-5 total sentences for one session seems to be a good rule of thumb to keep interest high throughout the activity.
Also before class, prepare a slideshow (I use Google Slides) that features the sentence with blanks, directions for what kinds of words go in the blanks, and the answers in list form. The answers should be set to be invisible when the slide loads and appear on subsequent clicks.
During class time, announce that you are playing a game and display the slide with the first sentence. State explicitly what kinds of words can be used to fill in the blank, and explain in general terms that you have found the top 5 words that people actually use to fill in that blank in real-world communication.
Have students write down the top 5 words that they think fill that blank in the real world. Announce that they will get 1 point for each of their answers that is actually in the top 5, regardless of the order they put them in.
After a few minutes, announce that you will start displaying the answers. Drum roll and display the first answer. Students will probably applaud, shriek, or say, “ohhhhh”. Remind them to keep track of how many points they have as you continue drum rolling and displaying the answers in sequence.
After you’ve displayed every answer, ask the students who has 3 points, 4 points, or 5 points until you figure out who the winner is. Give the winner a piece of candy or some other gold star-equivalent. Repeat with the next sentence.
As a variation, you can choose 5 words in advance, display them when you display the sentence, and ask the students to put them in order. This allows you to choose words other than the true top 5 according to corpora (which are often boring words that nobody ever thinks of, like “be” or “doing”), but requires you to give points only for correct order of words rather than giving points for any word that appears in the actual list.
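The basic scoring rule above (1 point per guess that appears in the corpus top 5, order ignored) can be sketched as a tiny function. The word list here is invented for illustration, not an actual corpus result:

```python
def score_guesses(guesses, top5):
    """One point for each distinct guess that appears in the corpus top 5.
    Order and duplicates are ignored, matching the basic game rules."""
    return sum(1 for g in set(guesses) if g in top5)

# Hypothetical top 5 for "I enjoy ___" (not real COCA frequencies):
top5 = ["being", "working", "playing", "reading", "watching"]
print(score_guesses(["playing", "eating", "reading"], top5))  # → 2
```

In the ranking variation, you would instead compare each student's ordering against the corpus ordering and award points only for words placed in the correct position.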
For example, let’s say your intermediate multi-skill class is covering gerunds (I mean “covering” as in it came up for one reason or another, whether as a front-loaded chapter of a synthetic syllabus or as focus on form after a task). You might decide on a few chunks where gerunds are commonly used, like “I enjoy ___” or “____ is important”. These would be the questions for your game. Your slides might look like this:
I display the frequencies, but this is probably unnecessary. In the variation where you supply the words, it might look like this:
Here, only the ranking and frequency numbers appear on click, and the words are displayed from the beginning.
Other variations I have used in the past look like this:
There is almost literally no end to the kinds of phrases or grammar you can use to play this game. Besides an excuse to use corpora in a mid-level class, this helps turn what could be an abstract grammar lesson into one that respects chunking and the conventions, rather than just the rules, of language. Have fun!
This blog is a way for me to make sense of the complexities of teaching and learning English as a Foreign Language. My aim is to research areas of interest in order to inform my teaching and increase its impact.