Counterproductive SEVP rules for ESL

At the moment, universities across the US are panicking about an ICE rule change which says:

Students attending schools operating entirely online may not take a full online course load and remain in the United States. The U.S. Department of State will not issue visas to students enrolled in schools and/or programs that are fully online for the fall semester nor will U.S. Customs and Border Protection permit these students to enter the United States.

https://www.ice.gov/doclib/sevis/pdf/bcm2007-01.pdf

Obviously, this is a mean-spirited and counterproductive policy, regardless of what one thinks the purpose of higher ed is (unless you think the purpose is mean-spiritedness).

I want to draw attention to a particular change in policy – or rather, the discontinuation of an exemption to a previous policy – that is also counterproductive in a way that is very nuts-and-bolts to ESL teachers.

3) Students attending schools adopting a hybrid model—that is, a mixture of online and in person classes—will be allowed to take more than one class or three credit hours online. These schools must certify to SEVP, through the Form I-20, “Certificate of Eligibility for Nonimmigrant Student Status,” that the program is not entirely online, that the student is not taking an entirely online course load for the fall 2020 semester, and that the student is taking the minimum number of online classes required to make normal progress in their degree program. The above exemptions do not apply to F-1 students in English language training programs or M-1 students, who are not permitted to enroll in any online courses (emphasis added)

Ibid.

This refers to the following section of the rules set out by ICE’s Student and Exchange Visitor Program (SEVP):

(G) For F-1 students enrolled in classes for credit or classroom hours, no more than the equivalent of one class or three credits per session, term, semester, trimester, or quarter may be counted toward the full course of study requirement if the class is taken on-line or through distance education and does not require the student’s physical attendance for classes, examination or other purposes integral to completion of the class. An on-line or distance education course is a course that is offered principally through the use of television, audio, or computer transmission including open broadcast, closed circuit, cable, microwave, or satellite, audio conferencing, or computer conferencing. If the F-1 student’s course of study is in a language study program, no on-line or distance education classes may be considered to count toward a student’s full course of study requirement (emphasis added).

https://www.ice.gov/sevis/schools/reg

This was an unnecessary rule before COVID-19, but is severely counterproductive now. To understand why, it’s important to look at institutions’ guidelines for face-to-face classes in the fall and consider them in light of common practice in ESL classrooms.

Although my university and many others are “reopening” on-campus instruction in the fall, the reality of the classroom will be quite a bit different from pre-COVID times. Citing my own university’s guidelines for the coming fall semester:

Faculty, staff and students are expected to wear face coverings as required by the Governor’s Executive Order. SUU will provide masks for those who do not have their own.

To help with contact tracing efforts this fall, professors will keep seating charts and take attendance in classrooms.

https://www.suu.edu/coronavirus/classroom-instruction.html, emphasis added

In addition to these mandated countermeasures of masks and assigned seating, any professor with common sense will seat students at least 6 feet from each other if space allows, and will certainly keep at least 6 feet away from the students. By themselves, these measures (required and commonsense) are welcome, but combined with the requirement that language classes be held in person, they create the potential for a very unproductive fall 2020 semester for ESL programs.

Main image source: https://www.wsetglobal.com/knowledge-centre/blog/2020/june/17/return-to-the-classroom-post-covid-19 – “The Wine & Spirit Education Trust provides globally recognised education and qualifications in wines, spirits and sake, for professionals and enthusiasts.” Yet another subfield of education that I didn’t know existed.

Consider how hard this makes many, if not most, of the staple activities of the ESL classroom – basically anything other than lectures, which ESL teachers tend to avoid (as do many pedagogically modern teachers in other fields). I was going to make a list of popular activities that are made difficult or impossible under social distancing rules, but there’d be no point – all of them are. Just imagine trying to do any kind of group work with students covering their faces, seated 6 feet apart, and unable to change seats. In the ESL classroom, for many good pedagogical reasons, “group work” is of course not a side order or a topping over the nutritious main course of lectures, but often the main course itself, including as it does:

  • Reading circles
  • Discussion circles
  • Any other type of discussion
  • Peer feedback (at least other than as comments on Google Docs)
  • Group presentations
  • Group projects
  • Information gap activities
  • Minimal pair activities
  • A million things I’m forgetting at the moment

In addition to the above, I can’t imagine a classroom where I stay stuck at the front, unable to interact with my students on a person-to-person basis during class time. It’s quite hard to judge whether students really get the difference between D-identity and A-identity when I can’t listen in on their discussions or pull them aside and ask them a question or two.

I’m not sure, but I suspect that part of the justification for SEVP’s face-to-face rule for language classes is exactly that real-time practice is so important to language acquisition. In that sense, the rule may have been justified as a way to ensure that private ESL schools were giving a pedagogically sound education to students on F-1 visas. If that is true, then what is the point of requiring face-to-face instruction when most face-to-face activities will be impossible to carry out?

The point of this post is not to decry my university’s social distancing guidelines or even its reopening, but to point out that the combination of reopening, social distancing, and the SEVP rule stating that language classes must be face-to-face means that ESL teachers and their students are stuck in a worst-of-both-worlds situation. If asked, I’m sure most of us would say that face-to-face classes are preferable to strictly online ones, but that is because under normal circumstances we make good use of the synchronous and immediate classroom milieu. When we can’t be physically in the same classroom at the same time, we can still use many of the same or similar activities synchronously or asynchronously over the Internet, often with similar or even better outcomes. We’ve now had half of spring semester and all of the summer to figure out how to adapt our classes to online delivery, and at least in my experience, it now seems that many classroom activities actually work better online (modeling pronunciation, for one – I can’t show students nearly as much of the inside of my mouth in person), and I would continue to “outsource” some of my class time to Zoom, Flipgrid, and Google Drive given the choice. Some combination of remote and in-person classes (in other words, hybrid classes) would seem to be ideal. Forcing us back into 18 hours of face-to-face instruction per week with only lectures as an instructional tool exposes us (students and faculty) to risks with not only no reward, but a severe penalty in instructional quality.

False Intermediates

When I was teaching English in Japan, I got to know many false beginners – learners with grammar knowledge but little practical skill. Now that I teach in the US and at the higher end of an academic ESL program, I see them less often, but when I do, the signs are unmistakable: one browser tab always open to Google Translate, long delays in pragmatically simple conversational exchanges, and papers that adhere to some standards of grammar while missing the larger point of the assignment. The term false beginner seems to come from the idea that these students may appear to be beginners, but they’re really not – they just haven’t learned to apply what they know. I don’t believe that this definition accurately describes the phenomenon that I and many other language teachers have observed. Here, I want to expand the range of the term false when applied to learners and question what exactly is false about false beginners.

First of all, what is the grammar knowledge that false beginners supposedly have but can’t apply? Terms like explicit grammar knowledge or declarative knowledge mask large differences between the mental representations of English our students actually have and those we may wish them to have. The first thing that a very old-school MA TESOL English teacher trained to present, practice and produce discrete grammar items would notice if suddenly asked to teach a grammar course in Japan or China is that the students’ explicit knowledge of English is almost entirely 1) encoded in their L1, 2) aimed at direct translation into their L1, and 3) meant to be applied in a manner that displays depth and breadth of intellect rather than automaticity. It therefore only partly, even coincidentally, overlaps with the grammar knowledge that the teacher may have, and is certainly not taught with the goal of making its application less laborious. False beginners don’t just have unapplied knowledge, but often knowledge that the teacher wouldn’t recognize as English in the first place.

Real-world applications of English skill, therefore, are not simply the next step that students haven’t gotten to yet. Speaking in real time, writing papers, or enjoying literature have not been the goals of most false beginners’ English educations, and the knowledge of English that they have is not just unpracticed for these goals but often unpracticable. What is required is not just activation of dormant knowledge, but new knowledge, taught in different ways with a different purpose.

Paradoxically, what is false about false beginners is not the fact that they are beginners, but that people assume that they are not beginners. With respect to either explicit knowledge or implicit knowledge, false beginners are simply beginners whom people treat as if they weren’t because of their success in a related field – as if a helicopter were a kind of false airplane. I’ve never met a false beginner who had any advantage over a “true” beginner; if anything there seemed to be a substantial hole to dig out of. But the local definitions of English competence (the aforementioned regime of translation), encouraged by others and internalized by the student herself or himself, and some behavior that approximates competence, have convinced examiners and placement officials that the student really is acquiring English. The apparent acquisition, which is really just faster and more extensive application of explicit rules of translation, can carry a student quite far in an orthodox EFL or ESL program. I have never seen a student who relied solely on translation all the way through an undergraduate or graduate program, but I have seen a great many get to the higher stages of academic ESL before the sheer amount of language forces them to reassess their approach or just drop out.

I’m not sure what makes the difference between a false beginner who gives up on applying grammar translation fairly early in a mainstream English course and one who sticks with it for years, up to the point when they could be called false intermediates (here, false meaning “not really intermediate”), but two characteristics I’ve noticed have been confidence in their own intelligence (not the intelligence itself – that is a can of worms I’d rather not open) and past success in their first educational culture. Confidence, which in many other cases would be a virtue, encourages learners to continue applying a mentally taxing and arduous routine of translating back and forth between English and their L1, embracing the strain as a welcome challenge. The teacher’s advice that it doesn’t need to be that hard is counterproductive, since the effort is part of the point. Also, learners have been rewarded for years for successful application of their translation skills by proud teachers, admiring classmates, and admission into exclusive programs or schools in their home country. The current teacher’s implications that they were all wrong threaten years of hard-won self-esteem. A combination of factors has made false intermediates strongly identify with translation as a means of approaching English in a way that makes them resistant to correction.

Conversely, one situation that seems to encourage false intermediates or false beginners to course-correct is one in which their less intellectual (by their standards) or less diligent (again, by their standards) countrymen begin leapfrogging them in their new educational culture. A hardworking but taciturn student from Japan can rationalize away the success of an enthusiastic Syrian as just an outcome of the compatibility of two foreign cultures. A dedicated translationist from Shanghai who sees a lackadaisical but gregarious classmate from Qingdao regularly and publicly showing mastery of difficult material on quizzes, class discussions, or presentations may be forced by cognitive dissonance to reassess either their strategy or their intelligence. Luckily, in my experience, the strategy is the one that is reevaluated.

The El Camino

The sunset definitely looked different in Hawaii, Yukino thought – all blurry around the edges, the sun just the brightest spot in a spectrum of colors that seemed to take up the whole sky – as she looked out the rear window of the car. “Car” was probably the best thing to call it. It was low to the ground and looked like a car from the front, but had a wide, flat bed like a pickup truck. It was not really a car or a truck, but a vehicle for which she had no precise word. If she had been back in Japan, she might have said it looked like a kei-tora, but she hadn’t seen one of those since moving to Honolulu 3 weeks ago, and anyway, her host mother was an ESL teacher, not a rice farmer. It definitely had a working engine, and that was enough. She really wanted to get away from her ESL school as quickly as possible and back to her host mother’s house, east of Honolulu.

Her host mother AND teacher. June filled both roles, although not to the same degree of propriety in Yukino’s estimation. She was a pretty normal host mother as far as Yukino could tell – taking her to Diamond Head, making her loco moco, but otherwise giving her space to send LINEs to her friends, which was welcome. These were the things Yukino had been led to expect host mothers to do. But the word “teacher” still didn’t seem appropriate. The way she conducted herself was within the boundaries of normal host mother behavior, but she was way out in left field as a teacher. Yukino wondered how June could even call herself a teacher.

At the start of her first lesson, Yukino had known that her host mother would also be her teacher (the school gave priority to employees of the school for homestay placements), and was consequently more relaxed than she might have been. She had introduced herself to the other students sitting near her – actually with her, since instead of rows of desks they had five peanut-shaped tables – and was setting out her textbook, pen case and electronic dictionary when she saw on the display that class had already started six minutes ago. It was another five minutes before June walked in and nonchalantly joined one of the five tables in casual conversation, not taking her correct place at the front of the room. She proceeded to visit briefly with every table, just as if she were another student, before excusing herself and walking out of the room again, as if the students were to teach themselves. Her classmates, not sure what to do but still excited to get to know each other, were still talking excitedly when one girl at the next table squealed that it was 11:35 and class was over. A few of the students from Africa (Yukino guessed) walked out of the room together, presumably heading to lunch, still bantering in English and occasional French. Yukino and her new friend Jimena (who was Colombian, although her name sounded Japanese), whom she’d met and gotten to know quite well already while they were both waiting in vain for June to return to the classroom, followed tentatively, not quite sure if the concept of “ending” applied to a class that seemed to observe no rules of time. In the lobby, Yukino saw that she and about ten other numbers had gotten a text from June – Jimena had gotten the same one – thanking them for being such enthusiastic learners (what had they learned?) and asking them, for homework, to prepare a short speech introducing one person they’d met at their table that day. Jimena looked happy, and they hurriedly agreed to make each other their speech subjects, but Yukino wondered how she was supposed to do this without any help. Wasn’t the teacher supposed to give them the language first, then guide them through practice with it, and only last, maybe ask them to do it on their own? Had June actually taught them anything in class that day?

She didn’t voice these thoughts to June on the ride back in the strange car-truck vehicle, but just stared at the blurry-edged Hawaiian sunset and wondered how anyone could tell where the sun ended and the sky began.

A Taxonomy of Jargon

I’ve noticed a consistent difficulty among my ESL students: comprehending words that are particular to a certain academic field, analytic lens, or article/book, especially as distinct from their homonyms in the dictionary. My classes often read Duhigg’s The Power of Habit as their main text, which features a unique definition of habit, among many other words. For example, Duhigg defines cue, routine, and reward thus:

This process within our brains is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future... (19)

and later specifies further that a reward “can range from food or drugs that cause physical sensations, to emotional payoffs, such as the feelings of pride that accompany praise or self-congratulation” (Duhigg 25).

Clearly, a reward to Duhigg is something fairly intuitive and immediate, like the taste of a delicious food or relief from an itch, as he later illustrates with examples of rats and monkeys in behaviorist, stimulus-response-type experiments. Yet I consistently find in my students’ papers that they define reward much more similarly to their dictionaries, something like a biweekly paycheck or a college degree, often abstract and far off. This resetting of the definition of the academic jargon we’ve been learning back to its lay version happens with great regularity.

The issue seems to be that students will default to the dictionary definitions of those words when dictionary definitions are available, even if we’ve been talking about the newly learned definitions for weeks. That is, although we’ve been trying to hang a new concept on an old hook, students reaching for the old hook reliably come up with the old concept instead.

This got me thinking about how the jargon (Merriam-Webster: “the technical terminology or characteristic idiom of a special activity or group”) that students encounter throughout their academic careers varies, and how the differences between types of jargon can make mastering them as words and as concepts easier or harder. And though the word “jargon” can have a bit of a negative connotation, I’m not at all interested in castigating academics for using the terminology particular to their field (even to the point of alienating non-experts) or even for coining new and potentially confusing terms – just in identifying some characteristics that make academic jargon more or less transparent for English learners.

What follows is a preliminary attempt to categorize types of jargon according to overlap with other words and concepts.

Pure jargon (new words)

Perhaps the easiest jargon to identify is that which is clearly a new word: a term completely unique to its field and, though rare, one that probably occurs in the dictionary and exists in the students’ L1 with almost the same definition. Some examples of this type of jargon might be:

  • gluon, a type of subatomic particle
  • aphasia, a language disorder
  • semaphore, a way of organizing multiple processes in a computer
  • molality, something having to do with chemistry
  • palantir, a magical stone used for seeing

The most common issue with words like these, in my experience, is that students may translate them into the L1, recognize the translation, and then feel that because they recognized (as opposed to understood) the translation, they know the word. Obviously, someone who hasn’t studied chemistry in any language (like me) won’t really know what molality is.

But in general, these words’ properties as words aren’t what cause confusion, and what difficulties students have in grasping them are likely to be difficulties in grasping the concepts themselves.

Compound jargon

A step up in opacity is the novel compound: a word whose components are known but which, used in combination, refers to a new concept. Some examples might be:

  • the Honeymoon Stage, one of Kalervo Oberg’s 4 stages of culture shock
  • the New Deal, a group of government programs during the Great Depression
  • the Great Depression, since I brought it up
  • nature-identity, one of Gee’s 4 identities (see references)
  • call-out (or cancel) culture, a straw man of conservatives on the Internet
  • blue book, either the publication containing a used car’s estimated value or the value itself

The superficial familiarity of everyday words like “great” and “depression” can yield a false sense of familiarity with the referent of the term “Great Depression”. In my experience though, most problems with understanding these terms come from incorrect parsing of their grammar: many students seem to read “Great Depression”, ignoring its capital letters and interpreting it simply as an adjective followed by a noun, as any depression which is large or severe.

Interpreting compound nouns, or adjective-noun pairs meant as proper nouns, can dovetail with understanding the role of lexical chunks. I have no evidence of this, but ability to comprehend compound jargon may correlate with ability to parse language as chunks rather than strictly as words and grammar.

Homonymous jargon

This class of jargon, sharing spelling and pronunciation with a lay term, is what I was talking about in the introduction, and to me, the type of jargon most likely to cause confusion. I have broken down this group into a few sub-categories:

Homonymous and conceptually similar

The most difficult jargon to distinguish from its vernacular equivalent is jargon which shares a form with a non-jargon word and refers to almost the same thing, but is defined more specifically or to fit within a particular framework. Some examples might be:

  • Cue-Routine-Reward, the three parts of the habit loop as defined by Duhigg
  • Mindset, either growth or fixed as defined by Dweck
  • Grit, perseverance in pursuit of a goal as defined by Duckworth
  • Health, an integer subject to increase with sleep or decrease with physical damage as defined by the Final Fantasy series

Again, the errors seem to stem most commonly from substituting in the lay version of a word’s definition when the technical one was called for.

Homonymous but conceptually different

Some homonymous jargon extends the meaning of a lay term to the point that the connection may not be clear to outsiders. Consider a term like “sweeten” in TV and audio production, which means adding effects like a laugh track to make a final product more palatable, much like sugar does to tea.

[Image: Mitch Hedberg quote – “We’re gonna have to sweeten some of these …”]

Other jargon which is not particularly close in meaning to its lay equivalent might be:

  • Whale, a high-stakes gambler
  • Remainder, to dispose of unsold books (also tricky for morphological reasons; the lay term “remainder” is a noun while the jargon is a verb)
  • Sleeve, the body into which a digitally stored consciousness is inserted (see also “Shell”)

I have never encountered an instance of a student accidentally reverting to the lay definition of a term like this in writing, perhaps because the definitions are so different as to preclude confusion. No one is going to write about a whale visiting a casino and suggest that he may have been disappointed to find the buffet out of krill.

Homonymous and “technically correct”

Within the type of jargon that is a homonym for its lay counterpart are many words whose definitions are distinct but which are taken as the “true” definitions of those words. That is, the technical definition is thought to be what people “really mean” when they use the word in other, non-technical contexts. Some examples might be:

  • Depression (I have a hypothesis that part of what makes psychology so difficult is that so much of its jargon consists of homonyms of everyday words like “self” and “positive”)
  • DNA, a stand-in for “heritage” in popular discourse but not in biology
  • Myth, a story with particular cultural power, interpreted in popular discourse as “a falsehood”
  • Million, liable to be corrected even when clearly meant as a synonym for “a lot” and not exactly 1,000,000 of something

To illustrate the difference between this type of jargon and the other homonymous jargon above, consider that someone who uses “DNA” in a sentence like “I love BBQ. It’s in my DNA” may be “corrected” and forced to rephrase, while someone who uses “whale” to refer to an aquatic mammal will never be reproached for sloppy, non-technical language use, nor will someone who uses “grit” to refer to general hardworkingness be shamed for not using Duckworth’s specific definition.

What to do

Some consciousness-raising work on just how common academic jargon is in university classes and the flexibility of words’ meanings is probably a good idea.

Part of this really should be a thorough introduction not just to the idea that dictionaries (bilingual or monolingual) and translation are not reliable ways to understand course content, but also many illustrations of why, including showing a list of the possible translations of “grit” (for example) and inviting students to compare any of them to the specific definition that the course uses.

Perhaps jargon can be interpreted not as a stumbling block to success but as an opportunity to raise consciousness as to the relationships of words to the concepts that they refer to.

Works Cited

Duhigg, Charles. The Power of Habit: Why We Do What We Do and How to Change. Random House, 2013.

Gee, James Paul. “Identity as an Analytic Lens for Research in Education.” Review of Research in Education, vol. 25, no. 1, 2000, pp. 99-125.

Discussion Circles

Apologies to whoever I stole this idea from – I don’t remember who I should be crediting with it. It has, however, become a staple of my classes.

Previously, class discussions that I’ve worked into lessons have had problems. If the whole class tried to have a discussion together, a few very vocal students dominated the arena while others either tried in vain to compete or happily ceded the floor and retreated into themselves. If discussion groups were smaller, it was harder for non-participants to avoid notice, but discussions still depended on the willingness of a few people to keep a conversation going to prevent them from dissolving into a group of people sitting together, each checking his or her phone. Even groups that stayed on task would default to talkative students talking more and quieter students nodding along.

Discussion circles are a way of facilitating equally participatory conversation among students who naturally vary in their willingness to speak as themselves and voice opinions on either academic or familiar topics. They do this by:

  • Removing some of the burden on the students of representing themselves, because they are playing assigned roles rather than simply voicing their own thoughts,
  • Supplying pragmatically appropriate language, and
  • Encouraging participants, in various ways, to listen carefully to and respect each other’s contributions.

I use 3 versions of Discussion Circles sheets, each of which has 4 roles that participants need to play:

  • Discussion Leader
    • Chooses questions to ask and asks them
    • Begins and ends the meeting
  • Harmonizer
    • Thanks other members for participation
    • Asks for clarification
    • Rephrases others’ opinions
    • Encourages other members to participate
  • Reporter
    • Takes notes on the members’ contributions
    • Asks members to repeat or rephrase
  • Devil’s Advocate
    • Disagrees with other members’ contributions (constructively!)

These tasks are in addition to actually answering the questions that the Discussion Leader asks.

Each of these roles has a worksheet to fill out with sections for before, during, and after the discussion. These are turned in to the teacher afterward. The teacher, incidentally, is not involved in the discussions except to provide a list of questions and assign roles at the beginning.

The version of the worksheet that I use for at least the first 3 times that I do this activity is about 2 pages long per member. The “Before” and “After” sections are fairly involved and take about 10 minutes each. (The discussion itself can take anywhere from 20 minutes to an hour.)

You can get a copy of it here: Discussion circles online (called “online” because it is in a format that is easily distributable on Google Classroom. You can also print it.)

After they are used to the expectations of each role, I use a shortened version of the sheet. This one has a shorter “Before” section and no “After” section.

Find a copy here: Discussion circles lite online

Towards the end of the semester, I use a Turbo version of the sheet in which the participants switch roles with every question.

Get it here: Discussion circles turbo

In my Center for Excellence in Teaching and Learning group that meets on Fridays (basically a community of practice for new professors), I tried a revised Turbo version that had the job Quoter replacing Reporter.

Get it here: Discussion circles turbo 2

I give two grades for this assignment every time it is used: One grade for participation in the meeting and one for completing the worksheet. Now that we’re all online, the participation grade comes from a predetermined member recording their Zoom meeting and sharing the video with me.

Obviously, for the last few weeks of our spring 2020 semester, I’ve been distributing these online and having students share one sheet for the whole group rather than printing and handing out the sheets for an in-class discussion. I find that the distribution of responsibility in Discussion Circles, where everyone has to participate in order to complete their own sheet, suits the slightly impersonal nature of online synchronous discussions fine. Students often remark that they take more easily to some roles than others, but I try to make sure everyone plays every role at least once, so that even if they don’t “naturally” like to disagree with others, they will all be able to do so respectfully when it becomes important.

I find that Discussion Circles are a helpful scaffold for a lot of skills practice that we hope to see in class discussions, to the point that I rarely have a class discussion without them anymore. I hope you get some value from them too.

Thresholds in missed assignments predicting final grades

First, a note on a quick research project that yielded nothing interesting: I checked whether Canvas page views immediately before going remote, immediately after going remote, and the change in page views during that period were correlated with final grades, and they basically weren’t. I was wondering whether students who tended to check Canvas a lot (during the remote instruction period and before it) tended to do better in the class overall, and I didn’t find any evidence of that.

Also, and this will be mentioned again later, in checking the average scores for assignments over the past semester, I noticed that assignments that had to be done in a group were more likely to be completed than those that didn’t. This is interesting to me because setting up a Zoom meeting and talking to classmates, sometimes in other countries, would seem to be harder, not easier, than completing a worksheet by oneself. However, just taking two types of assignments from my Written Language class as examples, Reading Circles, which had to be done as a group via Zoom (or in person before we went remote in March), had a mean score of about 93% for the term, while Classwork, which included many assignments that were completed solo, had an average of 89%. In the Oral Language class, Discussion Circles (a sort of role-playing exercise with questions on an assigned topic) had an average score of 99%, and Classwork 90%. It seems that Zoom meetings and the rare chance at synchronous interaction that they represent facilitate work, despite the pain of setting them up.

In other news, I have just completed my first academic year at the university IEP that I started at full-time last fall. As a celebration we got Thai takeout from one of the three good Thai restaurants in town (there are, mysteriously, no good Indian restaurants for 40 miles in any direction), and I immediately started blogging, vlogging, and tinkering with Google Sheets to fill the void left by work.

I’ve been slowly adding functionality to the Google Sheets that I use to do my end-of-course number crunching, mostly by figuring out new ways to use the FILTER function along with TTEST to see if there are statistically significant differences in my students’ final grades when they are separated into two populations according to some parameter. I put together a master Sheet for the year that included all of my classes between last August and now.

One possible factor that I had noticed anecdotally throughout the year was that students seemed more likely to fail or do poorly in a course because of assignments not turned in at all than because of assignments done poorly. There was no shortage of work that was half-finished or that ignored instructions, but the really low grades for the course usually went to students with work that was never turned in.

So I set up a t-test on my Google Sheet to separate my students into two populations by the % of assignments that received a grade of 0 and look for a statistically significant difference in their final grades. Naturally, one expects students who have more 0s to do worse, but I still wondered where the dividing lines were – did getting 0s on more than 5% of assignments produce statistically significantly different populations? Did 10% do the trick? Is there a more graceful way of expressing this idea than “statistically significantly different”?

The relevant cells in my Google Sheet look like this: [screenshot of the t-test cells]
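If you want to try the same thing on your own gradebook, the formulas are short. A minimal sketch, assuming (hypothetically) that each student’s final grade sits in column B and their % of zero-score assignments in column C:

    Average final grade for students missing 10% or more of assignments:
    =AVERAGE(FILTER(B2:B, C2:C >= 0.1))

    Average final grade for everyone else:
    =AVERAGE(FILTER(B2:B, C2:C < 0.1))

    p-value for a two-tailed, unequal-variance t-test between the two populations:
    =TTEST(FILTER(B2:B, C2:C >= 0.1), FILTER(B2:B, C2:C < 0.1), 2, 3)

Swapping the 0.1 for 0.03 or 0.05, or pointing the condition at a “% of assignments with full scores” column instead, gives the other dividing lines reported below.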

As you can maybe figure out from the above, missing 10% of assignments (regardless of the points that those assignments were worth) produced a statistically significant difference in final grades: those who missed 10% or more of assignments had a final course grade of 66.9% (or D) on average while those who missed less than 10% had an average course grade of 90.8% (or A-).

On the other hand, getting full scores (which in my class means you followed all the directions and didn’t commit any obvious mistakes like failing to capitalize words at the starts of sentences) on more than 50% of assignments also produced a statistically significant difference in final grades: those who got full scores on 50% or more of assignments had a final course grade of 93.2% (or A) on average while those who got full scores on less than 50% had an average course grade of 78.4% (or C+). This isn’t the difference between passing and failing, but the ratio of full scores does produce two populations, one of which fails on average and one of which passes – see below.

Other significant dividing lines were:

  • Missing 3% of assignments
    • If you missed more than 3%, your average grade was 83.3% (B)
    • If you missed 3% or less, your average grade was 92.1% (A-)
  • Missing 5% of assignments
    • If you missed more than 5%, your average grade was 78.6% (C+)
    • If you missed 5% or less, your average grade was 91.7% (A-)
  • Getting full scores on 35% of assignments
    • If you got a full score on more than 35%, your average grade was 90.0% (A-)
    • If you got a full score on 35% or less, your average grade was 68.0% (D+)
  • Getting full scores on 70% of assignments
    • If you got a full score on more than 70%, your average grade was 96.6% (A)
    • If you got a full score on 70% or less, your average grade was 85.0% (B)

As you can see, I am not a prescriptivist on the use of the word “less”.

As you can also see, there are some red lines that pertain to the number of assignments that students can miss before they fall into a statistical danger zone: 10% of assignments missed, or only 35% of assignments with full scores. A student who fails to meet these thresholds is statistically likely to fail.

Statistics like these don’t carry obvious prescriptions about what to do next, but I worry a bit that the number of missed assignments will go up as classes are moved permanently online and assignments lose the additional bit of salience that comes from being on a physical piece of paper that is handed to you by a physical person. I also, for mostly bureaucratic reasons, worry that my grades seem to reflect less “achieving learning outcomes” and more “remembering to check Canvas” – although I’m sure this discrepancy is nearly universal in college classes.

I am considering giving fewer assignments per week that are more involved – fewer “read this article and complete this worksheet” and more “read this article, hold a Zoom discussion, and share the video and a reflection afterward”. We will see if that produces grades that reflect the quality of work rather than the mere existence of it.

Academic ESL and interlanguage: Partially totally effective or totally partially effective (or effective for other purposes)?

Three hypotheses for the observed effectiveness of academic ESL for preparing students for academic work in English:

  1. Academic ESL is perfectly effective at developing interlanguage, but academic ESL classes finish before the end of interlanguage development because students cease being ESL students and matriculate into regular degree programs. Students would still benefit from academic ESL after this point, but rarely have time due to their undergraduate or graduate class schedules. Some stunting occurs in students’ interlanguage because of the premature end of their ESL courses.
  2. Academic ESL is partially effective at developing interlanguage, and academic ESL classes finish at the end of their period of effectiveness. Students would not benefit from more academic ESL after this point because interlanguage development cannot occur through further academic ESL classes. Students are more likely to have stunted interlanguage development because of excessive time spent in ESL than because of a premature start to their degree programs.
  3. Academic ESL is partially effective at developing interlanguage but mainly effective at introducing compensatory strategies for students to use to make up for their lower language skills. Some of these strategies are specific to language learners and others are of use to any college student, but former ESL students in degree programs succeed by using them more than other students. Interlanguage development is less predictive of academic success than application of compensatory strategies.

Earlier this semester, we requested some data from our campus researcher, and he just got back to us. I won’t say what exactly he told us, but it pertained to average GPAs among different populations of undergrads, and it was good news for the apparent effectiveness of our IEP.

That said, we don’t know why our IEP appears to be effective. It is possible that we are getting better at our jobs. It is also possible that we are just recruiting better students. It’s possible that our students are far better than average, but we’re doing a worse-than-average job preparing them for college, resulting in performance that converges on the mean. Assuming that the work we do in class is at least part of the reason, it might help us to better focus our efforts in order to improve even more if we knew what part of what we do in class helps our students the most.

(For most of my career, I was used to the idea that interlanguage development started when students joined my class and stopped when they quit. In EFL, you can’t count much on outside factors to keep the interlanguage development ball rolling – students aren’t part of formal or informal organizations that facilitate regular English use, and their identities accommodate English as a hobby at most. I tried as the owner of an eikaiwa (a private English conversation school) to get students to start pastimes that included English, only to realize that as an eikaiwa teacher, I was the pastime. In short, I was used to thinking of English class as a self-contained unit; anything I wanted my students to do with English we had to do together.

I realized partway through my first year teaching community college ESL in California that we were by design only giving our students a partial education. We wanted to send them off into English 100 with maybe a bit of a head start and without a lot of baggage, but we expected English 100 to continue the work of interlanguage development. I’m sure some of us thought that ESL would still benefit our students, but they had to get on with their credit-bearing classes eventually, and some of us probably thought that ESL was inherently limited in what it could accomplish. There are also those who think that the one and only way a student will come to understand adjective clauses is if the teacher explains adjective clauses, and who have never heard of interlanguage.)

Anyway, this would make a good long-term study project: find a decent sample of former academic ESL students in their undergrad years, give them the TOEFL or IELTS (which they wouldn’t have taken for at least a few semesters), survey them on their “compensatory strategies” (defining those would be a lot of work), and measure all of that against their undergraduate GPAs.

By the way, I’ve started recording some old blog posts as vlogs, seeing as the people who read ELT blogs and the people who watch ELT-related content on YouTube tend to be different. Feel free to stop by and leave a comment about how I don’t look like you expected.

Do the timing and number of edits on a draft predict improvement?

Since I started using Google Classroom for writing classes a few years back, I’ve noticed a pattern in the emails Google sends you whenever a student clears a comment you left. A few times, I’ve been able to tell when a student was still working on a paper past the deadline or if they got enough sleep the night before (emails at 3:20 AM are a bad sign). Most often though, you just find that a lot of students are making edits the morning that a paper is due, as your first email check of the morning features 30+ emails all saying “Bob resolved a comment in Final Essay”.

There exists a tool called Draftback (introduced to me, as with many edtech tools, by Brent Warner), a browser extension for Chrome, that lets you replay the history, letter by letter, of any Google Doc that you have edit access on. Its most obvious utility is as a tool for detecting academic dishonesty that plagiarism checkers like Turnitin miss (like copy/pasted translations, which show up in the editing history as whole sentences or paragraphs appearing all at once as opposed to letter by letter). It also has the benefit of showing you the exact times that edits were made in a document, which you can use to track how quickly students started responding to feedback, how many revisions they made (grouped helpfully into sessions of edits made less than 5 minutes apart), and whether these revisions were all made in the 10 minutes the student said he was just running to the library to print it. Draftback is the kind of tool that you hope not to need most of the time, but is hard to imagine life without when you need it.

This video gives a good introduction to Draftback.

With the pattern in my email inbox fresh in my mind (a term just having ended here), I thought I’d use Draftback to see whether this flurry of last-minute editing had some bearing on grades. To be specific, I used Draftback to help me answer these questions:

  • Do numbers of edits correlate with scores on final drafts (FD) on papers?
  • Does the timing of edits correlate with FD scores?
  • Do either of these correlate with any other numbers of interest?

This required quite a bit of work. First, I copied and pasted rough draft (RD) and FD scores for each of my students’ essays for the past 3 terms, totaling 6 essays, into a big Google Sheet, adding one more column for the change in grade from the RD to the FD (for example, 56% on the RD and 92.5% on the FD yields a change of 65.18%). Then, I generated a replay of the history of each essay separately. Because each essay is typed into the same Google Doc, this gives me the entire history of the essay, from outline to final product. After each replay was generated (they take a few minutes each), I hit the “document graphs and statistics” button in the top right to see times and numbers of edits in easier-to-read form. I manually added up and typed the timing and number of the edits into the Google Sheet above. Last, I thought of some values culled from that data that I might like to see correlated with other values. Extra last, I performed a few t-tests to see if the patterns I was seeing were meaningful.
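The Sheet side of this is much simpler than the data entry. A minimal sketch, assuming (hypothetically) RD scores in column D, FD scores in column E, and a per-essay statistic like total # of edits in column F:

    Change in grade from RD to FD (so 56% and 92.5% yields 65.18%):
    =(E2 - D2) / D2

    Correlation between any two columns, e.g. total # of edits and FD scores:
    =CORREL(F2:F, E2:E)

    p-value for a split at some cutoff, e.g. more vs. fewer than 2000 total edits:
    =TTEST(FILTER(E2:E, F2:F > 2000), FILTER(E2:E, F2:F <= 2000), 2, 3)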

(The luxury of a paragraph about how annoying the data was to compile is part of the reason I put these on my blog instead of writing them up for journals.)

Example “document graphs and statistics” page. From this, I would have copied 1468 edits for the due date (assuming the due date was Monday the 30th), 79 edits 4 days before the due date, and 1911 edits for 5 days before the due date, with 0 edits for every other day.
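Turning per-day counts like those into the values listed below takes only a few more formulas. A minimal sketch, assuming (hypothetically) one essay per row, the seven daily edit counts in B2:H2 with the due date in column H, and a 1/0 class-day flag in row 1:

    Total # of edits:
    =SUM(B2:H2)

    Maximum # of edits per day:
    =MAX(B2:H2)

    # of days with at least 1 edit:
    =COUNTIF(B2:H2, ">0")

    % of edits that occurred on the due date:
    =H2 / SUM(B2:H2)

    % of edits that occurred on a class day (row 1 holds 1 for class days, 0 otherwise):
    =SUMPRODUCT(B2:H2, B$1:H$1) / SUM(B2:H2)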

The values that I thought might say something interesting were:

  • % of edits (out of all edits) that occurred on a class day
    • I’m curious whether students who edit on days when they don’t actually see my face do better – i.e., if students who edit on the weekends write better. Eliminating class days also helpfully eliminates lab days, the two class days a week when all students are basically forced to make edits. Incidentally, our classes meet Mon-Thu and final drafts are always due on the first day of the week. The average across all the essays was 63%, with a standard deviation of 38%.
  • % of edits that occurred on the due date
    • Specifically, before 1 PM – all my final drafts are due at the beginning of class, and all my classes have started at 1 PM this year. My assumption is that a high % of edits on the due date is a sign of poor work habits. The average was 21% with a standard deviation of 31%.
  • total # of edits
    • One would hope that the essay gets better with each edit. This number ranged from near 0 to more than 6000, with both an average and standard deviation of about 1700. Obviously, if you calculate this number yourself, it will depend on the length of the essay – mine were all between 3 and 5 pages.
  • maximum # of edits per day
    • I’m interested in whether a high number of edits per day predicts final grades more than a high number of edits total. That is, I want to know if cram-editing benefits more than slow-and-steady editing. The average and standard deviation for this were both about 1200.
  • # of days with at least 1 edit
    • Same as the above – I want to know if students who edit more often do better than ones who edit in marathon sessions on 1 or 2 days. The average was 3.25 days with a standard deviation of about 1 day.

All of the above were computed from the due date of the last RD to the due date of the FD, up to a maximum of 1 week (my classes last for 6 weeks, and there is very little time between drafts – read more about the writing process in my classes here). When I was done, after several hours of just copying numbers and then making giant correlation tables, I had hints of what to look into more deeply:

2 essays from each student, each taken separately.

As you can see in cells C9-H14 (or duplicated in I3-N8), students didn’t necessarily use the same revision strategies from essay to essay. A student who had a ton of edits on one day for essay 1 might have fewer edits spread out over more days for essay 2, as evidenced by the not-terribly-strong correlations in the statistics between essay 1 and essay 2. To take one example, “days with > 0 edits” on essay 1 was correlated with “days with > 0 edits” on essay 2 at just 0.21 (cell M7). Some of these differences were still statistically significant at p=0.05 (a good enough p for a blog, imo):

  • Students who did > 2000 total edits on essay 1 had an average of 3428 total edits on essay 2. Students who did <= 2000 total edits on essay 1 had an average of 1650 total edits on essay 2.
  • Students who did > 50% of their edits for essay 1 on the due date did an average of 45% of their edits for essay 2 on the due date. Students who did <= 50% of edits on essay 1 on the due date did an average of 17% of their edits for essay 2 on the due date.

Anyway, because it seemed prudent to consider the strategies used on each essay rather than the strategies used by each student, I made a second spreadsheet where the individual essays rather than the students (who each wrote 2 essays) are the subject of comparison, resulting in this much-easier-to-read correlations table:

Here I treat each essay as a unique data point rather than 2 products of the same student.

Columns I and J (or rows 9 and 10) are probably the most interesting to other writing teachers: those hold the correlations between statistics derived from Draftback data and I) final draft scores and J) change in score between the rough draft and final draft. In plain English, the correlations here suggest:

  • As expected, % of edits on class days and % of edits on the due date are negatively correlated with the final grade for the essay. That is, people who did a lot of their edits in class or right before turning in the essay seemed to do worse (but not by much – neither produces statistically significant differences in FD grades or in improvement between RD and FD).
  • Total # of edits and max edits per day are both positively correlated with final grades (and with each other). Editing more tends to produce better essays.
  • Everything that is true for the final scores is also true for the change in scores between RD and FD. The fact that RD scores were even more negatively correlated with % of edits on class days and % of edits on the due date than FD scores were means that the changes appear to be positively correlated with those values, but I take this as meaning that those strategies go along with an improvement from very bad RD scores to mildly bad FD scores.

To give a bit more detail, these were some statistically significant differences (p=0.05):

  • Students who did > 2000 total edits had an average grade of 86.8% on the FD. Students who did <= 2000 total edits had an average grade of 78.7% on the FD.
  • Students who did > 3000 total edits had an average grade improvement of 17.8% between the two drafts. Students who did <= 3000 total edits had an average grade improvement of 4.9%.
  • Students who did edits on > 3 days had an average grade of 84.8% on the FD. Students who did edits on <= 3 days had an average grade of 78.9%.
  • Students who did edits on > 5 days (that is, almost every day) had an average grade improvement of 33.6% between the two drafts. Students who did edits on <= 5 days had an average grade improvement of 5.8%.

The data suggests a fairly uncontroversial model of a good writing student – one who edits often, both in terms of sheer numbers of changes and in terms of frequency of editing sessions. In fact, “model student” rather than “model essay” may be what the data is really pointing at – the amount and timing of the work that went into a particular essay seems sometimes to show more about the student’s other work than it does about the quality of that essay.

For example, it’s not clear why data derived from the time period between RD and FD would be correlated with RD scores (in fact, you would expect some of the correlations to be negative, as a high RD score might tell a student that there is less need for editing), but perhaps the fact that the same data points that are correlated with FD scores are correlated in the same ways with RD scores and final course grades indicates that the data shows something durable about the students who display those patterns (my caveat earlier notwithstanding). It is feasible that the poor work habits evidenced by editing a major paper a few hours before turning it in might affect students’ other grades more than that paper itself.

In fact, this seems to be the major lesson of this little research project. One t-test on % edits on due date was statistically significant – one that compared students’ final course grades. To be precise, students who did > 20% of their total edits on the due date had average course grades of 84.5%. Those who did <= 20% of their total edits on the due date had average course grades of 88.8%.

Just to pursue a hint where it appeared, I went back into my stat sheets for each class for the last year and copied the # of assignments with a grade of 0 (found on the “other stats” sheet) for each student into my big Google Sheet. Indeed, there was a statistically significant difference: students who made > 20% of their edits on the day an essay was due got a score of 0 on 5% of assignments across the term, while students who made <= 20% of their edits on the day an essay was due got a score of 0 on 3.2% of assignments.

Like many characteristics of “good students”, from growth mindset to integrative motivation, whether a pattern of behavior correlates with success and whether it is teachable are two almost unrelated questions. It doesn’t necessarily follow from this research that I should require evidence of editing every day or that I should move due dates forward or back. It does suggest that successful students are successful in many ways, and that editing essays often is one of those ways.

I might just want to tell my students that I really love the Google Docs “cleared comment” emails that I get on Monday morning and I wish I got them all weekend, too.

The Academic Support Catch-22

There is a pattern I’ve noticed among formerly-known-as-remedial “academic support” classes that may work against their intended purpose.

The pattern is a result of the assumption that the subtext of planning and preparation in most college assignments needs to be made text. That is, the assumptions about what needs to happen for a college student to be successful need to be made explicit and accounted for. For example, here is a representation of creative writing that I think pretty accurately captures the work that has to be done vs. what ends up on the page:

[Image: the “writing iceberg” – the finished text above the waterline, the hidden work of planning and preparation below it.]

Academic support often seems to work by taking all of those hidden parts of the writing process out in the open and making them graded assignments themselves. An assignment that in another class might look like this:

Write a research paper on a topic covered in this class. (100 pts)

might turn into a weeks-long writing unit like this:

  • Brainstorming discussion notes (classwork)
  • Research goal discussion: 5 pts
  • Mind map: 2 pts
  • Library scavenger hunt (classwork)
  • Works Cited and Plagiarism worksheet: 5 pts
  • Outline w/ annotated Works Cited page: 10 pts
  • Outline pair feedback (classwork)
  • Introduction in-class writing (not graded)
  • Rough draft 1: 10 pts
  • RD1 peer feedback (classwork)
  • RD1 tutoring visit reflection discussion: 5 pts
  • RD2: 20 pts
  • RD2 professor feedback reflection Flipgrid: 5 pts
  • RD2 office hours appointment: 2 pts
  • FD: 70 pts
  • FD writing process reflection discussion: 5 pts
  • Optional FD re-submission for makeup points
  • Optional FD re-submission for makeup points reflection
  • Optional FD re-submission for makeup points reflection2

Ok, the last two are jokes, but otherwise this writing process, where every step is explained, given its own rubric, shared, and reflected upon, is quite normal for a writing class that is coded “for English learners”, “academic support”, or just has a professor trying a more workshoppy approach.

This can be invaluable unless it sets too strong a precedent for explicit requirements of the writing process in students’ minds. Some students, particularly in ESL, may have no idea at all what the writing process is supposed to entail or how to use the resources like libraries, tutoring, etc. It’s better that at least one class during a college student’s first year puts this all on the record, but it might be counterproductive if too many do. It shouldn’t be lost on us that each step made explicit in the “academic support” writing process makes it resemble a typical college writing assignment less and less. If students expect these steps always to be explicitly outlined, they may neglect them or delay them on assignments where they are not.

The contrast between two types of assignments in my classes crystallizes these concerns for me. The first type resembles the detailed, all-steps-accounted-for workflow above. I have 2 papers in a term whose writing processes basically fill all of the 2 or 3 weeks ahead of their final due dates with discussions, peer review, presentations, and pre-writing. The second type is an "all-term" assignment given the first week of class and due the last week, usually worth a significant number of points but doable in a few hours with the right preparation. Examples of this type of assignment are "go to an on-campus event and take detailed notes" or "email a professor in the department you plan to major in and ask 3 questions".

Students tend to do the first type of assignment with the appropriate level of dedication, preparing them well for the big essays that come at the end of the two- or three-week unit. At the same time, they tend to leave the second type of assignment until the weekend before the last week of class, days before it is due, and often run into problems like not having campus events to go to on Presidents' Day weekend (this post is a topical one). This tells me that, in my classes at least, the precedent of having all the "underwater part of the iceberg" work outlined in detail for some assignments results in the underwater part being ignored for others.

Another factor may be that, for the first type of assignment, students are all doing the same thing at the same time and know that avoiding embarrassment during a week’s worth of discussions and presentations depends on their doing their work. For the second, on the other hand, students may all go to different events, email different professors, etc. all at different times and never have to show their work to their classmates. Again though, it is not unusual for major assignments in other classes to be solitary affairs. The many reasons that students seem to neglect solitary assignments with implicit requirements on time and preparation only highlight the problems that that neglect causes.

I don’t really have a solution for the skewing of expectations that academic support seems to produce – I just verbally warn students that most of the steps in our writing process will need to be taken of their own volition in their History, Psychology or Accounting classes. Maybe I need to give points for reflecting on that warning.

COCA for translationists

(Corpus of Contemporary American English, alongside the other BYU corpora from Mark Davies)

For basically all my career, from my eikaiwa (English conversation school) days to Japanese university to community college to the IEP I teach at now, I've been trying to get my students to see vocabulary as more than lists of words with accompanying translations.

[Image: a typical 英単語 ("English vocabulary") list of words with Japanese translations.]
Source

Sure, knowing one translation of “marry” is probably better than not knowing anything about “marry”, but it really just gets your foot in the door of knowing that word (and leaves you less able to enjoy semantically ambiguous sentences like “The judge married his son”). You still don’t have much of an idea of what kind of person uses that word, in what kind of situation, and (of special concern for fluency) what other words usually surround that word.

Part of what cramming for tests does to language learners (and really learners of anything) is convince them that the minimum amount of knowledge needed to fill in the right bubble is efficient and expedient. One of the longest-running efforts of my career has been trying to disabuse my students of the notion that, where vocabulary is concerned, this kind of efficiency leads to anything worthwhile. To the contrary, the more seemingly extraneous information you have about any given word, the better you will remember it and the more fluently and accurately you will be able to use it.

(Naturally, the place where I first encountered this phenomenon was Japan, where the question "What does that mean?" is almost incomprehensible except as a synonym for "Translate this into Japanese according to the translation list provided by your instructor". But knowing a word and being able to use it (a dichotomy which collapses under any scrutiny) demands (again, a collapsed dichotomy being treated as a single subject) quite a lot more than an abstract token in a foreign language being linked in memory to a more familiar token in one's first language. One can know that "regardless" "means" とにかく or 関係なく in Japanese without knowing what preposition usually follows it, which noun from "outcome", "result", or "upshot" most commonly follows that preposition, or that it has an even more academic ring than its near-synonym "nonetheless" (which doesn't take an accompanying preposition at all). Interestingly, overreliance on translation seems to be something of a vestigial trait of language education in Japan – people justify it for its utility on tests, but the tests themselves haven't required translation in many years.)

Even when my students understand this, however, they still aren't sure how to implement it. I get a lot of positive reactions to comparisons between chunks in English and in their first languages (asking how many words a child hears in phrases like "Idowanna", やだ, 我不想, or je veux pas – all of them roughly "I don't wanna") or between words and animals (a lion can technically eat roast turkey, but what do lions usually eat?). Students readily identify chunks and idiomatic expressions that they hear outside of class ("Would you like to" and "got it" are some of the most-noticed). In the run-up to a vocabulary quiz, though, where I want students to show all that they know about vocabulary, what I see most often on students' desks is the familiar list of translated pairs:

regardless 而不管 however 然而 nonetheless 尽管如此 nevertheless 但是 notwithstanding 虽然

It seems that students, when they "study", tend to default to the strategies that they think got them through high school. Usually, students with this tendency also show familiar score patterns on quizzes: fine-to-high scores on the cloze (fill-in-the-blank) questions and low scores on anything outside the narrow range where translation is applicable. I take this as a sign that they can't see how to fit knowledge of vocabulary's other features into their customary mode of studying.

I started using COCA in class as a way to plug the fuzzy, often-neglected dimensions of vocabulary learning – in particular register, genre, colligation, and collocation – into a behavioral pattern that students have completely mastered. That is, COCA is a way to make a more complete picture of vocabulary compatible with my students' most familiar way of studying – sitting at a desk and looking up discrete words.

With that long preamble over, let’s have a look at the specific activities I use over the course of a term.

First glance at COCA

Starting on the first day, words of particular interest are added to a class web site – either my own, Vocabulary.com, or Quizlet (I’ve tried quite a few) – and drawn on for review, activities, and quizzes. Starting in week two, I introduce the idea of chunks (which they need in order to complete the reading circles sheets from that week on), either with a presentation or less formally, for example with a quiz game.

In a shorter term, I’ll introduce COCA the same week, or in a longer semester, around week 4 (my IEP has lightning-quick 6-week terms). The introduction usually has to be done in the lab – it’s much better if each student can do his or her own searches. I alternate between a worksheet and a presentation for the first introduction. This takes about an hour.

From experience, students never fail to see the utility of COCA at this stage and never seem to have trouble with the idea of another online resource. The issues that typically arise on the first day are:

  1. COCA locks out searches from IP addresses if there are too many in one day (as in a class of 20 or so all using COCA for the first time in a lab). This usually starts to afflict my classes after the first 20 minutes or so of searches.
  2. At a minimum, students have to create accounts after the first few searches. This used to require a .edu email address, but doesn't seem to anymore.
  3. The use of spaces on COCA is idiosyncratic. A search for ban_nn* (without a space) will find instances of "ban" used as a noun, while ban _nn* (with a space) will find "ban" plus any noun, for example "ban treaty" or, hilariously, "ban ki-moon". ban* (without a space) will find any word starting with "ban", and ban * (with a space) will find "ban" plus any word or punctuation mark. Punctuation needs to be separated with spaces as well. These rules trip up students fairly early on, as they search for, for example, due to the fact that* and don't find what they expect. (The patterns are summarized below.)
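Since the space rules are the single most reliable source of confusion, here are the four patterns side by side (the extra ban* examples are my own illustrations):

  • ban_nn* (no space) – instances of "ban" tagged as a noun
  • ban _nn* (with a space) – "ban" plus any noun: "ban treaty", "ban ki-moon"
  • ban* (no space) – any word beginning with "ban", e.g. "ban", "band", "banner"
  • ban * (with a space) – "ban" plus any following word or punctuation mark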

Weekly activities

After the first introduction, COCA will be in at least one homework or classwork assignment every week.

Classwork

From time to time, but especially before quizzes, students do a jigsaw-style group activity I call vocabulary circles. As you can see, a good half of it is COCA-derived. If you don't know how these usually work: students are assigned different jobs for one word per group, compare notes with the "experts" who had the same job in other groups, reconvene to share with their own groups, and then take turns presenting all of their group's work to their classmates.

Reading

COCA searches are a part of many of the reading circles sheets I use (reading circles are the only way I do any intensive reading in class). Vocabulary specialists (or whatever you call them) are always responsible for chunks as a category of vocabulary as well as collocations for other words.

Discussions

Starting the week that COCA is introduced, weekly “Vocabulary Logs” on Canvas include COCA work like that reproduced below:

This week, you must use COCA to find something interesting about a word from our class vocabulary list. You must find these 3 things:

What other words usually come before and after that word?
Who usually uses that word? (For example, lawyers, academic writers, news anchors, etc.)
Which forms of the word are the most common? (For example, “present simple”, “plural”, “adverb”, etc.)

You get 6 points for answering all of these questions.
Then, in a reply, use a classmate’s word in a new example sentence that you make. This section will be graded on correctness, so read your classmate’s post carefully. (2 pts)

Or this option to take translationism head-on:

This week, you will compare a word from another language (for example, your first language) to a word in English. The words should be translations of each other.
You will point out how the two words are similar or different in these areas:

Collocation: Do the same or similar kinds of words come before or after the words?
Grammar: Are the words the same part of speech? Are the rules for the parts of speech different in the two languages?
Register: Do the words appear in the same kinds of situations? Are they similar in formality?
Meaning: Do the words have second or third meanings that are different?

This post is worth 6 points. Reply to a classmate for 1 more point.

Quizzes

Once COCA has been introduced, the quizzes in my classes all have some explicitly COCA-derived questions and some questions graded on COCA-relevant considerations.

In questions like the one below, “grammar” includes part of speech and colligation.

Use the word in a sentence that makes the meaning clear. (1 pt for grammar and 1 pt for clear meaning)
(sustainable) _____________________________________________________________

Some questions target collocations specifically (ones that have been discussed in class):

Circle the most common collocation. (1 pt each)
A difficult environment can precipitate ( fights / conflict / argument ).
Adaptation ( onto / to / with ) a new culture takes time.

Other questions target the colligations of vocabulary that should be familiar for other reasons:

Fill in the blank with one of the following. (1 pt each)
( Regardless of / Owing to / Because / Also )
_______________________ the waiter made a mistake with our order, our meal was free. _______________________, the chef sent us a free dessert. Lucky us!

Students cannot have COCA open during the quiz, but they can (and are advised to) get to know the words inside and out beforehand. As you may have seen, our vocabulary lists can grow fairly long by the end of the term, but words often appear on more than one quiz.

Essays

See my last post on the subject.

I am getting on board the “reflection as revision” train – grading reflection on grammar instead of grammatical accuracy on all drafts besides the first. COCA is the vehicle I use for this.

Conclusions

I presented this to you as a way to get students with an unhealthy focus on one-to-one translation to think about vocabulary in a way that better facilitates real-world use. Actually, it works even better with students already predisposed to think of vocabulary in more holistic terms – but those students would often turn out to be fairly good learners given enough input alone. The advantage of using COCA is that it can easily piggyback on habits that certain students may overuse – many of my students have browser extensions on their computers that translate any word the mouse hovers over. Adding one more dictionary-like tool, one that includes what dictionaries miss, is a way to swim with that tendency rather than against it.