Thresholds in missed assignments predicting final grades

First, a note on a quick research project that yielded nothing interesting: I checked whether Canvas page views immediately before going remote, immediately after going remote, and the change in page views during that period were correlated with final grades, and they basically weren’t. I was wondering whether students who tended to check Canvas a lot (during the remote instruction period and before it) tended to do better in the class overall, and I didn’t find any evidence of that.

Also, and this will come up again later, in checking the average scores for assignments over the past semester, I noticed that assignments that had to be done in a group were more likely to be completed than those that weren’t. This is interesting to me because setting up a Zoom meeting and talking to classmates, sometimes in other countries, would seem to be harder, not easier, than completing a worksheet by oneself. To take two types of assignments from my Written Language class as examples: Reading Circles, which had to be done as a group via Zoom (or in person before we went remote in March), had a mean score of about 93% for the term, while Classwork, which included many assignments completed solo, had an average of 89%. In the Oral Language class, Discussion Circles (a sort of role-playing exercise with questions on an assigned topic) had an average score of 99%, and Classwork 90%. It seems that Zoom meetings, and the rare chance at synchronous interaction that they represent, facilitate work despite the pain of setting them up.

In other news, I have just completed my first academic year at the university IEP that I started at full-time last fall. As a celebration we got Thai takeout from one of the three good Thai restaurants in town (there are, mysteriously, no good Indian restaurants for 40 miles in any direction), and I immediately started blogging, vlogging, and tinkering with Google Sheets to fill the void left by work.

I’ve been slowly adding functionality to the Google Sheets that I use to do my end-of-course number crunching, mostly by figuring out new ways to use the FILTER function along with TTEST to see if there are statistically significant differences in my students’ final grades when they are separated into two populations according to some parameter. I put together a master Sheet for the year that included all of my classes between last August and now.

One pattern that I had noticed anecdotally throughout the year was that students seemed more likely to fail or do poorly because of assignments not turned in at all than because of assignments done poorly. There was no shortage of work that was half-finished or that ignored instructions, but the really low grades for the course usually belonged to students whose work was never turned in.

So I set up a t-test on my Google Sheet to separate my students into two populations by the % of assignments that received a grade of 0 and look for a statistically significant difference in their final grades. Naturally, one expects students who have more 0s to do worse, but I still wondered where the dividing lines were – did getting 0s on more than 5% of assignments produce statistically significantly different populations? Did 10% do the trick? Is there a more graceful way of expressing this idea than “statistically significantly different”?
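The split-and-compare step doesn’t have to live in Sheets. Here is a minimal Python sketch of the same idea, using made-up (% of zeroed assignments, final grade) pairs rather than my real gradebook; Welch’s t statistic stands in for the TTEST call (an actual p-value would need a t-distribution table or a stats library):

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# (fraction of assignments scored 0, final grade) -- hypothetical students
students = [(0.00, 95), (0.02, 91), (0.04, 89), (0.05, 92),
            (0.12, 70), (0.15, 64), (0.20, 58), (0.08, 88)]

threshold = 0.10  # the dividing line being tested
below = [g for z, g in students if z < threshold]
above = [g for z, g in students if z >= threshold]
print(round(welch_t(below, above), 2))  # a large |t| hints at a real difference
```

In Sheets itself, the equivalent is feeding two FILTERed ranges into TTEST, e.g. `=TTEST(FILTER(B:B, C:C >= 0.1), FILTER(B:B, C:C < 0.1), 2, 3)` (column letters hypothetical; type 3 is the unequal-variance test).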

The relevant cells in my Google Sheet look like this:

As you can maybe figure out from the above, missing 10% of assignments (regardless of the points that those assignments were worth) produced a statistically significant difference in final grades: those who missed 10% or more of assignments had a final course grade of 66.9% (or D) on average while those who missed less than 10% had an average course grade of 90.8% (or A-).

On the other hand, getting full scores (which in my class means you followed all the directions and didn’t commit any obvious mistakes like failing to capitalize words at the starts of sentences) on more than 50% of assignments also produced a statistically significant difference in final grades: those who got full scores on 50% or more of assignments had a final course grade of 93.2% (or A) on average while those who got full scores on less than 50% had an average course grade of 78.4% (or C+). This isn’t the difference between passing and failing, but the ratio of full scores does produce two populations, one of which fails on average and one of which passes – see below.

Other significant dividing lines were:

  • Missing 3% of assignments
    • If you missed more than 3%, your average grade was 83.3% (B)
    • If you missed 3% or less, your average grade was 92.1% (A-)
  • Missing 5% of assignments
    • If you missed more than 5%, your average grade was 78.6% (C+)
    • If you missed 5% or less, your average grade was 91.7% (A-)
  • Getting full scores on 35% of assignments
    • If you got a full score on more than 35%, your average grade was 90.0% (A-)
    • If you got a full score on 35% or less, your average grade was 68.0% (D+)
  • Getting full scores on 70% of assignments
    • If you got a full score on more than 70%, your average grade was 96.6% (A)
    • If you got a full score on 70% or less, your average grade was 85.0% (B)

As you can see, I am not a prescriptivist on the use of the word “less”.

As you can also see, there are some red lines that pertain to the number of assignments that students can miss before they fall into a statistical danger zone: 10% of assignments missed, or only 35% of assignments with full scores. A student who fails to meet these thresholds is statistically likely to fail.

Statistics like these don’t carry obvious prescriptions about what to do next, but I worry a bit that the number of missed assignments will go up as classes are moved permanently online and assignments lose the additional bit of salience that comes from being on a physical piece of paper that is handed to you by a physical person. I also, for mostly bureaucratic reasons, worry that my grades seem to reflect less “achieving learning outcomes” and more “remembering to check Canvas” – although I’m sure this discrepancy is nearly universal in college classes.

I am considering giving fewer, more involved assignments per week – fewer “read this article and complete this worksheet” and more “read this article, hold a Zoom discussion, and share the video and a reflection afterward”. We will see if that produces grades that reflect the quality of work rather than the mere existence of it.

Do the timing and number of edits on a draft predict improvement?

Since I started using Google Classroom for writing classes a few years back, I’ve noticed a pattern in the emails Google sends you whenever a student clears a comment you left. A few times, I’ve been able to tell when a student was still working on a paper past the deadline or if they got enough sleep the night before (emails at 3:20 AM are a bad sign). Most often though, you just find that a lot of students are making edits the morning that a paper is due, as your first email check of the morning features 30+ emails all saying “Bob resolved a comment in Final Essay”.

There exists a tool called Draftback (introduced to me, as with many edtech tools, by Brent Warner), a browser extension for Chrome, that lets you replay the history, letter by letter, of any Google Doc that you have edit access on. Its most obvious utility is as a tool for detecting academic dishonesty that plagiarism checkers like Turnitin miss (like copy/pasted translations, which show up in the editing history as whole sentences or paragraphs appearing all at once as opposed to letter by letter). It also has the benefit of showing you the exact times that edits were made in a document, which you can use to track how quickly students started responding to feedback, how many revisions they made (grouped helpfully into sessions of edits made less than 5 minutes apart), and whether these revisions were all made in the 10 minutes the student said he was just running to the library to print it. Draftback is the kind of tool that you hope not to need most of the time, but is hard to imagine life without when you need it.

This video gives a good introduction to Draftback.

With the pattern in my email inbox fresh in my mind (a term just having ended here), I thought I’d use Draftback to see whether this flurry of last-minute editing had some bearing on grades. To be specific, I used Draftback to help me answer these questions:

  • Do numbers of edits correlate with scores on final drafts (FD) on papers?
  • Does the timing of edits correlate with FD scores?
  • Do either of these correlate with any other numbers of interest?

This required quite a bit of work. First, I copied and pasted rough draft (RD) and FD scores for each one of my students’ essays for the past 3 terms, totalling 6 essays, into a big Google Sheet, adding one more column for change in grade from the RD to the FD (for example, 56% on the RD and 92.5% on the FD yields a change of 65.18%). Then, I generated a replay of the history of each essay separately. Because each essay is typed into the same Google Doc, this gives me the entire history of the essay, from outline to final product. After each replay was generated (they take a few minutes each), I hit the “document graphs and statistics” button in the top right to see times and numbers of edits in easier-to-read form. I manually added up and typed the timing and number of the edits into the Google Sheet above. Last, I thought of some values culled from that data I might like to see correlated with other values. Extra last, I performed a few t-tests to see if the patterns I was seeing were meaningful.
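To be explicit, the change-in-grade column is relative improvement over the RD score, not a simple difference – the calculation behind the example above:

```python
def pct_change(rd, fd):
    """Relative change from rough-draft score to final-draft score, in percent."""
    return (fd - rd) / rd * 100

print(round(pct_change(56, 92.5), 2))  # the example above: 65.18
```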

(The luxury of a paragraph about how annoying the data was to compile is part of the reason I put these on my blog instead of writing them up for journals.)

Example “document graphs and statistics” page. From this, I would have copied 1468 edits for the due date (assuming the due date was Monday the 30th), 79 edits 4 days before the due date, and 1911 edits for 5 days before the due date, with 0 edits for every other day.

The values that I thought might say something interesting were:

  • % of edits (out of all edits) that occurred on a class day
    • I’m curious whether students who edit on days when they don’t actually see my face do better – i.e., if students who edit on the weekends write better. Eliminating class days also helpfully eliminates lab days, the two class days a week when all students are basically forced to make edits. Incidentally, our classes meet Mon-Thu and final drafts are always due on the first day of the week. The average across all the essays was 63%, with a standard deviation of 38%.
  • % of edits that occurred on the due date
    • Specifically, before 1 PM – all my final drafts are due at the beginning of class, and all my classes have started at 1 PM this year. My assumption is that a high % of edits on the due date is a sign of poor work habits. The average was 21% with a standard deviation of 31%.
  • total # of edits
    • One would hope that the essay gets better with each edit. This number ranged from near 0 to more than 6000, with both an average and standard deviation of about 1700. Obviously, if you calculate this number yourself, it will depend on the length of the essay – mine were all between 3 and 5 pages.
  • maximum # of edits per day
    • I’m interested in whether a high number of edits per day predicts final grades more than a high number of edits total. That is, I want to know if cram-editing benefits more than slow-and-steady editing. The average and standard deviation for this were both about 1200.
  • # of days with at least 1 edit
    • Same as the above – I want to know if students who edit more often do better than ones who edit in marathon sessions on 1 or 2 days. The average was 3.25 days with a standard deviation of about 1 day.
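All five of these statistics fall out of Draftback’s per-day edit counts once you tag each day. A sketch with hypothetical per-day numbers (the day labels and class-day/due-date flags are invented; only the edit counts echo the screenshot example earlier):

```python
# One essay's history: (day, is_class_day, is_due_date, edits) -- hypothetical
days = [
    ("Mon", True,  False, 1911),
    ("Tue", True,  False, 79),
    ("Sat", False, False, 300),
    ("Sun", False, False, 0),
    ("Mon", True,  True,  1468),  # due date, edits made before 1 PM
]

total = sum(n for *_, n in days)                            # 3758
pct_class_days = sum(n for _, cls, _, n in days if cls) / total   # ~92%
pct_due_date = sum(n for _, _, due, n in days if due) / total     # ~39%
max_per_day = max(n for *_, n in days)                      # 1911
days_edited = sum(1 for *_, n in days if n > 0)             # 4
```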

All of the above were computed from the due date of the last RD to the due date of the FD, up to a maximum of 1 week (my classes last for 6 weeks, and there is very little time between drafts – read more about the writing process in my classes here). When I was done, after several hours of just copying numbers and then making giant correlation tables, I had hints of what to look into more deeply:

2 essays from each student, each taken separately.

As you can see in cells C9-H14 (or duplicated in I3-N8), students didn’t necessarily use the same revision strategies from essay to essay. A student who had a ton of edits on one day for essay 1 might have fewer edits spread out over more days for essay 2, as evidenced by the not-terribly-strong correlations in the statistics between essay 1 and essay 2. To take one example, “days with > 0 edits” on essay 1 was correlated with “days with > 0 edits” on essay 2 at just 0.21 (cell M7). Some of these differences were still statistically significant at p=0.05 (a good enough p for a blog, imo):

  • Students who did > 2000 total edits on essay 1 had an average of 3428 total edits on essay 2. Students who did <= 2000 total edits on essay 1 had an average of 1650 total edits on essay 2.
  • Students who did > 50% of their edits for essay 1 on the due date did an average of 45% of their edits for essay 2 on the due date. Students who did <= 50% of edits on essay 1 on the due date did an average of 17% of their edits for essay 2 on the due date.

Anyway, because it seemed prudent to consider the strategies used on each essay rather than the strategies used by each student, I made a second spreadsheet where the individual essays rather than the students (who each wrote 2 essays) are the subject of comparison, resulting in this much-easier-to-read correlations table:

Here I treat each essay as a unique data point rather than 2 products of the same student.

Columns I and J (or rows 9 and 10) are probably the most interesting to other writing teachers: those hold the correlations between statistics derived from Draftback data and I) final draft scores and J) change in score between the rough draft and final draft. In plain English, the correlations here suggest:

  • As expected, % of edits on class days and % of edits on the due date are negatively correlated with the final grade for the essay. That is, people who did a lot of their edits in class or right before turning in the essay seemed to do worse (but not by much – neither produces statistically significant differences in FD grades or in improvement between RD and FD).
  • Total # of edits and max edits per day are both positively correlated with final grades (and with each other). Editing more tends to produce better essays.
  • Everything that is true for the final scores is also true for the change in scores between RD and FD. The fact that RD scores were even more negatively correlated with % edits on class days and % edits on the due date than FD scores were means that the change in scores appears positively correlated with those habits, but I take that as meaning those strategies are associated with an improvement from very bad RD scores to merely mildly bad FD scores.
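The correlations behind these observations are plain Pearson coefficients, which a sheet can compute with CORREL. A self-contained sketch with invented (total edits, FD score) pairs, just to show the computation:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data pointing the same direction the real numbers did:
total_edits = [500, 900, 1200, 2100, 2800, 3400]
fd_scores = [72, 75, 78, 84, 88, 92]
r = pearson(total_edits, fd_scores)  # strongly positive in this toy data
```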

To give a bit more detail, these were some statistically significant differences (p=0.05):

  • Students who did > 2000 total edits had an average grade of 86.8% on the FD. Students who did <= 2000 total edits had an average grade of 78.7% on the FD.
  • Students who did > 3000 total edits had an average grade improvement of 17.8% between the two drafts. Students who did <= 3000 total edits had an average grade improvement of 4.9%.
  • Students who did edits on > 3 days had an average grade of 84.8% on the FD. Students who did edits on <= 3 days had an average grade of 78.9%.
  • Students who did edits on > 5 days (that is, almost every day) had an average grade improvement of 33.6% between the two drafts. Students who did edits on <= 5 days had an average grade improvement of 5.8%.

The data suggests a fairly uncontroversial model of a good writing student – one who edits often, both in terms of sheer numbers of changes and in terms of frequency of editing sessions. In fact, “model student” rather than “model essay” may be what the data is really pointing at – the amount and timing of the work that went into a particular essay seems sometimes to show more about the student’s other work than it does about the quality of that essay.

For example, it’s not clear why data derived from the time period between RD and FD would be correlated with RD scores (in fact, you would expect some of the correlations to be negative, as a high RD score might tell a student that there is less need for editing). But the fact that the same data points that are correlated with FD scores are correlated in the same ways with RD scores and final course grades suggests that the data captures something durable about the students who display these habits (my caveat earlier notwithstanding). It is plausible that the poor work habits evidenced by editing a major paper a few hours before turning it in affect students’ other grades more than they affect that paper itself.

In fact, this seems to be the major lesson of this little research project. One t-test on % edits on due date was statistically significant – one that compared students’ final course grades. To be precise, students who did > 20% of their total edits on the due date had average course grades of 84.5%. Those who did <= 20% of their total edits on the due date had average course grades of 88.8%.

Just to pursue a hint where it appeared, I went back into my stat sheets for each class for the last year and copied the # of assignments with a grade of 0 (found on the “other stats” sheet) for each student into my big Google Sheet. Indeed, there was a statistically significant difference: students who made > 20% of their edits on the day an essay was due got a score of 0 on 5% of assignments across the term, while students who made <= 20% of their edits on the day an essay was due got a score of 0 on 3.2% of assignments.

Like many characteristics of “good students”, from growth mindset to integrative motivation, whether a pattern of behavior correlates with success and whether it is teachable are two almost unrelated questions. It doesn’t necessarily follow from this research that I should require evidence of editing every day or that I should move due dates forward or back. It does suggest that successful students are successful in many ways, and that editing essays often is one of those ways.

I might just want to tell my students that I really love the Google Docs “cleared comment” emails that I get on Monday morning and I wish I got them all weekend, too.

Teach a man to find correlations, he posts them for a lifetime

Aphorism showing its age aside, this post is designed for both men and women who use Canvas and are curious about statistics that may be hiding in their classes’ grades.

I have my own data to share about this semester’s classes, but first, here is a tool that you can use to do the same:

Stat sheet for grades 1.1

And an explanation of how to use it:

On to what I found.

I had 4 classes this semester – 2 Oral Language classes and 2 Written Language classes, both in the 2nd to last term of my university’s IEP. My university’s IEP works a bit unusually – my 4 classes were just 2 groups of people meeting for 4.5 hours a day 4 days a week, about half of which was “Oral Language” and half of which was “Written Language”. The first group of people were my students for the first “term” (=half of a semester), and the second group were mine for the second term. All told, I still had 4 gradebooks on Canvas to export and fiddle with. Between the 4 of them, I found these interesting statistical tidbits:

Scores of 0 are more predictive of final grades than full scores are

One would expect the number of 0s on assignments to correlate negatively with final grades, and the number of full scores to do the opposite. That is, thankfully, true. However, they correlate at different strengths – across all my classes, on average, 0s are more strongly (negatively) correlated with final grades than full scores are (positively) correlated. The reason for this is that full scores were more evenly distributed among all students than 0 scores, which were concentrated among a few students. The one class for which this was not true was the one in which I changed my late work policy and started giving 1/2 credit for certain late assignments.
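The concentration effect is easy to reproduce: when zeros cluster in a couple of students whose grades crater, while full scores are spread across everyone, the zero count correlates more strongly (in absolute value) with final grades. A toy example with entirely invented counts:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical class: zeros concentrated in two students, fulls spread out
zeros  = [0, 0, 0, 0, 1, 0, 6, 8]     # count of zero-scored assignments
fulls  = [12, 9, 11, 8, 10, 7, 3, 2]  # count of full-scored assignments
grades = [94, 90, 92, 88, 85, 86, 62, 55]

r_zero = pearson(zeros, grades)   # negative
r_full = pearson(fulls, grades)   # positive, but weaker in absolute value
```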

This would not be a cause for any particular change except for 2 reasons: 1) as shown by the last class, many of the 0s that students were getting were from late work rather than unsubmitted work, and 2) we have a fairly strict policy about grading by SLOs (student learning outcomes, one of the first abbreviations I had to learn upon my return to the USA after years in Japan), and nowhere in our SLOs does it say that students should learn the sometimes-merciless grading policies that one may encounter at university.

Therefore, I should really make the “late work gets partial credit” policy permanent. I should also probably give fewer full scores.

5% 0s is a line in the sand

I enjoy running t-tests to see what values in what grade categories produce statistically significant differences (p=0.01) in my students’ final grades. One t-test I ran (on the “other stats” sheet in the file linked above) checked whether students who missed more than 5% of assignments differed in statistically significant ways from those who didn’t. It turns out that they did, in all 4 of my classes this semester. On the other hand, those who missed more than 2% of assignments didn’t. Perhaps I should offer an opportunity to make up homework on about 2% of assignments (as I already do for classwork).

I’m hoping that my future classes have grades that reflect the average quality of their work, which in turn reflects their ability to do academic work in English, rather than their tendency to check due dates and read rubrics thoroughly on Canvas. These are important skills, but I don’t want to make them a bottleneck through which every grade must pass.

RDs need a bump, FDs need a nerf

Across 4 essays in both Written Language classes, the average correlation of rough draft scores with final grades was 0.70. The average correlation of final draft scores with final grades was 0.76. Since final drafts are worth at least twice as many points as rough drafts, this is rather surprising – even more so because for 3 of the 4 essays, the rough drafts’ correlations are actually higher than the final drafts’ (the last had a very low correlation for the rough drafts).

I’ve been making changes to my writing process over the last few semesters, and it seems I need to make a few more. I think part of the comparatively low correlations for final drafts is due to my grading practices – I suspect I go easier on final drafts precisely because they’re worth so many points. My average scores for final drafts are higher than for rough drafts, and the standard deviations are lower – roughly 62%-95% with an average of 78% for rough drafts and 65%-95% with an average of 80% for final drafts. It’s not a huge difference, but looking back at the scores now, they don’t seem to reflect the range in quality of the essays. Part of the high correlations for the rough drafts is also due to the skills involved in producing a first draft – planning, reading, responding to a prompt, and a bit of grammar – which are assessed in a lot of other assignments as well. Final drafts, meanwhile, assess (in addition to the same things that first drafts assess, but less directly) responding to criticism and editing, which don’t figure largely in many other assignments. Seeing as first drafts track more of the skills that I care about, and I seem to grade them with less of a high-stakes mentality, I should probably weight them more. Final drafts, since they assess a somewhat narrow range of skills, I should weight less – or even break my grades for final drafts into smaller sub-assignments, like the COCA assignments I currently use, plus a written response to criticism and proof of visiting tutors, instead of trying to read those things indirectly into the final draft.

I need to keep in mind, too, that I’m not necessarily serving my students well if I introduce them to a writing process that none of their psychology, history, or other professors will use – I hear that most papers turned in for any class other than English are just final drafts, already assumed to be revised and polished to a sheen. Maybe having one paper like this per term is also justifiable simply as preparation for being taught by PhDs who know more than anyone else in the world about the behavior of certain species of field mice under certain conditions but have never studied pedagogy.

Look forward to more like this same time next semester, and let me know if you find the sheets useful for your own classes.

Goodbye to California, pt. 1

Shortly after my acquiescent post on the constant rejection one faces applying for full-time ESL jobs, I got an email curiously positive in nature and free of formulaic boilerplate. I had gotten so used to rejection that I almost didn’t comprehend it at first – but it was an invitation to interview, something I had gotten just a few times in the years since my MA. And after that first interview on Skype, I got another such email from the same place, inviting me for a campus visit. When the date came in late May, after I made sure my grading for the weekend was already done, I boarded a plane at John Wayne Airport at 4 AM and spent the whole day in a state besides the one that I have lived in since returning to the US in 2016.

Now, I was breathing such rarefied air at this point that I felt zero pressure to succeed, happy to plant my flag at the “second interview” stage before what I assumed would be a quick descent back down to solid adjunct ground. This was a Monday. I had classes again at my usual schools on Tuesday and plenty of proctoring and grading to do after that to help push the entire episode into the past tense – I was already imagining the conversations I would have in the break room at all the same schools next semester about the time I came this close to getting a full-time position.

But as a call a few days later informed me, I did get it, and very soon after this post goes up, I’ll be starting my first classes there.

By crazy when-it-rains-it-pours coincidence, this was the 2nd full-time job offer I took this year – although the first was a contract only for the summer. That job, which just ended, has given me a bit of a sneak preview of my life as a full-time teacher in a context other than Californian community colleges. I thought I would share a bit of my reflections here, both as a document of my thoughts for myself and as a guide for other adjuncts hoping to do something similar.

Adjunct Goodbyes and Full-time Goodbyes

I’m excited about my new job, but I do have a few regrets about leaving the colleges where I teach now. One of those regrets is that I did many things for the last time at my main schools without realizing they were the last times. I had my last norming meeting (and I enjoy those), my last walk with a student between the classroom and the lab to show them where it is, and my last unexpectedly long pause while the projector warms up, all without knowing that I would never do those things there again. I saw a bunch of people in passing in a hallway or copy room and said some simple words of greeting or an inside joke not realizing that those were the last times I’d be doing that with those people. Not to strike too melodramatic a tone, but for the most part these were the first workplace acquaintances I made in California, and they witnessed my whole process of getting my feet wet, asking silly or obvious questions really politely (“Sorry if this is obvious to everyone here but me, but what is an SLO?”). I will probably like my new coworkers – teachers are usually nice – but they won’t be my first coworkers in the US. I have a lot of words of thanks to go around, but I won’t be specific here. If we spent more than one microwave’s cooking time together, I appreciated it.

There are a few students who had let me know that they wanted to sign up for my fall classes with whom I’m not holding up my end of the bargain. This makes me feel a bit guilty, as does the fact that I won’t be able to wave to or chat with former students I see around campus, but both of these are a bit of an unnatural extension of the teacher-student relationship, which formally has a lifespan of one semester. The same goes for quite a few “single-serving friends” I made in break and copy rooms, for whom the loss isn’t of a deep friendship but just the potential for a longer one of whatever quality it was for 30 minutes a week while we both ate Amy’s frozen burritos. I got some kind words from my now-former coworkers, but of course the definition of an “adjunct” is something inessential to the major workings of whatever it’s part of. At any school with adjuncts, some portion of instructors and students will have the experience of suddenly not having a colleague or teacher on campus anymore every semester. I suppose part of my newbieness that never wore off was expecting to know when that time was coming for me.

(OK, I will single out for thanks 4 people whose initials are G.P., R.B., C.C., and B.W. who saw me at my most newbieish and imparted some very important and well-timed advice. Shucks, also my most frequent collaborators H.L. and D.P.. Also all my SIs.)

On the other hand, at my full-time summer job, we all knew pretty well from at least mid-June that I would be gone, and the program exists solely so that students matriculate out of it and into another program. The goodbyes here had pomp and ritual and lots of tears. People act differently when they know things are ending, and the entire last day of work was dedicated to ceremonial closing of the program, complete with thank-you cards being exchanged, speeches, skits, musical performances by every combination of students and teachers, and a lovely banquet to top it off. It was the best way to conclude a summer program and my time in California, with some really excellent people.

The lesson here, I guess, is to know as much as possible when you’re heading into a round of goodbyes.

More to come later.

Taking steps in class

I mean this literally. I got a Fitbit last year, and during the spring semester, I tracked how many steps I took during an average of 5 class sessions of each of the 3 courses that I taught.

My classes were a content-based IEP class with 13 students, a mixed-skills intermediate-level credit community college ESL class with 21 students, and an advanced ESL writing class with 25 students.

Across 5 class sessions, the average number of steps total for each class was:

  • Content-based IEP: 236
  • Intermediate CC: 626
  • Adv. writing CC: 440

Of course, since the class sessions were of different lengths, it makes sense to divide the number of steps by the number of minutes in which I had to take them.

Steps per minute of class time, including breaks:

  • Content-based IEP: 2.63 steps per minute
  • Intermediate CC: 2.78 steps per minute
  • Adv. writing CC: 1.96 steps per minute

Last, because higher numbers of students might feasibly require the teacher to move more and farther around the classroom, here are the steps per minute further divided by the numbers of enrolled students:

  • Content-based IEP: 0.20 steps per minute per enrolled student
  • Intermediate CC: 0.13 steps per minute per enrolled student
  • Adv. writing CC: 0.08 steps per minute per enrolled student
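The normalization above is just two divisions; for anyone replicating this with their own Fitbit data, the per-student figures come out like so:

```python
# steps-per-minute figures from above, divided by enrollment
classes = [
    ("Content-based IEP", 2.63, 13),
    ("Intermediate CC", 2.78, 21),
    ("Adv. writing CC", 1.96, 25),
]
for name, steps_per_min, enrolled in classes:
    print(f"{name}: {steps_per_min / enrolled:.2f} steps/min/student")
```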

What does this tell me?

I tended to walk around more, all other things being equal, in the content-based class. I attribute this to the type of work they typically did – small group discussions in which I would move from group to group and either guide the discussion, participate as an equal, or just listen. The other two classes, at community college, usually involved at least some “lecturing”, standing relatively still or sitting at the computer and typing notes projected onto a screen.

I think my classes could benefit from structuring more lessons around small group work rather than lectures to begin with. As it turns out, a further benefit might be that it helps me reach my fitness goals.

[Image: Fitbit Blaze]
Lecture disincentivization tool. (source)

The corpus of rejection

Every few weeks, depending on the season, I get a message like the following in my inbox:

Dear [name],

On behalf of the application review committee, we thank you for the submission of your application for the [position]. We recognize that the application process requires a great deal of time and effort on your part. Regrettably, you were not selected to move forward for an interview.

[more stuff that I never read]

Sincerely,

[Office of Somethingorother, name of college]

The slightest experience with this type of letter lets you figure out the gist after the first line, or even from the existence of the email itself, coming as it does under a subject line prefaced “DO NOT REPLY” – a phrase with the illocutionary force of a restraining order.

I’ve gotten enough of these over time (more than some, not as many as others – adjunct is a job with a depressing number of grizzled veterans sporting depressing amounts of grizzle) to start noticing patterns in the language that these messages use. A mini-corpus thereof can be found below.
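The pattern-spotting itself is easy to automate. Here is a minimal sketch that tallies recurring word bigrams across a set of rejection lines; the two sample sentences are invented placeholders, not the actual letters in my corpus:

```python
# Count word bigrams across a (toy) corpus of rejection-letter sentences
# to surface the stock phrases that recur from school to school.
from collections import Counter
import re

letters = [  # invented examples standing in for the real corpus
    "We regret to inform you that you were not selected.",
    "Regrettably, you were not selected to move forward.",
]

bigrams = Counter()
for letter in letters:
    words = re.findall(r"[a-z]+", letter.lower())
    bigrams.update(zip(words, words[1:]))

# Phrases shared across letters float to the top.
print(bigrams.most_common(3))
```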

[Image: Gordon Ramsay]
Spoiler alert: This level of frankness would be refreshing.

ESL Students’ Feared Selves

Part 3 of a 3-part series on possible selves (scroll down for parts 1 and 2).

If I’m being honest, these were the most fun to read, although as I stated before I can’t share any of them with you.

It’s not some kind of sadism that prompts me to say that: The descriptions in students’ responses to this final question were much more affective in content than the first two. Rather than lists of future colleges and jobs, here we had responses more along the lines of “I have no friends and I have a SAD SAD life”. Again, you can’t see them, but you can see what types of complaints were the most common, which should be just as fun. As in my last 2 posts, I combed over each entry looking for mentions of specific subjects. Because emotions were much more commonly mentioned for the feared self than for the other 2 selves, I tried sub-categorizing types of negative affect as well.
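For anyone who wants to try this coding process on their own classes, it can be approximated with simple keyword matching. This sketch uses invented categories, keywords, and responses – not my actual coding scheme or my students' writing:

```python
# Tally how many (hypothetical) responses mention each topic, counting a
# topic at most once per response, as in the coding described above.
from collections import Counter

topics = {  # illustrative categories and keyword lists, not the real scheme
    "friends": ["friend", "alone", "lonely"],
    "work":    ["job", "career", "fired"],
    "study":   ["college", "university", "class"],
}

responses = [  # invented examples, not student writing
    "I have no friends and I lost my job.",
    "I could not transfer to a university and I feel alone.",
]

counts = Counter()
for resp in responses:
    text = resp.lower()
    for topic, keywords in topics.items():
        if any(kw in text for kw in keywords):
            counts[topic] += 1  # once per response, however many keywords hit

print(dict(counts))
# {'friends': 2, 'work': 1, 'study': 1}
```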

Below was the prompt, answered by my 2 multi-skill intermediate classes and 2 advanced academic writing classes over the past 2 semesters.

Imagine the worst version of you in 5 years (the opposite of the first). What happened to your English, and why didn’t you succeed? Give details. What is different in your life because you can’t use English?


ESL Students’ Ideal Selves

Part 1 of a 3-part series. As an end-of-semester assignment, I had my summer and fall classes (4 total; 2 intermediate multi-skill and 2 advanced academic writing) write about their ideal, ought-to, and feared selves. Besides being a recent buzzword in ELT, possible selves make an interesting writing assignment for both the teacher, who gets to find out his students’ motivations in a bit more detail, and the students, who get to describe their (hopeful) future lives. Now, in fairness to you, I should point out right at the start that I won’t be excerpting their writing here; I didn’t warn them that I’d be using this assignment for my blog and I am one of those teachers who doesn’t even share pictures with his students’ faces in them without asking each one of them individually. Instead of showing you what they actually wrote, I will be analyzing each of their answers for the prevalences of certain topics and concerns and then doing some basic statistics with these. As it turns out, this takes a lot longer.

This post will only deal with ideal selves, with ought-to selves and feared selves to come later. First, here is the prompt and example that they saw.

“For this discussion, please answer these questions in different posts:

  • Imagine it is 2023, and you have succeeded in English in the best way. What steps did you take to get here? How do you use English now (in 2023)?
  • What can you, now, do every day to bring yourself closer to that future best version of you? What kind of things should you do? How should you “study” or “practice”?
  • Imagine the worst version of you in 5 years (the opposite of the first). What happened to your English, and why didn’t you succeed? Give details. What is different in your life because you can’t use English?

Last, reply to a classmate in at least 3 sentences.

Example first post:

In 2023, I am a college graduate. I have transferred to UCI and graduated with a major in computer engineering. I used English in all of my classes to do homework, work on group projects, and give presentations. Computer engineering was still hard, but my English helped me a lot. It also helped me to make friends and find a job. Now, I work for Blizzard Software and I design graphics for upcoming games. I use English at work, of course, but I don’t think of it as ‘practice’ anymore. Now, it’s just life.”


I had a TESOL Certificate student

Here’s a short “before I forget”-type post.

An administrator of the TESOL Program from the nearby large, public university reached out to a bunch of the ESL faculty at my college and asked if we’d like to host a TESOL Certificate student for his/her practicum. I volunteered to host one in my intermediate multi-skill course.

(Practicum is not a word we used in my MA program, possibly because almost all of us were already working in ESL/EFL.)

I first met the student in question at a café in town in October, and as it turned out, he is already a professor in another subject and has been teaching for decades, and just wants the TESOL Certificate for something to do after retirement. This shifted my idea of what would happen next from “I beneficently guide an idealistic neophyte teacher” to “I am judged by my pedagogical and academic betters and found wanting”.

During his observations, I managed to forget I was being “observed” and ran my classes more or less normally, even ad-libbing at least a few tasks. I find that I default to gregariousness in the classroom, and just get more ostentatiously relaxed when I know I’m being watched. I heard from the TESOL student after every lesson and apparently he was surprised by some of the things that we did. I was pleased with those lessons as well – if only they were all like those!

After 3 observations, it was his turn to teach, and he prepared 3 of his own lessons on prepositions, conjunctions, and phrasal verbs at my direction. The content of his lessons would fit pretty exactly into the frame we call PPP (present, practice, produce), sometimes with the last P dropped in favor of everyone reviewing answers together from the second P. He gave PowerPoints full of abstract example sentences and demonstrated usage with a bit of “realia”, trinkets brought from home. He handed out worksheets with closed-ended grammar questions and had people work in pairs and then solicited answers.

Needless to say, this was not a modern ELT lesson. It seemed remote, pre-packaged, of little clear relevance, and definitely not “student-centered”, although it was delivered with a professional touch. But given everything I’ve said about “playing the teacher role” in the past, I should have been prepared for the students’ reaction: they really liked it. Or rather, the students who don’t generally like my TBLT- or Dogme-ish lessons, the ones I might in a darker moment call ritualists in the cult of failed methods, really liked it. Students who I would have put in the bottom 1/3 of my class responded the most positively. I didn’t hear much from the students I usually get a lot of participation from, but I did see people whose engagement in the class can be described as “tertiary” work quite hard to get their worksheets done and really demonstrate concern that their answers were correct.

I don’t want this to come off as “the TESOL student succeeded despite himself”. He is an experienced teacher who delivered a lesson that understandably didn’t conform to modern ELT expectations. He also improvised when he needed to and established good rapport with the students. The thing I’m reacting to here is just that a lesson that was so different from what I usually plan worked very well with a demographic that my lessons usually succeed less with.

There were other things I noticed about his lessons, most memorably that intentionally striking academic professorspeak like “it can be compared to”, “simultaneously”, or “as a generic term for” from one’s working vocabulary at the podium is a challenge – one that I remember facing at the beginning of my career back in Japan. But my main takeaway as a teacher is that this “playing the teacher role” is even more powerful than I thought. If we take a certain amount of educational ritualism (in the form of embrace of the abstract over the personal, the effete over the practical, the comprehensible over the true, etc.) for granted in certain numbers in each one of our ESL classes, it may really behoove us to spend at least some of every week pedantically explaining grammar at people, for affective reasons if nothing else.