I just turned in grades for the intermediate skills class I taught this summer and set out to compile some useful statistics, as in previous semesters. Unfortunately, the data doesn’t* say much… because the scores were too high. Every assignment category, from attendance to final exams, came out higher than in the same class in the spring semester, sometimes by ridiculous amounts. For example:
Spring 2018: 90.09%, standard deviation 12.9
Summer 2018: 95.85%, stdev 7.5
(including one student who was out of the country for 2 weeks in a row – otherwise it’d be 97.17% and stdev 3.97)
Spring 2018: 84.36%, stdev 14.0
Summer 2018: 96.19%, stdev 7.9
Spring 2018: 83.9%, stdev 15.5
Summer 2018: 87.95%, stdev 9.7
As the standard deviations imply, there wasn’t much spread between the highest- and lowest-performing students, and even less between the many varieties of average-performing students. This was basically a good thing – there is no upside to a large spread of homework scores for pedagogy or validity. It’s not as if my homework scores failed to validly** track some educational construct because everyone was doing uniformly well.
Summer classes have a lot of perks. They meet twice as often – 4 days a week instead of 2 – letting you take 2 days out of the week for something like student presentations without creating a yawning 2-week gap between instructional days. The students are more dedicated: only 20% of my summer students were taking any other classes. The class meetings are shorter too, which probably helped my students, about half of whom worked. Of those who worked, 19% had morning shifts, 73% afternoon, 64% evening, and 30% night (between 10 PM and 5 AM); the categories overlap, since many students worked more than one type of shift. Despite these fairly high numbers, almost everyone did almost all the homework and did about equally well on projects, quizzes, and tests.
It’s a bit of a shame for data collection, because although I haven’t cracked the statistics textbook I was convinced to buy, I did start the term with a much more complete questionnaire on my students’ jobs, as you can see. In the end, presumably because of the narrow spread in grades overall, this yielded some correlations (evening shifts were the most negatively correlated with final grades) but no significant differences between working and non-working students, even at p<0.05. The scores were simply too similar to distinguish among different types of students.
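For anyone curious what checks like these look like in practice, here is a minimal sketch using only Python’s standard library. The numbers are entirely hypothetical stand-ins – the real class data isn’t reproduced here – and the function names are my own; a real analysis would more likely use `scipy.stats.pearsonr` and `ttest_ind`.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical final grades (percent); NOT the real class data.
working = [88.0, 91.5, 93.0, 94.5, 96.0, 97.0]
non_working = [92.0, 94.0, 95.5, 96.5, 97.5, 98.0]

# Hypothetical weekly evening-shift hours for the working students.
evening_hours = [20, 15, 12, 10, 6, 0]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

def welch_t(xs, ys):
    """Welch's t statistic for two samples with possibly unequal variances."""
    return (mean(xs) - mean(ys)) / sqrt(stdev(xs) ** 2 / len(xs) +
                                        stdev(ys) ** 2 / len(ys))

# A negative r means more evening hours go with lower grades;
# a small |t| matches the "no significant difference" outcome.
r = pearson_r(evening_hours, working)
t = welch_t(working, non_working)
print(round(r, 2), round(t, 2))
```

With scores bunched this tightly, even a sizable correlation inside one group can coexist with a t statistic too small to clear significance between groups – which is essentially the pattern described above.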
This didn’t confirm my big hypothesis: that working students are at an unfair disadvantage, which matters because community colleges exist specifically to serve non-traditional college students. I have, however, narrowed my hypotheses for future work surveys a bit, because “hours spent using English at work” was about as negatively correlated with final grades (-0.46) as “total weekly working hours” (-0.39). Next semester, I will have to compare hours of English use at work to overall hours of English use to see whether working students have more opportunity for input and output, and if so, ask why this doesn’t yield significantly higher performance on at least some types of assignments. I can see anecdotally that students who use English at work benefit from doing so. I need to plan my classes so that this is reflected in their grades, or at least not reflected negatively.
If future classes continue to show a difference between working and non-working students irrespective of whether they use English at work, it may be that the type of competence fostered by a service-industry job where you use your L2 simply isn’t captured by the necessarily narrow means of assessment in an academic ESL class. For example, it’s invariably my working students who have the most natural grasp of which modals can be used for formal and casual requests, offers, or requests for permission, but unless they can carve out time between the end of a shift and taking care of an elderly parent to demonstrate that grasp in an assignment, their homework scores won’t be commensurate with their abilities. The lens of assessment is focused on students only when they do assignments, not when they practice modals for hours at a time every day at work.
It may help make my classes more equitable in this regard if I minimize the amount of “assignment” students have to do to prove they’ve been getting input, while keeping it hard enough to fake that cheating isn’t worthwhile. I already have a type of assignment aimed at dragging along as much real-world practice as possible for a minimum of “assignment” – sometimes very close to “go get some input, then check a box when you’re done”. An example is a book report where students choose any graded reader from our library and then turn in a fairly perfunctory worksheet that they could probably complete in 5 minutes. To me, this type of assignment is justified by 1) the high ratio of interlanguage-developing work to product, 2) the promotion of available outside resources, and 3) the high motivation levels of my intermediate students, which reduces the odds of cheating (also, the low grading time). If a similar assignment said “start 3 conversations and fill out a perfunctory report afterward”, it could reward the time my working students spend talking without pandering specifically to them.
Maybe the future of all ESL homework is “get input, and prove you got it”. At least at the intermediate (i.e., not academic writing) level, this probably maximizes opportunities for interlanguage development while minimizing what are in my view the less valid aspects of the grading process.
*“Data” is an uncountable noun, unless you are writing for an academic journal or have a mobile datum plan like Titus Andromedon’s that just comes with the one.
**Now that we’ve split the infinitive, the only question is whether we’ll be able to fuse it in a stable way and provide unlimited, grammatical energy for the entire world.