Circular thinking in deterrence

Back in my undergrad criminology classes, the professors would often raise interesting case studies of crimes where the benefits to society of punishment were unclear. Often, these were cases where giving “just deserts” to one party harmed another innocent party, for example the accused’s dependents. Inevitably, one classmate would respond to such a case with “they knew what they were getting into” or simply “fry ’em!”

The punitive impulse is strong. It effects an attitude toward crime that one sees everywhere, and especially these days toward illegal border crossing. The reactions to people being punished for crossing the border show this clearly, in part because of the inhumanity of the punishment being delivered – in this case, as in many others, not just to the accused.

Here’s exhibit A:

As hilariously disingenuous as the first part of that is, it’s the last part that I want to focus on today – the part where she says that children were brought to the US under “irresponsible” conditions. In her mind, and in many others’, the irresponsibility of bringing kids across the border under threat of separation justifies the harsh punishment of separation, because only irresponsible parents who deserve to lose their kids would cross the border. (Left out of the discussion, as always, is whether children of such parents “deserve” institutionalization and lifelong trauma.) That one word, “irresponsible,” encapsulates a lot of the circular thinking of deterrence.

That thinking always follows this path: the more severe a punishment, the more deserving of it a rational person will be for committing a crime that carries that punishment. If the punishment for jaywalking were squassation, only a truly irresponsible person would jaywalk, ergo that person would deserve squassation.

The weaknesses of this logic are, first, that people aren’t rational in avoiding punishment or in any other domain. Criminology, like economics, has undergone a reevaluation since its purely rationalist days (I want to say those started with Beccaria…), but this post-rationalism hasn’t permeated the collective consciousness.  Just as people aren’t solely motivated by marginal dollars, they aren’t solely motivated by the sticks and carrots of criminal justice. Second, people tend to apply this philosophy of punishment unequally – for populations with which they have little empathy, they see only sticks and carrots, but for their own community, they want trust, norms, and the restitution of dignity to victim and transgressor.

Criminology (at least at the undergraduate level) divides rationales for punishment of criminal behavior into 4 categories: retribution, rehabilitation, incapacitation, and deterrence (specific and general). My feeling is that people are highly retributive toward those with whom they feel little connection, and that this feeling is often recast for public discourse in deterrent terms – terms that are self-justifying.

As a bone to throw my remaining TEFL readership, let me say that I think this idea of self-justifying punishment has some importance in syllabus design as well – we could design a pop quiz on an assigned reading that is worth 30% of the final grade, and it would be justifiable in the same sense that public flogging for vandalism is.


N-identities in Manzanar and Love Wagon (あいのり)

The last day of class, instead of having the potluck that my students were probably hoping for, we did a very quick analysis of the book we had just finished reading (Farewell to Manzanar) using Gee’s NIDA identities.


To briefly summarize what those are:

N-identity (nature identity) is the part of identity which is supposed to come from nature. It often includes visible traits like gender and race and the palette of traits and abilities that are thought to stem from them. As the Rachel Dolezal controversy shows, what is N for some people is I or A (see below) for others, and people can be quite unforgiving when they think an N characteristic is being wrongly taken on or rejected. My students were astute in noticing that even N-identities change when the people around to perceive and interpret them change – the main character in Farewell to Manzanar has different N-identities when surrounded by other Japanese-Americans than when surrounded by other Americans.

I-identity (institutional identity) comes from institutions of which one is part. For example, my ability to pass as a teacher comes mostly from my employment by schools, and not many people would accept the legitimacy of a “teacher” identity without it. It can be fun to imagine which kinds of jobs require institutional recognition to be considered a legitimate claim to identity – to me, “artist” is not an I-identity, but “animator” is. “Philosopher” is not an I-identity, but “researcher” is. My students said many characters in FtM lost their I-identities (in most cases, fishermen who worked together) when they were forced to move into the camps.

D-identity (discursive identity) comes from interactions with other people wherein one comes to be known as a certain “type” of person. This tracks what most people call a “personality”, but unlike “personality” has no implication of permanence. That is, one can have different D-identities among different groups of people. The Papa character in FtM is a bit of a stereotypical alpha in the way he interacts with others, which shifts from comforting to ironic as his life circumstances change from independent businessman to unemployed drunk.

A-identity (affinity identity) is similar to I-identity in that it relates to larger social groups of which we consider ourselves part. Unlike I-identity, A-identity doesn’t require any kind of actual membership in a group, only affinity for it. One can have an A-identity as a Premier League fan without any formal affiliation in the form of membership in a team or fan club. Notably, and as some of my very clever writing students mentioned, A-identity can be almost entirely imaginary – Papa from FtM imagines himself to be the inheritor of a samurai legacy, although the samurai ceased to exist before he was born and are well on their way to being more a cultural trope than a social class at the time the story takes place. One student mentioned this aspect of A-identity in a presentation, which was a great example of critical thinking.

What I like about these categories of identity is that they make clear both that identity is a multifaceted and context-dependent phenomenon and that it depends on other people and society. That is, you can have multiple identities, and none of them are purely a result of you choosing the type of person you want to be after doing some deep thinking alone or “finding yourself”.

My students did a very good job applying these on short notice to a book they’d probably grown quite sick of on a day when many people were already mentally on vacation. What they said reminded me of some things I’d been seeing on Netflix recently.


Ancestry dot dot dot

Around junior high school, when I realized that “races” were a thing and I had one too, I started making my schoolwork Japan-themed wherever possible and ex nihilo informing my classmates that “taco”, in addition to being a receptacle for beef or chicken, meant “octopus” in Japanese.

(I wonder if the age at which you first realize your own race is a reliable shorthand for the stigmatization of the race of which you are a member…)

My classmates and teachers were nice enough not to call me out on this strange behavior. In fact, it probably would have been seen as improper if they had – after all, I was celebrating my heritage. I had Japanese ancestry, and that earned me the right to “rediscover my roots”, even in an awkward, teenage way.

(It’s funny how learning something new is framed as recovering it if you’re in a demographic thought to be born with that knowledge.)

Later, in high school, there was a club called Asian Cultural Enlightenment (ACE), which I somehow felt that I should join, although I never did. Several of my classmates in Japanese (the only Asian language elective) were members. I think I was putting a little bit of distance between me and Asian-ness, or simply taking advantage of the fact that as a stealth minority (i.e. capable of passing as white – many people assume my last name is Irish), I didn’t need to affirm any particular ethnic identity. I was fine with un-discovering my roots at this point.

Looking back, I wonder if the other members would have thought it was strange that someone with basically one toe in the pool of Asian identity would try to join an almost explicitly ethnically-based club. I also wonder how far back in my family tree I could have an Asian ancestor to legitimize an Asian identity if I had wanted to embrace one. If I merely shared with the other Asians the 99% of DNA that all humans share, would that not count as enough?

This journey down memory lane was spurred by yet another news story about cultural appropriation.


Random reflections on economics

For some time now I’ve been lucky enough to have a professor of economics as one of my private students, and helping this person put together presentations, papers, and whatnot has exposed me to a field of inquiry that is quite different from SLA.  It’s been refreshing and somewhat zen-like to see the extreme quantification of social forces and psychological phenomena and to hear the thoughts of people dedicated to that enterprise.  The following are some thoughts on what I’ve seen over the last year or so.

Quantification is not reductive

The stereotype is that economists view people’s loves and lives as “mere” numbers – part of why the old nickname “the dismal science” has stuck.  I never got the feeling, though, that economists view quantification as taking away some quintessential human élan from the thousands or millions of people whose behavior they are analyzing.  To the contrary, it seems to be a common understanding of the field that numbers are just the only way to deal with data points that number in the millions; it would be impossible to describe something like a national gender wage gap qualitatively and still be fair to each individual.  It’s certainly not true that economists view that number as the inarguable conclusion of a research question; validity and how to test for it are problems that animate much of the literature (it seems). In short, quantification of human behavior is a necessary part of looking at data sets this large and doesn’t “reduce” people if you have an appropriately skeptical attitude toward what the numbers really mean.

Conservatives tend to place free will at the base of questions of economic justice

A basic assumption of the discipline, one that has come under question since the 1980s, is that people, when presented with a field of choices, will choose correctly and consistently according to their mostly stable preferences.  It would be hard to find a bedrock principle more at odds with either modern psychology or any adult’s lived experience of other adults.

It follows from this ideology – that humans make rational choices based on stable preferences – that human choice is above reproach, that whatever people decide given a set of options is a priori proof of justice. Any attempt to “nudge” people into a better choice or to force certain choices will produce warped and economically unhealthy outcomes. If people seem to naturally separate themselves into different groups, it must reflect a natural, stable preference within those groups.  Such is the explanation often deployed to dismiss the gender pay gap as the result of women’s free will rather than any kind of injustice.

You see the basic logic at play here in many areas of public life – certain politicians seem to see no motivation for human behavior that is not economic, and the main or only purpose of government is to encourage (or at least not punish) good economic decision-making. When people, either individually or as a group, seem to display an affinity for factors other than income (e.g. family, conformity, culture, or community) when choosing a career, the cost of that choice is assumed to be fully accounted for in their reduced income. The last thing the government should do when people make uneconomic choices is to reward them economically with nutritional assistance, hiring quotas, or tax credits.

Luckily, I am at a healthy remove from both the ideologies of free will and the prosperity gospel, and I therefore don’t think people’s choices (particularly economic choices) are self-justifying.

Glass ceilings vs. sticky floors

The glass ceiling is probably the most emblematic phenomenon from economics to make it into popular culture. Loosely defined, it is an income gap at the top of the income distribution. In practice, it is often interpreted as a man getting promoted to an upper management position over an equally hard-working woman, who unlike the man is expected to perform childcare and other domestic duties in addition to working full-time.

Of course, I don’t know many men or women in upper management of anything. I do know many men and women in jobs that pay by the hour, and many more who used to have those jobs.  Every week when I went shopping at my local MaxValu (a Japanese chain supermarket), I would notice the people stocking the shelves, men and women; the cashiers, almost all women; and the mounted pictures of the store managers, all men. There are, obviously, many more people in jobs like these than in jobs like those in the last paragraph in any developed country.  But for some reason, there isn’t a metaphor in common currency to describe the observed income gap at the bottom of the income distribution.

Where it is discussed, it is called a sticky floor.  As I understand it, in economics, it is simply a parallel phenomenon to the glass ceiling, but one that concerns vastly larger numbers of people. In my mind, discussions of glass ceilings sometimes have the false-consciousness character of waitstaff on their break debating whether a 39.6% tax on the top bracket is unfairly high. Yes, it matters that Sheryl Sandberg has few peers in the Forbes 500, but it matters more and to more people that men in the bottom 10% of incomes out-earn women in the same bracket (I would include a source here, but it would reveal the identity of my student).

Because all my posts now include mandatory COCA data, here it is: the phrase “glass ceiling” occurs 465 times in the corpus, vs. 20 for “sticky floor” (only 3 of which seemed to be about economics rather than literal sticky floors).

A salary scale in a company that isn’t growing

This will strike any of you who have formally studied economics as shockingly ignorant, even if the rest of this post hasn’t. Basically, when things stop growing, it’s not as if they settle into a flat but stable equilibrium. Sometimes, growth itself is what makes the system stable.

[Graph: a worker’s salary (blue) and contribution (green) over the course of a career]

This graph, drawn for me at least 2 weeks in a row by my student, shows the salary of a worker in the sort of company that hires people for life compared to that worker’s level of contribution to that company (y axes), over the career of that worker (x axis).  The salary is in blue and the level of contribution (I believe it was called “human capital”) is in green.  There are two periods where these lines are very far apart: at the beginning of the worker’s career, where he/she contributes far more than he/she takes in, and past mid-career, where he/she takes far more than he/she contributes. This graph was drawn for me mostly to explain the phenomenon of mandatory early (sometimes as low as 55) retirement ages, the rationale being that companies want to shorten the length of time that workers can draw more salary than they’re worth. It also helps explain why companies may want more and more recruits every year; it is these recruits who contribute the most to the company. As each cohort ages, larger and larger new cohorts are required to pay for the older cohorts’ increasingly opulent salaries.  This is a stable system as long as each cohort is larger than the last.

When the cohorts stop growing, it starts a chain of events that potentially results in the death of the company. First, without the contributions of ever-larger cohorts of new workers, the company can no longer afford the salaries of its older workers.  Older workers may take early retirement or salary reductions (and grouse mightily about today’s youth). New workers and potential recruits notice that the formerly guaranteed high late-career salary is no longer guaranteed and start to question the benefits of accepting such a low early-career salary. The company therefore has an even more difficult time finding large enough cohorts of new workers.
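Out of curiosity, I tried turning the graph into a toy model. This is a minimal sketch in Python with completely invented pay and contribution curves, not anything my student showed me; the point is only that the yearly books balance while each incoming cohort is sufficiently larger than the one before it, and fall apart when the cohorts go flat.

```python
# Toy model of a lifetime-employment salary scale (all numbers invented).
# Workers contribute more than they are paid early in their careers and
# less than they are paid later; the gap is financed by larger new cohorts.

CAREER = 30  # years from hiring to mandatory retirement

def salary(tenure):
    """Seniority-based pay: rises steadily with years of service."""
    return 40 + 3 * tenure

def contribution(tenure):
    """'Human capital' delivered to the firm: high early, declining late."""
    return 80 - tenure

def yearly_balance(cohort_sizes):
    """Firm-wide surplus: sum over cohorts of (contribution - salary) * headcount.
    cohort_sizes[t] is the headcount of the cohort with t years of tenure."""
    return sum(size * (contribution(t) - salary(t))
               for t, size in enumerate(cohort_sizes))

# Growing firm: each year's recruits outnumber last year's by 10%, so
# headcount shrinks as tenure rises. Stagnant firm: every cohort the same.
growing = [100 * 1.10 ** -t for t in range(CAREER)]
flat = [100] * CAREER

print(f"growing cohorts: {yearly_balance(growing):+,.0f}")  # surplus
print(f"flat cohorts:    {yearly_balance(flat):+,.0f}")     # deficit
```

With these made-up curves, a worker starts drawing more than they contribute around year 10, and roughly 10% annual cohort growth keeps the whole scheme solvent; stop growing and the late-career salaries stop being payable, which is the chain of events described above.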

Call me naïve, but I hadn’t seen this clearly before, nor had I seen the implications for national pension systems. Now that I do, I am even more glad to be in ESL rather than working for Toshiba, and I definitely hope all my students have lots of kids who all pay their Social Security taxes.

Varieties of middle C culture

Where is the dividing line between “Culture”, the kind we are obliged to respect, and “culture”, the pattern of living that distinguishes communities? Is a kettle Big C Culture if you use it to brew Earl Grey tea served with scones? Is the sound of a Harley Davidson’s engine revving just a shared reference point in a few countries? What if the main character of a TV show syndicated worldwide rides one?

In an effort to tie together somewhat thematically different chapters on “Culture” in a reading book one of my classes is using, I’ve introduced the concepts of “little c” and “Big C” culture and had the students examine the situations outlined in the chapters through that lens. If the terms are new to you, this or this are decent explanations. It’s been interesting, particularly when we’ve had a Venn diagram on the whiteboard and the opportunity for students to put their own candidates for little c or Big C culture up for discussion – for example, students consider LGBT (for some reason, they didn’t want the Q) to be Big C because the term has become well-known and, to some, emblematic of first-world liberalism. By contrast, they consider karaoke to be little c culture because, in their minds, everyone has it and no one considers it to be the legacy of any particular country.

Needless to say (for anyone who’s lived in Japan), students’ opinions about karaoke surprised me quite a bit, as karaoke is regarded in Japan to be a clear example of Japanese culture succeeding and spreading around the world, alongside sushi and anime. This has raised the question in my mind as to whether the little c/Big C dichotomy needs to be amended with consideration for the fact that different cultures have not only different artifacts and practices, but different perceptions of the importance of those artifacts and practices. What is Big C in the country that produced it may not be understood as a national symbol elsewhere, and what is unremarked upon in a country may be considered a national emblem of it elsewhere.

[Diagram: Big C and little c culture. Adapted from here.]

(For the purposes of this discussion, I am flattening and homogenizing countries and cultures.  I recognize that no symbol is truly equally and universally shared in any political, ethnic, linguistic, or cultural group.)

Below the jump are my additions to the little c/Big C scheme.


Losing my mind

What follows is a long, student-unfriendly version of a 3-paragraph paper (not an essay) on a 30-day challenge that I did with an intermediate integrated skills class.  The paper has to have an academic paragraph on the time before, the time during, and the time after the challenge.  Originally, the paragraphs had to use the past tense, present tense, and future tense (with any aspect), but I haven’t followed that rule faithfully here.

Getting lost in hectic thought was the default mode of my mind before I started my 30-day challenge.  The challenge, which was to meditate 10 minutes a day for 30 days, came at a time when my mind was almost constantly in a state of emergency.  Every thought of grading, making new assignments, or updating a class vocabulary list was a red alert in a long line of red alerts.  I would be exhausted at the end of a day of classes, but unable to take a nap without thoughts of all the papers I had to grade rushing in and beating back my attempts at rest.  As a result, I was often in a sour mood and was inclined to greet any attempts at contact from colleagues or students as yet another demand on the limited resources of my attention.  When I had a minute, or just a desperate need to pretend that I did, I spent it with value-free distractions (the App Store specializes in them), afraid to glance back at the wave of paperwork threatening to crash over me from behind.

Since I started meditating, I haven’t ceased being distracted, but I have been better able to incorporate distraction into my workflow, i.e. to be mindful of distraction.  In the interior of my mind, thoughts of work have begun to appear less like photobombing tourists in the lens of my attention, and more like part of the shot.  I have become better able to take a long view of my own time and attention and to refuse to devote my full mental resources to every problem, incomplete task, or request that jumps into frame.  What is called “mindfulness” is key to this.  While I meditate, thoughts still appear, and I still think them, but I am aware of the process, and that awareness prevents me from identifying with them completely.  I become something of an observer of my own mental life.  I see how this could be described as being “mindful”, as it does in a sense feel like an additional layer of abstraction has been placed between my stream of consciousness and the thoughts that usually occupy it, but in a sense more important to me, something is also taken away.  That thing is the formerly irresistible urge to load that thought into the chamber of my executive-function pistol and start manically squeezing the trigger.  It is also the need to build a spider’s web around each thought, connected to all my other thoughts, and claim it irrevocably as mine.  In these senses I believe “mindlessness” is just as good a term as “mindfulness” for what occurs in and as a result of meditation.  In any case, dissociation from my thoughts, most of which are proverbial red circles with white numbers in them, has helped me to control the way that I react (or not) to them.

This brief experiment with meditation has given me a good deal of perspective to take with me into future semesters.  I can now see the regular rhythm of the waves of classwork as something other than a renewed threat.  Now, they seem more like tides, dangerous if unplanned for but predictable in their rises and falls.  Importantly, I also see the high water mark and know that as long as I keep my mind somewhere dry, it will recede without doing much damage.  In the future, as long as I refrain from doing something crazy like teaching 20 units, I think I will be able to maintain calm with the help of this perspective.  Also, in a more specific sense, I will be better able to resist the call to distract myself from my work.  I can recognize the formerly irresistible need to latch onto an interesting task, and this recognition enables me to prevent YouTube or WordPress (except for right now) from hijacking monotonous tasks like grading or… well, mostly grading.  Next semester and into the future, I will feel less threatened and better able to deal with inbound masses of schoolwork.

The simple present, unsimplified

Since I started my hobby/rigorous research pursuit of conducting Google Forms surveys on grammar, I have been thinking about the big one.  The one that combines the most assumptions and nuance and the simplest form into a wad of meaning with white dwarf-like density, maximally unbalanced between its complexity and the earliness and brevity with which it is treated in grammar textbooks.  The big one is, of course, the present simple.

This is going to be a long post.


Fire alarm effects in ELT

I didn’t expect such a great metaphor for the ESL/EFL classroom to come from a writer on artificial intelligence.

In his article “There’s No Fire Alarm for Artificial Intelligence”, Eliezer Yudkowsky uses the metaphor of a fire alarm to explain situations in which people act strangely without it being a faux pas.  His version of a fire alarm is a public messaging system that would give people permission to act with what in his opinion is the correct amount of urgency in the face of dangerously advanced and amoral (at least by our standards) AI.  A fire alarm, he postulates, is not simply an indication that danger exists (the other main indication being smoke), but a signal that it is acceptable to act as if it does in front of other people.  The acceptability comes from the fact that (actual and metaphorical) fire alarms are heard by everyone, and one’s knowledge that others also hear it enables one to take part in behavior like descending the stairs and paying a visit to the parking lot in the middle of a workday knowing that coworkers will not hold it against you.  Like many widely-shared messages, a fire alarm turns insane solo behavior into acceptable, even encouraged, group behavior.

(I heard this for the first time on Sam Harris’s podcast.  Yudkowsky sounds exactly as you might expect someone with his job description to.  Incidentally, I have some basic disagreements with a lot of what Harris says, but still enjoy listening to his interviews.  I will be more specific in a future post.)

It’s pretty close to universal knowledge that speaking one’s L2 in front of other people is face-threatening behavior.  Consider the range of situations where reproach or shame are possible results – besides the obvious ones (sitting alone on the bus), you may be considered rude, stupid, foreign, pretentious, or just strange for suddenly bursting into French at your pâtisserie or watching Chinese soap operas on your phone.  Naturally, the number of “safe” contexts to speak your L2 increases if you move to a society where most people speak that language, but it is still not close to 100% of them – at the very least, you will mark yourself as a foreigner by “practicing” in public, and in the worst case, people can just be unbelievable assholes around 2nd language speakers.  Of course, there are learners who don’t feel threatened at all by speaking their L2, and maybe those are the same people who would immediately perform a fire drill alone at the first hint of smoke in the air.  Most people need acknowledgement that they won’t be judged negatively for trying and often failing to make themselves understood in a new code – they need a public signal that legitimizes it for everyone.  Something in the ESL/EFL classroom is necessary to transform society’s gaze from judgmental to facilitative.

This may turn out to be another black robe effect.  That is, the teacher might be the variable that turns language practice from face-threatening to the group norm.  The inverse is clearly true – teachers can definitely act in ways that discourage open practice or make students ashamed of failed attempts at communication (or worse, ashamed of imperfect grammar).  Teachers can also strengthen the norm of practicing English within the class by spelling it out explicitly and practicing it themselves.  I suspect though that a lot of the legitimization of language practice is due to the physical edifice of the classroom and the rituals one must go through to join a class – signing up, visiting the bursar’s office, carrying a bookbag, etc.  You can test this by walking out of your classroom during a task and secretly observing how much of the communication in your absence is still in English, and compare it to what happens when a waiter who shares an L1 with the cook is done taking your order.  As in the experiments that Yudkowsky cites to make his case, students’ shared understanding of what behavior is validated is essential for any of that behavior to actually take place. Whatever it is that is acting as a fire alarm in language classes, its effects depend as much on the people as on the signal.

An objection to the feasibility of simulation

I used to have this fantasy about being able to predict the future by entering all the relevant data about the real world, down to the location of each atom, into a supercomputer and letting that supercomputer simply run a simulation of the world at a rate faster than actual time.  My inner materialist loved the idea of every geological force, weather system, and human brain – and therefore every manifestation of the emergent property we call a “soul” – being predicted (something about my needing to take the stuffing out of humanity as a teenager), and I believed that doing this with the power of computing was eminently plausible save for our lack of complete data.  I now realize that it is impossible.  No, not because I’ve stopped being a materialist.

Any computer used to run a complete simulation of the real world must be at least as big as the system that it will be used to simulate.  That is, a complete simulation of an amoeba would require at least an amoeba-sized computer, a complete simulation of a human would require at least a human-sized computer, and a complete simulation of a planet would require a planet-sized computer, etc.  This is for a reason that is a “bit” obvious once you come to see it, as I did sometime during my undergrad years (if my memory of conversations over AOL Instant Messenger serves).  Data is instantiated in computer memory in chips as 1s and 0s, or bits, which have mathematical operations performed on them which in aggregate give rise to more complex operations, everything from blogging to Microsoft Flight Simulator.  At the moment, each of those bits needs at minimum a single atom with a charge to represent its value (the details of the bleeding edge of computer memory are quite fuzzy to me; replace “atom” with “quantum particle” in this argument as you see fit).  Any atom in a simulated universe would need a great number of bits to represent its various properties (number of neutrons, location(s), plum pudding viscosity, etc.), and thus many atoms of real-world silicon would be the minimum needed to represent a single simulated atom.  Because all matter is composed of particles, each of which would need many particles of computing hardware to simulate it, the hardware must always be at least as physically big as the physical system that it simulates.  So much for running a predictive version of Grays Sports Almanac on my Windows computer.
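For the skeptical, the arithmetic is short. Here it is as a back-of-envelope calculation in Python, with loudly invented constants: the atom count is a commonly cited ballpark for a human body, and the bits-per-atom and atoms-per-bit figures are pure assumptions for illustration (the latter being wildly optimistic compared to real memory hardware).

```python
# Back-of-envelope size of a computer simulating one human (illustrative only).
ATOMS_IN_A_HUMAN = 7e27    # commonly cited ballpark figure
BITS_PER_SIM_ATOM = 100    # assumption: bits to encode one atom's full state
ATOMS_PER_BIT = 1          # assumption: best case, one hardware atom per bit

hardware_atoms = ATOMS_IN_A_HUMAN * BITS_PER_SIM_ATOM * ATOMS_PER_BIT
print(f"hardware atoms needed: {hardware_atoms:.0e}")  # 7e+29, 100x the original
```

Even granting one atom per bit, the simulator comes out a hundred times the size of the thing simulated; any more realistic storage density only makes the ratio worse.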

But maybe not all that information is needed.  Maybe not all aspects of the system need to be accurately represented in the simulation for the result to be close – the number of neutrinos flying through the Milky Way surely can’t have that much to do with whether Leicester beats Arsenal 2-1 or 2-0. But consider that that game takes place in a universe where neutrinos definitely exist and people know and talk about them.  Some proportion of viewers, players, or advertisers are surely affected by the existence of scientific research being done in the city where they live (Leicester and London are both home to renowned universities), even if indirectly – universities are huge employers with large real estate footprints.  Seen in the broader picture, the existence of neutrinos seems like a variable actually capable of affecting the outcome of a soccer match.  Even a single sporting event isn’t really a closed system – consider how directly such events are affected by weather.  And of course the types of simulated realities that are en vogue recently thanks to Black Mirror are earth-like or at least have environments capable of fooling complete human simulacra, which means that the humans in them need referents for the things that they talked about when they were still flesh and blood – can you imagine a physicist being happily confined in a San Junipero if the rules of atomic motion are not part of the virtual world?  What would you do for fun when the 80s nostalgia wears off?

It’s an open question whether a simulated mind deserves moral consideration even if it has the subatomic workings of its nervous system simplified in order to make it run on a smartphone. The point I mean to make is just that it’s impossible to have a completely simulated anything without building a computer of at least that physical size in the real world.

Grammar Mining (and the collected Mark SLA Lexicon)

Many of us agree that teaching “at the point of need” (as I believe Meddings and Thornbury put it) is an ideal context for formal grammar teaching.  Students’ trying to communicate something provides clear evidence that they need the grammar that would facilitate communicating it, and depending on how close they come to natural expression, evidence that their internal representation of English is capable of taking on this additional piece of information.

In interlanguage punting, I conjectured that taking a guess at grammar students may need in the future and organizing a lesson around a particular grammar point was justifiable if the lessons you used to introduce that grammar would be memorable long enough for a “point of need” to be found before the lesson was forgotten.  At the time, I was teaching weekly 1-hour grammar workshops with rotating groups of students at different levels, and as I could not teach reactively I had to justify my grammar-first (formS-focused) approach.

Read on for the last post before the new semester starts.
