Losing my mind

What follows is a long, student-unfriendly version of a 3-paragraph paper (not an essay) on a 30-day challenge that I did with an intermediate integrated skills class.  The paper has to have an academic paragraph on the time before, the time during, and the time after the challenge.  Originally, the paragraphs had to use the past tense, present tense, and future tense (with any aspect), but I haven’t followed that rule faithfully here.

Getting lost in hectic thought was the default mode of my mind before I started my 30-day challenge.  The challenge, which was to meditate 10 minutes a day for 30 days, came at a time when my mind was almost constantly in a state of emergency.  Every thought of grading, making new assignments, or updating a class vocabulary list was a red alert in a long line of red alerts.  I would be exhausted at the end of a day of classes, but unable to take a nap without thoughts of all the papers I had to grade rushing in and beating back my attempts at rest.  As a result, I was often in a sour mood and was inclined to greet any attempt at contact from colleagues or students as yet another demand on the limited resources of my attention.  When I had a minute, or just a desperate need to pretend that I did, I spent it on value-free distractions (the App Store specializes in them), afraid to glance back at the wave of paperwork threatening to crash over me from behind.

Since I started meditating, I haven’t ceased being distracted, but I have been better able to incorporate distraction into my workflow, i.e. to be mindful of distraction.  In the interior of my mind, thoughts of work have begun to appear less like photobombing tourists in the lens of my attention, and more like part of the shot.  I have become better able to take a long view of my own time and attention and to refuse to devote my full mental resources to every problem, incomplete task, or request that jumps into frame.  What is called “mindfulness” is key to this.  While I meditate, thoughts still appear, and I still think them, but I am aware of the process, and that awareness prevents me from identifying with them completely.  I become something of an observer of my own mental life.  I see how this could be described as being “mindful”, as it does in a sense feel like an additional layer of abstraction has been placed between my stream of consciousness and the thoughts that usually occupy it, but in a sense more important to me, something is also taken away.  That thing is the formerly irresistible urge to load a thought into the chamber of my executive-function pistol and start manically squeezing the trigger.  It is also the need to build a spider’s web around each thought, connected to all my other thoughts, and claim it irrevocably as mine.  In these senses I believe “mindlessness” is just as good a term as “mindfulness” for what occurs in and as a result of meditation.  In any case, dissociation from my thoughts, most of which are proverbial red circles with white numbers in them, has helped me to control the way that I react (or not) to them.

This brief experiment with meditation has given me a good deal of perspective to take with me into future semesters.  I can now see the regular rhythm of the waves of classwork as something other than a renewed threat.  Now, they seem more like tides, dangerous if unplanned for but predictable in their rises and falls.  Importantly, I also see the high water mark and know that as long as I keep my mind somewhere dry, the water will recede without doing much damage.  In the future, as long as I refrain from doing something crazy like teaching 20 units, I think I will be able to maintain calm with the help of this perspective.  Also, in a more specific sense, I will be better able to resist the call to distract myself from my work.  I can recognize the formerly irresistible need to latch onto an interesting task, and this recognition enables me to prevent YouTube or WordPress (except for right now) from hijacking monotonous tasks like grading or… well, mostly grading.  Next semester and into the future, I will feel less threatened and better able to deal with inbound masses of schoolwork.


The simple present, unsimplified

Since I started my hobby/rigorous research pursuit of conducting Google Forms surveys on grammar, I have been thinking about the big one.  The one that combines the most assumptions and nuance with the simplest form into a wad of meaning of white dwarf-like density; the one that is maximally unbalanced between its complexity and the earliness and brevity with which grammar textbooks treat it.  The big one is, of course, the present simple.

This is going to be a long post.


Fire alarm effects in ELT

I didn’t expect such a great metaphor for the ESL/EFL classroom to come from a writer on artificial intelligence.

In his article “There’s No Fire Alarm for Artificial General Intelligence”, Eliezer Yudkowsky uses the metaphor of a fire alarm to explain situations in which people act strangely without it being a faux pas.  His version of a fire alarm is a public messaging system that would give people permission to act with what is, in his opinion, the correct amount of urgency in the face of dangerously advanced and amoral (at least by our standards) AI.  A fire alarm, he postulates, is not simply an indication that danger exists (the other main indication being smoke), but a signal that it is acceptable to act as if it does in front of other people.  The acceptability comes from the fact that (actual and metaphorical) fire alarms are heard by everyone, and your knowledge that others also hear it lets you take part in behavior like descending the stairs and paying a visit to the parking lot in the middle of a workday, knowing that coworkers will not hold it against you.  Like many widely-shared messages, a fire alarm turns insane solo behavior into acceptable, even encouraged, group behavior.

(I heard this for the first time on Sam Harris’s podcast.  Yudkowsky sounds exactly as you might expect someone with his job description to.  Incidentally, I have some basic disagreements with a lot of what Harris says, but still enjoy listening to his interviews.  I will be more specific in a future post.)

It’s pretty close to universal knowledge that speaking one’s L2 in front of other people is face-threatening behavior.  Consider the range of situations where reproach or shame are possible results – besides the obvious ones (sitting alone on the bus), you may be considered rude, stupid, foreign, pretentious, or just strange for suddenly bursting into French at your pâtisserie or watching Chinese soap operas on your phone.  Naturally, the number of “safe” contexts to speak your L2 increases if you move to a society where most people speak that language, but it is still not close to 100% of them – at the very least, you will mark yourself as a foreigner by “practicing” in public, and in the worst case, people can just be unbelievable assholes around 2nd language speakers.  Of course, there are learners who don’t feel threatened at all by speaking their L2, and maybe those are the same people who would immediately perform a fire drill alone at the first hint of smoke in the air.  Most people need acknowledgement that they won’t be judged negatively for trying and often failing to make themselves understood in a new code – they need a public signal that legitimizes it for everyone.  Something in the ESL/EFL classroom is necessary to transform society’s gaze from judgmental to facilitative.

This may turn out to be another black robe effect.  That is, the teacher might be the variable that turns language practice from face-threatening into the group norm.  The inverse is clearly true – teachers can definitely act in ways that discourage open practice or make students ashamed of failed attempts at communication (or worse, ashamed of imperfect grammar).  Teachers can also strengthen the norm of practicing English within the class by spelling it out explicitly and practicing it themselves.  I suspect, though, that a lot of the legitimization of language practice is due to the physical edifice of the classroom and the rituals one must go through to join a class – signing up, visiting the bursar’s office, carrying a bookbag, etc.  You can test this by walking out of your classroom during a task and secretly observing how much of the communication in your absence is still in English, and comparing it to what happens when a waiter who shares an L1 with the cook is done taking your order.  As in the experiments that Yudkowsky cites to make his case, students’ shared understanding of what behavior is validated is essential for any of that behavior to actually take place.  Whatever it is that acts as a fire alarm in language classes, its effects depend as much on the people as on the signal.

An objection to the feasibility of simulation

I used to have this fantasy about being able to predict the future by entering all the relevant data about the real world, down to the location of each atom, into a supercomputer and letting that supercomputer simply run a simulation of the world at a rate faster than actual time.  My inner materialist loved the idea of every geological force, weather system, and human brain – and therefore every manifestation of the emergent property we call a “soul” – being predicted (something about my needing to take the stuffing out of humanity as a teenager), and I believed that doing this with the power of computing was eminently plausible save for our lack of complete data.  I now realize that it is impossible.  No, not because I’ve stopped being a materialist.

Any computer used to run a complete simulation of the real world must be at least as big as the system that it simulates.  That is, a complete simulation of an amoeba would require at least an amoeba-sized computer, a complete simulation of a human would require at least a human-sized computer, a complete simulation of a planet would require a planet-sized computer, and so on.  This is for a reason that is a “bit” obvious once you come to see it, as I did sometime during my undergrad years (if my memory of conversations over AOL Instant Messenger serves).  Data is instantiated in computer memory as 1s and 0s, or bits, and the mathematical operations performed on those bits give rise in aggregate to more complex operations, everything from blogging to Microsoft Flight Simulator.  At the moment, each of those bits needs at minimum a single atom with a charge to represent its value (the details of the bleeding edge of computer memory are quite fuzzy to me; replace “atom” with “quantum particle” in this argument as you see fit).  Any atom in a simulated universe would need a great number of bits to represent its various properties (number of neutrons, location(s), plum pudding viscosity, etc.), and thus many atoms of real-world silicon would be the minimum needed to represent a single simulated atom.  Because all matter is composed of particles that would each need at least that many particles of computing hardware to simulate them, hardware must always be at least as physically big as the physical system that it simulates.  So much for running a predictive version of Grays Sports Almanac on my Windows computer.
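For the quantitatively inclined, this argument is really just counting, and here is a minimal back-of-envelope sketch of it in Python.  The constants (how many bits one simulated atom needs, how many hardware atoms one bit needs) and the function name are purely illustrative assumptions of mine, not real physics or engineering figures; only the direction of the inequality matters.

```python
# Back-of-envelope sketch of the "hardware at least as big as the system" argument.
# Every constant here is an illustrative guess, not a real physical or engineering figure.

BITS_PER_SIMULATED_ATOM = 200  # hypothetical: bits to record position, isotope, charge, bonds, ...
ATOMS_PER_HARDWARE_BIT = 1     # the most generous possible case: one real atom stores one bit


def minimum_hardware_atoms(simulated_atoms: int) -> int:
    """Lower bound on real-world atoms needed just to store the simulation's state."""
    return simulated_atoms * BITS_PER_SIMULATED_ATOM * ATOMS_PER_HARDWARE_BIT


if __name__ == "__main__":
    # Illustrative system sizes, from microscopic up to roughly planet-sized atom counts.
    for n in (10**14, 10**27, 10**50):
        lower_bound = minimum_hardware_atoms(n)
        print(f"{n:.0e} simulated atoms -> at least {lower_bound:.0e} hardware atoms "
              f"({lower_bound // n}x the simulated system)")
```

However you tweak those constants, as long as storing one simulated atom costs at least one real atom of hardware, the ratio never drops below one – which is all the argument needs.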

But maybe not all that information is needed.  Maybe not all aspects of the system need to be accurately represented in the simulation for the result to be close – the number of neutrinos flying through the Milky Way surely can’t have that much to do with whether Leicester beats Arsenal 2-1 or 2-0.  But consider that that game takes place in a universe where neutrinos definitely exist and people know and talk about them.  Some proportion of viewers, players, or advertisers are surely affected, even if indirectly, by the scientific research being done in the city where they live (Leicester and London are both home to renowned universities) – universities are huge employers with large real estate footprints.  Seen in this broader picture, the existence of neutrinos seems like a variable actually capable of affecting the outcome of a soccer match.  Even a single sporting event isn’t really a closed system – consider how directly these events are affected by weather.  And of course the types of simulated realities that are in vogue recently thanks to Black Mirror are earth-like, or at least have environments capable of fooling complete human simulacra, which means that the humans in them need referents for the things that they talked about when they were still flesh and blood – can you imagine a physicist being happily confined in a San Junipero if the rules of atomic motion are not part of the virtual world?  What would you do for fun when the 80s nostalgia wears off?

It’s an open question whether a simulated mind deserves moral consideration even if it has the subatomic workings of its nervous system simplified in order to make it run on a smartphone. The point I mean to make is just that it’s impossible to have a completely simulated anything without building a computer of at least that physical size in the real world.

Grammar Mining (and the collected Mark SLA Lexicon)

Many of us agree that teaching “at the point of need” (as I believe Meddings and Thornbury put it) is an ideal context for formal grammar teaching.  Students’ trying to communicate something provides clear evidence that they need the grammar that would facilitate communicating it, and depending on how close they come to natural expression, evidence that their internal representation of English is capable of taking on this additional piece of information.

In interlanguage punting, I conjectured that taking a guess at grammar students may need in the future and organizing a lesson around a particular grammar point was justifiable if the lessons you used to introduce that grammar would be memorable for long enough for a “point of need” to be found before the lesson was forgotten.  At the time, I was teaching weekly 1-hour grammar workshops with rotating groups of students at different levels, and as I could not teach reactively, I had to justify my grammar-first (formS-focused) approach.

Read on for the last post before the new semester starts.


Stuff I will miss 3: Intensely bitter grapes

I’ve been out of Japan, my home for almost all of my adult life, for a year now.  There are now some things I’d legitimately think of planning a vacation around to experience again.  Some of these are of the nostalgic flavor variety, and others are more profound.

(If you had had the power to predict back in 1987 that one of the effects of a global information network would be that everything textual had to be organized as a list, would that have helped you to make any wise investments?  What could you do with that information besides corner the market on scroll bars?)

Air conditioning.  People tend to think of Japan as an ancient country.  I disagree with the concept of “ancient countries” on philosophical and historical grounds (“ancient” being but one of many possible stances a modern person can take on the history of a current political entity), but in any case, you see no evidence of this claim living in an apartment in Japan.  It’s quite rare to find any domicile more than 30 years old, and the facilities within any given residence are bound to be either brand new or from around the same time the domicile itself was built (again, not a very long time ago).  Modularity is the philosophy of most interiors, leading to replaceable fluorescent ceiling fixtures, temporary flooring (often the rugs, tile, and the wafer-thin floor itself), and detachable wall AC/heating units.  The philosophy of housing in Japan seems similar to the philosophy of food in the US – ultrarational and at the convenience of industry.  My current residence in the US is older than any building I lived in in Japan, and its AC sounds like a fleet of Huey helicopters.  The idea that American buildings are old and sturdy and Japanese buildings are new and full of ad hoc fixes clashes with stereotype, but more importantly, sometimes slapdashness has perks in easy upgrades.  My AC in our school in Japan was practically silent in comparison.  If only the walls had been made of material capable of keeping heat in.

Unsweetened food choice architecture.  I still believe the point that I used to make, that the stereotype about everything in the US being incredibly sweet is false.  However, the sweet things here are definitely more prominently placed and almost always the first thing you notice on any given supermarket shelf.  There are croissants among the danishes and donuts, and plain yogurt next to the vanilla and strawberry yogurt, but the sweeter options are at least 2/3 of the options available and always hit the eye first.  This doesn’t technically force you to buy sugar-coated or HFCS-filled products, but it does make them harder to ignore.  Shopping here tends to nudge you towards cavities.  At least the dentists wear gloves here.

Meiwaku.  Interaction has a steep transaction cost in Japan.  Initiating random conversation, asking an off-script question to someone whose job isn’t specifically answering questions, mingling, sharing, and basically doing anything that anyone else might conceivably see all come weighted with fees.  Those fees come in the form of aversion to and fear of being called out for disturbing others (迷惑 meiwaku).  I don’t remember if meiwaku was in that book on Cultural Code Words, but it definitely should be – if there’s any single word that explains public behavior in Japan, it’s meiwaku.  Meiwaku, to me, is why people don’t usually talk on trains, get up to leave during the credits at the movies, or call out or loudly resist perverts on public transportation.  I see the threat of being accused of causing meiwaku as an additional burden that one feels in every public action, encouraging fairly extreme adherence to rules because the threat looms so large.  If it were a monetary cost, it would be a sales tax of 20% on all verbal transactions, pretty strongly disincentivizing public garrulousness.  The thing is, the revenue raised by this tax also allows for quiet and well-maintained (or at least not willfully destroyed) public spaces and a priority placed on social order, and that is something you can begin to miss in its absence.  Its near-complete absence in the US produces something also undesirable – reckless and risky behavior brought on by a lack of cost for disturbing other people – a recklessness subsidy, accompanied by a widely accepted cultural value placed on confidence and asserting oneself.  In public in the US, all options for public behavior are available to everyone at no cost in face or shame.  As a result, people avoid boring, on-rails conversations but are much more likely to engage in all manner of misanthropic behavior because of how highly weighted self-expression is over public order or consideration of others.

The dream of a pretense of a post-racial society.  Japan’s mainstream concept of itself is explicitly racial.  Not “we hate other races”, but “Japan’s story is the story of our race”.  I’ve come to think that most countries are this way, and that a legalistic, “our club is the best but anyone is welcome to join as long as they follow the rules” definition of citizenship is held by a distinct minority of countries.  Now, if one squinted at US history and let the bloody details fade into the background, one could have said that this was the story Americans have always told themselves – that America is an idea that rises above race and class.  In fact, it was true until recently that even the most conservative politicians publicly espoused a legalistic rather than blood-and-soil definition of citizenship.  I worry that having had to defend this president will cause larger parts of conservative America to abandon even the rhetoric of equality.  Cognitive dissonance and all that.

I knew there were elements in the US who envied the easy worldview offered by an explicitly racial view of their nation, and sought to tie Americanness to a mythic concept of “whiteness” just as Japan’s is to “Japaneseness”. I didn’t think these people would ever have a voice in polite society or have their chosen candidate win a national election.

Of course, it seems silly to say I miss the rose-colored view of my home country that I had while I was away from it, but that is the truth.  I miss having the US as an example of a country that didn’t mythologize itself as the work of one uniquely virtuous race while I lived in one that did.

Shallow roots.  The US is unique not in its newness (its government is actually very old compared to that of most countries) but in its inability to pretend to be ancient.  Most people, when asked the age of a country like Japan, will inevitably answer in the thousands of years.  If you consider a claim like this in practical terms, it means either that the denizens of these islands imagined themselves to be part of a Japanese polity before they even grew rice, or that a series of governments, even without continuity from one to another, has been able to exercise control over these islands since the Bronze Age without the benefit of any communication faster than a pony (or, in the earliest days, any form of writing).  Nonetheless, part of the current cultural software in some countries like Japan is a claim to continuous existence back into antiquity, made plausible by some historical facts (people really have lived here for a long time) and some guessing enabled by a lack of evidence (nobody knows what these people did besides live in huts and make cord-marked pottery).  The US, with all of the history of its current government being part of the written record, cannot feasibly claim any of this.

Belonging to an “ancient society” weaponizes many of the arguments conservatives tend to make in every society – that our way of life has been ever thus, distinctive and characteristic of our land and people, until vulgarians and foreigners infiltrated it and began its corrosion.  Of course, you hear these arguments in the US too, but the difference is that in an “ancient society” everyone starts the discussion by ceding most of the conservatives’ points.  Yes, our way of life has existed unperturbed for millennia, but maybe it wouldn’t be so bad to give women maternity leave.  Yes, our culture is unique and distinctive, but a little labor flexibility would help our economy’s competitiveness.  Progressives need to start with an acknowledgment of what they are breaking with or they will lose most people’s sympathy.  As I said, the US has versions of these arguments, but people often have to outsource their roots (“Western” or “Judeo-Christian”, nothing on this continent) and the mists of time don’t seem to reach much further back than 1955.  A loss of national or cultural identity can be quite freeing.

Of course, this is a list of the things I miss, and as in the last entry, moving here has certainly disabused me of my illusions about my fellow citizens’ resilience in the face of appeals from antiquity.  The president seems to come to his particular flavor of chauvinism simply by liking things the way he’s used to (white by default and free of the burden of empathy), but others, even progressives, have come to embrace the framing of conservative vs. liberal as traditional vs. non-traditional or continuity vs. interruption.  I suppose I had hoped I would be coming back to a society that saw interruption as its tradition.

Let’s end on something light…

Natto maki.  More sour beans than sour grapes.

(Image source.)  I’ll be sure to try them if I’m ever in Ho Chi Minh City.

The urinal cake principle

Imagine yourself pushing through a crowded train station during rush hour.  As you pass a certain doorway, you detect hints of lavender and hibiscus coming from within.  Do these smells evoke:

  1. just flowers, or
  2. first flowers then toilets based on your prior knowledge and experience regarding the likelihood that lavender is blooming within 100 feet of a subway platform, or
  3. just toilets?

This is the best way for me to understand the principle of dead metaphors.  A dead metaphor is the result of cutting out a semiotic middleman.  The process of a metaphor dying is a powerful and inevitable one that affects culture, and particularly language, in some subtle ways, as I hope to illustrate in as colorful a way as I can.

The process, in dense and unfriendly language, is this: The definition of a symbol over time changes from the first thing (flowers) to the second thing indirectly through the first thing (toilets via floral-scented deodorizing discs), and finally just comes to stand for the second thing (toilets).  This can be true even if the form of the symbol does not change – e.g. if the deodorizer continues to have a floral scent.  The reference stops being indirect and just goes straight for the thing that it was always eventually getting at.

I’ve been trying to think of more real-world examples of this principle in action.  Here are a few more:

A clothing brand (say, Members Only) is associated with rich people.  Poor people start to buy that clothing brand on sale or used because it evokes rich people.  The brand comes to be associated with poor people’s desperation to look rich.  (Louis Vuitton in Japan is rumored to head off this effect by buying back their used bags and burning them to keep them off the secondhand market.)

“Political correctness” is a recognized phenomenon in polite discourse.  Reactionaries vocally dislike it and use it as a stick with which to beat their cultural enemies.  It comes to be much more widely known for its rhetorical value in reactionary discourse (specifically, their hatred of it) than as a phenomenon within polite discourse.

A famous person with an idiosyncratic name (say, “Zoey”) appears.  People who have kids during the zenith of that person’s career name their kids after him/her.  That name comes to be associated with that generation of kids rather than the famous person.

Taboo concepts have euphemisms invented to avoid direct reference to them while still maintaining the ability to refer to them indirectly if necessary.  Subsequent generations come to think of the euphemisms as simply the names for those taboo concepts, since those are usually the only words for those concepts that they ever hear.  Those generations invent new euphemisms to replace the no-longer-thought-to-be-indirect old ones.

When I was studying Japanese before moving there, we learned 便所 benjo (“convenient place”) as “bathroom”, when practically nobody alive now still uses that word.  Sometime between the 50s and now they were お手洗い otearai (“hand-wash”) or 化粧室 keshoushitsu (“powder room”). Now they are called トイレ toire, presumably until the next generation starts calling them something like ルーム ruumu.

Hipster beards are destined to go from “modern guy” to “guy who still thinks beards are modern” to “guy who doesn’t know that nobody thinks beards are modern” and in 20 or so years “guy who affects not caring that hipster beards are not considered modern” and back to just “modern guy” again.  Believe me; it happened to skinny ties.

Most words, unless they were invented very recently, are dead metaphors or have changed meanings through some other process.  A word’s history is like the miles of junk DNA we carry around with us in our cells, of which we only use a small portion to form proteins, transmit messages, or enjoy an inside joke.  Words like “give up” (my favorite example, a clear dead metaphor), “Wednesday”, or “cranberry” have stories behind their present forms and usages that are very interesting, but also very optional.  Each living word has untold numbers of lost meanings (in addition to its likely numerous current meanings) which we don’t have and don’t need access to in order to use it.  The process by which a word’s meaning changes isn’t always the middleman-eliding of the dead metaphor, but the idea that one doesn’t need all the information about the past referents of a given token to understand it in its current context is the same.

We language teachers often pride ourselves on the elaborate stories and setups that we use to explain the usage of one language item or another.  One time I attended a presentation that asked us straightforwardly, “Why do you use on for buses but in for cars?”, to which several teachers laid out the possibly-made-up-but-convincing stories that they give their students.  These stories can definitely be useful for appearing to give students stage-by-stage mastery and “permission” to use a particular language item, things I definitely wanted in my early stages of Japanese learning.  Nowadays, I tend to think of them more as a bootstrap or a foot in the door (are those dead metaphors?) than as understanding itself, more affective factors than what we would usually call L2 competence.  Naturally, the end goal of most language learning is to have a grasp of the language similar to that of the fluent speakers in your target community, not to have complete explicit knowledge of the target language (although many learners confuse these two – some teachers as well).  One does not need to know the origin of a word or the logic behind its present form to use it correctly any more than one needs to have smelled fresh lavender as a reference point to know what that same smell means at the train station.


My black robes

According to The Impact, a judge has an unusually strong effect on mental health patients in causing them to follow treatment plans.  This phenomenon is called the black robe effect, named for what is perhaps a metaphor for, and perhaps the real, physical source of, the judge’s authority.  I have only listened to the episode and googled the term “black robe effect” once, but this is my understanding of the effect:

  • The effect on the patient is due to the outward signs of authority that the judge carries;
  • The effect is in causing otherwise uncooperative patients under the judge’s purview to follow advice/orders already known to those patients (i.e. the judge is not the originator of the advice/orders);
  • Most of the effect is realized in the judge’s absence as an indirect effect of his/her authority (e.g. when the patient takes a daily medication at home);

The basic outline of this effect is something I’ve found to be a major part of my job as an ESL or EFL teacher.  I’m often in the position of telling my students to do things that they could feasibly do without anyone’s saying anything, but that they’re much more likely to do when I tell them.  This is probably the one way in which I most reliably assume the “teacher role” and exercise my authority.

In fact, this is probably one of the best justifications nowadays for teachers existing at all.  We are great at causing (or forcing or allowing or facilitating; I’m not picky on the causal metaphor) people to do things that they could always do for free, and ideally creating norm-governed communities where success at those things is celebrated.  We definitely aren’t the only ones in the room anymore with access to the right information – students have all the human knowledge in the world in their pockets.  We have authority and an agreed-upon role as an arbiter of the values of our in-class community, and not much else.

Reading circles are a good example of the black robe effect in my classes.  This semester, one of my classes has read a non-fiction book over the course of a couple of months, and every 2 weeks during that time we’ve done reading circles that cover the chapters we read in the previous week (for the curious, here are the roles that I use).  Now what is my role in “teaching” the weeks that we share our reading circles sheets?  It’s pretty much the black robe effect without the gavel:

  • The effect on the students is due to the outward signs of authority that the teacher carries; (i.e. they do it because the person in the front of the room told them to)
  • The effect is in causing otherwise uncooperative students under the teacher’s purview to follow advice/orders already known to those students; (i.e. the book we’re reading has always been available to buy, as are millions of other fine books – “uncooperative” here means “wouldn’t do it by default”)
  • Most of the effect is realized in the teacher’s absence (e.g. when the student reads at home – and although I’m physically present in the classroom when they’re sharing their reading circles, I’m not participating, so then too).

One of my staple activities is even more of a textbook example of a black robe effect – I give students something called a Language Log, which is basically a blank sheet with spaces for English input (things they watched or read or people they talked to) outside the classroom and what they noticed.  Nothing about the sheet requires some deep knowledge on the part of the teacher to design or implement – it is a kind of educational MacGuffin that furthers the goals of language development without containing anything meaningful itself (the educational MacGuffin was a staple of my classes back in Japan too).  Still, if some non-authority or even one of the student’s family members gave them the same sheet and instructed him/her to keep track of input, it would not work – family members, in ESL and in mental health treatment, don’t get to wear black robes.

I’ll post again at a later date about what exactly my black robes comprise.

On Tyranny in the ESL Classroom: 20 Lessons from 20th Century Pedagogy

(Pace Timothy Snyder – originally this post was going to be “Democratic Backsliding in the ELT Classroom”, but I haven’t actually read the relevant materials for that.  The point is the same, though – a series of semi-political tips for not letting classes or institutions slide into transmissive dictatorships.  The usual caveat applies:  I certainly don’t apply as many of these rules as I’d like, and in fact wrote this partly as a warning to myself.)

Do not obey in advance

Let’s assume your students have shown a pattern of reluctance to choose input for themselves or engage in self-directed learning, which is common in language classrooms around the world.  Do not assume that this pattern will continue forever, and do not change your teaching methods in anticipation of this reluctance even before it happens.  Do not treat your students as unready for communicative or other modern methods simply because previous classes may have been.

Defend institutions

Defend modern ELT in principle.  Many classes slide into teacher-domination because expedience seems to demand it – because teachers accept the unilateral authority that the forces of student expectation and curricular deadlines seem to require.  Temporary suspensions of student-centeredness in favor of transmission-style teaching should be resisted, not just because they do not work, but because they encourage the view that researched and rigorous concepts such as interlanguage are inconveniences standing in the way of truly efficient impartation of knowledge.  In reality, of course, that efficiency is more a path toward perfunctory teacherly role-playing than toward learners’ mastery of English.

Beware the one-party state

Many classroom dictatorships arise not because a teacher arrogates power but because his/her pupils choose to cede it when given the option.  Do not take opportunities that students give you to take full control of the classroom, and do not use your authority as a teacher to consolidate attention and legitimate authority around yourself.

Take responsibility for the face of the world

The appearance of the classroom should not reflect the will of a single person.  The only writing on the whiteboard should not be the teacher’s, the only printed text used should not be from the teacher, and the only voice heard should not be the teacher’s.  Classrooms should physically manifest the priority given to students’, not teachers’, expression.

Remember professional ethics

Oftentimes, a teacher-centered class emerges because students feel pressure to play the part of the student as they understand it.  This part, which is often defined by passive receptivity and obedience, is not simply unconscious habit – students may see it as an affirmative moral value in itself.  That is, the job of the teacher may not be just to present a more interesting alternative to silent absorption of information, but also to actively discourage students’ preconceived ideas of “how to be a student”.  Students have their own professional ethics of classroom conduct, and teachers would do well to acknowledge their existence.

(Yes, this is the opposite of Timothy Snyder’s point on this subject.  Bear with me.)

Be wary of paramilitaries

Clusters of students that are apparently sympathetic to the communicative, egalitarian, task-based curriculum that the teacher is trying to effect may appear and begin to dominate classroom activities.  The existence of these seeming allies among the student population is welcome to a degree, but can begin to create a hostile environment for students who are reluctant to engage to the same degree for reasons of identity or ability.  Remember that the job of the teacher is not to give more advantage to students who are already advantaged because of a higher starting point or previous experience with modern ELT classes, or to signal a preference for those students.  The creation of a privileged minority of students within the classroom should be avoided.

Be reflective if you must be armed

For students: Being appointed, being selected, or volunteering to be group leader means that you are responsible for the maintenance of communicative norms within that group.  When you have power over your classmates, maintain norms of discourse that do not privilege particular viewpoints – yours especially – or consist only of participation by students who are already fluent speakers.  Some students will take the reduced numbers of eyes on them when working in a small group as an invitation to dominate the conversation or to shrink back into individual study.  As the local authority, your job is to prevent either of these from happening.

Stand out

Taking a modern, communicative approach may distinguish you from your colleagues in ways that are mutually uncomfortable.  You may feel that you are passing judgment on your colleagues’ or institution’s way of doing things by breaking from it.  Indeed, some teaching milieux may have norms so deeply established for so long that trying something new is seen as synonymous with questioning everyone else’s competence.  Be open about trying new techniques and approaches and be honest about their success or failure.  Be prepared to justify them with reference to research.  Above all, be honest about why you teach the way you do, and do not acquiesce to unjustifiable pedagogical norms no matter how many people with pages-long CVs are pushing them.

Be kind to our language

Do not adopt buzzwords needlessly, and certainly do not use them without understanding them.  “Learning styles” were a litmus test for being a modern teacher for 15 years or so, during which many teachers described their classes and students with the vocabulary of what turned out to be a false theory of educational psychology.  Many still use the terminology of “learning styles”, describing an activity as “ideal for kinesthetic learners” when they could just as easily call it “less boring than sitting still”.  By adopting this terminology, teachers have appeared to endorse a theory which was debunked.

Believe in truth

In some teaching contexts, a long career is seen as a substitute for reflected-upon experience and confidence in one’s methods as equivalent to knowledge of their efficacy.  Foreign language pedagogy is a field with a long history and plenty of research.  This body of research is mature enough to offer at least some tentative answers to long-standing questions in our field, such as how central formal grammar should be in classes and how much of a difference input makes.  Access to the current state of knowledge on questions like these, and more importantly, believing that the questions have answers that can’t be ignored in favor of a local or individual long-practiced method, is a step toward more effective and more justifiable pedagogy.


That said, the answers to pedagogy’s big questions may not come in an obvious form.  Sometimes a teacher will have great success with a method or technique that appears to come from the Middle Ages.  Commit to trying to understand how different teachers have success with different class styles and the principles underlying that success.  Above all, do not accept pedagogical prescription or proscription without the application of your critical faculties.

Make eye contact and small talk

Humanity can be brought to the classroom by simple engagement with learners as people.  Some one-on-one or small group interaction with the teacher not as a fount of wisdom but just as a person, and with the learner not as a receptacle of knowledge or target of remediation but as another person, can bring much-needed humanity back to the classroom.

Practice corporeal politics

PhD researchers who don’t teach and chalk-faced teachers who don’t reflect on practice or theory are a perfect recipe for each other’s stagnation.  Take theory that comes from people who haven’t set foot in a language classroom in years with a grain of salt.  You cannot realize good pedagogical theory without contact with learners.  I mean this in two ways – your theory will be useless if it doesn’t survive contact with actual people, and putting your theory into practice with your own students ensures that at least some people will benefit from it.

Establish a private life

You do not need to share as much with your learners as they share with you.  There is a happy medium between sterile professionalism in the classroom and complete shedding of boundaries.  Affective factors certainly do affect achievement, and that entails at least some rapport and sense of community beyond a shared interest in skillbuilding.  However, oversharing runs the risk of reducing the teacher to merely an affective variable and not an expert in either the subject or how to teach it.

Contribute to good causes

A local, institutional professional culture may fall short of maintaining pedagogical standards.  Sometimes, a national or international group, formal or informal, may function better as a community of practice for a teacher hoping to grow and keep up with current wisdom.  In any case, join (i.e., send money), attend, and especially present.  If a group of which you are a member is failing to provide something of value, you should provide it instead.

Learn from peers in other countries

ELT and especially SLA are worldwide fields, and different cultures, countries, and institutions around the world often practice radically different pedagogy.  Staying in one milieu for too long threatens to particularize your skillset; working in many countries, or at least communicating with fellow teachers and learners in other countries, exposes you to different sorts of problems to be solved and ways of solving them.  A frequent stumbling block in your milieu may have an extremely commonsense solution elsewhere in the world – and you may be surprised by the depth of thought that goes into an issue you thought only had one answer.

Listen for dangerous words

Pedagogy can be circumscribed a bit too cleanly by the words used to describe it.  “Syllabus”, “material”, “instruction”, “grammar”, “participation”, “master” and even “know” are all words that language teachers have good reason to take with several grains of salt.  If you hear these words being used as if their meanings were obvious, and especially if they are being used with obviously mistaken meanings, don’t be afraid to ask, “what do you mean?”  Often, the most useful discussions with colleagues and students occur over supposedly commonsense terms.

Be calm when the unthinkable arrives

Emergencies and exceptions are dangerous times.  The last day before the test might seem like a time when the norms of student-centeredness might best be suspended in favor of teacher-led review sessions.  This might even be presented as the only responsible option.  Of course, if teacher-centeredness is the most responsible path right before an exam, another exam will come soon, and the exceptional circumstance might be stretched a bit longer.  In fact, every lesson contains something of vital importance which seems to deserve priority over the luxuries of free student participation and self-directed learning.  There are always circumstances that would seem to make every class session a temporary exception or an emergency and cause the teacher to resort to a more “efficient” method.  Be very suspicious of exhortations or injunctions justified by the supposedly unique circumstances of the present class period.

Be a patriot

Be a teacher, not a deliverer or keeper of information.  You can take for granted that you know the subject matter better than your students.  Knowing the metalanguage around your subject matter, including serious-sounding terms like “adjective clause”, makes it easier for you to convince other native speakers that you really earn your paycheck, but of course you will never catch up to Google search in your grammar knowledge.  Your job is bringing other people to a more complete understanding (see “dangerous words”) of the subject matter, not just knowing it yourself, and certainly not impressing your students with how much more than them you know.

Be as courageous as you can

If none of us is prepared to work for our betterment, then all of us will labor under mediocrity.

My Knowing You Has Moral Value of Life and Death

I’ve been pushed for the first time since the early weeks of this blog to comment on vegetarianism, thanks to a thoughtful post by Wandering ELT.

Like most of the strongly held opinions that made up my identity when I was in college, such as the utility of taxing custom rims or the superiority of Megadeth over Metallica, vegetarianism has turned from ideology into mere habit.  It still exists, like an old UCI sweatshirt, as a vestige of the intellectual life I used to have.  I still practice it (and still listen to Megadeth) more because it’s what I did yesterday than because I am a consistently, mindfully moral person.  Obviously, no completely moral person can listen to Megadeth as much as I do.

Most of this post will have the odor of long-dormant dogma reactivated.  If I begin to sound too strident, just be glad you know me now and not when I was 22.

The moral crisis that fomented the change in my life from meat eating to not began with the death of one of my dogs in the summer of 2001.  I felt suddenly aware of how much his admittedly simple life had meant to me, and how distressing it was to think of his last moments of suffering.  I suppose almost any pet owner in the same situation feels the same things.  This time, for some reason, I was also very aware of just how few beings in the world would be capable of drawing this kind of reaction, and as weeks went on afterward, this took up more and more of my thoughts.  I was still feeling the loss itself, and some odd guilt as well for feeling this so selectively.  I began to notice that the gap between my overriding preoccupation with my dog’s well-being at the end of his life and my complete ignorance of the well-being of every other animal on earth said something very bad about how my moral circle of concern applied to the world outside.

[Figure: The way I thought my moral circle was]

My dog’s death that summer, and the terror attacks soon to come after it, made very salient in my mind the inhumanity of how I drew my moral circle.  Many of us have heard “expanding circle” arguments about morality, in which we treat the things that are close to us as valuable, and further things less valuable, until we are basically indifferent to things that are very distant or different from us.  What my dog’s death made very clear to me is that 1) my moral circle where animals were concerned had a monstrous gap between the animals I cared about and the animals I didn’t, and 2) the center of my moral circle was quite small and was only justified by my own ego.

[Figure: The way I saw it after summer 2001]

The way a circle of moral concern is often understood is that we simply don’t care about the things/beings outside it; we don’t wish them harm, but we also don’t actively try to improve their lives.  In my case, the momentary suffering ending in death of one creature debilitated me for some time, which is an inevitable and even healthy response for pets and family members near the center of one’s circle of concern.  The creatures outside of my moral concern, however, weren’t simply benignly outside of my attention.  I paid people to immiserate and kill them, albeit indirectly.  I enjoyed the fruits of their deaths and considered the savings from not giving them comfortable lives a bonus for my wallet.  In short, I wasn’t indifferent to them; I actively participated in their torture and destruction.  The revision of the outline of my moral circle from a slow fadeout into a sheer cliff was intellectually jarring.

And the center of my moral circle was me, just me and all my coincidental associations.  I don’t think things enter or leave my life for cosmically meaningful reasons.  My dog was adopted by my family, lived with us, and was loved by us because we happened to choose him that day when I was in fifth grade, not because he was made of clearly superior stuff and the universe especially wanted him to have a good life.  Through the accident of his association with us, in addition to having been born a domesticated dog rather than a pig, chicken, or cow, he was granted a life of gentle leisure rather than one of neglect, prolonged discomfort or constant agony.  His death would be seen as a tragedy rather than a transaction because he happened to come into contact with us.  In other words, my family and I were the center of a bizarre moral universe in which only the few animals near us had human-like moral value, and all others deserved to die to make our sandwiches tastier.  Our circle of concern wasn’t based on logical or universal criteria like the capacity to feel, consciousness, or a complex nervous system, but was transparently based on whether you happened to be lucky enough to know us.  It was a solipsistic moral circle, and as I mentioned earlier, the edges were dropoffs into moral worthlessness.

So by becoming vegetarian I convinced myself that my moral circle was something I could justify.  Now at least I wasn’t basing the moral value of animal life on its proximity to me, and deeming those animals who failed to meet that arbitrary criterion subject to slow torture and death.

I don’t completely agree with this point of view now, since the suffering of animals outside of human society is arguably worse than that of animals living in countries with decent animal welfare laws, even if they are being raised for meat.  If I had the choice between a meat meal from an actually happy cow (the ones from California regrettably aren’t) and a vegetarian meal of 100% imported and processed rainforest-grown grains, I might really need to think about my choice.  Of course, we don’t live in a country with decent animal welfare laws, so I’ve never had to resolve that conundrum.

(FYI, my last meaty meal ever was chicken soft tacos from the Del Taco on Campus Drive.)