Theistic Performatives

I spent some time talking about performativity with a content-based class this summer, in both the linguistic “I now pronounce you man and wife” sense and the Butlerian “gender is created through its performance” sense. I didn’t anticipate finding the principle illustrated in the responses to two mass shootings in the days after our class ended, in the usual round of “thoughts and prayers” (sometimes in exactly those words and sometimes in others, as the original phrasing has become a bit of a cliché) being offered for the victims.

To be precise, he describes “thoughts and prayers” as a feigned interaction rather than as a performative utterance.

(To be clear, although this post is about language, I think the news and the banal responses are horrifying. This is a topic for a separate post, but you can always count on an ESL teacher not to buy arguments based on national exceptionalism – they seem more ridiculous the more of them you encounter.)


Unfactives

As with the same class last semester, and as happens to me often, I have been spurred to blog by an unusual utterance from a student – or should I say, by an utterance whose non-target-likeness highlights an interesting linguistic phenomenon.

Some verbs, like “know”, say something about the mind of the subject of the sentence as well as the mind of the sentence’s speaker. That is, if Kim says, “Eva knows that 3 students will fail the class”, not only Eva but also Kim believes that the proposition “3 students will fail the class” is true. If Kim believes that Eva is wrong about those 3 students, she will probably choose a different verb, like “believe” or “think”, because if Kim says “Eva thinks that 3 students will fail the class”, she avoids giving the impression that she agrees with Eva.

(It’s an interesting question how many clauses deep these verbs have to be before the speaker is no longer presumed to agree with the proposition. For example, if Laura thinks that Kim believes that Eva knows that 3 students will fail the class, is it implied that Laura agrees? Does the factivity of “know” leap out of its clause and infect every person in the sentence, or does one non-factive verb break the chain? I tend to think that if Laura heard a sentence like “Eva knows that 3 students will fail”, but thinks she’s wrong, she’ll change the verb to a non-factive one in relaying that information to someone else.)

As you can see from my aside, these verbs are called factive. In short, they imply that the content of the noun clause that follows is factual. “Know” is one of these, as are “understand”, “realize”, “prove”, and “remember”.

The error that inspired this post was the opposite: a verb being used to imply that the content of the noun clause was false, as with “deny”, “disbelieve”, and “doubt”, which all mean that the subject believes or says that the proposition that follows is false. These words, unlike factive verbs, don’t presuppose that the speaker agrees. When the newspaper says, “Dems doubt that Trump will leave willingly”, the newspaper isn’t taking the position that they are right about him. The newspaper is simply relaying the Dems’ state of mind.

(Confusingly for Japanese learners of English, 疑う utagau, the usual Japanese translation of “doubt”, implies that the subject has a sneaking suspicion that the proposition is true, rather than false as in English. Another strike against grammar-translation.)

The error that I saw used a factive verb with a negative prefix and was followed by a noun clause that the writer intended to say was false. It was something like “Many people misunderstand that the earth is flat”. The writer, as I understood it, was trying to say that many people believe that the earth is flat, but they are wrong. This left me sitting and re-reading the sentence for a few minutes as I tried to figure out just what seemed so strange about it. I did my customary COCA search and found a relative lack of noun clauses after “misunderstand” compared to “understand”, validating some of my intuition, but it didn’t give me an answer as to why.
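Since I mentioned the COCA search, here is roughly how one might sanity-check the same intuition in code. This is not the COCA interface (which is a website, not a library); it is just a crude analogue using NLTK’s much smaller Brown corpus, counting how often each verb is immediately followed by “that”.

```python
# A rough stand-in for the COCA check described above: count how often
# "understand" and "misunderstand" (in a few inflected forms) are immediately
# followed by "that" in NLTK's Brown corpus.
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)

words = [w.lower() for w in brown.words()]

# Map inflected forms to a lemma; -ing forms are left out for simplicity.
lemma_of = {
    "understand": "understand", "understands": "understand", "understood": "understand",
    "misunderstand": "misunderstand", "misunderstands": "misunderstand",
    "misunderstood": "misunderstand",
}

counts = {"understand": [0, 0], "misunderstand": [0, 0]}  # [tokens, followed by "that"]
for w, nxt in zip(words, words[1:]):
    lemma = lemma_of.get(w)
    if lemma:
        counts[lemma][0] += 1
        counts[lemma][1] += (nxt == "that")

for lemma, (total, with_that) in counts.items():
    print(f"{lemma}: {total} tokens, {with_that} immediately followed by 'that'")
```

A count this crude misses separated complements and non-complement uses of “that”, but it is enough to see whether “misunderstand that…” is as rare as it felt.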

One factor that occurred to me is that “deny”, “disbelieve”, and “doubt” still leave the proposition standing on its own two feet epistemologically. They don’t bring up the proposition and in the same breath invalidate it – they just say that the subject disagrees with it. It is still free to exist as a proposition and be believed by other subjects. It seemed perverse to me that “misunderstand” would have a noun clause following it that was presupposed even by the speaker to be false.

As I was typing this though, I remembered “disprove”, which shares with “misunderstand” a factive root and a negative prefix. To my understanding, “disprove” is a true unfactive – if I say “Einstein disproved that matter and energy are distinct”, I am also stating my agreement with Einstein. If we accept the premise that some propositions are true and others are false, the above sentence can only be true if the proposition contained in it (“matter and energy are distinct”) is false. Therefore, the combination of a negative prefix with a factive verb to mean “the noun clause following this verb is definitely not true” cannot be the source of the strangeness of “misunderstand that…”

Another factor may be that with “misunderstand”, unlike with “deny”, “disbelieve”, “doubt”, or even “disprove”, the speaker’s and the subject’s opinions of the truth of the proposition differ. When “Trump disbelieves that” his approval ratings are low, Trump believes that the proposition is false, and the speaker doesn’t take a position on it. When “Einstein disproves that” matter and energy are distinct, Einstein and the speaker agree. However, in my student’s usage of “misunderstand”, the speaker and the subject definitely disagree. “Trump misunderstands that millions of illegals voted”, in my student’s usage, means that Trump believes it, but he is wrong. In my limited exploration of this issue, this is the only case where a single verb implies both that the subject believes the proposition and that the proposition (in the speaker’s view) is false.
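To lay the comparison out more schematically, here is a toy summary of who is committed to what – the labels are mine, not standard semantics terminology, and “misunderstand” reflects my student’s intended meaning rather than established usage.

```python
# What each verb commits the subject and the speaker to, with respect to the
# embedded proposition p (toy summary of the discussion above).
stances = {
    # verb             subject thinks p is   speaker thinks p is
    "know":           ("true",               "true"),            # factive
    "think":          ("true",               "no commitment"),
    "doubt":          ("false",              "no commitment"),
    "deny":           ("false",              "no commitment"),
    "disprove":       ("false",              "false"),           # a true "unfactive"
    "misunderstand":  ("true",               "false"),           # the student's intended usage
}

for verb, (subject, speaker) in stances.items():
    print(f"{verb:>13}: subject -> {subject:13} | speaker -> {speaker}")
```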

Perhaps for an unfactive verb to make sense, as “disprove” does, it has to say not only that the proposition is false, but that the subject is right that the proposition is false. Anything else is uncromulent.

Virtue Signalling, feigned interactions, and In-N-Out

“Virtue signalling” has sort of become this generation’s “politically correct”, a term of abuse for supposedly vacuous public communication by the political left. Much like political correctness, it actually describes something universal across political groups, and use of the term is itself an example of the phenomenon it describes (i.e., calling something out as “virtue signalling” is a way of virtue signalling to one’s peers, much like decrying “political correctness” is a literally politically correct thing to do in certain circles).

Certain kinds of virtue signalling consist of messages ostensibly sent to the out-group, actually meant for the in-group to see, where the appearance of communication with the out-group is an important part of the real message. The real act of communication seems to be, “Look at me, trying to talk to these savages! That’s how committed I am to our cause!” Unfortunately, a lot of political communication these days really consists of ostentatious displays of self-sacrifice to one’s own tribe, where the sacrifice lies in having to tolerate communication with members of the other tribe.

I’ve covered this ground before, but have a few new insights:

  1. The proportion of apparent communication between tribes which is really feigned communication designed for consumption by members of one’s tribe may be increasing
  2. Some communities place a higher value on communication with out-groups than others, perversely raising the likelihood that it is feigned

The first is just a result of the increasing siloing of discourse; communities have more opportunities for self-selection with cable news and social media than at any other time in history. Few conservatives watch MSNBC, and fewer liberals watch Fox News. Odds are, when you see a commentator or guest who appears to be ideologically opposed to the main viewership of whichever cable news network you are watching, you are seeing a feigned communication in which the fact that the host is trying hard to “reach the other side” is the real message of value, and that message is solely intended for his or her own political tribe. Any bona fides that the heel commentator may possess only serve to increase the value and validity of the real message. This has been true of talk radio and conservative commentary since at least the days of Wally George, but the fact that any subculture can now have a Facebook group or YouTube channel all its own makes in-group signalling so much more valuable than genuine out-group communication that a high proportion of fake out-group communication is inevitable.

The second was brought to my attention by my wife, who asked me what the “Revelation 3:20” on the bottom of our In-N-Out burger wrapper meant. She had heard that In-N-Out food comes with Bible verses printed discreetly somewhere on the packaging, but still couldn’t decode this apparent combination of TOEFL vocabulary and time of day. It hadn’t occurred to me that In-N-Out’s Bible verses could also be an example of feigned communication, but of course I grew up in a household that at least pretended to think that church was important and hadn’t thought about how opaque something like “Nahum 1:7” (on the bottom of a Double-Double) looks to someone raised without any exposure to the Bible.

A straightforward interpretation of these phrases is that to Christians, they are like a whispered codeword, a message that shows insider knowledge and expertise, while to non-Christians, they are pretty much indistinguishable from “Xanthan Gum”. If that were the sum of their meanings to both groups, the verses would be either straightforward in-group communication or simply failed communication rather than feigned communication. However, I doubt the owners of In-N-Out, conservative Christians though they are, would waste ink telling fellow Christians something they already knew or giving non-Christians the equivalent of a Dewey Decimal number to look up.

They might instead be communicating something to their fellow Christians besides literal Bible verses: the fact that they are trying to reach non-Christians, a message with special currency among evangelical Christians. Seen this way, the use of Bible verses makes more sense – it is vastly more important to put the message in an emic form that Christians recognize, since they are the true recipients, than in a form that non-Christians would recognize, since they are only the feigned recipients. In a community where outreach is a core value, feigned communication with out-groups is an especially tempting form of in-group signalling, and although I haven’t been to church in many, many years, I suspect feigned communication with non-Christians is pretty common. I noticed feigned communication first in Japan, but clearly it takes place in other groups with similar ways of defining themselves.


The affective issues cliff

Some issues that exist in students’ lives affect their academic performance in ways that are unfair and impossible to ignore – kids and jobs are two massive time-sucks that interfere with schoolwork, but everything from mental illness to changing bus routes in the city mediates how well students do academically. Particularly at community colleges, which exist specifically to serve non-traditional students, we teachers have a duty to incorporate some treatment of what we call “affective issues”, such as anxiety, work or family obligations, or negative self-image, into our courses. These duties can be written into law, as with mandated reporting of suspected abuse, or simply be commonly accepted but not required “best practices”, such as accepting late work or generally making yourself available to meet with students outside of class. Then there are the students who don’t have anything that has been recognized as an “affective issue” but are clearly affected away from classwork and towards League of Legends, and not much in our training says we owe these students’ issues any particular redress at all.

In American healthcare, there exists a phenomenon known as the “Medicaid cliff”: an income threshold below which you are provided with cheap and reliable healthcare, and above which you are required to buy expensive, complicated private insurance. A lot of people decry the existence of this drop-off in public coverage even if they support Medicaid in principle (that principle being that people who cannot afford health insurance still deserve to live). The cliff comes about because our definition of “poverty” has to end somewhere, and once you’re out of poverty, the government no longer takes an active interest in how you afford to stay alive. Thus, you could have an income of 130% of the federal poverty line and qualify for single-payer health care in the form of Medicaid, or get a raise to 140% of the federal poverty line and suddenly have to buy a private health insurance plan with a $7500 deductible. Pass the magic line and you are transformed overnight from a victim of forces beyond your control into an upstanding and responsible citizen.
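For concreteness, here is the arithmetic of the cliff as a tiny sketch. The 130%/140% figures and the $7,500 deductible come from the paragraph above; the poverty-line figure, the 138% cutoff, and the monthly premium are rough placeholders, not real plan data.

```python
# Toy illustration of the "Medicaid cliff": a small raise flips annual costs
# from roughly zero to premiums plus a large deductible.
FEDERAL_POVERTY_LINE = 12_140   # roughly the 2018 figure for a single adult
MEDICAID_CUTOFF = 1.38          # expansion-state eligibility ends around here; varies by state

def annual_exposure(income):
    """Very rough yearly out-of-pocket exposure for a single adult (toy numbers)."""
    if income <= MEDICAID_CUTOFF * FEDERAL_POVERTY_LINE:
        return 0                  # Medicaid: negligible premiums and deductible
    return 12 * 250 + 7_500       # hypothetical private plan: premiums plus deductible

for pct in (1.30, 1.40):
    income = pct * FEDERAL_POVERTY_LINE
    print(f"{pct:.0%} of the poverty line -> ${annual_exposure(income):,} per year")
```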

Read on if my point isn’t obvious enough yet.


Photo by Danne on Pexels.com


Circular thinking in deterrence

Back in my undergrad criminology classes, the professors would often raise interesting case studies of crimes where the benefits to society of punishment were unclear. Often, these were cases where giving “just deserts” to one party harmed another, innocent party, for example the accused’s dependents. Inevitably, one classmate would respond to such a case with “they knew what they were getting into” or simply “fry ’em!”

The punitive impulse is strong. It effects an attitude toward crime that one sees everywhere, especially these days toward illegal border crossing. The reactions to people being punished for crossing the border show this clearly, in part because of the inhumanity of the punishment being delivered – in this case, as in many others, not just to the accused.

Here’s exhibit A:

As hilariously disingenuous as the first part of that is, it’s the last part that I want to focus on today – the part where she says that children were brought to the US under “irresponsible” conditions. In her mind, and in many others’, the irresponsibility of bringing kids across the border under threat of separation justifies the harsh punishment of separation, because only irresponsible parents who deserve to lose their kids would cross the border. (Left out of the discussion, as always, is whether children of such parents “deserve” institutionalization and lifelong trauma.) That one word, “irresponsible”, encapsulates a lot of the circular thinking of deterrence.

That thinking always follows this path: the more severe a punishment, the more deserving of it a rational person will be for committing a crime that carries that punishment. If the punishment for jaywalking were squassation, only a truly irresponsible person would jaywalk, ergo that person would deserve squassation.

The weaknesses of this logic are, first, that people aren’t rational in avoiding punishment or in any other domain. Criminology, like economics, has undergone a reevaluation since its purely rationalist days (I want to say it started with Beccaria…), but this post-rationalism hasn’t permeated the collective consciousness. Just as people aren’t solely motivated by marginal dollars, they aren’t solely motivated by the sticks and carrots of criminal justice. Second, people tend to apply this philosophy of punishment unequally – for populations with which they have little empathy, they see only sticks and carrots, but for their own community, they want trust, norms, and the restitution of dignity to victim and transgressor.

Criminology (at least at the undergraduate level) divides rationales for punishment of criminal behavior into four categories: retribution, rehabilitation, incapacitation, and deterrence (specific and general). My feeling is that people feel highly retributive towards people with whom they feel little connection, and that this feeling is often recast for public discourse in deterrent terms – terms that are self-justifying.

As a bone to throw my remaining TEFL readership, let me say that I think this idea of self-justifying punishment has some importance in syllabus design as well – we could design a pop quiz on an assigned reading that is worth 30% of the final grade, and it would be justifiable in the same sense that public flogging for vandalism is.

NIDA identities in ESL essays and Love Wagon (あいのり)

The last day of class, instead of having the potluck that my students were probably hoping for, we did a very quick analysis of the book we had just finished reading (Farewell to Manzanar) using Gee’s NIDA identities.


To briefly summarize what those are:

N-identity (nature identity) is the part of identity which is supposed to come from nature. It often includes visible traits like gender and race and the palette of traits and abilities that are thought to stem from them. As the Rachel Dolezal controversy shows, what is N for some people is I or A (see below) for others, and people can be quite unforgiving when they think an N characteristic is being wrongly taken on or rejected. My students were astute in noticing that even N-identities change when the people around to perceive and interpret them change – the main character in Farewell to Manzanar has different N-identities when surrounded by other Japanese-Americans than when surrounded by other Americans.

I-identity (institutional identity) comes from institutions of which one is part. For example, my ability to pass as a teacher comes mostly from my employment by schools, and not many people would accept the legitimacy of a “teacher” identity without it. It can be fun to imagine which kinds of jobs require institutional recognition to be considered a legitimate claim to identity – to me, “artist” is not an I-identity, but “animator” is. “Philosopher” is not an I-identity, but “researcher” is. My students said many characters in FtM lost their I-identities (in most cases, fishermen who worked together) when they were forced to move into the camps.

D-identity (discursive identity) comes from interactions with other people wherein one comes to be known as a certain “type” of person. This tracks what most people call a “personality”, but unlike “personality” has no implication of permanence. That is, one can have different D-identities among different groups of people. The Papa character in FtM is a bit of a stereotypical alpha in the way he interacts with others, which shifts from comforting to ironic as his life circumstances change from independent businessman to unemployed drunk.

A-identity (affinity identity) is similar to I-identity in that it relates to larger social groups of which we consider ourselves part. Unlike I-identity, A-identity doesn’t require any kind of actual membership in a group, only affinity for it. One can have an A-identity as a Premier League fan without any formal affiliation in the form of membership in a team or fan club. Notably, and as some of my very clever writing students mentioned, A-identity can be almost entirely imaginary – Papa from FtM imagines himself to be the inheritor of a samurai legacy, although the samurai ceased to exist before he was born and were well on their way to being more a cultural trope than a social class by the time the story takes place. One student mentioned this aspect of A-identity in a presentation, which was a great example of critical thinking.

What I like about these categories of identity is that they make clear both that identity is a multifaceted and context-dependent phenomenon and that it depends on other people and society. That is, you can have multiple identities, and none of them are purely a result of you choosing the type of person you want to be after doing some deep thinking alone or “finding yourself”.

My students did a very good job applying these on short notice to a book they’d probably grown quite sick of on a day when many people were already mentally on vacation. What they said reminded me of some things I’d been seeing on Netflix recently.


Ancestry dot dot dot

Around junior high school, when I realized that “races” were a thing and I had one too, I started making my schoolwork Japan-themed wherever possible and ex nihilo informing my classmates that “taco”, in addition to being a receptacle for beef or chicken, meant “octopus” in Japanese.

(I wonder if the age at which you first realize your own race is a reliable shorthand for the stigmatization of the race of which you are a member…)

My classmates and teachers were nice enough not to call me out on this strange behavior. In fact, it probably would have been seen as improper if they had – after all, I was celebrating my heritage. I had Japanese ancestry, and that earned me the right to “rediscover my roots”, even in an awkward, teenage way.

(It’s funny how learning something new is framed as recovering it if you’re in a demographic thought to be born with that knowledge.)

Later, in high school, there was a club called Asian Cultural Enlightenment (ACE), which I somehow felt that I should join, although I never did. Several of my classmates in Japanese (the only Asian language elective) were members. I think I was putting a little bit of distance between me and Asian-ness, or simply taking advantage of the fact that as a stealth minority (i.e. capable of passing as white – many people assume my last name is Irish), I didn’t need to affirm any particular ethnic identity. I was fine with un-discovering my roots at this point.

Looking back, I wonder if the other members would have thought it was strange that someone with basically one toe in the pool of Asian identity would try to join an almost explicitly ethnically-based club. I also wonder how far back in my family tree I could have an Asian ancestor to legitimize an Asian identity if I had wanted to embrace one. If I merely shared with the other Asians the 99% of DNA that all humans share, would that not count as enough?

This journey down memory lane was spurred by yet another news story about cultural appropriation.


Random reflections on economics

For some time now I’ve been lucky enough to have a professor of economics as one of my private students, and helping this person put together presentations, papers, and whatnot has exposed me to a field of inquiry that is quite different from SLA.  It’s been refreshing and somewhat zen-like to see the extreme quantification of social forces and psychological phenomena and to hear the thoughts of people dedicated to that enterprise.  The following are some thoughts on what I’ve seen over the last year or so.

Quantification is not reductive

The stereotype is that economists view people’s loves and lives as “mere” numbers, which has earned economics as a field the nickname “the dismal science”.  I never got the feeling, though, that economists view quantification as taking away some quintessential human élan from the thousands or millions of people whose behavior they are analyzing.  To the contrary, it seems to be a common understanding of the field that numbers are just the only way to deal with data points that number in the millions; it would be impossible to describe something like a national gender wage gap qualitatively and still be fair to each individual.  It’s certainly not true that economists view that number as the inarguable conclusion of a research question; validity and how to test for it are problems that animate much of the literature (it seems). In short, quantification of human behavior is a necessary part of looking at data sets this large and doesn’t “reduce” people if you have an appropriately skeptical attitude toward what the numbers really mean.

Conservatives tend to place free will at the base of questions of economic justice

A basic assumption of the field which has come under question since the 1980s is that people, when presented with a field of choices, will choose correctly and consistently according to their mostly stable preferences.  It would be hard to find a bedrock principle more at odds with either modern psychology or any adult’s lived experience of other adults.

It follows from this ideology – that humans make rational choices based on stable preferences – that human choice is above reproach, that whatever people decide given a set of options is a priori proof of justice. Any attempt to “nudge” people into a better choice or to force certain choices will produce warped and economically unhealthy outcomes. If people seem to naturally separate themselves into different groups, it must reflect a natural, stable preference within those groups.  Such is the explanation often deployed to dismiss the gender pay gap as the result of women’s free will rather than any kind of injustice.

You see the basic logic at play here in many areas of public life – certain politicians seem to see no motivation for human behavior that is not economic, and the main or only purpose of government is to encourage (or at least not punish) good economic decisionmaking. When people, either individually or as a group, seem to display an affinity for factors other than income (e.g. family, conformity, culture, or community) when choosing a career, that choice is accounted for in their reduced income. The last thing the government should do when people make uneconomic choices is to reward them economically with nutritional assistance, hiring quotas, or tax credits.

Luckily, I am at a healthy remove from both the ideologies of free will and the prosperity gospel, and I therefore don’t think people’s choices (particularly economic choices) are self-justifying.

Glass ceilings vs. sticky floors

The glass ceiling is probably the most emblematic phenomenon from economics to make it into popular culture. Loosely defined, it is an income gap at the top of the income distribution. In practice, it is often interpreted as a man getting promoted to an upper management position over an equally hard-working woman, who unlike the man is expected to perform childcare and other domestic duties in addition to working full-time.

Of course, I don’t know many men or women in upper management of anything. I do know many men and women in jobs that pay by the hour, and many more who used to have those jobs.  Every week when I went shopping at my local MaxValu (Japanese chain supermarket), I would notice the people stocking the shelves, men and women, the cashiers, almost all women, and the mounted pictures of the store managers, all men. In any developed country, there are obviously many more people in jobs like these than in jobs like the ones in the last paragraph.  But for some reason, there isn’t a metaphor in common currency to describe the observed income gap at the bottom of the income distribution.

Where it is discussed, it is called a sticky floor.  As I understand it, in economics, it is simply a parallel phenomenon to the glass ceiling, but one that concerns vastly larger numbers of people. In my mind, discussions of glass ceilings sometimes have the false-consciousness character of waitstaff on their break debating whether a 39.6% tax on the top bracket is unfairly high. Yes, it matters that Sheryl Sandberg has few peers in the Forbes 500, but it matters more and to more people that men in the bottom 10% of incomes out-earn women in the same bracket (I would include a source here, but it would reveal the identity of my student).

Because all my posts now include mandatory COCA data: the phrase “glass ceiling” occurs 465 times in the corpus, vs. 20 for “sticky floor” (only 3 of which seemed to be about economics rather than literal sticky floors).

A salary scale in a company that isn’t growing

This will strike any of you who have formally learned economics before as shockingly ignorant, even if the rest of this post hasn’t. Basically, when things stop growing, it’s not as if they settle into a flat but stable equilibrium. Sometimes, growth makes the system stable.


This graph, drawn for me at least 2 weeks in a row by my student, shows the salary of a worker in the sort of company that hires people for life compared to that worker’s level of contribution to that company (y axes), over the career of that worker (x axis).  The salary is in blue and the level of contribution (I believe it was called “human capital”) is in green.  There are two periods where these lines are very far apart: at the beginning of the worker’s career, where he/she contributes far more than he/she takes in, and past mid-career, where he/she takes far more than he/she contributes. This graph was drawn for me mostly to explain the phenomenon of mandatory early (sometimes as low as 55) retirement ages, the rationale being that companies want to shorten the length of time that workers can draw more salary than they’re worth. It also helps explain why companies may want more and more recruits every year; it is these recruits who contribute the most to the company. As each cohort ages, larger and larger new cohorts are required to pay for the older cohorts’ increasingly opulent salaries.  This is a stable system as long as each cohort is larger than the last.

When the cohorts stop growing, it starts a chain of events that potentially results in the death of the company. First, without the contributions of new workers, the company can no longer afford the salaries of its older workers.  Older workers may take early retirement or salary reductions (and grouse mightily about today’s youth). New workers and potential recruits notice that the formerly guaranteed high late-career salary is no longer guaranteed and start to question the benefits of accepting such a low early-career salary. The company therefore has an even more difficult time finding large enough cohorts of new workers.
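Since the mechanism lends itself to a back-of-the-envelope model, here is a toy version of my student’s graph in code. Every number in it is invented for illustration: pay keeps climbing with tenure, contribution plateaus mid-career, and the company-wide books balance only while each incoming cohort is larger than the one before it.

```python
# Toy model of the seniority-pay system described above (all numbers invented).
def pay(tenure):
    return 1.0 + 0.15 * tenure            # salary keeps climbing with seniority

def contribution(tenure):
    return min(4.0, 2.0 + 0.10 * tenure)  # "human capital" plateaus around mid-career

def surplus_share(cohort_sizes, career_length=40):
    """Company-wide (contribution - pay) as a share of the total wage bill."""
    surplus = wage_bill = 0.0
    # The most recent cohort has tenure 0, the one before it tenure 1, and so on.
    for tenure, size in enumerate(reversed(cohort_sizes[-career_length:])):
        surplus += size * (contribution(tenure) - pay(tenure))
        wage_bill += size * pay(tenure)
    return surplus / wage_bill

growing = [100 * 1.05 ** year for year in range(60)]  # hiring grows 5% a year
flat = [100.0] * 60                                   # hiring stops growing

print(f"growing cohorts: {surplus_share(growing):+.1%}")  # roughly +4%
print(f"flat cohorts:    {surplus_share(flat):+.1%}")     # roughly -11%
```

With these made-up curves, 5% annual growth in hiring keeps the company in the black, while flat hiring puts it about 11% in the red – the chain of events in the previous paragraph in miniature.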

Call me naïve, but I hadn’t seen this clearly before, nor had I seen the implications for national pension systems. Now that I do, I am even more glad to be in ESL rather than working for Toshiba, and I definitely hope all my students have lots of kids who all pay their Social Security taxes.

Varieties of middle C culture

Where is the dividing line between “Culture”, the kind we are obliged to respect, and “culture”, the pattern of living that distinguishes communities? Is a kettle Big C Culture if you use it to brew Earl Grey tea served with scones? Is the sound of a Harley Davidson’s engine revving just a shared reference point in a few countries? What if the main character of a TV show syndicated worldwide rides one?

In an effort to tie together somewhat thematically different chapters on “Culture” in a reading book one of my classes is using, I’ve introduced the concepts of “little c” and “Big C” culture and had the students examine the situations outlined in the chapters through that lens. If the terms are new to you, this or this are decent explanations. It’s been interesting, particularly when we’ve had a Venn diagram on the whiteboard and the opportunity for students to put their own candidates for little c or Big C culture up for discussion – for example, students consider LGBT (for some reason, they didn’t want the Q) to be Big C because the term has become well-known and, to some, emblematic of first-world liberalism. Conversely, they consider karaoke to be little c culture because, in their minds, everyone has it and no one considers it to be the legacy of any particular country.

Needless to say (for anyone who’s lived in Japan), students’ opinions about karaoke surprised me quite a bit, as karaoke is regarded in Japan to be a clear example of Japanese culture succeeding and spreading around the world, alongside sushi and anime. This has raised the question in my mind as to whether the little c/Big C dichotomy needs to be amended with consideration for the fact that different cultures have not only different artifacts and practices, but different perceptions of the importance of those artifacts and practices. What is Big C in the country that produced it may not be understood as a national symbol elsewhere, and what is unremarked upon in a country may be considered a national emblem of it elsewhere.


(For the purposes of this discussion, I am flattening and homogenizing countries and cultures.  I recognize that no symbol is truly equally and universally shared in any political, ethnic, linguistic, or cultural group.)

Below the jump are my additions to the little c/Big C scheme.


Losing my mind

What follows is a long, student-unfriendly version of a 3-paragraph paper (not an essay) on a 30-day challenge that I did with an intermediate integrated skills class.  The paper has to have an academic paragraph on the time before, the time during, and the time after the challenge.  Originally, the paragraphs had to use the past tense, present tense, and future tense (with any aspect), but I haven’t followed that rule faithfully here.

Getting lost in hectic thought was the default mode of my mind before I started my 30-day challenge.  The challenge, which was to meditate 10 minutes a day for 30 days, came at a time when my mind was almost constantly in a state of emergency.  Every thought of grading, making new assignments, or updating a class vocabulary list was a red alert in a long line of red alerts.  I would be exhausted at the end of a day of classes, but unable to take a nap without thoughts of all the papers I had to grade rushing in and beating back my attempts at rest.  As a result, I was often in a sour mood and was inclined to greet any attempt at contact from colleagues or students as yet another demand on the limited resources of my attention.  When I had a minute, or just a desperate need to pretend that I did, I spent it with value-free distractions (the App Store specializes in them), afraid to glance back at the wave of paperwork threatening to crash over me from behind.

Since I started meditating, I haven’t ceased being distracted, but I have been better able to incorporate distraction into my workflow, i.e. to be mindful of distraction.  In the interior of my mind, thoughts of work have begun to appear less like photobombing tourists in the lens of my attention, and more like part of the shot.  I have become better able to take a long view of my own time and attention and to refuse to devote my full mental resources to every problem, incomplete task, or request that jumps into frame.  What is called “mindfulness” is key to this.  While I meditate, thoughts still appear, and I still think them, but I am aware of the process, and that awareness prevents me from identifying with them completely.  I become something of an observer of my own mental life.  I see how this could be described as being “mindful”, as it does in a sense feel like an additional layer of abstraction has been placed between my stream of consciousness and the thoughts that usually occupy it, but in a sense more important to me, something is also taken away.  That thing is the formerly irresistible urge to load that thought into the chamber of my executive-function pistol and start manically squeezing the trigger.  It is also the need to build a spider’s web around each thought, connected to all my other thoughts, and claim it irrevocably as mine.  In these senses I believe “mindlessness” is just as good a term as “mindfulness” for what occurs in and as a result of meditation.  In any case, disassociation from my thoughts, most of which are proverbial red circles with white numbers in them, has helped me to control the way that I react (or not) to them.

This brief experiment with meditation has given me a good deal of perspective to take with me into future semesters.  I can now see the regular rhythm of the waves of classwork as something other than a renewed threat.  Now, they seem more like tides, dangerous if unplanned for but predictable in their rises and falls.  Importantly, I also see the high water mark and know that as long as I keep my mind somewhere dry, it will recede without doing much damage.  In the future, as long as I refrain from doing something crazy like teaching 20 units, I think I will be able to maintain calm with the help of this perspective.  Also, in a more specific sense, I will be better able to resist the call to distract myself from my work.  I can recognize the formerly irresistible need to latch onto an interesting task, and this recognition enables me to prevent YouTube or WordPress (except for right now) from hijacking monotonous tasks like grading or… well, mostly grading.  Next semester and into the future, I will feel less threatened and better able to deal with inbound masses of schoolwork.