We recently had to go through our annual process of assigning "merit" to the faculty members of our department.
In the category of teaching, I got second to last in our department.
This should, really, be interpreted as last as one guy refuses to even fill out the form.
So, last it is.
This is a tedious process in which we all fill out a form that details all of our research, teaching, and service activities for the previous year. The research and service parts are simple, for different reasons. For research, you get a high score if you've published an article (preferably one that went through double-blind peer review at a high-ranking journal in the field, as opposed to a book chapter). For service, we all basically do too much, so the default is usually the maximum score. For unusual service (meaning something that we don't see regularly), you need to document what you have done and how it has benefited the department, college, university, community, or profession.
Documenting our teaching efforts is an altogether different ballgame. How, for example, does one demonstrate effective teaching? There is no analogous peer-reviewed process that one goes through to demonstrate techniques, efforts, attempts, new ideas, effectiveness, etc. At our university, each professor has students fill out evaluation forms that differ across departments. While such information can be useful for identifying trends, on its own it is highly problematic, and an extensive literature has grown up around the strengths and weaknesses of using such measures to evaluate teaching effectiveness.
Our department, thanks to two colleagues' efforts in particular, has evolved toward a tripartite system of teaching evaluation: student review, peer review, and self-review.
This is great in theory.
The biggest positive consequence of this method is that it gets us away from relying on a single method of evaluation, which usually gets boiled down to some relatively capricious, one-dimensional number gathered from the student evaluations.
The problem is that the relatively small committee that assigns merit (which, by the way, translates into salary increases that cannot justify the amount of time four Ph.D.s spend reviewing the files each year, let alone the effort required to fill out the paperwork) assigns different weights to each of the three teaching categories. So, for example, one individual might wholly believe in the ability of students to correctly evaluate their professor's effectiveness and, therefore, place 80% of his/her weight on that measure.
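The capriciousness is easy to see with made-up numbers. Here is a minimal sketch (every score and weight below is hypothetical, not from our actual files) showing how two committee members' weightings can flip the ranking of the same two files:

```python
# Hypothetical teaching scores (0-100) on the three measures for two professors.
scores = {
    "Prof. A": {"student": 90, "peer": 60, "self": 60},
    "Prof. B": {"student": 60, "peer": 90, "self": 90},
}

def merit(prof_scores, weights):
    """Weighted average of the three teaching measures."""
    return sum(weights[k] * prof_scores[k] for k in weights)

# Reviewer 1 puts 80% of the weight on student evaluations...
reviewer1 = {"student": 0.8, "peer": 0.1, "self": 0.1}
# ...while reviewer 2 weights the three measures equally.
reviewer2 = {"student": 1 / 3, "peer": 1 / 3, "self": 1 / 3}

for name, w in [("Reviewer 1", reviewer1), ("Reviewer 2", reviewer2)]:
    ranked = sorted(scores, key=lambda p: merit(scores[p], w), reverse=True)
    print(name, "ranks:", ranked)
```

Same files, opposite rankings: the 80% reviewer puts Prof. A on top, the equal-weights reviewer puts Prof. B on top.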
At the other extreme, one's assessment of oneself can be overly weighted by a member of the committee. An individual could spend an inordinate amount of time and effort gathering and interpreting data on assessing his/her teaching and its effect on students. This is, mind you, on top of regular assignments and exams that are meant to evaluate student learning. Furthermore, the merit process not only encourages professors to do this during the year, it also encourages extensive documentation during the merit process.
I have always been suspicious of this sort of activity. It reeks of Shakespeare's lady protesting too much. But now, after reading the book Blink: The Power of Thinking Without Thinking by Malcolm Gladwell, I have evidence that not only is such documentation over the top, it may not even contribute to teaching effectiveness.
According to Gladwell, "[a]llowing people to operate without having to explain themselves constantly . . . enables rapid cognition."
And paraphrasing from Blink:
Here is a simple example of this. Picture in your mind the face of the waitress or waiter who served you the last time you ate at a restaurant. Or the person who sat next to you on the bus today. Any stranger whom you've seen recently will do.
Now, could you recognize this person in a police lineup? I suspect you could. Recognizing someone's face is a classic example of unconscious cognition. We don't have to think about it. Faces just pop into our minds.
But suppose I were to ask you to take a pen and paper and write down in as much detail as you can what the person looks like. Describe her face. What color was her hair? What was she wearing? Was she wearing any jewelry? You won't believe this, but after attempting to describe the face, you'll have difficulty doing the same task you did before: recognizing that person in a police lineup.
This is because the act of describing the face has the effect of impairing your otherwise effortless ability to subsequently recognize that face.
The psychologist Jonathan W. Schooler, who pioneered research on this effect, calls it "verbal overshadowing." (See also this paper.) Your brain has a part, the left hemisphere, that thinks in words, and a part, the right hemisphere, that thinks in pictures. When you described the face in words, your actual visual memory was displaced.
Thinking bumped your memory from the right to the left hemisphere. When it came time to recognize the face, you relied on your memory of what you said you saw, not what you actually saw.
The problem is that with faces, we're far better at visual recognition than verbal description.
If I showed you a picture of Marilyn Monroe or Albert Einstein, you'd recognize it instantly. But could you accurately describe the picture? If you wrote a paragraph describing how Marilyn Monroe or Albert Einstein looked, would I even know what you were talking about?
We all have an instinctive memory for faces but by forcing you to verbalize that memory, to explain yourself, I separate you from those instincts.
Schooler has shown that the implications of verbal overshadowing carry over to how we solve broader problems. For example, look at the following "insight puzzle": A giant inverted steel pyramid is perfectly balanced on its point. Any movement of the pyramid will topple it over. Under the pyramid's tip is a $1000 bill. How do you remove the bill without disturbing the pyramid?
This is an example of an insight puzzle. The only way to get the answer is if it comes to you suddenly, in the blink of an eye.
One study found that people who were asked to explain themselves while finding solutions to other insight puzzles solved 30% fewer problems than people who were not asked to explain themselves.
In short, when you write down your thoughts, your chances of having the flash of insight you need to come up with a solution are significantly impaired, just as the act of describing the waitress's face made you unable to pick her out of a police lineup.
In the case of logic problems, explaining yourself does not have the same effect. Explaining yourself may even help you solve the problem, or others like it.
Problems that require a flash of insight operate by different rules. "It's the same kind of paralysis through analysis you find in sports contexts," Schooler said. "When you start becoming reflective about the process, it undermines the ability. You lose the flow. There are certain kinds of fluid, intuitive, nonverbal kinds of experience that are vulnerable to this process." As human beings, we are capable of extraordinary leaps of insight and instinct. We can hold a face in memory; we can solve a puzzle in the blink of an eye. All these abilities are incredibly fragile. Insight is not a light bulb that goes off in our heads. It's a flickering candle that can easily be snuffed out.
For me, teaching is an activity that requires this "flash of insight." Last year I was involved in a project on campus called the "Lesson Study Project" or LSP for short. In the project a group of us chose one particular idea that seemed to be problematic for our introductory students. We then attempted to dissect the lesson to find out what, exactly, we wanted students to glean from it and then we, as a group, created a pedagogical strategy to get at this problem in the lesson. The LSP requirements included assessment, so we collected pre and post tests from the students in treatment and control groups to see if our unique intervention had any real learning consequences.
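The comparison our pre/post assessment called for amounts to a simple difference-in-differences on the test scores. A minimal sketch in Python; every number below is made up for illustration, and the real analysis would use the actual LSP data (and a proper significance test):

```python
from statistics import mean

# Hypothetical pre- and post-test scores (percent correct) for students in
# the section that got the new lesson (treatment) and a section that did not
# (control). These are placeholder values, not the LSP results.
treatment_pre  = [52, 48, 60, 55, 50]
treatment_post = [70, 66, 75, 72, 68]
control_pre    = [51, 49, 58, 56, 52]
control_post   = [58, 55, 63, 60, 57]

# Difference-in-differences: the gain in the treated section minus the gain
# in the control section isolates the effect of the intervention, netting
# out whatever students would have learned anyway.
gain_treat = mean(treatment_post) - mean(treatment_pre)
gain_ctrl  = mean(control_post) - mean(control_pre)
effect = gain_treat - gain_ctrl

print(f"treatment gain: {gain_treat:.1f} points")
print(f"control gain:   {gain_ctrl:.1f} points")
print(f"estimated effect of the new lesson: {effect:.1f} points")
```

The point of the control group is the subtraction in the last step: without it, ordinary semester-long learning would be indistinguishable from the effect of the intervention.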
Last year we chose a lesson on international trade. A recent article (see William Poole's speech here) claimed that the notion of free trade was problematic because it simultaneously garnered nearly full acceptance among professional economists (a rarity among followers of the dismal science) and less than majority support among the general public. We suspected that students, while appreciative of the increased consumption benefits associated with trade, were indifferent to its production-efficiency aspects, and even anti-trade once they realized that their dad may have lost his job to outsourcing.
Using a combination of a class experiment and case studies, we aimed to get students to see the grey areas: to recognize that such a policy could generate winners and losers, and that there were short-run and long-run effects to be discerned.
Part of the process is to have yourself videotaped and to have another LSP member sit in on your class and take notes on the pedagogical strategy. After seeing my class, all three of the other LSP members offered many very helpful constructive comments and were impressed by the questions I posed to the class and the instinctive changes I made to augment the trade experiment.
Their first suggestion along these lines? Write it down! Document it! Be more formal about it!
In all honesty, I believe that were I to actually write some of those comments down and attempt to replicate the same situation in next semester's class, the spark would die a slow death. With a script, I'd be lost in the classroom, and I'm sure the students would be bored to tears. It would be Taylorism for teachers.
It is one thing to encourage professors to try new things and to see whether the new things worked. It's another to sit down and write about it. Document it. One of the main problems we had in last year's LSP is that the instruments we used to measure the effects of the technique were relatively anachronistic. How can you measure the learning consequences of a new technique using old questions? Now we're down a road that I am simply unqualified to drive: developing not only new pedagogical techniques, but also corresponding assessment tools. And let's just say that I refuse to grade interpretive dance.
That is not to say that all documentation is bad documentation. I have journaled after classes and found it to be helpful. But the most successful classroom experiences I have had are when I trust myself, my knowledge, my experience, and my intuition.
Of course, this is scary for a university. There are some teachers who can't be trusted, I suppose. And one way to get at them is to have them document their classrooms like obsessed archivists, recording each and every change and measuring its effects.
The problem, identified I think correctly by Gladwell, is twofold. First, just because you measure something doesn't mean it's accurate, or even that it was measured correctly. Second, the act of measuring and documenting itself snuffs out the flashes of instinct, which is the last thing you want to happen in the classroom.
So, when it comes to taking the time to fill out the merit form next year and document all of the tweaks I made to my classes this year, I will hopefully have the guts to do what I believe would better serve my students: read the newspaper instead.