Tuesday, January 13, 2009

Merit Pay for College Teaching?

As the Chronicle reported a few days ago and InsideHigherEd reported today, Texas A&M has proposed giving professors bonuses of up to $10,000 based on student evaluations. Predictably--and in my mind, appropriately--many people have raised serious objections to this. Student evaluations aren't necessarily reliable measures of student learning, some studies indicate that they're biased toward professors with easy grading policies, etc., etc. All fair points. As Clint Magill, the A&M Faculty Senate speaker, said to Scott Jaschik, “Any evaluation of teaching that doesn’t include some measure of learning has some real problems.”

BUT -- what seems missing from the discussion is the logical next step: If we agree that there's a need to create better incentives for high-quality teaching in higher education, and we agree that the best measures of high-quality teaching are based not on subjective student evaluations but on objective measures of how much students learn, then why not give professors a $10,000 bonus based on objective measures of how much their students learn? Learning is measurable, after all. Not completely, and not to the same extent in all subjects, but you'd have a hard time convincing me that there's no way to arrive at an accurate estimate of how much a group of 300 students learned over the course of a semester in, say, Introductory Physics. And a substantial percentage of all the courses taught in higher education are similar to Introductory Physics in that they're based on a well-established body of knowledge that is testable and doesn't vary tremendously from course section to course section, or even from campus to campus.

If we gave colleges and educators more incentives to improve the quality of teaching and the larger educational environment, maybe we'd read more stories like this one, in today's New York Times, describing how the physics department at M.I.T. has:

...replaced the traditional large introductory lecture with smaller classes that emphasize hands-on, interactive, collaborative learning. Last fall, after years of experimentation and debate and resistance from students, who initially petitioned against it, the department made the change permanent. Already, attendance is up and the failure rate has dropped by more than 50 percent.

M.I.T. is not alone. Other universities are changing their ways, among them Rensselaer Polytechnic Institute, North Carolina State University, the University of Maryland, the University of Colorado at Boulder and Harvard. In these institutions, physicists have been pioneering teaching methods drawn from research showing that most students learn fundamental concepts more successfully, and are better able to apply them, through interactive, collaborative, student-centered learning.

The only real problem with these two paragraphs is the word "pioneering"--it's not as if educators are only just now discovering that students learn more in an interactive, collaborative, student-centered environment. That's been known for a long, long time. Yet large lecture classes and other inhospitable learning environments have been allowed to persist--and are still in wide use today--because there are few, if any, incentives to change them. Which is not to say that professors can change them alone--a superior educational environment for students requires a shared commitment from, and collaboration between, the faculty and the institution. But while the implementation of the Texas A&M plan is obviously problematic, the underlying goal is sound.

Update: Speaking from experience, Ezra Klein weighs in.

1 comment:

Jason Paul Becker said...

I actually think that MIT's OpenCourseWare is an even greater incentive for better teaching. Put the course materials and professors online for the world to see. People tend to perform their best when they're performing for a larger, public audience that is judging them.

You take autonomy away from professors when you use so-called "objective learning measures" for courses that are dynamic from semester to semester and pitched at such a high level of content that few people, other than the professor teaching the course, really have the expertise to evaluate them. That's one of the major differences between higher ed and K-12. Sure, this kind of process is somewhat easier to apply to a highly standardized first-year course, but higher-level courses could never be measured this way.