Friday, October 26, 2007

Cloud-y Thinking

Via This Week in Education, Time's John Cloud says:
Harvard professor Martin Feldstein used to tell students in his introductory economics class that economists agree on 99% of the issues in the field. From the nature of monopolies to the basic laws of inflation, Feldstein asserted, economists of all political stripes are in accord on the same principles. He claimed that what we read about in the popular press are the 1% of economic issues where the data support no clear-cut conclusion.

I'm pretty sure Feldstein was exaggerating the 99-1 split in economics, but I have often thought that education research shows precisely the opposite ratio of agreement to disagreement. Education experts seem to concur on almost nothing. Research in the field is so politicized and contradictory that you can find almost any study to support your view. If economics is a 99-1 science, education is a 1-99 circus.

Cloud's characterization of education research is exaggerated and, frankly, kind of obnoxious. Education is more politicized than I'd like, but I don't see how that makes it different from other fields. Alas, what a shame that education research doesn't enjoy the pristinely empirical, de-politicized, consensus-rich environment that characterizes debates over tax policy, entitlement reform, and other issues studied by economists like Martin Feldstein.

Most of Cloud's piece is about the relative efficacy of public schools vs. private schools. Conventional wisdom, along with a fair amount of research, has it that private schools are marginally better. But then the Center on Education Policy comes out with a study that suggests otherwise. Aha! says Cloud. Apparently, if education research were a "science" rather than a "circus," there would be no such disagreement. Moreover, CEP is allegedly an "advocacy group for public schools" (they're not), so they can't be trusted.

In reality, the tendency of education researchers to draw the differing conclusions that so frustrate Cloud stems from the fact that the things education researchers compare are often, objectively speaking, not that different from one another. When you aggregate lots of private schools together and compare them to lots of public schools, the two populations are pretty similar. The same is true when comparing public schools to charter schools, certified teachers to non-certified teachers, fourth graders nationwide in 2006 to fourth graders nationwide in 2007, etc. When actual variance is small, legitimate differences in methodology, population, etc. can put one study on the plus side of zero and the other on the minus--not because one researcher is lying for political reasons and one isn't, but because they're both trying to quantify effects that are objectively de minimis.
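To make that concrete, here's a toy simulation (mine, not anything from Cloud or CEP): assume the true private-school advantage is a hair above zero, and let two hypothetical studies differ only in sampling noise and a small, defensible methodological adjustment. Every number here--the effect size, the sample sizes, the adjustment_bias knob--is made up purely for illustration.

# Illustrative sketch only: two hypothetical studies estimating a tiny true
# private-vs-public difference can land on opposite sides of zero because of
# sampling noise plus slightly different (but legitimate) methodology.
import random

random.seed(1)

TRUE_EFFECT = 0.02    # assumed tiny true difference, in test-score SD units
NOISE_SD = 1.0        # student-level variability
N_PER_GROUP = 2000    # students sampled per sector in each study

def run_study(adjustment_bias):
    """Simulate one study; adjustment_bias stands in for defensible
    methodological choices (controls, weighting) that shift the estimate."""
    private = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PER_GROUP)]
    public = [random.gauss(0.0, NOISE_SD) for _ in range(N_PER_GROUP)]
    estimate = sum(private) / N_PER_GROUP - sum(public) / N_PER_GROUP
    return estimate + adjustment_bias

study_a = run_study(adjustment_bias=+0.03)  # one reasonable set of controls
study_b = run_study(adjustment_bias=-0.03)  # another reasonable set
print(f"Study A estimate: {study_a:+.3f}")
print(f"Study B estimate: {study_b:+.3f}")
# With a true effect this small, the two estimates frequently straddle zero,
# even though neither simulated "researcher" did anything dishonest.

Run it a few times with different seeds and the headline ("private schools help!" vs. "private schools don't!") flips back and forth, which is the whole point.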

Apparently, this frustrates journalists who crave certainty and simple answers.

Note: Erratic blogging over the next week from me as I'm leaving tomorrow morning for the "3rd Meeting of the International [College] Rankings Expert Group" in Shanghai. I missed the first two and I hear the after-parties are insane. Assuming various electronic connections and conversions work as planned, I'll be posting pictures and random observations.
