Thursday, August 07, 2008

What We Do Cannot Be Measured

Two main camps square off in education over and over again. One side, typified by the Broader, Bolder coalition in K-12, emphasizes student demographics. They point out, justifiably, that students enter school with widely divergent skills and expectations. As such, schools can only do so much to rectify the situation. The other, reformist side, says demographics be damned, every child deserves a quality education. Children and schools should all be held to high standards, and we can account for differences on the back end.

The first camp is far less willing to measure results in any systematic way. This makes some sense, too: if you believe demography is destiny, then no mathematical formula, no matter how complex or inclusive, can account for all the factors that go into schooling.

Let's leave the math and the standardized achievement measures alone for a bit. Surely there are other ways to measure a school's, or even an individual teacher's, value? The first crowd says no. Inside Higher Ed published a piece today questioning learning assessments in colleges. The author, Bernard Fryshman, argues that, because colleges educate many different types of students in many fields, we cannot encapsulate a school's contribution to learning in a number:
Do you want to know whether the school will help a student learn to think, to examine, or to innovate? And of course every one of those talents may differ depending on the discipline. Do you care about what’s happening in the fine arts department or in engineering? And even in engineering, is it civil engineering or software development? Different talents, different intellectual demands, different skills.

But wait, we didn’t ask you yet about the student you’re interested in helping. Is he bright and driven, or laid back and not particularly ambitious? Was his high school a place that turned him on to learning or to text messaging? Does he need remedial coursework or is his transcript full of AP credits? Does your daughter stand out or is she happy sitting at the back of a large lecture hall? Will she grow under pressure or shrivel up and leave? Does your child want competition or collaboration?
The problem with Fryshman's argument, and with the first camp's in general, is that we really have no good alternatives for assessing student learning. In the higher ed world, colleges and universities have successfully kept new data sources from the public (see the recently passed Higher Education Act). For colleges, the only data we really have are graduation rates, and those mask wide differences. Some schools have small or nonexistent gaps between black and white graduation rates. Some, like my alma mater, a large public Big Ten school, have wide discrepancies.

But what about other sources of information? As Fryshman says, engineering students are different from those studying fine arts. That's a given, but incomplete. How involved are they with campus life? How many papers or projects are they asked to complete? And are they able to find jobs after graduation? Are those jobs in their field of degree? If they graduated from a public school, do they stay in-state after graduation? Do they earn salaries worthy of their credentials? Are their employers satisfied with their skills? Do they go on to pursue, and succeed in, further education? Are they involved in civic life through voting or volunteering? Most importantly, how do the answers to these questions stack up against those of the school's peers?

There's more than one way to take a measurement, but instead of vigorously pursuing other avenues, schools are mostly reluctant to release data that could prove their merit. We're left arguing, as Fryshman does, about standardized tests. But policymakers no longer accept accountability by assurance; they want to see results.

5 comments:

Anonymous said...

Speaking of accountability in Higher Ed, how would you grade the following paragraph?

"The first camp is far less willing to measure results in any systematic way. It makes some sense too. If you believe demography is destiny, no mathematical formula, no matter how complex or inclusive, can address all the factors that go into schooling."

It is full of factual errors and logical fallacies, and it violates the fundamental spirit of collegiality that we in education seek to foster.

I think you would find that the signers of the Broader Bolder Challenge have a far greater record of empirical analysis. In fact, how many signers have ever had a peer reviewed publication?

Listen to Bill Clinton's speech before the Harlem Children's Zone and I think you would get a glimpse of some of the quantitative assessments that would be used for the early education part of the proposal. Check out the great matrices from the National Board to see the assessments for the training and development of educators. In fact, abandon the attempt to devise some all-encompassing test-driven matrix, and a whole host of very professional, very solid assessments become practical. Limit ourselves to rifle-shot accountability, and the potential becomes unlimited very quickly.

You are like an accountant who wants to upgrade your profession: you hear that the Legal Bar's test is better, so you use that test to license accountants. You would not allow that sort of absurdity to be held over your head. But against an overwhelming body of evidence, you still don't want to concede a point.

Or to mix metaphors, think of those basketball players who will never respect a call by others while calling every little touch foul. Then ask why educators don't trust you and your numbers to evaluate their schools.

In fact, since you were writing about evidence, where was yours? Any evidence for the paragraph I just cited? Any at all?

john thompson

AldeBeer said...

John,

I'm not sure you read past the paragraph you cite. Being anti-test doesn't get us very far if we don't have other measures with which to assess performance. And, if you agree that public schools merit some form of public scrutiny, the lack of data is a real problem. Frankly, all of the metrics I mention are probably better tools for measuring collegiate performance than a single test, but we don't have access to that data.

Fryshman's extreme argument about the human element (one that you've made here before) ignores both our statistical and our inherent abilities to balance outside characteristics. It says, as the title of this post suggests, "what we do cannot be measured."

Anonymous said...

Hey, if you want to retract the criticism of the Broader, Bolder Challenge, we're cool.

john thompson


Unknown said...

First, I don't think the Bolder crowd is saying demography IS destiny; I think they're acknowledging that demography plays a role, and in some cases a rather significant role. Second, I don't think they're "anti-test" but rather critical of the role and power of the current testing regime - which is basically using a single assessment to determine a school's worth. Something that virtually every psychometrician will tell you is irresponsible.

Lack of data is a problem. But using data from one test is even worse - and doesn't take into account anything else going on in the school (or college).

Furthermore, I think your logic gets a little tripped up in that you're using an editorial piece about measurement in higher ed to beat up on a K-12 reform initiative. They may both fall under the category of "fruit" but we're definitely talking apples and oranges here.

Unknown said...

Also, in the interests of disclosure, maybe you should actually name the "other, reformist side" you're referring to. Could it, perchance, be the Education Equality Project?