The first camp is far less willing to measure results in any systematic way. That position makes some sense: if you believe demography is destiny, no mathematical formula, however complex or inclusive, can account for all the factors that go into schooling.
Let's leave the math and the standardized achievement measures aside for a bit. Surely there are other ways to measure a school's, or even an individual teacher's, value? The first crowd says no. Inside Higher Ed published a piece today questioning learning assessments in colleges. The author, Bernard Fryshman, argues that because colleges educate many different kinds of students in many fields, we cannot encapsulate a school's contribution to learning in a single number:
Do you want to know whether the school will help a student learn to think, to examine, or to innovate? And of course every one of those talents may differ depending on the discipline. Do you care about what’s happening in the fine arts department or in engineering? And even in engineering, is it civil engineering or software development? Different talents, different intellectual demands, different skills.

But wait, we didn’t ask you yet about the student you’re interested in helping. Is he bright and driven, or laid back and not particularly ambitious? Was his high school a place that turned him on to learning or to text messaging? Does he need remedial coursework or is his transcript full of AP credits? Does your daughter stand out or is she happy sitting at the back of a large lecture hall? Will she grow under pressure or shrivel up and leave? Does your child want competition or collaboration?

The problem with Fryshman's argument, and the first camp's in general, is that we really have no good alternatives to assess student learning. In the higher ed world, colleges and universities have successfully kept new data sources from the public (see the recently passed Higher Education Act). For colleges, the only data we really have are graduation rates, and those mask wide differences. Some schools have small or nonexistent gaps between black and white graduation rates. Others, like my alma mater, a large public Big Ten school, have wide discrepancies.
But what about other sources of information? As Fryshman says, engineering students are different from those studying fine arts. That's a given, but incomplete. How involved are students with campus life? How many papers or projects are they asked to complete? Are they able to find jobs after graduation? Are those jobs in their field of study? If they graduated from a public school, do they stay in-state after graduation? Do they earn salaries worthy of their credentials? Are their employers satisfied with their skills? Do they go on to pursue, and succeed in, further education? Are they involved in civic life through voting or volunteering? Most important, how do the answers to these questions stack up against those of the school's peers?
There's more than one way to take a measurement, but instead of vigorously pursuing these other avenues, schools have mostly been reluctant to release data proving their merit. So we're left arguing, as Fryshman does, about standardized tests. But policymakers no longer accept accountability by assurance; they want to see results.