Tuesday, July 28, 2009

What the Princeton Review Rankings Miss

Before diving into my first post, I just wanted to say hello. My name is Ben Miller and I just joined Education Sector, where I will be working on undergraduate education issues. I come here by way of the New America Foundation, where I spent the past two years working on public policy related to college access, quality, and affordability. Feel free to contact me at bmiller [at] educationsector [dot] org.

The best postsecondary classroom experience in the country is not at any Ivy League school. In fact, according to a set of rankings released yesterday, the best collegiate classroom experience is not even on the East Coast—it is thousands of miles away on the Claremont, Calif., campus of Pomona College.

The classroom experience ranking is just one of 62 different “top” lists released yesterday by the Princeton Review. Using student surveys, it purports to tell consumers just how a selection of 371 colleges stack up against each other on a host of topics both serious (accessible professors, high levels of class discussion) and more inane (students most likely to be “dodge ball targets” or “jocks”).

But while the rankings can tell you everything about a school down to its quality of fire safety, none of them say anything about the actual quality of the education students receive. Look at Pomona’s classroom experience ranking, which is based on:

“…student assessment of professors' teaching abilities and recognition in their fields, the integration of new business trends and practices in the curricula, and the intellectual level of classmates' contributions in course discussions.”

Now, Pomona certainly seems like it provides a good education, and the factors it is judged by fit well with what we think a quality academic experience looks like. But that's as far as these data can go. There's nothing in them that actually shows that students with small classes taught by engaged and accessible professors have better academic outcomes than those without. Moreover, even if these factors do contribute to a quality education, there's no data to show which elements have the greatest or least effect on student learning.

The fact is, the Princeton Review does not even attempt to rank quality of education and student learning because there is no source it could turn to for this information. Schools provide minimal data, if any, on their student learning outcomes, and the federal government does not collect any of this information either.

The need for better data on student learning should be apparent just by looking at Reed College. That small private college had the second-best classroom experience, according to the Princeton Review. But federal graduation rate indicators suggest that maybe Reed's education leaves something to be desired. According to a recent report co-authored by ES’s Kevin Carey (PDF), just 76 percent of Reed’s first-time, full-time students graduated within six years of enrolling. That’s the lowest graduation rate among the most selective schools in the country.

Now maybe Reed's students do leave school having learned a great deal. Or maybe they do not. But without better information it's impossible to know definitively one way or the other. Either way, it is clear that just equating positive classroom experience with academic quality is not sufficient.

Instead of tackling the quality issue, the Princeton Review focuses on “Best Value” schools. Unfortunately, this metric suffers from many of the same problems as the other rankings. It factors in admissions statistics and acceptance rates, which capture the quality of students coming in, but it lacks information on how those students progress by the time they graduate. Beyond that, it’s a return to the same data on small classes and professor accessibility used elsewhere, only with the cost of tuition and amount of grant and scholarship aid taken into account.

Looking at these factors together seems less like a measure of value than a list of schools that are somewhat hard to get into but still have low net costs. It says nothing about whether schools help students progress academically and graduate on time. That would be the best measure of value given the high costs that come from failing to graduate or leaving school unprepared for the workforce.

It’s easy to understand why parents and students like publications like the Princeton Review’s rankings. They lay things out clearly and help simplify the decision-making process. But without taking into account more meaningful data on college completion and academic success (data schools need to do a better job providing), these guides will remain little more than a rank exercise.

8 comments:

Anonymous said...

Nice to meet you Mr. Miller. I have read this essay of yours, and I would like to grade it a perfect "o" - for its circularity.

I suppose it's convenient to criticize Reed for its graduation rate, given the fact that you seem to define a good school as one which has students who "graduate on time" prepared for the workforce.

You see, Mr. Miller, given the high proportion of Reed students who go on to receive terminal degrees in their fields, even if Reed achieved a 100% graduation rate - "on time" - many, many Reedies would be a couple of graduate degrees short of being prepared for their chosen work, often in academia or advanced research.

If you want to get an idea of whether Reed is a superlative school, why not have a simple roundtable discussion on a topic, and see how the workforce-ready, high graduation-rate kids stack up against Reedies. I'll bet some of the wagering types at Reed would even stack their "dropouts" against the industrious conformists rushing headlong toward the "workforce."

I think you'll get your "definitive answer" quickly and without much need for idle number crunching.

Ben Miller said...

To Anonymous,

First, thanks for reading. The issue that you are bringing up about knowing that Reed is a good school is a point that I'm trying to make in the post. Intuitively, we know that Reed is a good school. Students seem to really like it, its graduates do well, and it gets high marks in rankings. But if I were to remove the name Reed College and obscure its reputation, how would you prove that it is a good institution? You certainly could achieve that with a roundtable discussion, or you could do it with some other metric--growth in student learning, for example.

But what if instead of talking about Reed we were discussing a school without a well-known reputation, say the National College of Natural Medicine? According to the College Navigator it's 2.8 miles away from Reed's campus, but it's not even in the same circle in the higher ed world. Now again, we could measure the quality of that school with a roundtable or some other metric, but the issue is we do not do so now.

And that really is the overall concern. Not whether the education provided at a well-regarded school is worth it, but how successful are those other 7,028 schools that are not in the top 100 in the country? Those schools educate the vast majority of students and are almost always ignored in any rankings scheme. And without that information it becomes very hard for students who aren't as academically successful--those who cannot get into a Reed College--to pick an institution that will give them a quality education. Moreover, it also makes it impossible for us to identify those diamonds in the rough: schools that provide an excellent education but fly so far below the radar that we never hear about them.

And that's really my concern. Not to impugn Reed, but to suggest that we need more information about those schools that we hear so little about.

Anonymous said...

Mr. Miller: I did pick up on your point that rankings as currently carried out are misleading inasmuch as they imply that they are more meaningful than they are.

I think where we differ is that you seem to believe that it might be possible to create a useful objective metric of an institution if only there were enough data collected and it were the right kind of data.

I disagree.

I would argue that the "success" of a college education is, at the end of the day, subjective in nature. Has the school broadened and sharpened the student's thinking? Will the student have a more enriched and fulfilled life, as defined by the student? Is the student a better person for having attended? Does the student leave with new paradigms for approaching problems, and can the student think interdisciplinarily? These are questions which defy meaningful expression in statistics.

Your point that the vast majority of colleges are a bit of a "pig in a poke" is well-taken. Perhaps the group of students who would elect, or are limited, to a choice between those schools could benefit from some better objective metrics.

I concede that any college degree is better than no college degree. I concede that one should receive something of value for one's education dollar. I concede that some students see higher education as simply a stepping stone to a better paying job, to be endured but never embraced.

I hope that you would concede that there are students who are learning for the sake of learning and those who seek excellence because excellence in one area can translate to the recognition and attainment of excellence in other areas.

In the end, I would contend that it is the culture of an institution of higher learning which is the paramount issue when considering which school is "best" for an individual student. I believe that objective metrics can be useful in providing insight into that culture, but can never define that culture.

I do understand that the educational establishment increasingly views higher education in vocational terms rather than educational terms. The rising costs (far above the rate of inflation) of higher education cry out for some justification, and like a car salesman promising that the high price of the vehicle must be viewed in terms of the savings afforded by the higher fuel economy, the educational establishment seeks objective metrics which might make the case for the expense.

I don't buy it.

Before earning my undergraduate degree, I amassed undergraduate credits from no fewer than 5 institutions, mostly from Reed (surprise, surprise) - and I can assure you that no objective "metric" describes the actual experience or value (or lack thereof) to me of any of those schools.

So, Mr. Miller, I ask you: Is it fair to say that the very nature of your argument assumes a class-based educational paradigm, where for some students graduation rates and workforce readiness are acceptable measures of "student outcomes" and more elite institutions will always be beyond the grasp of such crude objective metrics and will thus continue to be somewhat mysterious and unknowable to "those that [sic] cannot get into a Reed College?"

Or is it fair to say that you believe that although the numbers in the current rankings system are flawed, that more and better numbers can make definitive distinctions between Reed College and "Joe's House O'Learnin' and Rib Shack?"

I'd want to meet "Joe" and taste his ribs before I decided whether I might want to attend his "House O'Learnin."

I'm not sure what "metrics" could ever exist that would allow me to make a decision on the quality of his BBQ sauce absent a taste. Especially not when we're talking about a $200k meal.

Anonymous said...

Mr. Miller: Perhaps this article can more eloquently explain the fundamental divide between the concepts of "education" and "metrics." Please enjoy: http://www.reed.edu/reed_magazine/winter2009/features/defending_the_citadel/7.html

Anonymous said...

http://www.reed.edu/reed_magazine/winter2009/features/defending_the_citadel/7.html

Anonymous said...

It won't post the full link, but if you want to find it, you can go to page 7 of a Reed Magazine, Winter 2009 article entitled "Defending the Citadel" - which should be easily enough found using a search engine of your choosing.

Ben Miller said...

Hi Anonymous,

I'll read the article and respond to that in a second, but there's a lot in your previous comment, so let me get to that first.

First, on the class-based component. I really do not think that is the case at all. I think any college, regardless of its student population, would benefit from being more upfront about the quality of the education it provides. This is not mutually exclusive from the idea of learning for learning's sake. I 100% absolutely believe that one can attend college solely for the sake of learning and still come out fully prepared for the workplace. Why is that? Because the skills that one picks up from a liberal arts degree--how to think through problems, dissect a text, write and present concepts in a clear and engaging manner, and so on--translate perfectly to the workplace. Whether I picked those up studying Plato or by reading about 19th-century pirates in the Caribbean (both classes I took), the skills I gleaned ended up being a large component of my ability to find a job. Similar arguments can be made about the scientific method and other skills one would pick up in a hard sciences course.

Now let's discuss what you bring up, the idea of learning. How does a student actually know that they are learning? I'm sure that they could go to some professors and tell them "I just wanted to let you know that I just mastered this concept," but I would guess that most professors would dismiss that and then assign either a test or a paper or some other form of assessment. Sure, those things are testing for content knowledge, but they also require students to demonstrate skills and concepts--the things that I am referring to as being mutually beneficial both for workforce and academic success.

Now let me ask you another question: Do you believe the mission of a college/university is at least partially to provide its students with the best education that it can? Or at least to assist students as much as it can as they gain that education? If so, then doesn't it logically follow that the responsibility is on the college to find ways to continuously prove that education? To identify classes where students are not succeeding and to make changes that help them learn more? If that's the case, then the school itself needs some metric to measure that--whether it's a test, a paper, or a portfolio of work--something that is able to convey students' mastery of concepts so that an instructor can gauge their progress.

This is getting quite long and I'm sure that I've missed some things, but I'll just close with this. Deciding on a college or university involves a huge information asymmetry. The school has a vague sense of how good it really is (I bet that most lousy for-profits know that they aren't good), but the student doesn't really know. The marketing materials won't tell them the truth--they are designed to persuade the individual to attend. Students could do lots of site visits, take a class here or there, etc., but frankly, that's probably an expensive endeavor and a luxury many students, especially those from lower income levels, just cannot afford. In that light, I think that anything that can be done to reduce that information gap is a net gain.

Anonymous said...

We're on different planets here.

I had to laugh when you asked "How does a student actually know that they [sic] are learning?" and then proceeded to talk about a professor's method of evaluating a student.

Obviously, a student knows that he or she is learning when new facts or methods of analysis have been acquired, and the student notices that. What do grades or professors have to do with knowing if one has learned something?

You go on to ask, seriously, I am assuming, if a college's responsibility to teach somehow logically leads to some responsibility to prove it has taught. Huh? Obviously, the student already knows if he or she has learned something. Wouldn't the student be able to know if the college had somehow contributed to that learning? So to whom does the college owe a duty to "prove that education?" - whatever that means.

Marketing materials are designed to secure applications, not attendance. More applications allow colleges to create a "metric" - admission rates - that makes them appear more exclusive.

I never argued that students didn't acquire skills by learning for the sake of learning. This is a complete straw-man argument on your part. What I was arguing was exactly the opposite, namely, that those who sought learning for the sake of learning would indeed acquire skills, while those who sought a diploma and a high graduation rate would in fact be less likely to emerge with a comparable set of skills. They are the ones who have learned to do "just enough." Is that what you think of our workforce?

I am having trouble following you. Just as when in your first response to me you pointed out to me that your point was that the rankings of colleges leave a lot to be desired and then cited college rankings to support the "intuitive" understanding that Reed was a quality place, I am finding the logic hard to follow.

You double back on yourself yet again when in attempting to refute my proposition that objective metrics are unable to meaningfully describe a college, you suggest that colleges should be "more transparent" by releasing data.

I don't think we're getting very far here, because as I stated previously, you argue from an unproven and biased set of assumptions. It is as if you are arguing from the point of view that if only Shakespeare had used more adjectives and metaphors in his love sonnets, we would be better able to gauge the quantity and nature of the love his character felt.

Silly stuff.

You even assume that college tests are testing for content knowledge. I once had a very wise professor who liked to give hybrid economics exams. He would pose a question, offer three answers, and then ask for a half-page explanation of the chosen answer. Each answer was right, from a certain perspective. The test was graded on the ability to justify the answer. Of course, he never told this to the students - we figured it out on our own. He was testing the ability to think like an economist.

In the spirit of Jeff Foxworthy, let me propose this: "You might not belong in college if you need a bunch of numbers to tell you about what college is."

I feel, Mr. Miller, that you are caught hopelessly inside a box of your own making, and most tragically, that you can not see the box or that you are in it.

Maybe, though, you could design a metric that would allow us to measure the extent to which people succumb to this form of thinking, and a scale upon which to interpret it.

Thanks for your time and attention, Mr. Miller, and good luck with helping the poor to select between bottom tier schools while not really helping anyone understand how good an education can be or describing to people the difference between a good education and a great education, unless of course there is an "objective metric" which can convey that, which there is most definitely not.

I am betting you never attended Reed. No objective metric necessary. ;)