Monday, June 09, 2008

Free Advice

Eduwonkette writes:


In this month's issue of Educational Evaluation and Policy Analysis, a new study by UT-Austin professor Julian Vasquez-Heilig and Linda Darling-Hammond, "Accountability Texas-Style: The Progress and Learning of Urban Minority Students in a High-Stakes Testing Context," revisits the Houston miracle by analyzing years of student-level test score and graduation data (1995-2002). There's no version up on the web yet, but here are some key findings: Growth in scores on the TAAS exam outpaced growth in scores on the Stanford exam. This appears to be prima facie evidence of test score inflation.

I've always been puzzled by this line of reasoning. NCLB was designed to give states a free hand to set academic standards and adopt tests as they see fit. Schools are then put under considerable pressure to help students achieve those standards as measured by those tests. And the evidence is incontrovertible that student achievement as measured by state NCLB tests has increased substantially. Evidence from other tests, by contrast, is much less clear. NAEP scores have increased, but not nearly at the same rate as state tests. Ditto for tests like the Stanford exam.

But that's to be expected, isn't it? If the entire state system is purposefully geared toward teaching a certain set of standards, wouldn't we expect more improvement there than on a test that measures some other standards? Wouldn't test scores logically go up on the test that matters, instead of the one that doesn't? I can see how some kind of wild divergence would seem suspicious--e.g., a 50% increase on the state test while SAT-10 scores plummet--but shouldn't our baseline expectation be greater progress on the significant tests, the ones aligned to curricula? To say such divergence is prima facie evidence of test score inflation seems strange.

Maybe Vasquez-Heilig and Darling-Hammond explain all of this, but I can't tell, since "there's no version up on the Web yet." And given that Eduwonkette, in her day job, is not exactly a disinterested observer of what she blogs about, I'm not going to take her word for it. Which brings me to my second point, regarding the research/policy divide.

This chasm of understanding, in which evidence from research fails to permeate the policymaking process, is constantly lamented and discussed, particularly at mass convocations like AERA. I don't understand why. Not because the gap doesn't exist, but because the solution strikes me as obvious. Here, free of charge, I present my secret methods for bridging it, based on time spent on both sides of the divide:

1) Write well. As everyone knows, research written in academese is often hard to read. Some of this can be solved by applying universal principles of good writing, which can be found in various books and guides and thus won't be rehashed at length here, other than to say: writing is a request to be a guest in someone else's consciousness. So be polite, and don't overstay your welcome. Write clearly and briefly, keeping in mind what the reader wants and needs, not what you want and need.

Beyond the above, there are also some important issues of structure to understand. Academic writing tends to be structured something like this:

1) Lit Review
2) Methods
3) Results
4) Conclusions (Possibly--"More research is needed" does not count as a conclusion.)

This is a terrible way to communicate with policymakers. They don't care about the lit review and the methods. You're the expert; they trust that you know what you're talking about and conducted the analysis correctly. (Maybe they shouldn't, but they do.) They're interested in context, results and conclusions. By that I mean: What did you find, and why does it matter? To present this information, you should write like this:

1) Context: why is this issue important?
2) Results: what did you find?
3) Conclusions: what do these results mean? (These should flow logically from the context.)

Put the methods in an appendix, and only include material from the lit review if it helps establish context or supports your conclusions. I understand that this means writing differently than is appropriate for peer-reviewed academic research. That's fine; different audience and purpose, different format. To repeat: write for the reader, who wants the good stuff at the beginning, not the end. In fact, all he or she wants is the good stuff. So give them that, and nothing else.

2) Let people know. The world is an extremely busy place, and nobody is waiting for your next great idea or my next great idea with bated breath. So figure out who needs to know what you've discovered and call them on the phone. Or send them an email, or write them a letter. Reach out--you'd be surprised how often this works. As long as your message is short and cogent, they'll probably appreciate it and ask for more.

3) Don't make things harder than they already are. Even if you follow steps one and two, getting traction in the policy sphere is still difficult. Politics, limited attention spans, competitors grasping for that same sliver of mindshare--it's not easy. So keep in mind some realities of the 21st century, such as: if people can't find it on the Internet within two minutes, for free, it doesn't exist. Don't make more work for busy people, and don't try to sell them something that other people are giving away.

Vasquez-Heilig and Darling-Hammond got Step 2 right--getting your paper mentioned in a well-read blog like Eduwonkette is smart. But then--ack!--the paper isn't available. That cuts the potential Eduwonkette-driven audience by 90%. Most policymakers are not going to check back every day waiting for the free Internet version to show up, nor are they going to buy a subscription to Educational Evaluation and Policy Analysis or pay some kind of absurdly expensive per-article fee on a Web site. They'll just move on to other things.
