Friday, March 06, 2009

Bubble Trouble

If you haven't yet read the discussion room chat on Bill Tucker's Beyond the Bubble, it's well worth your time. I found this paragraph most striking:

There is an implication in [the] question that classroom assessments are rich, performance-based tasks compared with the low-level multiple-choice tests administered by the state. It appears that Monty hasn't been in schools lately to listen to the hum of scanning machines scoring these "wonderful" classroom assessments. When I was the director of assessment in Wyoming, we included extended performance tasks on the state assessment (the first state assessment under IASA) that shocked the field. Why was it a shock? Because teachers, by their own admission, had not moved to the depth of knowledge called for in our performance tasks. Therefore, I would argue that state tests can serve as a model of what we want to see in the classroom.
This is a really interesting idea, and one that often gets overlooked. While many people deride the use of bubble tests, we tend to ignore the fact that the exact same score sheets are used to evaluate classroom learning all the time. As someone who hasn't been out of the classroom that long, I can attest that very good teachers, teaching high-level classes, employ Scantrons, too. They aren't used only for the statewide exams mandated by NCLB; teachers at all levels choose them for their own formative assessments. Partly this is a matter of convenience, and partly it reflects limited resources. Scantron machines return quick, decisive results, and newer versions even format the results into a gradebook. Individual teachers simply do not have the time or resources to investigate more promising assessments. It will be up to districts, states, or other partnerships to develop the types of assessments that move us beyond the bubble.

1 comment:

Anonymous said...

The quote you cite came from the participant who most impressed me, but I don't think that paragraph was representative of his thoughts. He spoke most effectively against the bubble "morass." He observed that performance testing failed in the 90s because, among other reasons, the tests were high stakes.

Yes, Scantrons CAN be consistent with engaging instruction. But how often are they?