Friday, July 31, 2009

Pork Projects in the House

This week, the U.S. Senate Appropriations Committee followed on the heels of the House of Representatives by passing its version of a bill that sets funding levels for Department of Education programs in the 2010 fiscal year. Included within this legislation are the first figures for earmarks given to specific colleges and universities through the Fund for the Improvement of Postsecondary Education, or FIPSE. While we wait to see just which pet projects the upper chamber would fund, let's take a look at some of the House's more questionable spending priorities.

First, a quick FIPSE refresher. Originally intended as a way to fund innovative reform projects through a competitive grant process, FIPSE instead became the main vehicle for Congressmen to direct a few hundred thousand dollars toward their favorite colleges and universities each year. The pork pursuit has grown so much that twice in the last five years there were no competitive grant funds left over once the pet projects got their money.

This year, the House requested a total of $68.2 million in earmark funding, significantly less than the $91.2 million Congress provided last year, though the final number is likely to increase once the appropriations bill gets to the conference committee. Either way, it's still nearly double the $34.8 million the House provided for competitive grants.

That said, here are a few of the earmarks in this year's House bill that raised eyebrows based on their descriptions. (For anyone who wants to play along, I've scanned the relevant pages and put them in a PDF here.)

  • Livingstone College, $300,000 for the school's Center for Holistic Learning "to provide academic and student support services, which may include equipment and technology."
  • University of Virginia's College at Wise, $150,000 to install a voice over Internet protocol system (basically what Skype does) and "demonstration activities through its Emerging Technologies Learning Center."
  • Evergreen State College, $325,000 for its Bioregion initiative, which "aims to better prepare undergraduates, as well as ourselves, to live in a world where the complex issues of environmental quality, environmental justice, and sustainability are paramount."
  • Niagara Community College, $100,000 to buy equipment and technology for its hospitality and tourism training programs.
  • Metropolitan State College, $200,000 for an aviation training program (at least it's accredited).
  • Oklahoma State University, $450,000 for a wildlife management technician program, including buying equipment.
  • University of Massachusetts, Boston, $12.6 million for the Edward Kennedy Institute for the Senate, including supporting an endowment. This award is 12.6 times greater than any other FIPSE grant listed.
Now, picking on these institutions is not entirely fair, because they actually bothered to provide some information about their awards. More common were vague grants to schools like the State University of New York, Geneseo ($500,000 "for purchase of equipment") or Rockford College ($250,000 for "technology upgrades"). If those schools get their awards, we won't have any idea what the money actually funds until the online FIPSE database updates. Once it does, we'll find out whether that money is going to meaningful reform or to studying what affects the quality of wine. With so little information available, how could any Congressman claim to make a meaningful judgment about whether these initiatives merit federal dollars?

Regardless of the quality of the proposals, these FIPSE earmarks subvert the program's original intent as one of the few federal funding streams that encourages institutions to innovate by competing for awards. And so long as Congressmen willingly fund pork over reform, this program will continue to be little more than an annual wasted opportunity.

Why Teach for America and The New Teacher Project Exist

If you stop and think about it, Teach for America (TFA) and The New Teacher Project (TNTP) are well-functioning, non-profit, national human resource departments for schools. They recruit, screen, and hire candidates, all functions of a traditional HR department. TFA and TNTP do provide a lot more induction and support for their hires, but at the base level their purpose is to find and recommend potential teachers. Of course, school districts have their own human resource departments as well, so it's worth asking why these programs were needed in the first place.

If you look at the data on the teacher hiring process (some of the best of which has been put together by TNTP itself), what you see is that districts just aren't very good at it. They're slow, which causes them to lose out on better candidates. They don't recruit all that well, which means they have fewer candidates to choose from. And they tend to privilege more experienced teachers throughout the process, which, fair or not, limits their ability to attract young and motivated applicants.

Take, for example, the city of Philadelphia, which employs about 10,000 teachers in its 274 schools. Assuming a 9 percent teacher turnover rate (the national average; it's much higher in urban and low-income areas), the city needs to hire at least 900 new teachers every year. The graph below, from the National Council on Teacher Quality, shows how many applicants the district has gotten over the last six years. Simple division suggests that Philadelphia public schools are getting a little more than three applicants for every open position.
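For anyone who wants to check the arithmetic, here's a toy sketch in Python. The applicant total is my assumption, chosen to be consistent with the "little more than three per opening" figure; the actual counts are in the NCTQ graph.

```python
# Toy numbers: the workforce and turnover figures are from the post; the
# applicant total is an assumption consistent with "a little more than
# three applicants per opening," not a figure from the NCTQ data.
teachers = 10_000
turnover_rate = 0.09            # national average; higher in urban districts
applicants = 2_800              # assumed annual applicant pool

openings = teachers * turnover_rate
print(f"{openings:.0f} openings per year")                    # 900
print(f"{applicants / openings:.1f} applicants per opening")  # ~3.1
```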

Compare that to the competition for spots in TFA or TNTP programs. Only one out of nine TFA applicants gets hired, and New York City's Teaching Fellows program, run by TNTP, had 14,000 applicants for 700 spots (or 20 applicants per position).

These numbers matter. At the most basic level, they mean districts have more and better options for who they want in front of their classrooms. And there is a symbolic impact for those who are selected: they know the position is coveted, that if they do not care to be there, there are other people who do.

To understand...

...the satisfaction of seeing My Bloody Valentine live, imagine you're a masochist, standing next to a jet engine, getting beaten about the head with chunks of frozen whiskey by the four most serene Irish people in the world.

Thursday, July 30, 2009

Charter Schools and Unions—One Size Fits All?

Unionization of charter schools seems to be the hot topic these days. A recent NYT article raises the critical question:

“…whether unions will strengthen the charter movement by stabilizing its young, often transient teaching force, or weaken it by preventing administrators from firing ineffective teachers and imposing changes they say help raise achievement, like an extended school year.”

For unions to organize charter schools without weakening them, charter school faculty need to be able to create their own collective bargaining agreements (like Green Dot's) that align with the educational philosophy of the school and its staff. My fear is this: you will have public school union leaders, who don't reflect the actual teaching population in charter schools, advocating for and bargaining on behalf of charter school teachers. That model wouldn't work for the charter schools or their teachers. Most teachers in charter schools have chosen their particular school because they buy into the way that school functions, and they are willing to do the extra stuff (longer hours, tutoring, etc.) because they see that it works, or believe that it can work. For the unionization of charter schools to be successful, it needs to let each school implement innovative reform strategies and let teachers choose both unionization and work in schools operating under different educational models.

This raises the question of why traditional public schools don't unionize in this way. Currently, many traditional schools are part of a union that negotiates collective bargaining agreements for a large number of schools that vary in many ways, from mission to resources. And these district-wide unions often do not reflect the viewpoints of many reform-minded educators. If more traditional public schools would step away from the one-size-fits-all union structure, you might be surprised at how much teachers would be willing to engage in discussions around reform. Why can't we have both teacher empowerment and progressive education reform? That's the ideal. Maybe the unionization of charter schools can shed light on ways unions in traditional public schools can remain relevant in current education reform debates.

--posted by Marilyn Hylton

Not All Higher Education Spending is Created Equal

At least that’s the conclusion reached in a new working paper from the Cornell Higher Education Research Institute. First mentioned in Inside Higher Ed, the paper takes advantage of data from the Delta Cost Project to study the relationship between certain types of higher education spending and student achievement.

Specifically, the researchers looked at four different categories of spending:
  • Instruction
  • Student services, such as supplemental instruction, on-campus organizations, and other “activities that contribute to students’ emotional and physical well-being”
  • Academic support services, which includes spending on libraries, curriculum development, and other items that “support the instruction, research and public service missions of the university”
  • Research
Expenditure figures were calculated per full-time equivalent (FTE) student. Federal graduation rates, meanwhile, were the researchers’ barometer for student success.
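To make the setup concrete, here's a minimal sketch, on made-up numbers rather than the Delta Cost Project data, of the kind of institution-level regression the paper runs: graduation rate against per-FTE spending in each of the four categories.

```python
import numpy as np

# Made-up data for illustration only. Rows are institutions; columns are
# per-FTE spending (in $1,000s) on instruction, student services,
# academic support, and research.
rng = np.random.default_rng(0)
spending = rng.uniform(1, 20, size=(200, 4))
noise = rng.normal(0, 5, size=200)
# Invented "true" effects, just to show the mechanics: positive for
# instruction and student services, negative for research.
grad_rate = 30 + spending @ np.array([1.5, 2.0, 0.2, -0.8]) + noise

X = np.column_stack([np.ones(len(spending)), spending])  # add intercept
coefs, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
for name, b in zip(["intercept", "instruction", "student services",
                    "academic support", "research"], coefs):
    print(f"{name:>16}: {b:+.2f}")
```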

Overall, the researchers found that increasing per-student spending on either instruction or student services led to statistically significant gains in an institution’s graduation rate. Increased spending per student on research, meanwhile, had the opposite effect.

Intuitively, this makes sense: spending money on things that deal directly with students improves their academic success, while expenditures on research or other areas that could draw professors away from students do not.

But it’s the second part of the paper’s findings that is really noteworthy. According to the data, spending $500 more per student on student services leads to a larger increase in graduation rates than an equivalent spending increase on instruction. This outcome is even more pronounced at schools with low average SAT scores, high numbers of Pell Grant recipients, or low graduation rates. These findings even held when treating spending as a zero-sum game. The researchers found that graduation rates still rose if an increase in student support spending was offset by an equivalent decrease in instructional expenditures.

These findings have important public policy ramifications and should be good news for schools with strapped budgets. They suggest that once a certain level of instructional spending is reached, schools may be better off directing dollars toward supplemental assistance rather than just plunking more cash down on professors or adjuncts. For schools with monetary problems, this matters. It means they could spend money on cheaper alternatives to instructors, such as tutors or (after the upfront investment) a computer lab, saving money and boosting student success in the process.

The redirection of spending from instruction to student support is similar to the model used by the National Center for Academic Transformation (NCAT), which has helped redesign over 100 courses using a combination of technology and better planning. For example, Virginia Tech worked with NCAT to completely revamp its linear algebra classes, replacing expensive lecture sections with a math emporium, where students can go at any time to work through materials on a computer, with tutors often on hand to provide assistance. The model not only reduced per-student course costs from $91 to $21 (an annual savings of $140,000) but also improved student learning in most areas. Similar results occurred at the University of New Mexico, which redesigned a psychology course that had disproportionately negative outcomes for minority students.
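A quick sanity check on those numbers, using only the figures cited above, shows the scale a course needs for this to pay off:

```python
# Figures from the NCAT/Virginia Tech example above.
cost_before = 91        # per-student course cost before redesign ($)
cost_after = 21         # per-student cost with the math emporium ($)
annual_savings = 140_000

implied_enrollment = annual_savings / (cost_before - cost_after)
print(f"Implied annual enrollment: {implied_enrollment:,.0f} students")  # 2,000
```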

As the NCAT model attests and the working paper confirms, shifting spending from instruction to student support can have real benefits for those enrolled. At schools that do not have many existing student supports, this could also provide a route to cost savings at no sacrifice to academic learning. (The paper looks at four-year institutional data, but community colleges appear to be good candidates for similar spending changes, given their limited resources and their similarity to the high-Pell, low-graduation-rate schools described in the paper.)

While these findings are important, there are some real limitations to the available data. Student support is a very large and all-encompassing category that includes everything from tutoring to the always-maligned climbing wall. Without greater disaggregation it will be impossible to know exactly what factors make the greatest contribution to increased student success. But based upon the findings from NCAT, it’s a decent guess that the success comes more from the emporiums than from the fake rock facades.

Wednesday, July 29, 2009

Shoddy Academic Study Denounces Media for Non-Citation of Shoddy Academic Studies

A couple of days ago, I received an email from the teachers union-funded "Great Lakes Center for Education Research and Practice" touting a new study written by Holly Yettick of the University of Colorado at Boulder, allegedly uncovering rampant pro-think tank bias in the mainstream media. As the policy director of a think tank, I was naturally interested--we're always looking for new ideas when it comes to prosecuting our nefarious media-manipulation plans. Alas, I was disappointed. In an analysis of 864 articles published in the New York Times, Washington Post, and Education Week, the author found that:

Although university and government sources were cited more often, a higher percentage of reports produced by advocacy-oriented think tanks were cited by both types of publications. Universities produce 14 to 16 times more research than think tanks, but the three publications only mentioned their studies twice as often as think tank reports. As a result, any given think tank report was substantially more likely to be cited than any given study studies [sic] produced by a university.


That's not a bad way of counting press hits, although I probably would have added the AP, Wall Street Journal, and USA Today. (Note also the K-12 bias -- Ed Week but no Chronicle or InsideHigherEd). It's the denominator that really throws these numbers out of whack. Presumably, nearly every one of the think tank studies in question was written with the hope of garnering some media coverage. The universe of academic studies, by contrast, was calculated in two ways: the total number of papers accepted at the 2008 meeting of AERA (8,064), and the total number of articles published in 2007 in 176 peer-reviewed journals (7,172).

Now, maybe third-rate journalism is at the root of the Washington Post's failure to provide A1 coverage to articles like "Still jumping on the balance beam: continued use of perceptual motor programs in Australian schools," from the April 2007 edition of the Australian Journal of Education, one of the peer-reviewed journals in question. Ditto "Contributions and challenges to vocational psychology from other disciplines: examples from narrative and narratology," from the International Journal for Educational and Vocational Guidance. And maybe Ed Week needs to take a long, hard look at its standards and practices after failing to cover "Complicating Swedish Feminist Pedagogy" and "Complexity Theories at the Intersection of Hermeneutics and Phenomenology" from the 2008 AERA meeting.

Then again, maybe not.

The study also alleges a conservative bias in news coverage, as evidenced by the fact that newspapers tend to cite studies from notorious right-wing outfits like...Education Sector, where I work. Without going into the political and professional histories of our staff at length, let me assure you that this view is completely absurd. If we're on the "right" side of the spectrum and "centrist-libertarian," why is the Cato Institute always insisting I'm wrong?

What accounts for the relatively high think tank batting average? In announcing the paper, the Great Lakes center said, "Yettick indicates that this is likely due, at least in part, to the skill and resources think tanks devote to publicity for their reports, using sophisticated marketing campaigns targeting journalists, policy makers and the public for which university professors generally lack the resources and motivation to do."

You hear this a lot. Well, I've worked at three of the think tanks covered in the report--the Center on Budget and Policy Priorities and The Education Trust are the other two--so I have a pretty good sense of how they operate. And I probably shouldn't be revealing the sophisticated marketing secrets that allow us to crowd out allegedly more-rigorous university-based research with our "ideologically driven" work. But what the heck. Here's my secret recipe:

1) Before a report is released, send an email to editors and reporters at publications where you'd like it to be covered. Describe the findings, briefly, and explain why they might make a good story.

2) Give them a copy of the report, for free.

3) Include your email address and phone number, in case they have any questions. Check your messages. If they email or call back and say "I'm on deadline for five o'clock," respond before five o'clock.

4) Be succinct. Don't, for example, write "It is, in fact, true that advocacy-oriented think tanks rarely have their research peer reviewed and repeatedly have been found to engage in research practices that cast suspicion on the validity of the findings..." If something is, in fact, true, then it's true. Moreover, as a reader, my assumption is that you're not deliberately lying to me. If you say it, I assume you believe it's true. So the sentence should begin "Advocacy-oriented think tanks..." and go from there. These things matter! See Strunk and White for further advice. 

Also, proofread. "study studies"?

In the end I think the marketplace of ideas is quite a bit more efficient than Yettick believes. Reporters aren't all idiots and think tanks don't succeed through P.R. witchcraft. If the media isn't covering your research, it's probably not my fault. 

About CBO's Alternative Student Loan Cost Estimate

Last week, the U.S. House of Representatives’ Committee on Education and Labor voted 30-17 to pass a bill (PDF) that increases the Pell Grant and establishes new programs to help community colleges, increase college completion rates, and improve early childhood learning.

These initiatives would be paid for by eliminating the subsidies currently given to private lenders to offer loans to students through the Federal Family Education Loan (FFEL) Program. The loans are nearly identical to ones offered by the government through the Direct Loan Program, and the lenders making them receive federal insurance covering 97 percent of any losses they sustain when a loan defaults.

Not surprisingly, the loan companies aren’t thrilled with this proposal and are fighting back.

The latest salvo over this proposal concerns how much the Congressional Budget Office (CBO) thinks the government would save by eliminating FFEL. Last week, CBO released a cost estimate (PDF) that put these savings at $86.8 billion over the next 10 fiscal years from having the Department of Education issue all federal student loans. Yesterday, the same organization sent a letter to Sen. Judd Gregg (R-N.H.) saying a different calculation would yield savings of $47 billion over 10 years—a difference of roughly $4 billion a year.

Certainly, savings of $47 billion over 10 years should not be dismissed outright. That figure is actually greater than the savings estimated by the Office of Management and Budget when President Obama first proposed the end of FFEL in February (PDF, Page 23). Nearly $5 billion a year could fund a lot of interesting and creative programs for access and success.

But that’s assuming the alternative CBO estimate isn’t another iteration of a somewhat misleading budgeting tactic lenders have used in the past to try to make the FFEL Program seem cheaper than Direct Loans.

This tactic is known as accounting for market risk, and it attempts to measure the costs of the two student loan programs by treating them as if they were products offered, without a government guarantee, on the private market. (My former colleague Jason Delisle wrote a paper on this subject last October, which can be found here.)

Market risk matters in this case because federal student loans are accounted for on a net present value basis—a process that calculates the current cost of making a loan by discounting its future cash flows. Operating on the principle that $1 in the future is worth less than $1 today, the net present value method estimates the total cost of a loan at the time it is disbursed.

The net present value method places a great deal of importance on the discount rate—a number that reflects the value of a future dollar versus one today. If it is easy to obtain money for the loan and it is likely to be repaid, then the discount rate is likely to be low. But if there is a high default risk or money is hard to borrow, then future payments become less valuable, resulting in a higher discount rate.

The discount rate CBO currently uses does not, however, reflect any of these factors. Instead, CBO is legally required to use a discount rate equal to the yields on Treasury securities—a number so low it is essentially a risk-free rate.

If future borrower payments are treated as risk-free, then their present value remains high and the net present cost of a direct loan seems to be fairly small. It may even appear to have a negative cost, meaning it seems to make money for the government.

FFEL loans, however, look more expensive under a risk-free discount rate. This is because CBO’s estimates of FFEL loans only include subsidies, fees, and default payments between lenders and the government. (Loan disbursement and borrower payments are not measured because they occur only between the lender and borrower.) Since all of these government expenses occur in the future, a low discount rate means they have a high present value. The sum of these future costs to the government thus drives up the net cost of a FFEL loan.
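To see how the choice of discount rate drives both results at once, here's a stylized sketch with invented cash flows (the real CBO models are far more detailed). The direct loan disburses $10,000 today and collects repayments; the FFEL loan's government cash flows are only the future subsidy and default payments.

```python
def present_value(cash_flows, rate):
    """Discount annual cash flows (year 1, 2, ...) back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))

# Invented cash flows, in dollars, for illustration only.
repayments = [1_200] * 10    # borrower payments on a $10,000 direct loan
ffel_outlays = [150] * 10    # government subsidy/default payments to a lender

for rate in (0.03, 0.08):    # Treasury-like rate vs. market-like rate
    direct_cost = 10_000 - present_value(repayments, rate)
    ffel_cost = present_value(ffel_outlays, rate)
    print(f"rate {rate:.0%}: direct loan cost ${direct_cost:+,.0f}, "
          f"FFEL cost ${ffel_cost:,.0f}")
```

At the low rate, the direct loan shows a negative cost (it appears to make money) while FFEL's future outlays look their largest; raise the rate and both move in the opposite direction, which is exactly the asymmetry described here.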

For years, lenders have claimed that using a risk-free discount rate is unfair, since student loans are in fact risky investments. (They even paid a previous head of CBO to put out a paper [PDF] arguing in favor of using market costs.) Instead of the risk-free Treasury rate, lenders argue that CBO should use a higher discount rate that reflects the financing and default risk the private market would assign to student loans.

Asking for market-based rates in cost estimates makes sense, but simply increasing the discount rate merely reverses the problems with the current system. Direct loans cost more under a higher, market-based discount rate because future borrower payments are worth less in present dollars. For FFEL loans the effect is the opposite, as future government subsidy and default payments appear cheaper. (Just as $1 paid to you is worth less in the future than in the present, so too is $1 you owe.)

Simply substituting the risk-free Treasury rate for a higher market cost rate thus presents the Direct Loan Program in a more costly (and realistic) light without affording the same treatment to FFEL loans.

It’s important to have a good sense of the savings from reforms to the student loan programs, but it is disappointing to see discussions of how best to spend savings on student access and success get caught up in additional political wrangling.

Tuesday, July 28, 2009

Golden Parachutes

The Wall Street Journal had an interesting article recently about pension spiking, a practice where workers use the calculation of their pension benefits to their advantage:
Pete Nowicki had been making $186,000 shortly before he retired in January as chief for a fire department shared by the municipalities of Orinda and Moraga in Northern California. Three days before Mr. Nowicki announced he was hanging up his hat, department trustees agreed to increase his salary largely by enabling him to sell unused vacation days and holidays. That helped boost his annual pension to $241,000.
This is entirely legal, but it amounts to real money spent on the part of state and municipal retirement systems:
Mr. Nowicki recently turned 51 years old. If he lives another 25 years, his pension payments will cost the fire district an estimated additional $1 million or more over what he would have received had he retired at a salary of $186,000, not including cost of living adjustments, a fire board representative said.
And, even though Mr. Nowicki is "retired," he is still employed by the fire department as a consultant earning $176,000 a year.

Teachers are able to take advantage of these provisions as well, sometimes in informal ways like selling back vacation or sick days, and sometimes more subtly, through their district-negotiated contracts. In my recent study of large urban district salary schedules, I discovered an interesting example from the state of Florida. There, teachers are awarded retirement benefits based on their salary over their last five years on the job. So a district that opts to pay its teachers high salaries for those five years pays only slightly more, and only for those five years. But the state will be paying higher retirement benefits, based on those higher salaries, for the rest of each teacher's life.

Broward County, Fla., for instance, gives teachers an average raise of only $320 during their first 10 years on the job, but back-loads $20,000 in raises into the short period between years 18 and 21, packing almost 40 percent of all experience-based compensation into these three late-career years, a stage when teachers are unlikely to gain effectiveness in a commensurate way. This trend has accelerated over the last decade: teachers crossing the threshold from 20 to 21 years of experience in Broward County have netted average raises of 16.1 percent, compared with 5.2 and 6.5 percent raises, respectively, for teachers crossing the 18- to 19-year and 19- to 20-year thresholds.
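Here's a stylized illustration of the cost shift. The pension multiplier and retirement length are my assumptions (Florida's formula is roughly 1.6 percent of final-average salary per year of service for regular-class employees, but the exact figures vary), not numbers from the study:

```python
# Stylized example; the multiplier, salaries, and years in retirement
# are assumptions for illustration, not figures from the study.
multiplier = 0.016          # assumed pension accrual per year of service
years_of_service = 30
years_retired = 25

def lifetime_pension(final_avg_salary):
    return multiplier * years_of_service * final_avg_salary * years_retired

base_salary = 60_000        # hypothetical salary without late-career spike
spiked_salary = 80_000      # after ~$20,000 in late-career raises

district_extra = (spiked_salary - base_salary) * 5   # 5 years of higher pay
state_extra = lifetime_pension(spiked_salary) - lifetime_pension(base_salary)
print(f"District's extra salary cost: ${district_extra:,}")    # $100,000
print(f"State's extra pension cost:   ${state_extra:,.0f}")    # $240,000
```

The district fronts $100,000 in extra salary; the state is on the hook for $240,000 more in benefits. That asymmetry is what the negotiators are exploiting.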


These salary schedules are a huge win for both district and union negotiators. The union can show its members large benefits for teachers who stay for an entire career. The district keeps its total expenses down; the money comes from the state, after all. In turn, young, mobile teachers and state taxpayers lose out. Most states use similar retirement calculations (some use only the final year, or the final three years), but only Florida's districts have taken advantage of the provision to this extent. None of this should surprise or shock anyone; this is what happens when the people making the decisions are not the ones paying for the consequences.

Tight Budgets and Accountability

It's funny what a budget shortfall does to perspective. In January 2007, the University of California (UC) system, one of the largest and most symbolically important systems of higher education in the country, released its latest accountability document. At 40 pages, it was remarkable, for such a highly regarded system, for its lack of breadth and specificity and its all-around vagueness. It mentioned its institutions, including such prestigious places as UC-Berkeley and UCLA, just once each, in an entirely unhelpful table on the number of transfer articulation agreements each school had made with the state's community colleges. All reported identical figures.

Flash forward a year and a half to last September, when the UC released a 211-page accountability "discussion draft." It had a lot more information, this time presented in attractive tables and charts and broken out for each individual institution. After all this formerly private information was released, the sky didn't fall (except in the state's budget), and the UC has since followed up with an even better system for relaying information about its institutions' performance to the public. The new Web site has 15 "chapters" that each contain multiple measures presented in easily digestible graphic form.

These things matter because they show the public how well their institutions are performing. The LA Times ran a piece this weekend on students who want to pursue bachelor's degrees but opt to start the process at community colleges. The article was informative but generic, and it provides a perfect example of how a good accountability system can be used.

In 1960, California codified how it would educate a growing mass of baby boomers. The University of California system would award doctoral degrees, conduct most of the research, and educate the top 12.5 percent of high school graduates. California State University institutions would educate the top third of high school graduates, focus on the undergraduate experience, and be responsible for training the state's teachers. This left everyone else to the community colleges.

The community colleges were designed to be open access, free for all students, and a cheap route for state policymakers to educate the masses without paying UC or CSU prices. Then, if the students proved themselves at the community colleges, they would have clear routes to transfer and be on their way to bachelor's degrees at the more prestigious four-year schools.

The UC's new accountability system shows that it has held up its end of the bargain. First, users can see that transfer applications from California community colleges have risen over the last 15 years, and that admits and enrollees have pretty much kept pace. Transfers face a lot of competition to get into UC-Berkeley or UCLA, but they have very good chances of being accepted at one of the UC campuses in Davis, Irvine, Merced, Riverside, San Diego, Santa Barbara, or Santa Cruz.

But it's not just about getting in; transfers are able to graduate as well. The two-year systemwide graduation rate for transfers is now above 50 percent (which is actually quite good), and nearly 90 percent of transfers have earned a degree four years after transferring. The numbers vary by campus, which we're now able to see.

This new information tells us a lot. Without it, we might blame the UC admissions offices for unfairly rejecting qualified applicants. But the newly revealed data suggest that low transfer rates from the community colleges are not the fault of admissions policies; the problem must instead be addressed by increasing the pool of transfer-ready students. That requires an entirely different set of policy responses. And it's just one more example of how new, better accountability systems can inform the public to make better decisions.

What the Princeton Review Rankings Miss

Before diving into my first post, I just wanted to say hello. My name is Ben Miller and I just joined Education Sector, where I will be working on undergraduate education issues. I come here by way of the New America Foundation, where I spent the past two years working on public policy around college access, quality, and affordability. Feel free to contact me at bmiller [at] educationsector [dot] org.

The best postsecondary classroom experience in the country is not at any Ivy League school. In fact, according to a set of rankings released yesterday, the best collegiate classroom experience is not even on the East Coast—it is thousands of miles away on the Claremont, Calif., campus of Pomona College.

The classroom experience ranking is just one of 62 different “top” lists released yesterday by the Princeton Review. Using student surveys, it purports to tell consumers just how a selection of 371 colleges stack up against each other on a host of topics, some serious (accessible professors, high levels of class discussion) and some more inane (students most likely to be “dodge ball targets” or “jocks”).

But while the rankings can tell you everything about a school down to its quality of fire safety, none of them say anything about the actual quality of the education students receive. Look at Pomona’s classroom experience ranking, which is based on:

“…student assessment of professors' teaching abilities and recognition in their fields, the integration of new business trends and practices in the curricula, and the intellectual level of classmates' contributions in course discussions.”

Now, Pomona certainly seems like it provides a good education, and the factors it is judged on fit well with what we think a quality academic experience looks like. But that's as far as these data can take us. There's nothing in them that actually shows that students in small classes taught by engaged and accessible professors have better academic outcomes than those who are not. Moreover, even if these factors do contribute to a quality education, there are no data to show which elements have the greatest or least effect on student learning.

The fact is, the Princeton Review does not even attempt to rank quality of education and student learning because there is no source it could turn to for this information. Schools provide minimal data, if any, on their student learning outcomes, and the federal government does not collect any of this information either.

The need for better data on student learning should be apparent just by looking at Reed College. That small private college had the second-best classroom experience, according to the Princeton Review. But federal graduation rate indicators suggest that maybe Reed's education leaves something to be desired. According to a recent report co-authored by ES’s Kevin Carey (PDF), just 76 percent of Reed’s first-time, full-time students graduated within six years of enrolling. That’s the lowest graduation rate among the most selective schools in the country.

Now, maybe Reed's students do leave school having learned a great deal. Or maybe they do not. Without better information, it's impossible to know definitively one way or the other. Either way, it is clear that simply equating a positive classroom experience with academic quality is not sufficient.

Instead of tackling the quality issue, the Princeton Review focuses on “Best Value” schools. Unfortunately, this metric suffers from many of the same problems the other rankings have. It factors in admissions statistics like acceptance rates, which capture the quality of students coming in, but it includes no information on how those students have progressed by the time they graduate. Beyond that, it’s a return to the same data on small classes and professor accessibility used elsewhere, only with the cost of tuition and the amount of grant and scholarship aid taken into account.

Looking at these factors together seems less like a measure of value than a list of schools that are somewhat hard to get into but still have low net costs. It says nothing about whether schools help students progress academically and graduate on time. That would be the best measure of value given the high costs that come from failing to graduate or leaving school unprepared for the workforce.

It’s easy to understand why parents and students like publications like the Princeton Review’s rankings. They lay things out clearly and help simplify the decision-making process. But without taking into account more meaningful data on college completion and academic success, data schools need to do a better job of providing, these guides will remain little more than a rank exercise.

Monday, July 27, 2009

Charter Schools and Unions

The New York Times reported yesterday on recent efforts to unionize charter schools and the ongoing debate over the impact unionization could have on the growth and performance of charters. It's an important discussion, but no one knows where it will end. Will unionized charter schools remain a small part of the larger movement, or is this the beginning of widespread unionization in charters? What will the impact be on charter schools' operations and their ability to try new things, including different pay scales and work hours? And, of course, what will the impact be on student achievement?

To help dig into these issues, Education Sector hosted an online discussion earlier this month with a diverse group of public school teachers, including teachers at charter schools. The discussion likely won't answer any of these questions, but it does shed some light on the role unions play in today's education system and the lively, and at times contentious, debate over their place in its future.