Tuesday, October 20, 2009

The Way It Could Be

Two out of five of America’s teachers are disheartened and disappointed about their jobs, says a new study by Public Agenda and Learning Point Associates.

But, as I argue here, it doesn’t have to be this way. There are better designs for teaching, designs that can improve teacher satisfaction and effectiveness at the same time. Read on

Friday, August 07, 2009

Borrower Angst = Business Opportunity

A new product for loan repayment, SafeStart, officially launched its website today. The product promises to ease students' anxieties about repaying their college loans by offering to make payments for three years if students face financial hardship.

See today's Inside Higher Ed article for a good description of the details, the pros and cons, and the uncertainties around who this might help.

One thing is certain - this is an innovative product and probably the first to tap into a growing anxiety among students and parents over ever-increasing debt loads. So what does a product like SafeStart say about the state of higher education? That it costs too much. That it relies too much on student debt for revenue. And that paying for college, both up-front and after the fact, is causing a lot of angst.

House vs. Senate Higher Ed Earmarks

Are the U.S. Senate and House of Representatives heading for a fight over higher ed earmarks in their 2010 appropriations bills? The earmark figures in both bills indicate yes. But don't be fooled. The lack of overlap between funding priorities means the level of postsecondary pork is only likely to grow.

All told, the Senate bill (PDF) provided $85.6 million for the Fund for the Improvement of Postsecondary Education (FIPSE) -- the program that contains earmarks for higher education and funds innovative reform projects when money is available. The Senate's figure is about $48.3 million less than what the House provided in its bill.

The funding gap, however, is a bit of an illusion because the two bills only have a few spending priorities in common. The House, for example, provided $52.8 million for the FIPSE comprehensive program -- which gives out competitive innovation grants -- a college textbook rental initiative, and support for veteran student success centers. The Senate did not fund any of these, but instead directed $3.5 million toward the Erma Byrd Scholarship Program, a real-time writing training initiative, and an off-campus community service program.

Earmarks tell a similar story. The Senate provided 110 earmarks under FIPSE, 63 fewer than the House. Of these, however, only 17 appear in both bills. Even when both chambers included an earmark, it was almost always for substantially different amounts of money. The Senate, for example, provided $800,000 for St. Norbert College, double the amount provided in the House. Ironically, the largest earmark difference was the money given to the Edward M. Kennedy Institute for the Senate. It received $12.6 million from the House but just $1 million from the Lion of the Senate's counterparts.

The biggest earmark winner in the Senate bill appears to be Northern Kentucky University, which got $2.4 million for "purchase of equipment." Equipment must be particularly expensive in the Bluegrass State because the Western Kentucky University Research Foundation received $2 million for the same purpose.

The lack of earmark similarity is bad news for taxpayers. When the conference committee meets to reconcile the two bills, don't be surprised to see the final FIPSE total increase because no one is willing to sacrifice their pet project.

But limited earmark overlap also means there is a whole new set of project lowlights to mention:
  • Center for Empowered Living and Learning, Denver, Colo., $300,000 for "an education program on terrorism."
  • Dickinson State University, Dickinson, N.D., $600,000 for its Theodore Roosevelt Center, which "is designed to raise the profile of Theodore Roosevelt in North Dakota, to deepen our understanding of one of the most remarkable statesmen and intellectuals in American history, and to convene Roosevelt-related events of local, state, and national significance."
  • Kalamazoo Valley Community College, Kalamazoo, Mich., $200,000 for equipment to help train wind turbine technicians.
  • Keene State College, Keene, N.H., $100,000 for its Monadnock Biodiesel Collaborative, which hopes to create a plant to convert grease from around New England into biofuel.
  • Philadelphia University, Philadelphia, Penn., $100,000 for "equipment relating to science."
  • University of Virginia Center for Politics, Charlottesville, Va., $100,000 for interactive civic lessons for high school students.
While making fun of projects can be entertaining, the opaque and wasteful nature of FIPSE is not. This is a program that was supposed to fund exciting reform, not fill the coffers of schools with Congressional influence.

At least Keene State can turn its pork byproducts into fuel.

Wednesday, August 05, 2009

Will Race to the Top Spur a New Generation of Assessment?

Perhaps. Significantly improving student assessment is the real "moon shot" for the stimulus funds.

A new Education Week article highlights the potential impact of these funds:

What now seems to be an intractable choice between richer tasks and reliable data, though, could be mediated by advancements in technology that could improve access, cost, and reliability of performance-based testing, some experts argue....Experts add that the infusion of federal cash could also provide more opportunities to devise tests that will better engage teachers in the cognitive science about how knowledge develops over time.
But, while Secretary Duncan has set aside $350 million of his “Race to the Top” fund to improve student assessments, plans for these funds remain vague. The article quotes me and others warning that the investment could be wasted if we spend that $350 million without thinking differently about our decades-old assessment practices.

Photo: IBM Type 805 Test Scoring Machine, circa 1938

Willful Misunderstanding

Over at the National Journal's group edu-bigwig blog, they're debating the question "Are the Race To the Top Requirements Fair?" A lot of the discussion centers on the RTT requirement that states eliminate prohibitions against linking student test score data with individual teachers. Most of the bloggers are in favor of this, on the grounds that outlawing the use of information about how much students learn in evaluating the extent to which teachers help students learn is insane. But National Education Association president Dennis Van Roekel disagrees, writing:

We’re concerned about the effectiveness and reliability of requiring states to link data on student achievement to individual teachers for the purpose of teacher and principal evaluation. Teachers who work with disadvantaged students shouldn’t be “evaluated” based on whether their students hit a particular test target on a particular timeline. And we certainly shouldn’t base additional compensation on whether students meet particular testing targets on a particular day. We need to offer incentives so that our best teachers teach the students most in need of assistance, not incentives to teach students most likely to score highest on a standardized test. As with NCLB, good intentions can lead to unintended—and unacceptable—consequences.

An interesting sort of political science question is just how long the NEA can get away with vague and disingenuous comments like this. Nobody--nobody--wants to judge teachers based purely on the percentage of students who meet a given cut score on a test at the end of the year. That would be crazy. All reasonable conversations about the use of end-of-the-year student test score data for teacher evaluations begin with the assumption that we should (A) account for where students were at the beginning of the year and/or take into account other data about their academic histories, and (B) not rely exclusively on test-score data. Van Roekel knows this. Everybody knows this. The people pushing for the student data-teacher linkages are the same people who want to get more high-quality teachers in the classrooms of disadvantaged students. They're not idiots; of course they don't want to create an unfair evaluation system that would directly counter that goal. But conceding that makes the whole thing seem a lot more rational, and then where would the NEA be?

Also, it's easy enough to raise the specter of measurement error by talking about "particular testing targets on a particular day." But engaging with that concern honestly requires a level of seriousness and empiricism about how the numbers actually tend to play out. Measurement error is real and significant, but it is also finite and measurable, and thus subject to sensible decision-making and interpretation. If a given teacher's students all consistently fail to meet a particular target on a particular day, year after year, even though most of them were hitting targets in previous years--hey, that might mean something! But acknowledging that would take the conversation to a place the NEA clearly doesn't want to go.

9.5 Scandal Makes its Way to Sallie Mae

While cost and increased aid for low-income students have rightfully dominated the discussion of the proposal to end subsidies for private lenders making federal student loans, it's worth remembering that terminating the bank-based system would also help return integrity to a system with an ever-growing history of scandal and abuse.

Case in point, an audit report released Monday by the U.S. Department of Education's Office of the Inspector General (PDF), which found that a subsidiary of Sallie Mae -- the country's largest student loan company -- had improperly billed the federal government for an estimated $22.3 million in unwarranted subsidies.

This finding makes Sallie Mae the latest in an increasingly long line of lenders to get caught overbilling the government for student loan subsidies under the "9.5 percent student loan scandal."

The 9.5 percent scandal arose because lenders were taking advantage of an old subsidy rate from the 1980s that allowed them to receive a guaranteed 9.5 percent return on student loans financed by a tax-exempt bond -- a rate significantly higher than what they earn on loans now. Congressional action in 1993 should have prevented the growth of loans receiving this high subsidy rate, but several companies exploited loopholes to substantially increase the number of loans that qualified for it. Doing so allowed lenders to improperly bill the U.S. Department of Education for at least hundreds of millions of dollars in unnecessarily high subsidies until former Education Secretary Margaret Spellings put a stop to this abuse in 2007.

(The complete story about the 9.5 scandal, including how the Department of Education granted amnesty to the worst offenders, is too long to explain here, but for further reading check out this 2004 report and this blog post series.)

In many respects, the new audit's findings against Sallie Mae are not as bad as what similar audits turned up with regard to Nelnet (PDF) and the Pennsylvania Higher Education Assistance Agency (PDF). Unlike those two lenders, Sallie Mae and its affiliate did not exploit loopholes that allowed them to improperly increase the volume of loans eligible for the 9.5 subsidy. The audit did, however, find that Sallie Mae continued billing the government for 9.5 subsidies longer than it should have, allowing it to accumulate $22.3 million in unwarranted payments. Not surprisingly, the lender strongly denies this conclusion, claiming that the inspector general's calculations of 9.5 percent eligibility were flawed.

This is the fifth inspector general audit released this federal fiscal year that is critical of the bank-based system. It's highly likely that it will not be the last. That's because history has shown that lenders in the current system have repeatedly found loopholes or engaged in illicit practices that undermine the integrity of the bank-based loan program. Each successful attempt to game the system results in yet another unnecessary instance of waste on the part of taxpayers or students. A transition to 100 percent government lending would certainly not eliminate all corruption in the system. But at least it would be a step toward eliminating the financial incentives and opportunities for abuse that have repeatedly jeopardized the federal student loan program. It might also make life a little less busy for the inspector general.

UPDATE: Higher Ed Watch weighs in with more detail on Sallie Mae's past claims about 9.5.

Tuesday, August 04, 2009

Ironic Twist in the SAT Debate

A growing number of top liberal arts colleges, dissatisfied with the SAT, the reliance on a standardized test in admission decisions, and the college rankings culture that feeds it, have opted to make college entrance exams optional. Anyone who's studied statistics or human behavior could predict the outcome: students with less than stellar SAT scores will opt not to submit them, while those with higher test scores and lower grades will see them as bolstering their chances. Jay Matthews covers a new article documenting just this scenario at 32 SAT-optional schools:

The result was stunning. Nearly all of them admitted to submitting inflated averages that did not include scores from students who did not submit them during the admissions process. Two refused to comment. Only one of them, Muhlenberg College, reported “a full and honest SAT average, requiring students who took the test to submit scores after enrolling and reporting their SAT average inclusive of those scores,” Epstein said.

This made a significant difference, he discovered. “Publicly available and privately shared data reveal that SAT scores for non-submitters average 100-150 points lower than submitters,” he wrote in his article. “Eliminating those scores for 25 percent to 50 percent of enrolling students results in manufactured SAT average increases between 25 and 75 points. These results imply that 31 of the 32 SAT-optional institutions in question are the beneficiaries of SAT average boosts...”
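The arithmetic behind those boosts is easy enough to sketch. Here's a quick back-of-the-envelope calculation -- the numbers are hypothetical, not Epstein's actual data -- showing how quietly dropping lower-scoring non-submitters inflates a reported average:

```python
# Hypothetical illustration of how excluding non-submitters inflates a reported SAT average.
# Assumes submitters average 1250 and non-submitters average 1125 (125 points lower),
# roughly in line with the 100-150 point gap Epstein describes.

def reported_average(submitter_avg, nonsubmitter_avg, nonsubmitter_share):
    """Average over all enrolled students vs. the average a school reports
    when it drops non-submitters from the calculation."""
    true_avg = (submitter_avg * (1 - nonsubmitter_share)
                + nonsubmitter_avg * nonsubmitter_share)
    return true_avg, submitter_avg  # the "reported" average counts submitters only

for share in (0.25, 0.50):
    true_avg, reported = reported_average(1250, 1125, share)
    print(f"{share:.0%} non-submitters: true avg {true_avg:.0f}, "
          f"reported avg {reported:.0f}, boost {reported - true_avg:.0f} points")
# Prints boosts of roughly 31 and 63 points -- squarely in the 25-75 point range Epstein cites.
```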

To see how this affected individual institutions, see this graphic from the NY Times. These results should not be surprising. By marketing themselves as SAT-optional, these institutions naturally attracted more applicants who might not have otherwise applied. But it's ironic that, in taking a stand against a manipulable test and a college rankings system that favors institutions that lower their admission rates and increase their test scores, these colleges bolstered their rankings by...lowering their admission rates and increasing their test scores. Funny how that works out.

Monday, August 03, 2009

Commonplace Corruption

As the Chicago Tribune reported a couple of days ago, University of Illinois trustee Lawrence Eppley has resigned "against the backdrop of a state investigation of a shadow admissions system that gave preferential treatment to students with ties to trustees, politicians and deep-pocketed donors. About 800 undergraduate applicants had their names placed on clout lists, known internally as Category I, at the Urbana-Champaign campus during the last five years, a Tribune investigation found. Dozens more received special consideration from the law school and other graduate programs."

Now, it'd be easy enough to use this merely as a point for Illinois in the Great American Corrupt-o-Thon it seems to be contesting against the state of New Jersey, or to chuckle at the inevitable appearance of the words "whose law firm donated $105,000 to Blagojevich's campaigns."

But really, isn't the only out-of-the-ordinary thing here that the people in Illinois were dumb enough to give their list an official name? It's been well-documented that many selective colleges and universities use their admissions processes in all manner of self-serving and untoward ways as a means of garnering money, fame, and political power. Admissions are the coin of the realm in elite higher education and it's not surprising that institutions can, if they so decide, spend it for corrupt purposes. What's surprising is that they usually get away with it and manage to keep their elevated reputations intact.

The justifications for this kind of behavior are always the same, and always absurd. In no particular order:

  • The students admitted through our corrupt process were qualified (variant: they can "do the work.") Any selective college turns away tons of students who were, in theory, "qualified" to attend, in the sense that they met some absolute minimum standard (not that you'd be able to actually find said standard if you asked.) That's more or less the definition of "selective." The best 500 students Urbana-Champaign rejected last year were probably all but indistinguishable from the worst 500 students they accepted. All of this happens well inside the margins of "qualified," however defined. The question is who was most qualified, and admissions is a zero-sum game. The whole point of accessing the corrupt process is to bump out someone more qualified than you; otherwise, why spend the money and/or political capital?

  • Only a small number of students were admitted through our corrupt process (variant #1: "they only get a little push"; variant #2 "only if all else was equal"). Being a little bit corrupt is like being a little bit pregnant. If you only break the law a couple of days a year, you still go to prison. Etc. I imagine most colleges and universities have someone in the philosophy department who could explain the ethics here.

  • Admitting students through our corrupt process benefits the university financially via donations and state appropriations, which are in turn used for virtuous purposes such as need-based financial aid. College admissions is, again, a zero-sum game, and I'm going to go out on a limb and guess that well-qualified low-income students generally don't have a special in with the governor or the trustees or the development office. It doesn't do a low-income student much good to drop extra money into the need-based aid fund if they can't access it because they weren't accepted because their slot went to the less-qualified child of some guy who wrote some other guy a big fat check. Education Sector could do all kinds of wonderful new education policy things if I robbed a bank this evening and deposited the money in our account, but that doesn't mean it's the right thing to do.

  • There's no official list or policy, our development people and admissions people just, you know, sit down to talk from time to time. Over coffee. To talk about the weather. And things. Right.

In the end, these practices persist because they're a kind of genteel, behind-closed-doors corruption, veiled by the deliberate vagueness of the admissions process and given a sheen of respectability by institutions that we look to for intellectual and cultural leadership. A lot of the people in positions of power within the government and certain business circles benefitted from these policies and hope their children will too. I imagine they think of themselves as generally moral people and aren't all that interested in thinking otherwise.

But that doesn't make it any less wrong. There's a simple test to apply here. Selective colleges and universities have Web sites with information for applicants that include admissions criteria. My alma mater, for example, includes things like "strength of high school record," "depth and overall quality of application essay," "standardized test scores," "extracurricular activities," and "proven ability to think and act independently." Colleges should ask themselves: is this a complete list? Does it include items like "father runs a hedge fund" or "uncle recently gave the governor a lot of money" or "mother is long-time majority leader of state senate"? No? Then those kinds of things shouldn't matter in admissions. If you'd be embarrassed to write it down and put it on your Web site, there's a reason.

And, since you asked: No, I don't think affirmative action is a corrupt process per above. There's a difference between naked bribe-taking and contributing to larger social goals of diversity and justice.

Taking Your College to Court

A college should be responsible for the career prospects of its graduates. That's the conceit put forth by Trina Thompson, a 27-year-old graduate of Monroe College, who is suing her alma mater for the $70,000 she paid in tuition because she has not been able to find a job since graduating in April with a bachelor's degree in information technology.

In addition to its high likelihood of a quick dismissal, the suit seems pretty specific to Monroe. The school bills itself as providing "professional, career oriented higher education to students from diverse backgrounds" and plays up how it uses this focus "to prepare graduates for successful careers." Since Thompson claims the school did not meet these promises, schools with less of an overt job emphasis would probably not have to worry about copycat suits.

But what if a judge found that Monroe's promises of career assistance should be held to a meaningful standard? What existing data could the school present to show that it in fact did not rip off its graduates and that it does provide an education that leads to reasonable workplace success?

The answer appears to be not much. Monroe's Web site talks a lot about career information and opportunities, but provides little in the way of figures. Statewide data is little better. An Education Sector study of all 50 state higher education accountability systems plus the District of Columbia found that New York collects a fair amount of information on students' postsecondary educational success, but only for public colleges and universities.

Federal data provide more information about Monroe, but nothing on employment status. The school's 2006 cohort default rate -- the percentage of student loan borrowers that left school in 2006 and defaulted within two years -- was higher than the national average, but it's impossible to know how many of those students graduated and then defaulted because they could not find a job. Likewise, the school's federal graduation rate figure of 70 percent does not help either since Thompson graduated and was not able to find a job.

In short, unless Monroe is sitting on a storehouse of data it chooses not to publicize, it would have a hard time showing whether its business-oriented education is worth the money or not. If Monroe does have this information, then the question becomes: why not publicize it sooner?

To be sure, Monroe data will not be the reason why Thompson's lawsuit succeeds or fails. But consider the flip side. If Monroe could produce meaningful data demonstrating the success (immediate and long-term) of its graduates -- something that makes sense for a school focused on vocational training -- wouldn't that be a very useful marketing tool? Rather than resorting to vague statements about maximizing student success, it could use that information to make a compelling case for why students should attend Monroe rather than a competitor institution. It might also help incoming students get a better sense of just what to expect once they enter that post-graduation workforce.

Neighbors

In an illustrative example of how states differ in the way they are reacting to economic shortfalls, here's how Illinois is meeting its financial obligations:

The state will deny the financial aid applications of an estimated 130,000 students -- the most in Illinois history.

They were denied because they applied for state aid after May 15, a cutoff months earlier than in years past, thanks to Springfield's budget woes.

Hardest hit? Students at community colleges and returning adult students, because they tend to apply for aid later.

What's more, under the state budget compromise reached earlier this month, which slashed funding for the state's Monetary Award Program in half, no student at any Illinois school will receive aid for the second half of the 2009-2010 school year....

The cuts in state support for the program come as more students submitted applications for aid than ever before. Through June, nearly 200,000 eligible students -- 27 percent more than at the same time last year -- sought aid.

With a budget of $385 million, the state was able to grant more than 145,000 awards of up to nearly $5,000 to students who applied by mid-August last year. That's the typical cutoff. But this year, with more eligible students applying and half the money, the date was moved up to May 15.

Last year, 65 percent of the students who applied after mid-May attended community colleges, meaning those types of students will be hardest hit this year.

Consider these cuts against the example provided by neighboring Indiana:
The Indiana Department of Workforce Development today announced a new program that will provide $6,000 over two years to pay for students to earn an associate’s degree or a vocational certificate.

The scholarships, called Workforce Acceleration Grants, are earmarked for unemployed workers or their spouses, or students from impoverished families.
It's a tough time for state budgets, and, while these articles say nothing about the particular difficulties of each state, they do serve as a reminder that the way policymakers deal with crises matters.

Friday, July 31, 2009

Pork Projects in the House

This week, the U.S. Senate Appropriations Committee followed on the heels of the House of Representatives by passing its version of a bill that sets funding levels for Department of Education programs in the 2010 fiscal year. Included within that legislation will be the first Senate figures for earmarks given to specific colleges and universities within the Fund for the Improvement of Postsecondary Education, or FIPSE. While we wait to see just what pet projects the upper chamber would fund, let's take a look at some of the House's more questionable spending priorities.

First, a quick FIPSE refresher. Originally intended as a way to fund innovative reform projects through a competitive grant process, FIPSE instead became the main vehicle for Congressmen to direct a few hundred thousand dollars toward their favorite colleges and universities each year. The pork pursuit has grown so much that twice in the last five years there have not been any leftover competitive grant funds once the pet projects got their money.

This year, the House requested a total of $68.2 million in earmark funding, significantly less than the $91.2 million Congress provided last year, though the final number is likely to increase once the appropriations bill gets to the conference committee. Either way, it's still nearly double the $34.8 million the House provided for competitive grants.

That said, here are a few of the earmarks in this year's House bill that raised eyebrows based on their descriptions. (For anyone who wants to play along, I've scanned the relevant pages and put them in a PDF here.)

  • Livingstone College, $300,000 for the school's Center for Holistic Learning "to provide academic and student support services, which may include equipment and technology."
  • University of Virginia's College at Wise, $150,000 to install a voice over Internet protocol system (basically what Skype does) and "demonstration activities through its Emerging Technologies Learning Center."
  • Evergreen State College, $325,000 for its Bioregion initiative, which "aims to better prepare undergraduates, as well as ourselves, to live in a world where the complex issues of environmental quality, environmental justice, and sustainability are paramount."
  • Niagara Community College, $100,000 to buy equipment and technology for its hospitality and tourism training programs.
  • Metropolitan State College, $200,000 for an aviation training program (at least it's accredited).
  • Oklahoma State University, $450,000 for a wildlife management technician program, including buying equipment.
  • University of Massachusetts, Boston, $12.6 million for the Edward Kennedy Institute for the Senate, including supporting an endowment. This award is 12.6 times greater than any other FIPSE grant listed.
Now, picking on these institutions is not entirely fair because they actually bothered to provide some information about their awards. More common were vague grants for schools like the State University of New York, Geneseo ($500,000 "for purchase of equipment") or Rockford College ($250,000 for "technology upgrades.") If those schools get their awards, we won't have any idea what the money actually funds until the online FIPSE database updates. Once it does, we'll find out if that money is going to meaningful reform, or to studying what affects the quality of wine. With so little information available, how could any Congressman claim to make a meaningful judgment about whether these initiatives merit federal dollars?

Regardless of the quality of the proposal, these FIPSE grants subvert the program's initial intent of being one of the few federal funding streams that encourages institutions to innovate by competing for awards. And so long as Congressmen willingly fund pork over reform, this program will continue to be little more than an annual wasted opportunity.

Why Teach for America and The New Teacher Project Exist

If you stop and think about it, Teach for America (TFA) and The New Teacher Project (TNTP) are well-functioning, non-profit, national human resource departments for schools. They recruit, screen, and hire candidates, all functions of a traditional HR department. TFA and TNTP do provide a lot more induction and support for their hires, but at the base level their purpose is to find and recommend potential teachers. Of course, school districts have their own human resource departments as well, so it's worth asking why these programs were needed in the first place.

If you look at the data on the teacher hiring process (some of the best of which has been put together by TNTP itself), what you see is that districts just aren't very good at it. They're slow, which causes them to lose out on better candidates. They don't recruit all that well, which means they have fewer candidates to choose from. And they tend to privilege more experienced teachers throughout the process, which, fair or not, limits their ability to attract young and motivated applicants.

Take, for example, the city of Philadelphia, which employs about 10,000 teachers in its 274 schools. Assuming a 9 percent teacher turnover rate (that's the national average--it's much higher in urban and low-income areas), the city needs to hire at least 900 new teachers every year. The graph below from the National Council on Teacher Quality shows how many applicants they've gotten over the last six years. Simple division suggests that Philadelphia public schools are getting a little more than three applicants for every open position.

Compare that to the competition for spots in TFA or TNTP programs. Only one out of nine TFA applicants gets hired, and New York City's Teaching Fellows program, run by TNTP, had 14,000 applicants apply for 700 spots (or 20 applicants per position).
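For anyone who wants to check the math, here's the simple division at work. The Philadelphia applicant total below is a placeholder, since the actual counts live in the NCTQ graph referenced above; only the roughly 3-to-1 ratio is asserted in the post:

```python
# Back-of-the-envelope applicants-per-opening comparison cited above.
# The Philadelphia applicant total is hypothetical -- chosen to be consistent with
# "a little more than three applicants for every open position."

philly_teachers = 10_000
turnover_rate = 0.09                      # national average; higher in urban, low-income districts
philly_openings = philly_teachers * turnover_rate   # at least 900 hires a year
philly_applicants = 2_800                 # placeholder for the NCTQ figure

nyc_fellows_applicants, nyc_fellows_spots = 14_000, 700

print(f"Philadelphia: ~{philly_applicants / philly_openings:.1f} applicants per opening")
print(f"NYC Teaching Fellows: {nyc_fellows_applicants / nyc_fellows_spots:.0f} applicants per opening")
```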

These numbers matter. At the base level, they mean districts have more and better options for who they want in front of their classrooms. Not to mention the symbolic impact for those who are selected: knowing the position is coveted, and that if you do not care to be there, there are other people who do.

To understand...

...the satisfaction of seeing My Bloody Valentine live, imagine you're a masochist, standing next to a jet engine, getting beaten about the head with chunks of frozen whiskey by the four most serene Irish people in the world.

Thursday, July 30, 2009

Charter Schools and Unions—One Size Fits All??

Unionization of charter schools seems to be the hot topic these days. A recent NYT article raises the critical question:

“…whether unions will strengthen the charter movement by stabilizing its young, often transient teaching force, or weaken it by preventing administrators from firing ineffective teachers and imposing changes they say help raise achievement, like an extended school year.”

For unions to organize charter schools without weakening them, charter school faculty need to be able to create their own collective bargaining agreements (like Green Dot) that align with the educational philosophy of the school and its staff. This is my fear…You will have public school union leaders, who don’t reflect the actual teaching population in charter schools, advocating for and bargaining on behalf of the charter school teachers. This model wouldn’t work for the charter schools or their teachers. Most teachers in charter schools have chosen their particular school because they buy into the way that school functions, and are willing to do the extra stuff (longer hours, tutoring etc.) because they see that it works, or believe that it can work. For unionization of charter schools to be successful, it needs to allow the school to implement innovative reform strategies and allow teachers to choose both unionization and to work in schools operating under different educational models.

This raises the question of why traditional public schools don’t unionize in this way. Currently, many traditional schools are part of a union that negotiates collective bargaining agreements for a large number of schools that vary in many ways, from mission to resources. And these district-wide unions often do not reflect the viewpoints of many reform-minded educators. If more traditional public schools would step away from the one-size-fits-all union structure, you might be surprised at how much teachers would be willing to engage in discussions around reform. Why can’t we have both -- teacher empowerment and progressive education reform? That’s the ideal. Maybe the unionization of charter schools can shed light on ways unions in traditional public schools can remain relevant in current education reform debates.

--posted by Marilyn Hylton

Not All Higher Education Spending is Created Equal

At least that’s the conclusion reached in a new working paper from Cornell University’s Higher Education Research Institute. First mentioned in Inside Higher Ed, the paper takes advantage of data from the Delta Cost Project to study the relation between certain types of higher education spending and student achievement.

Specifically, the researchers looked at four different categories of spending:
  • Instruction
  • Student services, such as supplemental instruction, on-campus organizations, and other “activities that contribute to students’ emotional and physical well-being”
  • Academic support services, which includes spending on libraries, curriculum development, and other items that “support the instruction, research and public service missions of the university”
  • Research
Expenditure figures were calculated per full-time equivalent (FTE) student. Federal graduation rates, meanwhile, were the researchers’ barometer for student success.

Overall, the researchers found that increasing per-student spending on either instruction or student services led to statistically significant gains in an institution’s graduation rate. Increased spending per student on research, meanwhile, had the opposite effect.

Intuitively, this makes sense: spending money on things that deal directly with students improves their academic success, while expenditures on research or other areas that could draw professors away from students do not.

But it’s the second part of the paper’s findings that is really noteworthy. According to the data, spending $500 more per student on student services leads to a larger increase in graduation rates than an equivalent spending increase on instruction. This outcome is even more pronounced at schools with low average SAT scores, high numbers of Pell Grant recipients, or low graduation rates. These findings even held when treating spending as a zero-sum game. The researchers found that graduation rates still rose if an increase in student support spending was offset by an equivalent decrease in instructional expenditures.
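For readers who want a feel for what this kind of analysis looks like, here is a purely illustrative sketch of a graduation-rate-on-spending regression. The variable names and synthetic data are mine, not the paper's; the actual study uses Delta Cost Project data and presumably a richer specification:

```python
# Illustrative only: a toy regression of graduation rates on per-FTE spending categories,
# with synthetic data whose "true" coefficients echo the direction of the paper's findings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
instruction = rng.normal(8_000, 2_000, n)      # per-FTE instructional spending ($)
student_services = rng.normal(2_000, 600, n)   # per-FTE student services spending ($)
research = rng.normal(3_000, 1_500, n)         # per-FTE research spending ($)

# Synthetic relationship: services help more per dollar than instruction; research hurts.
grad_rate = (40 + 0.002 * instruction + 0.004 * student_services
             - 0.001 * research + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([instruction, student_services, research]))
model = sm.OLS(grad_rate, X).fit()
print(model.summary(xname=["const", "instruction", "student_services", "research"]))
# A "$500 more per student" effect is then read off as 500 * the relevant coefficient.
```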

These findings have important public policy ramifications and should be good news to schools with strapped budgets. They suggest that once a certain level of instructional spending is reached, schools may be better off directing dollars toward supplemental assistance, rather than just plunking more cash down on professors or adjuncts. For schools with monetary problems, this matters. It means they could spend money on cheaper alternatives to instructors, such as tutors or (after sunk costs) a computer lab, saving money and boosting student success in the process.

The redirection of spending from instruction to student support is similar to the model used by the National Center for Academic Transformation (NCAT), which has helped redesign over 100 courses by using a combination of technology and better planning. For example, Virginia Tech worked with NCAT to completely revamp its linear algebra classes, replacing expensive lecture sections with a math emporium, where students could go at any time to work through materials on a computer with tutors often on-hand to provide assistance. This model not only reduced per-student course costs from $91 to $21 (an annual savings of $140,000), but also improved student learning in most areas. Similar results occurred at the University of New Mexico, which redesigned a psychology course that had disproportionately negative outcomes for minority students.

As the NCAT model attests and the working paper confirms, substituting spending on student support for spending on instruction can have real benefits for those enrolled. At schools that do not have many existing student supports, this could also provide a route to cost savings at no sacrifice to academic learning. (The paper looks at four-year institutional data, but community colleges appear to be good candidates for similar spending changes given their limited resources and characteristics similar to the high-Pell, low-grad-rate schools described in the paper.)

While these findings are important, there are some real limitations to the available data. Student support is a very large and all-encompassing category that includes everything from tutoring to the always-maligned climbing wall. Without greater disaggregation it will be impossible to know exactly what factors make the greatest contribution to increased student success. But based upon the findings from NCAT, it’s a decent guess that the success comes more from the emporiums than from the fake rock facades.

Wednesday, July 29, 2009

Shoddy Academic Study Denounces Media for Non-Citation of Shoddy Academic Studies

A couple of days ago, I received an email from the teachers union-funded "Great Lakes Center for Education Research and Practice" touting a new study written by Holly Yettick of the University of Colorado at Boulder, allegedly uncovering rampant pro-think tank bias in the mainstream media. As the policy director of a think tank, I was naturally interested--we're always looking for new ideas when it comes to prosecuting our nefarious media-manipulation plans. Alas, I was disappointed. In an analysis of 864 articles published in the New York Times, Washington Post, and Education Week, the author found that:

Although university and government sources were cited more often, a higher percentage of reports produced by advocacy-oriented think tanks were cited by both types of publications. Universities produce 14 to 16 times more research than think tanks, but the three publications only mentioned their studies twice as often as think tank reports. As a result, any given think tank report was substantially more likely to be cited than any given study studies [sic] produced by a university.


That's not a bad way of counting press hits, although I probably would have added the AP, Wall Street Journal, and USA Today. (Note also the K-12 bias -- Ed Week but no Chronicle or InsideHigherEd). It's the denominator that really throws these numbers out of whack. Presumably, nearly every one of the think tank studies in question was written with the hope of garnering some media coverage. The universe of academic studies, by contrast, was calculated in two ways: the total number of papers accepted at the 2008 meeting of AERA (8,064), and the total number of articles published in 2007 in 176 peer-reviewed journals (7,172).

Now, maybe third-rate journalism is at the root of the Washington Post's failure to provide A1-coverage to articles like "Still jumping on the balance beam: continued use of perceptual motor programs in Australian schools," from the April 2007 edition of the Australian Journal of Education, one of the peer-reviewed journals in question. Ditto "Contributions and challenges to vocational psychology from other disciplines: examples from narrative and narratology," from the International Journal for Educational and Vocational Guidance. And maybe Ed Week needs to take a long, hard look at its standards and practices after failing to cover "Complicating Swedish Feminist Pedagogy" and "Complexity Theories at the Intersection of Hermeneutics and Phenomenology" from the 2008 AERA.

Then again, maybe not.

The article also alleges a conservative bias in news coverage, as evidenced by the fact that newspapers tend to cite studies from notorious right-wing outfits like...Education Sector, where I work. Without going into the political and professional histories of our staff at length, let me assure you that this view is completely absurd. If we're on the "right" side of the spectrum and "centrist-libertarian," why is the Cato Institute always insisting I'm wrong? 

What accounts for the relatively high think tank batting average? In announcing the paper, the Great Lakes center said, "Yettick indicates that this is likely due, at least in part, to the skill and resources think tanks devote to publicity for their reports, using sophisticated marketing campaigns targeting journalists, policy makers and the public for which university professors generally lack the resources and motivation to do."

You hear this a lot. Well, I've worked at three of the think tanks covered in the report--the Center on Budget and Policy Priorities and The Education Trust are the other two--so I have a pretty good sense of how they operate. And I probably shouldn't be revealing the sophisticated marketing secrets that allow us to crowd out allegedly more-rigorous university-based research with our "ideologically driven" work. But what the heck. Here's my secret recipe:

1) Before a report is released, send an email to editors and reporters at publications where you'd like it to be covered. Describe the findings, briefly, and explain why it might make a good story.

2) Give them a copy of the report, for free.

3) Include your email address and phone number, in case they have any questions. Check your messages. If they email or call back and say "I'm on deadline for five o'clock," respond before five o'clock.

4) Be succinct. Don't, for example, write "It is, in fact, true that advocacy-oriented think tanks rarely have their research peer reviewed and repeatedly have been found to engage in research practices that cast suspicion on the validity of the findings..." If something is, in fact, true, then it's true. Moreover, as a reader, my assumption is that you're not deliberately lying to me. If you say it, I assume you believe it's true. So the sentence should begin "Advocacy-oriented think tanks..." and go from there. These things matter! See Strunk and White for further advice. 

Also, proofread. "study studies"?

In the end I think the marketplace of ideas is quite a bit more efficient than Yettick believes. Reporters aren't all idiots and think tanks don't succeed through P.R. witchcraft. If the media isn't covering your research, it's probably not my fault. 

About CBO's Alternative Student Loan Cost Estimate

Last week, the U.S. House of Representatives’ Committee on Education and Labor voted 30-17 to pass a bill (PDF) that increases the Pell Grant and establishes new programs to help community colleges, increase college completion rates, and improve early childhood learning.

These initiatives would be paid for by eliminating subsidies that are currently given to private lenders so they will offer loans to students through the Federal Family Education Loan (FFEL) Program. The loans are nearly identical to ones offered by the government through the Direct Loan Program, and lenders making them receive federal insurance to cover 97 percent of any losses they sustain if a loan defaults.

Not surprisingly, the loan companies aren’t thrilled with this proposal and are fighting back.

The latest salvo over this proposal concerns how much the Congressional Budget Office (CBO) thinks the government would save by eliminating FFEL. Last week, CBO released a cost estimate (PDF) that put these savings at $86.8 billion over the next 10 fiscal years by having the Department of Education issue all federal student loans. Yesterday, the same organization sent a letter to Sen. Judd Gregg (R-N.H.) saying a different calculation would yield savings of $47 billion over 10 years—a difference of roughly $4 billion a year.

Certainly, savings of $47 billion over 10 years should not be dismissed outright. That figure is actually greater than the savings estimated by the Office of Management and Budget when President Obama first proposed the end of FFEL in February (PDF, Page 23). Nearly $5 billion a year could fund a lot of interesting and creative programs for access and success.

But that’s assuming the alternative CBO estimate isn’t another iteration of a somewhat misleading budgeting tactic lenders have used in the past to try and make the FFEL Program seem cheaper than Direct Loans.

This tactic is known as accounting for market risk, and it attempts to measure the costs of the two student loan programs by treating them as if they were products offered without a government guarantee on the private market. (My former colleague Jason Delisle wrote a paper on this subject last October, which can be found here.)

Market risk matters in this case because federal student loans are accounted for as a net present value—a process that estimates the current cost of making a loan by discounting its future cash flows. Operating off of the principle that $1 in the future is worth less than $1 today, the net present value method thus estimates the total cost of a loan at the time it is disbursed.

The net present value method places a great deal of importance on the discount rate—a number that reflects the value of a future dollar versus one today. If it is easy to obtain money for the loan and it is likely to be repaid, then the discount rate is likely to be low. But if there is a high default risk or money is hard to borrow, then future payments become less valuable, resulting in a higher discount rate.

The discount rate CBO currently uses does not, however, reflect any of these factors. Instead, CBO is legally required to use a discount rate equal to the yields on Treasury securities—a number so low it is essentially a risk-free rate.

If future borrower payments are treated as risk-free, then their present value remains high and the net present cost of a direct loan seems to be fairly small. It may even appear to have a negative cost, meaning it seems to make money for the government.

FFEL loans, however, look more expensive under a risk-free discount rate. This is because CBO’s estimates of FFEL loans only include subsidies, fees, and default payments between lenders and the government. (Loan disbursement and borrower payments are not measured because they occur only between the lender and borrower.) Since all of these government expenses occur in the future, a low discount rate means they have a high present value. The sum of these future costs to the government thus drives up the net cost of a FFEL loan.

For years lenders have claimed that using a risk-free discount rate is unfair since student loans are in fact risky investments. (They even paid a previous head of CBO to put out a paper [PDF] arguing in favor of using market costs.) Instead of the risk-free Treasury rate, lenders argue that CBO should use a higher discount rate that reflects the financing and default risk the private market would assign to issuing student loans.

Asking for market-based rates in cost estimates makes sense, but simply increasing the discount rate just reverses the problems with the current system. Direct loans cost more when using a higher, market-based discount rate because future borrower payments would be worth less in present dollars. FFEL loans show the opposite effect, as future government subsidy or default payments appear cheaper. (Just as $1 paid to you is worth less in the future than it is in the present, so too is $1 owed.)
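To make the direction of both effects concrete, here is a stripped-down numerical sketch. The cash flows and rates are invented for illustration; the real CBO models are far more detailed:

```python
# Illustration of why the choice of discount rate flips the relative cost of the two programs.
# All cash flows are hypothetical; only the direction of the effect matters.

def npv(cash_flows, rate):
    """Net present value of a series of annual cash flows (paid in years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

# Direct loan (government's view): lend $10,000 today, collect borrower payments later.
direct_outlay = -10_000
borrower_payments = [1_300] * 10          # ten years of repayments

# FFEL loan (government's view): no disbursement, just future subsidy/default payments to lenders.
ffel_payments_to_lenders = [-250] * 10

for label, rate in [("risk-free (Treasury-like)", 0.03), ("market-adjusted", 0.07)]:
    direct_cost = -(direct_outlay + npv(borrower_payments, rate))   # positive = net cost to government
    ffel_cost = -npv(ffel_payments_to_lenders, rate)
    print(f"{label:>25}: direct loan cost {direct_cost:8.0f},  FFEL cost {ffel_cost:8.0f}")

# At the low rate the direct loan looks cheap (it even shows a negative cost, i.e. a profit)
# while FFEL looks expensive; at the higher rate the direct loan's cost rises and FFEL's falls.
```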

Simply substituting a higher, market-based rate for the risk-free Treasury rate thus presents the Direct Loan Program in a more costly (and more realistic) light without affording the same treatment to FFEL loans.

It’s important to have a good sense of the savings from reforms to the student loan programs, but it is disappointing to see discussions of how best to spend savings on student access and success get caught up in additional political wrangling.

Tuesday, July 28, 2009

Golden Parachutes

The Wall Street Journal had an interesting article recently about pension spiking, a practice where workers use the calculation of their pension benefits to their advantage:
Pete Nowicki had been making $186,000 shortly before he retired in January as chief for a fire department shared by the municipalities of Orinda and Moraga in Northern California. Three days before Mr. Nowicki announced he was hanging up his hat, department trustees agreed to increase his salary largely by enabling him to sell unused vacation days and holidays. That helped boost his annual pension to $241,000.
This is entirely legal, but it amounts to real money spent on the part of state and municipal retirement systems:
Mr. Nowicki recently turned 51 years old. If he lives another 25 years, his pension payments will cost the fire district an estimated additional $1 million or more over what he would have received had he retired at a salary of $186,000, not including cost of living adjustments, a fire board representative said.
And, even though Mr. Nowicki is "retired," he is still employed by the fire department as a consultant earning $176,000 a year.

Teachers are able to take advantage of these provisions as well, sometimes through informal means like selling back vacation or sick days, and sometimes more subtly, through their district-negotiated contracts. In my recent study of large urban district salary schedules, I discovered an interesting example from the state of Florida. In Florida, teachers are awarded retirement benefits based on their salary over their last five years on the job. So a district that opts to pay its teachers high salaries in those final five years only has to pay the higher compensation for those five years. But the state will be paying higher retirement benefits, based on these higher salaries, for the rest of that teacher's life.

Broward County, Fla., for instance, gives teachers an average raise of only $320 in their first 10 years on the job, but back-loads $20,000 in raises into a short time period between year 18 and 21 on the job, packing almost 40 percent of all experience-based compensation into these three late-career years, a stage when teachers are unlikely to gain effectiveness in a commensurate way. This trend has accelerated over the last decade, as teachers crossing the threshold from 20 to 21 years of experience in Broward County have netted average raises of 16.1 percent, compared to 5.2 and 6.5 percent raises, respectively, for teachers crossing the 18- to 19-year and 19- to 20-year thresholds.
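A toy example shows why back-loading works so well for the district. The salary figures below are invented, and Florida's actual benefit formula also factors in years of service and an accrual rate; the point is only the mechanism:

```python
# Toy illustration of why back-loading raises into a teacher's final years is cheap for
# the district but expensive for the state pension system. Salaries are invented.

def final_average_salary(salaries, years=5):
    """Average salary over the last `years` of a career -- the base Florida's formula uses."""
    return sum(salaries[-years:]) / years

flat_raises = [40_000 + 1_000 * yr for yr in range(30)]        # steady $1,000 raises for 30 years
back_loaded = ([40_000 + 320 * yr for yr in range(20)]          # tiny early-career raises...
               + [60_000 + 3_000 * yr for yr in range(10)])     # ...big late-career jumps

for label, schedule in [("steady raises", flat_raises), ("back-loaded", back_loaded)]:
    print(f"{label:>13}: career payroll ${sum(schedule):,.0f}, "
          f"pension base (final-5 average) ${final_average_salary(schedule):,.0f}")
# Career payroll (paid by the district) comes out similar or even lower under back-loading,
# while the pension base (paid out by the state for decades) ends up much higher.
```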


These salary schedules are a huge win for both district and union negotiators. The union is able to show its members large benefits for those teachers who teach for their entire career. The district is able to keep total expenses down. The money comes from the state, after all. In turn, young, mobile teachers and state taxpayers lose out. Most states have similar retirement calculations (some use only the final year, or the final three years), but only Florida's districts have taken advantage of this provision to the full extent. None of this should surprise or shock anyone; this is what happens when people making the decisions are not the ones paying for the consequences.

Tight Budgets and Accountability

It's funny what a budget shortfall does to perspective. In January of 2007, the University of California (UC) system, one of the largest and most symbolically important systems of higher education in the country, released its latest accountability document. At 40 pages, it was remarkable, for such a highly regarded system, for its lack of breadth and specificity and its all-around vagueness. It mentioned its institutions, including such prestigious places as UC-Berkeley and UCLA, just once each, in an entirely unhelpful table on the number of transfer articulation agreements each school had made with the state's community colleges. All reported identical figures.

Flash forward a year and a half to last September, when the UC released a 211-page accountability "discussion draft." It had a lot more information, this time presented in attractive tables and charts and separated out for each individual institution. After releasing all this formerly private information, the sky didn't fall (except in the state's budget), and the UC has followed up with an even better system for relaying information about its institutions' performance to the public. The new Web site has 15 "chapters" that each contain multiple measures presented in easily digestible graphic form.

These things matter, because they easily show the public how well their institutions are performing. The LA Times ran a piece this weekend on students who want to pursue bachelor's degrees but opt to start the process at community colleges. The article was informative but generic, and it provides a perfect example of how a good accountability system could be used.

In 1960, California codified how it would educate a growing mass of baby boomers. The University of California system would award doctoral degrees, conduct most of the research, and educate the top 12.5 percent of high school graduates. California State University institutions would educate the top third of high school graduates, focus on the undergraduate experience, and be responsible for training the state's teachers. This left everyone else to the community colleges.

The community colleges were designed to be open access, free for all students, and a cheap route for state policymakers to educate the masses without paying UC or CSU prices. Then, if the students proved themselves at the community colleges, they would have clear routes to transfer and be on their way to bachelor's degrees at the more prestigious four-year schools.

The UC's new accountability system shows that it has held up its end of the bargain. First, users can see that transfer applications from California community colleges have risen over the last 15 years, but that admits and enrollees have pretty much kept pace. Transfers have a lot of competition to get into either UC-Berkeley or UCLA, but they have really good chances of being accepted to one of the UC schools in Davis, Irvine, Merced, Riverside, San Diego, Santa Barbara, or Santa Cruz.

But it's not just about getting in; transfers are able to graduate as well. The two-year systemwide graduation rate for transfers is now above 50 percent (which is actually quite good), and nearly 90 percent of transfers have earned a degree four years after transferring. The numbers vary by campus, which we're now able to see.

This new information tells us a lot. Without it we might blame the UC admissions offices for unfairly rejecting qualified applicants. But the newly revealed information suggests that low transfer rates from the community colleges are not the fault of admissions policies, but rather need to be addressed by increasing the pool of transfer-ready students. That requires an entirely different set of policy responses. And it's just one more example of how new, better accountability systems can help the public make better decisions.

What the Princeton Review Rankings Miss

Before diving into my first post, I just wanted to say hello. My name is Ben Miller, and I just joined Education Sector, where I will be working on undergraduate education issues. I come here by way of the New America Foundation, where I spent the past two years working on public policy around college access, quality, and affordability. Feel free to contact me at bmiller [at] educationsector [dot] org.

The best postsecondary classroom experience in the country is not at any Ivy League school. In fact, according to a set of rankings released yesterday, the best collegiate classroom experience is not even on the East Coast—it is thousands of miles away on the Claremont, Calif., campus of Pomona College.

The classroom experience ranking is just one of 62 different “top” lists released yesterday by the Princeton Review. Using student surveys, it purports to tell consumers just how a selection of 371 colleges stack up against each other on a host of topics, both serious (accessible professors, high levels of class discussion) and more inane (students more likely to be “dodge ball targets” or “jocks”).

But while the rankings can tell you everything about a school down to its quality of fire safety, none of them say anything about the actual quality of the education students receive. Look at Pomona’s classroom experience ranking, which is based on:

“…student assessment of professors' teaching abilities and recognition in their fields, the integration of new business trends and practices in the curricula, and the intellectual level of classmates' contributions in course discussions.”

Now, Pomona certainly seems to provide a good education, and the factors it is judged on fit well with what we think a quality academic experience looks like. But that's as far as these data can take us. There's nothing in them that actually shows that students in small classes taught by engaged and accessible professors have better academic outcomes than students who are not. Moreover, even if these factors do contribute to a quality education, there are no data to show which elements have the greatest or least effect on student learning.

The fact is, the Princeton Review does not even attempt to rank quality of education and student learning because there is no source it could turn to for this information. Schools provide minimal data, if any, on their student learning outcomes, and the federal government does not collect any of this information either.

The need for better data on student learning should be apparent just by looking at Reed College. That small private college had the second best classroom experience according to the Princeton Review. But federal graduation rate indicators suggest that maybe Reed's education leaves something to be desired. According to a recent report co-authored by ES’s Kevin Carey (PDF), just 76 percent of Reed’s first-time full-time students graduated within six years of enrolling. That’s the lowest graduation rate among the most selective schools in the country.

Now maybe Reed's students do leave school having learned a great deal. Or maybe they do not. But without better information it's impossible to know definitively one way or the other. Either way, it is clear that just equating positive classroom experience with academic quality is not sufficient.

Instead of tackling the quality issue, the Princeton Review focuses on “Best Value” schools. Unfortunately, this metric suffers from many of the same problems as the other rankings. It factors in stats on admissions and acceptance rates, which capture the quality of students coming in, but it lacks information on how far those students progress by the time they graduate. Beyond that, it's a return to the same data on small classes and professor accessibility used elsewhere, only with the cost of tuition and the amount of grant and scholarship aid taken into account.

Looking at these factors together seems less like a measure of value than a list of schools that are somewhat hard to get into but still have low net costs. It says nothing about whether schools help students progress academically and graduate on time. That would be the best measure of value given the high costs that come from failing to graduate or leaving school unprepared for the workforce.

It’s easy to understand why parents and students like publications like the Princeton Review’s rankings. They lay things out clearly and help simplify the decision-making process. But without taking into account more meaningful data on college completion and academic success, data that schools need to do a better job of providing, these guides will remain little more than a rank exercise.

Monday, July 27, 2009

Charter Schools and Unions

The New York Times reported yesterday on recent efforts to unionize charter schools and the ongoing debate over the impact unionization could have on the growth and performance of charters. It's an important discussion, but no one knows where it will end - will unionized charter schools be a small part of the larger movement or is this the beginning of widespread unionization in charters? And what will the impact be on charter school operations and ability to try new things, including different pay scales and work hours? And, of course, what will the impact be on student achievement?

To help dig into these issues, Education Sector hosted an online discussion earlier this month with a diverse group of public school teachers, including teachers at charter schools. The discussion likely won't answer any questions, but it does shed some light on the role unions play in today's education system and the lively, and at times contentious, debate over their place in the future.

Thursday, July 23, 2009

The Bland Accuracy of the GAO

Today the GAO released an evaluation of the District of Columbia Public Schools (DCPS). Long known as one of the worst-performing districts in the country, DCPS has been the site of radical change over the last two years, since Mayor Adrian Fenty took over the schools and hired Chancellor Michelle Rhee. Today's GAO report is both a sober reminder of how hard change is and a refresher course on just how bad things were.

Change is hard, and the implementation has been anything but smooth:
DCPS lacks certain planning processes, such as communicating information to stakeholders in a timely manner and incorporating stakeholder feedback at key junctures, which would allow for a more transparent process. In addition, DCPS did not gauge its internal capacity prior to implementing certain key initiatives, which, if addressed in the future, could help ensure the sustainability of initiatives. Without these planning processes, an organization risks having to revamp initiatives, leading to delays and compromising the implementation of timely, critical work. While having these planning processes in place will not eliminate all implementation issues, it will help to identify and mitigate risks associated with implementing bold initiatives and identify needed changes in the early stages of the initiative. Furthermore, a lack of these planning processes can result in decisions that are made on an ad hoc basis with resources unevenly distributed as was the case with the District’s new staffing model. Ultimately, the lack of such processes while planning and implementing initiatives has impeded the success of some of DCPS’s initiatives and could impede the District’s continued success and progress in reforming its school system.
But it was needed:
To increase accountability of its central office, DCPS developed an accountability system and an individual performance management system for central office departments and employees. The central office, which is responsible for providing academic and nonacademic supports to DCPS, had operated without such accountability systems prior to the recent reform efforts. For example, previously, performance evaluations were not conducted for most DCPS staff. As a result, central office employees were not held accountable for the quality of services they provided to support schools.

It's no wonder Chancellor Rhee inherited a central office where employee records were kept in boxes, paychecks were often inaccurate, and repair orders went unfilled for long stretches of time: the employees responsible for these tasks were never evaluated on whether these assignments were completed. It's worth remembering that what most people take for granted as a basic element of a well-functioning organization, evaluating employees and holding them responsible for completing their work, isn't so basic everywhere.

Just Asking

To all those who argue mayoral control of schools is bad for democracy, isn't it a good thing that schools are the issue in this year's New York City mayoral race? There's an incumbent mayor up for reelection using his success running the schools as his major claim, and now we have a challenger disputing those claims, issuing audits, and questioning the data. Someone please explain to me how schools could be more accountable to the public.

Monday, July 20, 2009

The Libertarian's Dilemma, Cont'd

Last week I wrote that the problem of runaway college spending presents libertarians with something of a dilemma, because, "the best way to bend down the long-term higher education cost curve and thus reduce government spending is to increase government regulation in the form of mandatory reporting [of information about institutional performance]."

Unsurprisingly, Neal McCluskey of the libertarian Cato Institute disagrees. I think he's unpersuasive, but before I explain why, it's worth reviewing the central argument of the paper that prompted this discussion, The Revenue-to-Cost Spiral, by Robert Martin, published by the conservative John William Pope Center for Higher Education Policy.

Martin begins with the principal / agent problem, an issue that's endemic to large modern organizations. Essentially, the problem arises when the interests of the people who own or otherwise have a stake in an organization (the principals) are misaligned with the interests of the people who actually run it (the agents).

For example, a few years ago the shareholders (i.e., the principals) of insurance giant A.I.G. employed a guy (i.e., an agent) named Joseph Cassano who sold billions of dollars of insurance to other large financial companies, essentially protecting them against the risk that their securities backed by sub-prime mortgages would become worthless in the event of a huge real estate market collapse. Cassano was paid tens of millions of dollars based on the short-term profits A.I.G. booked, some of which he used to buy a really expensive house in the Knightsbridge section of London. Now A.I.G.'s shareholders have been devastated, but Cassano still owns the house.

In higher education, Martin argues, the principal / agent disconnect is less about risky profit-taking and more about status. Colleges are inherently status-maximizing institutions, even if the principals--taxpayers, donors, and students--would rather colleges focus on a different set of priorities, like giving every student a high-quality, affordable education. As Martin writes, "senior administrators can persuade themselves that lavish offices, extensive building projects, expensive public relations events, luxury travel, and high compensation are in the institution’s interest. Board members may consider expensive social events to be in the institution’s interest." The same could be said for giving too much weight to the research mission at the expense of teaching, and for lots of other things.

How do you get more status, particularly in an industry where reputations are seemingly as ancient and permanent as the stone buildings themselves? You buy it, by purchasing nicer buildings (old-looking stone is a popular choice of materials), more prominent researchers, and students with better SAT scores. Or you just let the money accumulate in the endowment, also a major benchmark of prestige. All of this dovetails with Bowen's revenue-to-cost hypothesis: college spending is capped only by revenues, and colleges have every incentive to spend, so they constantly build up fixed costs, raise more money, spend more money, raise more, spend more, and so on.

Martin's solution? More information. To mitigate the principal / agent problem, give the principals more data so they know what's really going on. And the government has to play a role:
[Reform] has to involve private groups, state and local governments, and the federal government. The most important federal government contribution to reform would be a significant increase in transparency requirements. The information requirements for tax-exempt status should be increased, and the IRS should conduct more and more-intense audits of these institutions. Further, the information provided to the IRS should be in the public domain immediately and available on the institution’s Web site or gathered in a single place. The federal government can also increase the quantity and the quality of the information reported to the National Center for Education Statistics (NCES).

That's the libertarian's dilemma in a nutshell--if you think seriously about restraining college costs, it brings you around to more meddling by the IRS, the Department of Education, etc.

McCluskey disagrees. "Wouldn’t the best, most direct way to “reduce government spending” obviously be to, well, reduce, or even stop, government spending?" he asks, before advocating for massive public disinvestment in higher education that would cripple thousands of institutions and shut the doors to college for hundreds of thousands of students nationwide. Well, sure! But that's like saying the best way to control long-term health care costs is to spend less money on health care. The relevant question is how. (Just to be clear: I'd like to spend more public money on higher education, not less, albeit in a way that's substantially more performance-sensitive and directed toward institutions that serve academically and economically at-risk students.)

McCluskey goes on to assert that "Clearly, we don’t need government to set standards or inform consumers – markets will do those things themselves." He notes that the market provides consumers with plenty of information about things like hamburgers and cars. That's true as far as it goes, but it ignores the government's role in mandating the reporting of things like nutritional information and gas mileage.

But the much more obvious example is the way the free market has reacted to the issue at hand, higher education. The free market has given us the U.S. News & World Report college rankings, which are all about status and spending. Fully ten percent of each college's score is based on a simple measure of spending per student--the more you spend, the higher you rank. Another 20 percent is based on things that cost money to buy--low class sizes, faculty salaries, etc.--and much of the rest flows from larger reputational and selectivity factors that are directly and indirectly enhanced by spending.
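To see how that weighting plays out, here is a minimal, purely hypothetical sketch of a spending-weighted score. The 10 percent and 20 percent weights follow the rough breakdown above; the normalization, the residual 70 percent reputation-and-selectivity bucket, and the two example colleges are all invented for illustration and are not U.S. News's actual formula.

# A made-up composite score in which spending enters directly (10%) and
# through spending-driven inputs like class size and faculty pay (20%).
def composite_score(spending_per_student, spending_driven_inputs, reputation,
                    max_spending=100_000):
    """Return a 0-100 score under the assumed weights (illustration only)."""
    spending_component = min(spending_per_student / max_spending, 1.0) * 100
    return (0.10 * spending_component        # direct spending-per-student measure
            + 0.20 * spending_driven_inputs  # class sizes, faculty salaries, etc. (0-100)
            + 0.70 * reputation)             # reputation/selectivity residual (0-100)

# Two hypothetical colleges, identical except for how much they spend:
frugal = composite_score(spending_per_student=30_000, spending_driven_inputs=60, reputation=70)
lavish = composite_score(spending_per_student=90_000, spending_driven_inputs=60, reputation=70)
print(f"Frugal U: {frugal:.1f}")  # 64.0
print(f"Lavish U: {lavish:.1f}")  # 70.0, higher purely because it spends more

Holding everything else constant, the bigger spender always ranks higher, which is exactly the incentive problem at issue.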

In other words, the free market has created an information environment that exacerbates the runaway college cost problem that McCluskey is supposedly interested in trying to solve.

Meanwhile, George Leef weighs in at Phi Beta Cons (at the National Review) to endorse the McCluskey spend-less-money-by-spending-less-money solution, assert without evidence that there is "wildly excessive demand for educational credentials" (from whom? the private-sector employers who have freely chosen to pay more and more for those credentials over the years?), and essentially disavow the central conclusion of a paper published by the Pope Center, where he (Leef) happens to be the Director of Research.

Jane Shaw, president of the Pope Center, also rejects Martin's proposed solution, saying:

Wouldn’t it be better if schools were motivated to provide the information that their customers — parents and students — want? Different schools could provide information suited to their potential customers. Wouldn't it be more valuable to have information along the lines of Princeton Review's multi-dimensional ratings, which tell you, say, where the party schools are — and let students decide whether those are positive or negative features? I believe that we would have a richer, more satisfying marketplace for education that way than we would with a mandatory website containing statistical "student-learning outcomes" that end up looking rather similar to one another. Rather than asking the federal government to intervene (which it does much too much of already), let's figure out ways to empower the customers.

"Motivated"? What would motivate a college to disclose information that didn't flatter the institution and burnish its status and reputation? Look, I'd be pleased as punch if colleges disclosed the good with the bad out of a sense of civic obligation, but I'm not going to hold my breath. And I'd sort of assumed that the steely-minded conservatives over at the National Review would have a similar view of human nature. There's no contradiction between Shaw's hope for multi-dimensional ratings, which I support, and a transparency agenda. But it's naive to think that colleges are going to get there on their own, which leaves one option--the government, like it or not.

Reviewing the Review of What Happened in Montgomery County

Jay Mathews reviews the new book Leading for Equity, which chronicles Montgomery County’s successes, so far, in closing the achievement gap. Straight out of the gate, Mathews is right about one thing: the six “lessons” are convoluted and sound more like titles for paper submissions to AERA than book chapters (Lesson 1, for example: Implementing a strategy of common, rigorous standards with differentiated resources and instruction can create excellence and equity for all students). But his critique of the book as too process-oriented is wrong. Process has tripped up many a reform, and understanding what sequence of events and efforts leads to change is key to any district’s improvement strategy. Sit in on union-district negotiations, listen to testimony at board and council meetings, dig into PTA minutes going back ten years and more, and you’ll see that Weast’s success is one of process: getting a strategic collective of people (the aforementioned) to make difficult decisions for the right reasons.

Central to this success, which the book describes, was the mapping of two zones (the wealthier Green Zone and the less-affluent Red Zone) that illustrated for everyone the inequities of the county and its schools. As someone who was educated by MCPS (in the Red Zone before it was the Red Zone) and is now sending my son to MCPS (still the Red Zone), I know the practical implications of living in the lesser of the two zones. My kids will go to school with a lot of kids who don’t have as much as they do, whose parents work two jobs, don’t speak English, don’t walk them to school every day or read with them every night, and don’t schedule extra conferences with their teachers. But they will also be in schools that give a little extra to these kids to even the playing field, from the initial full-day kindergarten program to the extended learning opportunity summer sessions going on right now.

Mathews says the book misses the real story, which is how MCPS gets and keeps great teachers. I agree that human capital tops the list of public education concerns and that MCPS is successful largely because it has quality teachers, but I’m unconvinced that the story of Montgomery County rises and falls on the teacher reforms. MCPS has done a lot to improve teaching and teachers; its professional growth system, for example, is touted as one of the best in the nation. But Superintendent Weast’s struggle to close achievement gaps is not merely a teacher problem, at least not the way Rhee’s might be in DC. Getting and keeping great teachers in all MCPS schools is a product of the county’s convenient close-in location next to DC (it would be great to know, by the way, the percentage of MCPS teachers whose spouses work in the federal government, think tank, and World Bank trifecta; count my family as one) and its ability to offer jobs that are better, in pay and otherwise, than those in the PG County and DC school systems.

The real story is about how a county that was unaware of or unconcerned with school inequities, or both, bought into a differential approach to schooling that has resulted in significant gains for the poorest kids. This doesn’t always happen, is still quite contentious, and is definitely a long, involved process—one that is as important as it is difficult to capture.

Friday, July 17, 2009

A Monopoly for Non-Profit Lenders

SAFRA, the latest acronym in financial aid, refers to the Student Aid and Fiscal Responsibility Act - a large and ambitious piece of legislation introduced in the House this week. The legislation follows President Obama's budget proposal to move all future federal student loans to the Direct Loan Program, eliminating private loan companies from the business of making and holding student loans.

But it keeps private loan companies in the business of servicing loans - keeping track of borrowers, collecting payments, and communicating with schools and students. The companies that get to service student loans will be chosen through a competitive bidding process run by the Department of Education. Well, almost all of them, that is - the legislation allows non-profit loan companies in some states to be guaranteed a monopoly on the servicing of loans in those states.

A small section of the 181-page bill guarantees loan servicing business to eligible non-profit loan companies in each state, and in states with only one eligible non-profit, it allocates to that company the lesser of 100,000 borrowers or all borrowers in the state. When I read this, I wondered how many states this might affect - in how many states would students have no choice in who services their loans?
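To make the rule concrete, here's a minimal sketch of the lesser-of allocation as I read it. The state names, borrower counts, and function names below are made up for illustration; they are not figures from the bill or from anyone's actual estimates.

# Hypothetical illustration of the "lesser of 100,000 borrowers or all
# borrowers in the state" guarantee for a state's sole eligible non-profit.
GUARANTEE_CAP = 100_000

def guaranteed_borrowers(borrowers_in_state: int) -> int:
    """Borrowers guaranteed to the state's single eligible non-profit servicer."""
    return min(GUARANTEE_CAP, borrowers_in_state)

# Made-up single-non-profit states, just to show when the cap binds:
for state, borrowers in {"Small State": 40_000, "Large State": 250_000}.items():
    locked_in = guaranteed_borrowers(borrowers)
    print(f"{state}: {locked_in:,} of {borrowers:,} borrowers have no choice of servicer")

In the small state, every borrower is locked in to the non-profit; in the large one, the guarantee covers the first 100,000. The harder question is how many states actually have only one eligible non-profit.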

Fortunately, I didn't have to do the math on that. Student Lending Analytics posted about this yesterday and estimated that 24 states might end up with a single non-profit servicer. SLA estimates that this would cover 13 percent of all borrowers - 13 percent of borrowers who would have no choice in the company they rely on for help with repayment, for a forbearance if necessary, and for communications about their loans.

Monopolies don't lead to the best customer service, and students need very good customer service when repaying their loans. These non-profit loan companies should compete with other servicers, rather than be guaranteed business in their state. And students should have the right to decide which company will do the best job of helping them repay their loans.

Charts You Can Trust - Revised

Earlier this week, the National Center for Education Statistics announced technical changes to the measure of student loan amounts in the 2007-08 NPSAS - the survey of student financial aid conducted every four years. I won't get into the technical details (you can find them here), but the end result limits the ability to compare the most recent 2007-08 data to prior years of data using the publicly available DAS system.

These changes affect the data presented in Drowning in Debt, the CYCT published by ES last Thursday, in which we compare student loan data from the five most recent NPSAS surveys, from 1992-93 to 2007-08. NCES has already revised the data for 2003-04 and will finish updating the 1999-2000 and 1995-96 data sets by October 2009. Once NCES finishes revising the publicly available data sets so that they can be compared with each other, ES will re-publish adjusted charts.

Despite this change, the primary conclusion of our report - that student debt is rising - remains unchanged. The revisions to the 2003-04 data reduce the average total loan amounts presented in Chart 2 of our report by an average of $600. Because the earlier year's figure is now lower while the 2007-08 figure stands, the increase from 2003-04 to 2007-08 was even steeper than we originally presented.

Briefly,

En route to the Pitchfork music festival in Chicago, so I'm blogging via BlackBerry and limited to short posts:

1) David Brooks' column today about community colleges is quite good, much more so than yesterday's higher ed piece in the Post from E.J. Dionne.

2) The Post review of the Dead Weather concert @ the 930 club misses the point spectacularly: the whole enterprise is clearly a controlled experiment to see if rock greatness can be achieved through sheer force of charisma, stage presence, and overwhelming cool. (Answer: Indeed it can!)

3) Neal McCluskey's arguments seem to deliberately ignore the actual history of higher education in America over the last 50 years; more on this next week.