Wednesday, May 04, 2011

Measuring Outcomes

One of the most interesting manias of the age in which we live is the obsession with measurement.  That obsession takes two forms.  The first involves the quantification of knowledge and the resultant reduction of reality to a set of numbers and relationships among numbers.  Effectively, neither facts nor relationships among facts exist unless they can be reduced to the symbol system of numbers.  The second involves the transformation of numbers, that is, the measures themselves, into fetish objects.  Numbers appear solid; they seem to represent something that, unlike words, may not be manipulated.  There cannot possibly be ambiguity in a number, or in a set of numbers, or in a set of relationships among numbers memorialized in some sort of equation or the like.

 (Image from 10 Ways to Measure Your Event (ROI) with Social Media, Event Manager Blog, March 25, 2009)

But numbers can be as easily manipulated, and as ambiguous, as words.  And measurement can be used to effectuate substantive changes in behavior as easily as any command of law written in words.  Two recent writings suggest the difficulties and politics of numbers in a society obsessed with measurement and convinced that numbers, unlike words, neither lie nor manipulate. 
The first is a posting by Jason Saul entitled "The Dirty Little Secret About Measurement," Mission Measurement, March 23, 2011.  What one chooses to measure, and the measure chosen, may be as much about the entities for which measurement is conducted, or the need to construct a reality for those who depend on the existence of that reality, as it is about "objectivity" or the production of facts.  Indeed, there is a suggestion that facts, like numbers, standing alone, mean about as much as a single word unconnected to anything that comes before or after.  The entire post is worth a read; here are some highlights:
For the last 15 years I have been focused on a single knotty question: how do you measure social impact?   . . .   How is it that we can measure the temperature on Mars, but we can’t measure what happens within the orbit of a nonprofit organization?  Why is measurement so confounding?  

Getting the Right Measurement

After years of consulting with thousands of nonprofits on this issue, it finally struck me: we’re focusing on the wrong problem.  The problem isn’t actually a problem of measurement - it’s a problem of strategy.  The reason why it’s so hard to quantify impact is because, far too often, nonprofits are trying to measure outcomes their programs are not designed to produce.  Simply put, we’re trying to cheat our way to the answer. . . .

Where we get in trouble is when we try to “stretch” our statements of impact beyond the outcomes that are reasonably proximate to our work.   Take the case of an after school sports club that we recently advised: in an effort to attract “Gates money,” the executive director wanted to demonstrate that her program was impacting high school graduation rates.  The only problem was that the program primarily involved playing basketball with kids after school.  While there was a study program, few attended it and those that did basically just worked on their homework.   So I guess we could bemoan the measurement challenge of estimating the program’s impact on high school graduation, or we could just be intellectually honest.  There are many bona fide (and valued) outcomes that this program produces: reducing risky behaviors, increasing student interest in school, encouraging healthy lifestyles, etc.  While those outcomes may not be as “sexy” as improving graduation rates, they are quite important predicates.   

Intellectual honesty is one way to solve the measurement problem.  . . .  If you’re an advocacy organization looking to pass a law, substantial contribution means you led the coalition, lobbied the legislature and helped craft the legislation. If you’re running a direct service program, substantial contribution to an outcome is a function of dosage, frequency and duration. . . 

A New Generation of Social Strategies

Of course, the other way to solve the measurement problem is to just improve our programs.  If we want to be able to say more, we need to actually do more.  I recall meeting with an arts group whose primary goal was to engage younger artists and support them in their careers.  The organization spent 80% of its budget on a weekly newspaper for artists. When I asked whether young artists ever read the paper, the executive director replied: “no, they’re all online!” Yet the organization kept publishing the newspaper because that’s what it always did.  Measuring this organization’s impact on young artists would have been extremely difficult – but not because of measurement, because the strategy was never designed to produce that impact.  Put simply, we are using yesterday’s strategies to produce today’s outcomes. If we want to really make a difference, we need a new generation of social strategies, not a new generation of social metrics. . . .

Setting a New Standard

Funders have a role to play, too.  Instead of goading nonprofits to prove the impossible, let’s set reasonable expectations for results.  Funders should ask organizations to state their intended outcomes upfront, and make the case for how they will make a substantial contribution toward achieving those outcomes.  Instead of requiring 10% of the grant be used to hire an evaluator, foundations should require that 10% of the grant be used to design the program for greater impact. 

But numbers, and measures, can also produce substantial substantive effects.  What one chooses to measure, and what one chooses not to measure, expresses substantive presumptions and creates strong incentives to skew reality to meet the measure.  The second example appeared in a recent article in the Chronicle of Higher Education about the ouster of the University of Nebraska from the prestigious Association of American Universities.  Jeffrey J. Selingo and Jack Stripling, Nebraska's Ouster Opens a Painful Debate Within the AAU, Chronicle of Higher Education, May 2, 2011. 

The moral of that story is not so much that numbers provide an efficient way of measuring conformity to reality, but that measures grounded in numbers can disguise decisions about choices and preferences whose articulation in words would be politically or socially unacceptable.   The article provided, in relevant part:
In the end, the University of Nebraska at Lincoln lost its bid to remain in the exclusive Association of American Universities by just two votes.
Nebraska’s chancellor, Harvey Perlman, learned the institution’s fate on April 26 after an angry, isolating month in which he had fought to keep it in the association. A two-thirds majority was needed to remove Nebraska, and 44 ended up voting against the institution during a balloting period that was extended to solicit votes from as many members as possible. . . .

Though two member institutions had left on their own before, the association had never voted to throw out a member. The unprecedented move was fraught with intrigue and politics not typical of staid and collegial academic associations, say several presidents in the AAU with knowledge of the process who asked not to be named because of the group's confidential proceedings. . . .

The AAU's two-phase membership criteria focus primarily on an institution's amount of competitive research funds and its share of faculty members who belong to the National Academies. Faculty awards and citations are also taken into account.
Presidents say that in recent years discussions about membership in the association have become much more quantified, with an increasing emphasis on a rankings methodology developed by the membership committee and senior AAU staff. Last April the association as a whole adopted revised criteria that compared AAU institutions with nonmembers on research dollars and eliminated the assumption that current members would automatically continue on.
"It was very clear that the easiest path to scoring high on the criteria is to have large medical schools or large science and engineering faculties," said Nancy Cantor, chancellor of Syracuse University, which was reviewed along with Nebraska and has decided to leave the AAU voluntarily in the coming months (see a related article).
The membership committee was responsible for drafting the new criteria, and presidents who recall last spring's meeting said there was little discussion of the new method among the full membership before it was adopted. "Many of us didn't realize the full impact of that new criteria," Ms. Cantor said.
Another university leader said that, given that the association is made up of presidents who regularly criticize university rankings, "there's concern by some of us that too many membership decisions are being made purely by the numbers." . . . .

 A year ago, the AAU invited its first new member in nearly a decade, the Georgia Institute of Technology. Some presidents don't want the group to get too big, and so as it adds members, they believe those at the bottom of the rankings should be pruned. . . .

'Strong Forces Against' Nebraska

The vote on Nebraska began at an association meeting at the Four Seasons here, on April 10. That Sunday evening, as AAU presidents prepared for a reception, Mr. Perlman learned that the group's Executive Committee had voted 9 to 1 to end the relationship. Now the question would go to the full membership for a vote. . . .

AAU leaders asked that the ballots be cast before the presidents left town on April 12. Ballots were sent to those presidents who didn't attend the meeting the next day, and all ballots were due by April 18.
The April 18 deadline for votes apparently caused some confusion among several presidents. Those who talked with The Chronicle said it was clear that April 18 was the day ballots would be counted. But AAU officials told Mr. Perlman that April 18 was the deadline for ballots to be postmarked, according to an e-mail exchange between the chancellor and Mr. Berdahl, which was released with other documents on Friday by Nebraska at the request of The Chronicle.
When April 21 arrived and the AAU still hadn't heard from several presidents, the association e-mailed them, asking them to vote by overnight mail or indicate they didn't intend to vote.
"We have established no hard deadline after which we would disqualify votes," Mr. Berdahl wrote in an e-mail to Mr. Perlman on April 22. "We have two cases of presidents out of the country from whom we will hear on Monday or Tuesday at the latest, and we will then have heard from everybody."
The deadline was significant because any abstention would be counted as a vote in favor of retaining Nebraska, Mr. Perlman said. When the AAU appeared to be seeking additional ballots beyond the deadline, Mr. Perlman concluded that "they probably didn't have the votes" and were determined to get them. . . .

Quantifying a University's Research

It's not clear what prompted the special reviews of Nebraska and Syracuse last year, except that they ranked at the bottom of the AAU metrics.
What particularly hurt Nebraska in those metrics is that as a land-grant institution in a farming state, it gets a large share of its research dollars for agriculture. The entire University of Nebraska system had $13.2-million in federally financed farm-related research in 2008, or about 10 percent of its total federal research dollars, as compared with a nationwide average of about 3 percent.
The AAU, however, does not give such research the same weight in its membership criteria because much of federal support for agricultural work is awarded through formulas and earmarks rather than peer-reviewed grants. As a result, presidents of land-grant institutions say that the AAU metrics are stacked against them. They maintain that differences between states in climate, soil, and crops necessitate formula-driven funds.
Large public institutions like Nebraska are also hurt in the AAU rankings by a process the association calls "normalization," which seeks to determine per-faculty research rewards by dividing total research dollars by the number of faculty members at an institution.
For Nebraska, that means the total research dollars are divided by a significant portion of faculty devoted to agricultural research, even though their research rewards are not considered as valuable under AAU metrics. The normalization process tends to help smaller members with smaller overall research budgets, like Brandeis and Rice Universities.
Mr. Cohon stressed that the metrics are a product of years of discussion and analysis by the organization's membership. It is possible that a large number of faculty conducting agricultural research could penalize an institution, but "that's not the case here," he said.
"While there is no perfect set of metrics, I think there is a broad sense of satisfaction with the metrics we have," Mr. Cohon said.
In his e-mail to the campus and in interviews with The Chronicle, Mr. Perlman said what put Nebraska at a particular disadvantage was the lack of an on-campus medical school.
While other AAU members, such as Cornell and Pennsylvania State Universities, for instance, lack medical schools on their main campuses, Nebraska's medical school is also under a totally separate administrative structure from the Lincoln campus, an arrangement that is unlike the ones at those other institutions. As a result, its research dollars are not counted by the AAU, even though, as a medical school, it can't belong to the association on its own. . . . .

Plea for a 'Qualitative Judgment'

The AAU also put Nebraska at a comparative disadvantage by not penalizing institutions that rely heavily on a small number of academic fields for their overall research dollars, despite the association's stated commitment to diversity of mission, Mr. Perlman said. . . .

Rather than emphasize exact national rankings on those points, Mr. Perlman told The Chronicle, he asked the AAU to "make a qualitative judgment, as their rules require, about whether we were compatible with other AAU institutions.". . . .

Mr. Perlman's plea for counting medical-school activity isn't necessarily supported by data from the National Science Foundation, which publishes a tally of university research spending from all sources by institutions with medical schools. The NSF's latest annual compilation, for 2008, shows that the University of Nebraska system ranked 39th in the nation.

Mr. Perlman's strongest claim may be that of recent growth, even if it isn't a major factor in the AAU evaluation process. From 1999 to 2009, the University of Nebraska system had the fifth-largest percentage growth in federally financed research expenditures of any college that was in the top 100 for federal money in 1999. Its federally financed expenditures more than doubled over that period, to $148.6-million. The Nebraska system also rose 19 places—to 68th from 87th—on the list of universities reporting the most federally financed expenditures. . . .
In the case of the University of Nebraska's ouster, numbers camouflaged a decision that universities with strong science and engineering programs would be privileged over universities with a strong presence in the production of knowledge, at the highest levels, in other fields.  The numbers did not say that, and the AAU did not say that, but the aggregate effect of the measures chosen said precisely that, without the bother of having to be held accountable for the policy choices the measures represented.

Of course, this is not to suggest that quantification is a bad thing, or that quantification has no value as a tool for generating facts useful for analysis.  But it is to suggest that ignoring the fundamental importance of the qualitative element in quantitative analysis is folly.  Turning numbers into fetish objects, and measures into aspects of the face of the divine, opens quantitative analysis and measurement in general to the sort of manipulation that both distorts reality and serves to mask intentions.  There is a short step between this sort of exercise and the diminution of accountability--a perverse result, indeed.  A greater focus on the assumptions underlying measures, on the way numbers are produced, and on the qualitative definitions of what constitutes facts worth measuring is as important as the production of the numbers and measures themselves--perhaps more important.

And so, the University of Nebraska lost its position in the AAU not with the vote but with the adoption of standards of measure that, while purporting to be neutral and realistic measures of quality, actually served to mask substantive decisions about the composition of a university worthy of membership.  That is, the standards masked the decision that AAU universities would have to favor large science and engineering departments and treat medical schools in a particular way.  Those universities that conform will be rewarded, and the quantitative measures will serve as proof both of conformity and of entitlement to the reward.  

These sorts of issues are at the center of law and governance as well.  As law and governance systems move from systems grounded in direction to systems grounded in objectives, they become more and more dependent on measurement for implementation.  The issues that shape the measurement of compliance with objectives and the misuse of measurement to shape objectives (the subject of the first piece), and the issues that substitute the choice of measure for substantive rule (the subject of the second), are as relevant to regulatory systems as they are to the travails of grant recipients and the fight over membership in organizations.  
