It’s always pleasant to find confirmation of one of my favorite targets for scorn in the “legitimate” media. Joe Nocera’s NYTimes [column](http://www.nytimes.com/2012/09/29/opinion/nocera-the-silly-list-everyone-cares-about.html?_r=0) on September 28th took the latest U.S. News & World Report college rankings to task, arguing, as I often have, that the methodology doesn’t justify the precision of the numbers. If I didn’t know better, I might guess he was cribbing from one of my earlier posts on the folly of these rankings, whether of colleges, green companies, or sustainable products. Here are his own words. I couldn’t have said it better.
> The U.S. News & World Report’s annual college rankings came out earlier this month and — knock me over with a feather! — Harvard and Princeton were tied for first… Followed by Yale… Followed by Columbia.
> It’s not that these aren’t great universities. But c’mon. Can you really say with any precision that Princeton is “better” than Columbia? That the Massachusetts Institute of Technology (No. 6) is better than the California Institute of Technology (No. 10)? That Tufts (No. 28) is better than Brandeis (No. 33)?
There may not even be a significant real difference between Harvard (#1) and Tufts (#28) as far as prospective students’ choices are concerned. Nocera answers his own question as to whether schools close together on the list really differ from one another.
> Of course not. U.S. News likes to claim that it uses rigorous methodology, but, honestly, it’s just a list put together by magazine editors. The whole exercise is a little silly. Or rather, it would be if it weren’t so pernicious.
I agree with his assessment that this and similar rankings are bad for us. They become an end in themselves and encourage gaming the system just to score high, which is easy to do by focusing on the factors the methodology uses. Rich schools have a great advantage, but they do not necessarily produce better outcomes than those lacking the resources. Nocera writes:
> And they imbue these rankings with an authority that is largely unjustified. Universities that want to game the rankings can easily do so. U.S. News cares a lot about how much money a school raises and how much it spends: on faculty; on small classes; on facilities; and so on. It cares about how selective the admissions process is.
But what does selectivity mean? It means that the schools high on the list will attract the largest number of applicants and, given limited class size, will admit the smallest percentage. We have a classic reinforcing loop here: more students will apply to the high rankers, enhancing each school’s selectivity this year, raising its ranking (possibly), and attracting even more students next year. And so on and so on.
Nocera continues to expose more of the mischief inherent in these rankings; his whole column is well worth reading. My concerns are not about colleges, although my grandchildren are at ages when college admission will come in the not-too-distant future. I see the same issue with similar rankings of the top 100 green companies. The rankings are taken as indicators of some real distinction in the companies’ impacts on the world, a completely unjustified conclusion. Even if the rankers are open and transparent about the methodology used, few readers ever bother to get beyond the numbers in the list. All such lists suffer from a methodological problem common to any composite ranking that combines more than a single factor. A ranking that simply orders a set of portable computers by weight can be interpreted directly, but it means little unless weight is your only consideration in making a choice.
But that is rarely what people who consult rankings are looking for; they want some composite rating based on a number of factors that are important to them. But how important to them? Individuals have different preferences, or utilities, for the factors. I might prefer speed (60) over weight (40) when it comes to buying a laptop. The more factors are involved in the ranking, the more preferences (or weights) have to be used in calculating a single composite index. But I do not get to choose the weights; the rankers do, as U.S. News & World Report does for the college rankings. The chance that they use exactly my preferences is virtually nil, and so the results cannot mean much to me. There is no practical way for me to tell whether the ordering of the results matches what it would be under my own weightings. In theory, I could take the raw data, apply my own weights, and create John Ehrenfeld’s version of the U.S. News & World Report rankings, but I am very unlikely to do so.
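The arithmetic behind such a composite index is simple enough to sketch, and it makes the point concrete: the same raw scores can produce opposite orderings under different weights. This is a minimal, hypothetical illustration (the laptops, scores, and weights are invented, not anyone’s actual methodology), using my 60/40 preference for speed over weight:

```python
# Hypothetical raw factor scores for two laptops (higher is better;
# "weight" here means a portability score, so lighter = higher).
laptops = {
    "A": {"speed": 90, "weight": 30},  # fast but heavy
    "B": {"speed": 40, "weight": 90},  # slow but light
}

def composite(scores, weights):
    """Weighted sum of factor scores; weights sum to 1."""
    return sum(weights[factor] * scores[factor] for factor in weights)

# My preferences: speed 60, weight 40 (normalized to fractions).
mine = {"speed": 0.6, "weight": 0.4}
# A ranker who happens to value portability more heavily.
ranker = {"speed": 0.4, "weight": 0.6}

rank_mine = sorted(laptops, key=lambda l: composite(laptops[l], mine), reverse=True)
rank_ranker = sorted(laptops, key=lambda l: composite(laptops[l], ranker), reverse=True)

print(rank_mine)    # → ['A', 'B']  (A scores 66 vs. B's 60 under my weights)
print(rank_ranker)  # → ['B', 'A']  (B scores 70 vs. A's 54 under the ranker's)
```

Nothing about the underlying data changed; only the weights did, and the “winner” flipped. With dozens of factors, as in the college rankings, the published order is just one of a vast number of equally defensible orderings.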
So what can/do the rankings mean? They mean sales for U.S. News & World Report. They are used by the schools for public relations and fund-raising. They are [mis]used by graduating students to guide their choices. In a political economy that is moving ever closer to a pure market system, they distort the economic calculus used by these applicants, by potential donors, and by faculty candidates. Some argue that it is better to have faulty data than none at all. I disagree. After all, a stopped clock is right twice a day, while one running a few minutes slow or fast is never right. Without accurate data to guide choices, (economic) actors are forced to find alternatives to these decision processes, relying more directly on their own preferences among whatever factors they consider important. This argument applies almost exactly to the case of a consumer considering the purchase of some product who wants to buy the “greenest” item on the shelf. Both consumers (the college-bound student and the supermarket customer) face the same dilemma: even after making a choice based on the ranking provided, there is no way whatsoever to know whether it was the right one.