In Defense of BusinessWeek's MBA Rankings

Posted by: Louis Lavelle on October 23, 2008

Yesterday, my colleague Francesca Di Meglio blogged about a new study of BusinessWeek’s full-time MBA rankings by Frederick P. Morgeson, a professor of management at the Eli Broad Graduate School of Management at Michigan State University.

In the interest of generating debate on the rankings, I want to share with everyone an email I sent to Prof. Morgeson and his co-author on the study, doctoral candidate Jennifer Nahrgang, back in April when I first learned about the paper. Neither Prof. Morgeson nor Nahrgang ever responded, but perhaps they’ll add their thoughts in the comments section of this post.

Before you read it, you should take a look at Francesca’s post, which does a fine job of summarizing the main points of the study. You can also read the press release the school put out about the research. Or if you happen to have a subscription to Academy of Management Learning & Education, you can read the actual paper.

The main points of the study seem to be that since the BusinessWeek rankings change very little from year to year they must be flawed; that the rankings encourage business schools to cheat and to divert resources; and that the rankings fail to convey information about how far apart ranked schools are in terms of quality. To show how sensitive our methodology is to small year-to-year changes, the authors attempted to recalculate the 2004 ranking using the student responses to 15 survey questions published in the magazine that year. Finally, the authors propose a rating system (as opposed to a ranking). I address each of these points in my response.

None of these points are very new—in fact, I could have borrowed the paper’s title for the headline on this post: Same as It Ever Was: Ivory Tower Critiques BusinessWeek’s Rankings for the 112th Time and Still Gets It Wrong.

I want to apologize in advance for the snarky tone of my response, and for the references to Vanderbilt, Rochester, Maryland, and Emory—which are all fine schools and mentioned here only to make a point.

Also, I want to point out where Michigan State ranks: No. 29 in BusinessWeek’s 2006 ranking. Somehow these critiques of our rankings never come from schools at the top of the list. The last anti-ranking broadside came in 2005 from USC B-school profs Harry and Linda DeAngelo and the University of Rochester’s Jerold Zimmerman—after USC fell 10 spots in the BusinessWeek ranking, to 27, and Rochester came in at No. 29. You can read my story, their study, or both.

I know what you’re thinking: What about Harvard and Wharton, which pulled out of the BusinessWeek rankings in a huff back in 2004? To which I say this: Harvard and Wharton, which we continue to rank without their active participation in the process, never had the audacity to dress up their philosophical objections to the rankings and try to pass them off as research. Rankings dogma is fine; just label it “rankings dogma.”

My email response to the latest criticism from Morgeson and Nahrgang follows after the jump.

Here's the text of my email:

Interesting paper, but like most of what passes for scholarship on the subject of business school rankings, wrong in its conclusions.

You seem to think that the relatively unchanging nature of the BusinessWeek rankings from year to year makes them "fundamentally flawed," and that the inability of new schools to break into the rankings reveals a hidden "reputational component" that skews the results. Is it unfair that older schools and schools that have been ranked highly before receive a reputational boost that makes it more likely they'll be ranked highly again? Maybe so, but that reputational boost you're so quick to dismiss has a real dollar value in terms of starting salaries, in terms of the quality of the student body, and in terms of resources--it counts for something in our ranking because it counts for something in real life.

You say the rankings encourage business schools to game the rankings and divert resources. I've heard this many times, and it's still not accurate. The rankings encourage schools to provide the best education possible, period. If business schools would just do that--no concierge services for recruiters, no fancy new digs for MBA students, just the best education money can buy--they would have students and recruiters who are both happy and satisfied, and they would quickly find themselves at the top of our rankings. The fact that some b-schools prefer to cheat and/or take short-cuts in pursuit of a high ranking isn't the fault of the ranking any more than tax fraud is the fault of the IRS.

Your discussion of how rankings fail to convey information about how far apart ranked schools are in terms of quality reveals ignorance about how media rankings in general and BusinessWeek rankings in particular are conducted. It's true that we have not provided that type of information in our full-time MBA rankings, but we do in fact provide it in our rankings of undergraduate b-school programs. It's not that "ranking systems do not allow for such determinations"--they very much do, and we know exactly what separates the No. 1 school from the No. 2 school from the No. 28 school. It's true that these differences are sometimes small, perhaps even immaterial in select cases. But overall the differences are huge. In our undergraduate ranking, we ranked 96 schools. The "index number" that we use to determine the final ranking--the standardized sum of all our methodology elements--is set at 100 for the No. 1 school; the index number for the No. 96-ranked school was 24. These are hardly "microscopic" differences.
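To make the index arithmetic concrete, here is a minimal sketch of how such an index number could be computed. The schools, component scores, weights, and the simple min-max standardization below are all invented for illustration; they are not our actual figures or methodology.

    # Illustrative sketch only: how a ranking "index number" might be built.
    # Component names, scores, and weights are made up for this example.
    schools = {
        "School A": {"students": 8.9, "recruiters": 9.2, "intellectual_capital": 7.5},
        "School B": {"students": 8.1, "recruiters": 7.8, "intellectual_capital": 8.0},
        "School C": {"students": 6.5, "recruiters": 6.9, "intellectual_capital": 5.5},
    }
    weights = {"students": 0.45, "recruiters": 0.45, "intellectual_capital": 0.10}  # assumed

    def rescale(values):
        """Standardize a list of raw component scores to the range [0, 1]."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    names = list(schools)
    standardized = {
        c: dict(zip(names, rescale([schools[n][c] for n in names])))
        for c in weights
    }
    # Weighted sum of the standardized components for each school.
    raw = {n: sum(weights[c] * standardized[c][n] for c in weights) for n in names}

    # Pin the top school's index at 100; every other school is expressed relative to it.
    top = max(raw.values())
    index = {n: round(100 * raw[n] / top, 1) for n in names}
    print(sorted(index.items(), key=lambda kv: -kv[1]))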

By the way, your attempt to recalculate the 2004 ranking using the student responses to the 15 questions we published in the magazine that year is ludicrous. In your effort to show how sensitive our ranking methodology is to "essentially meaningless" changes, you eliminated 93% of the rankings methodology--you focused on one-third of one student survey, ignoring the remaining two-thirds of that survey, two other student surveys (2000 and 2002), the recruiter survey, and the intellectual capital component. I'm sorry, but 93% is not meaningless.
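For what it's worth, a figure in that neighborhood can be reproduced with back-of-the-envelope arithmetic. The weights in the sketch below are assumptions made for illustration, not numbers spelled out here: graduate surveys at 45% of the ranking, split 50/25/25 across the 2004, 2002, and 2000 cohorts, a recruiter survey at 45%, and intellectual capital at 10%.

    # Back-of-the-envelope check of the "93%" claim, under assumed weights.
    # These weights are illustrative assumptions, not the published methodology.
    student_component = 0.45     # assumed weight of all graduate surveys combined
    current_survey_share = 0.50  # assumed share of the 2004 cohort within that
    questions_used = 1 / 3       # "one-third of one student survey"

    used = student_component * current_survey_share * questions_used
    print(f"share of methodology used: {used:.1%}")        # ~7.5%
    print(f"share of methodology ignored: {1 - used:.1%}")  # ~92.5%, i.e. roughly 93%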

As for your solution, it assumes facts that are not only not in evidence, but facts that are contradicted by the evidence. You argue for a rating system because you're starting from the assumption that "there are essentially no differences among the programs." Really? The way I figure it, all those recruiters bypassing Vanderbilt and Rochester on their way to Chicago and Wharton must have a reason. And all those students with ridiculously high GMAT scores and extensive work experience...are they heading to Maryland and Emory, or MIT and Stanford? The market speaks, folks, all you have to do is listen.

Finally, this entire paper misconstrues the purpose of the BusinessWeek rankings. It's difficult to break into the top 30 because it should be difficult to break into the top 30. This isn't a game where every school gets to be queen for a day. It's not part of the b-school marketing machine. It's a serious journalistic undertaking, an independent source of information that students can consult as they're deciding where to get their MBAs. That's why I don't think your solutions will work. The b-schools can construct their own rating system, but it won't be viewed as "more credible" by anyone who values an independent voice. And if anyone is waiting for an invitation from BusinessWeek to help decide our ranking criteria, they shouldn't hold their breath.

Reader Comments

Sunil

March 6, 2010 11:31 AM

The argument that reputation counts for something because it's captured in salary, quality of students, etc. is a silly one. Capture it either in reputation, or in these other "drivers of reputation" that you reference. Don't double count them.

MarketRank

November 16, 2010 12:59 AM

I think the BusinessWeek rankings are flawed, but not for the reasons stated in the study. The best ranking would be the one that takes into account the most information, which would be the one closest to being driven by the votes of those in the market for an MBA. People vote with their feet and wallets by choosing from the schools they are accepted to. That is why I think yield is the best measure of a school, and by that measure Harvard takes the cake by a mile. Stanford would rank second, and then the rest of the U.S. News top five would be clumped quite closely together below that.

BusinessWeek fails miserably at predicting where people would want to go. I think there are very few people who would take Chicago over Harvard, or both Wharton and Kellogg over Stanford, and the yield stats bear this out. I guess the other way to look at it is: how likely is it that school A is really better than school B if very few people would ever choose A over B when accepted to both? Assuming that the market incorrectly values MBAs A and B would be to assume that the vast majority of people are ignorant or irrational (or both). Maybe a handful are irrational, but to assume that the majority for almost the last 10 years has incorrectly valued Harvard over Chicago, Wharton, and Kellogg seems a bit far-fetched. BW has totally abandoned the wisdom of the crowds with this ranking.
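To put rough numbers on what I mean, here is a sketch of the yield and cross-admit comparison I'm describing. All figures are made up for illustration; they are not real admissions data.

    # Minimal sketch of the yield argument, using invented numbers.
    # Yield = matriculants / admitted; a cross-admit "win rate" compares two
    # schools among applicants admitted to both.
    admitted = {"School X": 1000, "School Y": 1000}
    enrolled = {"School X": 890, "School Y": 610}

    yield_rate = {s: enrolled[s] / admitted[s] for s in admitted}
    print(yield_rate)  # e.g. {'School X': 0.89, 'School Y': 0.61}

    # Cross-admit comparison: of applicants admitted to both X and Y,
    # how many chose each? (again, invented numbers)
    both_admits = 300
    chose_x = 255
    print(f"X wins {chose_x / both_admits:.0%} of head-to-head admits with Y")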


