Graduate Management News

Column

A Rating System to Define and Measure MBA Program Quality

Guest columnists Robert S. Rubin and Erich C. Dierdorff of DePaul University discuss a model they developed to define MBA program quality, funded by a 2010 research grant from GMAC’s Management Education Research Institute.

By Robert S. Rubin and Erich C. Dierdorff

The last decade has witnessed intense criticism of the value and veracity of media rankings. Despite these criticisms, business school stakeholders continue to exhibit an insatiable thirst for rankings as a means to differentiate program quality. Media rankings are not only popular; their influence on recruiter behavior, alumni donations, placement, applicant quality, and more is also undeniable.

MERInstitute grant winners Robert Rubin, left, and Erich Dierdorff recently presented their ratings model at GMAC’s Leadership Conference in San Francisco.

The prevailing logic of media rankings is that they objectively and comprehensively summarize an MBA program’s most fundamental product: educational or academic quality. Unfortunately, recent academic research shows that rankings are highly suspect as depictions of educational quality. Given the high stakes associated with MBA programs, as well as the dysfunctional behavior spawned by “playing the rankings game,” we argue that it is time to move from merely criticizing existing systems toward creating improved ones.

Creating a new evaluation system first requires a complete understanding of the essential criteria that constitute academic quality in graduate management education. Interestingly, such an important undertaking had yet to be conducted. Toward this end, along with our colleague Fred Morgeson of Michigan State University, we set out to systematically define what constitutes MBA program quality, assess the content of that quality model, and refine it. Our research stops short of implementing a new system; instead, it builds the necessary foundation for a rating (as opposed to ranking) system.

Building a Quality Model

Our research was informed by multiple sources of information and input. We began by extensively reviewing the academic literature pertaining to educational quality, both within business education and within post-secondary education in general. We also reviewed media sources and accreditation standards. From this initial effort we identified four dozen sources, from which we gleaned 314 unique criteria purported to indicate quality. These criteria were reviewed and sorted into broader clusters that were independently verified by subject matter experts. The result was nine quality meta-dimensions, which are further delineated by 24 more specific quality dimensions. Below is a content synopsis of the quality model (a brief illustrative sketch of its structure follows the list):

  1. Curriculum: (a) content, (b) delivery, and (c) program structure
  2. Faculty: (a) qualifications, (b) research, (c) teaching, and (d) overall quality
  3. Placement: (a) alumni network, (b) career services, and (c) corporate/community relations
  4. Reputation: (a) perceptions of program quality
  5. Student learning and outcomes: (a) personal competency development, (b) student career consequences, (c) economic outcomes, and (d) learning outcomes
  6. Institutional resources: (a) facilities, (b) financial resources, (c) investment in faculty, (d) tuition and fees, and (e) student support services
  7. Program/institution climate: (a) diversity and (b) educational environment
  8. Program student composition: (a) the overall makeup and quality of students
  9. Strategic focus: (a) the quality of the articulated institutional mission and strategic plan
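
For readers who want to work with the model concretely, the synopsis above can be represented as a simple nested data structure. The sketch below (in Python) is purely illustrative; the labels paraphrase the list above and are not an official encoding of the model.

```python
# Illustrative sketch: the quality model as a nested data structure.
# Meta-dimension and dimension labels paraphrase the synopsis above.
QUALITY_MODEL = {
    "Curriculum": ["content", "delivery", "program structure"],
    "Faculty": ["qualifications", "research", "teaching", "overall quality"],
    "Placement": ["alumni network", "career services",
                  "corporate/community relations"],
    "Reputation": ["perceptions of program quality"],
    "Student learning and outcomes": ["personal competency development",
                                      "student career consequences",
                                      "economic outcomes", "learning outcomes"],
    "Institutional resources": ["facilities", "financial resources",
                                "investment in faculty", "tuition and fees",
                                "student support services"],
    "Program/institution climate": ["diversity", "educational environment"],
    "Program student composition": ["overall makeup and quality of students"],
    "Strategic focus": ["quality of the articulated institutional mission "
                        "and strategic plan"],
}

# Sanity check: nine meta-dimensions summarizing 24 specific dimensions.
assert len(QUALITY_MODEL) == 9
assert sum(len(dims) for dims in QUALITY_MODEL.values()) == 24
```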

Understand First, Measure Second

One particularly challenging aspect of understanding academic quality is the sheer complexity of the task. This is especially evident when the intent is to avoid the so-called “criterion problem” that underlies any evaluative framework. Briefly, the criterion problem refers to the inherent difficulty of conceptualizing and measuring concepts that are multidimensional in nature. Criteria developed with little thought for the conceptual or ultimate criterion (i.e., quality) often ignore critical domains of the phenomenon of interest. Indeed, one critical misstep of any evaluative system is to begin evaluation without first developing a complete understanding of the criteria of interest.

Based on our research, most existing indicators of program quality lack the multidimensionality we uncovered. When we compared our model’s 24 quality dimensions against media rankings, we found the rankings to be highly deficient: BusinessWeek and Financial Times cover only nine or 10 of the 24 dimensions in their ranking criteria, and US News & World Report covers just four. Put bluntly, roughly 60 percent of the dimensions we uncovered are absent from even the most comprehensive rankings, and more than 80 percent from the least. Stakeholders should therefore be cautioned against overinterpreting the meaning of rankings.

At the same time, if stakeholders are interested in knowing about a few dimensions (e.g., student economic outcomes), rankings do appear to capture some of this information, albeit for a very limited number of institutions (and this says nothing about the measurement problems inherent in rankings). Yet even here, many of the criteria captured in rankings lie largely outside the direct control of a program. For example, our subject matter experts identified the most malleable aspects of MBA program quality as those falling under Curriculum and the least malleable as Reputation, the latter being a predominant factor driving most media rankings.

A New Way Forward: A Rating System of Quality

Our research also captured reactions to assessing quality from MBA policy makers (e.g., program administrators, associate deans). A few results were troubling, to say the least. For instance, only 9 percent of policy makers endorsed the idea that media rankings provide “good measures” of overall quality, yet 73 percent reported that their institutions pay close attention to rankings. Herein lies the tyranny of rankings: Academic stakeholders (and even many publishers of rankings) know that media rankings are deficient, but with few alternatives available, they feel stuck chasing the rankings. Indeed, we have heard firsthand from these policy makers about the dysfunctional byproducts of this tyranny (e.g., resource misallocation, “gaming” the system).

Using our multidimensional model of program quality, we believe we have built the foundation for the development of an alternative system based on ratings (akin to Consumer Reports). Such a system would offer some key advantages over rankings, including but not limited to:

  • Depiction of multidimensionality. Rating systems allow for depictions of “quality profiles” across schools. Stakeholders could evaluate the full breadth of quality by applying their own weighting to what matters most to them (a brief illustrative sketch follows this list). And ratings are by nature compensatory, meaning that the quality dimensions programs choose to emphasize will be more clearly depicted.
  • Focus on important differences (and similarities). Programs that are not substantially different in terms of quality ratings can be treated as functionally equivalent. Actual differences would be more clearly highlighted across quality criteria while taking certain “baseline” information, such as accreditation, into account.
  • Improvement of transparency. Rating systems are not limited to an arbitrary “best” set of institutions, which allows differences to emerge across a wide swath of programs. No longer would debates center on whether the school ranked 15th is really different from the school ranked 19th.
  • Reduction in “gaming” behavior. A rating system is likely to reduce the dysfunctional behavior involved in chasing or manipulating the most heavily weighted criteria in media rankings.
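
To make the weighting idea in the first advantage concrete, the sketch below shows how a compensatory, stakeholder-weighted rating profile might be computed. All program names, dimension subsets, weights, and scores are hypothetical, invented only to illustrate the mechanics; a real system would rate all 24 dimensions.

```python
# Hypothetical illustration of compensatory, stakeholder-weighted ratings.
# Programs, scores (1-5 scale), and weights are invented for this sketch.
ratings = {
    "Program A": {"Curriculum": 5, "Placement": 3, "Reputation": 2},
    "Program B": {"Curriculum": 3, "Placement": 4, "Reputation": 5},
}

# A curriculum-focused stakeholder's weights (chosen to sum to 1.0).
weights = {"Curriculum": 0.6, "Placement": 0.3, "Reputation": 0.1}

def weighted_score(profile, weights):
    """Compensatory aggregate: strength on one dimension offsets weakness on another."""
    return sum(weights[dim] * profile[dim] for dim in weights)

for program, profile in ratings.items():
    print(program, round(weighted_score(profile, weights), 2))
# Program A scores 4.1 and Program B 3.5 under these weights; a
# reputation-focused stakeholder (e.g., Reputation weighted 0.6) would
# reverse the ordering. Ratings expose the full profile of each program,
# whereas a single ranking collapses it to one privileged ordering.
```

The point of the sketch is the design choice, not the numbers: because each stakeholder supplies the weights, no single ordering of programs is privileged.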

Hard Work Ahead

A rating system would require significant involvement from institutions, in terms of both transparency and cooperation. Yet it is clear from our findings that the identified dimensions adequately represent quality in graduate management education and that media rankings fall well short. With this in mind, we firmly believe it is no longer viable to stand on the sidelines while inadequate systems dictate conversations about business school quality. Instead, it is time to take accountability seriously and apply the general philosophy of rigorous, evidence-based decision making that we so often espouse in schools of business. Put simply, we can and should restore control of quality to its rightful place and regain a positive outlook on our collective future. We hope our research has formed the basis for this to occur.

To request a copy of the full report, (Re)Defining MBA Program Quality: Toward a Comprehensive Model, by Robert S. Rubin and Erich C. Dierdorff of DePaul University and Frederick P. Morgeson of Michigan State University, contact Robert Rubin at rrubin@depaul.edu.
GMAC’s Management Education Research Institute recently announced two 2011 award recipients. Proposals for 2012 research grants will be accepted through October 8, 2011.
