Jamie P. Merisotis, President, Lumina Foundation for Education
Center for Higher Education Development Symposium, Berlin, Germany

Thank you, Professor Müller-Böling, for the opportunity to share my thoughts on the complex and controversial issue of achieving transparency in higher education through the use of university rankings. I’m particularly grateful for my placement on today’s conference agenda. As the keynote speaker for this first session, I have the privilege of being the first to acknowledge what every subsequent speaker surely will echo—the enormous contributions that the Center for Higher Education Development (CHE), under your leadership, has made to higher education in Germany, across Europe, indeed around the world over the last decade and a half. The CHE’s groundbreaking work covers an impressive array of issues, from governance to teaching to research. As an American, I’m particularly grateful for your efforts in taking a dubious American export—the concept of measuring and ranking the performance of higher education institutions—and vastly improving it. No ranking system is perfect. All systems are works in progress. But the CHE system, thanks to your efforts, is the state of the art.

Our challenge now is to build on what you have accomplished.

I describe this as our challenge because we all have a stake in the outcome. Americans may be credited with the first generation of work, and Germans clearly improved on that model and produced a second generation; now we must collectively think about a third generation. What should this third generation look like? What have we and our colleagues around the world learned from the ways we have created categories, collected data, measured performance, and compiled rankings? Where have we succeeded? Where have we failed? How can we benefit from these experiences and, together, build something better?

Of course, some might argue that we shouldn’t try. They might suggest that the best course of action is to dispense with rankings altogether. But the time for that discussion has long passed. Ranking systems now exist in more than two dozen nations and include many variations and improvements that go well beyond the first-generation models based on input measures and reputation factors. Whether we like it or not, ranking systems are here to stay. Over the years, we’ve seen the reaction to rankings evolve from outright disdain, to grudging resignation, to a growing acceptance of their value. As one who has long been skeptical of rankings, I now recognize that they serve a purpose, and that purpose is likely to expand in the future.

For that reason, we must get them right.

The American consumer public has embraced rankings for more than two decades. We are a nation of list makers. We love competition, and we use superlatives to express our choices. We like knowing who is first, what is best, which is number one, and who are the runners-up—in descending order. We rank the performance of everything: our products, our companies, our corporate leaders, and our American idols. Each year the media tell us the 10 best places to retire, the 25 most reliable cars to drive, and the 50 best investments to make.

So it’s no surprise that when U.S. News and World Report began researching and ranking America’s best colleges and universities in the early 1980s, consumers responded positively. When universities achieve favorable rankings, they post the news on their Web sites; they create banners to decorate their campuses; and they remind their alumni of it in their solicitation letters. Virtually every leader of a four-year institution in America knows that the issue of U.S. News and World Report that announces America’s best colleges and universities is that magazine’s best-selling issue—out-ranking all others. It’s Number One.

The public’s demand for rankings is one reason that my personal response has evolved. It’s not that I have given in to the inevitability of a phenomenon. Indeed, I still harbor serious doubts about the ways in which most rankings are conducted. Nevertheless, I have come to understand that the practice of evaluating institutions and ranking them—especially when arranging them by tiers, as the CHE ranking does—has value. Why? Because the practice helps to define an emerging paradigm for higher education accountability across many nations. Let me explain.

Higher education institutions answer to three powerful forces. These three forces work together to hold higher education institutions accountable. Each force sets performance expectations; each force influences institutional behavior; and each force has the potential to affect quality and to drive that quality upward. Taken together, the three forces form what might be called a three-legged stool of accountability.

  • The first force is the government, which provides oversight, passes legislation, and either allocates or withholds financial resources.
  • The second force is the higher education community, which conducts peer-based reviews, creates standards, and either grants or denies accreditation.
  • The third force is the marketplace, which supplies students, provides jobs for graduates, and either responds to or ignores requests for financial support.

Rankings offer information that is useful to all three powerful forces. Exactly how useful is the information? We don’t really know. We’ve speculated that policymakers take rankings into consideration when they allocate funds. But to what extent, we’re not sure. We’ve heard about universities that have achieved a higher ranking after they’ve initiated positive changes. We’ve wondered if the better ranking was the motivation for the changes or the reward for the changes. We can only guess. We have anecdotal evidence that indicates that some businesses take rankings into account when they weigh the academic credentials of job candidates. How much weight does an alma mater have? That’s a matter of conjecture.

Without quantifiable data, we have no way of knowing the precise impact that rankings have on the decisions of government policymakers, the higher education community, or the marketplace. But I’m pleased to say that this is likely to change in the future, at least to some extent, with the new kinds of information that are being developed.

The quality and reliability of data are probably the most important elements of the emerging accountability systems in many nations. In the relatively new field of ranking, there is a growing body of literature that addresses these data issues. Much of the best of that work has been done by a relatively small number of individuals whom I’ve had the privilege of convening over the last few years under the auspices of the International Rankings Expert Group (IREG). Research currently in progress in several countries has the potential to replace speculation with facts. These facts will guide us in the future as we set out to design that next generation of rankings. They will add new insights to the global discussions that began six years ago in Warsaw, Poland, were expanded two years later in Washington, D.C., continued right here in Berlin in 2006, and achieved what may be the high point of global participation at the IREG conference held late last year in Shanghai, China.

One of the most interesting studies of ranking was conducted last year on behalf of the Organization for Economic Cooperation and Development (OECD) by Ellen Hazelkorn from the Dublin Institute of Technology. Her research is one of the few studies that takes a critical view of rankings globally. Dr. Hazelkorn’s research examined the effects that rankings have on the behavior of higher education institutions and showed that universities and colleges often react to rankings in important and meaningful ways. The rankings drive them to change curricula, choose administrative structures, and modify student services. So clearly rankings are having an important effect on the ways in which higher education institutions function.

I am particularly close to one research project that got under way last August and will conclude next year. This project is examining the effects of ranking on other important actors in the higher education system. For example, how do U.S. government policymakers at the state and federal levels use rankings as they evaluate universities and create policies that affect higher education institutions?

As you know, here in Europe rankings have gotten the attention of government leaders in some interesting ways. For example, the French Minister for Higher Education and Research recently pointed out that France, which assumes the rotating Presidency of the European Union on July 1, will place a high priority on quality assurance of higher education programs across Europe. The Minister explained that this drive toward quality assurance will entail a thorough analysis of international indicators of higher education, as well as a focus on the impact of international rankings. The Minister expressed her aspiration that there will be progress toward defining ranking criteria that are better adapted to European higher education and that these discussions will promote dialogue about possible Europe-wide rankings.

Through in-depth case studies, the research being conducted in the United States also will shed further light on the degree to which rankings influence the decisions that university leaders make on their own campuses. Findings of the research project will be widely disseminated in a series of products that will include policy reports, papers, and issue briefs. These products will become part of an online clearinghouse for research related to ranking assessment and quality assurance.

I am intensely interested in this project on the influence of rankings on both policymakers and institutions, for several reasons. First, the research is supported by a grant from Lumina Foundation for Education, the organization that I now serve as president. I can take no credit for Lumina’s decision to award the grant because that decision was made many months before I joined the Lumina staff. I can only add my stamp of approval. I believe the money was well spent.

The second reason I’m interested in the results of the project is that the research is being conducted by the Institute for Higher Education Policy, the recipient of the Lumina grant and the organization that I founded 15 years ago. I can take no credit for the quality of the data that will emerge from the project because I left the Institute five months after the work began. I can only say that I am well acquainted with the team that is overseeing the inquiry, and I have great confidence in the ability of its members.

The third and most important reason I’m interested in this project is that its findings will help move us past what is largely anecdotal evidence. We will have solid data that will give us a better understanding of the power of rankings—a power that goes beyond influencing students who are trying to decide which institution to attend. That influence has probably been overstated in the past. We already know that some students dismiss the value of rankings altogether. We became aware of this sentiment two years ago when a survey conducted here in Europe showed that more than half of the student respondents expressed no interest in institutional rankings. The deciding factors that led those students to select one university over another were reputation and prestige.

Of course, indirectly these students were affected by rankings because to some degree, rankings drive reputation—and to some degree, reputation drives rankings. This is especially true in the United States where reputation is often part of the assessment criteria. As you know, one of the many valuable aspects of the CHE ranking is that it minimizes the direct influence of reputation on the rankings.

We’ve seen evidence of what can happen when reputation overwhelms other more dependable survey criteria such as outcomes. The results can be both humorous and audacious. As an example: Princeton University, one of the great American research universities, is globally recognized for having an excellent reputation among its peers and is highly respected in the marketplace. A recent survey ranked Princeton as having the eighth best business school in the United States. There was just one problem: Princeton University doesn’t have a business school.

Ranking systems, if done right, have the potential to motivate positive change. If used appropriately, they can promote healthy competition among higher education institutions. While I agree with those who have argued that the best strategy for the future of the increasingly global higher education system is collaboration, we have seen countless examples of how the public benefits when there is competition for top positions in the marketplace. These examples are particularly plentiful in the business world. We know of companies that have worked hard to improve the quality of their products in an effort to boost their reputations and, as a result, earn a larger share of the market.

I’m reminded of a legendary advertising rivalry between two of the most dominant car-rental companies in America. The rivalry dates back to 1962, when Hertz was ranked number one and Avis was a struggling competitor in a cluster of runners-up. Avis made two strategic moves that catapulted it out of the pack and into fierce competition for the top spot. First, it concentrated on improving the quality of its product. Then, after it had met and surpassed the performance expectations of the marketplace, it went public with its now-famous slogan. It admitted that Hertz was Number One and that Avis was Number Two. But, insisted the company, as Number Two, “We try harder.”

When higher education institutions try harder, they get better. And when they get better, everyone benefits. Ranking systems can serve as incentives, and as incentives they can drive change and improve quality.

Let me offer some brief recommendations about how rankings might continue to evolve to that third generation standard I mentioned earlier. One way is really quite simple, and that is to implement the minimum standards outlined in the Berlin Principles on Ranking of Higher Education Institutions. The Berlin Principles, which were developed as part of the 2006 IREG meeting, are certainly not a gold standard or immutable set of principles. Instead, they represent a preliminary attempt to hold up the best examples of quality ranking and, conversely, to shine an unfavorable light on the most egregious of the poorly done rankings.

Another important part of the evolution of rankings will be an expanded use of student learning outcomes as a key part of the rankings. One challenge for ranking systems in general is that the quality of the data systems often dictates what can be included in the rankings. Student learning outcomes, which represent some of the most important byproducts of the higher education enterprise, are beginning to be measured in different ways across nations. There is even a somewhat controversial—though I think terribly important—effort to determine if learning outcomes can be measured in common ways across nations, under the auspices of the OECD.

In addition, I would make a plea to those conducting rankings to include other factors that take into account who is being educated. With the increasing massification of higher education taking place around the world, we must incorporate equity or diversity measures into the rankings to assess whether all qualified citizens of a nation-state are being educated and in what ways.

In my position as president of Lumina Foundation for Education, I am passionately committed to improving the quality of higher education. I see rankings as tools that can help accomplish this, but only if done well. At Lumina Foundation we have a tightly focused mission. We are a private, independent, grant-making foundation that exists solely to help more students enroll in higher education institutions and persist through graduation. We want students to leave our higher education institutions with the best possible postsecondary credentials. These credentials should include the knowledge, skills, and abilities to help graduates function in a rapidly changing economic and social context.

At Lumina we are very specific about what we plan to accomplish in the next decade and a half. By the year 2025, we want 60 percent of Americans to hold degrees from higher education institutions. This translates into 16 million more university graduates than we have today. We are motivated by a ranking—in this case, the OECD rankings that show educational attainment rates across countries. Right now, the United States ranks 10th in the proportion of college graduates between the ages of 25 and 34. Tenth. We used to be Number One in the world. Clearly global forces are pushing all countries to higher levels of attainment, and the United States must respond to this changing context.

Each year our foundation awards about $50 million in grants to organizations, people, and programs that have the potential for helping us to meet our ambitious goals. Over the years we’ve formed partnerships with educators, researchers, and policymakers who are as determined as we are to reduce the obstacles that get in the way of students attaining a high-quality education.

We are committed to change, and we realize that to bring about change we need to utilize all the tools available to us. Rankings are one such tool. But they need to get better—much better. Among the many improvements needed are more and better-quality data, especially as they relate to outcomes; more meaningful measures used as factors in the rankings; and less of a focus on annual, ordinal ranking.

And so we come full circle to the beginning of my remarks today. I am certainly not an enthusiastic fan of rankings. My primary interest has been, and continues to be, how to make them better. Nevertheless, rankings clearly have emerged as an important part of the accountability marketplace and need to be a more frequent focus of the dialogues about higher education quality. I believe we avoid these discussions at our own peril. We need to learn what rankings offer that other information systems do not.

If rankings become more credible, they will earn the respect and the support of those who doubt their value. They will become worthy of the trust that the marketplace already places in them. They will evolve into more dependable tools that we can use to achieve our shared goal of improving higher education around the world. Lumina Foundation looks forward to providing leadership in a collaborative effort to create the next generation of rankings, in large measure by helping to improve the quality of the data that are used to develop these rankings.

Thank you for the privilege of being here on this important occasion and for the chance to show my great admiration for the work of Professor Müller-Böling and the CHE. Professor Müller-Böling, may the next phase of your life achieve the top-tier ranking in all categories that are important—health, happiness, and the satisfaction of knowing that you have made an indelible impact on the global debates about higher education accountability and quality. We are indebted to you for the important contributions you have made to the future of higher education.
