Friday, November 23, 2012

Unranked


Last week U.S. News and World Report moved George Washington University to the Unranked category in the 2013 “America’s Best Colleges” rankings.  The move came in response to GW’s admission that it had misreported data regarding class rank for entering students, both on its website and to U.S. News (see previous post).

I must admit that my first response to the news was simple and without nuance. Does anyone care?  Should anyone care?  Does anyone think less of GW because it lost its U.S. News ranking (which is not the same thing as asking if anyone thinks less of GW because it misreported information)?

“Upon further review” (to borrow National Football League replay language), I realized that:

1)      “Simple and without nuance” makes for short blog posts;

2)      This incident is an opportunity for introspection not only for George Washington University but also for U.S. News and World Report.

I am hoping that the introspection is taking place at GW and guessing that it’s not at U.S. News.  I would be more than happy to have U.S. News and World Report hire me as a consultant to evaluate the methodology and assumptions underlying the rankings, but as a public service here are some questions and recommendations for consideration and introspection.

 

Question:  Should U.S. News rank colleges utilizing information that is unverified?

            U.S. News relies on information self-reported by colleges in compiling rankings.  The fact that there have been three incidents in 2012 alone involving reputable institutions misreporting data would suggest that the Honor System is not working.  One of the foundations of reputable journalism is fact-checking.

Recommendation:  Spend some of the considerable profit U.S. News makes from the rankings and hire an auditor to verify data.

 

Question:  Is it time to get rid of the peer assessment reputation survey?

            The U.S. News rankings began in 1983 as a magazine article (the rankings have outlived the magazine) and were based exclusively on a survey of college presidents.  I was a college faculty member at the time, and the joke on our campus was that no one was sure the President knew much about our campus, much less any others.  Through the years U.S. News has incorporated other data into the rankings, but the reputational survey remains the largest single component, accounting for 22.5% of the total.  Provosts, admissions deans, and high school counselors (I choose not to participate) are now surveyed in addition to Presidents.  How reliable is the peer assessment?  The response rate is relatively low and has been declining, reputations may lag behind realities, and Presidents and other officials have incentives to improve their institution’s ranking, which has led to revelations of several Presidents rating their own institutions higher than Harvard, Yale, and Princeton.

Recommendation:  Get rid of the peer assessment altogether or publish it as a separate ranking, making it clear that it reflects opinion rather than fact.

 

Question:  Do the input measures used by U.S. News tell us anything about output, a college’s success in educating students?

            U.S. News doesn’t pretend to measure educational quality, although that fact is hidden in the fine print, if mentioned at all.  Output is too hard to measure, and colleges are hesitant to share publicly their results on measures such as the Collegiate Learning Assessment and the National Survey of Student Engagement.  Is the assumption that selectivity equals quality valid?  Focusing on admissions statistics such as selectivity and SAT scores, along with other numbers like Alumni Giving, all of which can be manipulated, is like ranking “America’s Best Churches” without regard for spiritual growth.

Recommendation:  Start a conversation with college and other educational leaders about metrics that might measure how much education is taking place on campus.

 

Question:  Do the year-to-year changes in rankings reflect actual changes in institutional quality or tweaks to the methodology to produce different rankings?

            I don’t have an answer to that question, just a suspicion.
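
To make that suspicion concrete, here is a minimal sketch, in Python, of how a weighted-composite ranking behaves when the weights change.  Everything in it is invented for illustration except the 22.5% peer-assessment weight mentioned above; this is not U.S. News’s actual formula, just a demonstration that the same schools, with the same underlying numbers, can finish in a different order when only the weights are tweaked.

```python
# A minimal sketch of a weighted-composite ranking (not U.S. News's actual formula).
# The schools, metric scores, and weight values are invented for illustration;
# only the 22.5% peer-assessment weight comes from the published methodology.

def rank(schools, weights):
    """Order school names by their weighted composite score, highest first."""
    scores = {
        name: sum(weight * metrics[metric] for metric, weight in weights.items())
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

schools = {
    "College A": {"peer": 90, "selectivity": 70, "giving": 80},
    "College B": {"peer": 75, "selectivity": 95, "giving": 60},
    "College C": {"peer": 80, "selectivity": 85, "giving": 75},
}

original = {"peer": 0.225, "selectivity": 0.450, "giving": 0.325}
tweaked  = {"peer": 0.450, "selectivity": 0.300, "giving": 0.250}

print(rank(schools, original))  # ['College C', 'College B', 'College A']
print(rank(schools, tweaked))   # ['College A', 'College C', 'College B']
```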

 

Question:  Does U.S. News want to be in the news business or the entertainment business?

            Years ago I attended a NACAC conference session where Bob Morse, the man behind the numbers for the U.S. News rankings and author of the “Morse Code” blog, described the rankings as a “good product.”  He took umbrage when I asked him if it was good journalism.  That question is just as relevant today.  Is U.S. News reporting the news or making news?

            The plethora of stories each fall about the new rankings would suggest that U.S. News has become a newsmaker, perhaps even a trendsetter, rather than a news outlet.  In fairness to U.S. News, though, that is consistent with the direction that journalism, and particularly television journalism, has taken.  Today journalists are celebrities who socialize with those they are supposed to be covering, and career advancement is tied more to Q rating or ability as an entertainer than to the ability to sniff out news.

            I would argue that U.S. News chose entertainment over news as early as 1983, long before it became clear how closely the U.S. News brand would become tied to college rankings.  The original rankings article included only top-ten lists in the National Universities and National Liberal Arts Colleges categories and ignored the real news story.  The #10 school in the National Universities category was Brown.  The fine print showed that Brown was considered one of the top ten schools by only 25% of those responding, meaning that 75% didn’t think Brown belonged in the top ten.  The real news from the survey was the diversity of quality schools in American higher education and how little agreement there was about the top schools beyond the first three or four.

Recommendation:  Add a disclaimer to the rankings, either “For Entertainment Purposes Only” or “Your Results May Vary.”

 

Question:  Do the rankings help students and parents make more thoughtful college decisions?

            U.S. News states that “The intangibles that make up the college experience can’t be measured by a series of data points,” then proceeds to rank America’s “best” colleges based on a series of data points.  The U.S. News rankings are part of a balanced college search the way Sugar Smacks or Count Chocula are part of a balanced breakfast.  The balance comes from everything other than the product.

            There is a lot of helpful information in “America’s Best Colleges,” ranging from the topical articles to the use of Carnegie categories to divide schools.  The attempt to rank colleges negates most of those benefits.  College rankings convey a precision (We’re #6!) that leads students and parents away from thinking about the quality of the college experience.  They also simplify a process that should be both complex and personal.

Recommendation:  Expand the “Unranked” category to include not only George Washington University but all other colleges and universities as well.   

Friday, November 9, 2012

Data and Voice


The number 3 carries with it a power and significance that few other numbers possess.  In Christian theology there is the Trinity, and in hockey there is the hat trick.  There are three wise men, three musketeers, three tenors, three little pigs, and three stooges.  In baseball you have three strikes and three outs.

There is also an old saying that bad things happen in threes.  Those of us in the college admissions profession had better hope that bad things happen only in threes, after the news this morning that for the third time this year a prominent institution has admitted to inflating and misreporting admissions data.

Today’s culprit is George Washington University, which has updated the famous quote from its namesake, “I cannot tell a lie,” to “I cannot tell a lie (any longer).”  According to today’s Washington Post and the Chronicle of Higher Education, an internal investigation showed that GW had been submitting incorrect data regarding class rank.  For the current year GW reported that 78% of incoming freshmen were in the top 10% of their high school classes, when the actual number was 58%.  The discrepancy comes from the fact that rank was estimated for some outstanding students coming from schools that do not provide class rank.  According to the Chronicle, only 38% of GW freshmen had a class rank reported.
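
As a back-of-the-envelope illustration of how much that headline number depends on what is assumed about the students with no reported rank, consider the following sketch.  Only the 38% figure comes from the Chronicle’s reporting; the class size and the per-group top-10% rates are assumptions invented for the arithmetic, chosen so the totals land near the reported 78% and the corrected 58%.

```python
# Toy arithmetic, not GW's actual data: only the 38% "rank reported" figure comes
# from the Chronicle. The point is how much the headline top-10% percentage swings
# depending on how the unranked majority is "estimated."

freshmen = 1000                        # hypothetical class size
ranked = round(0.38 * freshmen)        # 38% arrived with an actual class rank
unranked = freshmen - ranked           # the rest came from schools that don't rank

top10_ranked = round(0.60 * ranked)    # assumed top-10% rate among ranked students
generous = round(0.90 * unranked)      # optimistic estimate for the unranked group
cautious = round(0.55 * unranked)      # more conservative estimate for the same group

print((top10_ranked + generous) / freshmen)   # 0.786 -- in the neighborhood of the 78% reported
print((top10_ranked + cautious) / freshmen)   # 0.569 -- in the neighborhood of the corrected 58%
```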

I don’t find the revelations about GW quite as egregious as the manipulation of SAT scores reported earlier this year for Claremont McKenna and Emory.  (I will follow up this post with some thoughts about the Emory situation shortly.) I concur with my St. Christopher’s colleague Scott Mayer, who said this morning upon learning about GW, “It’s a shame that schools get into trouble for doing stupid stuff.”  Implicit in his comment is that the real shame is doing the stupid stuff in the first place.

Calling it stupid is not excusing it or lessening the judgment that it’s wrong.  The estimating of class rank was not accidental but deliberate, a form of institutional cosmetic surgery designed to make GW look more attractive.  It’s also not clear whether rank was estimated for every student from schools that don’t rank or only for those likely to raise the percentage.  The latter would make the deception more ethically offensive.

The broader, recurring question for all of these cases is how meaningful these measures of institutional “quality” or “prestige” really are.  What do admit rate or yield or mean SAT scores really tell us, and is whatever meaning they have undermined by how easily they can be manipulated?  In the case of class rank, is there any point in reporting what percentage of freshmen are in the top 10% of the class when two-thirds of applicants come from schools that don’t rank?

A year ago, at the NACAC Conference in New Orleans, I was a presenter on a panel devoted to “College Admission and Counseling in the 21st Century.”  My fellow panelists were Jerry Lucido, Executive Director of the Center for Enrollment Research, Policy, and Practice at the University of Southern California, and Lee Coffin, Dean of Undergraduate Admissions at Tufts.

Jerry’s remarks at the session referenced an article he wrote for the Chronicle of Higher Education in January 2011.  That article called on colleges to rethink the metrics they use.  He argued that colleges and universities would be better served by measuring their success in:

--educating first-generation and low-income students;

--habits of mind and skills developed;

--student participation in research, international experiences, community service, and interdisciplinary study.

Lee talked about the selective admissions process, and differentiated between “data” and “voice.”  He argued that voice is far more important because so many students have strong data (grades, scores).

That’s true for institutions as well as for students.  Colleges and universities focus on making their data look impressive but ignore or fail to find their voice (or the focus on manipulating image through data reflects their voice).  The college experience, as well as the college admissions business, is far more about voice than data.