As I was driving to work on the Friday that Christmas break began, I heard on the radio that the U.S. Department of Education was releasing its plan for federal college ratings that day. I had two immediate reactions, reflecting different parts of my DNA.
Putting my blogging hat on, my initial thought was that I needed to write a post analyzing the plan for Monday publication, but then I came to my senses and realized that no one would have the time or interest to read about federal college ratings (or any other issue I might write about) three days before Christmas.
The cynic/conspiracy theorist within me noted that a common government tactic is to “hide” bad news by releasing it late on a Friday afternoon, when the media and public are not paying attention. How bad must the plan be to justify “dropping” it on the Friday before Christmas?
I have read the plan and realize there was no sinister intent. The Obama administration had promised to release the plan in the fall of 2014, and the following Sunday happened to be the first day of winter.
There’s also no plan. A Chronicle of Higher Education article describes it as “heavy on possibilities and light on details.” That assessment is generous. At this point the Department of Education has only a vague idea of what the final version might look like. The release describes it as a college ratings “framework.” It might be more accurately described as a skeleton, one with enough bones missing that a casual observer would be hard-pressed to identify the animal.
The goal of measuring access and affordability is laudable. So is the decision to “avoid rankings and false precision” and to focus on outcomes rather than input factors. The question is how easy it is to actually measure those things.
The easiest way to measure an institution’s commitment to access is the percentage of enrolled students receiving Pell Grants, but how good a measure is that? I have previously written about the danger of confusing measuring what we value with valuing what we can easily measure. Does the current threshold for Pell eligibility capture all the students for whom access to higher education is economically limited? Another potential metric, the number or percentage of first-generation students, is complicated by the lack of a consistent definition of what constitutes a first-gen student.
With regard to affordability, what do metrics like “average net price” and “average loan debt” tell us, and what are their limitations? The Department of Education acknowledges that current net price data is incomplete, covering only students receiving aid (which might be okay). In addition, public institutions report average net price data only for in-state students. At this time, average federal loan debt is not being considered in the proposed ratings; the Education Department recognizes that using that data could lead some institutions to encourage students to take out more expensive private loans rather than federal loans in order to game the ratings.
The proposed ratings are on shakiest ground when it comes to measuring outcomes. Should degree completion be measured over four years or six years? Should four-year institutions be penalized for students who transfer to another four-year school? And how meaningful is data on earnings? Those numbers are more heavily influenced by what a student majors in than by where he or she graduates. Should we measure earnings five years beyond graduation or over a lifetime? And is a school that produces lots of investment bankers and lawyers “better” than one that produces teachers and those with non-profit service careers?
Another issue to be determined is how institutions will be grouped for meaningful comparison, given differing missions and student populations. In Virginia, the College of William and Mary and Virginia State University are both four-year public institutions, but have little else in common. Should they be compared?
Far more interesting are several larger philosophical questions. What’s the purpose of the ratings? Is it to provide information to consumers, or is it to hold institutions accountable? Is it possible to design a rating system that does both?
Are ratings preferable to rankings? The Department of Education plans to place schools in three categories for each metric—“high-performing,” “low-performing,” and those in the middle. Those categories would seem to have been developed in consultation with Goldilocks and the three bears. A year ago two analysts at the American Enterprise Institute crunched the numbers using three thresholds—25% Pell recipients, a 50% graduation rate, and a net price under $10,000. They concluded that only a few institutions are terrible in all three areas (access, affordability, outcomes), but also that only 19 four-year institutions exceed all three thresholds.
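To make concrete what that kind of screen involves, here is a minimal sketch in Python of applying the three thresholds. The institutions and numbers are invented for illustration; this is not the AEI analysts’ code or data, and a real analysis would draw on federal data such as IPEDS.

# Hypothetical illustration of a three-threshold screen.
# Institution records and field names are invented for this sketch.

institutions = [
    {"name": "Example State U", "pell_pct": 38, "grad_rate": 0.46, "net_price": 9200},
    {"name": "Example College", "pell_pct": 22, "grad_rate": 0.61, "net_price": 14800},
    {"name": "Example Tech", "pell_pct": 31, "grad_rate": 0.55, "net_price": 9700},
]

def clears_all_three(school):
    # Access: at least 25% Pell recipients; outcomes: at least a 50%
    # graduation rate; affordability: average net price under $10,000.
    return (school["pell_pct"] >= 25
            and school["grad_rate"] >= 0.50
            and school["net_price"] < 10000)

print([s["name"] for s in institutions if clears_all_three(s)])
# Prints ['Example Tech'] for this invented data.

The point of the exercise is how few schools survive all three cuts at once, which is exactly what the AEI analysts found.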
That would seem to answer a question raised in the Department of Education draft about whether consumers would find it easier to see only a single comprehensive rating. A single rating would probably be easier, but easier is not better when it leads to the “false precision” that so many of us find troubling in attempts to rank colleges. Back in February, Bob Morse, U.S. News’s guru of false precision, gave advice and asked questions at a symposium on the technical issues underlying federal college ratings. That’s like Wile E. Coyote serving as an expert witness at a conference devoted to Roadrunner protection.
The ultimate question is whether rating colleges is a legitimate function of the federal government. The answer may depend on one’s political leanings about the role of government, but you don’t need to be a member of the Tea Party to question whether the Department of Education should be rating colleges. At the same meeting where Bob Morse spoke, another speaker suggested that the government should develop a database and leave it to others to figure out how to use it.
A lot depends on whether this is comparable to the gainful employment rules put into place with regard to for-profits, and I don’t think it is. In that case, the federal government had a legitimate interest in protecting taxpayers from fraud, because a number of for-profits were operating an economic model in which a huge amount of revenue came from federal financial aid for an “education” that left students unprepared for employment and in debt. A fundamental principle of ethics is “treat like cases alike,” and this doesn’t seem to be a like case. In any case, there’s a lot of work to be done and questions to be answered before federal college ratings will make sense.