Thursday, January 29, 2015

4 or More


“Is it just me, or is this simply a stupid idea?”  That was the question posed in a post on the NACAC Exchange a week or so ago. 

I was immediately intrigued.  I am drawn to college-admissions-related stupidity the way a moth is drawn to a flame or a dog to a fire hydrant.  Like Supreme Court Justice Potter Stewart and pornography, I may not be able to define it, but I sure know it when I see it, and it is one of the things that keep this blog in business.

I was even more intrigued when I saw that the “stupid idea” in question was a product of the College Board.  I certainly have my issues with the College Board, which I have described tongue-partly-in-cheek as America’s Most Profitable Non-Profit Organization. It has chosen to be a corporate entity rather than a membership organization, a .com rather than a .org, and College Board meetings often feel more like infomercials than professional conferences.  I suspect every policy decision made by the College Board is grounded in cost-benefit analysis, in profit rather than principle, so it may be calculating, but never stupid.

The “stupid idea” in question is the Apply to 4 or More™ program.  I was not familiar with that name, but in looking at the section of the College Board website devoted to the program, I recognized it as one of the Board’s initiatives to increase access to higher education, and particularly an attempt to deal with “undermatching,” the phenomenon described by professors Caroline Hoxby of Stanford and Christopher Avery of Harvard in which students from economically disadvantaged backgrounds apply to less selective colleges than those their credentials might earn them admission to.

The College Board website describes Apply to 4 or More™ as “a national movement to encourage all students—but primarily low-income, college-ready students—to apply to at least four colleges.”  Students are identified for the program based on having received a fee waiver for the SAT or SAT Subject Tests, or in some cases based on Census data.  They receive a packet of information including a personalized cover letter, a college application timeline, and in some cases application fee waivers.

The goal of increasing access to higher education for low-income students is laudable, and in fact needs to be a national priority.  Is Apply to 4 or More a better way to accomplish that than President Obama’s “free community college” initiative?  I’m not sure they address the same population or the same issue, but I give the College Board credit for trying to do something.

I am more interested in the messages sent by, and the assumptions underlying, Apply to 4 or More.  To what extent does the program give students an understanding of the college admissions process, and to what extent does it constitute good college counseling?

One of those assumptions has to do with “undermatching.” The embedded assumption is that the student could “do better,” where better = more prestigious = more selective.  I recognize that many students who come from homes without financial resources and who lack good college counseling may be unaware of places that might be good options, but undermatching is not automatically negative. I believe that the value of college lies in the educational experience rather than the name on the diploma. A student who attends a less selective school where he or she is a top student may have a better college experience and better educational opportunities.

I don’t find the advice offered in Apply to 4 or More “stupid,” but I do find it quaint.  It’s the kind of advice that a guidance counselor might have provided back in the days when “guidance counselor,” not “school counselor,” was the operative term.  It’s exactly the kind of college counseling I would expect to find if there were a college counseling office on Main Street, U.S.A. at Disneyland.

Take, for example, the advice to “Build a Diverse College List,” including 1 “Safety,” 2 “Good Fits,” and 1 “Reach.”  Back in the fall there was discussion on the NACAC Exchange about whether the term “safety school” is pejorative.  Certainly no college wants to be seen as a safety school, with its connotation as the place where you’ll go if all else fails.  Apply to 4 or More defines “safety” as “a college you’re confident you can get into.”  There are students who have a unique self-esteem problem, in that they have far too much self-esteem, and they are more confident than they should be about where they’ll get in.

As a college counselor I have never liked the term “safety,” although I think it will be unfortunate if we get to a point where students and counselors can no longer predict admission likelihood. I tell students that I want them to apply to at least one school that they know, and more important that I know, they’ll get into. I also don’t believe that every student must apply to a reach.  The notion of “good fit,” which to its credit Apply to 4 or More emphasizes, is more about finding places that offer a program and culture that meet the student’s needs and values, and a thoughtful college search can result in a good fit even when a student applies to only one or two places.

The Apply to 4 or More student website states that applying to four or more colleges increases your chances of being admitted.  I find that to be terrible advice.  Admission has more to do with the quality of applications and options than with the quantity.  If your credentials make you a long shot for the Ivy League, applying to all eight rather than two doesn’t increase your chances of getting into one; it increases your chances of being rejected by eight schools instead of two.  And if applying to four is better than two, is applying to 30 even better?  I do accept the argument that students for whom financial aid is important may benefit from being able to compare offers, but doesn’t the Net-Price Calculator allow that without having to apply? (If I am showing my ignorance or naivete on that point, feel free to correct me.)

The first rule of ethics is “Do no harm.” Apply to 4 or More™ meets that test, but I’m not sure it provides students with the kind of information and advice they need to apply to college in 2015.  I’d love to see a conversation about what information we should be providing, what advice we should be giving, and how best to do that.

 

Thursday, January 15, 2015

Ratings, Not Rankings


As I was driving to work on the Friday that Christmas break began, I heard on the radio that the U.S. Department of Education was releasing its plan for federal college ratings that day.  I had two immediate reactions reflecting different parts of my DNA.

Putting on my blogging hat, I initially thought that I needed to write a post analyzing the plan for Monday publication, but then I came to my senses and realized that no one would have the time or interest to read about federal college ratings (or any other issue I might write about) three days before Christmas.

The cynic/conspiracy theorist within me noted that a common government tactic is to “hide” bad news by releasing it late on a Friday afternoon, when the media and public are not paying attention.  How bad must the plan be to justify “dropping” it on the Friday before Christmas?

Having read the plan, I realize there was no sinister intent.  The Obama administration had promised to release the plan in fall of 2014, and that Friday was one of fall’s last days; the following Sunday happened to be the first day of winter.

There’s also no plan. A Chronicle of Higher Education article describes it as “heavy on possibilities and light on details.”  That assessment is generous.  At this point the Department of Education has only a vague idea of what the final version might look like.  The release describes it as a college ratings “framework.”  It might be more accurately described as a skeleton, albeit one with enough bones missing that a casual observer would be hard-pressed to identify the animal.

The goal of measuring access and affordability is laudable.  So is the decision to “avoid rankings and false precision” and focus on outcomes rather than input factors.  The question is how easy it is to actually measure those things.

The easiest way to measure an institution’s commitment to access is the percentage of enrolled students receiving Pell Grants, but how good a measure is that? I have previously written about the danger of confusing measuring what we value with valuing what we can easily measure. Does the current threshold for Pell eligibility capture all the students for whom access to higher education is limited economically? Another potential metric, the number or percentage of first generation students, is complicated by lack of a consistent definition for what constitutes a first gen student.

With regard to affordability, what do metrics like “average net price” and “average loan debt” tell us, and what are their limitations? The Department of Education acknowledges that current net price data is incomplete, including only students receiving aid (which might be okay).  In addition, public institutions report average net price data only for in-state students.  Average federal loan debt is not currently being considered for the proposed ratings, and the Department recognizes that using that data could lead some institutions to game the ratings by encouraging students to take out more expensive private loans rather than federal loans.

The proposed ratings are on shakiest ground when it comes to measuring outcomes.  Should degree completion be measured over four years or six years?  Should four-year institutions be penalized for students who transfer to another four-year school?  And how meaningful is data on earnings?  Those numbers are more heavily influenced by what a student majors in than by where he or she graduates.  Should we measure earnings five years beyond graduation or over a lifetime?  And is a school that produces lots of investment bankers and lawyers “better” than one that produces teachers and those with non-profit service careers?

Another issue to be determined is how institutions will be grouped for meaningful comparison given differing missions and student populations.  In Virginia, the College of William and Mary and Virginia State University are both four-year public institutions, but have little else in common.  Should they be compared?

Far more interesting are several larger philosophical questions.  What’s the purpose of the ratings?  Is it to provide information to consumers, or is it to hold institutions accountable?  Is it possible to design a rating system that does both?

Are ratings preferable to rankings?  The Department of Education plans to place schools in three categories for each metric: “high-performing,” “low-performing,” and those in the middle.  Those categories would seem to have been developed in consultation with Goldilocks and the three bears.  A year ago two analysts at the American Enterprise Institute crunched the numbers using three thresholds: 25% Pell recipients, a 50% graduation rate, and a net price under $10,000.  They concluded that only a few institutions are terrible in all three areas (access, affordability, outcomes), but also that only 19 four-year institutions clear all three thresholds.
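To make the arithmetic of that screen concrete, here is a minimal sketch of how such a threshold filter works.  The institutions and figures below are hypothetical, invented purely for illustration; only the three cutoffs come from the AEI analysis described above, and the analysts’ actual data and method were surely more involved.

```python
# Hypothetical illustration of an AEI-style three-threshold screen.
# Every institution name and figure here is invented for this example.

institutions = [
    # (name, % Pell recipients, graduation rate %, average net price in $)
    ("Example State University", 38, 47, 9500),
    ("Hypothetical College", 27, 62, 8900),
    ("Illustrative Tech", 18, 81, 14200),
    ("Sample Liberal Arts", 31, 55, 9800),
]

# The three cutoffs cited in the post: access, outcomes, affordability.
MIN_PELL = 25          # at least 25% Pell recipients
MIN_GRAD_RATE = 50     # at least a 50% graduation rate
MAX_NET_PRICE = 10000  # net price under $10,000

clears_all_three = [
    name
    for name, pell, grad_rate, net_price in institutions
    if pell >= MIN_PELL and grad_rate >= MIN_GRAD_RATE and net_price < MAX_NET_PRICE
]

print(clears_all_three)  # only schools passing all three cutoffs survive
```

The sketch illustrates why so few schools make the cut: requiring all three cutoffs at once is a demanding filter, and an institution can excel on two dimensions and still fall out on the third.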

That would seem to answer a question raised in the Department of Education draft about whether consumers would find it easier to see only a single comprehensive rating.  A single rating would probably be easier, but easier is not better when it leads to the “false precision” that so many of us find troubling in attempts to rank colleges.  Back in February, Bob Morse, U.S. News’s guru of false precision, gave advice and asked questions at a symposium on the technical issues underlying federal college ratings.  That’s like Wile E. Coyote serving as an expert witness at a conference devoted to Roadrunner protection.

The ultimate question is whether rating colleges is a legitimate function of the federal government.  The answer to that question may depend on one’s political views about the role of government, but you don’t need to be a member of the Tea Party to question whether the Department of Education should be rating colleges.  At the same symposium where Bob Morse spoke, another speaker suggested that the government should develop a database and leave it to others to figure out how to use it.

A lot depends on whether this is comparable to the gainful-employment rules put in place for for-profit institutions, and I don’t think it is.  In that case, the federal government had a legitimate interest in protecting taxpayers from fraud, because a number of for-profits were operating on an economic model in which a huge share of revenue came from federal financial aid for an “education” that left students unprepared for employment and in debt.  A fundamental principle of ethics is “treat like cases alike,” and this doesn’t seem to be a like case.  In any case, there’s a lot of work to be done and questions to be answered before federal college ratings will make sense.