How we changed our rankings

David Levy

David Levy manages the product and data strategy for Degreechoices and writes about college rankings and accountability.


Our college rankings will be updated each time the Department of Education releases new degree earnings and cost data – the numbers that underpin our methodology. (This usually happens in April.) However, this year, to coincide with the release of our school profile pages, we are updating our rankings early to incorporate 2 important adjustments, which we explain here.

This article assumes the reader has a basic familiarity with our methodology.

Two new data adjustments

We determine school rankings by comparing each school’s economic performance against weighted average benchmarks.

In the newest iteration of our rankings, we adjust for 2 important variables:

  • a school’s program ecology
  • the split between in-state and out-of-state students

Program ecology

Program ecology refers to the proportion of a student body in each of the different majors offered by a college or university. Until now, we disregarded program ecology when calculating EarningsPlus at the institutional level and used a single, student-wide earnings figure.

However, we believe that failure to consider program ecology can distort institutional-level economic score calculations and mislead students.

To use a real-life example, last year Colorado School of Mines ranked as the 26th best national university in the country, thanks in large part to an exceptional EarningsPlus of $44,139 – almost double the earnings of Colorado’s bachelor-level cohort.

If, however, we narrow the focus to its most popular major – Mechanical Engineering – student earnings are approximately the same as those of the relevant earnings cohort, and the program falls outside the top 100 nationally. This is more or less the case for each of its programs, the large majority of which are in engineering or computer science. In our new rankings release, CSM drops to number 76 among national universities.

Additionally, failure to disaggregate earnings by major sets up a negative incentive structure within our rankings, favoring schools with a narrower focus on more lucrative majors. This can be the case even if a more diverse school outperforms those schools when lucrative majors are compared directly.

To fix this issue, we have changed the EarningsPlus calculation to compare the median earnings of students in each major against their own program and graduate-year cohort. We then aggregate program-level performance, based on each program’s share of total graduates, into a single weighted average institutional-level EarningsPlus figure.
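For illustration only, the aggregation works roughly like the following sketch, in which the program shares, earnings, and cohort benchmarks are invented numbers rather than real data:

```python
# Illustrative sketch: roll program-level EarningsPlus up into one institutional figure.
# All shares, earnings, and benchmarks below are invented for illustration.
programs = [
    # (share of total graduates, median earnings of the program's graduates,
    #  median earnings of the matching program and graduation-year cohort)
    (0.50, 75_000, 72_000),
    (0.30, 68_000, 66_000),
    (0.20, 42_000, 40_000),
]

# Program-level EarningsPlus: graduates' earnings minus their own cohort benchmark.
program_plus = [(share, earnings - benchmark) for share, earnings, benchmark in programs]

# Institutional EarningsPlus: weighted by each program's share of total graduates.
institutional_plus = sum(share * plus for share, plus in program_plus)
print(round(institutional_plus))  # 2500 in this invented example
```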


Out-of-state student adjustment

Our earnings cohorts are standardized according to 4 student variables:

  • location
  • degree level
  • graduation year (or enrollment year in the case of payback)
  • program

Example – bachelor’s in psychology

For instance, the weighted average earnings for bachelor’s in psychology graduates would consider the pooled 2017-18/2018-19 median earnings of all students awarded a bachelor’s in psychology in the 2014/2015 and 2015/2016 academic years. (Based on Department of the Treasury data, as reported by College Scorecard.) To generate the weighted average earnings, we:

  1. Calculate each school’s total earnings contribution, based on the median earnings and the number of students, by program.
  2. Allocate each school to its respective state, and take a weighted average according to the school’s student percentage of the cohort.
  3. Repeat the calculation at the national level, taking a weighted average across every school with a cohort of psychology bachelor’s graduates.
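As a minimal sketch of these three steps – with invented school names, earnings, and graduate counts, and a much-simplified data structure – the calculation looks roughly like this:

```python
# Illustrative sketch of the weighted average earnings benchmark for a single cohort
# (bachelor's in psychology). School earnings and graduate counts are invented.
schools = [
    # (state, median earnings of psychology bachelor's graduates, number of graduates)
    ("TN", 38_000, 400),
    ("TN", 45_000, 250),
    ("MA", 52_000, 300),
]

def weighted_average_earnings(rows):
    """Total earnings contribution divided by total graduates (steps 1 and 2)."""
    total_contribution = sum(earnings * graduates for _, earnings, graduates in rows)
    total_graduates = sum(graduates for _, _, graduates in rows)
    return total_contribution / total_graduates

# State-level benchmark: only the schools allocated to that state (step 2).
tennessee_benchmark = weighted_average_earnings([row for row in schools if row[0] == "TN"])

# National benchmark: every school with a psychology bachelor's cohort (step 3).
national_benchmark = weighted_average_earnings(schools)

print(round(tennessee_benchmark), round(national_benchmark))
```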

Until now, we have used only in-state earnings figures to calculate payback and EarningsPlus benchmarks. In-state earnings benchmarks make sense for schools with large in-state student populations, whose students are much more likely to continue living in the state after graduation.

However, schools that draw a large out-of-state population can have their economic performance disproportionately affected by location.

Consider, for instance, Vanderbilt and Harvard. Both are elite, extremely selective universities. They draw students from across the country (Vanderbilt’s in-state student percentage is 9%; Harvard’s is 16%), and their alumni very often compete in the same job markets.

Student earnings, however, are compared against very different state earning benchmarks:

  • Tennessee: $43,000 (bottom 25% nationally)
  • Massachusetts: $68,000 (second in the country)

In our 2022 rankings, the effect of these benchmark disparities was that Vanderbilt’s median student earnings of $80,000 represented an EarningsPlus of $36,000 – an 82.5% premium – while Harvard’s student earnings of $85,000 represented an EarningsPlus of just $17,000 – a 25% premium. Given the national profile of their students, we believe this distorts the comparison of student earnings.
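Stated as simple arithmetic – using Harvard’s rounded figures above for illustration – the premium is just the EarningsPlus figure expressed as a share of the state benchmark:

```python
# Simplified illustration using the rounded Harvard figures quoted above.
median_earnings = 85_000   # Harvard student median earnings
state_benchmark = 68_000   # Massachusetts earnings benchmark

earnings_plus = median_earnings - state_benchmark   # 17,000
premium = earnings_plus / state_benchmark            # 0.25, i.e. a 25% premium
```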

To rectify this issue, earnings benchmarks are now derived proportionally from in-state and out-of-state earnings figures, based on student population. Schools with a large percentage of in-state students are thus compared to the more relevant in-state earnings benchmark; schools with a large out-of-state student percentage (and fully online schools) are compared, proportionally, to a single national earnings standard.
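A minimal sketch of that proportional blend – assuming a straightforward linear mix of the state and national benchmarks by in-state share, with a made-up national figure – might look like this:

```python
# Illustrative blend of state and national earnings benchmarks by in-state student share.
# The linear blend is an assumption for illustration; the national figure below is invented.
def blended_benchmark(in_state_share, state_benchmark, national_benchmark):
    """Weight the state benchmark by the in-state share, the national benchmark by the rest."""
    return in_state_share * state_benchmark + (1 - in_state_share) * national_benchmark

NATIONAL_BENCHMARK = 50_000  # hypothetical national earnings benchmark

# Vanderbilt (9% in-state, Tennessee) and Harvard (16% in-state, Massachusetts)
vanderbilt = blended_benchmark(0.09, 43_000, NATIONAL_BENCHMARK)  # ~49,400
harvard = blended_benchmark(0.16, 68_000, NATIONAL_BENCHMARK)     # ~52,900

# A fully online school would simply be compared to the national benchmark.
```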
