Why all college rankings suck, at their best

Author
David Levy
David Levy manages the product and data strategy for Degreechoices and writes about college rankings and accountability.

Reviewer
Harvey J. Graff
Harvey J. Graff is Professor Emeritus of English and History at The Ohio State University and inaugural Ohio Eminent Scholar in Literacy Studies. Author of many books, he writes about a variety of contemporary and historical topics for Times Higher Education, Inside Higher Education, Academe Blog, Washington Monthly, and more.

Updated Jul. 17, 2023

All college ranking systems are fundamentally flawed. At their best, they struggle to weigh the myriad conflicting factors that affect students’ choice of college. At their worst, they become playthings in the hands of the very colleges they are supposed to evaluate objectively. Whatever the ranking calculations prioritize can become a perverse incentive for the colleges hoping to rank well.

      Rankings can distort the market.

      In a balanced, open, and honest school ranking ecosystem, there would be a range of different rankings, each specialized in a particular aspect of the “college experience”. Students could compare and cross-reference the data and metrics based on their own priorities. We are not there yet.

Degreechoices offers a singular perspective: the relative short- and longer-term economic benefits – only part of the picture – for students graduating from different colleges and universities. We base our rankings solely on data derived from the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) and College Scorecard (CSC). We supplement these with a limited amount of additional data from the American Community Survey, an ongoing statistical survey by the U.S. Census Bureau that collects detailed demographic, social, economic, and housing data from a sample of the population each year. We do not use the often-questionable self-reported information on which the U.S. News & World Report rankings are based.
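
For readers who want to explore the same public data, College Scorecard offers an open API hosted at api.data.gov (a free API key is required). The sketch below shows one minimal way to pull school-level earnings and net-price fields; the field names follow the Scorecard data dictionary as we understand it and should be verified against the current API documentation before use.

```python
import requests

# College Scorecard open API (requires a free api.data.gov key).
API_URL = "https://api.data.gov/ed/collegescorecard/v1/schools"

def fetch_school_stats(school_name: str, api_key: str) -> list[dict]:
    """Fetch median earnings and average net price for schools matching a name.

    The field names below follow the published Scorecard data dictionary;
    verify them against the current documentation before relying on them.
    """
    params = {
        "api_key": api_key,
        "school.name": school_name,
        "fields": ",".join([
            "school.name",
            "latest.earnings.10_yrs_after_entry.median",  # assumed field name
            "latest.cost.avg_net_price.overall",          # assumed field name
        ]),
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]
```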

We rely on these data for transparency and objectivity. Even so, we cannot escape bias completely.

      For example, we grapple with such questions as:

• How do we weigh the importance of relative earnings against relative costs? (One simplified way of framing this trade-off is sketched after this list.)
• Where should we set the graduation-rate threshold below which schools are omitted from our rankings?
      • There are substantial differences in earnings that relate to geographic location and areas of study. How can we appropriately adjust for these differences?
      • Should we consider social mobility indicators that measure school performance specifically for lower-income students? And how?
      • Should we – and the users of our data – balance the differences between economic returns and career satisfaction? How?
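
To make the first question above concrete, one simplified way to frame the earnings-versus-cost trade-off is a payback-style metric: how many years of earnings premium, relative to a non-degree baseline, it takes to recoup the net cost of a degree. The sketch below is illustrative only – it is not our production formula, and the baseline earnings figure is a hypothetical placeholder.

```python
def payback_years(total_net_cost: float,
                  median_earnings: float,
                  baseline_earnings: float = 32_000) -> float:
    """Years of post-graduation earnings premium needed to recoup net cost.

    baseline_earnings is a hypothetical stand-in for typical earnings
    without a degree; a real ranking would source this figure carefully.
    """
    premium = median_earnings - baseline_earnings
    if premium <= 0:
        return float("inf")  # degree never pays back at this premium
    return total_net_cost / premium

# Example: $60,000 total net cost, $50,000 median earnings
# -> 60000 / (50000 - 32000) ≈ 3.3 years to recoup
print(round(payback_years(60_000, 50_000), 1))
```

Even this simple formulation forces a judgment call: the choice of baseline changes the relative standing of schools whose graduates earn close to it.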

      Each ranking system inevitably reflects decisions shaped by the personal experiences and biases of its creators. We made a conscious choice to prioritize transparency in our system. This commitment also requires a candid acknowledgement of the strengths and limits of the data and how we choose to analyze and present them, warts and all.

      This article specifically explores the limitations of Degreechoices rankings, as well as those of College Scorecard-based rankings in general.

      Apples and elephants: the problems with the Carnegie classification

      Comparisons within and across any college ranking system must be judicious. Comparing a community college to a private university, for example, is unreasonable.

      We use the Carnegie classification system to create ranking categories because Carnegie is the established standard. It allows for transparency about why each school is included in its respective category. Nonetheless, this system is not without its shortcomings.

      For example, so-called “national universities” are defined as those offering doctoral programs; regional colleges and universities offer bachelor’s and master’s degrees; and liberal arts colleges focus on arts, humanities, social and natural sciences, usually limited to bachelor-level studies.

However, within these categories there are often massive differences between schools. It is a stretch to compare CUNY (City University of New York) – a public university system whose student body is more than 90% in-state – to a private, historic Ivy League university like Harvard, which enrolls students from across the country and around the world and has much larger graduate programs and private resources.

The current categorizations within the Carnegie system are not ideal, yet attempts to modify them risk introducing a new degree of subjectivity. For now, we lack a better alternative – a limitation that must be acknowledged when reviewing our college comparisons.

      The problem of cost

      Determining and then comparing the complex costs of attending and graduating from college presents a significant challenge. The discrepancy between the sticker price and the net price of higher education is as substantial as it is hidden from students and their families.

Like most CSC-based systems, our rankings begin with the median net price – including tuition, fees, room and board, books, and living expenses – for students who have received at least one dollar of federal aid. Students who receive no federal aid are not included in the calculation. This is a major but necessary limitation for the purposes of comparative data.
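
As a toy illustration of what this statistic does and does not capture, the sketch below computes a median net price the way a CSC-style metric would – over federal-aid recipients only. All figures are invented.

```python
import statistics

# Hypothetical (received_federal_aid, net_price_paid) records.
students = [
    (True, 14_500),
    (True, 9_800),
    (False, 55_000),  # excluded: received no federal aid
    (True, 12_200),
    (False, 41_000),  # excluded: received no federal aid
]

# CSC-style median net price: only federal-aid recipients count.
aided_prices = [price for aided, price in students if aided]
print(statistics.median(aided_prices))  # 12200 – the unaided are invisible
```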

      The proportion of students receiving financial aid varies greatly across different institutions. At Harvard, approximately 22% of students receive some form of federal aid; at CUNY City College, in sharp contrast, 95% of students do. Consequently, our median net cost figures provide a much more accurate picture of what students pay at CUNY than they do for Harvard and comparable institutions.

This does not mean that 78% of students at Harvard pay the sticker price. Private institutions like Harvard have their own methods for assessing need, although these variations are not captured in the data. It does mean, however, that 78% of students are not considered when CSC calculates Harvard’s net price.

      Institutional- vs program-level costs

      Colleges and universities report undergraduate costs at the institutional level. (Graduate schools do not report costs at all, but debt is reported at the program level.) However, some schools use a differential tuition model in which certain majors, perhaps due to higher demand or more specialized equipment/training, have higher costs than others. These differences are not captured by CSC, and so they are not reflected in our program-level rankings.

In addition, anecdotal evidence suggests that some majors, particularly in STEM fields, take longer to complete. Because specific time-to-graduate data are unavailable, however, we apply the same average completion time across all majors within an institution. Some majors may therefore take longer to complete, and cost more, than our figures suggest.
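
In practice, this assumption means a program’s cost is estimated as the institution’s annual net price multiplied by one institution-wide average completion time, roughly as in the sketch below (figures hypothetical):

```python
def estimated_program_cost(annual_net_price: float,
                           avg_years_to_complete: float) -> float:
    """Estimate a program's total cost from institution-level inputs.

    Because per-major time-to-graduate data are unavailable, the same
    avg_years_to_complete is applied to every program at a school,
    understating cost for majors that actually take longer.
    """
    return annual_net_price * avg_years_to_complete

# Every major at a school with a $15,000/yr net price and a 4.4-year
# average completion time is costed identically:
print(estimated_program_cost(15_000, 4.4))  # 66000.0 – for STEM and art alike
```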

      The problem(s) of earnings

CSC data provide only two narrow windows into the early stages of students’ careers:

• Median earnings 10 years after enrollment, reported at the school level
• Median earnings 4 years after graduation, reported at the program level

      They cannot encapsulate the entire career journey – the highs and lows, encounters with bosses both inspiring and challenging, big breaks and unexpected setbacks, issues of health, location, tragedy, or triumph – all integral factors that shape the trajectory and associated earnings of a career.

Some careers take more time to develop, and not every first job lasts. Others require additional education or training. Some majors might even have a so-called “food service industry” phase, in which graduates spend time exploring their prospects and deciding on their ultimate direction. Many students also take a “gap year” of one kind or another.

      CSC data offer a snapshot of students’ earnings 4 years after graduation, classified by program. While they serve as one valuable indicator of early-career economic performance, it is important to keep in mind that many majors exhibit initially lower earnings, only to catch up or surpass others later in the career trajectory. Job and life satisfaction are entirely different matters and are not considered in these data.
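
A toy projection illustrates why a year-4 snapshot can mislead. The starting salaries and growth rates below are invented purely for illustration; real trajectories vary widely.

```python
# Hypothetical (starting_salary, annual_growth_rate) by major.
majors = {
    "philosophy": (45_000, 0.060),  # lower start, faster growth (assumed)
    "accounting": (58_000, 0.025),  # higher start, slower growth (assumed)
}

for year in (4, 10, 20):
    projected = {name: round(start * (1 + growth) ** year)
                 for name, (start, growth) in majors.items()}
    print(year, projected)
# Accounting leads at year 4, but philosophy overtakes it by year 10
# and pulls well ahead by year 20.
```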

      Who is the cohort?

There are other data limitations that stem from whose earnings are being measured. Only the earnings of students who are repaying federal loans are reported. Studies consistently show that students from low-income families – who are more likely to borrow – earn less on average, although a few studies show the opposite. This means the data set may understate average earnings overall, because higher-income students who never take loans are not counted.

      Additionally, schools with a high percentage of loan-repaying students will likely have more representative earnings data than schools with a smaller percentage, simply because a larger percentage of the student body is represented in the sampling.
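
The practical effect is a per-school “coverage” question: what share of a cohort actually appears in the earnings data? A quick sketch with invented cohort numbers:

```python
def earnings_coverage(loan_repaying: int, cohort_size: int) -> float:
    """Share of a graduating cohort represented in loan-based earnings data."""
    return loan_repaying / cohort_size

# Hypothetical cohorts: the second school's median earnings are estimated
# from a far thinner, likely less representative, slice of students.
print(f"{earnings_coverage(1_900, 2_000):.0%}")  # 95%
print(f"{earnings_coverage(440, 2_000):.0%}")    # 22%
```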

      The data disaggregation challenge

      More importantly, CSC lacks the data disaggregation needed to compare certain earnings reliably. Racial disaggregation is among the most glaring omissions. We know from the American Community Survey that Black college graduates earn, on average, approximately 21% less than white college graduates. However, we are unable to disaggregate Black student earnings in the CSC data.

      This is particularly frustrating when we attempt to measure the performance of HBCU students. Ideally, we would compare the performance of Black students at HBCUs against Black students at non-HBCUs. Instead, we are only able to compare HBCUs, which are generally 80+% Black, against non-HBCUs (with a weighted average Black student population of around 10%).
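
A small worked example shows how enrollment composition can confound the aggregate comparison. Suppose, hypothetically, that Black and white graduates earn exactly the same within-group amounts at an HBCU as everywhere else; the school-level averages still diverge purely because of enrollment mix.

```python
# Hypothetical within-group mean earnings, identical at every school,
# with the ~21% gap reported by the American Community Survey.
earnings = {"black": 44_000, "white": 56_000}

def school_avg(black_share: float) -> float:
    """Enrollment-weighted average earnings for a school."""
    return black_share * earnings["black"] + (1 - black_share) * earnings["white"]

print(school_avg(0.80))  # 46400.0 – HBCU with 80% Black enrollment
print(school_avg(0.10))  # 54800.0 – non-HBCU with 10% Black enrollment

# The HBCU looks ~15% "worse" even though, within each group, its
# students earn exactly what their peers earn elsewhere.
```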

      The question of location

There are significant differences in graduate earnings based on geographic location, both between states and between urban and rural areas. We attempt to adjust for in-state and out-of-state differences, but we have no way to assess the influence of the type of community in which a graduate eventually settles.
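
One simple form such an adjustment can take is indexing a school’s reported earnings against typical graduate earnings in its state. The index values below are placeholders, not real ACS figures, and a state-level index leaves urban-versus-rural differences untouched.

```python
# Placeholder state earnings indexes (1.00 = national median for graduates);
# real values would be derived from American Community Survey data.
STATE_EARNINGS_INDEX = {"NY": 1.15, "MS": 0.82, "OH": 0.96}

def location_adjusted(median_earnings: float, state: str) -> float:
    """Deflate or inflate reported earnings by the state's earnings index."""
    return median_earnings / STATE_EARNINGS_INDEX[state]

print(round(location_adjusted(52_000, "NY")))  # 45217 – NY dollars deflated
print(round(location_adjusted(42_000, "MS")))  # 51220 – MS dollars inflated
```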

      Final thoughts

Economic performance is an important consideration when choosing a college, but it is not necessarily – especially by itself – the most important one. We do not consider campus life, accessibility, quality of teaching, a well-rounded curriculum, or any of the high-impact activities that may contribute to a positive and lasting college experience.

We do not attempt to measure salary against satisfaction. We do not consider class size, or the availability of different student services, organizations, or alumni networks. Many of these elements are important to students. We believe our economic performance metrics are the best on the market at what they do; nonetheless, we strongly recommend that students supplement our rankings with information from other trustworthy sources.
