Western culture loves to rank everything. From foods to sports teams to cities, we obsess over figuring out the best thing, the second best thing, and so on. One of the more interesting ranking systems is that of colleges. Every year, U.S. News and World Report publishes its popular list of the “best colleges” in the country, which means that a lot of people read and absorb these rankings. Yet what actually goes into them? Why are they important?
The U.S. News and World Report ranking is broken down into a number of statistical categories, including factors like graduation rates, student selectivity, and the alumni giving rate. Each category is given a certain percentage weight in a college’s total score, and each category in turn contains subcategories that are weighted as a percentage of that category.
Graduation and Retention rates form the first category, representing 22.5% of the total score. 80% of this figure arises from a school’s 6-year graduation rate, while 20% arises from a school’s first-year retention rate. Graduation rates are certainly important. Schools have an obligation to help everyone they accept graduate. Yet class heavily impacts whether students graduate. Schools that accept a lot of rich kids (such as Cornell) do not have to worry about as many students dropping out because they cannot afford college (even though this remains a problem at our university). Schools that draw from a wider range of economic backgrounds face dropout more frequently, especially as tuition costs have risen. In addition, studies have shown that issues of race and gender affect graduation rates. Colleges with higher percentages of underrepresented students are consequently punished in these rankings for their diversity.
The second category is Academic Peer Assessment, representing another 22.5% of the total score. Even from the outset, this category seems incredibly silly. To form the assessment, U.S. News and World Report surveys counselors from the top public high schools and asks them to rate colleges. First of all, what do high school counselors know about the academic strength of colleges? They obviously know a lot about how the college admission process functions, but they are not students. While they might talk to a lot of students, they do not attend classes, talk to professors, or take exams. Even worse, U.S. News and World Report only surveys counselors who work at schools that rank at the top of its own Best High Schools list. In the United States, where we fund high schools through local property taxes, this basically means the richest high schools. These elite high schools decide that the elite colleges are the best. U.S. News has decided that only people who come from rich places and benefit most from the current education system can have valid opinions on colleges.
Faculty Resources constitutes the third category, comprising 20% of the total score. 45% of this category comes from class size and the student-faculty ratio. I actually believe this subcategory effectively measures a college’s value. Smaller classes encourage more discussion and better access to professors. Students learn in a more intimate way. Another 15% of this category comes from how many professors possess the highest degree in their field. While this might seem fine, it favors schools with the resources to pay such professors and therefore reinforces an elitist perspective. Just because a professor could afford to attain a PhD does not mean they teach well. Another 5% of this category is dedicated to the number of full-time faculty, which seems pretty legitimate. However, many schools that specialize in the arts, law, or business will hire professors who are practitioners in their discipline as well as academics. These professors often have insights on the current state of their field, even if their background is less academic. This subcategory discourages schools from broadening their faculty lines outside the strict confines of academia.
Another category — one that all of us here at Cornell know well — is Student Selectivity. It represents 12.5% of the ranking. 65% of this category comes from SAT and ACT scores. These types of standardized tests are practically meaningless. One study found almost no difference in academic achievement between students who submitted SAT scores to colleges and those who did not. There is also a huge gap in SAT scores depending on racial background. Students with the resources and time to dramatically improve their SAT scores through practice sessions end up benefiting greatly from these advantages. The percentage of enrolled students who graduated in the top 10% of their high school class counts for another 25% of Student Selectivity. While on the surface this seems great, every student in the US should be entitled to go to college regardless of where or how they graduate from high school. Therefore, the use of this subcategory only reinforces the idea that the “best colleges” should be the most exclusive. It suggests that only students who have the highest grades in high school deserve a great college education. Finally, 10% of the Student Selectivity category results from a college’s acceptance rate. Since acceptance rates are statistically determined by how many students applied to a college, this measure is simply an appraisal of the reputation of the school. And since rankings like U.S. News and World Report’s influence college reputations, they’re really only measuring themselves.
As you may have noticed, U.S. News uses a lot of metrics that depend on the wealth of a school and the affluence of its students. However, it seems they needed two more categories to measure this. 10% of the ranking comes from average spending per student on research and education. The giving rate of alumni constitutes another 5% of their formula. Both these categories just measure how much money the school has and how rich the alumni of the school are. They do not measure anything about the actual academics, campus environment, or anything related to the university itself. They only exist to show that for U.S. News, money equals education.
The final category is Graduation Rate Performance, comprising 7.5% of the ranking. Basically, U.S. News predicts the graduation rate of each school. Schools that meet or exceed that predicted rate rise in the rankings; schools that fall below it drop. U.S. News does not seem to publish the formula for this statistic, so it is hard to evaluate its efficacy.
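Taken together, the category weights described above sum to 100%, and the composite score is just a weighted sum. The sketch below illustrates that arithmetic in Python. The subscore values for the hypothetical school are invented for illustration, and the code assumes each category has already been normalized to a 0–100 subscore; U.S. News does not publish its normalization, so this is only a model of the weighting, not the actual formula.

```python
# Category weights as described in the article (they sum to 1.0).
WEIGHTS = {
    "graduation_and_retention": 0.225,
    "peer_assessment": 0.225,
    "faculty_resources": 0.200,
    "student_selectivity": 0.125,
    "financial_resources": 0.100,
    "graduation_rate_performance": 0.075,
    "alumni_giving": 0.050,
}

def composite_score(subscores: dict) -> float:
    """Weighted sum of category subscores, each assumed normalized to 0-100."""
    # Sanity check: the weights should account for 100% of the score.
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[cat] * subscores[cat] for cat in WEIGHTS)

# Hypothetical school (invented numbers): strong on graduation,
# weaker on selectivity and alumni giving.
example = {
    "graduation_and_retention": 95,
    "peer_assessment": 80,
    "faculty_resources": 85,
    "student_selectivity": 60,
    "financial_resources": 70,
    "graduation_rate_performance": 90,
    "alumni_giving": 50,
}

print(composite_score(example))
```

Because Graduation and Retention alone carries 22.5% of the weight, a ten-point swing there moves the composite more than twice as much as the same swing in Financial Resources, which is exactly how the weighting privileges some measures over others.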
As you can probably tell, the U.S. News and World Report ranking system heavily favors wealthy schools. In fact, just look at the overlap between the top twenty schools by endowment and the top twenty schools in the U.S. News rankings (this excludes schools with joint endowments):
| Top 20 by Endowment | Top 20 by U.S. News, National Universities |
| --- | --- |
| 1. Harvard | 1. Princeton |
| 2. Yale | 2. Harvard |
| 3. Stanford | 3. Chicago |
| 4. Princeton | 4. Yale |
| 5. MIT | 5. Columbia |
| 6. UPenn | 6. MIT |
| 7. Michigan | 7. Stanford |
| 8. Northwestern | 8. UPenn |
| 9. Columbia | 9. Duke |
| 10. Notre Dame | 10. Cal Tech |
| 11. Duke | 11. Dartmouth |
| 12. WashU | 11. Johns Hopkins |
| 13. Chicago | 11. Northwestern |
| 14. Emory | 14. Brown |
| 15. Cornell | 14. Cornell |
| 16. Virginia | 14. Rice |
| 17. Rice | 14. Vanderbilt |
| 18. Dartmouth | 18. Notre Dame |
| 19. Ohio State | 18. WashU |
| 20. Vanderbilt | 20. Georgetown |
Sixteen out of twenty of these schools have spots on both lists. The ones that don’t still have large endowments and are highly ranked in U.S. News. Even though the ranking criteria changes the order of the top twenty a little bit, the trend stays the same: rich schools get ranked higher. In this way, the U.S. News and World Report rankings measure something that everyone already knows. The so-called “elite” schools are elite because they’re the whitest, wealthiest schools in the nation. The ranking system even penalizes schools for encouraging diversity, accepting more students from across the economic spectrum, and helping students who struggle academically obtain a college degree.
We could consider this a system of elite reproduction. It encourages schools to enhance their reputation by accepting wealthier students and not broadening their applicant base. This has a dramatic impact because so many people take these rankings seriously. Research published by higher-ranked universities is considered more meaningful, their graduates get better jobs, and more donor money flows to higher-ranked institutions. In turn, schools shape their policies, allocate funds, and make admissions decisions based on these lists. It creates a feedback loop: schools try to become more exclusive to fit the lists, and the lists measure schools based on how exclusive they are. It helps keep the access, the education, and the power in the hands of those who already possess it.
Of course, we could think of a better ranking system. We could measure schools on social mobility, or how they help students improve their economic circumstances. We could measure schools on economic, racial, and ethnic diversity. We could measure schools on how many students whose parents did not graduate from college earn a degree. Or maybe, we could just ignore these rankings entirely.