Problems with new ranking methodology
Two components of the new Admit ranking methodology make me raise an eyebrow.
Admissions data. "We incorporated acceptance rates, median MCAT and GPA figures, yield rates, the percentage of in-state versus out-of-state matriculants, and the proportion of first-generation medical students in each entering class."
>>>> First, the new AAMC MSAR data for MCAT and GPA haven't been updated yet, so this is based on old numbers. Second, a school producing excellent physicians, strong residency matches, and a supportive training environment may rank lower simply because it admits more diverse or mission-driven applicants. And public schools heavily favor in-state applicants, so weighting this penalizes them for fulfilling their public mission, not for being worse institutions.
Student decision data. "When Admit users hold multiple acceptances, we see which school they actually choose to matriculate at. These head-to-head decisions show which schools students are consistently picking over others, which is one of the most honest signals of how a school is valued by the people who've done the most research on it."
>>>> This approach is highly vulnerable to selection bias: Admit users are not a representative sample of all applicants, and head-to-head comparisons may rest on small or uneven sample sizes. It also assumes these decisions are fully informed and quality-driven, when in reality they are often shaped by financial aid differences, geographic constraints, proximity to support systems, and prestige perception (which is itself influenced by incomplete or crowd-sourced information such as Reddit and SDN). So these outcomes reflect applicant circumstances, incentives, and biases more than objective differences in program quality or training.
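To make the small-sample point concrete, here is a minimal sketch (with hypothetical numbers, since the actual Admit sample sizes aren't published) showing how wide the uncertainty on a head-to-head "win rate" is when only a handful of dual-acceptance decisions exist. A 7-out-of-10 record looks decisive, but its confidence interval easily includes a coin flip:

```python
import math

def wilson_interval(wins: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = wins / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# Hypothetical record: School A chosen over School B 7 times out of 10
lo, hi = wilson_interval(7, 10)
print(f"n=10:  observed 0.70, 95% CI [{lo:.2f}, {hi:.2f}]")   # interval includes 0.50

# Same observed rate with a much larger (still hypothetical) sample
lo, hi = wilson_interval(140, 200)
print(f"n=200: observed 0.70, 95% CI [{lo:.2f}, {hi:.2f}]")   # interval excludes 0.50
```

At n=10 the interval spans roughly 0.40 to 0.89, so the data can't distinguish a genuine preference from noise; only with far more matchups does the same observed rate become meaningful. Any pairwise ranking built from uneven per-pair counts inherits this problem.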