The Wrong Brain for the Right Job: Why Top STEM Students Don't Always Make the Best Researchers
A reflection on the discontinuity between undergraduate excellence and PhD research aptitude in STEM education.
From my journey as an undergraduate engineering student to the completion of an engineering PhD, I have witnessed a counterintuitive mismatch in how competence is assessed, one that quietly prevents the research community from recruiting its most capable candidates. PIs of research groups want the most insightful and creative PhD candidates, and so they ask for applicants with top grades from their undergraduate and master's study. Yet the STEM graduates who achieved top grades are not always competent PhD candidates, and the most competent researchers are not always the top students in the lecture theatre.
My point is this: the cognitive profile required for excelling in research is not the same as the one required for excelling in coursework.
The Coursework Game
Undergraduate and master's programmes pour existing knowledge and skills into the student's head, whereas a PhD asks the candidate to create new concepts and ideas within a subject. The aptitudes these two activities demand are fundamentally different.
The outcome of a STEM bachelor's programme is predictable. Students know in advance which subjects they will learn and which questions they will solve in examinations. As such, the structure of an engineering programme, whether in a rigid East Asian university or an enlightening North American liberal arts college, is almost identical across the globe: learn the equations, recite their applications, practise the questions, and then sit the exams. The students who memorise the equations and solution steps fastest win the game and achieve excellent grades. Put more charitably, the students with quick minds rise to the top of the class.
Psychologists have a term for this kind of thinking. J. P. Guilford, a psychologist who studied the structure of intelligence, drew a distinction between convergent thinking—the ability to quickly narrow down to a single correct answer—and divergent thinking—the ability to generate multiple novel ideas from an open-ended prompt. Traditional STEM examinations are overwhelmingly convergent: one problem, one known method, one correct answer. The students who thrive are those whose minds converge fastest.
The Research Game
Working and studying as a PhD candidate in a STEM subject is another game entirely. The goal is to discover something that no other researcher has yet noticed. To play this game, one has to connect what one has learnt and draw lines across disparate domains, an aptitude we call "insight" and "creativity." These candidates are not necessarily prodigies at memorising perplexing equations and solving exam papers; they might have struggled with the solution procedures of common exam questions during their undergraduate years. Yet once they have been shown a physics or maths theorem, the potential directions to explore by leveraging it flash into their minds.
This is divergent thinking in action. And research suggests it matters. A study published in PLOS ONE in 2025, tracking students at two California State University campuses over thirty years, found that high undergraduate GPA did not predict greater success in entering or completing a PhD, nor did it predict time to degree. Students with undergraduate GPAs below 3.0 who went through a structured research training programme achieved PhD completion rates above 80%—far exceeding the national average of roughly 50% in biomedical sciences. The authors bluntly concluded that undergraduate GPA is a poor proxy for talent and motivation for a research career.
This finding is not isolated. A widely cited analysis of biological and biomedical PhD graduates at the University of North Carolina and Vanderbilt University found that neither GRE scores nor GPA were associated with first-author publications or other measures of research productivity. The strongest predictor of success turned out to be something far less quantifiable: detailed reference letters from previous research advisors. As the researchers reported in Science, traditional admissions metrics did not predict anything recognisable as scientific productivity: not publications, not conference presentations, not fellowships, not even passing the qualifying exam.
The Mismatch
Ironically, the second type of thinker, the divergent one, is usually not outstanding in undergraduate and master's study, because the assessment method can be disastrous for them. Examinations in undergraduate STEM programmes effectively select for those who think faster but less broadly: students who recognise the question types taught as examples and keep their calculations swift. These top exam-takers are usually deemed the top STEM graduates and hence selected by PhD recruitment.
But being talented at adapting oneself to a framework of assessment does not equal being gifted at studying STEM subjects in the truest sense. A PhD candidate has to decide which direction to go in and how to travel down an uncharted path—and that is where many top undergraduate students falter. Unfortunately, those who might have the genuine aptitude for exploration and innovation are likely to be filtered out by the very admissions criteria designed to find talent, because their cognitive wiring does not give them the swiftness in exam-taking needed to achieve a top grade.
The evidence supports this concern beyond the individual level. A landmark 2020 paper in the American Economic Review by Bloom, Jones, Van Reenen, and Webb found that research productivity in the United States has been declining by an average of 5.3% per year since the 1970s. Just to sustain constant growth in GDP per person, the country must double its research effort every thirteen years. Complementary work by Park et al. (2023) showed that disruptive, paradigm-shifting research has been declining across virtually all scientific fields and technology domains. One contributing factor identified by Bhattacharya and Packalen (2020) in an NBER working paper is that scientific incentive structures—including citation-driven evaluation—have shifted researchers away from exploration and toward incremental science. When recruitment selects for convergent minds and incentive structures reward safe bets, it is no surprise that the rate of genuine breakthroughs slows.
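The thirteen-year doubling figure follows directly from the decline rate: if productivity falls by about 5.3% per year, research effort must grow at the same rate just to hold output constant, and anything growing exponentially at rate r doubles in ln(2)/r years. A minimal arithmetic check, assuming continuous compounding (a simplification; the paper's own accounting is more detailed):

```python
import math

# Research productivity declines ~5.3% per year (Bloom et al., 2020),
# so research effort must grow ~5.3% per year to keep output constant.
decline_rate = 0.053

# Doubling time for exponential growth at rate r is ln(2) / r.
doubling_time_years = math.log(2) / decline_rate
print(round(doubling_time_years, 1))  # ≈ 13.1 years
```

The result lands almost exactly on the "every thirteen years" claim quoted above.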
A Structural Problem
This mismatch between one's aptitude and the system's method of identifying research potential is, I believe, one of the contributing causes of the stagnation of technological progress. We are not necessarily running out of fruit on the tree of knowledge; we may simply be sending the wrong climbers up it.
The fix is not to abandon grades altogether, but to recognise their limits. Research experience, reference letters that speak to a candidate’s curiosity and resilience, and evidence of divergent thinking should carry at least as much weight as a transcript. Some institutions have already begun moving in this direction: many major research universities have dropped the GRE requirement, and holistic admissions are slowly gaining ground. But the culture of grade worship runs deep, and PIs under pressure to justify their recruitment choices still reach for the easiest number on the page.
If we want a research community capable of the breakthroughs that the coming decades demand, we need to get better at spotting the minds that connect—not just the minds that compute.