Reframing rejection in academia

“It is possible to commit no mistakes, and still lose. That is not a weakness. That is life,” said then-Captain Jean-Luc Picard. That was thirty years ago. (The episode aired on 10 July 1989.) But it’s worth remembering. It’s also timely: Picard returns to screens this autumn, albeit as a retired Admiral. And since I don’t remember saying it at either Young Talent Grants Week or BSS PhD Day, I’ll say it here.

Briefly, that’s what academia is: doing everything right, getting good reviews, and then receiving a rejection letter anyway. It has happened to me, and—if you choose this life—it will happen to you. It’s part of the game. More important, though, is realizing that you can’t win if you don’t play.

Nobody wins at everything. Although this stings, my advice is to not ruminate on it when it happens. Pick yourself up, dust off your ego, and get back to doing whatever it is that you judge to be worthwhile. The meaningfulness of your work cannot be derived from the number of your acceptance letters.

Rejection as part of the game

Even if none of this sounds familiar, you will probably still have heard about the Rubicon-Veni-Vidi-Vici scheme sponsored by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (which is known in English simply as NWO). This is the primary policy vehicle by which the Dutch government supports academic research. And winning one of their grants can set up your career. So everyone applies. Unfortunately, though, there isn’t enough money to go around.

Rejection rates are sky-high: 95.7% this year, across the two stages of the Veni process (according to the letters I received). And the inter-rater reliability of the entire approach is practically zero. The difficulty of assessing such technical and specialized proposals then leads inevitably to false positives and false negatives: some projects get funded ahead of others that are more deserving. Indeed, it is common to hear senior academics—including the NWO-voorzitter—describe the system as a kind of lottery. Tenure and promotion criteria reflect this too: rather than actually having to win the grants, emerging and early-career scholars need only have their projects deemed “fundable” (see our intranet for the policy in our faculty).

Dr Burman cracks jokes less than 48 hours after receiving his Veni rejection. Also shown are fellow panellists Dr Jonkers and Dr Desjardins, from the “VENI: The inside scoop” session at Young Talent Grants Week.
[Photo by Daniël Houben; included here with permission.]

Still, the stress is unbelievable. I gave up a big chunk of my summer last year to the Veni pre-application, and most of my Christmas break was dedicated to responding to the resulting invitation. The reviews, fortunately, were good. (My pre-application was rated “excellent,” and one of my reviewers was “excited.”) In the time since my application, an EU policy related to my work also changed in a way that supported my proposal. But then I wasn’t invited to an interview. Because the letter informing me of this included the word “unfortunately,” it’s safe to say that the final judgment later this summer will be a rejection.

For me, this isn’t career-ending. I am fortunate to already have a tenure-track position. And I have been successful in applying for other grants. That, perhaps, is why I was invited to speak about the Veni application process at Young Talent Grants Week: I’ve played the game enough times to have won and lost, so I can smile about it no matter the outcome. And because this summer will again be dominated by these applications for many in this audience, I wanted to share some additional reflections on the process.


In short: these applications are hard on purpose. The word counts are insufficient, the format is constantly changing, the CV requirements are unusual, and the publication expectations are unpredictable. The focus on Knowledge Utilization, rather than Knowledge Translation, is also frustrating. This, though, is all part of the strategy: it is how granting agencies weed out lower-quality projects before they even begin their peer reviews.

Think about it: if a large fraction of the applicant pool doesn’t think their proposal will be competitive, then they will conclude that it’s not worth the very substantial effort to apply. So there will be fewer applications to review, and therefore a lower rejection rate.
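To put illustrative numbers on that (mine, not NWO’s): with a fixed budget, the rejection rate is simply one minus the ratio of fundable slots to applications, so anything that deters applicants lowers it directly. A minimal sketch, assuming a hypothetical agency that can fund 15 projects:

```python
def rejection_rate(applications: int, funded_slots: int) -> float:
    """Share of applications rejected when the number of funded slots is fixed."""
    return 1 - funded_slots / applications

# Hypothetical numbers: suppose the agency can fund 15 projects.
for pool in (350, 250, 150):
    print(f"{pool} applications -> {rejection_rate(pool, 15):.1%} rejected")
# 350 applications -> 95.7% rejected
# 250 applications -> 94.0% rejected
# 150 applications -> 90.0% rejected
```

The research being funded hasn’t changed at all in this toy calculation; only the size of the pool has.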

This is diabolical, because it means some inefficiencies are intentional. It’s also very clever, given the institutional goal of funding the right things: How do you solve a choice-problem where the decision-maker knows less, at the level of each project, than the individual experts who are encouraged to frame their applications in the best possible light?

Journal editors face the same problem. Everyone wants to present their research as ground-breaking, and therefore eminently publishable (and fundable). But that is rarely the case. Everything can’t be revolutionary, or the system of scientific knowledge production would collapse. Science needs stability, as well as change. So editors, reviewers, and funders struggle: with very limited resources to invest, and many different kinds of research to invest in, how can the overall value of the portfolio be maximized in an efficient way?

One tool that’s often used has nothing to do with the research itself. Instead, the idea is to optimize the stress of the submission process. This is explained in a new preprint authored by a team led by Dr Leonid Tiokhin of the Human Technology Interaction Group at the Eindhoven University of Technology:


Submission costs de-incentivize scientists from submitting low-quality papers to high-impact journals via two mechanisms: differential benefits (e.g., low-quality papers are less likely to be published than high-quality papers) and differential costs (e.g., low-quality papers have higher submission costs than high-quality papers). Costs to resubmitting rejected papers also promote honesty. Without any submission costs, scientists benefit from submitting all papers to high-impact journals, unless papers can only be submitted a limited number of times.

In other words, the inconvenience of submitting is planned. The stress and frustration, strategic. Higher application costs in time and energy mean fewer applicants, which becomes fewer false negatives, fewer false positives, and a more efficient distribution of funds to those whose projects both need and merit the investment. (The limited number of submissions per applicant has also been adopted by NWO, and the European Research Council uses it too.)
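That mechanism can be sketched as a toy decision model (my own simplification for illustration, not the authors’ actual simulation): a scientist submits to the high-impact journal only when the expected benefit of acceptance outweighs the submission cost, so any positive cost filters out the lowest-quality papers before a reviewer ever sees them.

```python
import random

def acceptance_prob(quality: float) -> float:
    # Higher-quality papers are more likely to be accepted
    # (the "differential benefits" mechanism in the quote above).
    return quality

def submits(quality: float, benefit: float = 10.0, cost: float = 3.0) -> bool:
    """Submit only when the expected payoff exceeds the submission cost."""
    return acceptance_prob(quality) * benefit > cost

random.seed(42)
papers = [random.random() for _ in range(10_000)]  # paper quality in [0, 1]

with_cost = [q for q in papers if submits(q, cost=3.0)]
no_cost = [q for q in papers if submits(q, cost=0.0)]

print(f"with cost:    {len(with_cost)} submissions, "
      f"mean quality {sum(with_cost) / len(with_cost):.2f}")
print(f"without cost: {len(no_cost)} submissions, "
      f"mean quality {sum(no_cost) / len(no_cost):.2f}")
```

With these made-up parameters, the cost screens out every paper whose acceptance chance is below 30%, so the journal receives fewer submissions of higher average quality; with no cost, everyone submits everything, exactly as the preprint warns.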

You would be forgiven for thinking that they don’t want your application. They don’t. And they make sure you feel that they don’t. Indeed, that’s the strategy described by Tiokhin and colleagues.


The trouble with this, for would-be applicants, is that just getting the reviews is a valuable part of the application process. Indeed, the reviews are so valuable that—if you’re ready to benefit from the feedback, and if the reviewers are honest and conscientious in providing it—the inconvenience of explaining your work in their way using their templates is often worthwhile. My advice regarding applying for grants is therefore the same as it is for writing-for-publication: aim for revise-and-resubmit, expect an unenthusiastic response, and then use the resulting expert commentary to make your next draft stronger. Even a complete misunderstanding shows you where your explanation needs to be clarified.

Allow me, on this basis, to reframe rejection: if you aren’t having your work rejected, it’s not because you’re a genius. It’s because you aren’t aiming high enough. So aim high, expect to miss, learn from the feedback, and then try again. To put it another way: play the long game. That’s how you turn rejection into acceptance; more importantly, it’s how to win at life.

Jeremy Trevelyan Burman on Twitter

Dr Burman joined the tenure-track at the University of Groningen in 2016, after working for two years at the Archives Jean Piaget in the University of Geneva, Switzerland. He received his doctorate in Psychology from the History and Theory Graduate Programme at York University in Canada. In addition, he has a separate terminal MA in Interdisciplinary Studies (also from YorkU), and an Honours BSc in Psychology from the University of Toronto. Prior to his turn to academia, he worked in broadcasting and—before that—even made a go of it as an entrepreneur during the dot-com bubble. You can find him most easily on Twitter: @BurmanPhD

Select recent publications

Burman, J. T. (2022). Meaning-change through the mistaken mirror: On the indeterminacy of “Wundt” and “Piaget” in translation. Review of General Psychology, 26(1), 22-48. doi:10.1177/10892680211017521

Burman, J. T. (2021). The genetic epistemology of Jean Piaget. In W. Pickren (Ed.), The Oxford Research Encyclopedia of the History of Psychology. Oxford University Press. doi:10.1093/acrefore/9780190236557.013.521

Ratcliff, M. J., Tau, R., & Burman, J. T. (2020). Overcoming mind-brain dualism. Constructivism, interdisciplinarity, and psychophysiological parallelism in Piaget’s cognitive evolutionary synthesis. Mefisto: Journal of Medicine, Philosophy, and History, 4(2), 39-60.

Burman, J. T. (2020). On Kuhn’s case, and Piaget’s: A critical two-sited hauntology (or, on impact without reference). [In C. Millard & F. Callard, eds. of special issue dedicated to the memory of John Forrester.] History of the Human Sciences, 33(3-4). doi:10.1177/0952695120911576

Burman, J. T., & Collins, B. M. (2020). Commentary: Why study the History of Neuroscience? Frontiers in Behavioral Neuroscience, 14, 1-2. doi:10.3389/fnbeh.2020.00127

Burman, J. T. (2020). On the implications of object permanence: Microhistorical insights from Piaget’s new theory. Behavioral and Brain Sciences, 21-22. doi:10.1017/S0140525X19002954

Burman, J. T. (2019). Development. In R. J. Sternberg & W. Pickren (Eds.), Handbook of the Intellectual History of Psychology (pp. 287-317). Cambridge University Press.

Burman, J. T. (2018). Digital methods can help you… If you’re careful, critical, and not historiographically naïve. [Introduction to special issue on digital history of psychology.] History of Psychology, 21(4), 297-301. doi:10.1037/hop0000112

Burman, J. T. (2018). Through the looking-glass: PsycINFO as an historical archive of trends in psychology. History of Psychology, 21(4), 302-333. doi:10.1037/hop0000082

Burman, J. T. (2018). What is History of Psychology? Network analysis of Journal Citation Reports, 2009-2015. Sage Open. doi:10.1177/2158244018763005

Ratcliff, M. J., & Burman, J. T. (2017). The mobile frontiers of Piaget’s psychology: From academic tourism to interdisciplinary collaboration / Las fronteras móviles de la psicología de Piaget. Del turismo académico a la colaboración interdisciplinaria. [English original published alongside a Spanish translation by Julia Fernández Treviño.]. Estudios de Psicología: Studies in Psychology, 38(1), 1-33. doi:10.1080/02109395.2016.1268393

Burman, J. T. (2017). Philosophical histories can be contextual without being sociological: Comment on Araujo’s historiography. Theory & Psychology, 27(1), 117-125. doi:10.1177/0959354316682862

Burman, J. T. (2016). Piaget’s neo-Gödelian turn: Between biology and logic, origins of the New Theory. Theory & Psychology, 26(6), 751-772. doi:10.1177/0959354316672595

Comments


  • Eric Rietzschel July 16, 2019  

    Thanks for this piece, Jeremy. The ‘submission cost’ idea is very interesting. It may help to explain the culture of narcissism that has been infecting psychology for decades. The people who are least likely to be deterred by high submission costs are the people who are most convinced that their work is truly awesome, whether this is true or not. Your advice to aim for rejection (or at least not aim for immediate acceptance and praise) can be helpful in counteracting this, although of course it doesn’t help us get rid of the notion that it is all about ‘winning’ somehow. 🙂

  • Maarten Derksen July 17, 2019  

    Another issue is that the journals that reject more are not necessarily higher-quality journals. They tend to have a higher impact factor, but that is not the same thing as higher quality. As one much-quoted analysis put it: “using journal rank as an assessment tool is bad scientific practice.”
