Reframing rejection in academia
“It is possible to commit no mistakes, and still lose. That is not a weakness. That is life,” said then-Captain Jean-Luc Picard. That was thirty years ago (the episode aired on 10 July 1989), but the line is worth remembering. It’s also timely: Picard returns to our screens this autumn, albeit as a retired Admiral. And it’s worth repeating here because I don’t remember saying it at either Young Talent Grants Week or BSS PhD Day.
Briefly, that’s what academia is: doing everything right, getting good reviews, and then receiving a rejection letter anyway. It has happened to me, and—if you choose this life—it will happen to you. It’s part of the game. More important, though, is realizing that you can’t win if you don’t play.
Nobody wins at everything. Although this stings, my advice is not to ruminate on it when it happens. Pick yourself up, dust off your ego, and get back to doing whatever it is that you judge to be worthwhile. The meaningfulness of your work cannot be derived from a count of your acceptance letters.
Rejection as part of the game
Even if none of this sounds familiar, you will probably still have heard about the Rubicon-Veni-Vidi-Vici scheme sponsored by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (the Netherlands Organisation for Scientific Research, known simply as NWO). This is the primary policy vehicle by which the Dutch government supports academic research, and winning one of these grants can set up your career. So everyone applies. Unfortunately, there isn’t enough money to go around.
Rejection rates are sky-high: 95.7% this year, across the two stages of the Veni process (according to the letters I received), and the inter-rater reliability of the entire approach is practically zero. The difficulty of assessing such technical and specialized proposals inevitably produces false positives and false negatives: some projects get funded ahead of others that are more deserving. Indeed, it is common to hear senior academics, including the president of NWO, describe the system as a kind of lottery. Tenure and promotion criteria reflect this too: rather than actually having to win the grants, emerging and early-career scholars need only have their projects deemed “fundable” (see our intranet for the policy in our faculty).
Still, the stress is unbelievable. I gave up a big chunk of my summer last year to the Veni pre-application, and most of my Christmas break was dedicated to responding to the resulting invitation. The reviews, fortunately, were good. (My pre-application was rated “excellent,” and one of my reviewers was “excited.”) In the time since my application, an EU policy related to my work also changed in a way that supported my proposal. But then I wasn’t invited to an interview. Because the letter informing me of this included the word “unfortunately,” it’s safe to say that the final judgment later this summer will be a rejection.

For me, this isn’t career-ending. I am fortunate to already have a tenure-track position, and I have been successful in applying for other grants. That, perhaps, is why I was invited to speak about the Veni application process at Young Talent Grants Week: I’ve played the game enough times to have both won and lost, so I can smile about it whatever the outcome. And because, for many in this audience, this summer will again be dominated by these applications, I wanted to share some additional reflections on the process.
Why?
In short: these applications are hard on purpose. The word counts are insufficient, the format is constantly changing, the CV requirements are unusual, and the publication expectations are unpredictable. The focus on Knowledge Utilization, rather than Knowledge Translation, is also frustrating. All of this, though, is part of the strategy: it is how granting agencies weed out lower-quality projects before the peer reviews even begin.
Think about it: if a large fraction of would-be applicants conclude that their proposals won’t be competitive, they will also conclude that the application isn’t worth the very substantial effort. The result is fewer applications to review and, since the number of grants on offer stays fixed, a lower rejection rate.
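To make the arithmetic concrete, here is a toy calculation in Python. The numbers are mine, purely for illustration, not NWO figures; the only assumption is that the budget funds a fixed number of projects regardless of how many people apply:

```python
# Toy model: a fixed budget funds 150 projects, no matter the pool size.
# (Illustrative numbers only; these are not NWO's figures.)
FUNDED = 150

for applicants in (1000, 500):
    rejection_rate = 1 - FUNDED / applicants
    print(f"{applicants} applicants -> {rejection_rate:.0%} rejected")

# Output:
# 1000 applicants -> 85% rejected
# 500 applicants -> 70% rejected
```

Halving the pool through self-selection cuts the headline rejection rate from 85% to 70% without a single extra euro being spent.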
This is diabolical, because it means that some of the inefficiencies are intentional. It’s also very clever, given the institutional goal of funding the right things: how do you solve a selection problem in which the decision-maker knows less about each project than the individual experts do, and those experts are encouraged to frame their applications in the best possible light?
Journal editors face the same problem. Everyone wants to present their research as ground-breaking, and therefore eminently publishable (and fundable). But that is rarely the case. Not everything can be revolutionary, or the system of scientific knowledge production would collapse; science needs stability as well as change. So editors, reviewers, and funders all struggle with the same question: with very limited resources to invest, and many different kinds of research to invest in, how can the overall value of the portfolio be maximized efficiently?
One tool that’s often used has nothing to do with the research itself. Instead, the idea is to optimize the stress of the submission process. This is explained in a new preprint by a team led by Dr Leonid Tiokhin of the Human Technology Interaction group at Eindhoven University of Technology:
Submission costs de-incentivize scientists from submitting low-quality papers to high-impact journals via two mechanisms: differential benefits (e.g., low-quality papers are less likely to be published than high-quality papers) and differential costs (e.g., low-quality papers have higher submission costs than high-quality papers). Costs to resubmitting rejected papers also promote honesty. Without any submission costs, scientists benefit from submitting all papers to high-impact journals, unless papers can only be submitted a limited number of times.
In other words, the inconvenience of submitting is planned; the stress and frustration, strategic. Higher application costs in time and energy mean fewer applicants, which means fewer false negatives, fewer false positives, and a more efficient distribution of funds to those whose projects both need and merit the investment. (A limit on the number of submissions per applicant has also been adopted by NWO, and the European Research Council uses one too.)
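To see how the two mechanisms interact, here is a minimal sketch in Python. Every payoff, probability, and cost below is an assumption of mine for illustration; none of these numbers come from the preprint by Tiokhin and colleagues:

```python
# Minimal sketch of the submission-cost logic: differential benefits
# (low-quality papers are accepted less often) plus differential costs
# (low-quality papers cost more to get submission-ready).
# All numbers are illustrative assumptions, not values from the preprint.

PAYOFF = 10.0                              # value of an acceptance
ACCEPT_PROB = {"high": 0.5, "low": 0.1}    # differential benefits
SUBMIT_COST = {"high": 1.0, "low": 3.0}    # differential costs

def expected_value(quality: str, cost_scale: float) -> float:
    """Expected payoff of submitting one paper of the given quality."""
    return ACCEPT_PROB[quality] * PAYOFF - cost_scale * SUBMIT_COST[quality]

for cost_scale in (0.0, 1.0):              # costless vs. costly submission
    print(f"cost scale = {cost_scale}:")
    for quality in ("high", "low"):
        ev = expected_value(quality, cost_scale)
        decision = "submit" if ev > 0 else "stay out"
        print(f"  {quality}-quality paper: EV = {ev:+.1f} -> {decision}")
```

With costless submission, both papers have positive expected value, so everyone submits and the reviewers drown. Once submission carries a real cost that bites harder on weaker work, only the high-quality paper clears the bar, which is exactly the self-selection the quoted passage describes.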
You would be forgiven for thinking that they don’t want your application. They don’t. And they make sure you feel that they don’t. Indeed, that’s the strategy described by Tiokhin and colleagues.
Revise-and-resubmit
The trouble with this, for would-be applicants, is that the reviews themselves are a valuable part of the application process. Indeed, they are so valuable that, if you’re ready to benefit from the feedback, and if the reviewers are honest and conscientious in providing it, the inconvenience of explaining your work their way, using their templates, is often worthwhile. My advice for grant applications is therefore the same as for writing for publication: aim for revise-and-resubmit, expect an unenthusiastic response, and then use the resulting expert commentary to make your next draft stronger. Even a complete misunderstanding shows you where your explanation needs to be clarified.
Allow me, on this basis, to reframe rejection: if you aren’t having your work rejected, it’s not because you’re a genius. It’s because you aren’t aiming high enough. So aim high, expect to miss, learn from the feedback, and then try again. To put it another way: play the long game. That’s how you turn rejection into acceptance; more importantly, it’s how to win at life.
Thanks for this piece, Jeremy. The ‘submission cost’ idea is very interesting. It may help to explain the culture of narcissism that has been infecting psychology for decades. The people who are least likely to be deterred by high submission costs are the people who are most convinced that their work is truly awesome, whether this is true or not. Your advice to aim for rejection (or at least not aim for immediate acceptance and praise) can be helpful in counteracting this, although of course it doesn’t help us get rid of the notion that it is all about ‘winning’ somehow. 🙂
Another issue is that the journals that reject more are not necessarily higher quality journals. They tend to have a higher impact factor, but that is not the same thing as higher quality. See https://www.frontiersin.org/articles/10.3389/fnhum.2013.00291/full Quote: “using journal rank as an assessment tool is bad scientific practice”.