If you are familiar with the text above, 1) you’re not alone; and 2) I humbly apologize if I’ve awoken some deep trauma within you.

In the world of academic publishing, scientists are always on the lookout for the next big, shiny thing. A new discovery. A novel hypothesis. A methodology that could revolutionize the field. You name it. If crows are attracted to shiny things (it’s an urban myth btw; hence the thumbnail), then academics are attracted to shiny ideas. This extends to my field of ecology too: the papers that garner the most citations are often the first ones to propose a new hypothesis or framework that packages complex ideas neatly and generalizes across ecosystems and species.

My personal take is that our obsession with novelty has impeded the advancement of ecology, not stimulated it.

The art of publishing

It is well known in academia that replication studies aren’t the way to go. If you want to make it big in the field, don’t bother replicating someone else’s work – journals aren’t going to be interested. Instead, one is forced to propose something novel, whether through a sexy new conceptual diagram, a hypothesis no one has ever thought of before, or a possible mechanistic pathway.

But is this how science is supposed to work? Where’s the evidence?

This has become a concerning issue, known as the replication crisis, within several soft-science fields such as psychology, ecology and sociology. Basically, researchers have raised concerns that a growing number of publications’ results are unreproducible, leading them to question the validity of hypotheses and theories proposed in recent years. Without replication, science loses its mechanism to falsify coincidental findings and self-correct. At a systemic level, the literature is getting increasingly saturated with unverified (or untested) novel hypotheses and frameworks, forcing the next generation of scientists to build their research on increasingly flimsy scaffolds of “knowledge”.

This obsession with novelty poses more problems than the replication crisis alone. Dr. Sameer Shah eloquently summarizes the problem with equating novelty with publication value: scientists start to artificially gatekeep the body of academic literature on unscientific grounds, motivated not by the rigorous pursuit of truth but by the desire to advance their own agendas and perspectives, however unfounded these might be. This problem is compounded further when we consider that scientists aren’t exactly known for having small egos. How does science correct itself if big-ego scientists refuse to let their pet theories be (in)validated, citing “a lack of novelty”?

The real dangers of fetishizing novelty over reproducibility

Part of a researcher’s job is to propose new theories and frameworks for visualizing the world, based on their own experience in their field combined with the knowledge built from others’ work. That’s the novelty aspect of the job, and the one most prominently captured in the media (think Nobel Prize winners, Fields Medals, etc.).

The other part includes making sure we get things right, not new.

We owe much of our scientific progress to the unsung heroes of academia, who spend their careers poring over data to ensure that what gets published is as rigorous and robust as possible. However, more often than they should, such checks fail disastrously, with real consequences for the rest of the scientific community. For example, millions of dollars of research funding go to waste when researchers try to build their work on novel yet unreproducible findings from medical trials. Even when lives and money are not at stake, these phantom theories can serve as a barrier to entry for honest investigations that do not align with the so-called “consensus”. And even if unreproducible findings do get retracted eventually (usually because of fraud), that means scientists spent precious hours and resources validating theories that had no business being in the literature in the first place. Finally, when too many researchers chase novelty, the system becomes saturated with papers marketed as the next shiny new thing, like the latest iPhone. Is it really novel then? And who is going to end up reviewing all these “novel” publications?

I can’t speak for the entirety of ecology since the field is massive, but I do encounter this frustration within my subdomain of plant ecology. As an anecdotal example, how many novel trait frameworks do we really need by this point (see this, this, this, and this)?!

Isn’t demonstrating novelty a necessity in science?

No, I don’t think so. Both aspects of research are essential to making sure good science gets put out. Even in ecology, where reproducing results is often a herculean (if not impossible) task, I think there is real value in simply doing the quiet job of producing data solely to make sure that novel findings are appropriately substantiated. In fact, I would prefer it if future framework- and hypothesis-generating papers were published only after their authors have shown that they 1) surveyed the literature as broadly and thoroughly as any meta-analysis, 2) amassed sufficient observations to validate the generality of every component of the proposed framework or hypothesis, and 3) demonstrated its practical utility in the same paper (e.g. through modelling, or a use case by practitioners).

On to the broader issue of why I don’t think novelty is necessary in science. Don’t get me wrong: the desire to publish novel ideas and findings is still a good trait to have. Science, after all, thrives when experts harness their imagination and connect the dots to unravel a new perspective. But at the same time, I think it is time for us to admit that we have a fetish for novelty and that we need to dial it down, remembering that science is rarely built by giant leaps but by baby steps.

And by baby steps, I envision the following changes to the academic publishing system:

  1. Journals should be open to publishing replication studies, or studies that seek to validate existing frameworks and ideas rather than propose new ones. Within this realm, editors should be quick to dismiss cheap, low-effort reviews such as the one at the start of my post, and should refrain from giving such comments themselves.
  2. Framework- and hypothesis-centric papers should be held to a higher burden of evidence (see above for what I think are the minimum criteria) and utility (to sieve out novel-sounding frameworks that end up being self-serving and merely rehashing old ideas).
  3. Organizations need to respect scientists’ need to test the basic tenets of their disciplines rather than craft new ones, and be willing to sponsor such basic research accordingly.

I hope this post sufficiently highlights what I deem to be one of the most perverse incentives in science and in ecology. There’s nothing wrong with being ambitious and proposing grandiose theories and frameworks if one seeks to make it big. But this should not come at the cost of the silent bulk of scientists who work hard behind the scenes to validate them. Journals, reviewers and organizations already explicitly reward the former. It’s the latter whom we need to respect and reward too.

But as for that Reviewer 2 above…