One of the most frustrating parts of being in science, if not the most frustrating, is the process of publishing research. At best, from the time you feel you’ve finished writing up your science until it is published, you will likely edit it 4 more times. First there is editing based on co-authors’ input. Then there are the edits from the peer review process, when other scientists request edits as a condition of accepting the research at the publishing venue, i.e., the targeted scientific journal. If you are lucky this involves edits from 2 reviewers. And then, after all of that, comes the actual editorial process, where the journal’s formatting and proofreading editor provides edits – the step one would think of as editing for a journalist or book author. So 4 rounds of edits, and again, this is if everything goes as well as could be hoped.
In practice, things are rarely that easy. Better research typically comes from collaborations, which increases the number of authors and therefore the number of opinions and edits to respond to. Most journals use 3 peer reviewers, and occasionally a lead editor or additional reviewer makes comments that require a response, bringing those rounds of edits up to 4 or 5. And journals often have their own image and table formatting guidelines, or data archival rules, that despite your best efforts you don’t get quite right and have to fix.
And this is all what happens when an article is accepted by the first journal it is submitted to. Needless to say, the process from the end of analyzing data to publication can take a long time – 6 months to a year is common.
It gets much worse when a journal submission isn’t successful. An author typically has two choices when this happens: resubmit to the same journal or try another one. Well, and a third option of giving up on journal publication altogether.
Even when articles are rejected, you are typically still given suggested edits and an invitation to resubmit. This can seem appealing, and is often the most direct route to successful publication. Making the suggested edits is no guarantee your edited re-submission won’t be rejected again, though. And if it isn’t rejected, you will most likely get additional comments leading to another round of edits. Also, the suggested edits after a rejection are often quite significant, requiring additional data collection, additional analyses, or both, as well as the corresponding updates to the text to reflect those changes, before a re-submission can occur.
Submitting to a different journal can therefore also seem appealing. In this case you might be able to publish without having to make the changes that were suggested with the rejection. However, this often requires submitting to a lower-tier journal, and a fair bit of hoping. And again, there will be an editing process should the article be accepted by the second journal.
The real fun begins when you get multiple rejections, or conflicting comments about what to change to avoid rejection. The revision process following rejections, if it goes on long enough, can itself become grounds for rejection. It can take so long that perhaps 2 years have passed since the data was collected and the article was first written. To be current and get accepted, new data collection and an update to the cited literature can be necessary.
If you are a rational human being, you might be asking yourself why you wouldn’t just give up at this point. Personally, I agree that is the rational question. The problem is that in academia, for some reason, the primary way of measuring success is the number and the influence of your publications. For example, the LinkedIn for scientists – ResearchGate – and Google Scholar even give academics scores based on this.
Are you scored based upon how many species your research helps conserve, how many scientists change their methods, how many cancers you contribute to treating, how many students’ lives you impact through teaching, how many “grey list” publications you author, or any other direct relationship between the state of the world being researched and the research that was done? Nope. I’m not going to say it doesn’t get factored into hiring decisions and salaries; surely it does. But only secondarily, at best.
And who does that help? Maybe it makes it easier for hiring committees to sort through CVs. But does it lead to hiring better candidates? And most importantly, does it help accomplish the mission of science, to provide the best available evidence to decision makers and society?
I think it very much does not do that. When research languishes and disappears, its value is lost. Some might say this is desirable, because the research wasn’t of a quality that it should be shared. However, if that is the case, then apparently the peer review process needs to make publishing even harder. According to some, there is a “replication crisis” in science, and plenty of published research isn’t replicable – replicability being the gold standard for quality research.
As an aside to my point, this doesn’t mean that science is failing, or that most of the conclusions in science are wrong. This is how science is supposed to work. It is through learning over time, through repeated data collection and analysis, that the truth rises up: replicable conclusions, and those supported by similar studies, are given more weight.
So rather than try to make a process that slows publishing down even slower, maybe we should go the other way.
How can the journal article review process be designed to best support the advancement of scientific knowledge and support decision making through the dissemination of evidence?
- Minimize the pain caused to researchers by the publication process
- Maximize the dissemination of evidence from research
- Maximize the rate of learning that results from research
- Maximize the quantity of learning that results from research
- Maximize the quality of learning that results from research
- The status quo is as I described above.
- Another option that has been proposed is to make peer review occur post-publication (https://blogs.scientificamerican.com/information-culture/post-publication-peer-review-everything-changes-and-everything-stays-the-same/).
- Bypass or eliminate peer review.
- My suggestion is to have a journal, or perhaps a collection of journals with one for each major field, devoted to publishing all submissions without the possibility of rejection.
- Articles are only accepted to this journal after they have been rejected twice previously, or one year following a rejection.
- They are published with responses to the comments from the most recent set of peer reviews – either revisions, or appropriate caveats explaining why any suggested revisions weren’t made. This is the normal process, but with the comments and responses published as well, as an appendix or supplemental material.
- They are published with commentary from the authors, and the journal, about what can be learned from the article. This should be a guide for future researchers and decision makers, explaining how to responsibly use or build upon the research contained in the article, or advising how to avoid any shortcomings that contributed to the difficulty in obtaining acceptance for the research.
- Data must be provided along with the publication.
Consequences and Trade-offs
I already enumerated the consequences of the status quo above, namely the long time frame and the likelihood of languishing or discarded research and data.
While post-publication peer review seems like a good idea in theory, it likely will not perform well in practice. The issue, I believe, is that while this alternative minimizes pain to researchers and maximizes the dissemination of evidence, it will be poor at maximizing the rate of learning. A reader would have to read the article as well as all of the peer reviews to get a sense of the article’s quality, whereas with peer review prior to acceptance, readers can fall back on a journal’s assessment of quality. While this would make the process more transparent to readers, it asks a larger time commitment from them than most are willing to make. It also lets authors off the hook for revising their work: what incentive is there to respond to peer reviews if the publication step, and therefore the credit, has already been received?
Bypassing peer review altogether is in some ways an appealing approach. However, it essentially requires every reader to conduct their own peer review. While this is how individuals do function, and how we are being forced to live in the information-dense, evidence-scarce culture of our times, I think society asks more of science. Peer review isn’t going to make all publications perfect, and it does slow the publication rate down, but the articles that do get published are likely disseminating more evidence-dense research – research that is likely of higher quality both in the analyses conducted and in the prose used to communicate them.
I believe the no-rejection journal gets around the issues with the other alternatives. The normal peer-review process still occurs, and readers who only want to obtain their evidence from journals using the status quo peer review process would still be able to. However, rather than lose the benefits to learning from articles that languish and disappear under alternative 1, or that are published without revision or caveat under alternatives 2 and 3, a potential win-win – or at least an optimal trade-off – between objectives may be achieved here. While there is still some pain to authors, it is of a known maximum intensity and duration. I’d rather publish in this form than feel the pain of research that forever seems to simmer on a back burner. While the average quality across all publications may decline, the amount of information available to other researchers and decision makers would increase.
This way authors are still able to publish their research, and readers are still able to see the caveats associated with it. Most importantly, learning can occur in multiple ways from this form of publication. Data doesn’t get lost, allowing for better future analyses. Readers can better learn how to evaluate research as they read it. My favorite benefit is that other researchers can learn from the mistakes past authors made, and avoid succumbing to the same pitfalls that resulted in past rejections, thereby helping to limit the pain of future authors.