open access

Abstract

We commonly assume that information provided in instructions and publications is correct and based on proven, unbiased knowledge. In a recent study on strawberry seed germination, which will be reported concurrently with this letter, the seed supplier's instructions (based on accepted wisdom among growers) were actually tested, and the results of that part of the study brought to mind the insidious nature of publication bias. This testing was done because the seed planting in question was a crucial component of a scientific study I recently completed on the effects of electromagnetic fields on seed germination, in which more than 7,000 seeds were planted, carefully and individually, and their germination rates were studied closely.

Briefly, the recent study to which I will refer describes the interaction between three different treatments applied prior to planting the seeds: pre-freezing, pre-soaking, and the application of PEMF (pulsed electromagnetic fields). It is widely held that the first two, especially pre-freezing, are essential for strawberry seed germination. These procedures are also often reported as a side note in the methods sections of scientific reports, so I took them as a given. Our goal was to determine the extent to which the application of PEMF interacted with either or both of the widely accepted pre-treatments. But when the data came in, I did not see evidence of any positive effect on germination from either pre-treatment, whether studied separately or in combination. In fact, both pre-treatments appeared to have a slight negative effect on the germination rate, and neither interacted with PEMF treatment in a positive way at any level of practical importance. These negative results will be submitted for publication in JoSaM, in keeping with our policy to fight against publication bias.

Citation

Dennis R. (2020). Letter to the Editor: Strawberries: Facts, Truth, Misinformation, Publication Bias and the Importance of Negative Results. Journal of Science and Medicine; 3(1):1-7. https://doi.org/10.37714/josam.v2i4.59.

Background

It is certainly possible that I botched one or both of the pre-treatments; I am, after all, a notorious brown-thumb. It is certainly possible that one or both, if properly applied, would have resulted in a significantly enhanced germination rate. With this in mind, I reported sufficient methodological detail that such errors may later be brought to light. If this comes to pass, I offer a sincere mea culpa. But so long as the results of this study stand, the simple truth is that there is a negative result. Is it important? Who am I to judge; are strawberries important? It is but one small reminder that unless we guard it closely, science can become an unwitting tool of willful misinformation at its worst, or a propagator of silly but otherwise harmless myths at the very least. But I suspect the real danger is in the middle of this spectrum, where unwitting bias short-circuits the self-correcting nature of well-conducted science, leading to an amplification of irreproducible results and a massive loss of time and financial resources: chasing bad ideas, repeating and suppressing negative findings, and giving false hope, while the public pays taxes, waits, and endures decades of needless suffering. This outcome has been reported in the general health and medicine literature by Richard Harris, an NPR science correspondent [1], and others [2]. The title of Harris' book is sobering, especially coming from a correspondent for NPR, an organization known for its careful, balanced, and data-driven approach to reporting, and its uniform support of science. His title is: "Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions". It is worth occasionally revisiting and contemplating that title.

Academic scientists have known about this problem for many decades. Some have taken it seriously. Most remain blithely unaware and blindly believe that peer review in science is a bullet-proof mechanism of quality assurance. A few scholars who study the trends in the medical research literature attribute this problem to the simple need to regularly release stunning new results (always positive) using catchy phrases to grab the attention of readers and grant-reviewers [3], thus the alternative titles of this letter. That should improve my readership…

But many academics are on record asserting that this is not a serious problem, or they willfully neglect it, or they forcefully deny it in public. Most damaging are the silent deniers: those who sit on editorial boards for major scientific journals and those who regulate the flow of information into and out of their research groups. I have experienced this personally, and I doubt my experience was unique. I had a graduate advisor who refused to even contemplate negative results. He had more than 200 peer-reviewed publications to his credit while I was his graduate student in the early 1990s. He boasted that he had never carried out a study that "didn't work" (i.e., had negative results). Everything he had ever tried had worked, he claimed; he made certain of that, and was known to deal harshly with graduate students who were unable to deliver the expected positive results. I pointed out that I had found negative results during my dissertation research, and that they should be published because they related directly to ongoing controversies in the field of muscle physiology. His response was an abrupt "Absolutely not." He would not allow negative results to be published from his laboratory. I pointed out that negative results were inevitable in science when hypotheses were being tested honestly: sometimes they are supported, other times they are not, which is by definition a negative result.

His response was swift and dismissive. He pointed out the window of the laboratory and asked condescendingly: “Can I publish a paper proving that the red VW out there DOES NOT cause cancer? Well, can I?” I responded by stating that the hypotheses under test needed to be plausible, but before I could finish the sentence he had turned away and was engaged in a different conversation. I learned quite a lot from my advisor.

The Importance of Negative Results

The life sciences face serious challenges from the replication crisis and, more specifically, from publication bias. People tend to avoid testing and verifying "the known", and to preferentially publish that which reinforces existing belief, especially positive results. When this tendency is reinforced by the publication policies of scientific journals and the grant-awarding policies of federal research agencies, the result is increasing levels of scientific hype, fraud, fabrication and falsification of data, and an increasing aversion to high-risk and innovative research [4].

Independent replication and subsequent publication of a negative result is indeed a rare event, and this results in a great deal of lost knowledge. Perhaps most of what science has to teach us has been sacrificed on the altar of publication bias. Some invariably ask, rhetorically: why is it important to know when something does not work? This predisposition is not limited to the non-scientist. Many scientists harbor this bias against the publication of negative results, and I know of cases in which highly distinguished academic scientists (including my dissertation advisor), with several hundred publications to their credit and seats on the editorial staffs of prominent journals, refuse to entertain the thought of publishing negative results. Another very senior academic, well known to me, who sat on the editorial board of a major scientific journal for three decades, has consistently rejected papers that showcase negative results, insisting that scientists "… need to keep trying until they get it to work" (personal communication). One very distinguished scholar went so far as to force her graduate student to take a course in Japanese tea ceremony, to "teach her to do things right, every time", with the full expectation that positive results would be forthcoming. Unfortunately, that is not a proper or even helpful way to conduct science. The student dutifully studied tea ceremony, but as her skill grew she repeatedly demonstrated a negative result, each time more convincingly, for her advisor's favorite pet theory. The crushingly decisive negative results were never published, but the graduate student successfully aged several years, was marginalized by her indignant advisor, and eventually dropped out of graduate school to start a family. So I guess it was not entirely negative…

If the importance of this is not intuitively clear, a simple analogy illustrates the fundamental value of registering negative findings: when searching for a set of car keys, it is just as important to know where the keys are not as it is to know where they might be. And there will invariably be many more plausible places where the keys could be, but are not, than places where the keys actually are. Unless you keep track of where they are not, you end up repeatedly looking in the same places. When searching for car keys, this is mildly irritating and sometimes comical. When searching for the cure for a major disease… not so much.
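To make the bookkeeping concrete, here is a minimal sketch in Python, purely illustrative and not part of the study: it contrasts a searcher who records negative results (places already checked) with one who does not. The location names, probabilities, and function names are invented for the example.

```python
import random

# Hypothetical illustration of the car-keys analogy: a searcher who records
# "negative results" (places already checked) never wastes effort re-checking
# them, while a searcher with no memory keeps revisiting the same spots.

def search_with_memory(places, keys_at, rng):
    checked = set()           # the registry of negative results
    attempts = 0
    while True:
        spot = rng.choice([p for p in places if p not in checked])
        attempts += 1
        if spot == keys_at:
            return attempts
        checked.add(spot)     # record where the keys are NOT

def search_without_memory(places, keys_at, rng):
    attempts = 0
    while True:
        spot = rng.choice(places)   # may re-check places already searched
        attempts += 1
        if spot == keys_at:
            return attempts

if __name__ == "__main__":
    rng = random.Random(42)
    places = [f"spot_{i}" for i in range(20)]
    keys_at = "spot_17"
    trials = 1000
    with_mem = sum(search_with_memory(places, keys_at, rng) for _ in range(trials)) / trials
    without_mem = sum(search_without_memory(places, keys_at, rng) for _ in range(trials)) / trials
    print(f"average attempts, remembering negatives: {with_mem:.1f}")
    print(f"average attempts, forgetting negatives:  {without_mem:.1f}")
```

The only difference between the two searchers is whether the negative findings are recorded; the searcher who forgets them pays for it in wasted, repeated effort.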

In the conduct of science, the process of systematic search is much more nuanced. But it rings even more true in scientific research: the plausible explanations that turn out to be unsupported by experiment far outnumber the few that describe actual reality and are supported by experiment. And to really learn from negative findings, it is critical to report not just the negative results, but also methodological detail sufficient for independent replication, especially when critical hypothesis tests or independent replications of critical findings have been carefully conducted and have yielded negative results. It can be argued that the Methods section of any scientific paper is more important than the Results section. Unfortunately, there has been a trend for decades to progressively shorten, marginalize, or intentionally conceal methodological detail. It is entirely plausible that at least 80% of our scientific knowledge, especially in the life sciences, has been lost simply because of the insidious and pervasive effects of publication bias, and a great deal of faulty knowledge has been promulgated through the fundamental irreproducibility of many peer-reviewed works.

The negative results in my recent study, insofar as strawberry seeds are concerned, are but one small example of this much larger problem, which is particularly endemic to the life sciences [5]. It is almost certainly the case that scientific reports in the general fields of plant and agricultural sciences are subject to considerable publication bias, where results of "no significant effect" or even "negative effect" are frequently withheld from the scientific literature, either by the reviewers, the editor, or the researchers themselves, who often view a negative result as a failed experiment. This can leave the uninformed reader with the erroneous impression that "everything that is published in peer-reviewed journals really has been proven to work". Without reports of negative results from carefully designed and executed studies, it is quite literally impossible to determine exactly what works and what does not.

Flawed interpretation based upon a severely biased set of data is not just a human frailty. For example, if we were to feed the entire body of peer-reviewed data concerning any field of medicine, say the search for potential new cancer treatments, into an artificial intelligence or machine learning program, the machine would lack at least 80 to 90% of the data, namely the data indicating all of the things we have tried that do not work, leaving us with the biased and erroneous impression (and the false hope) that the correct answers are already in our published scientific literature… if only we could mine them out. This has been the false hope promulgated by the fields of bioinformatics and "Big Data", to name just two common examples, for the past several decades.

This is an enormous problem in the sciences because, when there exists a strong bias to publish only positive scientific findings, their repetition eventually elevates them to be accepted as 'fact' [5,6]. And once a 'fact' has been established, it further biases scientific publication and consensus through the additional psychological mechanism of confirmation bias. Thus science, rather than self-correcting its inevitable errors as it should, is directed strongly toward positive findings, which leads to subsequent findings that reinforce (confirm) the initial positive findings. Most negative findings are simply not published and distributed, and therefore play no role in correcting this type of self-reinforcing error. When aggregated, the large number of positive findings, due to bias, will outweigh the much smaller number of negative findings, and this leads to a scientific consensus simply by weight of the available evidence. Detailed mathematical modeling of this process of repetition and acceptance has shown that false claims can frequently become canonized as fact [6]. After all, everyone knows you need to freeze strawberry seeds to get them to germinate; this is what you will read anywhere you look. That particular belief would be relatively simple to correct because it is of minor consequence, and it is inconsequential compared to matters of major disease or federal regulation and policy, but the underlying psychological mechanisms are the same. Individual people, governments, and, increasingly, artificial intelligences make decisions largely upon the relative weight of available evidence.
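To illustrate the mechanism, here is a minimal simulation sketch, loosely inspired by the canonization model of Nissen et al. [6] but not a reimplementation of it; every parameter value is an illustrative assumption, not a figure from the cited paper. A false claim is tested repeatedly, positive outcomes are published more readily than negative ones, and the community updates its belief naively on whatever reaches print.

```python
import math
import random

# A minimal simulation sketch of how publication bias can entrench a false
# claim; loosely inspired by the canonization model of Nissen et al. [6] but
# NOT a reimplementation of it. All parameter values are illustrative
# assumptions.

def log_odds(p):
    return math.log(p / (1.0 - p))

def belief_from_log_odds(lo):
    # numerically safe logistic function
    if lo >= 0:
        return 1.0 / (1.0 + math.exp(-lo))
    e = math.exp(lo)
    return e / (1.0 + e)

def simulate(n_experiments=500,
             p_false_positive=0.10,   # a false claim still "works" 10% of the time
             p_true_positive=0.80,    # a true claim would work 80% of the time
             pub_rate_positive=0.90,  # assumed publication rate for positive results
             pub_rate_negative=0.10,  # assumed publication rate for negative results
             prior=0.10,              # community's initial belief that the claim is true
             seed=1):
    rng = random.Random(seed)
    lo_published = lo_all = log_odds(prior)
    for _ in range(n_experiments):
        positive = rng.random() < p_false_positive   # the claim is actually false
        # Naive Bayesian update that treats the literature as unbiased.
        if positive:
            update = math.log(p_true_positive / p_false_positive)
        else:
            update = math.log((1 - p_true_positive) / (1 - p_false_positive))
        lo_all += update
        pub_rate = pub_rate_positive if positive else pub_rate_negative
        if rng.random() < pub_rate:   # only published results reach the community
            lo_published += update
    return belief_from_log_odds(lo_published), belief_from_log_odds(lo_all)

if __name__ == "__main__":
    biased, unbiased = simulate()
    print(f"belief in the (false) claim, reading only published results: {biased:.3f}")
    print(f"belief if every result had been reported:                    {unbiased:.3f}")
```

The two printed beliefs are computed from exactly the same experiments; the only difference is that one reader sees every result, while the other sees only what survives the publication filter.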

Active Disinformation

While the academic community has been largely slipping into this unwittingly and somewhat fecklessly, the same psychological mechanism has been used willfully for nefarious purposes. When a false statement is amplified, but assertions to the contrary are not, it eventually becomes "truth" simply because of the ostensible "weight of evidence". A quote often attributed (perhaps erroneously) to Joseph Goebbels, the Nazi Reich Minister of Propaganda, is: "If you tell a lie big enough and keep repeating it, people will eventually come to believe it." The assertion that he actually said this is disputed, but the fact that the Nazi Propaganda Ministry practiced it is not. This practice did not die in a bunker with Hitler and Goebbels. It is alive and well, and is actively undermining the faith of the general public of the Free World in science, expert opinion, and democracy itself.

Since 2016, the New York Times [9] and news sources of similar prominence have published numerous articles and editorials, based on expert opinion including statements from KGB defectors, on how Russian Disinformation has had a major influence on American elections, on trust in vaccines and in science in general, on belief in the COVID-19 pandemic, and on the amplification of conspiracy theories ranging from the dangerous (belief in QAnon Deep State conspiracies) to the strange, such as amplifying the belief that the Earth is a flat disc. Spherical-earth deniers now number in the tens of thousands and are expanding rapidly "across the Disc", enough to support annual Flat Earth conferences of 600 people or more, and an estimated 11 million people in Brazil reportedly believe the Earth is flat. This has been attributed to their evangelical religious beliefs [8], but it is also part of the alarming modern trend of rejecting science, experts, and facts that is at the heart of Russian Disinformation. Russian Disinformation has been analyzed in detail by major non-partisan think tanks and is considered a real and present threat to democracy [7]. The way that Russian Disinformation works is counter-intuitive: its operators generally do not need to create lies; they simply find them on fringe sites on the Internet and then, using an army of Internet trolls, amplify the ones that serve their nefarious purposes by factors of tens of thousands on social media [7]. The innocent reader predictably formulates an opinion based on the weight of evidence: "everyone on the web is talking about it!"

Impact on Funding and Policy

The truly sad thing about this is that the academic medical research community in the Free World has been unwittingly undermining its own credibility through its growing failure to deliver, decade after decade [1], on health and medical hype and promises made on the basis of irreproducible science and academic publication bias. It is only a matter of time before some clever FSB (the modern successor to the KGB) agent leverages this demonstrable weakness of academic science in the Free World. Imagine the impact on NIH funding when 100 million taxpayers howl about "fake science" and demand to defund "fake academic medical research", reinforced and amplified by a reality talk show con man hiding in a bunker in Washington DC, with 88 million followers on social media. When people lose faith in facts, science, and experts, we have no way to combat such misinformation, whether it is scientific or political, and whether it is the result of feckless bias built into academic practice or the willful effort of a hostile foreign state.

Current Trends

Publication bias is real, it is growing, and it continues to be formally studied [4]. In a study of 4,600 peer-reviewed papers published between 1990 and 2007, the proportion of published results that were positive was found to have exceeded 80% since 1999, peaking at 88.6% in 2005, and the problem was especially severe in the biological and social sciences compared with the physical sciences.

There is no reason to believe that the trend of increasing publication bias has reversed, suggesting that, in the biological sciences, a publication bias on the order of 90% or greater is likely at this time. This is illustrated by a meta-analysis of 74 FDA-registered studies of anti-depressants [10], which showed that, at that time, studies with positive results were about 10 times as likely to be published as studies with negative results on the efficacy of the same class of drugs. The practical implication is clear: due to publication bias, the published literature presents roughly a 10-to-1 ratio of evidence favoring the effectiveness of anti-depressants, when in fact the underlying evidence was much closer to balanced (38 studies judged positive and 36 judged negative or questionable), had all of the results, both positive and negative, been published. But to the uninformed reader, the balance of available evidence clearly favors the efficacy of these drugs. The clinical impact is that confidence in the effectiveness of anti-depressants may very well be misplaced. The scientific implication is that science has been contributing to and reinforcing false beliefs that will have widespread and lasting effects at all levels of society, public policy, and clinical practice. And we see evidence of this in all walks of life, including in the belief that strawberry seeds must be frozen before planting.
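As a back-of-the-envelope illustration of how such a skew arises, here is a small sketch of the arithmetic; the publication rates are assumptions chosen only to reproduce a roughly 10-to-1 apparent ratio, and the study counts echo the roughly balanced totals described above rather than the exact publication breakdown reported in [10].

```python
# A small sketch of "apparent" versus "underlying" evidence ratios under
# publication bias. The counts and publication rates below are illustrative
# placeholders in the spirit of the Turner et al. example [10], not the exact
# figures reported in that paper.

def apparent_evidence_ratio(n_positive, n_negative,
                            pub_rate_positive, pub_rate_negative):
    """Ratio of published positive studies to published negative studies."""
    published_positive = n_positive * pub_rate_positive
    published_negative = n_negative * pub_rate_negative
    return published_positive / published_negative

if __name__ == "__main__":
    # Underlying record: roughly balanced (e.g., 38 positive vs. 36 negative).
    n_pos, n_neg = 38, 36
    # Assumed publication rates: positives almost always published,
    # negatives only rarely published as negative.
    ratio = apparent_evidence_ratio(n_pos, n_neg,
                                    pub_rate_positive=0.95,
                                    pub_rate_negative=0.10)
    print(f"underlying evidence ratio: {n_pos / n_neg:.2f} to 1")
    print(f"apparent (published) evidence ratio: {ratio:.1f} to 1")
```

Nothing about the underlying studies changes; only the filter applied before publication does, and that filter alone turns near-balanced evidence into an apparent 10-to-1 consensus.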

Strawberries

With all of this in mind, what does the set of data we collected on strawberry seeds allow us to conclude? Our initial aim was simply to replicate and verify two common assertions: that pre-freezing for 3 to 4 weeks prior to planting is essential for strawberry seed germination, and that pre-soaking in water for about an hour immediately before planting is also helpful. We took these statements as given, and endeavored to show whether their effects were additive, and whether they interacted with, were synergistic with, or perhaps could be further enhanced or replaced by, a brief exposure to PEMF. In the initial phases of the experiment, we simply endeavored to replicate these basic beliefs as a starting point. But the resulting data did not support them.

While the negative findings are certainly not conclusive, I consider them to be highly suggestive that some of the commonly held “facts” about how one should plant strawberry seeds should be revisited with a skeptical eye. I would go further to say that this may indeed be the case for much of the received wisdom as it relates to gardening and agriculture in general. Further, the evidence strongly suggests that, concerning matters of somewhat more importance than strawberry seed germination rates, such as cures, treatments, and scientific understanding of cancer, diabetes, cardiovascular disease, and vaccine safety and efficacy, the integrity of academic science is increasingly in question due to the self-inflicted wounds of irreproducibility and publication bias.

How can this possibly be fixed? It is a complex issue that will take decades to correct, but it is possible to start now with a simple strategy. For any belief that is widely accepted, the practice of which has significant impact on the cost, complexity, effort, or outcome of any activity, we offer the Russian proverb Doveryai, no proveryai (Russian: Доверяй, но проверяй); “Trust, but verify”. If it is important, it may be worth additional or renewed scientific scrutiny. If it is supported it should be published, but especially if it can be shown to be false, by all means, publish it.

And with all of this said, it may still be that my negative findings are in error. I am willing to cheerfully accept that, if someone bothers to trust, but verify.

RG Dennis

References

  1. Harris R. Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. New York: Basic Books; 2018.
  2. Ritchie S. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. New York: Metropolitan Books; 2020.
  3. Atkin PA. A paradigm shift in the medical literature. BMJ. 2002;325(7378).
  4. Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2011;90(3).
  5. The importance of no evidence. Nature Human Behaviour. 2019;3(3).
  6. Nissen SB, Magidson T, Gross K, Bergstrom CT. Publication bias and the canonization of false facts. eLife. 2016;5.
  7. Paul C, Matthews M. The Russian "Firehose of Falsehood" Propaganda Model: Why It Might Work and Options to Counter It. RAND Corporation; 2016.
  8. Brazil R. Fighting flat-Earth theory. Physics World. 2020;33(7).
  9. Broad WJ. Putin's Long War Against American Science. New York Times [Internet]. 2020. Available from: https://www.nytimes.com/2020/04/13/science/putin-russia-disinformation-health-coronavirus.html.
  10. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. New England Journal of Medicine. 2008;358(3).