Negative Results: The Data are the Data
The data are the data. We have all used this line repeatedly while conducting research and while teaching others how to do research. Generally these words are expressed with a frustrated grimace, a shaking of the head, a pulling of the hair or an exasperated sigh. Negative data are a regular result of good research questions, solid experimental designs, and weeks or months of reliable data collection. The results are not what we wanted, and we often cannot initially explain them, but, as we have all been trained, "the data are the data". As frustrating as it is to obtain negative results after weeks or months of work, experienced researchers know that negative results are a common occurrence in research. For an inexperienced researcher or a junior scientist, however, negative results can evoke negative emotions such as fear of failure or fear of disappointing their mentor.
The phrase "negative results" is actually somewhat ambiguous. The phrase is generally thought to describe results that are inconclusive due to a failure to reach statistical significance. However, unexpected results or results contrary to the reported literature are also often described as negative. Furthermore, regarding inconclusive results, there can be multiple reasons that data fails to reveal a clear trend or effect. As stated above, it could be that a researcher's hypothesis is incorrect and thus, their experimental manipulations fail to reveal an effect. However, it could also be that the experiment failed to produce clear positive results because there are experimental factors (i.e., drug side effects) that were not considered and thus not controlled by the experimenter. These extraneous factors can contribute notable variability to the data. Finally, and sadly, unexpected negative data can be a reflection of poor or sloppy research practices due to inexperience and/or poor training or supervision of the research team.
Therefore, upon the frustrating revelation that one's data are negative, it is critical that a researcher take a deep breath and assess both their team and their data. Assessing the team for awareness of and adherence to the experimental protocol can be relatively easy. Simply sit down with the members of the team and ask them to describe their procedures. Listen attentively to their descriptions, note any inconsistencies between their reports and the defined protocol, and work to correct any mistakes or omissions. It is frustrating to realize after the completion of an experiment that mistakes have resulted in wasted time and resources, but these problems can generally be corrected easily and a valuable lesson will be learned. Furthermore, the team needs to realize that experimental protocol drift is not uncommon, especially when regular personnel turnover is combined with poor direction or supervision. Learn the lesson, correct the problem and clean up experimental protocols.
However, if you assess your team and cannot find any procedural inconsistencies, the data should also be assessed for both expected and unexpected observations and trends, even if those trends are somewhat variable. Researchers often detect highly variable, but repeatable, trends or observations in their data. These puzzles often require researchers to consider alternate factors that may be contributing to data variability so that they can refine their experimental questions and/or protocols. Delving into these puzzles is often the fun part of research. However, it can also be a very costly phase of discovery. It takes time, resources and considerable risk to discover what these undetected factors may be. And, because of the bias against publishing negative data, a great deal of that effort may never see the reward of publication, given the large volume of inconclusive data produced in the process of discovery.
During my undergraduate and graduate training, there were research teams in our department focusing on fetal alcohol syndrome (FAS). Through coursework and journal clubs, I learned a bit about the difficult experimental history surrounding fetal alcohol syndrome research. Although researchers, clinicians and even teachers recognized commonalities and consistent observations that were suggestive of a disorder, it took a great deal of time and effort for the experimental data to yield conclusive findings and a clinical profile. The reason this early work was so difficult is that many factors needed to be worked out in the research. The frequency of binge drinking during pregnancy, the volume of alcohol consumed, the developmental phase of fetal exposure to binge drinking, as well as the genetic susceptibility to alcohol exposure, all impacted the type and magnitude of impairment produced by alcohol exposure during fetal development. All of these factors needed to be experimentally addressed before researchers could reveal a reliable and reproducible link between fetal alcohol exposure and developmental problems in children. Each of these factors contributed to variability in the data and, for a long time, the variation or "noise" in the data hindered recognition of the syndrome. However, throughout these difficulties with the research, scientists persevered through file cabinets full of negative results while recognizing that there was meaning hidden in their observations and inconclusive data trends.
Difficult success stories, such as that experienced by early FAS researchers, serve as a reminder that the agenda of original research is to conduct experiments to explore and explain the unknown. As such, our data often reveal that there are many more unknowns that we have not yet considered. These are critical phases of research and therefore, it is unfortunate that science often fails to reward the effort that goes into acquiring these interim negative or inconclusive results. As you peruse the literature, it is quite uncommon to come across publications that report negative results; positive results are much more likely to be published than negative results. Most researchers recognize the ethical and scientific importance of sharing and publishing negative results. However, like many issues in responsible research, recognizing that there is a flaw in research practices does not readily or rapidly elicit a change in research practices or resources.
Furthermore, although many researchers agree that there should be increased opportunities to publish negative results, there are arguments against these practices. As stated above, negative or contrary research results can stem from inexperienced researchers conducting poor research. If this is the case, it is arguable that ready availability of publishing opportunities may fail to reveal and "weed out" poor research practices. However, most researchers do not wish to publish poor work. Rather, researchers are very particular about the work that they want to publish and thus, only want the opportunity to publish negative results when they are collected through well-designed and reliably conducted research. Researchers can be so particular about the work they choose to publish that I have had colleagues express reluctance to publish even positive results when they did not trust the reliability or integrity of the team members who conducted the work.
An additional reservation regarding opportunities to publish negative results addresses the potential for untrained readers to misinterpret data that are consistent with the null hypothesis as "proof" of the null hypothesis. Scientists are trained that the goal of research is to disprove hypotheses and that data can never prove a hypothesis to be true. However, when untrained readers peruse the scientific literature, they often misinterpret a single research report as proof of a causal or correlative relationship rather than as a piece of evidence that fails to disprove a hypothesis. These types of untrained or premature interpretations can cause a great deal of harm if they reach a popular audience. An example is the discredited and retracted Wakefield (1998) report of a link between the MMR vaccine and the development of autism. Even though this single study was proven to be fraudulent and follow-up studies have failed to reveal any link between vaccines and autism (Taylor et al., 1999; Madsen et al., 2002), the popular myth asserting a link between vaccination and autism persists. Scientists are trained to treat even positive results with skepticism. However, public or popular misinterpretation of negative or conflicting results could potentially impact public opinion and/or funding opportunities, especially during difficult phases of discovery, as was described above for fetal alcohol syndrome researchers. Scientists are trained to expect conflicting reports and to critically assess the methodology and results of conflicting reports to find a seed of truth; as that seed grows, the scientific literature self-corrects. However, an untrained audience can misinterpret these conflicts as wasteful, failing to recognize that discovery is a process and that large amounts of new data are a critical part of the correction process.
The Wakefield fraud case is a good example of an inaccurate positive result being proven false (a false positive). Negative results can also be proven false (false negatives). However, to prove any result false, one must have access to the published results. I clearly remember a late-night study session during my graduate training, the night before a statistics exam, when a friend of mine sounded off in frustration about everyone's focus on using statistics to avoid false-positive results. She argued that this concern was misplaced because science is self-correcting: a published false positive would be subjected to replication, and failure to replicate would correct the literature. Our concern, she insisted, should be focused on false-negative results, because once published data failed to support a hypothesis, the hypothesis would be abandoned, since researchers would not waste time and resources working on negative findings. Therefore, she asserted, good ideas could rapidly and mistakenly be abandoned once a negative result was reported, even if that negative result was false.
In a way she was correct. However, her rant was based on a few assumptions. First, she assumed that a false-negative result would be reported in the literature. As stated above, there are file cabinets full of negative results that have never been submitted for publication, and even more that have been submitted only to be rejected. Second, although the ideal model is that the literature is self-correcting, that assertion assumes that follow-up studies failing to replicate false-positive results are actually submitted for publication and that these negative or contrary results are actually published. However, given the tendency of researchers to hesitate to spend time and resources constructing manuscripts that largely contain negative or contrary results, and given the publication bias toward rejecting manuscripts containing such results, it is hard to readily assert that our system of scientific reporting is, in fact, self-correcting. Failing to publish negative results impairs the self-correcting design of research.
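To make the statistical intuition behind my friend's argument concrete, here is a minimal, hypothetical simulation sketch (written in Python with numpy and scipy; the number of studies, fraction of true effects, effect size and sample sizes are all invented for illustration). It simulates many small two-group studies tested at alpha = 0.05 and then contrasts the full set of results with a "published" subset that keeps only statistically significant findings, mimicking the bias against negative results.

```python
# Illustrative simulation only: all parameters below are made up for demonstration.
# Many small studies test hypotheses, some true and some false, at alpha = 0.05.
# We then compare the full record with a "published" subset that keeps only
# statistically significant findings, mimicking a bias against negative results.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 2000        # hypothetical number of independent studies
p_true_effect = 0.2     # assumed fraction of hypotheses that are actually true
effect_size = 0.4       # assumed standardized mean difference when an effect exists
n_per_group = 30        # assumed sample size per group in each study
alpha = 0.05

records = []  # each entry: (has_true_effect, significant)
for _ in range(n_studies):
    has_effect = rng.random() < p_true_effect
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect_size if has_effect else 0.0, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treated, control)
    records.append((has_effect, p_value < alpha))

records = np.array(records, dtype=bool)
true_effect, significant = records[:, 0], records[:, 1]

false_positives = np.sum(~true_effect & significant)   # Type I errors
false_negatives = np.sum(true_effect & ~significant)   # Type II errors
print(f"False positives (Type I errors): {false_positives}")
print(f"False negatives (Type II errors): {false_negatives}")

# Share of "positive" findings in the full record that are actually false:
full_fdr = false_positives / max(np.sum(significant), 1)
print(f"Fraction of significant results that are false positives: {full_fdr:.2f}")

# If only significant results are published, the non-significant replications
# that would expose false positives never appear in the literature.
published = records[significant]
print(f"Published studies: {len(published)} of {n_studies}; "
      f"non-significant results withheld: {n_studies - len(published)}")
```

Under these made-up parameters, a noticeable share of the "significant" findings are false positives and many true effects go undetected; filtering the record down to significant results withholds exactly the evidence that would reveal both problems, which is the slow self-correction my friend was worried about.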
Independent of the above controversies, most researchers agree that "the data are the data" and, as such, negative results are as important as positive results. We need negative results to alter our hypotheses and redesign our experiments. We need access to negative results to guide us on the path to positive results. Therefore, we need to address the bias against publishing negative results and devise a system that enables us to share the valuable negative results produced from solid research questions and reliable data collection techniques. Historically, much of this data has been shared unofficially. We meet at conferences and discuss our ideas and frustrations, and we often find that our colleagues are addressing similar questions and encountering similar frustrations. However, those casual conversations rarely reveal enough methodological detail to be useful for modifying experimental design. Thus, there needs to be a focus on increased resources that enable researchers to share their negative results so that wasted time and resources are minimized.
There are valid concerns regarding an agenda to increase opportunities to publish negative or inconclusive research results, but many of these concerns can be addressed through the system of peer review that already exists in academia. In contrast, maintaining practices that are biased against publication of inconclusive or contrary results has too much potential to negatively impact scientific progress. These practices could skew the research decisions made by junior scientists working to build a career, because they may come to perceive positive results as more valuable than maintaining objectivity. Furthermore, publication bias against reporting negative results also limits researcher access to evolving data and methodology and potentially biases the information that is available in the literature. Finally, bias against publishing negative results slows the self-correcting virtue of science. It is encouraging to encounter increased dialogue regarding the importance of negative data and discussion of how best to share the methods and results associated with inconclusive yet intriguing data. These discussions are a first step toward improving our system of reporting and acknowledging the efforts of investigators who struggle with the difficult phases of discovery.
Marianne Evola is senior administrator in the Responsible Research area of the Office of the Vice President for Research. She is a monthly contributor to Scholarly Messenger.