The Use of Checklists for Lab and Data Management
By Marianne Evola, Ph.D.
Recently, I led a seminar on data/resource management for the Ethics Center's Coffee Break Ethics series. As usual, after my presentation I found myself critiquing the productivity of the seminar and the information that was provided. Unfortunately, my self-critique revealed that I had presented the problems of data management better than I had defined solutions. Presenting problems without proposing solutions is a personal pet peeve of mine. So, to rectify my performance, I've decided to address the weaknesses in my presentation in the next couple of issues of Scholarly Messenger.
In general, I proposed that students create a data management system before they even start a project, and within that system they need to address:
- A system for collecting data,
- A flexible system that can grow as they continue to compile more and more data, and
- A system for long-term storage of data.
This month’s article will address the first point, systems for collecting data.
In many areas of research, effective data collection largely requires effective management of a research team. In a previous contribution to Scholarly Messenger, I addressed the need for students to learn how to manage people. Rather than rehash the topic, I refer the reader to that article. I will, however, address some additional thoughts that I'm working through due to a recent book recommendation. "The Checklist Manifesto: How to Get Things Right" by Atul Gawande was recommended to me by Alice Young, associate vice president for research (research integrity). I recommend reading the book for the full impact of Gawande's argument and logic, but basically it argues that checklists can maximize productivity and efficiency by minimizing human error, even in highly complex professional endeavors. In fact, checklists may be critical to minimizing human fallibility in complex professions. The book argues that there are two causes of failure in professional endeavors: lack of knowledge and ineptitude, which Gawande defines as failure to apply knowledge appropriately.
Gawande proposes that with modern science and technology, we are knowledge-heavy. Students are required to complete more and more coursework for "mastery" because of the expanding abundance of information. Reading this triggered a personal memory from graduate school. I remember sitting in one of my courses while a mentor/friend stood just outside the door, listening and shaking his head. After class, I asked him why he was shaking his head and if he disagreed with the class discussion. He responded, "No, there is just so much more to learn than when I was in grad school." Gawande is a surgeon, an indisputably complex profession with life-and-death consequences of failure. He argues that there is currently so much information that we can only achieve mastery over a narrow range of it, with the result that professional mastery has become tightly specialized. In his world of surgery, there is no longer a true general surgeon. As such, there is a need to address the false ideal of the surgeon as the "master" of the surgical suite. In fact, to not evolve to address the mass of knowledge would be to embrace the practice of ineptitude, which is not an option in the medical profession.
Rather, Gawande argues that the specialized knowledge of each individual in the surgical suite must be utilized. Each person needs to be empowered to assert their knowledge. However, the surgical team also needs a system of organization so that each member's contribution of knowledge is orderly and timely, for maximum benefit with minimal distraction. He proposes that simple checklists should be created and utilized in complex professional environments to maximize applicable knowledge and minimize ineptitude.
Checklists in Research
When I started reading this book, my immediate reaction was that a checklist could never be applied in a research environment because what we did was far too complex. In one way, that immediate reaction was right—you probably cannot apply a checklist to creativity, data interpretation and insight, the fun aspects of being a scholar. However, the more I thought about the redundancy of conducting experiments and processing data, the more I realized that informally we already used a sort of checklist. Furthermore, formalizing the checklist process would only serve to maximize communication with students and collaborators, standardize experimental procedures, and enhance the consistency of data collection.
So, how do you institute a checklist into a research environment? Well, likely it is not going to be one checklist but rather a collection of checklists. As Gawande researched what made an effective checklist, he looked to the airline industry for recommendations. Since World War II, pilots have utilized checklists to minimize human error because mistakes are catastrophic in air travel. It turns out that airlines not only have checklists for routine functions; they have also been developing checklists in response to human and mechanical failures for decades. In the case of an unexpected development in flight, pilots can generally refer to a checklist that specifically addresses their immediate problem.
To map this idea onto a research environment, we can easily create checklists for routine data collection. However, we should also consider creating checklists to address random problems, if the solutions can be systematized. For example, in our lab, our research often examined animal behavior. Most projects were computer automated to minimize human error. However, multiple parameter values for an experiment needed to be appropriately entered in the computer for each experimental session. Inaccurate computer entries wasted time and resources and often confused research animals, which complicated subsequent experiments and interpretation of data. As such, step-by-step instructions (i.e., checklists) were posted at each computer. These checklists addressed how to routinely set up an experimental session. However, we did not utilize checklists to address random problems. For example, if equipment failed while being tested prior to the day’s experiment, students would have to find a senior member of the lab to help them. If no one was around, they would appropriately abandon their experiment for the day. Interestingly, in the case of equipment failures there was actually a systematic series of steps that could address most problems. However, no one ever thought to create and post a checklist for these possible equipment problems and their solutions.
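A setup checklist like the one described above can even be made machine-checkable, so the computer verifies the session parameters before anything starts. The sketch below is a minimal illustration in Python; the parameter names, valid ranges, and schedule codes are hypothetical stand-ins, not the actual values from any particular lab.

```python
# A minimal, machine-checkable session-setup checklist.
# Parameter names and valid ranges below are illustrative assumptions.

SETUP_CHECKLIST = [
    ("subject ID entered and matches the run sheet",
     lambda s: bool(s.get("subject_id"))),
    ("session duration set (1-240 minutes)",
     lambda s: 1 <= s.get("duration_min", 0) <= 240),
    ("reinforcement schedule selected",
     lambda s: s.get("schedule") in {"FR5", "VI30", "EXT"}),
    ("chamber number assigned (1-8)",
     lambda s: s.get("chamber") in range(1, 9)),
]

def run_checklist(session, checklist):
    """Return the descriptions of failed items; an empty list means go."""
    return [desc for desc, check in checklist if not check(session)]

session = {"subject_id": "R042", "duration_min": 60,
           "schedule": "FR5", "chamber": 3}
failures = run_checklist(session, SETUP_CHECKLIST)
if failures:
    print("Do NOT start the session. Fix:", failures)
else:
    print("All checks passed; start the program.")
```

The same pattern extends naturally to the troubleshooting case: an equipment-failure checklist is just another list of (symptom, step) pairs that a student can walk through before abandoning the day's experiment.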
Another mistake our lab made was that we did not mentor systematic use of checklists throughout the duration of studies. We utilized a checklist to train new students, and those students used the checklist until they learned the steps. And that is where we failed: once we learned the steps, we stopped using the checklist to double-check our entries. I cannot tell you how many times I or my lab mates loaded an experimental session, treated the research animals, placed them in their appropriate chambers and...forgot to start the program. I would return when the appropriate time had elapsed only to realize my mistake. If we had systematized always running through a checklist before walking away, a lot of frustrated grumbling could have been avoided. Luckily in our case, our mistake did not harm the animals or the experiment, but there are many lab mistakes that can be much more serious. Forgetting to assign someone to feed research animals over the weekend or forgetting to check the drug label on a bottle could be catastrophic to a research program and, more importantly, to the research subjects. Constructing simple checklists and mentoring the ongoing use of those checklists every time an experiment is conducted can minimize and even eliminate human error. It can also systematize experimental protocols and data collection, especially in research environments where controlling variability is critical.
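The walk-away failure above is exactly what Gawande's "do-confirm" style of checklist guards against: after doing the work from memory, you pause and confirm each final step before leaving. A tiny sketch, with item wording that is illustrative rather than an actual lab protocol:

```python
# A "do-confirm" walk-away checklist: after setting up from memory,
# the experimenter confirms each final step before leaving the room.
# The item wording is a hypothetical example, not a real protocol.

WALK_AWAY = [
    "Animals placed in their assigned chambers",
    "Session parameters double-checked against the run sheet",
    "Program STARTED (session timer is counting)",
    "Reminder set to return at session end",
]

def confirm_all(items, answers):
    """Pair each item with a yes/no answer; return items not confirmed."""
    return [item for item, ok in zip(items, answers) if not ok]

# Example: everything done except starting the program.
missed = confirm_all(WALK_AWAY, [True, True, False, True])
print("Go back and complete:", missed)
```

The point of the pause is that the list is consulted every time, not just during training; the moment the checklist becomes something only new students use, the protection against "forgot to start the program" disappears.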
So, think about how you can utilize a simple checklist to standardize some of the variability of your research environment. If you are too busy during the workday, think about it on the drive home—my best thinking time. Challenge senior students to create checklists, and then challenge junior students to beta test the checklists and find the flaws. Every research environment has that one person who will fall into a hole that everyone else can see. That is the person that will best test your checklists for flaws. I know because I was/am that person. Rather than criticize them for falling in the hole, utilize them to find the holes so that you can standardize data collection.
Next month, I will address data management systems for points two and three: compiling data and long-term storage of data.
Marianne Evola is senior administrator in the Responsible Research area of the Office of the Vice President for Research. She is a monthly contributor to Scholarly Messenger.
Alice Young, associate vice president for research/research integrity, is a contributing author/editor.