One efficient way to test a message—whether video, image, web, or audio—is in a controlled environment that minimizes viewer distractions and helps ensure focused attention to the message. The CCR contains two large, 24-seat experimental labs where viewers can be randomly assigned to view specific messages while seated at partitioned computer workstations.
Lab Hardware. To allow for the controlled presentation of messages, research participants are seated at partitioned workstations equipped with Dell Optiplex computers, 19” widescreen monitors, and headphones. Specialized research software, including MediaLab and Qualtrics online survey software, allows researchers to present specific messages to each person in the lab. For example, in a classic experimental design, half the participants in a session could be randomly assigned to view a “treatment” message, while the other half views a “control” message. For more applied research, participants could be directed to different versions of a website to evaluate site design and appearance.
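The random assignment described above can be sketched in a few lines of code. The following is a minimal illustration, not the CCR's actual software: the participant IDs, condition labels, and function name are hypothetical, and it simply shuffles the session roster so that group sizes differ by at most one.

```python
import random

def assign_conditions(participant_ids, conditions=("treatment", "control"), seed=None):
    """Randomly assign participants to conditions in balanced numbers."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Cycle through the conditions so each group ends up roughly equal in size.
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

# A hypothetical 24-seat session, split evenly between the two messages.
assignment = assign_conditions(range(24), seed=42)
```

In practice, packages such as MediaLab or Qualtrics handle this assignment internally; the sketch just makes the underlying logic concrete.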
In addition, the use of software eliminates the need for paper-and-pencil questionnaires that must be entered manually after each session, removing a common source of data entry errors and making data collection more efficient.
Sample Study. Due to their flexibility and application in a wide variety of research contexts, the experimental labs are perhaps the most used space in the CCR. Researchers have conducted numerous studies examining virtually all forms of screen media: video, still images, news content, website evaluations, audio, and more. Moreover, studies have taken the form of both applied and basic, or theoretical, research.
In one recent experiment, researchers used the lab to examine how viewers respond to structural techniques used in reality television programming. Viewers watched one of two specially edited versions of a program and then answered questions about how they related to characters in the program. In addition, the software allowed the program to be paused at set points to measure changes in response over the course of viewing. Viewer responses were then compared between the two versions of the program to test for statistically significant differences between the groups.
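The between-group comparison at the end of such a study is typically a two-sample test. As a hedged sketch—the data, scale, and function below are invented for illustration, not drawn from the study described—here is Welch's t statistic computed over hypothetical 7-point character-relatability ratings from the two edited versions:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

# Hypothetical 7-point ratings from viewers of the two program versions.
version_a = [5, 6, 5, 7, 6, 5, 6, 7]
version_b = [3, 4, 5, 3, 4, 4, 3, 5]
t = welch_t(version_a, version_b)
```

A researcher would compare the resulting statistic against a t distribution (or use a statistics package) to judge whether the difference between versions is significant.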