Texas Tech University

Small Bytes Blog

AI Small Bytes is a blog written by members of the AI Resources & Guidelines Committee. This blog will be updated periodically with new resources and information, and we hope that you will check back often. If you would like to talk about teaching with artificial intelligence and your concerns or ideas, please feel free to contact Lisa Low, AI Faculty Fellow, or Suzanne Tapp.

(What if) My Students are Using AI tools to Answer Online Tests?

Recently I participated in a discussion in which someone asked, “What if my students use QuizMate to answer online, multiple-choice tests?” QuizMate is an example of one of the many mobile apps or browser extensions that allow students to snap a picture of a question so that an AI assistant can instantly provide an answer and explanation of a given topic. A more positive application is that this tool could be used to take a picture of text and generate a quiz to test comprehension and recall. These apps typically work by providing instant answers or more extended feedback, for example:

Instant answer:

Exchange rates are affected by:

  a. Economic conditions
  b. Currency traders’ expectations
  c. Country’s gross domestic product (GDP)
  d. All of the above

Answer: d) All of the above

Extended Feedback:

“The economic policies of a country (which are decided by a government) and the trends of its economy affect the value of its currency in the foreign exchange market.”

Answer: The answer is true. The economic policies implemented by a country can impact factors such as interest rates, inflation, and overall economic stability, which in turn influence the value of its currency in the foreign exchange market. Additionally, the performance and trends of a country’s economy, such as GDP growth and trade distribution, also play a significant role in determining the value of its currency. Overall, these factors are interconnected and contribute to the fluctuations in a country’s currency value in the foreign exchange market.

As AI technology evolves, so will the potential for its misuse. Already we are aware of AI essay generators, problem-solving applications, translation or paraphrasing tools, writing style imitators, plagiarism bypass tools, tools that mimic typing or mouse patterns, optical character recognition (to extract text from pdfs or images) and more. I have undoubtedly missed a category of tools or something new will be introduced tomorrow. No doubt, education has been disrupted.

Can these apps accurately answer questions?

According to a study which used approximately 1,000 test questions from five semesters of exams conducted by Kenneth Hanson at Florida State University, ChatGPT typically answered difficult questions correctly and easy test questions incorrectly. Hanson said, “ChatGPT is not a right-answer generator; it’s an answer generator.” Although I agree that ChatGPT (used as an umbrella term here for generative-AI tools) often predicts the correct answer or pattern, we are all aware of the hallucinations and mistakes made by AI. That said, AI capabilities and efficiencies continue to improve at a phenomenal rate. This leads to our conundrum: how might we design assessments that outperform AI?

Are there resources to help?

Instructors at Texas Tech will continue to have access to Respondus LockDown Browser. Respondus records student movements and flags exams if a student leaves the view, their eyes wander, or another person enters the frame. But cheating finds a way; unfortunately, this is only a deterrent and can be circumvented. TTU Online continues to examine additional tools to assist faculty in protecting the integrity of non-proctored, online exams. TTU does not endorse reliance on AI detection tools given their notorious biases and false-positive predictions of AI-generated work, but we continue to look for new developments in this field.

This is important:

Let’s start by acknowledging that our identities as educators are being challenged and our workload, burnout, and stress may be higher than ever before. Whew.
I would be remiss if I did not acknowledge our need to emphasize AI ethics and help students identify guiding principles for responsible use, not just for their own integrity but also for the greater good, given the global impacts and costs of dependence on AI.

I teach large, introductory, asynchronous online classes. Now what?

Teaching an online, asynchronous class has become more complicated over the past two years with the accessibility of AI, but cheating is not new. After all, students can screenshot and post to Quizlet or CourseHero, or even pay a third-party vendor to take a test or write a paper. The following are just a few ideas that may act as a deterrent:

  • Can you reword test problems and questions with fake compound names or images?
  • Can you group related questions? For example, describe a problem in item #10 and then refer to it in items 11-15, so that no single question can be answered in isolation by snapping a picture of it.
  • Is it possible to have a rule (documented in your syllabus) that a notation different from that shared in class is not acceptable?
  • Can you use vocabulary specific to your class and the required resources/readings? For example, can you ask, “What would the author of X say to the author of our chapter about Y concept?”
  • Reconsider the feasibility of requiring in-person, proctored tests. The logistics are likely to be a nightmare, and these expectations cannot be implemented mid-course.
  • Don’t try creating backgrounds or images designed so that AI cannot read them. It is highly likely that you will create something that is problematic for ADA accessibility.

Better but more time-intensive solutions involve course-level revisions, such as moving from online exams to project-based evaluation, or creating scaffolded assignments that require versioning or revision history to document progress. It is also important to note that communicating clear expectations related to academic integrity and working to build a positive classroom climate, whether in a face-to-face or online course, can go a long way. For further reading on this topic, you might find these resources helpful.

Do you have ideas or something specific that’s working in your classes related to testing and AI? I would love to hear about it! Please email me to share your ideas as we continue to crowdsource ways to navigate this new terrain in teaching and learning.

-Suzanne Tapp, Associate Vice Provost for Teaching and Learning, Director, Teaching, Learning, and Professional Development Center

Teaching, Learning, & Professional Development Center

  • Address

    University Library Building, Room 136, Mail Stop 2044, Lubbock, TX 79409-2004
  • Phone

    806.742.0133
  • Email

    tlpdc@ttu.edu