Recently I participated in a discussion in which someone asked, “What if my students
use QuizMate to answer online multiple-choice tests?” QuizMate is an example of one
of the many mobile apps or browser extensions that allow students to snap a picture
of a question so that an AI assistant can instantly provide an answer and an explanation
of the given topic. A more positive application is that this tool could be used to take
a picture of text and generate a quiz to test comprehension and recall. These apps
typically work by providing instant answers or more extended feedback, for example:
Instant answer:
Exchange rates are affected by:
a) Economic conditions
b) Currency traders’ expectations
c) A country’s gross domestic product (GDP)
d) All of the above
Answer: d) All of the above
Extended feedback:
“The economic policies of a country (which are decided by a government) and the trends
of its economy affect the value of its currency in the foreign exchange market.”
Answer: The answer is true. The economic policies implemented by a country can impact
factors such as interest rates, inflation, and overall economic stability, which in
turn influence the value of its currency in the foreign exchange market. Additionally,
the performance and trends of a country’s economy, such as GDP growth and trade
distribution, also play a significant role in determining the value of its currency.
Overall, these factors are interconnected and contribute to the fluctuations in a
country’s currency value in the foreign exchange market.
As AI technology evolves, so will the potential for its misuse. Already we are aware
of AI essay generators, problem-solving applications, translation or paraphrasing
tools, writing style imitators, plagiarism bypass tools, tools that mimic typing or
mouse patterns, optical character recognition (to extract text from PDFs or images),
and more. I have undoubtedly missed a category of tools, or something new will be introduced
tomorrow. No doubt, education has been disrupted.
Can these apps accurately answer questions?
According to a study that used approximately 1,000 test questions from five semesters
of exams conducted by Kenneth Hanson at Florida State University, ChatGPT typically
answered difficult questions correctly and easy test questions incorrectly. Hanson
said, “ChatGPT is not a right-answer generator; it’s an answer generator.” Although
I agree that ChatGPT (used here as an umbrella term for generative AI tools) often
predicts the correct answer or pattern, we are all aware of the hallucinations and
mistakes made by AI. That said, AI capabilities and efficiencies continue to improve
at a phenomenal rate. This leads to our conundrum: how might we design assessments
that outperform AI?
Are there resources to help?
Instructors at Texas Tech will continue to have access to Respondus LockDown Browser.
Respondus records student movements and flags exams if a student leaves the camera’s
view, their eyes wander, or another person appears on screen. But cheating finds a
way, and unfortunately, this is only a deterrent and can be easily circumvented. TTU
Online continues to examine additional tools to assist faculty in protecting the
integrity of non-proctored, online exams. TTU does not endorse reliance on AI detection
tools, given their notorious biases and false-positive predictions of AI-generated
work, but we continue to look for new developments in this field.
This is important:
Let’s start by acknowledging that our identities as educators are being challenged
and our workload, burnout, and stress may be higher than ever before. Whew.
I would be remiss if I did not acknowledge our need to emphasize AI ethics and help
students identify guiding principles for responsible use, not just for their own
integrity but also for the greater good as we weigh the global impacts and costs of
dependence on AI.
Suzanne Tapp, Associate Vice Provost for Teaching and Learning, Director, Teaching,
Learning, and Professional Development Center