The proposed HumainTech AI Center addresses the constitution of artificial intelligence (AI) technology, specifically the Large Language Models (LLMs) that have recently emerged with unprecedented capabilities in human-like knowledge and expression. We call these products “virtual language” because they generate a virtual facsimile of human language using statistical probabilities rather than embodied biological intelligence. Using insights from the humanities disciplines, HumainTech investigates the transformative impact of AI on education and the workplace, but also its inherent dangers of systemic bias, misinformation, and misuse of intellectual property. Because the linguistic source material for LLMs emerges from the shared resource of human cultural diversity, AI tools require a democratic ethic and critical perspective.
HumainTech Mission
Attentive to the National Artificial Intelligence R&D Strategic Plan, the Blueprint for an AI Bill of Rights, and Recommendations for AI in the Future of Teaching and Learning, HumainTech uses methods and insights from the humanities to pursue interdisciplinary investigation and educational innovation focused on the virtual language produced by Large Language Models, and it cultivates public participation in their development, application, and regulation.
In our first three years we will pursue the following research goals:
- Year 1 Goal: Investigate Large Language Models, Human Communication, and Pedagogy
- Year 2 Goal: Analyze Artificially Generated Expression, Social Consensus, and Democracy
- Year 3 Goal: Evaluate Artificial Intelligence, Biological Thinking, and Embodied Experience
At the World Economic Forum in Davos on January 18, 2024, OpenAI CEO Sam Altman expressed cautious optimism that, “with a very tight feedback loop and course correction,” AI technology can be responsibly managed. “And the only way to do that is to put the technology in the hands of people and let society and the technology co-evolve.” Through our public outreach, we aim to offer hands-on introductions to this technology and generate critical feedback that will inform our ongoing activities. HumainTech will implement three mutually supportive activities tied to our research goals.
Activity 1: Scholarly Inquiry
HumainTech will cultivate methods and insights from the humanities that help us better understand and use AI tools, with inquiries into writing, art, ethics, law, medicine, and civil rights. In pursuit of these goals, HumainTech will engage in the following activities:
- Articulate an annual research theme that will guide our work for the year.
- Host an annual conference that presents new research produced by the center and invites global scholarly participation.
- Publish research produced by scholars associated with the center on a three-year cycle.
Activity 2: Educational Innovation
HumainTech will develop and disseminate new humanities pedagogies for AI tools, mitigating uses that disrupt the classic virtues of humanities education while developing new teaching techniques that incorporate AI tools in critical and productive ways. In pursuit of these goals, HumainTech will engage in the following activities:
- Generate innovative ideas for the critical use of AI in K-12 and college classrooms.
- Develop innovative ways to mitigate disruptive and non-sanctioned uses of AI by students.
- Articulate ways to help students develop critical insights into original and AI-generated content and their societal impacts.
Activity 3: Public Outreach
Through participatory outreach to regional communities via a Mobile AI Lab, HumainTech will educate public school students, teachers, and community members to use AI tools productively while developing critical awareness of their societal implications. In pursuit of these goals, HumainTech will engage in the following activities:
- Disseminate pedagogical innovations produced by the center in workshops for educators.
- Design and equip a Mobile AI Lab that offers members of the public an opportunity to interact with AI systems in ways that provide a glimpse “under the hood,” giving critical insight into how these systems generate content and into their potential for both beneficial and harmful impacts.
- Develop pedagogy and research methods that allow the center to use the Mobile Lab to inform the center's research while also disseminating the center's insights more broadly into society.
On the January 18, 2024 panel in Davos, Britain's Chancellor of the Exchequer, Jeremy Hunt, noted that technologically driven revolutions succeeded “where the benefits were spread evenly throughout society and not concentrated in small groups. In the case of AI, I would say the challenge is to make sure the benefits are spread throughout the world, North and South, developing world and developed world, and not just concentrated in advanced economies, because otherwise that will deepen some of the fractures that are already, in my view, taking us in the wrong direction.”
HumainTech builds upon the centennial vision of Texas Tech president Lawrence Schovanec: “We must be courageous in our endeavors to find creative and unique solutions to worldwide problems, while standing at the forefront of discovery and innovation.”
In addition to faculty applying AI in the sciences and engineering, a number of TTU faculty are working on responsible AI and humanities insights. Prof. Don Shin has investigated issues of algorithmic bias. Dr. Bryan Giemza and Dr. Joseph Gottlieb seek to envision some of the catastrophic scenarios that massive AI deployment could cause. Dr. Shan Xu and Dr. Jason Tham have received a grant from the National Humanities Center to develop a course on Responsible AI in collaboration with other Texas universities. Dr. Hideki Isoda is working with the TTU Innovation Hub to bring his annual AI hackathon to Lubbock.
The Texas Tech Teaching, Learning, and Professional Development Center, or TLPDC, has led efforts to assess and prepare for the impact of AI technology on our campus. The TTU Library seeks to provide tools and guidance to students and faculty alike. The Humanities Center has cultivated creative scholarly inquiry and outreach for over ten years, providing a model for a center dedicated to cultivating cutting-edge research on annual research themes with proactive oversight by a board of faculty peers. The HumainTech Mobile AI Lab will build upon the TTU THRIVE program, which has a strong record of educational outreach to rural communities in the region, and the STEM CORE program, which works in local public schools to enhance science and technology learning. Additionally, the F. Marie Hall Institute for Rural and Community Health at the TTUHSC provides important medical services to a region where many communities lack medical facilities. The Medical Humanities Program at TTUHSC has pioneered humanities-based inquiry in medical fields. All of these programs are advising us and will provide organizational support for HumainTech's activities.
Center Administration
HumainTech will be governed by an oversight board of Texas Tech faculty and regional community members. The board will have final decision-making authority on all matters. The center's activities will be administered by the following officers, who will serve as the initial appointees to these positions:
Director: Responsible for the overall operation of the center, coordinating its activities and research themes, managing staff, and providing a point of contact for outreach and communication.
Dr. Paul Bjerk teaches African History at Texas Tech University. His research has focused on the broad historical context of modern Tanzania under Julius Nyerere. Prior to his PhD studies, Bjerk taught at Tumaini University in Tanzania. There he initiated the publication of a college newspaper with wide community distribution and implemented a voters' education project that reached nearly 140,000 people in preparation for the 2000 elections in Tanzania.
Scholarly Activities Coordinator: Responsible for organizing the annual research theme, hosting the Annual Conference, and overseeing scholarly publications.
Dr. Lisa Phillips teaches Technical Communication and Rhetoric in the Department of English at Texas Tech University. Her recent research involves overlaps between sensory and environmental rhetorics, and she has begun new inquiries examining the implicit bias and rhetorical intelligence of AI tools and the ethical implications of their use across industry and educational contexts.
Educational Innovation Coordinator: Responsible for developing new strategies for teaching critical and creative applications of AI tools in secondary and tertiary education.
Dr. Hideki Isoda is an Emmy-nominated media composer, producer, and recording engineer. Dr. Isoda teaches in the Texas Tech University School of Music and served as Associate Dean of Technology and Departmental Chair in appointments prior to joining the Texas Tech faculty in 2020. Dr. Isoda's research in Music Informatics focuses on developing new electronic musical instruments utilizing AI technologies, and he hosts an annual AI hackathon.
Public Outreach Coordinator: Responsible for organizing the Mobile AI Lab and other outreach to schools, libraries, and communities throughout West Texas, Eastern New Mexico, and Western Oklahoma.
T.J. Martinez is an Assistant Professor of Practice in the College of Media and Communication at Texas Tech University and a native of the High Plains. He has made both documentary and narrative films, which have screened at multiple conferences and festivals, including the American Folklore Society Conference and the SXSW Film Festival.
Technical Advisor: Responsible for coordinating technical advice and computer expertise as needed for the center's activities.
Dr. Abdul Serwadda is an Assistant Professor of Computer Science at Texas Tech University. Originally from Uganda, he earned his PhD at Louisiana Tech University. His research focuses on learning algorithms for end-user security problems. Recent domains for his research include mobile and wearable security, user behavior modeling, artificial intelligence, and biometrics.
Legal and Intellectual Property Advisor: Responsible for coordinating legal advice and insight into the novel issues of intellectual property, civil rights, and privacy created by AI systems.
Dr. Barbara Lauriat, JD, is Associate Professor of Law and Dean's Scholar in Intellectual Property at Texas Tech University School of Law, where she teaches courses on Tort Law and Intellectual Property. In addition to appointments at George Washington University and Oxford, she was a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University and an NDIAS/ND-TEC Tech Ethics Fellow at Notre Dame. Her research interests include the history and theory of intellectual property rights and the arbitration of IP disputes.
Summary of Current Governmental Recommendations
AI in the Future of Teaching and Learning: Key Recommendations
- Emphasize Humans in the Loop: Exercising judgement and control in the use of AI systems and tools is an essential part of providing the best opportunity to learn for all students—especially when educational decisions carry consequence.
- Align AI Models to a Shared Vision for Education: Tease out the strengths and limitations of AI models inside forthcoming edtech products and focus on AI models that align closely to desired visions of learning.
- Design Using Modern Learning Principles: Ensure that product designs are based on best and most current principles of teaching and learning. Achieving effective and equitable educational systems requires more than processing “big data.”
- Prioritize Strengthening Trust: Technology can only help us to achieve educational objectives when we trust it.
- Inspectable, Explainable, Overridable AI: Teachers need the ability to view and make their own judgement about automated decisions, such as decisions about which set of mathematics problems a student should work on next. They need to be able to intervene and override decisions when they disagree with the logic behind an instructional recommendation.
- Harness Assessment Expertise to Reduce Bias: Algorithmic discrimination is not just about the measurement side of formative assessment; it is also about the feedback loop and the instructional interventions and supports that may be undertaken in response to data collected by formative assessments. Strong and deliberate attention to bias and fairness is needed as future formative assessments are developed.
- Inform and Involve Educators: Institutions should prepare teachers to integrate technology more systematically into their programs; for example, the use of technology in teaching and learning should be a core theme across teacher preparation programs, not an issue that arises only in one course.
- Focus R&D on Addressing Context and Enhancing Trust and Safety: To more fully represent the context of teaching and learning, including these and other dimensions of text, researchers will have to work in partnership with others to understand which aspects of context are most relevant to teaching and learning and how they can be usefully incorporated into AI models.
- Develop Education-Specific Guidelines and Guardrails: In addition to key federal laws on privacy, many states have also passed privacy laws that govern the use of educational technology and edtech platforms in classrooms. Leaders at every level need awareness of how this work reaches beyond implications for privacy and security (e.g., to include awareness of potential bias and unfairness), and they need preparation to effectively confront the next level of issues.
Blueprint for an AI Bill of Rights
- You should be protected from unsafe or ineffective systems.
- You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
- You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
- You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
National Artificial Intelligence R&D Strategic Plan
- Strategy 1: Make long-term investments in fundamental and responsible AI research. Prioritize investments in the next generation of AI to drive responsible innovation that will serve the public good and enable the United States to remain a world leader in AI. This includes advancing foundational AI capabilities such as perception, representation, learning, and reasoning, as well as focused efforts to make AI easier to use and more reliable and to measure and manage risks associated with generative AI.
- Strategy 2: Develop effective methods for human-AI collaboration. Increase understanding of how to create AI systems that effectively complement and augment human capabilities. Open research areas include the attributes and requirements of successful human-AI teams; methods to measure the efficiency, effectiveness, and performance of AI-teaming applications; and mitigating the risk of human misuse of AI-enabled applications that lead to harmful outcomes.
- Strategy 3: Understand and address the ethical, legal, and societal implications of AI. Develop approaches to understand and mitigate the ethical, legal, and social risks posed by AI to ensure that AI systems reflect our Nation's values and promote equity. This includes interdisciplinary research to protect and support values through technical processes and design, as well as to advance areas such as AI explainability and privacy-preserving design and analysis. Efforts to develop metrics and frameworks for verifiable accountability, fairness, privacy, and bias are also essential.
- Strategy 4: Ensure the safety and security of AI systems. Advance knowledge of how to design AI systems that are trustworthy, reliable, dependable, and safe. This includes research to advance the ability to test, validate, and verify the functionality and accuracy of AI systems, and secure AI systems from cybersecurity and data vulnerabilities.
- Strategy 5: Develop shared public datasets and environments for AI training and testing. Develop and enable access to high-quality datasets and environments, as well as to testing and training resources. A broader, more diverse community engaging with the best data and tools for conducting AI research increases the potential for more innovative and equitable results.
- Strategy 6: Measure and evaluate AI systems through standards and benchmarks. Develop a broad spectrum of evaluative techniques for AI, including technical standards and benchmarks, informed by the Administration's Blueprint for an AI Bill of Rights and AI Risk Management Framework (RMF).
- Strategy 7: Better understand the national AI R&D workforce needs. Improve opportunities for R&D workforce development to strategically foster an AI-ready workforce in America. This includes R&D to improve understanding of the limits and possibilities of AI and AI-related work, and the education and fluency needed to effectively interact with AI systems.
- Strategy 8: Expand public-private partnerships to accelerate advances in AI. Promote opportunities for sustained investment in responsible AI R&D and for transitioning advances into practical capabilities, in collaboration with academia, industry, international partners, and other non-federal entities.
- Strategy 9: Establish a principled and coordinated approach to international collaboration in AI research. Prioritize international collaborations in AI R&D to address global challenges, such as environmental sustainability, healthcare, and manufacturing. Strategic international partnerships will help support responsible progress in AI R&D and the development and implementation of international guidelines and standards for AI.
College of Arts & Sciences
Address: Texas Tech University, Box 41034, Lubbock, TX 79409-1034
Phone: 806.742.3831
Email: arts-and-sciences@ttu.edu