
Implementing Generative AI in Higher Education

On November 30, 2022, ChatGPT was released to the public. Just days later, one of my professors scrapped part of his planned lecture to explore ChatGPT as a class. We gave “Chat” a complex word problem to solve. ChatGPT got the question right and even explained how it arrived at the answer!

I immediately realized that generative AI could impact the world in a big way. I wasn’t alone. In fact, according to the industry journal *Strategic Finance*, an estimated $1 trillion will be invested in the AI chatbot space in 2023 alone.

Investors aren’t the only ones focused on AI. Over the past year that I have spent as a university student, all my professors have outlined their AI policies. Some of my instructors completely ban the use of AI; others allow it if we identify how it was used; still others allow AI only for specific assignments. These varied policies have left me wondering how AI should be integrated into a university education. One thing has become increasingly clear: AI is not going anywhere. Both students and educators need to lean into AI because it can improve the learning experience, and educators must adjust the learning process to fully realize its benefits.

To better understand which adjustments to make, we will first explore the exact nature of generative AI. Second, we will outline practical uses of AI in higher education. Third, we will discuss limitations and concerns surrounding the use of generative AI. Finally, we will identify specific steps that educators should take to better integrate generative AI in their classrooms.

What is Generative AI Anyway?

First, we must understand what exactly AI is. AI stands for artificial intelligence. Simply put, AI is a computer system that can perform tasks that typically require human intelligence. AI is currently being used for language translation, speech and facial recognition, personalized feeds on social media, improved search engine results, and much more.

Generative AI is an umbrella term for a specific type of AI that can generate content, such as audio, video, text, code, or images. Large Language Models (LLMs) are a specific type of generative AI that use Natural Language Processing (NLP) to understand and generate text. The content generated by GPT-based LLMs is a function of the data on which the LLM was trained. Currently, most LLMs are trained primarily on datasets pulled from the internet.

ChatGPT is the most popular form of LLM generative AI. As shown in Figure 1, OpenAI’s ChatGPT reached 1 million users in just 5 days. Additionally, ChatGPT hit the 100 million user mark in just 2 months, according to the Swiss bank UBS. Such widespread success is largely due to ChatGPT’s free public availability and its easy-to-use interface. ChatGPT has provided many students and educators with their first opportunity to experiment with generative AI.

Current Uses of Generative AI in Higher Education

Currently, there are three primary uses for generative AI in the classroom: jump-start mode, appropriator mode, and personalized feedback.

Jump-start mode is using AI to generate new ideas. For example, you could instruct ChatGPT, “Give me ten topic ideas for a business article for a management communication class.” In response, ChatGPT would promptly output ten topic ideas.

Appropriator mode is based on the idea that, often, other communities have already solved a problem which the user is grappling with. Generative AI enhances the human ability to discover and import those successful solutions from other communities. In other words, AI amplifies human investigative abilities by allowing us to find and appropriate what was previously invisible to us.

Third, generative AI provides personalized learning opportunities for students. AI can provide personalized feedback and suggestions about a student’s work. Generative AI can also serve as a debate partner or a dialogue partner. Thus, AI helps increase creativity because discourse generates new ideas.

There are dozens of other uses beyond these three. In fact, it is the very nature of AI that enables these many uses. For example, generative AI is available 24/7 to anyone with an internet connection. AI also has infinite patience and can be used simultaneously by millions of users. Generative AI is non-judgmental which may decrease a student’s inhibitions about asking something which may be deemed a “dumb question.”

Additionally, generative AI is fluent in as many languages as it is trained in. Presently, ChatGPT knows 95 natural languages in addition to various programming languages. Such fluency offers unparalleled accessibility to students.

Limitations of Generative AI in Higher Education

Despite the value of AI as outlined above, there are both limitations and concerns surrounding the use of generative AI in higher education.

One limitation has been dubbed “hallucination” by the industry. A hallucination is misinformation generated by AI. Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs) process inputs and produce outputs without any true understanding of what is being said. Instead, LLMs use complex algorithms to predict the next word, one by one, based on what has already been said. Thus, LLMs cannot produce true knowledge; they can only output expressions based on their training data. In fact, according to Peter Denning, an American computer scientist and writer, LLMs can “retrieve a segment that was never actually said but is close to several segments that have been said.” This is why “hallucinations” and misinformation produced by GPT models are such a problem; generative AI can create responses that sound realistic but are fundamentally incorrect.
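The next-word-prediction idea can be made concrete with a toy model. The Python sketch below builds a simple bigram table (counting which word follows which in a tiny made-up corpus) and always predicts the most frequent successor. This is a deliberate oversimplification of real LLMs, which use neural networks trained on vast datasets, but the underlying task (predict the next word from what has already been said, with no understanding) is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always predict the most frequent successor. Real LLMs perform the
# same next-word prediction, but with neural networks and billions of
# training examples.
corpus = "the model predicts the next word and the next word follows".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    # Return the most common follower of `word` in the training data, if any.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "next" ("next" follows "the" twice, "model" once)
```

Notice that the model can only re-emit combinations supported by its training data; a word it has never seen as a predecessor yields nothing at all, and a plausible-sounding but wrong continuation is just as easy to produce as a correct one.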

According to OpenAI developers, LLMs cannot currently be trusted for accuracy. For example, notice this glaring contradiction from ChatGPT when asked about its ability to produce error-free code: “I can generate code of any length that is free of errors. However, I am not able to check the accuracy or correctness of the code I generate, so it is important to check the code that I generate for any errors or mistakes.” Despite these obvious concerns, excitement surrounding such new technology has drowned out these warnings.

Cheating and plagiarism are also concerns surrounding the use of generative AI. Students can use AI to produce the “products of learning” for graded assignments. Additionally, the use of LLMs may result in accidental plagiarism. LLMs generate outputs based on data produced by others, so GPT may output language that closely resembles work that has already been published. GPT does not cite its sources, so it is hard to know where its content is drawn from.

In January 2023, two months after the release of ChatGPT, OpenAI released an AI classifier to assist in detecting content generated by GPT models. However, the classifier was shut down in July 2023 “due to its low rate of accuracy.” According to a statement from OpenAI, “our classifier is not fully reliable. In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”
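Those two rates alone understate how weak a flag is in practice, because how much a “likely AI-written” label should be trusted also depends on how common AI-written work actually is. The short sketch below applies Bayes’ rule to OpenAI’s reported rates; the 30% base rate is an assumed figure for illustration, not a number from any source.

```python
# OpenAI's reported rates for its classifier:
tpr = 0.26        # P(flagged | AI-written): 26% true positive rate
fpr = 0.09        # P(flagged | human-written): 9% false positive rate
base_rate = 0.30  # assumed share of AI-written submissions (illustrative only)

# Bayes' rule: probability a flagged essay is actually AI-written.
p_flagged = tpr * base_rate + fpr * (1 - base_rate)
p_ai_given_flag = (tpr * base_rate) / p_flagged
print(f"P(AI-written | flagged) = {p_ai_given_flag:.2f}")  # prints 0.55
```

Under this assumption, a flag is right only about 55% of the time, barely better than a coin flip, which helps explain why the tool was withdrawn.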

Third-party AI detectors have emerged, but with similar levels of inaccuracy. The unreliable nature of AI detectors creates a problem for teachers who need to identify when students have relied on generative AI to complete their assignments. Some have suggested an AI watermark to help identify content generated by generative AI.

Finally, bias is a concern for generative AI. GPT LLMs are only as good as the dataset on which they were trained. Large Language Models (LLMs) exclude some voices because most LLMs were trained using data from the World Wide Web. Therefore, GPT models are limited in their ability to reflect the opinions of non-English speakers, Africans, Asians, dissenters in autocratic countries, and all who do not have access to the internet.


So, What Should Educators Do Now?

Building on the principles outlined above, I have identified three actions that educators can take today. First, define clear rules, policies, or procedures regarding generative AI. Policies will vary from campus to campus and classroom to classroom. In deciding what policies to enact in their classroom, professors should spend meaningful time learning about the nature of generative AI and how AI can be a tool for them and their students.

In turn, teachers should demystify AI for their students. Professors should teach their students the realities of AI and explain the uses and limitations of AI. I love discussing AI with my classmates because I often learn that they are using AI in ways that I had never considered.

Studies suggest students are excited to learn about AI and believe it can enhance their learning experience. For example, in a recent survey, undergraduate students were asked, “How useful do you think AI would be in the educational process?” A response of “1” meant AI would be useless and “10” meant AI would be highly useful. As shown in Figure 2, 83% of students responded “6” or higher, meaning they believe AI would be useful in the learning process.

Students are excited to learn about generative AI and to use it in the classroom. So, how can professors ensure that AI is used ethically? Teachers should reframe their content to focus more on the process of learning and less on the products of learning. Currently, the education system is based primarily on the products of learning that students produce (e.g., essays, cases, exams, final projects). However, GPT models have demonstrated their ability to produce the products of learning.

AI’s ability to replicate the products of learning is concerning to educators and leads to the temptation to completely ban AI in the classroom. If there is no AI, then students must produce the products of learning themselves.

However, consider the case studies of handheld calculators in the 1960s and personal computers in the 1980s. In both instances, these disruptive technologies gave students greater capability than ever before. Educators became concerned, and many pushed back. Educators in Ottawa even claimed that “literacy may vanish within the next ten years.” However, the opposite has occurred. Assisted by adaptations to the education system, calculators and computers have fostered the most educated and capable era in the history of the world. In a similar manner, generative AI is a tool that can enhance, not hinder, human intelligence and capability. Therefore, educators should adjust the structure of their courses to support the capabilities of generative AI (i.e., focus on the process, not the product).

Chris Dede, an expert in emerging technologies and education and professor of Learning Technologies at the Harvard Graduate School of Education, describes the world’s “wicked problems” this way:

Many of the critical problems we face in the world are ‘wicked problems’ that are ill-defined and complex. Educators should try to stretch out the tension and embrace the fear of uncertainty during discussion without jumping into quick solutions. Educators’ role is not to funnel students’ voices towards a pre-determined solution, but to create favorable conditions for collective interaction and shape the dialogic space as the discussion unfolds.

Making progress on “wicked problems” becomes easier as we marry artificial intelligence and human intelligence. AI can quickly provide “pre-determined solutions.” The human intellect can then apply those solutions to more complex issues.

Dede also emphasizes the importance of “confronting the blank canvas,” which is grappling with a complex problem without relying on the “jump-start” capabilities of generative AI. Educators should emphasize the contrast between “confronting the blank canvas” and building on ideas generated by AI. Both confronting the blank canvas and utilizing generative AI have value. However, depending on the task, one approach is often more favorable than the other. Developing the ability to discern when, where, and how to use generative AI is essential. For this reason, learners should be allowed to experiment with generative AI; it should not be banned. Through open discussion, hands-on experience, and education about AI, students will become more effective and responsible users of generative AI.

Notes

1. “AI Truth Machine,” *Flickr*, August 14, 2020, https://www.flickr.com/photos/arselectronica/50224297163/in/photostream/.

2. Laurie Burney, Kimberly Church, and Mfon Akpan, “ChatGPT and AI in Accounting Education and Research,” *Strategic Finance* (August 2023): 2.

3. Lydia Cao and Chris Dede, “Navigating a World of Generative AI: Suggestions for Educators,” *The Next Level Lab @ Harvard Graduate School of Education* (2023): 1.

4. Cao and Dede, “Navigating,” 1.

5. Cao and Dede, “Navigating,” 1.

6. Bergur Thormundsson, “Adoption rate for major milestone internet-of-things services and technology in 2022, in days,” *Statista*, Published January 23, 2023, https://www.statista.com/statistics/1360613/adoption-rate-of-major-iot-tech/.

7. Martine Paris, “ChatGPT Hits 100 Million Users, Google Invests In AI Bot And CatGPT Goes Viral,” *Forbes*, Published February 3, 2023, https://www.forbes.com/sites/martineparis/2023/02/03/chatgpt-hits-100-million-microsoft-unleashes-ai-bots-and-catgpt-goes-viral/?sh=17ce687e564e.

8. Thormundsson, “Adoption rate.”

9. Peter Denning, “The Profession of IT: Can Generative AI Bots Be Trusted?,” *Communications of the ACM* (June 2023): 4.

10. Doraid Dalalah and Osama Dalalah, “The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT,” *International Journal of Management Education* (July 2023): 4.

11. Denning, “The Profession,” 4.

12. Denning, “The Profession,” 2.

13. Denning, “The Profession,” 2.

14. Dalalah and Dalalah, “The False Positives,” 5.

15. “New AI classifier for indicating AI-written text,” *OpenAI*, Published January 31, 2023, https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text.

16. OpenAI, “New AI classifier.”

17. Mohd Javaid, Abid Haleem, Ravi Pratap Singh, Shahbaz Khan, and Ibrahim Haleem Khan, “Unlocking the opportunities through ChatGPT Tool towards ameliorating the education system,” *BenchCouncil Transactions on Benchmarks, Standards & Evaluations* (June 2023): 11.

18. Gianina-Maria Petrascu, “Students’ Perceptions of AI in Education,” *Kaggle*, Published March 2023, https://www.kaggle.com/datasets/gianinamariapetrascu/survey-on-students-perceptions-of-ai-in-education/.

19. Petrascu, “Students’ Perceptions.”

20. Petrascu, “Students’ Perceptions.”

21. Dalalah and Dalalah, “The False Positives,” 2.

22. Cao and Dede, “Navigating,” 7–8.

23. Cao and Dede, “Navigating,” 8.

24. Cao and Dede, “Navigating,” 8.

25. Cao and Dede, “Navigating,” 7–8.

*Published by BYU ScholarsArchive, 2024*