This guide was put together to help students, instructors, and other TU personnel better understand the use of generative-AI tools in the academic setting, as well as offer links to additional materials for deeper learning and consideration before use.
We encourage all students, faculty, and staff to approach this technology with the same cautious optimism and critical consideration you would use for any new educational tech or tool.
“If we think of artificial intelligence apps as another tool that students can use to ethically demonstrate their knowledge and learning, then we can emphasize learning as a process not a product.” (Eaton and Anselmo 2023)
From calculators to cell phones, the introduction of new technology and its potential impact on education and learning is often hotly debated, spurring conversations about both promise and risk. No matter where you stand, understanding the basics and adapting to ever-evolving tech is critical for both you and your students.
While the AI debate may seem new to many of us, AI has actually been around for decades. The first AI program was written in 1951, with a focus on teaching a computer to play checkers. By the following year, the computer could play a full game of checkers at a reasonable speed.
As AI began to evolve, so, too, did the need to understand and evaluate it. British logician and computer pioneer Alan Turing began exploring machine intelligence in the 1930s. In the 1950s, he asked whether a machine has the ability to think and introduced central concepts of AI.
Today, Turing is considered the father of artificial intelligence and modern cognitive science, and the Turing test remains a benchmark for evaluating computer intelligence.
(Source: AI in Education: Introduction, Britannica Education)
One of the first times that AI became available to the masses as a helpful tool was Microsoft's Clippy. Clippy was often more annoying than helpful, but that little paperclip was an early AI assistant, trying to learn from you in order to improve your Microsoft experience. This type of real-time editorial work has since moved to programs like Grammarly, another tool that assists through machine learning.
How could we forget those early commercials with "Hey Siri," or the first time you called a help line and got an automated assistant instead of a person? These are AI programs. Do they function on a different level than LLMs? Absolutely! But they are built on some of the same machine-learning approaches that are training generative-AI tools today.
And if you look at the state of AI content around you today and think we appear to be picking up speed exponentially, you would be correct. This trajectory is why it is so important to consider the human-in-the-loop. This concept emphasizes the importance of human oversight within AI assistive technologies. It is crucial for mitigating bias, developing equitable use, and auditing systems for better application. The human-in-the-loop is also a good reminder that none of these programs could exist without human input and human coding.
Generative AI is a subset of machine learning, which sits under the larger umbrella of artificial intelligence. To work, generative-AI programs make statistical predictions based on training data and response learning, and they improve with use. Drawing on user-supplied feedback about the quality of their answers, as well as new data from the programmers, generative-AI programs can continually refine their output. Unlike internet searches, generative-AI tools do not use algorithms to locate and curate existing sources. Instead, they create new content by predicting what word, sound, or pixel would come next in a pattern.
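To make that "predicting what comes next" idea concrete, here is a minimal sketch of the underlying principle, using a toy bigram model over a hypothetical ten-word corpus. Real LLMs work on billions of documents and far richer statistics, but the core move is the same: count how often things follow other things, then sample the next item in proportion to those counts.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    words = list(followers)
    weights = list(followers.values())
    return random.choices(words, weights=weights)[0]

# In this corpus, "the" is followed by "cat" twice and by "mat" and
# "fish" once each, so "cat" is the most likely (but not only) prediction.
print(predict_next("the"))
```

Because the prediction is sampled rather than looked up, running it twice can give different answers, which mirrors, in miniature, why generative-AI tools can respond differently to the same prompt.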
(Sources: University of Pittsburgh and IBM)
What can these tools do?
In a 2023 article published in The New Yorker, computer scientist Jaron Lanier describes the difference between actual artificial intelligence and generative-AI programs.
"It’s easy to attribute intelligence to the new systems; they have a flexibility and unpredictability that we don’t usually associate with computer technology. But this flexibility arises from simple mathematics. A large language model like GPT-4 contains a cumulative record of how particular words coincide in the vast amounts of text that the program has processed. This gargantuan tabulation causes the system to intrinsically approximate many grammar patterns, along with aspects of what might be called authorial style. When you enter a query consisting of certain words in a certain order, your entry is correlated with what’s in the model; the results can come out a little differently each time, because of the complexity of correlating billions of entries."
Answers generated by programs like Copilot, ChatGPT, Perplexity, and other LLMs are not sourced from quantum computers that make logical connections on their own. It is important that you, the user, still know how to identify the components of verifiable answers and high-quality sources.
Information contained on this website is educational in nature and does not represent the generative-AI use policies for the entirety of The University of Tulsa. If you have questions about using generative-AI tools for your course work or in your department, McFarlin encourages you to reach out to your professors or department chairs and request their generative-AI or AI policies.