Artificial intelligence (AI) is an umbrella term for a spectrum of technologies that solve problems using computation. It is not a new concept, nor a recent field of study, and it carries a range of definitions and degrees depending on context.
As a science, artificial intelligence can be described as combining “the three disciplines of math, computer science and cognitive science to mimic human behaviour through various technologies” (US Gov, Unknown, accessed 3/4/23).
The definition of intelligence has long been debated, shaped by the lens and context through which one views it. If we take a simple definition that can be applied to both humans and machines – “the ability to accomplish complex goals” (Tegmark, 2017) – then we have a starting point to explore the categorisations and terminology involved.
The video below, part of a series on AI in education, is an excellent starting point for academics and students. It was produced by Ethan Mollick, Faculty Director of Wharton Interactive (University of Pennsylvania), and Lilach Mollick, Director of Online Pedagogy.
Practical AI for Instructors and Students Part 1: Introduction to AI for Teachers and Students
For people who want more technical details on how GPTs work, view the video "But what is a GPT? Visual intro to Transformers | Chapter 5, Deep Learning", an introduction to transformers and their prerequisites by Grant Sanderson on his channel 3Blue1Brown on YouTube.
A type of machine learning that uses neural networks to process data. Deep learning involves programming machines to recognise complex patterns in pictures, text, sounds and other data to produce insights and predictions. These predictions are measured against performance criteria selected for specific applications. The auto-captioning features available for YouTube and Panopto videos are an example of this. Natural language processing – such as that used by ChatGPT – is another example of deep learning (AWS, n.d.).
A type of artificial intelligence that often uses neural networks to recognise patterns in what it has been taught and the questions asked of it in order to create responses most likely to satisfy the request made to it. The output can take many forms – text, audio, still image, video.
An AI model that adapts a pre-trained, layered neural network to identify patterns and probabilities both in the requests made to it and in the data on which it has been trained, in order to generate content in response. This content can take the form of text, visuals, audio, etc.
An informal term; an LLM is a large neural network trained on a vast amount of data to predict the patterns and relationships of words, phrases and sentences in a particular language.
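The idea of predicting word patterns can be sketched with a toy example: a bigram model that counts which word most often follows another in a tiny corpus and uses those counts to "predict" the next word. Real LLMs use neural networks with billions of parameters rather than simple counts, and the corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real LLMs train on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a "bigram" model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The point of the sketch is that the model has no understanding of cats or mats; it simply reproduces the statistically most likely continuation of the input it has seen.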
A branch of Artificial Intelligence and computer science that focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy (IBM, n.d.).
For more information, view this overview video on YouTube from IBM "What is Machine Learning?" (Length: 08:22)
The ability to recognise complex patterns in inputs provided in non-programming, naturally written or spoken language, and to generate an output in that same language.
Virtual assistants such as Siri and Alexa and various chatbots up to and including ChatGPT and its ilk are examples of these.
Neural networks are modelled on the human brain, in which nodes (artificial neurons) are activated and then connect to others based on a weighting (likelihood of correctness) assigned as a result of training and statistical probability (IBM, n.d.).
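The weighting idea above can be illustrated with a single artificial neuron: inputs are multiplied by weights, summed, and passed through an activation function that determines how strongly the node "fires". The weights and inputs below are made up for illustration; in a real network, training adjusts the weights to reduce errors.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs, passed
    through a sigmoid activation that squashes the result to 0-1."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Invented example values: training would tune these weights.
activation = neuron(inputs=[0.5, 0.8], weights=[0.9, -0.2], bias=0.1)
print(round(activation, 3))
```

A full network chains thousands or millions of such nodes in layers, with each layer's activations feeding the next.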
Models that are fed large datasets and then are either
Supervised - where the input has a pre-defined relationship to the output. If you've ever interacted with an "Ask X" chatbot on a government or sales site and received what reads like a pre-written response, chances are you've dealt with a pre-trained, supervised AI.
Unsupervised - where the model learns the underlying structure and patterns of the input and then creates an output based on calculations of the context and likely response being sought.
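The contrast between the two approaches can be sketched in miniature: a supervised system maps inputs to pre-defined labels it was given, while an unsupervised one finds structure in unlabelled data on its own. All the data, labels and the gap threshold below are invented for illustration and bear no relation to any real system.

```python
# Supervised: labelled examples define the input-to-output mapping.
labelled = {"refund": "billing", "invoice": "billing", "login": "support"}

def supervised_answer(word):
    # A pre-trained, supervised system returns the pre-defined label.
    return labelled.get(word, "unknown")

# Unsupervised: no labels; the model groups raw inputs by similarity.
values = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]

def unsupervised_groups(data, gap=3.0):
    """Cluster numbers by splitting wherever the jump exceeds `gap`."""
    ordered = sorted(data)
    groups, current = [], [ordered[0]]
    for x in ordered[1:]:
        if x - current[-1] > gap:
            groups.append(current)
            current = []
        current.append(x)
    groups.append(current)
    return groups

print(supervised_answer("refund"))  # the pre-defined label: "billing"
print(unsupervised_groups(values))  # two clusters emerge from the data alone
```

Note that the unsupervised function was never told there were two groups; the structure comes entirely from the data.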
A transformer is a type of neural-network-based machine learning model, typically used in natural language processing, that learns context by tracking relationships across an entire sequence of data. In other words, it derives context from whole strings of input rather than analysing individual words in isolation.
For instance, in the following two sentences, a single adjective changes the referent of the pronoun "it":
“She poured the water from the jug into the glass until it was full.”
“She poured the water from the jug into the glass until it was empty.”
There are many ways of classifying AI. The categories below are based on the scope of AI intelligence.
What we’re experiencing as mainstream now is task-oriented. Create an image. Generate text or code. Suggest a design for this slide. All impressive and complex – but not broadly intelligent. A general-purpose, generative text AI may be able to write a report, design a course outline or summarise text with varying success – but that same AI can’t also create a video or an image, complete your most recent year's taxes, help you negotiate where to go for dinner with your significant other, or facilitate a workshop on your area of expertise. Other AIs can generate images, audio and video – but not text.
This is considered the next step – where a single AI platform can do what humans can in most professions, and at their level of competency. Some definitions include the element of sentience, where an AI becomes self-aware. There is currently no agreement amongst experts as to when, or if, this will occur. What is agreed is that safeguards need to be in place to ensure that the goals of AI align with values and goals that will benefit humanity.
This level and scope describe an AI that surpasses humans in all endeavours. Whether the risks posed by an AI that understands it is superior to the humans it was created to serve could be controlled is also a subject of debate amongst AI experts.
Video: A.I. Expert Answers A.I. Questions From Twitter | Tech Support | WIRED
In this fun and informative video, cognitive scientist Dr Gary Marcus answers questions about AI both from a technical and general perspective. Although produced in March 2023, it still provides valuable insights on the basics of AI.
If you've read this far and watched the videos, you should now have a sense that general-purpose, generative AIs are task-oriented, pattern-recognition tools that generate the "least surprising" output based on the prompts provided to them.
Depending on the AI you're using, its sources are likely limited to the public internet. ChatGPT 4 (the research version made available by OpenAI, the organisation that created it) is limited to publicly available internet content – including opinions and discussions – up to September 2021. Bing is powered by GPT-4 and combines it with the ability to search and cite the current publicly available internet. Other generative AIs have their own limitations based on their training data and levels of complexity.
These AIs are not subject-matter experts or scholarly sources. They can and do make mistakes, drawing incorrect connections between data or even inventing answers in order to complete the request made of them. They can misunderstand context, cannot verify information, and have human bias built in – from both the data they've been fed and the human editors who have rated their outputs.
But remember that today's AI is the worst you'll ever use again. And there are many things generative AI can do well.
In 2023, an AI-generated image won a prestigious photo contest in the UK. Scammers are using AI-generated audio to approximate the voices of friends and family in an attempt to trick victims into giving them money. The Screen Actors Guild (SAG-AFTRA) is fighting attempts by studios to body-scan background actors and then license the footage for re-use in future movies. And educators are finding that AIs can pass various professional certification exams, as well as assessments and writing activities. More on that later in this guide.
There are many positive uses of AI for academic and professional work - but it's vital to remember that a competent human, with knowledge of the context of the work required as well as subject expertise and writing skills, is still necessary for effectively using AI as a tool in most applications. AI may be able to generate an email response, lesson plan or video script - but for it to generate one useful for a specific use case requires not only good prompting (communicating what the desired outcome should be) but also review and refinement. At least, for the AI we have now.
In a May 2024 post titled 'Partial regurgitation and how AI really work', Gary Marcus offers a description of LLM output that is also an apt one for poor paraphrasing:
What the LLM does is more akin to what some high school students do when they plagiarize: change a few words here or there, while still sticking close to the original.
LLMs are great at that. By clustering piles of similar things together in a giant n-dimensional space, they pretty much automatically become synonym and pastische (sic) machines, regurgitating a lot of words with slight paraphrases while adding conceptually little, and understanding even less.
AI detectors are, like other general-purpose, generative AIs, pattern-recognition engines. They look for simple, "unsurprising" text: short, declarative sentences of unvaried length, simple language, cliches and other familiar phrases. So it's not surprising that AI detectors flagged texts quoted frequently up to September 2021 – like the US Constitution – as AI-generated. OpenAI pulled its own AI detector in July 2023, reportedly due to its 'low rate of accuracy' (Kelly, 2023).
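One crude signal some detection heuristics use is "burstiness" – the variation in sentence length across a passage. The sketch below is purely illustrative (no commercial detector works this simply, and the sample texts are invented): it just computes the spread of sentence lengths, where low variation is the kind of "unsurprising" pattern detectors associate with machine-generated text.

```python
import statistics

def burstiness(text):
    """Population standard deviation of sentence lengths (in words).
    A low score means uniform sentence lengths - one weak signal
    heuristics associate with AI text; it proves nothing on its own."""
    cleaned = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in cleaned.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

flat = "The cat sat down. The dog ran off. The bird flew away."
varied = ("The cat sat. After a long and lazy afternoon in the sun, "
          "the dog finally ran off. Birds flew.")
print(burstiness(flat) < burstiness(varied))  # True: varied lengths score higher
```

The obvious weakness is equally illustrative: a careful human writer of uniform sentences scores "flat", and an AI prompted to vary its rhythm scores "bursty".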
Which sets of language users tend towards simple sentence structures and safe phrases? Non-native English writers and people from less-educated backgrounds (Liang et al., 2023). However, Turnitin – the plagiarism-detection company that also offers AI detection services – disagrees, citing in-house studies that it claims show no statistically significant bias against non-native English speakers, and defends the AI-detection abilities of its service.
It's worth noting, too, that AI detectors are fairly easy to fool. For instance, savvy AI users can simply go back and forth with a generative AI, providing samples of their own past writing and asking the AI to mimic their style until they have a piece that sounds like their work. Prompts for AIs can include personas, context and rubrics. That reflective essay you think will bypass AI? There's a prompt for that: "You are a 20-year-old female university student from Adelaide, who typically gets average to slightly above average marks. Write a 500-word reflective piece on X for your second-year course Y. The learning objectives to be demonstrated are A, B, C. Here are the instructions. <paste> Here is the rubric <paste>. Based on that rubric, write a paper that will just achieve a distinction. Include a few spelling mistakes."