
AI Literacies

What skills, knowledge, attitudes and values will our students need to use AI effectively and ethically in their academic work and in the workplace? A scoping review conducted in 2022 by Laupichler et al. found that the study of AI literacy is still in its infancy and not yet fully defined. However, considering the explosion of interest just this year and the rapid evolution of the technology, we as a society are still building our evidence base, and these considerations will evolve over time.

In a January 2023 article, Dr Glenn Kleiman of the Stanford Graduate School of Education suggests the SPACE framework for teaching students to write with AI. It may provide a useful outline for considering the skills involved in using generative AI in writing and in incorporating AI into the teaching of writing. The SPACE acronym stands for:

Set directions by determining the goals, content and audience to be communicated, writing an outline or rough draft to visualise how the piece will flow.

Prompt the AI to produce the specific outputs.

Assess the AI output for accuracy, completeness, bias and writing quality.

Curate the AI-generated text to select what to use and how.

Edit the combined human and AI contributions.

 

Setting Directions and Prompting

For non-programmers, the skill set of crafting inputs that lead AI tools to deliver the desired output is known as prompt engineering. The capabilities and limitations of the AI tool will affect how well prompts work and what they produce. Students therefore need an understanding of the tool, the relevant skills (writing, artistic, audiovisual, technical, etc.), and the subject matter expertise and critical analysis skills to identify any hallucinations or errors in the output.

Creating an effective prompt involves working through the basics of communication. If you were to approach any professional writer (or artist, or coder) with a project, they would ask a series of questions to understand what you want to communicate, to whom, how and why. The difference between a competent human writer and an AI writer is that human writers can comprehend and interrogate your request, understand what they are writing and empathise with their audience. They will probe until they understand your context, that of your message and that of your audience, and they know where a piece of writing is going as they write it. AI writers, by contrast, are essentially order takers, looking for patterns in their training data that fit the patterns in the query. ChatGPT and its ilk are auto-regressive models: the algorithms generate answers linearly and do not have a planned ending (AI Explained, 2023).
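To make the "order taker" point concrete, the toy Python sketch below generates text autoregressively: it picks one next word at a time from learned word-pair frequencies, with no plan for where the sentence will end. The tiny corpus and greedy word-picking strategy are illustrative assumptions only; real large language models are vastly more sophisticated, but the one-token-at-a-time loop is the same basic idea.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only learns which word tends to follow which.
corpus = "the cat sat on the mat and the cat slept in the sun".split()

# Count how often each word follows each other word (bigram frequencies).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(context):
    """Greedily pick the most frequent follower of the last word."""
    followers = bigrams.get(context[-1])
    return followers.most_common(1)[0][0] if followers else None

# The autoregressive loop: one token at a time, no planned ending.
generated = ["the"]
for _ in range(8):
    token = next_token(generated)
    if token is None:
        break
    generated.append(token)

print(" ".join(generated))  # prints: the cat sat on the cat sat on the
```

Note how the toy model happily loops: each word is chosen only because it fits the pattern so far, which is exactly why the output needs a human to assess it.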

Therefore, a grounding in writing and communications principles and practice will help users design prompts. For example:

Persona. Provide demographic and professional information, or name a well-known writer or artist, for the output to emulate. This could range from “first year, white female university student from Ontario, Canada studying construction management in Adelaide, Australia” to “William Shakespeare”.

Style. Artistic or writing styles such as genres or specific artists for the output to emulate.

Length. Word count capabilities of different AIs will vary. However, some tools allow users to repeatedly prompt them to expand upon individual points or continue from where they left off.

Audience. Demographic information about the intended audience.

Subject/Topic. As with research questions, the more specific the details provided about the subject, the better the results. It is also possible to be specific about the points to be addressed.

Format. Depending on the AI used, you may need to specify whether you want an image, text, video or audio output.

Note: there is clearly a range of other prompt elements you could use depending on the context; more detailed resources on prompting are provided at the end of this section. A sketch of how the elements above might be combined into a single prompt follows below.
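As an illustration only, the Python sketch below assembles the elements above into one prompt string. The field names and template wording are assumptions for demonstration, not a fixed standard.

```python
# A minimal sketch: assemble the prompt elements above into one instruction.
# Field names and template wording are illustrative, not a standard.
prompt_elements = {
    "persona": "William Shakespeare",
    "style": "a light, rhyming poem",
    "length": "250 words in length",
    "audience": "10-year-olds in Australia",
    "subject": "a black cat named Sheila who likes sleeping in the sun",
    "format": "text",
}

prompt = (
    f"You are {prompt_elements['persona']}. "
    f"Write {prompt_elements['style']}, {prompt_elements['length']}, "
    f"for {prompt_elements['audience']}, "
    f"about {prompt_elements['subject']}. "
    f"Output format: {prompt_elements['format']}."
)

print(prompt)
```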

Example:

Prompt to ChatGPT 4

You are William Shakespeare. Write a poem, 250 WORDS IN LENGTH, for 10-year-olds in Australia, about a black cat named Sheila who likes sleeping in the sun.

ChatGPT 4 Output

In sunny lands where skies are blue,

There lives a cat so sleek and true,

Her coat as black as midnight's hue,

And Sheila is her name, 'tis true.

She loves to bask in sun's warm glow,

And sleeps all day, as we all know,

But when she wakes, she's all aglow,

And mischief follows where she'll go.

 

With gentle purrs and silent paws,

She moves with grace, like nature's laws,

And when she sees a bird, she draws,

Her eyes as green as forest floors.

 

Oh, Sheila is a friend so dear,

She brings us joy and never fear,

And when we need her, she is near,

Our Sheila, black cat so sincere.

 

So if you see her in the sun,

Enjoy the sight, the peaceful fun,

For Sheila is a treasure won,

A true delight for everyone.
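
The prompt above was entered through the chat interface, but the same request can be sent programmatically. Below is a minimal sketch using the openai Python package as it existed at the time of writing (the pre-1.0 interface; the model identifier and API key placeholder are assumptions, and the library's interface has since changed):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with your own key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": (
                "You are William Shakespeare. Write a poem, 250 WORDS IN LENGTH, "
                "for 10-year-olds in Australia, about a black cat named Sheila "
                "who likes sleeping in the sun."
            ),
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```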


Resources on Prompting

For further reading on prompt engineering, this February 2023 opinion piece by Dr Marcel Scharth of the University of Sydney provides an excellent overview of prompt engineering for the text-to-text and text-to-image technology in existence at the time of writing: How to perfect your prompt writing for AI generators.

As well, Dr Alan D. Thompson’s The ChatGPT Prompt Book provides insights on prompting ChatGPT for both text and coding outputs.

And Eva Rtology’s February 2023 article on prompting AI art generators includes advice on adjectives, styles, points of view, content and art/artist styles.

Assessing, curating and editing AI output

At the time of writing (April 2023), the large language model generative AIs freely available to the public are still faulty for several reasons: the data on which they were trained contains biases and misinformation, and the models look for patterns that best match the input without understanding the language they produce. AIs can and do insert words and phrases that fit a learned pattern, resulting in untruths known as hallucinations or fabrications. As well, AI output often lacks the variety in sentence length and the colourful, idiomatic language that humans with advanced language skills employ.
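One of those surface cues, uniform sentence length, can even be checked mechanically. The sketch below is a deliberately crude illustration (the regex-based sentence splitter is a simplifying assumption), and a low spread is only a weak signal, not proof of AI authorship:

```python
import re
from statistics import mean, stdev

def sentence_length_stats(text):
    """Split text on ., ! or ? and report word counts per sentence."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    spread = stdev(lengths) if len(lengths) > 1 else 0.0
    return mean(lengths), spread

avg, spread = sentence_length_stats(
    "The cat sat. The dog ran. The bird flew. The fish swam in the deep blue pond."
)
print(f"average words per sentence: {avg:.1f}, spread: {spread:.1f}")
# prints: average words per sentence: 4.2, spread: 2.5
```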

Critical thinking, as defined by the (US) National Council for Excellence in Critical Thinking in 1987, is “the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.”

Evaluative judgement is the ability to evaluate the quality of one’s own work and that of others (Tai et al., 2017).

In the section on prompting above, I asked ChatGPT 4 to produce a poem by Shakespeare rather than a sonnet. That is because I do not know the constituent elements of a quality sonnet: I have not written one since a year ten English class, nor researched them since, and therefore lack the subject matter expertise to objectively evaluate one produced by an AI or a poet laureate. Likewise, if I asked ChatGPT 4 to write code for a specific purpose, I would need to know how to test and run the code to determine whether it was fit for purpose, as the sketch below illustrates.
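To illustrate with a hypothetical example: suppose an AI produced the small function below (the function and test values are invented for demonstration). Running it against cases whose answers you know independently, including edge cases, is the minimum check before trusting it.

```python
# Hypothetical AI-generated function: does it actually do what was asked?
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# Minimal fitness-for-purpose checks against independently known answers.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
assert median([7]) == 7
print("all checks passed")
```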

In real-world contexts, a lack of subject matter knowledge and evaluative judgement could result in errors that impact lives. For instance, the “Modelderm” AI meant to identify cancerous skin lesions was trained on datasets in which many images of cancerous lesions included rulers. Thus, the system learnt to associate the presence of a ruler in an image with cancer (Narla et al., 2018).

As well, without subject matter expertise, an understanding of what constitutes an effective work of writing, art, code or video, or an understanding of how to communicate ideas effectively, one is at the mercy of the algorithms and dataset of the AI being used.

As Dr Simon Buckingham Shum of UTS stated in a webinar on AI in higher education (CRADLEdeakin, 2023), we don't just need humans in the loop; we need competent humans in the loop. AI may be a powerful productivity aid to a trained professional, but students at the beginning of their learning are not yet professionals and may not understand the ethical and technical issues that shape AI outputs well enough to use these tools effectively.

Citing the use of AI

In the realm of academia, the degree to which students should describe their interactions with AI tools is contextual and, in Australian universities including UniSA, largely left up to the academic in charge of the course. Emergent practice and advice from the Australian Academic Integrity Network is to be clear with students about whether and how they are allowed to use AI tools, and for which activities and assessments, ensuring that assessment-level guides and course policies align with programmatic policies and institutional guidelines.

Citation frameworks are updating their guidelines. The APA has been gathering input from academics on how they think AI tools should be cited and has published guidelines on its blog. The post clearly states that students should defer to instructors’ directions, and the APA invites questions in the comments at the bottom of the post. MLA describes when citing a generative AI tool is appropriate, how to cite it and why, but is likewise careful to stress that instructors’ directions must be considered; there are also useful questions and answers in the comments section of its post on the matter.

The Teaching Innovation Unit has an Academic Integrity and Artificial Intelligence resource page it will regularly update with curated resources and developments of interest. 

In the realm of research and publishing, transparency is also a must if a researcher wishes to publish in reputable journals.

The position statement from the Committee on Publication Ethics (COPE) states that AI cannot be considered an author or co-author of a paper, and that transparency regarding whether and how AI was used to produce content and/or analysis is a must.

The journal Nature has this to say about submitting papers created by or substantially with ChatGPT and similar Large Language Models: ‘Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.’