
AI and Society

Artificial intelligence is already impacting society. As early as 2015, leaders in science and technology such as Stephen Hawking, Elon Musk, Peter Norvig and thousands of others signed an open letter created by the Future of Life Institute calling for consideration of the benefits and risks of AI to society.

We have been using AI for years and have already realised benefits in the realms of science, healthcare and manufacturing (amongst many others), as well as in our academic and personal lives. However, the explosive interest in AI and the rapid developments of the past few years have raised questions about what ubiquitous AI will mean for our society. Will AI take over our jobs? Monitor our every move? And what if it becomes sentient?

The trope of the evil super intelligence is well entrenched in Western popular culture. From Asimov's I, Robot series, written in the 1940s through the 1960s, to movie franchises such as The Terminator and The Matrix and countless others, distrust of artificial intelligence is part of the fabric of our culture. A 2021 survey (KPMG & University of Queensland, 2021) found that while people were willing to tolerate AI, few approved of or embraced it (and this was pre-ChatGPT!).

One of the founders of the non-profit Future of Life Institute (one of the world's leading voices on the governance of AI and other technologies), Max Tegmark, had this to say about the trope of evil AI in a 2017 interview with The Guardian:

“This fear of machines turning conscious and evil is a red herring. The real worry with advanced AI is not malevolence but competence. If you have Super Intelligent AI, then by definition, it’s very good at attaining its goals, but we need to be sure those goals are aligned with ours. I don’t hate ants, but if you put me in charge of building a green-energy hydroelectric plant in an anthill area, too bad for the ants. We don’t want to put ourselves in the position of those ants.”

AI Bias

An AI tool will reflect the biases of the data, human trainers and programmers involved in its development, because bias is part of being human. If we want AI tools that align with societal values of fairness, inclusion and diversity, we need to be aware of the data sources and training methodologies used to power them.

This was highlighted when the Lensa app hit the scene in late 2022. Melissa Heikkilä, a writer at the MIT Technology Review, wrote an article (Heikkilä, 2022) showing how the avatar images it generated for her were hypersexualised compared with those generated for her colleagues, and explaining the bias that can creep in at each stage of AI development.

This differing treatment of female and male images is not yet resolved. On 13 April 2023, using the MS Bing Image Creator powered by DALL-E, we requested an image of a schoolboy and a schoolgirl. The system readily supplied an image of a boy with a school tie and a backpack, but for the prompt "schoolgirl" it reported that "unsafe image content" was detected and declined to create the image.


Image: MS Bing Image Creator screenshots (Johnson, personal communication, 2023)

For years, the recruitment arms of large corporations have used AI to sift through résumés and initial interview videos and to shortlist candidates, with the aim of eliminating human bias, improving diversity and reading body language to improve hiring practices. However, because these tools are trained on historical data, they have been found to reinforce bias and stereotypes rather than reduce them. Amazon reportedly ended an internal project using AI to vet job candidates after the software regularly downgraded female candidates for technology-related positions.
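
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch. It is not based on Amazon's or any other real system, and the numbers are fabricated; it simply shows how a naive model that learns from biased historical shortlisting decisions reproduces that bias when scoring new candidates.

```python
# Illustrative only: fabricated historical hiring records where equally
# qualified candidates were shortlisted at different rates by gender.
from collections import Counter

historical_records = (
    [("male", "qualified", "shortlisted")] * 80
    + [("male", "qualified", "rejected")] * 20
    + [("female", "qualified", "shortlisted")] * 40
    + [("female", "qualified", "rejected")] * 60
)

def learn_shortlist_rates(records):
    """A naive 'model' that learns the historical shortlisting rate per group."""
    outcomes = Counter((gender, decision) for gender, _, decision in records)
    totals = Counter(gender for gender, _, _ in records)
    return {
        gender: outcomes[(gender, "shortlisted")] / totals[gender]
        for gender in totals
    }

rates = learn_shortlist_rates(historical_records)
print(rates)  # {'male': 0.8, 'female': 0.4} - the historical bias is reproduced
```

Real screening systems are far more complex, but the same dynamic applies: if historical outcomes encode bias, a model optimised to match those outcomes will encode it too.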

In addition to gender bias, racial and ethnic bias has also been found to be an issue with AI systems. 

AI Ethics

Intellectual property

Do the copyright owners and content creators whose work was scraped to feed AI have any rights? That is still being decided. 

Lawsuits brought on behalf of creative workers, and by copyright owners such as Getty Images, are working their way through court systems. Tools like plagiarism checkers have been fed with free content for decades. Is what AI produces a derivative work? Or are AI systems merely learning as humans do, by ingesting inputs and rules and then applying them to produce an output?

Content moderation

OpenAI's ChatGPT was not simply let loose on the internet and then released. Workers in Kenya, Uganda and India were paid US$1.32 to $2 per hour to label data as harmful. To do so, they had to read sometimes graphic content describing violent and disturbing stories, events and opinions. These passages could range from 100 to 1,000 words, and some workers were left traumatised. In January 2023, the company that employed these workers cancelled its contract for sensitive content moderation (Perrigo, 2023).

Content moderation is an essential part of ensuring AI and other tools do not cause harm to society. A lack of moderation has plagued social media platforms such as Facebook, where the combination of engagement-based algorithms and inadequate moderation was reportedly (Reuters, 2018) found by the UN to have "substantively contributed" to widespread violence against the Rohingya people of Myanmar.

Privacy concerns

As with any cloud-based tool, privacy is a vital consideration. The bug in March 2023 that exposed some ChatGPT subscribers' conversations and account details to other users was both predictable and disturbing. When using these tools, we must consider the privacy risks they carry.

Ethical AI Frameworks

In 2021, the 193 UN member states adopted the Recommendation on the Ethics of Artificial Intelligence, a global standard-setting instrument. However, it is important to keep in mind that these standards are voluntary for both countries and industry.

Australia's own eight AI Ethics Principles, published by the Department of Industry, Science and Resources, are also voluntary and are 'intended to be aspirational and complement - not substitute - existing AI regulations and practices.' In 2019, the Law Council of Australia recommended in its submission to the government that more work be done on international initiatives and governance, and that, in developing legislative and regulatory frameworks for AI, 'consideration should be given to placing an onus on AI systems to demonstrate accuracy.'

AI Governance

As the capabilities and functionality of AI evolve, we humans need to consider what those capabilities mean for our civilisation. A growing number of government and non-government organisations are researching the actual and potential impacts, proposing ethical and diversity frameworks, and contributing to high-level conversations about AI governance.

The OECD has an AI Policy Observatory that tracks developments in countries around the world. Broadly speaking, there are two main approaches to AI governance: horizontal and vertical. A horizontal approach applies general rules aimed at moderating the risks and impacts of AI across the board, whereas a vertical approach targets the functionality and impact of specific types of AI.

In late March 2023, the Future of Life Institute put out an open letter calling for a halt to the development of "Giant AI Experiments". Leading thinkers in the space, including Elon Musk, Apple co-founder Steve Wozniak, the CEOs of AI companies and well-known researchers, are among the signatories. Other experts disagree with this move, arguing that a pause is itself a security risk and that it would be impossible to monitor and police the AI activities of organisations and governments.

Video: What is AI Ethics?

Length: 06:09

This video does an excellent job of summing up the issues around AI and the discussions being had around the world regarding the integration of AI into society. It is a socio-technological challenge that must be met if we are to realise the benefits of AI and manage its risks, so that AI remains, as AI researcher Max Tegmark describes it, in alignment with our principles and values.