Image: (c) University of South Australia
These are machines that cannot learn or form new connections, but can respond automatically to inputs and patterns with little to no human intervention beyond programming tweaks. They have no memory, cannot "learn" and will respond to identical situations in exactly the same way every time. Think of IBM's Deep Blue chess-playing machine, which beat chess Grandmaster Garry Kasparov in 1997, Netflix's recommendation engine or spam filters.
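The defining trait of a reactive machine is statelessness: the same input always yields the same output. A minimal sketch of this idea is a rule-based spam filter; the keyword list and matching rule below are illustrative assumptions, not any real product's logic.

```python
# A stateless, rule-based "reactive machine": it keeps no memory between
# calls, so identical input always produces identical output.
# The keyword list is a made-up example, not a real spam filter's rules.

SPAM_KEYWORDS = {"winner", "free money", "click here", "urgent offer"}

def is_spam(message: str) -> bool:
    """Flag a message if it contains any known spam keyword.

    No state is stored and nothing is learned: the response to a given
    message is fixed entirely by the pre-programmed rules.
    """
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("Click here to claim your FREE MONEY!"))  # True
print(is_spam("Lunch at noon?"))                        # False
```

Because the function consults only its fixed rules, "training" it means a human editing the keyword set, which is exactly the "programming tweaks" the text describes.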
Adapted from Marr, 2021
Limited memory systems are trained on a closed set of data, combined with pre-programmed information, to produce a variety of outputs with varying degrees of accuracy. During training on a test database, they learn to map inputs to outputs through a process of trial and error, and are then re-tested and assessed against a number of metrics. Once in use, they can complete complex classification tasks and use historical data to make predictions. Chatbots such as OpenAI's ChatGPT and Google Bard, smart assistants such as Siri and Alexa, and all other current AI fall into this category. It's where we are now.
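The trial-and-error loop described above can be sketched with one of the simplest trainable models, a perceptron. This is a toy illustration, not how ChatGPT or Siri are built: it learns to map inputs to outputs (here, the logical AND function) on a closed training set, then is re-tested against a simple accuracy metric. The epoch count is an arbitrary assumption.

```python
# A minimal "limited memory" training loop: a perceptron learns input-to-
# output mappings by trial and error on a closed data set, then is
# re-tested. Integer weights keep the arithmetic exact for this toy case.

TRAINING_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(epochs: int = 20):
    w1 = w2 = bias = 0
    for _ in range(epochs):
        for (x1, x2), target in TRAINING_DATA:
            prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - prediction      # trial and error: adjust on mistakes
            w1 += error * x1
            w2 += error * x2
            bias += error
    return w1, w2, bias

def predict(weights, x1, x2):
    w1, w2, bias = weights
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

weights = train()
# Re-test: accuracy over the closed data set as the assessment metric.
accuracy = sum(predict(weights, x1, x2) == t
               for (x1, x2), t in TRAINING_DATA) / len(TRAINING_DATA)
print(accuracy)  # 1.0
```

Unlike the reactive machine above, this system's behavior depends on what it saw during training: change the data set and the learned weights, and hence the outputs, change with them.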
This point along the spectrum lies in the future. Simply put, it is empathy: the ability to "read the room" or, as AI researcher Arend Hintze explains, "…the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior (sic)" (Hintze, 2016). It's where the science is heading, but we're not there yet.
Futurist Bernard Marr defines self-aware AI as "not only aware of the emotions and mental states of others, but of their own" (Marr, n.d.). How far in the future this human-level consciousness lies, or whether it will ever arrive at all, is a subject of robust debate.