Storytelling about AI
Australia, 2041. Artificial intelligence has driven down the cost of producing almost everything, creating a society where the end of poverty is in sight and money is no longer needed. In its absence, a new virtual currency, earned through voluntary community work, is created to motivate people. In this context, a young Aboriginal woman cares for an aging scientist who has successfully led the restoration of the Great Barrier Reef.
That’s the premise of “Dreaming of Plenitude,” one of a collection of short stories about the near future of AI by Kai-Fu Lee and Chen Qiufan. I read this before a talk with students in the Immersive Environments course at Amsterdam University of Applied Sciences – they were preparing to use the story as the basis for creating an immersive experience.
This article is a follow-up to “Storytelling with AI”, which summarises my last lecture with the same course. That article focuses on the practicalities of using AI to create and stage stories. This time, I want to focus more on the stories we tell about AI. With lots of examples, and with reference to current real-world narratives.
I’ve written a couple of sci fi stories about AI myself (here and here). But my take is much more critical than that of Lee and Qiufan. They acknowledge some of the negative effects of AI in a technologically advanced society – for example how users might “game” a social credit system, and how the benefits of technology might not be equally distributed in such a society. And each of their ten stories comes with an academic analysis linked to present and projected trends.
So it’s surprising that they don’t mention China’s social credit scoring system, launched around 2018. An episode of the TV series Black Mirror has dealt wryly with the potential downsides of such a system.
Elderly people in “Dreaming of Plenitude” are basically managed by AI, and the young are employed by AI. This has a basis in reality: today, robot carers are a compelling “use case” (code word for a technology-led approach) to deal with aging populations in some countries. And automation aims explicitly at productivity gains and reducing human “work” – though some fear it means even more AI control over human affairs. Automation does not equal greater human autonomy. In fact, many of us already serve algorithms – for example on social media, and increasingly in work contexts.
Dreaming of climate change
The older woman in “Dreaming of Plenitude” saves the coral reef with the help of AI, underwater robots, and genetically engineered microorganisms. No mention is made of the broader environmental crisis, nor of the environmental impacts of increasing use of AI in terms of mineral extraction and energy usage – as I write this, Amazon, Google and Microsoft have increased capital expenditure on cloud resources (data centres and network capacity) by 20% over the past two years, in order to ramp up capacity for exponentially increasing use of AI (source: Financial Times).
Data centers are among the world’s largest users of electricity. According to Tung-Hui Hu: “The cloud is a resource-intensive, extractive technology that converts water and electricity into computational power, leaving a sizable amount of environmental damage that it then displaces from sight.” (Hu’s Prehistory of the Cloud, p.146)
More broadly, where in the story are the climate extremes and environmental degradation that we only see increasing today? No sci fi story can include all the details of the world it depicts, but my understanding is that even if we were to reach net zero emissions by 2030, the climate would take hundreds of years to mend itself. Australia – like an increasing number of places on Earth – suffers from an increase in huge wildfires, long-term drought, and extreme storms. No mention is made of this in the story; instead, Lee’s analysis simply states:
…a clean energy revolution is under way—one that will address the crisis of climate change while dramatically reducing the cost of powering the world. We are approaching a confluence of improved solar, wind, and battery technologies with the capacity to rebuild the world’s energy infrastructure by 2041. (p. 418)
AI is said to help bring about this clean revolution, but exactly how – and without increasing its own environmental footprint – goes unstated. Kate Crawford calls this “the myth of clean tech”.
Another popular fictional story does address these factors, if only in passing. Of all the fears about AI going around, most people cite the Terminator scenario, from the film of the same name, wherein AI attains sufficient autonomy that it decides humans need to be eliminated. But I recently re-watched another film, The Matrix. Laurence Fishburne’s character Morpheus, in revealing the state of the film’s “real world” to Keanu Reeves’ Neo, mentions that it’s perpetually dark and gloomy outside because when AI attained autonomy, it exploited solar power to feed its endless appetite for energy – so humans resorted to geo-engineering to increase global cloud cover and block out the sun. If you’ve seen the movie, you know that even this feat didn’t stop AI from hunting down humans.
AI and climate catastrophe combine to make quite a doomsday cocktail. But I’d like to shift over to a more positive perspective. In the Nine Earths project I worked on, we commissioned artists in 12 countries to film a day in the life of an average person. I then went through all this footage and edited it together to produce ethnographic films.
Someone’s average day doesn’t sound like compelling viewing, but when you look at people doing the same thing – say, commuting, or shopping for food – across different cultures, the most mundane details become fascinating. What kind of fruit is that? Why do motorbikes go so slowly in Vietnam? Why is absolutely everyone staring at the phone while they eat? Looking at people’s consumption practices through the lens of climate change provides a stark contrast to the doomsday imagery of melting ice and belching smokestacks. And it also makes us think of our own actions – what if someone was filming your average day?
Dreaming of culture
You might wonder what this has to do with AI. As I watched hours and hours of footage from Dubai, Indonesia, Jamaica and elsewhere, I was also – completely separately – creating and working with AI systems. But I started to feel like an AI myself, fed with a massive dataset that I had to classify and search for patterns – a lot like Everest Pipkin describes here. I wondered if AI could help me with this task. It can recognise objects, people, actions and representations, based on categories it has learned. What would it see differently from me? Could it make inferences, tell stories?
I got a chance to start investigating this, while working with Festival dei Popoli, a film festival in Florence, Italy. It has an archive of ethnographic and documentary films going back to 1959, and I was able to work with some of this in developing what is called “ethnographic AI” – a system that uses relatively small datasets that are specific to one culture, or one film, or one year. Almost all AI systems have been created within particular cultures and subcultures – in places like Silicon Valley, for example – using datasets drawn overwhelmingly from the US and Europe, via the internet (here’s a nice example). What happens when these systems are imported into, or imposed on, other cultures? What would AI look like if it was based on different cultural contexts?
This is early, ongoing work for me. You can see my progress and freely access my code here.
Artist Julian Tapales, a former student of mine, took a similar approach to create something very different. He used AI to transform a 3D scan of a symbolic artifact from the Philippines, where he comes from. As he describes, the object is “transformed first into its own numerical representation, then into an image that is continuously re-presented.”
As in my ethnographic approach, the AI is trained using data local to the region – in Tapales’ case, local materials – and on-screen it comes to life, continuously transformed and never in a fixed state. Symbolic objects like this are already endowed with magical powers – to those who believe in magic. In using AI to create an animated representation, Tapales tries to draw out these magical powers, while simultaneously drawing attention to the real-world materiality of the object. “Is the digital representation inseparable from its material counterpart,” he asks, “or has it ‘learned’ enough to acquire an agency of its own?” Should art merely represent the world as we experience it, or instead “replicate the generative activity that constitutes the essence of nature itself”?
This work is echoed in the “XR glasses” worn by the Aboriginal character in “Dreaming of Plenitude”, that render the world in the style of the dot paintings of her native culture. These paintings (for example by Angelina Pwerle) are like blueprints for the “ancient Aboriginal philosophy in which the ecosystem, human inhabitants and past, present and future of a place are inextricable,” writes Dan Stapleton. The word “dreaming” in the title of Lee and Qiufan’s story serves a double meaning, alluding to the Aboriginal origin myth. Tapales, too, is interested in the broader reality system in which we live, just using different tools and materials to evoke it.
Dreaming of commerce
In the world of “Dreaming of Plenitude”, money is obsolete. What about power – economic and otherwise? The Australian government has apparently developed the social credit AI system in the story, though the Aboriginal character is said to struggle to find a stable job in an XR company. So companies still exist in this world. Yet no one has to work, and there is no monetary profit to be made. Lee contends, therefore, that a world of plenitude invalidates our current economic models, which were made for scarcity. Elon Musk recently made a similar claim – that AI will make all jobs unnecessary – which doesn’t make it any more believable.
As I write this, drama at OpenAI has resulted in a win for capitalism and the apparent removal of an off switch for AI, in the form of guardrails or regulation. Another AI optimist, Meta’s Yann LeCun, believes there’s nothing to fear about the technology, “because those machines will be doing our bidding.” My question is: who exactly is “our”? It’s worth noting that OpenAI defines Artificial General Intelligence (AGI – its stated goal) as “highly autonomous systems that outperform humans at most economically valuable work” (my emphasis).
The lure of commercial interests, and of developing a technology with such far-reaching consequences, isn’t just monetary; it’s about an irresistible sense of power and destiny, according to computer scientist Stuart Russell. Political leaders certainly share this drive for power and legacy, but it’s difficult to imagine any current government being organised and resourced enough to pull off such advanced technological development.
That’s not to say it can’t be done – the US during and after WWII focused vast human and financial resources on computing and the space race, along with attendant dual-use technologies resulting in advanced weapons. If the amount of brainpower and resources that were quickly mobilised to address Covid-19 were applied to climate change, you’d think we could contain it just as effectively.
And yet, “the field – artificial intelligence – is really much more like alchemy than rocket science,” says Helen Toner, who was ousted from OpenAI during the recent drama. “You just throw things into the pot and see what happens.”
Which brings us right back to storytelling. Drama is what we have around AI, with competing narratives, turbulent personalities, power struggles and epic consequences. “Throwing things into the pot” underscores the role of stories and metaphor to help us understand complex phenomena – AI as witchcraft or cookery, take your pick. With all the actors, stakes and narrative twists at play, screenwriters are definitely taking notes for the next historical drama (or conspiracy theory): OpenAI’s Sam Altman is collecting iris scans for a world currency; Elon Musk was an early backer of OpenAI, and convicted felon Sam Bankman-Fried of rival Anthropic; does the company’s mysterious “Q*” project signal a humanity-threatening AGI?
“Alexa, are you the harbinger of doom?”
Look no further than your kitchen, or that phone in your pocket, for answers. I mean answers, not stories. Fact, not fiction. The big push is for AI to provide accurate information and responses to our questions. When it doesn’t, it becomes the subject of social media memes, media hand-wringing, or in the worst case, dire real-world consequences. But what if we didn’t expect “truth” or accuracy from AI systems?
That’s the premise of (my PhD student) Maria Tsilogianni’s work. Her “idiotic agents” don’t sit quietly in the background of your life, waiting for your requests. They roam around the house, play annoying sounds, chat with each other in their own language, blow bubbles, and ask you questions instead of the other way around.
The AI agents in the film Counterproductive by Ravin Raori (another former student) have more sinister intentions. Controlled by a mega-corporation in a near-future world that resembles that of “Dreaming of Plenitude” (if decidedly darker), they spy on you, secretly check you out, spew sexist and rude comments.
Stories without words
All of which highlights the central role of language in contemporary AI systems. All computer code is a form of language, but generative AI – from Large Language Models to images generated by text prompts – relies on written or spoken language, mostly English. You can get a fictional story out of ChatGPT, and you could consider a well-engineered prompt that generates a beautifully-rendered image as a very simple kind of story.
Are there stories without words? Absolutely! I think immediately of Akira Kurosawa’s beautiful film Dreams, and then I recall the whole genre of wordless films about animals – most recently Gunda and Eo. A trail, such as your browser history, is a simple story told in numbers – I went here, then here, then here – as I told the Amsterdam students. Your whole life is a story in half-remembered dreams, images and memories. And of course when we recount a story, we usually put it into different words each time – a story is simply a sequence of events, and usually evolves with each telling and each context it’s carried into.
Can there be AI without words? Yes, I think so. Google’s DeepDream, for example, and the countless generative videos that have followed. Note that Demis Hassabis, the founder of DeepMind, has said, "Most people start with language, but we start with sensory experience."
Robot journeys are a series of events (computers separate time into periodic events and more persistent states). Computer code is language, but beneath it lies a layer of on and off states in transistors. Down at the quantum level, things get fuzzy: you can no longer say that an individual particle was in one place, or one state, then another. Both language and narrative break down.
Could we break the links between words and the things they refer to? The artist René Magritte certainly did, and his resulting artworks raise questions rather than tell linear stories. Taking some inspiration from him, and from Surrealism more generally, artist Linnea Langfjord Kristensen and I worked with an AI algorithm called word2vec, which plots a dataset of words in a multidimensional, mathematical vector space – the kind of space common to most deep learning models, and part of what makes them impenetrable to humans, since we can’t really visualise more than three dimensions. We used a dataset of Surrealist poetry, with the intention of having the AI influence a live performance in real time by taking spoken words and giving back adjacent words in vector space. Find some info about that here.
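The “adjacent words” lookup can be sketched in a few lines of code. To be clear, this is a toy stand-in, not our performance system: the five words and their three-dimensional vectors are invented for illustration (real word2vec models learn hundreds of dimensions from word co-occurrence in a corpus), but the nearest-neighbour search by cosine similarity is the same operation.

```python
import numpy as np

# Hypothetical 3-D "embeddings" for a handful of vaguely Surrealist words.
# A trained word2vec model would supply these vectors instead.
embeddings = {
    "dream":   np.array([0.9, 0.1, 0.2]),
    "sleep":   np.array([0.8, 0.2, 0.1]),
    "clock":   np.array([0.1, 0.9, 0.3]),
    "melting": np.array([0.2, 0.8, 0.4]),
    "sea":     np.array([0.3, 0.3, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(word, k=2):
    """Return the k words adjacent to `word` in the vector space."""
    target = embeddings[word]
    scored = [(cosine(target, v), w) for w, v in embeddings.items() if w != word]
    return [w for _, w in sorted(scored, reverse=True)[:k]]

print(nearest("dream"))  # → ['sleep', 'sea']
```

In the live performance, a spoken word would take the place of the lookup key, and the words returned from the neighbourhood would feed back into the piece.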
Poetry is one way to break that link between words and meanings, because it uses multiple meanings and plays with language generally. Would it make sense to get away from meaning altogether in an AI system? That depends what you mean by “meaning”. Large Language Models are known to spew nonsense sometimes – an increasing possibility as they are trained on data scraped from an internet quickly filling with AI-generated content. Like a snake eating its own tail, an AI system fed too much AI-generated data can suffer what’s called model collapse, in which its outputs degrade into nonsense.
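Model collapse can be illustrated with a deliberately tiny simulation – a single Gaussian distribution standing in for a model, nothing like an actual LLM. Each “generation” is refitted on samples drawn from the previous generation’s output, and the estimation error compounds: the fitted distribution’s spread decays toward zero, losing the diversity of the original data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data is a standard Gaussian (mean 0, std 1).
# Each subsequent generation is trained only on the previous one's output.
mean, std = 0.0, 1.0
history = [std]
for generation in range(500):
    samples = rng.normal(mean, std, size=50)   # data produced by the model
    mean, std = samples.mean(), samples.std()  # refit on its own output
    history.append(std)

print(f"std at generation 0:   {history[0]:.4f}")
print(f"std at generation 500: {history[-1]:.4f}")  # far smaller: diversity lost
```

The biased variance estimate shrinks a little each round, and with only the model’s own samples to learn from, there is nothing to pull it back – a crude analogue of an LLM retrained on its own text.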
In our project described above, we looked to the Surrealists who embraced nonsense and randomness (and dreams) as drivers of creativity. In fact, go back to the very start of AI, and you find Alan Turing proposing that an AI system should use a degree of randomness in order to generate anything truly new.
A world without data
Model collapse is different from the deliberate use of synthetic data, which is also on the rise: synthetic data is not scraped from the net or the real world, but generated from computer simulations. Here’s an example of how we have used that in another art project.
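As a minimal sketch of the idea (this is not the art project itself – the projectile model and all its parameters are invented for the example): with synthetic data, inputs can be sampled at will and labels are computed by the simulation rather than collected from the world.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_flight(angle_deg, speed):
    """Range of an ideal projectile, plus a little simulated sensor noise."""
    g = 9.81  # gravitational acceleration, m/s^2
    angle = np.radians(angle_deg)
    true_range = speed**2 * np.sin(2 * angle) / g
    return true_range + rng.normal(0, 0.1)  # noise makes it "measurement-like"

# A synthetic dataset: no scraping, no sensors - every example is simulated.
angles = rng.uniform(10, 80, size=1000)   # launch angle in degrees
speeds = rng.uniform(5, 20, size=1000)    # launch speed in m/s
dataset = [((a, s), simulate_flight(a, s)) for a, s in zip(angles, speeds)]

print(len(dataset), "synthetic examples")  # 1000 synthetic examples
```

A model trained on pairs like these never touches real-world data at all, which is precisely what makes synthetic data attractive where real data is scarce, private, or expensive.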
Could we get rid of data altogether? Again, depends what you mean. I would make a distinction between data, information, knowledge, and intelligence. I would characterise data as “raw” in some way – say, as electromagnetic waves or numeric quantities. You can find my extended discussion of information here. Knowledge, to me, requires some filtering process, some subjective interpretation and/or synthesis, and as such it requires time. Intelligence, then – ah, that one we could discuss forever. Academics love to argue about the meaning of words, and I’m trying to get away from that.
We could certainly dispense with rationality as a function of AI systems. Why? Two reasons: (1) As I said above, we shouldn’t expect that such a system is truthful or factual by default – and that goes for any human-generated insights too. Critical thinking and skepticism toward any claims are increasingly vital these days. (2) Some of the most interesting things in life are not rational: love, emotions, art, nature, etc. These are things that cannot be captured as quantitative data. The designer Kenya Hara calls this “exformation” – everything that is left out of information. Philosopher Federico Campagna calls such indescribable things ineffable, or simply magic. Sci fi author Arthur C. Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.” Today, much of what Large Language Models output can indeed be difficult to distinguish from human-generated work – though some AI systems can now detect these differences. It’s bots vs. bots.
Now we get down to basic philosophical concepts of reality and existence – and I believe AI is, and should, prompt thinking about such matters. In this article, however, I want to keep the focus on stories, so you can read more about the philosophy here.
From stories to worlds
Reality and stories meet in worldbuilding. Good storytellers, and especially sci fi writers, build entire worlds. Video games are great at immersing us in narrative-driven worlds, and have in fact been using AI for years to drive nonhuman characters. My favourite handbook for worldbuilding is Ian Cheng’s brief book Emissary’s Guide to Worlding. He explains that an environment is a particular kind of object, whereas a world is a particular kind of relationship. In this sense it is more of an ontology – a system of reality. Campagna told me something similar about art: an artist, he says, is not so much a person as a position, able to step outside our current reality system and speak from there.
Another nice example of a worldbuilding approach to storytelling is Rosa Menkman’s video artwork DCT: SYPHONING. Not specifically about AI, but it takes you into multidimensional computational worlds using a compelling narrative.
My final example is Divided Together by Henri Holz, another former student. She, too, created a whole world – a future society in which people are divided into two distinct, binary types – and tells its story through multiple media, including fictional products. It has nothing to do with AI – or does it? AI is, after all, about classification and discrimination. These issues are at the dark heart of Lee and Qiufan’s story “Dreaming of Plenitude”. The authors show that AI certainly won’t eliminate discrimination. I hope my analysis of their story comes across not as too harsh a critique, but as a point of departure for discussing some other stories about AI. Not all dreaming of plenitude, but definitely a plenitude of dreams.