Migration and machine learning
Artist Mona Hedayati on AI as social imaginary, sensors and "soft resistance"

Mona Hedayati is sitting on a small Persian rug in the middle of the room. Her audience sits on the floor too, around her. It’s dark, but she’s illuminated by the laptop in front of her. She looks around at the audience, then puts on something that looks like a wristwatch. Gradually, you hear the sound of her breathing. It settles into a kind of rhythm, but keeps evolving.
After a few minutes, a projection lights up a wall. Moving images, but it’s hard to tell what they are exactly – it’s split-screen, one side moving faster than the other. It’s heavily filtered too, but you can make out someone running on a street, something like a gun at one point, a suggestion of violence. The sound of breathing is somehow synchronized with the video.
This is Hedayati’s latest performance, called Breathless. It’s a mix of live and processed data – captured by her while she watched recent protests in her home country of Iran. She now lives in Brussels, after a decade in Canada. Her work is about migration and identity, and as you might have guessed, it’s highly personal. She wants you to understand her experiences, her feelings – and paradoxically, she uses technologies like machine learning to do that.
“Basically, between representation and abstraction,” she tells me, “you can’t easily put the pieces together.” She’s lively and warm to talk with, which contrasts with the performance, where all you hear is the rhythm of her breathing.
“The sound is live,” she says. “The machine learning part is not – it consists of outputs I got when I trained this [machine learning] model on maybe five hours of breathing patterns – when I was watching videos of the protests.”
That wrist-worn device has four sensors. “First,” she explains, “is galvanic skin response, which is skin conductance – the hallmark of your stress response. It’s used in lie detectors and biofeedback.
“Then there’s a BVP [blood volume pulse] sensor, which takes your pulse rate, plus derivatives like your blood oxygen level, heart rate, inter-beat interval – which is also great for measuring stress because once you get stressed the length of the beats is what changes. And then there’s a thermometer that takes your skin temperature, and an accelerometer for movement.”
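To make the inter-beat idea concrete: the stress signal lives in how the gaps between heartbeats vary. Below is a minimal, hypothetical sketch – not Hedayati’s actual pipeline – that turns an invented blood-volume-pulse trace into inter-beat intervals and RMSSD, a standard variability statistic; the waveform, sampling rate and thresholds are all assumptions made for illustration.

```python
# Illustrative sketch only, not Hedayati's actual pipeline.
# Turn a (synthetic) blood-volume-pulse trace into inter-beat intervals (IBI)
# and RMSSD, a standard heart-rate-variability statistic linked to stress.
import numpy as np
from scipy.signal import find_peaks

fs = 64                        # assumed wrist-sensor sampling rate, in Hz
t = np.arange(0, 60, 1 / fs)   # one minute of samples

# Synthetic pulse at roughly 72 beats per minute, plus a little noise.
rng = np.random.default_rng(0)
bvp = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

# Each systolic peak marks one heartbeat; enforce a plausible minimum spacing.
peaks, _ = find_peaks(bvp, distance=int(0.5 * fs))

ibi = np.diff(peaks) / fs      # inter-beat intervals, in seconds

# RMSSD: root mean square of successive IBI differences.
# Lower values typically accompany a stress response.
rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))

print(f"mean heart rate: {60 / ibi.mean():.1f} bpm, RMSSD: {rmssd * 1000:.1f} ms")
```

None of this arithmetic surfaces on stage, of course; what the audience gets is the sound of breathing and the filtered video.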
An open work
None of this is explained in the performance, and given all the technical and personal details, I ask how much explanation the work – or any artwork – should have.
“As a traditionally trained artist,” she answers, “I believe that materiality creates imaginary. We don’t need to close that imaginary. In my performance, I could just visualise my sensor data, but I don’t want to make the work too didactic. I don’t see this as a weakness but as a gesture of generosity – it can do things to different people with different mindsets, at different levels. I don’t want to spoon-feed people with my drama.”
She has a directness and honesty – she’s not afraid to tell you how she feels, but at the same time is very thoughtful, and quick with references to philosophy or social science – or deep technical knowledge of data and machine learning.
“I’ve performed it at music festivals where people just enjoyed the sound,” she goes on, “without caring about any of the content. That’s totally cool with me. And I’ve performed it in critical spaces where people didn’t care about the sound, and just wanted to know what I was doing, what it was about. Having a didactic text on the wall is fine, but trying to integrate that into the work – I’m not a fan of it.”
Hidden dimensions
I ask how machine learning relates to this, and her answer surprises me.
“I had this dream that I wanted to use a generative model. Once you’re in this geekspace, you sort of become polluted with all the details – This is so cool that the machine can do that! With conventional statistics, it’s sort of clear how the pipeline works. But [machine learning] is so cool because you have no idea. How does it do this? It has that aura of something magical happening.
“This is the same rhetoric as in the industry,” she continues. “They use this word magic a lot. And when you’re around this geekspace, it’s contagious.
“So I started to talk myself out of it. For me the work is meaningful – it’s about my stress response, my background as a migrant, how I navigate my identity, all this complex stuff about me and my experience. Some of this data goes into the machine learning model, but all that stuff about my identity remains personal, hidden.”
So it’s kind of a black box on both sides, I suggest – all her personal experiences and feelings, the machine can never know. The data from the sensors can only hint at this.
“Yeah, but my work is not about technology at all. In fact it’s a subversion of technology – it’s what I call soft resistance.” The term is apt. It has been used mainly to describe Hong Kong protestors’ stance toward China – before the latter came down hard. The echoes with Iran are difficult to ignore. So her use of the term carries a double meaning.
Surprise surprise
AI, for her, is therefore not some sort of collaborator, with its own intentions and agency. She uses it simply as a “temporal management technology,” as she calls it. “Because I have a million WAV files. I could cherry-pick from my library – what do I pick?
“By feeding it all into the machine, I thought I would get this aggregate of all that data. But surprise surprise, I didn’t get that. Because of all these mysterious processes. But also, when signals don’t have much information content, they don’t have that much fluctuation from the mean—”
“—because information is surprise,” I interject, remembering Shannon.
“Exactly. If it doesn’t have much of that, it just doesn’t see it, or it treats it as an outlier, as noise. Subtlety is lost.”
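Her point maps directly onto Shannon’s: a signal that barely strays from its mean concentrates its probability in a few values, so each sample carries little surprise and therefore little information. A toy illustration of my own, not part of her setup:

```python
# Toy illustration of "information is surprise": a signal with little
# fluctuation around its mean has low Shannon entropy per sample, so a model
# has almost nothing to learn from it and tends to treat it as noise.
import numpy as np

def entropy_bits(x, edges):
    """Estimate Shannon entropy (in bits) of a signal via a fixed histogram."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
edges = np.linspace(-3, 3, 65)        # shared bins, so the comparison is fair

flat = 0.02 * rng.standard_normal(10_000)            # hugs the mean
varied = (np.sin(np.linspace(0, 40 * np.pi, 10_000))
          + 0.5 * rng.standard_normal(10_000))       # fluctuates a lot

print(f"near-constant signal: {entropy_bits(flat, edges):.2f} bits/sample")
print(f"fluctuating signal:   {entropy_bits(varied, edges):.2f} bits/sample")
```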
But surprise surprise – she found something else unexpected.
“I thought I would just record my breathing patterns, but what I got was also ambient information. I was watching most of the videos without sound – first of all because I just couldn’t take it, I was doing this every day, I had to maintain my mental health. In all these videos, people are screaming and shouting.
“But sometimes I would get curious, I would need to know what they were saying. And when I turned on the sound, that would of course leak into my recordings. And those it reconstructed perfectly. However, they don’t make any sense. In terms of intonation, the quality of the sound, the speed – all the sort of aura of speech is there, but the meaning isn’t there. It doesn’t reconstruct actual words.
“And it’s the same with LLMs [Large Language Models like ChatGPT]. Do they know what words mean? No. It just deals with signals, it doesn’t deal with words, so it tries to reconstruct the signals. This I find super interesting.”
The future around the corner
“Things are changing so fast in AI right now,” I say.
“They are, but also they aren’t,” she replies. “It’s the future around the corner that never happens – that’s the staple of the tech industry since the 1950s, when the term AI was coined.
“Chris Wiggins’ book How Data Happened is all about the history of data – where statistics comes from in the 19th century, up through AI and machine learning. It’s super eloquently written and accessible – it’s not an academic book.
“He says that some fields are named after an object of study, like biology. And some fields are named after an aspiration, and that’s AI and machine learning. You know the history – the term was created by scientists in order to ask for funding. That didn’t go anywhere.”
The recent push, she says, is really about transformers – deep learning models built around the simple idea of attention. Much of the current acceleration can be traced to a single 2017 paper by Google researchers, “Attention Is All You Need”. This kind of model, she tells me, “is just stacking more and more layers. But of course you can see absolutely nothing inside, and this is what is called the problem of interpretability. It’s pure computing power – it’s not very surprising. That’s why Noam Chomsky calls machine learning basically a brute force mechanism. Compute [computing power] plus big data. And you just create correlations in the data.”
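For readers curious what “the simple idea of attention” amounts to, the core operation from that paper fits in a few lines. Here is a minimal, single-head sketch in numpy – a toy illustration, not a production implementation:

```python
# Minimal single-head scaled dot-product attention, the operation at the core
# of transformer models. Toy illustration only.
import numpy as np

def attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d)) V for one attention head."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                            # four tokens, 8-dim embeddings
Q = rng.standard_normal((seq_len, d_model))        # queries
K = rng.standard_normal((seq_len, d_model))        # keys
V = rng.standard_normal((seq_len, d_model))        # values

print(attention(Q, K, V).shape)                    # (4, 8): one vector per token
```

Stacking many layers of this operation, plus enormous amounts of data and compute, is essentially the recipe she is describing.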
I suggest that it sounds like she’s generally pessimistic about future developments.
Not necessarily, she says: “I think the more technology pushes forward, the more that liberal humanism pushes back. Ideas about posthumanism – from Bruno Latour and Katherine Hayles for example – originally got pushback: ‘Wait a minute – we don’t even know how to be human yet. So let’s not go there.’ Posthumanism flattens the idea of what a human is. I think we’re moving to a space in between – of distributed cognition, between humans and technology. It’s not about simply addressing bias and the ethical issues, but more broadly understanding what’s happening to us. AI is just a mirror of who we are and what we think. It’s a social imaginary.”
So she welcomes the current calls to slow down AI development, since this would enable reflection about these larger questions. “Who builds or creates the truth? Let’s think a little more critically about this symbiosis that’s happening. It’s a way of thinking more about humanism, not posthumanism. Let’s see where we’re heading. And this goes for regulation, in policy – how can we bring the public into these consultations?
“I think things are going to keep moving, and I think there’s a good level of awareness that things could go wrong, even publicly in lay terms. If schools and universities are already drafting guidelines, it’s no longer this niche thing that only a few people use or know about.
“But,” she adds, “I don’t think there’s any way of going back or slowing down. I do think the changes are not going to be as exponential as some people think – look at how slowly robotics has developed since the 1950s or so. So I think I’m generally optimistic.”
Embodied communication
Hedayati is about halfway through a PhD, split between Concordia University in Canada and the University of Antwerp. It gives her a way to bring together her artistic practice, her technical skills and theory, to say something new about her experience of migration through multisensory means.
“As a migrant,” she says, “I can’t communicate how this feels to other people. Of course language is not a good container for emotions. So I’m looking for ways to create an embodied means of communication. That’s why the biosensors and things.
“At the moment, I’m adding interactivity to the performance and making it more durational. It will be five or six people and me, each person sending sensor readings. An aggregate signal isn’t interesting to me – I want to do one person at a time, so they can hear the differences. This will be a half day or so, with food after the performance – Persian food. I want to build some sociality around this, and in my culture, everything is around food.”
This shows how she tries to open up her work – not only to multiple interpretations, but in terms of accessibility and interactivity. And food, I note, is very multisensory, bringing up memories and associations.
“But also unknown terrain,” she adds. “If you’ve never tasted something before, for example. The unknown can be uncomfortable.”
Want more? Read the full interview here. Or visit Mona Hedayati’s website.