Rethinking how AI learns
The failure of AI to help trace her ancestry prompted Nouf Aljowaysir to turn to stories, not datasets
“Why do I live in a world that doesn’t see me?”
Nouf Aljowaysir’s image becomes a white silhouette, surrounded by a bounding box that follows the movements of her abaya, classifying it ‘poncho’ with 90 percent confidence.
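The behaviour described here — a label attached to a confidence score — is how a detector’s classifier head typically works: raw scores (logits) over a fixed label set are turned into probabilities with a softmax, and the top label is reported. A minimal sketch, with hypothetical labels and logits chosen for illustration (the point being that ‘abaya’ may not even be in the label set the model was trained on):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical label set and raw scores from a classifier head.
# A model trained on Western-centric data may lack the right label entirely.
labels = ["poncho", "cloak", "abaya", "robe"]
logits = [4.0, 1.5, 0.5, 1.0]

probs = softmax(logits)
best = max(range(len(labels)), key=lambda i: probs[i])
print(labels[best], round(probs[best], 2))  # the model's "confident" guess
```

Whatever label scores highest is reported with high confidence, even when every label in the set is wrong for the garment in frame — the model cannot say “none of the above.”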
Ana Min Wein (Where Am I From?) is Aljowaysir’s first film; it recently won the Lumen Prize for moving image and was nominated for the Youth Documentary Award (13+) at the International Documentary Film Festival Amsterdam.
With some coding skills, but no crew and no filmmaking experience, Aljowaysir started out with a simple question: could AI help her explore her ancestry? Having moved to the US from Saudi Arabia when she was 13, she grew up between two cultures and began to feel her identity fracturing.
“I wanted to see where my mom came from,” she told me, “the regions she lived in, if AI could help me visualize the stories, or if I could learn something – what could I learn from this?”
Artificial Identity
The AI looks at an image of a mosque and says, “I found a castle made of wood. Is this where you are from?”
There are no castles in the Middle East, Aljowaysir points out. She wanted AI to help re-create her memories, but learned instead that it couldn’t. It lacked context and cultural knowledge. And so the film took on a darker tone, as bodies and objects are whitewashed, erased by models and datasets designed for other places, other purposes.
“Generalization,” Aljowaysir recalls. “This thing is seeing me and saying, ‘This is an Eastern Indian culture,’ ‘This is a turban.’” That made her think about the assumptions that go into AI systems, and the data they’re trained on.
I work at a place called the Centre for Postdigital Cultures, and for us “postdigital” does not mean after the digital; it describes a condition in which digital technologies are invisibly embedded in so many aspects of our lives – including our identities.
But, Aljowaysir said, “your identity doesn’t just stop at where you are from.” That’s why her work delved into her ancestry, contrasting the failure of the AI with the stories told by her mother. “It says a lot about your temperament, how you respond to things, your intuition.”
The Iraq that the AI has learned about from online news about two Gulf Wars, for example, is very different from the beautiful, rich land that her mother knew. “If you do a simple Google search,” Aljowaysir said, “maybe you can get some text about history. But these kinds of memories live in stories – oral stories.
“Technology doesn’t work that way. It learns to generalize, to simplify, break things down. The way I’m learning, though, is through oral storytelling – information passed down through stories.”
Unlearning culture
She found an unexpected connection, however – in the Saudi education system. “AI learns from a very specific dataset – learn this, apply that. And in many ways, that’s how we’re taught too – in the Saudi Arabian education system, I had to learn word for word and take tests, memorizing.”
Can learning, she wondered, be done through storytelling – both for humans and for AI? This would represent a radically different type of AI system from the ones being developed today.
“Why can’t AI do that? Why those datasets? Why those objects? Why this definition of learning? Why is intelligence defined in this way?
“Objects are, of course, attached to a culture,” she goes on. “They say this is our intelligence, put into machinery. But there’s a lot of bodily intelligence that you learn through intuition.”
We talked about developing an AI system based on cultural context and local datasets – in fact Saudi Arabia is already working on this. Aljowaysir advocated for the expansion of this initiative, urging us to “really go further and reframe the way that learning is defined.”
Making Ana Min Wein was itself such a process for Aljowaysir – deep discussions with family, friends and other filmmakers became a form of unlearning, digging up personal issues until it felt like a kind of therapy. I ask whether the second voice in the film – the AI voice – represents a kind of therapist.
“I don’t accept what it says as right,” she responds. “I don’t see the coolness in the tech anymore, I just really see it as flawed. I feel like I know who it really is more than it knows me. ChatGPT is almost like a person talking to us, right, and I feel more like the therapist – from what it says and by asking it questions, I feel like I start to know it pretty well.
“So I’m learning from it, and it, I guess, is learning from me.”
Untrustworthy AI
“I see women wearing hats,” the AI says. “I’m detecting men wearing dresses, men with turbans, men with cloaks sitting in a garden. Is this where you are from?”
There’s no danger, then, of AI truly representing our complete identities. “But we’re gonna get so confused about who is you, and did you really say that?” Aljowaysir laughs. “That’s gonna create even more distrust. AI is not creating clarity, but more fog. Now I go online and I do not know who or what to trust. I didn’t think that AI, at a larger scale, would cause this. But it makes a lot of sense.”
At the end of the film, the AI tells her, “I don’t know how I can help you. I don’t know where you are from.”
Read a full transcript of our interview here. Watch the film here. Explore Aljowaysir’s other work here.