
Can your dreams predict the future? Can AI? And is reality itself just a dream? In this article I share some amazing connections I have found between dreams, AI, and the construction of reality: connections involving world models, prediction, storage, and the role of the body.
World models
A fundamental assumption in neuroscience is that we develop a mental model of the world in our heads, based on what we perceive with our senses, which then guides our decisions and actions. It is flexible enough so that we can react quickly and intuitively to new situations as they arise, by subconsciously comparing past and present states.
This understanding has been transferred to AI – specifically to reinforcement learning, where a model of the world can be used to train a system to make predictions, much as we do in daily life. See https://worldmodels.github.io/.
I recently read Charlie Morley’s 2013 book Dreams of Awakening. It’s supported by lots of neuroscience research, and one insight that surprised me was that one part of the brain generates the physical spaces within dreams, which you then navigate from another part of the brain, just like a videogame character.
This was replicated for AI by the authors of the above world modelling paper: “We first train a large neural network to learn a model of the agent’s world in an unsupervised manner, and then train the smaller controller model to learn to perform a task using this world model.”
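To make that two-part recipe concrete, here is a minimal sketch in PyTorch. It is only illustrative: the paper pairs a variational autoencoder with a mixture-density RNN trained on real gameplay, whereas the layer sizes, dimensions, and training details below are my own placeholder assumptions.

```python
# Minimal sketch of the world-models recipe (https://worldmodels.github.io/):
# a large model learns to represent and predict the world; a tiny controller
# learns to act inside it. All sizes here are placeholder assumptions.
import torch
import torch.nn as nn

LATENT, HIDDEN, ACTIONS = 32, 256, 3

class VisionModel(nn.Module):
    """V: compresses an observation (e.g. a 64x64 game frame) to a small latent z."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU(), nn.Linear(512, LATENT))

    def forward(self, frame):
        return self.encode(frame)

class MemoryModel(nn.Module):
    """M: an RNN that predicts the next latent z -- the learned 'dream' dynamics."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(LATENT + ACTIONS, HIDDEN, batch_first=True)
        self.to_next_z = nn.Linear(HIDDEN, LATENT)

    def forward(self, z, action, state=None):
        out, state = self.rnn(torch.cat([z, action], dim=-1), state)
        return self.to_next_z(out), state

class Controller(nn.Module):
    """C: a tiny policy that navigates the world the other two models generate."""
    def __init__(self):
        super().__init__()
        self.policy = nn.Linear(LATENT + HIDDEN, ACTIONS)

    def forward(self, z, h):
        return torch.tanh(self.policy(torch.cat([z, h], dim=-1)))

# One step of acting inside the learned world:
V, M, C = VisionModel(), MemoryModel(), Controller()
frame = torch.rand(1, 3, 64, 64)            # fake observation
z = V(frame).unsqueeze(1)                   # (batch, time, LATENT)
h = torch.zeros(1, 1, HIDDEN)               # initial hidden state
action = C(z.squeeze(1), h.squeeze(0))      # controller picks an action
z_next, _ = M(z, action.unsqueeze(1))       # world model 'dreams' the next state
```

The point of the split is that the controller can stay tiny (the paper’s was a single linear layer), because the hard work of modelling the world has already been done elsewhere – much like the dreaming brain’s division of labor above.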
Dream worlds, as you know, are not complete or coherent. So maybe a better comparison is this research in which AI generates a game world in real time, rather than ahead of time. This means that, like your dreaming brain, the AI has some understanding of the physics and construction of physical spaces. Of the AI-generated worlds in this research, a friend pointed out to me, “They don’t yet work as persistent-world games because if you turn back, the world has changed.” Just as in a dream.
Predicting the future
This brings us to something weirder, and even more interesting. You know that most AI systems exist to predict what comes next, or what something is, based on past data they’re trained on. As a kind of world model, the human brain does something similar. What about dreams? Can they predict the future?
Morley, in Dreams of Awakening, believes they can, and the way he explains it is convincing. He points to the psychiatrist Carl Jung, who said that the brain can easily predict future events not through some mystical property, but simply due to the amount of information it stores. We are just unable to access all this information consciously, and some of it emerges in dreams. I don’t know about you, but I see things in dreams that I had completely forgotten about. No wonder Sigmund Freud looked to dreams for repressed desires and anxieties.
Think about it: How much sensory information do you take in during a single day? Without even taking into account screens and social media, that’s already a lot. But predicting the future? Morley refers to quantum theory to explain a nonlinear relation between past, present and future – something about “nonlocal communication through the vast quantum interconnectivity of reality.” Time is not linear, but we fail to see its nonlinearity in waking life due to the overriding preference of the conscious, rational mind for linearity and causality.
According to Tiffany D’Elia, it’s like that scene in The Matrix when Neo visits The Oracle, and a small child shows him how to bend a spoon with his mind because, as the child says, “There is no spoon.” In the interconnectedness of things, the spoon is not separate, and reality is a projection of our own consciousness. This, D’Elia says, goes back to the Hermetic school of philosophy: “The mind is not inside the universe, the universe is inside the mind.” This is all a bit too mystical for me, but her argument here accords with what I’ve detailed above.
Let’s go back to world models again. The authors of that paper cite evidence suggesting that “what we perceive at any given moment is governed by our brain’s prediction of the future based on our internal model”. And similarly, in AI reinforcement learning, “an artificial agent also benefits from having a good representation of past and present states, and a good predictive model of the future”.
Partial connections
So this model of the world, whether in our head or in an AI system, isn’t coherent or complete. Memories/data are not stored whole but in parts. Through a similar kind of next-token prediction, both generative AI systems and dreams seem to construct things by combining these partial memories, which is why both have a similar, seemingly random quality – because there really is randomness involved.
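To see the randomness concretely, here is a toy sketch with an invented vocabulary and invented probabilities: at each step the generator produces a distribution over possible next fragments and draws one at random, so the same stored pieces recombine differently on every run.

```python
# Toy sketch: the vocabulary and probabilities are invented for illustration.
import random

vocab = ["house", "ocean", "staircase", "grandmother", "bicycle"]

def next_fragment(probs, temperature=1.0):
    """Sample the next piece; higher temperature means more randomness."""
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return random.choices(vocab, weights=[w / total for w in weights])[0]

# The same partial 'memories' (one fixed distribution) yield a different
# sequence every run: recombination plus randomness.
probs = [0.4, 0.25, 0.2, 0.1, 0.05]
print([next_fragment(probs, temperature=1.2) for _ in range(5)])
```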
When you saw the title of this article, maybe the first thing you thought of was Google’s DeepDream. Born in the fairly early days of the current AI wave, it was developed by Google researcher Alexander Mordvintsev, who was working on computer vision.
Most researchers were building image recognition systems by training them on labeled data, then feeding in an image and having the system classify it based on the training set. Mordvintsev instead stopped the inference process partway through, before the system decided what the image was, and adjusted the input image to amplify whatever that intermediate layer detected, feeding the result back in over and over. You’ve seen the resulting DeepDream imagery.
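For the curious, here is a hedged sketch of that loop in PyTorch, using torchvision’s pretrained GoogLeNet as a stand-in for Mordvintsev’s network. The layer choice, step count, and step size are my own assumptions, not his.

```python
# Hedged sketch of the DeepDream loop; GoogLeNet is a stand-in network.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

# A forward hook captures what an intermediate layer 'sees'.
captured = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: captured.update(act=output))

def deep_dream(image, steps=20, lr=0.05):
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)                     # run inference...
        loss = captured["act"].norm()    # ...but judge the middle layer, not the verdict
        loss.backward()
        with torch.no_grad():
            # Nudge the image to excite that layer even more, then repeat.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

dreamed = deep_dream(torch.rand(1, 3, 224, 224))   # start from noise, or a photo
```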
This article explains how our perceptual system behaves in a similar way, “making us ‘see’ things that aren’t really there…. Did that mean that an artificial neural net was not that artificial? Could we say that Mordvintsev had found a way to look into the machine’s unconscious, into its inner life, its dreams?”
Because of these partial connections in the brain, we shouldn’t read too much into the imagery that dreams throw up. Subsequent psychology research has shown that Freud probably overreached on this front. In his novel London Fields, Martin Amis writes, “We are all poets or babies in the middle of the night, struggling with being.”
The partial connections in generative AI systems, however, can throw up some surprising things. Artists Holly Herndon and Mat Dryhurst explain:
As AI models are trained, they compress data into representations of “concepts”. These concepts (a person, a thing, an idea, etc.) are understood by the model in multiple dimensions and in relation to all of the other concepts the model recognises. The place in mathematical space that a model assigns to a given concept is called its “embedding”. In theory, every possible thing a model can know – all of the infinite connections that can be drawn from the data it contains – already exists within it. (From their 2024 exhibition catalog All Media is Training Data, p.79)
If that is the case, they wondered, could they create an artwork and then find it already existing in a model’s latent space? Sounds crazy, and yet that is exactly what they found. They created a physical sculpture of a horse, and then searched for it in a model that had been trained before they made the sculpture. And there it was. OMG! We might be wary of what AI might predict or create, but again, this depends on what we attend to and feed into it.
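What might “searching a model’s latent space” look like in practice? Here is a rough sketch under loose assumptions: a placeholder embed function stands in for a real image-embedding model (CLIP is a common choice), and random vectors stand in for the latent points the model already contains.

```python
# Rough sketch; `embed` and the stored latent points are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def embed(image_path):
    """Stand-in: a real model maps an image to a vector (its 'embedding')."""
    return rng.standard_normal(512)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Points the model 'already contains' (random placeholders here).
latent_points = {f"sample_{i}": rng.standard_normal(512) for i in range(1000)}

# Embed the new artwork and look for its nearest neighbour in latent space.
sculpture = embed("photo_of_horse_sculpture.jpg")   # hypothetical file name
nearest = max(latent_points, key=lambda k: cosine(sculpture, latent_points[k]))
print(nearest, round(cosine(sculpture, latent_points[nearest]), 3))
```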
The body strikes back
Everything I’ve discussed so far has been purely mental, except where our mental conceptions influence how we see and act in the physical world. But you and I both know that human “intelligence” is vastly different from AI, owing particularly to the fact that we have a body and exist “in the world”, in relation to and interconnected with other physical and biological things. Therefore, our relation to dreams can’t be all that similar to an AI model.
The body is typically paralyzed during dreams, presumably so we don’t wake up and/or hurt ourselves by acting out those dreams. The occasional twitch, not to mention rapid eye movements (REM), clues the outside observer in that a sleeper is dreaming.
But the surprising thing I discovered is that it actually works the other way around: mental activity doesn’t give rise to the bodily movement; bodily movement actually drives our dreams.
Research by neuroscientist Mark Blumberg, carried out over a couple of decades, confirms this by comparing the order in which neural activity and physical activity occur. His results, spelled out in this article, were clear: “The body and brain weren’t disconnected. The brain was listening to the body.” On this view, the body is paralyzed during sleep not to suppress the twitches, but to isolate them so their signal comes through clearly.
You might think there couldn’t possibly be a link with AI, but one word: robotics. Blumberg speculated that the same idea could apply: by randomly twitching, a robot might be able to learn how to move in the world. Indeed, such work was already underway: researchers had already found that their robot “could essentially learn to walk from scratch by systematically twitching to map the shape and function of its body.”
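Here is a minimal sketch of that idea, sometimes called motor babbling, under my own simplifying assumptions: the robot issues random twitches, records what its sensors report, fits a map of its own body from those pairs, and then inverts the map to move deliberately.

```python
# Minimal motor-babbling sketch; the 'body' here is a made-up linear system.
import numpy as np

rng = np.random.default_rng(1)

def sense(twitch):
    """Stand-in for the robot's body: an unknown mapping it must discover."""
    hidden_anatomy = np.array([[0.8, -0.2], [0.1, 0.5]])
    return twitch @ hidden_anatomy + 0.01 * rng.standard_normal(2)

# 1. Babble: random twitches of two joints, recording what the sensors report.
twitches = rng.uniform(-1, 1, size=(200, 2))
outcomes = np.array([sense(t) for t in twitches])

# 2. Fit a forward model of the body (least squares here; a real robot
#    might use a neural network).
body_map, *_ = np.linalg.lstsq(twitches, outcomes, rcond=None)

# 3. Invert the map to act deliberately: which twitch produces a desired motion?
desired = np.array([0.3, -0.1])
action = desired @ np.linalg.pinv(body_map)
print("predicted motion:", action @ body_map)   # approximately `desired`
```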
What happens next? You guessed it:
Watching the robot twitch, a fellow-researcher commented that it looked like it was dreaming. The team laughed and thought nothing of it until the fall of 2013, when Bongard met Blumberg when he gave a talk on adaptive robots. Suddenly, the idea of a dreaming robot didn’t seem so far-fetched. “Dreaming is a safe space, a time to try things out and retune or debug your body,” Bongard told me. (source)
Some conclusions
Dreams are a safe space for humans too – maybe the last place we are truly free (though paradoxically not in control). For now. In the novel The Dream Hotel, Laila Lalami describes a near future in which AI companies monitor our dreams via a device that optimizes a good night’s sleep but also reports disturbing dreams to the government. Isn’t your sleep tracking device already halfway there?
Now we know that the body also reports to the subconscious mind. But is everything somehow already present in the mind, as in the AI model? Maybe, but only in pieces. Both we and AI string these pieces together to create a train of ideas.
Neither dreams nor AI know where they’re going, but they keep going in the same direction. That box full of pieces is endless, representing all the data that minds and models collect. So world models hold almost infinite potential connections: to create dream and AI-generated imagery, and maybe to predict the future.
Is everything empty, and is this a bad thing? For more insights, read a longer version of this article here.


