We don’t “see” everything in front of our open eyes. What we attend to modulates what we notice, and sometimes we simply miss things in plain sight. Searching for someone in a crowd isn’t an automatic, given process: it takes a while, and we have to direct our attention through the visual scene, processing it bit by bit. Classic models of vision capture this intuition and run with it: light is taken in by the eyes and represented in the brain, and then we pick out the important bits of that representation to really take in. Like most simple stories in neuroscience, it turns out that things are actually a lot messier.
Today, our models of vision are much more complicated. Constructing our visual experience from the projections of light that play on our retinas engages a series of specialized, hierarchical processing stages; a bag of tricks occupying roughly 30% of the brain’s cortical space somehow pulls off the miracle of seeing.
Though our models have changed, a basic assumption remains the same: our brains are passive, soaking up and processing the signals flowing in through our senses. The models all tell a bottom-up story: we start with light and build our way up to conscious experience.
Can we tell a complete story about perception with a bottom-up account? If we can, it’s going to be a complicated one. Top-down influences – input from the later stages of the processing hierarchy – enter the picture quite early in any explanation from the bottom up. Emotional states, associations, memories, conditioning, expectations, and especially our attention all feed back down into pre-conscious visual processing centers, biasing, subjectifying, and generally making very personal our visual experience within milliseconds of the retina starting to pass on information.
Perception is an ongoing process. To tell a story that begins with light shining on a retina while neglecting the 86 billion neurons sitting behind it is to tell an incomplete one. Critical influences on how that light will be processed begin long before it reaches the eyes – if we really want to understand what’s going on when we perceive the world, we have to tell a top-down story of visual perception alongside the bottom-up tale. And, if the advocates of predictive processing are right, our bottom-up processes are completely dependent on what’s flowing down from the top of the hierarchy.
This is a marked departure from traditional conceptions of perception. The idea of predictive processing is simple but radical: our brain uses what it already knows about the world to guess what it will perceive next. Contrary to traditional models, the brain is not a passive absorber of sense data. Our conscious understanding of the world feeds back down through unconscious states into the early sensory processors, shaping a guess about what we’ll experience next. Wrong predictions generate an error signal that prompts the brain to fine-tune its predictive model. The brain is engaged in a constant cycle of prediction and correction – the result is access to the world that is structured and sensible.
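The prediction-and-correction cycle lends itself to a toy computational sketch. The snippet below is purely illustrative and not a model from the predictive processing literature: a single hypothetical “layer” holds a running prediction, and each incoming sensory sample’s prediction error nudges that prediction toward what the world actually delivered. The function name, learning rate, and numbers are all arbitrary assumptions made for the example.

```python
# Toy sketch of predictive processing: one "layer" predicts the next sensory
# input, and prediction errors nudge the internal model toward whatever the
# world actually delivers.

def perceive(signals, learning_rate=0.3):
    """Track a stream of sensory values by prediction and error correction."""
    prediction = 0.0  # the brain's initial guess about the world
    history = []
    for actual in signals:
        error = actual - prediction          # bottom-up error signal
        prediction += learning_rate * error  # top-down model update
        history.append(prediction)
    return history

# A steady "stimulus" of 1.0: early guesses are badly wrong, but repeated
# error correction converges on the true signal.
trace = perceive([1.0] * 20)
print(trace[0])   # first guess after one correction: 0.3
print(trace[-1])  # after 20 corrections: close to 1.0
```

Even this caricature shows the key property: the system never sees the stimulus “raw” – what it reports at every step is its own prediction, continually disciplined by error signals from the senses.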
This model makes sense of why we see a snake when there is only a stick, why the McGurk effect happens, and why we hear "White Christmas" in white noise. But more than explanations for clever illusions and hallucinations turns on this: perception and belief become tied up in each other. Some psychiatric conditions find a new way to be explained, and possibly treated. We can make sense of the data showing that people with social anxiety look at faces differently. The relationship between hallucination and false belief in schizophrenia becomes clearer. More unremarkable perceptual phenomena get a new spin as well: confirmation bias doesn’t just change what we notice; the perceptions themselves change. To change beliefs is to change perception.
Skeptical? Try it out yourself. Listen to this a few times:
Freaky, right? When you know what you’ll hear, the noise turns into a signal. Your prediction maps onto the sound, and you hear it.
At the bottom of this post you can find articles that summarize the emerging evidence for this model, if you’re interested in diving into the specifics. But I think the questions that come after the specifics are the interesting ones.
What are the implications of belief and perception being inextricably entangled? Making sense of illusions, informing mental health, and explaining biases is nice, but there’s a much more dramatic story to tell here. Predictive processing explains why we are the people we are. What makes a scientist different from a philosopher, or an artist? How do we make sense of others? Why do we find meaning in so many things? How do the narratives we tell ourselves about our lives dictate what we see in our world and in the ones we love?
When we allow ourselves a way of perceiving the world that directly calls on our understanding of the world, our reflection about the world becomes all the more important. Maybe there’s more to see in everyday life. Maybe spending the time and mental energy to learn a discipline fundamentally different from yours not only gives you a new way of understanding the world, but permanently changes what you perceive in it. Maybe there’s a lot of beauty we miss because we don’t expect to see it. Maybe artists are in the business of showing us what we assume we see and giving us the opportunity to see it a different way. Maybe the reason people with different worldviews get into arguments is that their different understandings of the world are actually different perceptions of the world entirely.
For now, the jury is still out. All I can do at this point is tease at some of these big questions. How far can we push predictive processing and what it explains? What evidence would lend support to the theory? How do we operationalize these seemingly unwieldy concepts? Is predictive processing even falsifiable? These questions will fuel quite a few productive careers in cognitive science and philosophy of mind over the next decade. I, for one, am excited to be a part of the search for answers.
Most of the above is from Andy Clark's wonderful work making this model of perception accessible.
For Edge.org, "What scientific concept would improve everybody's cognitive toolkit?"
For the Moscow Center for Conscious Studies (Video)
His new book, Surfing Uncertainty: Prediction, Action, and the Embodied Mind.
Whatever next? Predictive brains, situated agents, and the future of cognitive science.