A response to 'The Empty Brain' - The information processing brain

Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.

Criticisms of reductive, computational accounts of the brain are not new; in fact, they belong squarely within the set of common philosophical criticisms of neuroscience. Among these criticisms are the claims that we don’t reduce down to our neuronal parts and that “you are not your brain.” Arguments for these positions have historically ranged from positing non-physical spirits to arguing from an explicitly physicalist standpoint, and they have been inseparable from the development of psychology and neuroscience. With his recent, popular Aeon article, the decorated psychologist Robert Epstein has become the latest to stir up this contentious, interdisciplinary pot.

His claim is that the brain does not “contain”, and that the foundational assumption of cognitive science and psychology, a “contains + processes” thesis about the brain, is horribly mistaken. Epstein argues aggressively that your brain is not a computer - not in reality, and not even in metaphor.

But does he provide a single good reason why the brain as an information processor is a mistaken metaphor?


Reason number one: We have innate reflexes.

Epstein comes out swinging: his first argument purports to show how “vacuous” the information processing (IP) thesis is.
In short: babies enter the world prepared to interact with it effectively. But, babies are not born with “information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers.” Therefore, the brain doesn’t store or process anything.
Epstein provides plenty of evidence about the innate reflexes babies have, including the orientation babies have towards faces. But babies don’t have merely a specific orientation to faces; they have an orientation to three contrasting spots. Why couldn’t this reflex be explained by an innate heuristic for processing stimuli? And why can’t that heuristic be a computational process?
The existence of innate reflexes tells us nothing about how those reflexes are realized, computationally or otherwise. This demonstration of vacuousness simply assumes Epstein’s main point rather than supporting it.
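To make that concrete, here is a toy sketch - entirely my own illustration, not a claim about how infant vision actually works - of how an innate “orient to three contrasting spots” reflex could be realized as a trivially simple computational process:

```python
# A toy, hand-rolled heuristic (illustrative only): score how "face-like" a
# small grayscale patch is by looking for two dark spots over one dark spot,
# the three-contrasting-spots configuration newborns orient toward.
import numpy as np

def face_like_salience(patch: np.ndarray) -> float:
    """Return a salience score for an 8x8 grayscale patch (0 = dark, 1 = bright)."""
    top = patch[:4, :]                # upper half: expect two dark "eye" spots
    bottom = patch[4:, :]             # lower half: expect one dark "mouth" spot
    left_eye = top[:, :4].mean()
    right_eye = top[:, 4:].mean()
    mouth = bottom[:, 2:6].mean()
    background = patch.mean()
    # Spots darker than the background raise the score; the "reflex" is arithmetic.
    return (background - left_eye) + (background - right_eye) + (background - mouth)

# Orienting is then nothing more than turning toward the highest-scoring patch.
patches = {"face-like": np.ones((8, 8)), "blank": np.ones((8, 8))}
patches["face-like"][1:3, 1:3] = 0.2  # left "eye"
patches["face-like"][1:3, 5:7] = 0.2  # right "eye"
patches["face-like"][5:7, 3:5] = 0.2  # "mouth"
print(max(patches, key=lambda k: face_like_salience(patches[k])))  # -> face-like
```

Nothing about the sketch requires bytes, symbols, or stored images; a reflex implemented in wetware could be just as dumb and just as computational.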

 

Reason number two: Computers code information into bytes. We don’t.

A computer codes information into bytes. These byte patterns represent information. Computers then move the patterns from place to place. Therefore: “computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories.” But, concludes Epstein, we don’t do this. Therefore, the IP thesis is wrong.
Nowhere does the IP thesis assert that we represent the world in specific, byte-like patterns. ‘Information’ doesn’t just mean ‘things that are coded into bytes.’ An analog amplifier processes and manipulates information, but by definition doesn’t represent that information in bytes. The nature of information is an enormous question. At the very least, I'm comfortable saying that networks of neurons are just as capable as circuits - perhaps even more capable - when it comes to carrying information.  

 

Reason number three: The IP metaphor will eventually be replaced by a better metaphor or by ‘actual knowledge’.

The middle section of Epstein's article is a succinct and compelling overview of the different metaphors humans have used to think about the mind over the course of our recorded existence. These metaphors all have one thing in common: they are subject to the technology and cultural conceptions of their historical context, and they change when that context changes. Because all of these metaphors were far off base, we can expect that our current metaphors are also subject to change, and we shouldn't get too attached to them.
I don’t want to open a philosophy of science can of worms here, but I see no reason for us to be concerned that our metaphors grow and improve with our context and understanding. I agree that we should not allow the IP metaphor to blind us to other possible empirical explanations. 
But what exactly is "actual knowledge"? Are we to understand the brain only in terms of rushes of ions moving in and out of neurons that result in action potentials and cascades of synaptic transmission? Is this what actual knowledge looks like? Abstraction and metaphor will always be a critical part of science, and it shouldn’t count against the IP thesis that it is only "the best we have so far."

 

Reason number four: The IP thesis is based on bad logic.

According to Epstein, the faulty logic of the IP metaphor is easy enough to state: it is based on a faulty syllogism, one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.

What?! None of my colleagues would dare to claim that ‘all computers are capable of behaving intelligently.’ It's much more natural to say that the motivation for the IP thesis comes from our ability to model the behavior we've seen from neurons - all the way down to the atomic level.
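To be fair, the inference as Epstein states it really is invalid; in rough first-order terms (my own gloss, not Epstein's notation), it has the form:

```latex
% P1: every computer is capable of behaving intelligently
% P2: every computer is an information processor
% C:  every entity capable of behaving intelligently is an information processor
\forall x\,(\mathrm{Comp}(x) \rightarrow \mathrm{Intel}(x)),\quad
\forall x\,(\mathrm{Comp}(x) \rightarrow \mathrm{IP}(x))
\;\nvdash\;
\forall x\,(\mathrm{Intel}(x) \rightarrow \mathrm{IP}(x))
% Counter-model: a single intelligent thing that is neither a computer nor an
% information processor satisfies both premises (vacuously) and falsifies C.
```

But the invalidity of this syllogism only matters if someone actually argues this way, and as far as I can tell, no one does.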

 

Reason number five: We can draw a dollar bill better when we have one as a template to work off of.

When we're asked to draw a dollar bill from memory - an object we Americans have seen thousands of times - we do a pretty poor job compared to when we have one to copy off of. What does this failure to reconstruct show? According to Epstein:
“We don’t have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains... and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.”

First of all, why would the IP thesis have to hold that any representation is perfect? It is perfectly coherent to hold that our representations of the world are biased, inaccurate, and subject to decay.
Even more concerning is the claim that the IP thesis implies the precise localization of individual representations. Why would the IP thesis be committed to this empirically implausible view? The next reason provided is of a similar form:

 

Reason number six: Cognitive functions are realized by spatially distributed systems in the brain.

Doing anything with the brain - remembering, emoting, reflecting - always recruits a large and diverse set of brain regions. But what bearing does this have on the thesis of information processing? Why can’t information processing happen in virtue of a lot of different distributed components? Put another way, what about the IP hypothesis commits it to a simplistic and specific view of functional localization?
“The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous.”

No scientist advances this, in the same way that no computer scientist advances the claim that an individual JPEG is stored in one particular circuit.
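To see how storage without a dedicated location might work, here is a minimal sketch of a textbook Hopfield-style associative memory - my own illustration, not anything Epstein discusses - in which a stored pattern is recoverable from a degraded cue even though no single unit or connection “contains” it:

```python
# A tiny Hopfield-style associative memory: the stored pattern lives in the
# whole weight matrix, not in any single unit or weight.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the "memory" to store
W = np.outer(pattern, pattern).astype(float)       # Hebbian storage, spread across all weights
np.fill_diagonal(W, 0)                             # no self-connections

cue = pattern.copy()
cue[:3] *= -1                                      # corrupt part of the cue

state = cue.astype(float)
for _ in range(5):                                 # let the network settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))              # True: recalled, yet stored nowhere in particular
```

The memory here is a property of the whole weight matrix rather than of any one component, which is exactly the kind of distributed storage the IP thesis is happy to accommodate.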

 

Reason number seven: It is more accurate to say that the brain ‘changes’ rather than the brain ‘stores’.

If the IP thesis is wrong, what alternative does Epstein offer? His alternative model:
“(1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways… the brain has simply changed in an orderly way.”

A proponent of the IP thesis could just as well agree with this abstract picture of learning and brain development. To ‘store’ and to ‘process’ already imply a physical, neuronal change. The kind of mechanism that gives rise to that change - action-oriented perception, reward and punishment, association - makes no difference to the underlying function of neurons, which, according to the IP thesis, is to process information.

 

Reason number eight: There have been no interesting findings from the IP thesis.

Epstein concludes with a shocking and indefensible claim:
“The IP metaphor has had a half-century run, producing few, if any, insights along the way.”

This is, frankly, nonsense. If cognitive neuroscience has been a slave to the ‘useless’ IP metaphor for so long, how do we explain the massive leaps and advances in neuroscience and psychology over this period? And what should we call the myriad clinical applications developed from that work, if not insights? I cannot call the enormous amount of human suffering alleviated by the direct clinical application of the IP metaphor anything but a triumph of human progress. If that doesn’t count as an insight, what does?


Given how vehemently I’ve argued in defense of the IP thesis, it may seem surprising that I have my own disagreements with representational views of the mind. At the end of the day, I agree with Epstein that we need a more action-oriented and holistic view of our own brains, but that is no reason to deny that the brain is an information processor. At most, it gets us an anti-representationalist picture.
Despite my own convictions, none of the arguments offered in ‘The Empty Brain’ are actually problems for the defender of the IP thesis. The day may come when the IP thesis is replaced with a more accurate metaphor, or when some new understanding of the mind, unimaginable now, precludes it entirely - but that day is not today.

Don’t push DELETE just yet.

Scientists need charity too

The principle of charity is perhaps one of the most important tenets an intellectually respectable philosopher can adhere to. In short: do your best to address only the strongest version of any argument. Except for the occasional casual, off-the-record statement, this courtesy is universally extended by professional philosophers to other philosophers, both in informal discussions and as an explicit necessary condition for publication. Troublingly, I believe this good intellectual practice is often not extended to scientists, particularly at the intersection of neuroscience and philosophy of mind.

Philosophers constantly bemoan the over-application of neuroscience and the unexamined assumptions that often underlie empirical work. I would argue that philosophers are often equally guilty of ignorance in the opposite direction - cough, C-fibers, cough - but I won’t address this point here. Instead, my issue is with the uncharitable and lazy criticisms of neuroscience that I hear springing up among my colleagues, at guest lectures, and on the internet.

An example: certain words - like “causality” - tend to ignite a visceral reaction in philosophers when scientists use them. I’ve seen philosophers criticize scientists for calling interventional neuroscientific methods (lesion studies, TMS, etc.) “useful for establishing causality.” These criticisms range from pedantic head-shaking about the assumed causal relationships between mind and brain all the way to charges of ignorance about the nature of causation and the limits of inductive science.

Scientists are using these terms in a specific context - namely, that an interventional method is better for establishing causality than a merely correlational one. There is no grand claim here about the relationship between brain states and mental states, let alone about how causation works in science generally.

My complaint is not about published works; these misunderstandings happen over drinks and at conferences. Scientists are exposed to so many confused and seemingly uncharitable criticisms from philosophers that by the time they have an honest chance to do interdisciplinary work, they have no desire to. Put yourself in their shoes: why would you want to work with an academic tradition that, in your experience, has only offered you ignorant and mistaken criticisms?

When philosophers enter discourses outside their area of interest, there is almost always an effort to pause, figure out motivations, and take stock of exactly how language is used in the current conversation before jumping in with a contribution. This is good scholarship. A common criticism of neuroscience is that its terms are not as carefully demarcated and deployed as they are in philosophy - and yet I hear this criticism from people who refuse to learn exactly how the language is evolving and being used in neuroscience. Of course terms seem sloppy if no effort is made to learn their usage!

Philosophy has much to offer science, but it needs to offer it in fair and informed criticisms. Next time you’re reading an empirical paper and you have a gut “wrongness” reaction, try pretending it’s a philosophy paper in a field you’re not familiar with, and see if that changes what courtesy you’re willing to extend to it.

If we want to solve the real and present problem of the lack of philosophy in science, we need to convince scientists that we understand what, why, and how they’re doing what they’re doing.

How to make it in an Interdisciplinary World

Recently, I had the wonderful opportunity to attend an intimate symposium where talks were given by some rather heavy hitters in the worlds of philosophy of mind, cognitive science, and neuroscience. As an MA student, I took every chance at coffee breaks and meals to collect advice (and sometimes admonishment) from these well-situated academics.

These comments might be right, they might be wrong, they might even be out of context - but at the very least I hope these ten pieces of advice are interesting.


"Don't fall into the trap of only trying to score a hit on whatever inconsequential argument in whatever inconsequential debate. There's still a place for grand philosophy in this world, and taking pot shots does not teach you how to do it."

"Go help out in a lab."

"The two questions you should ask yourself when you take empirical work and apply it to philosophy, or vice-versa, is 'How does this advance the work being done here?' and 'Why does it have to happen across disciplines?' If you can't answer either of those questions convincingly, it doesn't matter how 'correct' your project is - it won't matter."

"The job market is horribly grim [in philosophy]. You probably won't get a job."

"Never drink too much at a dinner with a guest lecturer."

"Don't bother with interdisciplinary work right now [as a MA student]. Get really good at something - get useful, and then people will want to do interdisciplinary work with you. What does a half-philosopher half-scientist have to offer anyone?"

"Learn statistics."

"Keep doing exactly what you are doing now - going to these things and talking to people."

"Never take a single empirical paper and try to apply it to your armchair work. There's probably a million other papers in the neighborhood with something relevant to say, and you might just be reading the paper wrong."

"Don't forget what you love about what you do, and if you don't love it, get out of it."

Disentangling Belief and Perception: Predictive Processing

We don’t “see” everything in front of our open eyes, of course: what we attend to modulates what we notice, and sometimes we simply miss things in plain sight. Searching for someone in a crowd isn’t an automatic, given process; it takes a while, and we have to direct our attention through our visual scene and process it bit by bit. Classic models of vision capture this intuition and run with it: light is taken in by the eyes and represented in the brain, and then we pick out the important bits of our representation of the world to really take in. Like most simple stories in neuroscience, it turns out that things are actually a lot messier.

Today, our models of vision are much more complicated. The way we construct our visual experience from the projections of light that play on our retinas engages a series of specialized, hierarchical processing stages; a bag of tricks comprising 30% of the brain’s cortical space somehow pulls off the miracle of seeing.

Though our models have changed, the basic assumptions of the account remain the same. Our brains are passive, soaking up and processing the signals flowing in through our senses. The models all tell a bottom-up story: we start with light, and build our way up to conscious experience.

Can we tell a complete story about perception with a bottom-up account? If we can, it’s going to be a complicated one. Top-down influences - input from the later stages of processing in our analysis hierarchy - enter the picture quite early in our explanation from the bottom up. Emotional states, associations, memories, conditioning, expectations - and especially our attention, as previously noted - all feed back down into pre-conscious visual processing centers, biasing, subjectifying, and generally personalizing our visual experience within milliseconds of our retinas starting to pass on information.

Perception is an ongoing process. To tell a story beginning with light shining on a retina and neglecting the 86 billion neurons sitting behind it is always to tell an incomplete one. Critical influences on how that light is processed begin long before it gets to the eyes – telling a top-down story of visual perception alongside the bottom-up tale is necessary if we really want to understand what’s going on when we perceive the world. And, if the advocates of predictive processing are right, our bottom-up processes are completely dependent on what’s flowing down from the top of the hierarchy.

This is a marked departure from traditional conceptions of perception. The idea of predictive processing is simple but radical: our brain uses what it already knows about the world to guess what will be perceived next. In opposition to traditional models, the brain is not a passive absorber of sense data. Our conscious understanding of the world feeds back down, through unconscious states, into low-level sensory processors in order to guess what we’ll experience next. Any wrong prediction generates an error signal that prompts the brain to fine-tune its model. Our brain is engaged in a constant process of prediction and correction - and the result is access to a world that is structured and sensible.
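To make the loop concrete, here is a deliberately minimal sketch - my own toy illustration, not any published predictive-coding model - of the predict-compare-correct cycle just described:

```python
# A bare-bones caricature of prediction-error minimization: the system tracks a
# noisy signal by repeatedly predicting it and nudging its model on each error.
def perceive(signal_stream, prior_estimate=0.0, learning_rate=0.2):
    """Return the running percepts produced by a predict-and-correct loop."""
    estimate = prior_estimate
    percepts = []
    for observation in signal_stream:
        prediction = estimate                          # top-down: guess what comes next
        error = observation - prediction               # bottom-up: only the surprise matters
        estimate = prediction + learning_rate * error  # fine-tune the internal model
        percepts.append(estimate)                      # the "percept" is the updated guess
    return percepts

# A steady world quickly stops generating error; a sudden change produces a burst
# of error that drags the percept toward the new reality.
stream = [1.0] * 10 + [5.0] * 10
print([round(p, 2) for p in perceive(stream)])
```

Real predictive-processing models replace the single number with a hierarchy of richer models and weight each error by how much the incoming signal is trusted, but the predict-and-correct skeleton is the same.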

Believing and perceiving, although conceptually distinct, emerge as deeply mechanically intertwined. They are constructed using the same computational resources, and are mutually, reciprocally, entrenching.
— Andy Clark, for Edge.org

With this model we can make sense of why we see a snake when there is only a stick, why the McGurk effect happens, and why we hear "White Christmas" in white noise. But more than explanations of clever illusions and hallucinations turns on this: perception and belief become tied up in each other. Some psychiatric disorders find new routes to explanation and, possibly, treatment. We can make sense of the data showing that people with social anxiety look at faces differently. The relationship between hallucination and false belief in schizophrenia becomes clearer. And more unremarkable perceptual phenomena get a new spin as well: confirmation bias doesn’t just change what we notice; the percepts themselves change. To change beliefs is to change perception.

Skeptical? Try it out yourself. Listen to this a few times:

Now, click here, read the text, and listen again.

Freaky, right? When you know what you’ll hear, the noise is turned into a signal. Your prediction maps onto the sound, and you hear.

At the bottom of this post you can find articles that summarize the emerging evidence for this model, if you’re interested in diving into the specifics. But I think the questions that come after the specifics are the interesting ones.

What are the implications of belief and perception being inextricably entangled? Making sense of illusions, informing mental health and explaining biases is nice, but there’s a much more dramatic story to tell here. Predictive processing explains why we are the people we are. What makes a scientist different from a philosopher, or an artist? How do we make sense of others? Why do we find meaning in so many things? How do the narratives we tell ourselves about our lives dictate what we see in our world and in the ones we love?

When we allow ourselves a way of perceiving the world that directly calls forth our understandings of the world, our reflection about the world becomes all the more important. Maybe there’s more to see in everyday life. Maybe spending the time and mental energy to learn a discipline fundamentally different from yours not only gives you a new way of understanding the world, but actually, permanently and irrevocably, changes what you perceive in the world. Maybe there’s a lot of beauty that we miss because we don’t expect to see it. Maybe artists are in the business of showing us what we assume we see and giving us the opportunity to see it a different way. Maybe the reason people with different worldviews get into arguments is that their different understandings of the world are actually different perceptions of the world entirely.

Maybe.

For now, the jury is still out. All I can do at this point is tease at some of these big questions. How far can we push predictive processing and what it explains? What evidence do we need to find in order to lend support to the theory? How do we operationalize these seemingly unwieldy concepts? Is predictive processing even falsifiable? These questions will fuel quite a few productive careers in cognitive science and philosophy of mind over the next decade. I, for one, am excited to be a part of the search for answers.