Brains Are Probability Ultra-Approximators (we are damned good guessers)


Your brain might be disagreeing with the title of this article right now, depending on how recently you’ve visited Las Vegas. You would be right to think that there are many ways in which our brains can be tricked into making poor choices. As interesting as those tricks are, they aren’t half as impressive as all of the excellent predictions we make nearly instantaneously, each and every day.

For those of us who press snooze in the morning: have you ever made a blind stab at the button without ever opening your eyes? It turns out that your brain transmutes the mechanical signals from the alarm into neural signals through the cochlea in your inner ear, and then associative connections between your auditory perception pathways and your motor cortex detect the source of the sound relative to your body and coordinate your movements to turn it off. While this is a serious simplification of how your brain accomplishes this feat, our ability to execute this action shows that we can perform very complicated motor and proprioceptive (referring to our body’s position relative to itself and our environment) predictions that robotics labs have really struggled to recreate. But how is it that a group of neurons can make predictions at all?

Neurons are not as simple as you might think.

Your brain is made up of something like 100 billion neurons joined by roughly 1,000 trillion connections. That means that, on average, each little neuron has something like ten thousand other neurons talking to it. If you imagine that a neuron is something like a dance club without bouncers to turn people away, you can imagine that at different times of the week, or night, there might be varying numbers of people on the dance floor. On Salsa night at 11 o’clock you might not be able to see the dance floor at all because the club is bursting with activity. You might further imagine that with so many people in the club at one time it could get pretty tiresome and people would want to leave in order to relieve some of the crush inside. While this is far from a perfect analogy, you can see how a neuron might, similarly, use the number and “boisterousness” of incoming signals to determine when to relieve some pressure and pass on a signal of its own. That response signal, the action potential, is all-or-nothing: once it starts it never stops, and what varies is how often the neuron fires rather than how big each signal is. The effect that a single neuron has on its downstream connections is not always the same…some it may excite, others it may inhibit, and always in varying degrees. It turns out that the type and variation of this strength of connectivity is the chief mechanism allowing learning, and consequently, pattern recognition and prediction.
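For readers who like to tinker, the dance-club idea can be sketched as a toy threshold neuron in a few lines of Python. The inputs, weights, and threshold below are invented for illustration, not measured from biology:

```python
def toy_neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) when the weighted 'boisterousness' of the
    incoming signals exceeds a threshold; stay quiet otherwise."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return 1 if drive >= threshold else 0

# Excitatory connections have positive weights, inhibitory ones negative.
quiet_tuesday = toy_neuron([1, 0, 0], [0.4, 0.5, -0.6])  # drive 0.4 -> 0
salsa_night = toy_neuron([1, 1, 1], [0.4, 0.9, -0.2])    # drive 1.1 -> 1
```

Real neurons integrate inputs over time and in far richer ways, but the core idea survives: enough coordinated excitation, and the cell passes on a signal of its own.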

Neurons learn together

When I was nineteen years old I had the opportunity to spend a few years of my life learning and performing public service in the Kingdom of Thailand. Unfortunately, I didn’t spend much time relaxing on the beaches or getting massages; instead, I was tasked with teaching and serving the Thai people in the places where they lived. I learned Thai and learned their customs. This was incredibly difficult and I still wonder how they ever understood me. Part of building understanding between two individuals is developing knowledge of customs and culture. One interesting custom that the Thai people have is the Wai. Used as a greeting and an expression of respect and gratitude, it is performed by bringing both hands together, flattened, palm to palm, in front of yourself, and may be combined with a small bow of the head. While it may be difficult for Westerners to learn when and how it is most appropriate to perform the Wai, it is nearly as difficult for Thais to learn the Western handshake. Improperly performed Wais and handshakes, amazingly (and perhaps tragically), have a tremendous ability to create distrust between two people. Similarly, poorly formed and dysfunctional connections between neurons tend to fade away and eventually quit working altogether. However, just like a good handshake, one strong synapse can cause two neurons to strengthen their connection and grow in synchrony. It takes more than one person to make a good handshake, and it takes more than one neuron to complete a functionally meaningful circuit in the brain.
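In code, the handshake rule is a crude version of Hebbian learning: strengthen a connection when both sides participate, let it fade when they don’t. The learning rate and decay below are arbitrary made-up numbers, chosen only to make the behavior visible:

```python
def hebbian_update(weight, pre_active, post_active,
                   rate=0.1, decay=0.02):
    """Strengthen a synapse when both neurons are active together;
    otherwise let it slowly fade (the failed-handshake case)."""
    if pre_active and post_active:
        return weight + rate          # fire together, wire together
    return max(0.0, weight - decay)   # unused connections fade away

w = 0.5
for _ in range(3):                    # three good handshakes in a row
    w = hebbian_update(w, True, True)
print(round(w, 2))                    # -> 0.8
```

One-sided activity, like a one-sided handshake, drains the weight instead, and a weight that hits zero is a connection that has effectively quit working.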

Neurons make predictions by popular vote

My wife is a leader in a children’s Sunday School class each week and she spends quite a lot of time trying to think up ways to motivate the kids. I imagine she has tried many different kinds of treats, and while kids tend to like any treat, they always have their preferences. She has taken to buying assorted treats and just letting them pick for themselves when the time comes. Neurons have a way of preferring some inputs over others, just as the kids (and ourselves, I suppose) prefer one kind of treat over another. This preference grows out of the strengthening-handshake phenomenon that we discussed earlier. As neurons strengthen some connections and weaken others, they eventually respond strongly to only a few types of stimuli. In this way, they display their choice and telegraph to other cells downstream what type of input they are receiving. Imagine you conducted a test on a room full of kids: you hold up a kind of candy (say Skittles or M&M’s) and ask the children who prefer it to stand up so that you can take a headcount. In subsequent rounds the experimenter holds up candy without showing you which kind it is but asks the kids to keep standing up to express their preference; if you knew the headcount, or paid attention to which kids preferred which candy, then without much effort you could deduce which candy the experimenter was holding. Similarly, downstream groups of neurons may deduce from their input signals what triggered the activity in the first place…was something red presented to the eyes? Or something blue? Your brain is made up of collections of neurons that are constantly voting on inputs. By refining their synaptic connections (working on their handshakes) they reinforce or reform their predictions in order to achieve better and better outcomes.
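The candy headcount amounts to decoding a hidden stimulus from which units are active. Here is a toy sketch of that vote; the names and preferences are invented purely for illustration:

```python
from collections import Counter

# Which candy each "kid" (neuron) prefers -- invented for illustration.
preferences = {"ann": "skittles", "ben": "m&ms",
               "cal": "skittles", "dee": "skittles"}

def decode(standing_kids):
    """Guess the hidden stimulus by majority vote of the active units."""
    votes = Counter(preferences[kid] for kid in standing_kids)
    return votes.most_common(1)[0][0]

print(decode(["ann", "cal", "ben"]))  # -> skittles
```

Population decoding in real brains is far subtler, but the principle is the same: knowing each unit’s preference lets a downstream observer infer the input without ever seeing it directly.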

Brains are not democracies and neurons are not citizens

Neurons might be the stuff that makes us human but it would be a silly anthropomorphism for me to give you the false impression that neurons actually do much thinking on their own. I say they vote, pick their own candy, and go around shaking everybody’s hand as if they were at a town hall, but neural activity is subject to two things. You may call them nature and nurture, but there are two very real phenomena that influence how neurons determine their connections and how networks develop behavior.
Fifty-two-card pickup is a game that I had the opportunity to learn at a very young age…this is a game (or trick, rather) where one person sprays all the cards of a deck in all directions and then leaves the other player to pick them up. While a bit more orderly, neurons find their eventual places in the developing brain in a similar manner. Their exact location, orientation, and connectivity, while following some basic rules of architecture, are largely random. This neurodevelopmental variability adds up to what those who study biological systems call initial conditions. We each have nuanced initial conditions resulting in differences in the ways that our brains are wired. The randomness in the determination of the initial conditions of the brain is part of what causes each of us to be more or less likely to develop certain behaviors or obtain certain skills. These initial conditions are an underappreciated source of individuality in human development. Initial conditions may be a major component of the “nature” that makes us who we are.
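Modelers capture this card-toss randomness by seeding a network with random initial weights: the same wiring rules, run from two different random seeds, produce two different “brains.” A toy illustration (the wiring scheme here is invented, not a real developmental model):

```python
import random

def wire_network(seed, n_connections=5):
    """Follow the same wiring rule from different random starting
    points: each seed yields a different set of connection strengths."""
    rng = random.Random(seed)
    return [round(rng.uniform(0.0, 1.0), 2) for _ in range(n_connections)]

brain_a = wire_network(seed=1)
brain_b = wire_network(seed=2)
print(brain_a != brain_b)  # -> True: same rules, different initial conditions
```

The seed plays the role of developmental chance; everything downstream, including what each network finds easy or hard to learn, inherits that initial roll of the dice.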

The “nurture” component of brain development shouldn’t be much of a mystery at all: as we learn and make choices, we either reinforce or reform our initial conditions. Continual sculpting of neural networks by and through our sensory experiences and repeated behaviors leaves us with strong tendencies toward certain behaviors and preferences for particular stimuli.

In brief, our brains are full of complex functional units that, over time, develop increased or suppressed sensitivity to particular stimuli. When many of these functional units are strung together, many amazing phenomena emerge, including the ability to choose, predict, and classify stimuli. Whether you believe that we are just a sum of our parts or not, the reality is that brains are made up of parts that we are beginning to understand. So far, we understand that, in concert, neurons are powerful prediction machines.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on the modeling of hippocampal tissue in response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer’s, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.


The Brain is Not a Computer


Of all of the question-and-answer websites on the internet, I have become particularly fond of Quora. I can’t, however, go a day without being asked a question that compares the brain to computers. I find myself wanting to give the answers that people expect: “The brain has a memory capacity of approximately 2.5 petabytes,” “the brain computes at a speed of 20 petaflops,” “the brain uses less than 1/100,000th the power of the world’s most powerful computer per unit of processing power,” et cetera. The problem is that these answers are fundamentally wrong. As entertaining as it is to draw comparisons between two obvious computing machines, they compute by two unrecognizably different methods. Those differences make memory, processing power, and power consumption very difficult to compare in real quantities. The comparison is so tempting because of how easy it is to compare one model of a traditional computer to another…processor clock speed, bus capacity, RAM size, hard drive type and size, and graphics card are all easily compared features of traditional computers. But the brain is not a traditional computer (it turns out this isn’t as obvious as it seems). In fact, it nearly does the brain, and all those trying to understand it, a disservice to compare it to modern computers. It would be better to describe the brain as a collection of filters. In the context of a filter, it is much easier to understand, and perhaps even measure, brain memory, speed, and power consumption.

Why not a Computer?

I don’t intend to be misleading—we all know that our brains compute. I merely wish to help clear up common misconceptions that stem from our constant desire to draw comparisons between the brain and computers as we know them. Perhaps the problems are only semantic, but I believe that the confusion is much more fundamental than that and even learning some basics about the brain has failed, for most, to be illuminating. Let’s identify a few characteristics of computers that are irrelevant to the kind of computing we see in a human brain.

Computer memory is like a bucket.

Not only is it like a bucket, but it is located apart from the processor. In neural structures, memory is “stored” in the connections between cells: the network adds and deletes these stores by changing the strengths of these connections, or by adding or removing them altogether. This is a stark contrast to how computer memory works…in brief, some process contains pieces of information. These pieces, depending on their type, require a certain number of buckets of computer memory. The process tells the machine this number and shuttles the information there as a series of binary digits, which are written in a very particular place on a disk. When needed again, these bytes are read back into the machine. If our brain had to work like this it would mean we would constantly need to write down things we needed to remember…including how to move, control our muscles, see, hear, etc. This simply wouldn’t work quickly or efficiently enough for many of the things that we do with ease.

Computer processors are like an electric abacus.

Processors do very, very simple tasks very, very quickly. Perhaps a better analogy would be to compare processors to a horse-race. Processors compute simple tasks by sending them (horses) around a circuit. Because tasks can only move so quickly, companies like Intel, Qualcomm, and AMD have systematically found ways to both make many more tracks and shrink the tracks down until they are tremendously small (so the horses don’t have as far to run). While this is very fast, every time a computer needs to solve a problem (even the same problem) it must send the horses around the track again. Brains work in a very different manner. Imagine, instead of a horse-race, an engineer who is attempting to build a million very complicated devices from a set of blueprints. Rather than consulting the blueprints for each device, the engineer works very hard to understand the plans once and then constructs an efficient process (likely a factory) that doesn’t require pulling the plans apart every time a new device needs to be made. The brain does much the same: once it takes the time to learn how to do something, it doesn’t forget easily, and the task becomes much, much easier to execute; in contrast, processors have to be told how to do everything, every time.
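The contrast between re-running the horses and building a factory is loosely like the difference between recomputing an answer every time and caching a learned result. A rough Python analogy (this is not how brains actually store skills, just an illustration of the contrast):

```python
from functools import lru_cache

def recompute(n):
    """The processor's way: run the horses around the track
    every single time, even for a problem it has seen before."""
    return sum(i * i for i in range(n))

@lru_cache(maxsize=None)
def learned(n):
    """A brain-like shortcut: do the hard work once, then answer
    from the strengthened 'circuit' on every later call."""
    return sum(i * i for i in range(n))

print(recompute(1000) == learned(1000))  # -> True; a repeated call
                                         # to learned() skips the work
```

Calling `recompute(1000)` twice costs the full loop twice; calling `learned(1000)` twice costs it once, which is the factory-versus-racetrack distinction in miniature.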

Computers don’t do anything until they are told.

Even computer programs that run forever need to be executed, or started by a user. Your brain, however, is constantly buzzing with activity. Sensory information is constantly being pinged around in the brain and even when we shut off as much stimuli as possible (when we are asleep, for example) we see the same old areas light up in response to recombined or previously experienced stimuli.

Computers are not dynamic systems.

Perhaps this is just an extension of the idea that computers don’t do anything until they are told, but computers don’t adapt without being programmed to. Adaptation is not an inherent feature of computers. Brains, however, have ingrained and natural processes that allow them to adapt to stimuli over time. At the level of the neuron this is expressed in two related phenomena: spike-timing-dependent plasticity, whereby neurons tend to strengthen connections that cause them to fire action potentials and weaken those that don’t; and “neural sculpting,” whereby weakened connections may, over time, fall off altogether.
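For the curious, the timing rule behind spike-timing-dependent plasticity can be cartooned in a few lines of Python. The amplitudes and time constant below are arbitrary illustration values, not measured ones:

```python
import math

def stdp(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change as a function of spike timing (delta_t in ms).
    Pre fires before post (delta_t > 0): strengthen the connection.
    Pre fires after post (delta_t < 0): weaken it."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

print(stdp(5.0) > 0)    # -> True (pre-before-post strengthens)
print(stdp(-5.0) < 0)   # -> True (post-before-pre weakens)
```

The exponential decay captures the intuition that a connection only gets credit, or blame, when the two spikes are close together in time; spikes far apart barely move the weight at all.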

The Brain is a Collection of Filters

Picture a classic black-box diagram with three important components: an input signal (chicken parts), a system (a black box), and an output signal (a whole chicken). Inside of the black box (our system in question) there must be some process that assembles chickens from their respective parts. Perhaps interestingly, the box is typically portrayed as black because when we first encounter a system, we probably don’t immediately understand how it works (we haven’t yet shined a light inside). When we consider the brain, we don’t immediately understand how it works, but we have a pretty fundamental understanding of how neurons work, and so we have some bit of light to help us see inside the black box of the brain. Somewhat like the editors of a gossip magazine, neurons “listen” to the chattering of other neurons in the brain, and when one piece of information sounds particularly meaningful, a neuron presses publish on its own packet of information by firing an action potential. As nearly all good magazine editors do, they learn to determine what is good enough to publish and what is not. Neurons edit, over time, which inputs they prefer, and they tend to respond preferentially to them by becoming more sensitive; conversely, they become less sensitive to other inputs that are less important. In this way, neurons effectively filter out some signals and amplify others. Interestingly, neurons typically have many more connections with their neighbors than with neurons across town in other regions of the brain; this suggests that not only do neurons act as filters on their own, but they allow their neighbors to offer an opinion on when it is the right time to fire an action potential. If you think these network characteristics are sounding eerily like suburban America, you aren’t alone.
Neuroscientists have found many features of biological neural networks to be analogous to social networks…though this only brings home the reality that these unique features seem to exponentially increase the complexity of the functioning brain.
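For the tinkerers, the neuron-as-filter idea can be sketched in Python. The stimulus names and gain values here are invented for illustration:

```python
def tuned_filter(signal, preferred, gain=2.0, attenuation=0.25):
    """A neuron-as-filter: amplify the input it has grown sensitive
    to and attenuate everything else it hears."""
    return {name: strength * (gain if name == preferred else attenuation)
            for name, strength in signal.items()}

chatter = {"gossip": 1.0, "meaningful": 1.0}
print(tuned_filter(chatter, preferred="meaningful"))
# -> {'gossip': 0.25, 'meaningful': 2.0}
```

Chain many such filters together, each tuned by experience to a different preference, and you get something like the collection of filters described above: signals that matter are passed on louder, and the rest quietly fade.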
