The Brain is Not a Computer

[Image: a black box system with chicken parts as the input signal and a whole chicken as the output signal]

Of all the question-and-answer websites on the internet, I have become particularly fond of Quora. I can’t, however, go a day without being asked a question that compares the brain to computers. I find myself wanting to give the answers people expect…"The brain has a memory capacity of approximately 2.5 petabytes," "The brain computes at a speed of 20 petaflops," "The brain uses less than 1/100,000th the power of the world’s most powerful computer per unit of processing power," et cetera. The problem is that these answers are fundamentally wrong. As entertaining as it is to draw comparisons between two obvious computing machines, they compute by two unrecognizably different methods, and those differences make memory, processing power, and power consumption very difficult to compare in real quantities. The comparison is so tempting because of how easy it is to compare one model of a traditional computer to another…processor clock speed, bus capacity, RAM size, hard drive type and size, and graphics card are all easily compared features of traditional computers. But the brain is not a traditional computer (it turns out this isn’t as obvious as it seems). In fact, it does the brain, and all those trying to understand it, a disservice to compare it to modern computers. It would be better to describe the brain as a collection of filters. In the context of a filter, it is much easier to understand, and perhaps even measure, brain memory, speed, and power consumption.

Why not a Computer?

I don’t intend to be misleading; we all know that our brains compute. I merely wish to help clear up common misconceptions that stem from our constant desire to draw comparisons between the brain and computers as we know them. Perhaps the problems are only semantic, but I believe the confusion is more fundamental than that, and even learning some basics about the brain has, for most people, failed to be illuminating. Let’s identify a few characteristics of computers that are irrelevant to the kind of computing we see in a human brain.

Computer memory is like a bucket.

Not only is computer memory like a bucket, it is also located apart from the processor. In neural structures, memory is "stored" in the connections between cells; the network adds and deletes these stores by changing the strengths of those connections, or by adding or removing them altogether. This is a stark contrast to how computer memory works. In brief: a process contains pieces of information, and these pieces, depending on their type, require a certain number of buckets of memory. The process tells the machine this number and shuttles the information there as a series of binary digits, which are written in a very particular place on a disk. When needed again, those bytes are read back into the machine. If our brains had to work like this, we would constantly need to write down everything we needed to remember…including how to move, control our muscles, see, and hear. That simply wouldn’t work quickly or efficiently enough for many of the things we do with ease.
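
The contrast can be sketched in a few lines of toy code (all the names here are my own illustration, not from any real system):

```python
# Computer-style memory: ask for a slot, write the bytes, read them back.
buckets = {}

def store(address, value):
    buckets[address] = value      # written to one particular place

def recall(address):
    return buckets[address]       # read back from that same place

store(0x10, "how to ride a bike")

# Neural-style memory: nothing is written to an address; the connection
# strengths themselves change, so "recall" is just running the network.
weights = [0.5, 0.4]

def learn(inputs, rate=0.5):
    # strengthen each connection in proportion to its input (Hebbian-style)
    return [w + rate * x for w, x in zip(weights, inputs)]

weights = learn([1.0, 0.0])       # the memory now lives in the weights
```

In the first style the memory is a thing sitting in a location; in the second it is a change to the machinery itself.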

Computer processors are like an electric abacus.

Processors do very, very simple tasks very, very quickly. Perhaps a better analogy is a horse race: processors compute simple tasks by sending them (the horses) around a circuit. Because tasks can only move so quickly, companies like Intel, Qualcomm, and AMD have systematically found ways both to lay many more tracks and to shrink the tracks down until they are tremendously small (so the horses don’t have as far to run). While this is very fast, every time a computer needs to solve a problem (even the same problem), it must send the horses around the track again. Brains work in a very different manner. Imagine, instead of a horse race, an engineer who is attempting to build a million very complicated devices from a set of blueprints. Like that engineer, the brain works very hard to understand the plans but then constructs an efficient process (a factory, in effect) that doesn’t require pulling apart the plans every time a new device needs to be made. Once the brain takes the time to learn how to do something, it doesn’t forget easily, and the task becomes much, much easier to execute; in contrast, processors have to be told how to do everything, every time.
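
A sketch of that contrast (the function names are illustrative): a processor-style routine redoes all of its work on every call, while the "factory" version does the hard part once and reuses the result.

```python
def around_the_track(n):
    # horse-race style: run the full circuit every single time
    total = 0
    for i in range(n + 1):
        total += i
    return total

factory = {}  # the "efficient process" built after studying the blueprints

def learned(n):
    if n not in factory:                # only study the plans once...
        factory[n] = around_the_track(n)
    return factory[n]                   # ...then reuse the built solution

learned(1000)   # the first call does the work
learned(1000)   # repeat calls skip the track entirely
```

This is just memoization, of course, but it captures the point: learning once and reusing beats recomputing from scratch.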

Computers don’t do anything until they are told.

Even computer programs that run forever need to be executed, or started, by a user. Your brain, however, is constantly buzzing with activity. Sensory information is constantly pinged around the brain, and even when we shut out as much stimulation as possible (when we are asleep, for example), we see the same old areas light up in response to recombined or previously experienced stimuli.

Computers are not dynamic systems.

Perhaps this is just an extension of the idea that computers don’t do anything until they are told, but computers don’t adapt unless they are programmed to; adaptation is not an inherent feature of computers. Brains, however, have ingrained, natural processes that allow them to adapt to stimuli over time. At the level of the neuron this is expressed in two related phenomena: neurons tend to strengthen connections that cause them to fire action potentials and weaken those that don’t, which is called spike-timing-dependent plasticity, and over time weakened connections may fall off altogether, a phenomenon described as "neural sculpting".
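
A minimal caricature of that rule (the timings, constants, and connection names below are made up purely for illustration): inputs that arrive just before a spike are strengthened, inputs that arrive just after are weakened, and connections that wither away are pruned.

```python
def stdp_update(weight, dt, rate=0.1):
    # dt = t_post - t_pre: positive means this input helped cause the spike
    if dt > 0:
        return weight + rate   # potentiate
    return weight - rate       # depress

weights = {"helpful": 0.5, "unhelpful": 0.15}
weights["helpful"] = stdp_update(weights["helpful"], dt=+5)
weights["unhelpful"] = stdp_update(weights["unhelpful"], dt=-5)

# "neural sculpting": connections that have withered fall off altogether
weights = {name: w for name, w in weights.items() if w > 0.1}
```

Real STDP uses exponentially decaying windows rather than a flat step, but the direction of the update is the same.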

The Brain is a Collection of Filters

By now you may still be wondering what the picture above has to do with all of this. There are three important components to that image: an input signal (chicken parts), a system (the black box), and an output signal (a whole chicken). Inside the black box, our system in question, there must be some process that assembles chickens from their respective parts. Interestingly, the box is typically portrayed as black because when we first encounter a system we probably don’t immediately understand how it works (we haven’t yet shined a light inside). When we consider the brain, we don’t immediately understand how it works either, but we do have a fairly fundamental understanding of how neurons work, and so we have some bit of light to help us see inside the black box of the brain. Somewhat like the editors of a gossip magazine, neurons "listen" to the chattering of other neurons in the brain, and when one piece of information sounds particularly meaningful, a neuron presses publish on its own packet of information by firing an action potential. As nearly all good magazine editors do, they learn to determine what is good enough to publish and what is not. Over time, neurons edit which inputs they prefer and tend to respond preferentially to them by becoming more sensitive; conversely, they become less sensitive to other inputs that are less important. In this way, neurons effectively filter out some signals and amplify others. Interestingly, neurons typically have many more connections with their neighbors than with neurons across town in other regions of the brain; this suggests that not only do neurons act as filters on their own, they also allow their neighbors to offer an opinion on when it is the right time to fire an action potential. If you think these network characteristics sound eerily like suburban America, you aren’t alone.
Neuroscientists have found many features of biological neural networks to be analogous to social networks…though this only brings home the reality that these unique features seem to exponentially increase the complexity of the functioning brain.
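
The neuron-as-editor idea fits in a few lines (a toy sketch of my own, not a biophysical model): the neuron weighs the "chatter" on its inputs and only "publishes" (fires) when the weighted sum crosses a threshold, and tuning a weight up or down changes its sensitivity to that input.

```python
def fires(inputs, weights, threshold=1.0):
    # weigh the incoming chatter and decide whether to "press publish"
    drive = sum(x * w for x, w in zip(inputs, weights))
    return drive >= threshold

weights = [0.9, 0.1, 0.1]           # sensitive to input 0, nearly deaf to the rest
quiet = fires([1, 0, 0], weights)   # 0.9 < 1.0: the signal is filtered out

weights[0] += 0.2                   # become more sensitive to a meaningful input
loud = fires([1, 0, 0], weights)    # 1.1 >= 1.0: the same signal is now passed along
```

The same input is rejected before the sensitivity change and amplified through after it, which is the filtering behavior described above.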

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently the emphasis is on modeling hippocampal tissue in response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies, including Alzheimer’s, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.


What is Neuromorphic Computing?

IBM’s Synapse “neuromorphic” processor chip embedded in a computer board.

It might be worthwhile to take a moment and inspect your current understanding of how computers work…probably some combination of zeros and ones and a series of electrical components to move those binaries back and forth. You would be surprised how far down the path of building a modern computer (at least on paper) you could get with such a rudimentary understanding. With an electrical engineer in tow, you could probably even build a primitive mainframe computer (the ticker-tape or punch-card variety). Would it shock you if I said that nearly all of the advances since then have been due to materials and manufacturing improvements? Software has made interacting with this hardware much more comfortable, and computers have gotten incredibly powerful in the past few decades, but the underlying architecture has roughly maintained that “ticker-tape” mode of information transfer. This is convenient for lots of reasons…the functionality of a program relies entirely on the quality of the code (read: “ticker-tape”) that gives it instructions. In some ways, all of that is about to change.

Neuromorphic computing is pretty close to what it sounds like…brain-like computing. Many of the core components of the brain can be implemented in analog hardware: resistors and capacitors in parallel and in series become neurons, standing in for their cell bodies, dendrites, and axons. When these analog neurons are connected together into a network (each connection acting like a synapse), they take on many of the same processing properties that the neurons in our brains do. When researchers figured out how to make the capacitance variable (a primer on capacitance can be found here), they also figured out how to make the analog neurons “learn”; this mimics the natural changes in the strength of connections between neurons in a brain.
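
A back-of-the-envelope version of that resistor-and-capacitor idea (component values are arbitrary illustrations, and this is a simulation, not real analog hardware): an RC pair behaves like a leaky neuron membrane that charges up under input current and "fires" when it crosses a threshold.

```python
def simulate_rc(current, R=1.0, C=1.0, dt=0.01, steps=500, threshold=0.8):
    """Leaky integrate-and-fire membrane: dV/dt = (I*R - V) / (R*C)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (current * R - v) / (R * C)
        if v >= threshold:   # the analog neuron "fires"...
            spikes += 1
            v = 0.0          # ...and the capacitor discharges (reset)
    return spikes

simulate_rc(current=1.0)   # charges toward 1.0 V and spikes repeatedly
simulate_rc(current=0.5)   # settles below threshold and never fires
```

Changing R or C changes how quickly the neuron charges, which is exactly the knob that lets such circuits “learn.”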

Now that you understand what it is, you might ask, “Why do we want brain-like computers?”

Traditional Computers Suck at Optimization

Have you ever heard of the “traveling salesman problem”? It goes something like this…You show up in a new town with a bunch of widgets to sell, so you go to the local chamber of commerce and ask for a list of businesses that might be interested in purchasing some widgets. They give you names and addresses for ten businesses as well as a town map. You obviously don’t want to take too long making these sales calls, or you might not make it to the next town before dark. So you sit down to figure out what order you should visit these ten businesses and what path you should take through town so that you spend the least amount of time traveling. Believe it or not, your brain is usually faster at coming up with a pretty good solution to these types of problems than a computer is. The challenge of teaching traditional computers to solve “traveling salesman” problems has helped create a whole field of research called optimization. (More about traveling salesman problems here.)
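
The "pretty good, fast" kind of answer described above can be sketched with a nearest-neighbour heuristic (a standard shortcut, not an exact solver; the business names and coordinates are made up): from each stop, simply walk to the closest business you haven't visited yet.

```python
from math import dist

# made-up town map: business name -> (x, y) address
businesses = {"A": (0, 0), "B": (1, 0), "C": (5, 5), "D": (1, 1)}

def sales_route(start="A"):
    route, todo = [start], set(businesses) - {start}
    while todo:
        here = businesses[route[-1]]
        # greedily pick the nearest unvisited business
        nxt = min(todo, key=lambda name: dist(here, businesses[name]))
        route.append(nxt)
        todo.remove(nxt)
    return route

print(sales_route())  # -> ['A', 'B', 'D', 'C']
```

Greedy routes like this are not guaranteed optimal, which is precisely why optimization became a research field of its own.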

Brains Rock at Pattern Recognition, Vision, and Object Recognition

You didn’t need any help recognizing this natural pattern as a giraffe. A traditional computer would likely be stumped.

There isn’t a day that passes without your brain having to recognize new objects for what they are. You probably saw your first cat fairly early in life…did you ever stop to wonder how you learned to recognize your second cat encounter as a version of the first? You may think it is an algorithmic solution: four legs, a tail, and fur with whiskers, and you have a cat? That is how traditional computers have been programmed to identify cats, and for the most part they perform dismally. Humans are so good at identifying cats that we often outperform the best computer algorithms even when we are shown only part of the animal we are to identify. It isn’t just our accuracy that is astounding but the speed at which we recognize these features. This is all due to the fundamental nature of neural circuits as highly optimized, complex filters rather than the simple “plug-and-chug” processors we put in our traditional computers.

Brains Use a Fraction of The Power

The human brain consumes approximately one hundred-thousandth of the power that the average desktop computer does (per byte processed). Consider the implications of this difference…our brains do so much, and do it so much more efficiently, than computers. This is a feature of the filter functionality I mentioned above. To provide an example of how this works…imagine you need to cut up a block of cheese into equally sized rectangles. You have two options: you can use a knife and a measuring tape to carefully cut the cheese one piece at a time, or you can use a file, the measuring tape, and a raw chunk of steel to shape a grid-like tool that cuts any cheese block into perfectly equal rectangles. Maybe you have deduced it, but the second solution is the “neuromorphic” one: you must teach a neural network the right way to cut the cheese, but after it has learned, you can use the tool much more quickly without the need to stop and measure. Each time you use the tool in the future you save both time and energy. Similarly, neuromorphic computing is able to re-use solutions with vastly increased efficiency.

Neuromorphic Computing is Happening

Putting neuromorphic chips into phones and computers is probably not a silver bullet for all of the challenges I outlined above…instead, these chips are a serious and creative improvement to the technologies we are already so reliant on. A combination of traditional processing and neuromorphic computing is likely to be the long-term approach to applying these advancements. Very soon your phone will be that much better at telling you about the world…and at helping you be a better traveling salesman.
