What is Neuromorphic Computing?

IBM’s Synapse “neuromorphic” processor chip embedded in a computer board.

It might be worthwhile to take a moment and inspect your current understanding of how computers work…probably some combination of zeros and ones and a series of electrical components that move those binary digits back and forth. You would be surprised how far you could get down the path of building a modern computer (at least on paper) with such a rudimentary understanding. With an electrical engineer in tow, you could probably even build a primitive mainframe computer (the ticker-tape or punch-card variety). Would it shock you if I said that nearly all of the advances since then have come from materials and manufacturing improvements? Software has made interacting with this hardware much more comfortable, and computers have gotten incredibly powerful in the past few decades, but the underlying architecture has roughly maintained that “ticker-tape” mode of information transfer. This is convenient for lots of reasons…the functionality of a program relies entirely on the quality of the code that gives it instructions (read “ticker-tape”). In some ways, all of that is about to change.

Neuromorphic computing is pretty close to what it sounds like…brain-like computing. Many of the core components of the brain can be implemented in analog hardware…resistors and capacitors in parallel and in series become neurons–their cell bodies, dendrites, and axons. When these analog neurons are connected to one another (via synapses) into a network, they take on many of the same processing properties that the neurons in our brains do. When researchers figured out how to make the capacitance variable (primer on capacitance found here), they also figured out how to make the analog neurons “learn”; this mimics the natural changes in the strength of connections between neurons in a brain.
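
To make that a little more concrete, here is a minimal sketch in Python of the kind of “leaky integrate-and-fire” neuron that a resistor and capacitor pair implements. The simulation and its parameter values are purely illustrative; they are not taken from any particular neuromorphic chip.

```python
# Minimal leaky integrate-and-fire neuron: a resistor (the "leak") and a
# capacitor (the "membrane") in parallel, driven by an input current.
# Parameter values are illustrative only, not from any real neuromorphic chip.

def simulate_lif(input_current, dt=0.001, R=1e7, C=1e-9, v_rest=0.0, v_thresh=0.03):
    """Return the membrane voltage trace and the steps at which spikes occur."""
    tau = R * C                     # membrane time constant of the RC pair
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + I*R) / tau  -- the RC circuit equation
        v += dt * (-(v - v_rest) + i_in * R) / tau
        if v >= v_thresh:           # threshold crossing: the neuron "fires"
            spikes.append(step)
            v = v_rest              # reset, like the capacitor discharging
        voltages.append(v)
    return voltages, spikes

# A constant 5 nA input for 100 ms produces a regular train of spikes.
current = [5e-9] * 100
_, spike_times = simulate_lif(current)
print("spike steps:", spike_times)
```

The resistor plays the role of the leaky membrane, the capacitor integrates incoming current, and a “spike” is simply the voltage crossing a threshold and resetting.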

Now that you understand what it is, you might ask, “Why do we want brain-like computers?”

Traditional Computers Suck at Optimization

Have you ever heard of the “traveling salesman problem”? It goes something like this…You show up in a new town with a bunch of widgets to sell. So you go to the local chamber of commerce and ask them for a list of businesses that might be interested in purchasing some widgets. They give you names and addresses for ten businesses as well as a town map. You obviously don’t want to take too long making these sales calls or you might not make it to the next town before dark. So you sit down to figure out what order you should visit these ten businesses, and what path you should take through town, so that you spend the least amount of time traveling. Believe it or not, your brain is usually faster at coming up with a pretty good solution to these kinds of problems than computers are; a rough sketch of why appears below. The challenge of teaching traditional computers to solve “traveling salesman” problems has helped drive a whole field of research called optimization. (more about traveling salesman problems here)
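
To get a feel for why this is hard for a machine, here is a rough Python sketch with ten made-up business locations. Checking every possible route gives the exact answer but blows up factorially with the number of stops, while the quick “go to the nearest unvisited business” shortcut is fast but only approximate, which is closer in spirit to the good-enough answer your brain produces.

```python
import itertools, math, random

# Ten made-up business locations on a town map (x, y in arbitrary units).
random.seed(0)
stops = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]

def tour_length(order):
    """Total distance of visiting the stops in the given order and returning home."""
    return sum(math.dist(stops[order[i]], stops[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exhaustive search: even for ten stops this checks 9! = 362,880 orderings,
# so it takes a noticeable moment; add one more stop and it gets 10x slower.
best_rest = min(itertools.permutations(range(1, 10)),
                key=lambda rest: tour_length((0,) + rest))
print("optimal tour length:", round(tour_length((0,) + best_rest), 2))

# Nearest-neighbour heuristic: always walk to the closest unvisited business.
unvisited, route = set(range(1, 10)), [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: math.dist(stops[route[-1]], stops[j]))
    route.append(nxt)
    unvisited.remove(nxt)
print("greedy tour length: ", round(tour_length(tuple(route)), 2))
```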

Brains Rock at Pattern Recognition, Vision, and Object Recognition

You didn’t need any help recognizing this natural pattern as a giraffe. A traditional computer would likely be stumped.

There isn’t a day that passes without your brain having to recognize new objects for what they are. You probably saw your first cat fairly early in life…did you ever stop to wonder how you learned to recognize your second cat encounter as a version of the first? You may think that it is an algorithmic solution…four legs, a tail, fur, and whiskers and you have a cat? That is how traditional computers have been programmed to identify cats, and for the most part they perform dismally. Humans are so good at identifying cats that we often outperform the best computer algorithms even when we are only shown part of the animal we are to identify. It isn’t just our accuracy that is astounding but the speed at which we recognize these features. This is largely due to the fundamental nature of neural circuits: they act as highly optimized, complex filters rather than the simple “plug-and-chug” processors we put in our traditional computers. A toy illustration of that “filter” idea follows below.
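
Here is what such a “filter” can look like in its simplest form: a small template slides over an image and responds strongly wherever its pattern appears, with no step-by-step list of rules. The plain-Python sketch below is just a toy, not a claim about how the visual system is actually wired.

```python
# Toy "filter" for pattern recognition: slide a small template over an image
# and report how strongly it responds at each position.

image = [            # a 5x5 image with a dark-to-bright vertical edge
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

kernel = [           # a 3x3 template tuned to that vertical edge
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    """Slide the template over the image and record its response at each spot."""
    k = len(ker)
    out = []
    for r in range(len(img) - k + 1):
        row = []
        for c in range(len(img[0]) - k + 1):
            row.append(sum(ker[i][j] * img[r + i][c + j]
                           for i in range(k) for j in range(k)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)   # large values mark where the edge pattern was "recognized"
```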

Brains Use a Fraction of The Power

The human brain consumes, by some estimates, roughly one one-hundred-thousandth of the power that the average desktop computer does (per unit of information processed). Consider the implications of this difference…our brains do so much, and so much more efficiently, than computers do. This is a feature of the filter functionality that I mentioned above. To provide an example of how this works…imagine you need to cut up a block of cheese into equally sized rectangles. You have two options: you can use a knife and a measuring tape to carefully cut the cheese one piece at a time, or you can use a file, the measuring tape, and a raw chunk of steel to shape a grid-like tool that cuts any cheese block into perfectly equal rectangles. Maybe you have deduced it already, but the second solution is the “neuromorphic” one–you must teach a neural network the right way to cut the cheese, but after it has learned, you can use the tool much more quickly without the need to stop and measure. Each time you use the tool in the future you save both time and energy. Similarly, neuromorphic computing is able to re-use learned solutions with vastly increased efficiency.

Neuromorphic Computing is Happening

Putting neuromorphic chips into phones and computers is probably not a silver bullet for all of the challenges that I outlined above…instead it is a serious and creative improvement to the technologies that we are already so reliant on. A combination of traditional processing and neuromorphic computing is likely to be the long-term approach to applying these advancements. Very soon your phone will be that much better at telling you about the world…and at helping you be a better traveling salesman.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on modeling hippocampal tissue in response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer’s, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.


Why Brain Engineering Will Spawn The New “Hot Jobs”

The hot jobs of this decade, almost without exception, have become “cerebral” in some way or another. Programmers build complex algorithms, quantitative financial analysts build equally complex models, and data analysts (with their myriad titles) are swimming in complex methods. Even in the health industry you can see the trend toward an increased emphasis on problem-solving ability…physician assistants who are capable of accurately diagnosing various conditions (long the exclusive domain of board-certified medical doctors) are more in demand than ever. How appropriate is it that brain technology would further this trend toward the “cerebral”-ization of work?

In collaboration with computer scientists, brain researchers have poked holes in the veil of the future–several technologies previously possible only in the pages of Isaac Asimov and other sci-fi writers, such as Deep Brain Stimulation, Neuromorphic Computing, and Machine Learning, have opened a new frontier for game-changing products and applications.

Deep Brain Stimulation (DBS)

DBS is essentially a pacemaker repurposed for the brain. While nearly all current applications of DBS involve correcting disruptive electrical signals in the brain, it demonstrates that it is possible to externally and locally trigger specific brain circuits responsible for motion, sensation, memory, emotion, and even abstract thought. Why might this lead to the creation of so-called hot jobs? Imagine being the engineer who implements a DBS system that helps reduce boredom-driven food cravings, or one that helps you recognize individuals by stimulating the circuits containing relevant information about that person.

Neuromorphic Computing

A neuromorphic processor with a quarter-million synapses on a 16×16 node array.

You might have already pieced together what this means, but it is just what it sounds like: computers that are like brains in form. Now, they don’t actually look like brains, but they use a fundamental architecture of nodes (neurons) connected (à la synapses) into a network with variable connection strengths. These variable strengths are what allow learning to happen (if you forgot how this works, look here; a toy sketch of the idea also follows below). As you can imagine, such a chip is fundamentally different from the Intel processor your desktop or laptop probably has under the hood. The key difference is that, like your brain, a neuromorphic chip must be trained in order to perform a task. Another interesting feature of these chips is that the task itself also needs to be designed. I can’t imagine a sexier job than thinking up tasks and training regimes for neuromorphic chips! If you aren’t convinced this is possible or coming in the near future, you might be surprised to hear that Intel and Qualcomm already have working prototypes and are planning to put them into cell phones very soon (read about it here).
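
As a reminder of what “variable strengths allow learning” means, here is a toy Hebbian update in plain Python: connections between nodes that are repeatedly active together get stronger. Real neuromorphic chips implement this kind of rule in analog hardware; the numbers below are invented purely for illustration.

```python
# Toy Hebbian learning: "nodes that fire together wire together."
# Each time two connected nodes are active at the same time, the strength
# (weight) of the connection between them is nudged upward.

weights = [[0.1, 0.1, 0.1],   # weights[i][j] = strength from node i to node j
           [0.1, 0.1, 0.1],
           [0.1, 0.1, 0.1]]

# Training patterns: which of the three nodes are active (1) on each trial.
# Nodes 0 and 1 are always co-active; node 2 never joins them.
patterns = [[1, 1, 0]] * 20
learning_rate = 0.05

for activity in patterns:
    for i in range(3):
        for j in range(3):
            if i != j:
                # Hebbian rule: strengthen i -> j in proportion to joint activity.
                weights[i][j] += learning_rate * activity[i] * activity[j]

for row in weights:
    print([round(w, 2) for w in row])
# The 0 <-> 1 connections end up strong (1.1); connections to node 2 stay at 0.1.
```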

Machine Learning

If the concept of a machine that learns doesn’t sound totally anthropomorphic to you…it probably should. Once again, our understanding of how networks of neurons work has opened a huge can of worms for those who know how to hook them up and go fishing. Machine Learning forms much of the theoretical framework underlying neuromorphic computing. The major difference is that, because it is not tied to dedicated hardware, it allows the user a ton of flexibility to build creative and novel solutions. The types of problems being solved with Machine Learning are crazy…there are many things that you and I are good at but that trip up a traditional computer every time–face recognition, reading, writing, speaking, listening, and identifying objects are all within the domain of machine learning. A tiny worked example follows below. As you can imagine, we have only begun to tap the well of interesting applications for machine learning, and there may be an inexhaustible need for engineers to come up with them.
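
For a feel of what learning from examples looks like in software, here is a small perceptron in plain Python that learns to separate two clusters of points. The data and settings are made up, and real applications such as face recognition use far larger networks built from the same basic ingredient.

```python
import random

# A tiny machine-learning example: a perceptron learns, from labelled examples,
# to tell "small" points apart from "large" ones. Data and settings are made up.

random.seed(1)
examples = [((random.uniform(0, 1), random.uniform(0, 1)), 0) for _ in range(50)] + \
           [((random.uniform(2, 3), random.uniform(2, 3)), 1) for _ in range(50)]

w = [0.0, 0.0]   # one connection weight per input feature
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    """Guess the label from the current weights."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learning loop: nudge the weights whenever the current guess is wrong.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print("learned weights:", [round(v, 2) for v in w], "bias:", round(b, 2))
print("prediction for (0.2, 0.4):", predict((0.2, 0.4)))   # expect 0
print("prediction for (2.5, 2.8):", predict((2.5, 2.8)))   # expect 1
```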


If you have any interest in writing here, or would like to hear more about the work done by Clayton in the USC Center for Neural Engineering, he can be reached at: clayton dot bingham at gmail dot com.