Your Smart Phone Will Get An IQ Bump From Neuromorphic Chips


In 2008 I came back from having spent two years abroad. During that time many things happened that contributed greatly to the current technology landscape: Facebook became open to the public, YouTube hit its stride, and the iPhone was first released. While it wasn’t the first “smartphone” per se, the iPhone was the first truly popular one. Within two years the market’s appetite for smartphones had grown from non-existent to staggering. By the end of 2015 it is estimated that there will be 2 billion mobile devices in circulation.

While this signals tremendous progress for computing, it is fair to wonder what it is about these phones that makes them “smart”. There is no doubt that they represent a huge advancement in mobile technology: the newest mobile microprocessors are virtually indistinguishable from their desktop counterparts in architecture, and nearly a match for many entry-level machines in performance. But does that make them smart? Not yet. We use the word smart to describe a human trait: the ability to learn, which may be synonymous with intelligence. While it wouldn’t be productive to get into the semantics of intelligence, it is clear that “smart” phones don’t pass the test. But there is a technology in the pipeline that will allow phones to learn, and adapt, in order to provide better solutions to our problems.

The Bridge Between Brains and Computing

Late last year IBM introduced its TrueNorth neuromorphic technology, and later this year Qualcomm will bring the Kryo (a similar architecture) into production. These chips are not your typical grid of transistors. In fact, they are designed to mimic your brain. Engineers have found a way to reimplement neural networks using resistors and capacitors (resistors impede the flow of current and capacitors act like miniature batteries) wired in parallel and in series. This hardware, while a recognizable simplification of the biological systems it is modeled after, is capable of learning in a manner reminiscent of the way that our own neurons learn.
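To get a feel for how a resistor-capacitor pairing can act like a neuron, here is a minimal "leaky integrate-and-fire" sketch in Python. This is an illustration of the general idea, not the actual design of TrueNorth or Kryo, and every parameter value below is an arbitrary choice for demonstration: the capacitor integrates incoming current, the resistor leaks charge away, and a spike is emitted when the voltage crosses a threshold.

```python
def simulate_lif(input_current, r=1.0, c=10.0, v_thresh=1.0, dt=1.0):
    """Return the spike times of a toy leaky RC 'neuron'.

    input_current: sequence of current samples, one per time step.
    r, c: resistance and capacitance (arbitrary demo units).
    v_thresh: voltage at which the neuron 'fires' and resets.
    """
    v = 0.0          # voltage across the capacitor
    spikes = []
    for t, i_in in enumerate(input_current):
        # dV/dt = (I*R - V) / (R*C): current charges the capacitor,
        # the resistor leaks the stored charge back out.
        v += dt * (i_in * r - v) / (r * c)
        if v >= v_thresh:        # threshold crossed: emit a spike
            spikes.append(t)
            v = 0.0              # reset, like discharging the capacitor
    return spikes

# A steady input above threshold produces a regular spike train;
# zero input produces none.
spike_times = simulate_lif([1.5] * 100)
```

Real neuromorphic hardware does this with physical components rather than software loops, which is exactly where the efficiency win comes from.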

The journey to a successful neuromorphic chip has not been a short one. The first artificial neural networks were being tinkered with in the 1940s by Warren McCulloch and Walter Pitts. Even then it was postulated that we might someday be able to build something of an artificial brain and harness its computational power as a sort of personal assistant, or perhaps let it loose on the biggest problems of the day. While actually achieving this is a long way off, and many complicated hurdles remain, some of these science fictions have become reality in very important ways. No one has yet managed to build a complete human brain, but we have been able to simulate large portions of it with biologically realistic features. When I joined the Center for Neural Engineering (CNE) at the University of Southern California in late 2014, researchers there were already using thousands of computers to reconstruct and simulate up to a million neurons in a very biologically realistic computational model of the hippocampus. We called ourselves the multi-scale modeling group because we incorporated complex details of the brain at multiple scales: detailed models of synapses, and beautiful, morphologically appropriate models of neurons, all arranged and connected according to what we had observed in experimental studies of the hippocampus. The primary purpose of this work, at the time, was to explore the possibility of replacing dysfunctional portions of the hippocampus with a computer chip. As of this writing, CNE has successfully tested such a device in rats and macaques, and has just completed preliminary testing in humans. Such a device, which incorporates complex math and analog electrical hardware, functions by mimicking the computation that would have been performed by a network of neurons.

Why Your Phone Needs a New Brain

You may have noticed a few interesting new features in Facebook’s photo tagging system in the past couple of years. Of particular interest is the site’s ability to recognize faces. While the algorithms are very impressive, humans outperform all but the very best of them, and with much greater efficiency. How is it that your brain is so much better at this exercise than an algorithm? The answer lies in the architecture of your brain and how it learns. Your brain learns by crafting a network of cells to look for small features of a person’s face. Particular facial characteristics cause neural networks to respond with a unique pattern. This pattern is then identified as the facial response pattern of that individual. If trained adequately, this type of facial recognition can happen at nearly electric speed (a bit slower, because actual connections between neurons are mostly chemical, not electrical).
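The "response pattern" idea above can be sketched in a few lines of Python. This is a deliberately tiny toy, not how Facebook or the brain actually does it: each person is represented by the binary pattern their face evokes in a small bank of hypothetical feature detectors, and an unknown face is identified by finding the closest stored pattern.

```python
def hamming(a, b):
    """Count the positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def identify(pattern, known_patterns):
    """Return the name whose stored response pattern is nearest."""
    return min(known_patterns, key=lambda name: hamming(pattern, known_patterns[name]))

# Hypothetical stored "facial response patterns" for two people,
# one bit per imagined feature detector.
known = {
    "alice": [1, 0, 1, 1, 0, 0, 1, 0],
    "bob":   [0, 1, 0, 0, 1, 1, 0, 1],
}

# A noisy observation of Alice (one detector flipped) still matches her.
observed = [1, 0, 1, 1, 0, 1, 1, 0]
print(identify(observed, known))  # -> alice
```

The point of the toy is the tolerance to noise: a pattern-matching system degrades gracefully when a few detectors misfire, which is part of why brains (and neuromorphic hardware) handle messy real-world input so well.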

In general, pattern recognition problems are prime areas for improvement in computing. Because mobile computers are so often presented with pattern-rich stimuli (route planning, video, images, sound, etc.), they are the prime application for neuromorphic chips, and really the place where these chips can make the biggest impact. Look forward to your phone getting a lot smarter in the near future…it is going to get a new brain.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on modeling hippocampal tissue’s response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies could be used for a broad range of pathologies including Alzheimer’s, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.


One thought on “Your Smart Phone Will Get An IQ Bump From Neuromorphic Chips”

  1. Hello Clayton, I find this fascinating. For some years work has been going on into producing miniaturised retinal implants to restore a usable level of vision, particularly in eyes that have lost central foveal vision. It is still at a primitive level which, considering I was not long out of college when it began and am now retired, is a particularly slow rate. The obstacles to be overcome are immense. Maybe if neuromorphic technology could provide the sort of cross-linking found in the retina it might enhance the current low level of acuity obtainable by improving contrast, border recognition etc. Regards, Tony Shephard.

