Artificial Intelligence, Deep Learning, and Ray Kurzweil’s Singularity

Artificial Intelligence and the Singularity

What do Elon Musk, Bill Gates, and Stephen Hawking have in common? They are all deathly afraid of intelligent machines. This might seem a bit ironic, considering that to two of them intelligent machines were once a fantastical idea, while the third appears to be busy building them. Perhaps it isn't the self-driving kind of machine that most frightens Mr. Musk? Admittedly, of all the smart tasks computers will be capable of in the coming age of algorithms, self-driving is rather tame. Furthermore, it is unlikely that the first computers to learn how to drive a car safely will spontaneously learn other complex tasks. It is that feature of truly intelligent machines, the capacity to learn beyond their original task, that seems to have the real Iron Man shaking in his rocket boots. It isn't quite clear yet whether computers will overtake humans in intelligence within a decade, but it is no longer an inconceivable event for the general public. Ray Kurzweil, a prominent thinker, calls this moment the "Singularity." But what will it be like?

Will the Singularity be a Disappointment?

Ray Kurzweil is a smart dude, but the moment we achieve the Singularity may be a disappointment to him. When we think of machines as intelligent as humans, we tend to think of little robotic Haley Joel Osment, Robin Williams, or the more recent Ex Machina kind. But a brief glance at the literature on artificial intelligence will tell you that the intelligences currently in development are all highly specialized. "Intelligent" programs have been designed to navigate rough terrain, jump obstacles, and identify faces, voices, and other biometrics. Other algorithms are being used to make valuable predictions or classify objects. But as far as general intelligences go, none have achieved quite so much as the IBM supercomputers Deep Blue and Watson; you know, the ones that beat a chess grandmaster and whooped Ken Jennings at Jeopardy? While Watson is constantly in development and learning new skills along the way, he is not self-conscious, nor is he capable of doing his own "development" without human guidance. He is largely just a question-and-answer robot with access to lots of information.

Independent Learning is a Hallmark of Intelligence

There are, however, computer programs that teach themselves things based on loosely defined learning rules. Deep learning neural networks are essentially simplified models of brains designed for very specific tasks. These simple little brains can learn things about their environment in much the same way you learn things about the world around you. The "neurons" in these algorithms change and adapt in order to learn a pre-determined pattern. Once the pattern has been learned, the software can be deployed to recognize it in any new chunk of input passed to it. For example, a deep learning system could be trained to look for cats in a couple of videos of your choosing; once it has been properly trained, it could happily be sent off looking for cats in all of the videos on YouTube. While this is a somewhat silly example, you should be able to see the power that a system like this could have.
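To make that train-then-deploy idea concrete, here is a minimal sketch in Python (assuming NumPy is available). A single logistic "neuron" stands in for a full deep network, and the "video frames" are made-up feature vectors rather than real data; the only point is the workflow of learning a labeled pattern and then scoring inputs the system has never seen.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a "frame", label 1 means "contains a cat".
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(float)  # a toy stand-in pattern

w, b, lr = np.zeros(16), 0.0, 0.1

# Training: nudge the "neuron" until it reproduces the labeled pattern.
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))      # predicted probability per frame
    w -= lr * (X_train.T @ (p - y_train)) / len(y_train)
    b -= lr * np.mean(p - y_train)

# Deployment: the trained model can now score frames it has never seen.
X_new = rng.normal(size=(5, 16))
scores = 1.0 / (1.0 + np.exp(-(X_new @ w + b)))
print("probability each new frame matches the learned pattern:", scores.round(2))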

But is this system what we call intelligent? Obviously the ability to learn in an unstructured manner is a crucial piece of the puzzle, but humans do so much more than that. We aren't restricted to a few simple learning tasks; our diverse sensory inputs allow us to learn about anything we can perceive. Perhaps that is the limiting factor for these deep learning systems? If a system of the size and scale of the human brain were created and allowed to digest the sensory milieu that we take in on a minute-by-minute basis, would it "wake up and ask for a smoke," or do something even more human? The reality is that, while this is theoretically possible, there are innumerable complexities missing even from the most sophisticated deep learning systems, and they make such complex behavior unlikely in the near future.

How About Whole Brain Simulations?

The Blue Brain Project is a primarily European effort based at the École Polytechnique Fédérale de Lausanne and headed by Henry Markram and Eilif Muller (among many other contributors). I recently had the opportunity to hear Dr. Muller speak about the effort to simulate whole brain structures at the Society for Neuroscience's annual conference in Chicago. The Blue Brain Project has been tasked by various European science academies and funding agencies with constructing computational models of biologically accurate brain structures. This effort has been reasonably successful; the group has published roughly 70 journal articles in the past 5 years, representing a growing corpus of work. Here's the catch to simulating brain tissue in this way: seconds of simulated time can take hours or days of real (computation) time. While it is very impressive that realistic neural tissues can be simulated at all, it is virtually impossible to evaluate the intelligence of a system with only a few seconds to work with.

My own effort to model large-scale regions of the brain structure called the hippocampus has not been without major challenges. Even with computing power equal to about 4,500 desktop computers, my latest 100-millisecond simulation of 100k cells can take as long as 8 hours. While this time course is sufficient to ask many wonderfully practical questions about how neural tissue behaves at scale, it is not at all practicable for studies of the emergent intelligence of such networks.
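Taking those numbers at face value, a little arithmetic shows just how far from real time this is (the figures below simply restate the ones quoted above):

simulated_seconds = 0.1          # a 100 millisecond simulation
wall_clock_seconds = 8 * 3600    # up to 8 hours of computation

slowdown = wall_clock_seconds / simulated_seconds
print(f"roughly {slowdown:,.0f}x slower than real time")
# At that rate a single simulated second would take on the order of 3 days.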

It is reasonable to expect that computing power will continue to grow and will improve the performance of these large-scale biological networks, but the likelihood that such systems will run in real time (that is, simulate 1 second of activity in 1 second of computer time) within the next decade is slim.

Which Solution Will Achieve the Singularity (With Elegance)?

A little more than ten years ago, Eugene Izhikevich published a mathematical model of the neuron that can respond to stimuli in a manner highly similar to real neurons, and in real time. If such a model could be implemented at scale, in a way that reflected realistic brain systems in terms of connectivity, it is conceivable that we could have a living, breathing, albeit non-plastic, brain on our hands. If the plastic, learning aspect of the brain and deep learning algorithms could then be implemented in an Izhikevich neural network, perhaps we could truly train an intelligence that would become an elegant Singularity.
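For the curious, the model is compact enough to sketch in a few lines of Python. The parameter values below are the "regular spiking" set from Izhikevich's 2003 paper; the injected current and the simple Euler integration are arbitrary choices made purely for illustration.

# Izhikevich's simple neuron model: two coupled equations plus a reset rule.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" cortical cell parameters
dt, T, I = 0.5, 1000.0, 10.0         # step (ms), duration (ms), injected current

v, u = -65.0, b * -65.0              # membrane potential and recovery variable
spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * (a * (b * v - u))
    if v >= 30.0:                    # spike: record it, then reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T:.0f} ms of simulated time")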

Without a solution such as the one described above, it is difficult to tell what the real capabilities of such a system would be. Perhaps this will be the moment when we learn whether consciousness is an emergent phenomenon of large neural architectures or truly an endowment from above.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on modeling hippocampal tissue in response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer's, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton and his colleagues in the USC Center for Neural Engineering, he can be reached at: csbingha-at-usc-dot-edu.

Your Smart Phone Will Get An IQ Bump From Neuromorphic Chips


In 2008 I came back from having spent two years abroad. During that time many things happened that contributed greatly to the current technology landscape: Facebook became open to the public, YouTube hit its stride, and the iPhone was first released. While it wasn't the first "smartphone" per se, the iPhone was the first truly popular one. Within two years the market's appetite for smartphones had grown from non-existent to staggering, and by the end of 2015 it is estimated that there will be 2 billion mobile devices in circulation. While this signals tremendous progress for computing, it is fair to wonder what it is about these phones that makes them "smart." There is no doubt as to why they represent a huge advancement in mobile technology: the newest mobile microprocessors are virtually indistinguishable from their desktop counterparts in architecture, and nearly a match for many entry-level machines in performance. But does that make them smart? Not yet. We use the word smart to describe a human trait: the ability to learn, which may be synonymous with intelligence. While it wouldn't be productive to get into the semantics of intelligence, it is clear that "smart" phones don't pass the test. But there is a technology in the pipeline that will allow phones to learn and adapt in order to provide better solutions to our problems.

The Bridge Between Brains and Computing

Late last year IBM introduced its TrueNorth neuromorphic technology, and later this year Qualcomm will introduce the Kryo (a similar architecture) into production. These chips are not your typical grid of transistors. In fact, they are designed to mimic your brain. Engineers have found a way to reimplement neural networks using resistors and capacitors (resistors provide resistance and capacitors are like miniature batteries) arranged in parallel and in series. This hardware, while a recognizable simplification of the biological systems it is modeled after, is capable of learning in a manner reminiscent of the way our own neurons learn.
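One common abstraction for what such a resistor-capacitor pair does is the leaky integrate-and-fire neuron: the capacitor charges up under an input current, the resistor leaks charge away, and crossing a voltage threshold counts as a spike. The sketch below uses made-up component values and is not taken from TrueNorth, Kryo, or any real chip; it only illustrates the principle.

C = 1e-9          # capacitance in farads (the "miniature battery")
R = 1e7           # leak resistance in ohms
v_thresh = 0.03   # spike threshold in volts
dt = 1e-4         # time step in seconds
I_in = 5e-9       # constant input current in amperes

v, spikes = 0.0, 0
for _ in range(10000):                  # one simulated second
    v += dt * (I_in - v / R) / C        # current charges the capacitor, the resistor leaks it
    if v >= v_thresh:                   # a threshold crossing counts as a spike
        spikes += 1
        v = 0.0                         # reset the membrane voltage
print(f"firing rate: {spikes} spikes per simulated second")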

The journey to a successful neuromorphic chip has not been a short one. The first artificial neural networks were being tinkered with in the 1940s by Warren McCulloch and Walter Pitts. Even then it was postulated that we might someday be able to build something of an artificial brain and harness its computational power as a sort of personal assistant, or perhaps let it loose to work on the biggest problems of the day. While actually achieving this is a long way off, and many complicated hurdles remain, some of these science fictions have become reality in very important ways. No one has yet managed to build a complete human brain, but we have been able to simulate large portions of it with biologically realistic features. When I joined the Center for Neural Engineering (CNE) at the University of Southern California in late 2014, researchers there were already using thousands of computers to reconstruct and simulate up to a million neurons in a very biologically realistic computational model of the hippocampus. We called ourselves the multi-scale modeling group because we incorporated complex details of the brain at multiple scales: detailed models of synapses and beautiful, morphologically appropriate models of neurons, all arranged and connected according to what we had observed in experimental studies of the hippocampus. The primary purpose of this work, at the time, was to explore the possibility of replacing dysfunctional portions of the hippocampus with a computer chip. As of this writing, CNE has successfully tested such a device in rats and macaques, and has just completed preliminary testing in humans. Such a device, which incorporates complex math and analog electrical hardware, is able to function by mimicking the computation that might have been performed by a healthy network of neurons.
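The McCulloch-Pitts unit mentioned above is simple enough to write down in full: a neuron that fires only when the weighted sum of its binary inputs reaches a threshold. The weights and threshold below are an illustrative choice that happens to make the unit behave as a logical AND gate.

def mcculloch_pitts(inputs, weights, threshold):
    # Fires (returns 1) only if the weighted input sum reaches the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", mcculloch_pitts([x, y], [1, 1], threshold=2))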

Why Your Phone Needs a New Brain

You may have noticed a few interesting new features in Facebook's photo tagging system in the past couple of years. Of particular interest is the site's ability to recognize faces. While the algorithms are very impressive, humans still outperform all but the very best of them, and with much greater efficiency. How is it that your brain is so much better at this exercise than an algorithm? The answer lies in the architecture of your brain and how it learns. Your brain learns by crafting a network of cells that look for small features of a person's face. A particular set of facial characteristics causes that network to respond with a unique pattern, and that pattern is then identified as the facial response pattern of that individual. If trained adequately, this type of facial recognition can happen at nearly electric speed (a bit slower, because actual connections between neurons are mostly chemical, not electrical).
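Here is a minimal sketch of that "response pattern" idea, with activation vectors invented purely for illustration: each known person is stored as a pattern of feature-detector responses, and a new response is attributed to whoever's stored pattern it most resembles (cosine similarity here is just a stand-in for whatever matching the brain actually performs).

import numpy as np

known_patterns = {                      # hypothetical stored "facial response patterns"
    "alice": np.array([0.9, 0.1, 0.7, 0.2]),
    "bob":   np.array([0.2, 0.8, 0.3, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def identify(response):
    # Pick the stored pattern whose "shape" best matches the new response.
    return max(known_patterns, key=lambda name: cosine(response, known_patterns[name]))

new_response = np.array([0.85, 0.15, 0.65, 0.25])
print(identify(new_response))           # most resembles the stored pattern for "alice"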

In general, pattern recognition problems are prime areas for improvement in computing. Because mobile computers are so often presented with pattern stimuli (route planning, video, images, sound, etc.), they are the prime application for neuromorphic chips and the place where these chips can make the biggest impact. Look forward to your phone getting a lot smarter in the near future…it is going to get a new brain.
