Artificial Intelligence, Deep Learning, and Ray Kurzweil’s Singularity

Artificial Intelligence and the Singularity

What do Elon Musk, Bill Gates, and Stephen Hawking have in common? They are all deathly afraid of intelligent machines. This might seem a bit ironic, considering that to two of them intelligent machines were once a fantastical idea, while the third appears to be busy building them. Perhaps it isn’t the self-driving kind of machine that most frightens Mr. Musk? Admittedly, of all the smart tasks that computers will be capable of in the coming age of algorithms, self-driving is rather tame. Furthermore, it is unlikely that the first computers that learn how to drive a car safely will spontaneously learn other complex tasks. That is the feature of truly intelligent machines which seems to have the real Iron Man shaking in his rocket boots. It isn’t quite clear yet whether computers will overtake humans in intelligence within a decade, but it is no longer an inconceivable event for the general public. Ray Kurzweil, a prominent thinker, calls this moment the “Singularity”. But what will it be like?

Will the Singularity be a Disappointment?

Ray Kurzweil is a smart dude, but the moment that we achieve the Singularity may be a disappointment to him. When we think of machines as intelligent as humans, we tend to think of the little robotic Haley Joel Osment, Robin Williams, or the more recent Ex Machina kind. But a brief glance at the literature on Artificial Intelligence will tell you that the intelligences currently in development are all highly specialized. “Intelligent” programs have been designed to navigate rough terrain, jump obstacles, and identify faces, voices, and other biometrics. Other algorithms are being used to make valuable predictions or classify objects. But as far as general intelligences go, none have achieved quite so much as the IBM supercomputers Deep Blue and Watson (you know, the ones that beat Garry Kasparov at chess and whooped Ken Jennings at Jeopardy!). While Watson is constantly in development, learning new skills along the way, he is not self-conscious, nor is he capable of doing his own “development” without human guidance. He is largely just a question-and-answer robot with access to lots of information.

Independent Learning is a Hallmark of Intelligence

There are, however, computer programs that teach themselves things based on loosely defined learning rules. Deep learning neural networks are essentially simplified models of brains, designed for very specific tasks. These simple little brains can learn things about their environment very similarly to the way you learn things about the world around you. The “neurons” in these algorithms change and adapt in order to learn a pre-determined pattern. Once the pattern has been learned, this piece of software can be deployed to recognize that pattern in any new chunk of input passed to it. For example, a deep learning system could be trained to look for cats in a couple of videos of your choosing…once it has been properly trained, it could be happily sent off looking for cats in all of the videos on YouTube. While this is a somewhat silly example, you should be able to see the power that a system like this could have.
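To make the idea concrete, here is a deliberately tiny stand-in for those deep networks: a single artificial “neuron” whose weights change and adapt until it has learned a pre-determined pattern. The “cat” features and labels below are invented for illustration; a real deep learning system would have millions of such neurons and learn its features from raw pixels.

```python
import math

# A toy "pattern learner": one sigmoid neuron trained by gradient descent
# to separate two classes of (made-up) feature vectors.

def train(samples, labels, lr=0.1, epochs=2000):
    """Adapt the neuron's weights until it reproduces the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                      # prediction error drives learning
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

# Hypothetical features: [has_fur, has_whiskers]; label 1 = "cat"
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 0, 0, 0]   # here, only fur AND whiskers counts as a cat
w, b = train(X, y)
print([predict(w, b, x) for x in X])
```

Once trained, the neuron can be handed feature vectors it has never seen and will report whether they match the learned pattern, which is the essence of the “train once, deploy everywhere” workflow described above.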

But is this system what we would call intelligent? Obviously the ability to learn in an unstructured manner is a crucial piece of the puzzle, but humans do so much more than that. We aren’t restricted to a few simple learning tasks; our diverse sensory inputs allow us to learn about anything we can perceive. Perhaps that is the limiting factor for these deep learning systems? If a system of the size and scale of the human brain were created and allowed to digest the sensory milieu that we partake of on a minute-by-minute basis, would it “wake up and ask for a smoke”, or do something even more human? The reality is that, while theoretically possible, there are innumerable complexities missing even from the most sophisticated deep learning systems, making such complex behavior unlikely in the near future.

How About Whole Brain Simulations?

The Blue Brain Project is a primarily European effort based at the École Polytechnique Fédérale de Lausanne and headed by Henry Markram and Eilif Muller (among many other contributors). I recently had the opportunity to hear Dr. Muller speak about the effort to simulate whole brain structures at the Society for Neuroscience’s annual conference in Chicago. The Blue Brain Project has been tasked by various European science academies and funding agencies with constructing computational models of biologically accurate brain structures. This effort has been reasonably successful; the group has published roughly 70 journal articles in the past 5 years, representing a growing corpus of work. Here’s the catch to simulating brain tissue in this way: seconds of simulated time can take hours or days of real (computation) time. While it is very impressive that realistic neural tissues can be simulated at all, it is virtually impossible to evaluate the intelligence of a system with only a few seconds of its behavior to work with.

My own effort to model large-scale regions of the brain structure called the hippocampus has not been without major challenges. Even with computing power equal to about 4,500 desktop computers, my latest 100 millisecond simulation of 100k cells can take as long as 8 hours. While this time-course is sufficient to ask many wonderfully practical questions about how neural tissue behaves at scale…it is not at all practicable for studies of the emergent intelligence of such networks.
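The figures above imply a sobering slowdown factor, which is easy to work out:

```python
# Back-of-the-envelope slowdown implied by the numbers in the text:
# 100 ms of simulated biological time takes up to 8 hours of wall-clock time.
sim_time_s = 0.1           # 100 milliseconds of simulated time
wall_time_s = 8 * 3600     # 8 hours of computation, in seconds
slowdown = wall_time_s / sim_time_s
print(f"{slowdown:,.0f}x slower than real time")  # 288,000x
```

At 288,000 times slower than real time, simulating even a single minute of network activity would take more than five years of computation, which is why questions about emergent intelligence are out of reach for now.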

It can and should be expected that computing power will develop further and improve the performance of these large-scale biological networks, but the likelihood that these systems will run in real time (that is, simulate 1 second of brain activity in 1 second of computer time) within the next decade is slim.

Which Solution Will Achieve Singularity (With Elegance)?

A little more than ten years ago, Eugene Izhikevich published a mathematical model of the neuron which can respond to stimuli in a manner highly similar to real neurons, in real time. If such a model could be implemented at scale, in a way that reflected realistic brain systems in terms of connectivity…it is conceivable that we could have a living, breathing, albeit non-plastic, brain on our hands. If the plastic, learning aspects of the brain and deep learning algorithms could be implemented in an Izhikevich neural network, then perhaps we could truly train an intelligence that would become an elegant Singularity.
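The appeal of the Izhikevich model is how little computation it needs: two coupled equations plus a reset rule reproduce realistic spiking. A minimal sketch of a single such neuron, using the published “regular spiking” parameter set (the input current and run length here are arbitrary illustrative choices):

```python
# Euler integration of the Izhikevich (2003) neuron model:
#   v' = 0.04*v^2 + 5*v + 140 - u + I
#   u' = a*(b*v - u)
#   if v >= 30 mV: v <- c, u <- u + d
# a, b, c, d below are the published "regular spiking" values.

def izhikevich(I=10.0, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate T ms of a single neuron; return the spike times in ms."""
    v, u = c, b * c          # start at the resting state
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record time, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich()
print(len(spikes), "spikes in 1 s of simulated time")
```

A constant input current drives the neuron to fire tonically, and the whole second of activity simulates in a fraction of a second on ordinary hardware, which is exactly the efficiency that makes large-scale Izhikevich networks plausible where biophysically detailed models are not.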

Without a solution such as the one described above, it is difficult to tell what the real capabilities of such a system would be. Perhaps this would be the moment when we learn whether consciousness is an emergent phenomenon of large neural architectures or truly an endowment from above.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on the modeling of hippocampal tissue in response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer’s, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.

Why Brain Engineering Will Spawn The New “Hot Jobs”

The hot jobs of this decade have, almost without exception, become “cerebral” in some way or another. Programmers build complex algorithms, quantitative financial analysts build equally complex models, and data analysts (with their myriad titles) are swimming in complex methods. Even in the health industry you can see the trend toward an increased emphasis on problem-solving ability…physician’s assistants capable of accurately diagnosing various conditions (long the exclusive domain of board-certified medical doctors) are more in demand than ever. How appropriate, then, that brain technology would further this trend toward the “cerebral”-ization of work?

In collaboration with computer scientists, brain researchers have poked holes in the veil of the future. Several technologies previously possible only in the pages of Isaac Asimov and other sci-fi writers, such as Deep Brain Stimulation, Neuromorphic Computing, and Machine Learning, have opened a new frontier for game-changing products and applications.

Deep Brain Stimulation (DBS)

DBS is essentially a pacemaker repurposed for the brain. While nearly all current applications of DBS involve correcting disruptive electrical signals in the brain, it demonstrates that it is possible to externally and locally trigger specific circuits in the brain responsible for motion, sensation, memory, emotion, and even abstract thought. Why might this lead to the creation of so-called hot jobs? Imagine being the engineer who implements a DBS system that helps reduce cravings for food due to boredom, or a DBS system that helps you recognize individuals by stimulating circuits containing relevant information about them.

Neuromorphic Computing

[Image: a neuromorphic processor with a quarter-million synapses on a 16×16 node array]

You might have already pieced together what this means, but it is just what it sounds like: computers that are like brains in form. Now, they don’t actually look like brains, but they utilize a fundamental architecture of nodes (neurons) connected (à la synapses) in a network with variable strengths. These variable strengths are what allow learning to happen (if you forgot how this works, look here). As you can imagine, such a chip is fundamentally different from the Intel processor your desktop or laptop probably has under the hood. The fundamental difference is that, like your brain, neuromorphic chips must be trained in order to perform a task. Another interesting feature of these chips is that the task itself also needs to be designed. I can’t imagine a sexier job than thinking up tasks and training regimes for neuromorphic chips! If you aren’t convinced this is possible or coming in the near future, you might be surprised to hear that Intel and Qualcomm already have working prototypes and are planning to put them into cell phones very soon (read about it here).
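The idea that “variable strengths allow learning to happen” can be sketched in a few lines. Below is a software toy, not a hardware chip: a tiny fully connected node array whose connection strengths are adjusted by a simple Hebbian rule (“nodes that fire together, wire together”). The array size, activity pattern, and learning rate are arbitrary illustrative choices.

```python
# Hebbian learning on a tiny node array: connections between
# co-active nodes are strengthened; idle connections are untouched.

def hebbian_step(weights, activity, lr=0.01):
    """One update: strengthen the link between every co-active pair."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += lr * activity[i] * activity[j]
    return weights

n = 4
W = [[0.0] * n for _ in range(n)]  # all connection strengths start at zero
pattern = [1, 1, 0, 0]             # nodes 0 and 1 are repeatedly co-active
for _ in range(100):
    W = hebbian_step(W, pattern)

print(W[0][1], W[2][3])  # the 0-1 connection has grown (~1.0); 2-3 stays 0.0
```

After repeated exposure to the pattern, the strengthened connections encode it: this is the sense in which a neuromorphic chip is “trained” rather than programmed.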

Machine Learning

If the concept of machine learning doesn’t sound totally anthropomorphic to you…it probably should. But once again, our understanding of how networks of neurons work has opened a huge can of worms for those who know how to hook them up and go fishing. Machine learning forms much of the theoretical framework underlying neuromorphic computing. The major difference is that, not being implemented in hardware, it allows the user a ton of flexibility to build creative and novel solutions. The types of problems being solved with machine learning are crazy…many things that you and I are good at, but that would make your computer crash every time (face recognition, reading, writing, speaking, listening, and identifying objects), are all within the domain of machine learning. As you can imagine, we have only begun to tap the well of interesting applications for machine learning, and there may be an inexhaustible need for engineers to come up with them.
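One example of that software-side flexibility: a complete classifier in a dozen lines, something that would be painful to bake into hardware. This is a k-nearest-neighbour classifier written from scratch; the two-feature “cat vs. dog” data points are invented for illustration.

```python
# k-nearest-neighbour classification: label a new point by majority
# vote among the k training points closest to it.

def knn_predict(train, labels, x, k=3):
    """Return the majority label among the k nearest training points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(pt, x)), lbl)
        for pt, lbl in zip(train, labels)
    )
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy training set: two invented feature dimensions per animal
train = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.8)]
labels = ["cat", "cat", "dog", "dog"]

print(knn_predict(train, labels, (1.1, 1.0)))  # cat
print(knn_predict(train, labels, (5.0, 4.9)))  # dog
```

Swapping the distance function, the vote rule, or the features is a one-line change here, which is exactly the kind of creative latitude the paragraph above is pointing at.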



If you have any interest in writing here or would like to hear more about the work done by Clayton in the USC Center for Neural Engineering he can be reached at: clayton dot bingham at gmail dot com.