Artificial Intelligence, Deep Learning, and Ray Kurzweil’s Singularity

Artificial Intelligence and the Singularity

What do Elon Musk, Bill Gates, and Stephen Hawking have in common? They are all deathly afraid of intelligent machines. This might seem a bit ironic, considering that intelligent machines were once a fantastical idea to two of them, while the third seems to be busy building them. Perhaps it isn’t the self-driving kind of machine that most frightens Mr. Musk? Admittedly, of all the smart tasks that computers will be capable of in the coming age of algorithms, self-driving is rather tame. Furthermore, it is unlikely that the first computers to learn how to drive a car safely will spontaneously learn other complex tasks. It is that capacity for spontaneous, general learning which seems to have the real Iron Man shaking in his rocket boots. It isn’t quite clear yet whether computers will overtake humans in intelligence within a decade, but the event is no longer inconceivable to the general public. Ray Kurzweil, a prominent thinker, calls this moment the “Singularity”. But what will it be like?

Will the Singularity be a Disappointment?

Ray Kurzweil is a smart dude, but the moment we achieve the Singularity may be a disappointment to him. When we think of machines as intelligent as humans, we tend to think of little robotic Haley Joel Osment, Robin Williams, or the more recent Ex Machina kind. But a brief glance at the literature on artificial intelligence will tell you that the intelligences now in development are all highly specialized. “Intelligent” programs have been designed to navigate rough terrain, jump obstacles, and identify faces, voices, and other biometrics. Other algorithms are being used to make valuable predictions or to classify objects. But as far as general intelligences go, none have achieved quite so much as the IBM supercomputers Deep Blue and Watson—you know, the ones that beat the grandmaster at chess and whooped Ken Jennings at Jeopardy!? While Watson is constantly in development and learning new skills along the way, he is not self-conscious, nor is he capable of doing his own “development” without human guidance. He is largely just a question-and-answer robot with access to lots of information.

Independent Learning is a Hallmark of Intelligence

There are, however, computer programs that teach themselves things based on loosely defined learning rules. Deep learning neural networks are essentially simplified models of brains designed for very specific tasks. These simple little brains can learn things about their environment in much the same way you learn things about the world around you. The “neurons” in these algorithms change and adapt in order to learn a pre-determined pattern. Once the pattern has been learned, the software can be deployed to recognize that pattern in any new chunk of input passed to it. For example, a deep learning system could be trained to look for cats in a couple of videos of your choosing…once it has been properly trained, the system could happily be sent off looking for cats in all of the videos on YouTube. While this is a somewhat silly example, you should be able to see the power that a system like this could have.
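That train-then-deploy loop can be caricatured with a single artificial “neuron”. The sketch below is a deliberately tiny illustration, not a real deep learning system: a lone perceptron learns the logical AND of two inputs (standing in for a far richer pattern like “cat in this frame”), and every constant in it is an invented toy value.

```python
import random

# One artificial "neuron": a weighted sum of inputs pushed through a threshold.
random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # connection strengths, start random
bias = 0.0

# Labeled training examples: the pattern to learn is the logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + bias
    return 1 if s > 0 else 0

# Training: nudge the weights a little toward correcting each mistake
# (the classic perceptron learning rule)
for _ in range(20):
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        bias += 0.1 * err

# Deployment: the trained unit now recognizes the pattern on its own
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

Real deep networks stack millions of such units in layers and use subtler update rules, but the shape of the process is the same: adapt connection strengths until the pattern is learned, then turn the trained system loose on new input.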

But is this system what we would call intelligent? Obviously the ability to learn in an unstructured manner is a crucial piece of the puzzle, but humans do so much more than that. We aren’t restricted to a few simple learning tasks. Our diverse sensory inputs allow us to learn about anything we can perceive. Perhaps that is the limiting factor for these deep learning systems? If a system of the size and scale of the human brain were created and allowed to digest the sensory milieu that we partake of on a minute-by-minute basis, would it “wake up and ask for a smoke”, or do something even more human? The reality is that, while such a system is theoretically possible, there are innumerable complexities missing from even the most sophisticated deep learning systems, and those gaps make such complex behavior unlikely in the near future.

How About Whole Brain Simulations?

The Blue Brain Project is a primarily European effort based out of the École Polytechnique Fédérale de Lausanne and headed up by Henry Markram and Eilif Muller (among many other contributors). I recently had the opportunity to hear Dr. Muller speak about the effort to simulate whole brain structures at the Society for Neuroscience’s annual conference in Chicago. The Blue Brain Project has been tasked by various European science academies and funding agencies with constructing computational models of biologically accurate brain structures. This effort has been reasonably successful; the group has published 70 or so journal articles in the past 5 years, representing a growing corpus of work. Here’s the catch to simulating brain tissue in this way: seconds of simulated time can take hours or days of real (computation) time. While it is very impressive that realistic neural tissues can be simulated at all, it is virtually impossible to evaluate the intelligence of a system with only a few seconds of activity to work with.

My own effort to model large-scale regions of the brain structure called the hippocampus has not been without major challenges. Even with computing power equal to about 4,500 desktop computers, my latest 100-millisecond simulation of 100k cells can take as long as 8 hours. While this timescale is sufficient to ask many wonderfully practical questions about how neural tissue behaves at scale…it is not practicable at all for studies of the emergent intelligence of such networks.
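Taking those figures at face value, the gap from real time is easy to quantify:

```python
# Back-of-the-envelope check of the real-time gap described above,
# assuming the quoted figures: 100 ms of simulated activity in up to
# 8 hours of wall-clock time (on ~4,500 desktops' worth of compute).
simulated_ms = 100                             # simulated biological time
wall_clock_s = 8 * 60 * 60                     # 8 hours of computation, in seconds
slowdown = wall_clock_s * 1000 / simulated_ms  # both sides in milliseconds
print(slowdown)  # → 288000.0
```

In other words, the simulation runs roughly 288,000 times slower than biology, which is why even a few seconds of simulated behavior is out of reach.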

It can and should be expected that computing power will continue to grow and improve the performance of these large-scale biological networks, but the likelihood that these systems will run in real time (that is, simulate 1 second of activity in 1 second of computer time) within the next decade is slim.

Which Solution Will Achieve Singularity (With Elegance)?

A little more than ten years ago, Eugene Izhikevich published a mathematical model of the neuron that can respond to stimuli in a manner highly similar to real neurons, in real time. If such a model could be implemented at scale, with connectivity that reflects realistic brain systems…it is conceivable that we could have a living, breathing, albeit non-plastic, brain on our hands. If the plastic, learning aspects of the brain and of deep learning algorithms could be implemented in an Izhikevich neural network, then perhaps we could truly train an intelligence that would become an elegant Singularity.
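Part of the model’s appeal is how compact it is. The sketch below uses the “regular spiking” parameter set from Izhikevich’s 2003 paper; the injected current, integration step, and simulation length are illustrative choices of mine, not anything prescribed by the model.

```python
# Izhikevich (2003) "simple model" of a spiking neuron.
# The parameters a, b, c, d select the firing pattern; these values
# give a regular-spiking cortical neuron.
a, b, c, d = 0.02, 0.2, -65.0, 8.0

dt = 0.5                    # Euler integration step (ms)
T = 1000                    # total simulated time (ms)
v = -65.0                   # membrane potential (mV)
u = b * v                   # membrane recovery variable

spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    I = 10.0 if t > 100 else 0.0   # step current injected after 100 ms
    # The model's two differential equations, integrated with Euler's method
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                  # spike: record the time, then reset
        spike_times.append(t)
        v = c
        u += d
print(len(spike_times))  # number of spikes in 1 s of simulated time
```

Because the update is just a couple of arithmetic operations per neuron per step, large networks of these units can plausibly keep pace with real time, which is exactly the property biophysically detailed models lack.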

Without a solution such as the one described above, it is difficult to tell what the real capabilities of such a system would be. Perhaps this would be the moment when we learn whether consciousness is an emergent phenomenon of large neural architectures or truly an endowment from above.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on the modeling of hippocampal tissue in response to electrical stimulation with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer’s, various motor disorders, depression, and Epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.


The Brain is Not a Computer

[Image: “blackbox” — chicken parts enter a black box; a whole chicken comes out]

Of all of the question-and-answer websites on the internet, I have become particularly fond of Quora. I can’t, however, go a day without being asked a question that compares the brain to computers. I find myself wanting to give the answers people expect…“The brain has a memory capacity of approximately 2.5 petabytes”, “The brain computes at a speed of 20 petaflops”, “The brain uses less than 1/100,000th the power of the world’s most powerful computer per unit of processing power”, et cetera. The problem is that these answers are fundamentally wrong. As entertaining as it is to draw comparisons between two obvious computing machines, they compute by two unrecognizably different methods. Those differences make memory, processing power, and power consumption very difficult to compare in real quantities. The comparison is so tempting because of how easy it is to compare one model of a traditional computer to another…processor clock speed, bus capacity, RAM size, hard drive type and size, and graphics card are all easily compared features of traditional computers. But the brain is not a traditional computer (it turns out this isn’t as obvious as it seems). In fact, it does the brain, and all those trying to understand it, something of a disservice to compare it to modern computers. It would be better to describe the brain as a collection of filters. In the context of a filter, it is much easier to understand and perhaps even measure brain memory, speed, and power consumption.

Why not a Computer?

I don’t intend to be misleading—we all know that our brains compute. I merely wish to help clear up common misconceptions that stem from our constant desire to draw comparisons between the brain and computers as we know them. Perhaps the problems are only semantic, but I believe that the confusion is much more fundamental than that and even learning some basics about the brain has failed, for most, to be illuminating. Let’s identify a few characteristics of computers that are irrelevant to the kind of computing we see in a human brain.

Computer memory is like a bucket.

Not only is it like a bucket, but it is located apart from the processor. In neural structures, memory is “stored” in the connections between cells—the network adds and deletes these stores by changing the strengths of those connections, or by adding or removing them altogether. This is a stark contrast to how computer memory works…in brief, a process contains pieces of information. These pieces, depending on their type, require a certain number of buckets of computer memory. The process tells the machine this number and shuttles the information there as a series of binary digits, written to a very particular place on a disk. When needed again, these bytes are read back into the machine. If our brains had to work like this, we would constantly need to write down the things we needed to remember…including how to move, control our muscles, see, hear, etc. This simply wouldn’t work quickly or efficiently enough for many of the things that we do with ease.

Computer processors are like an electric abacus.

Processors do very, very simple tasks very, very quickly. Perhaps a better analogy would be to compare processors to a horse race. Processors compute simple tasks by sending them (horses) around a circuit. Because tasks can only move so quickly, companies like Intel, Qualcomm, and AMD have systematically found ways both to build many more tracks and to shrink the tracks down until they are tremendously small (so the horses don’t have as far to run). While this is very fast, every time a computer needs to solve a problem (even the same problem) it must send the horses around the track again. Brains work in a very different manner. Imagine, instead of a horse race, an engineer who is attempting to build a million very complicated devices from a set of blueprints. Like that engineer, the brain works very hard to understand the plans but then constructs an efficient process (likely a factory) that doesn’t require pulling apart the plans every time a new device needs to be made. Once the brain takes the time to learn how to do something, it doesn’t forget easily, and the task becomes much, much easier to execute; in contrast, processors have to be told how to do everything, every time.
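A loose software analogy for “learn once, then execute cheaply” is memoization: a cached function pays the full cost of a computation the first time, then answers repeat requests from its cache, whereas an uncached function sends the horses around the track on every call. The function and counter below are invented purely for illustration.

```python
from functools import lru_cache

calls = {"count": 0}   # counts how often the real work actually runs

@lru_cache(maxsize=None)
def expensive(n):
    calls["count"] += 1                 # the "horses around the track" part
    return sum(i * i for i in range(n))

expensive(10_000)
expensive(10_000)   # second call is served from the cache; the work is not redone
print(calls["count"])  # → 1
```

The analogy is not exact (the brain reshapes its own hardware rather than storing lookup tables), but it captures why repeated tasks feel effortless to us while a processor re-derives everything from scratch.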

Computers don’t do anything until they are told.

Even computer programs that run forever need to be executed, or started, by a user. Your brain, however, is constantly buzzing with activity. Sensory information is constantly being pinged around the brain, and even when we shut out as many stimuli as possible (when we are asleep, for example) we see the same old areas light up in response to recombined or previously experienced stimuli.

Computers are not dynamic systems.

Perhaps this is just an extension of the idea that computers don’t do anything until they are told, but computers don’t adapt without being programmed to; adaptation is not an inherent feature of computers. Brains, however, have ingrained and natural processes that allow them to adapt to stimuli over time. At the level of the neuron, this is expressed in spike-timing-dependent plasticity: neurons tend to strengthen connections that cause them to fire action potentials and weaken those that don’t. Over time, weakened connections may fall off altogether, a phenomenon described as “neural sculpting”.
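The textbook pair-based form of that plasticity rule fits in a few lines. The shape (exponential decay on either side of coincidence) is standard; the amplitudes and time constant below are illustrative values, not measurements.

```python
import math

# Minimal pair-based spike-timing-dependent plasticity (STDP):
# if a presynaptic spike arrives just before the postsynaptic neuron
# fires, the synapse strengthens; if it arrives just after, it weakens.
A_PLUS, A_MINUS = 0.01, 0.012   # learning-rate amplitudes (illustrative)
TAU = 20.0                      # decay time constant (ms, illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                              # pre before post: potentiation
        return A_PLUS * math.exp(-dt / TAU)
    else:                                   # post before pre: depression
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_dw(10.0, 15.0) > 0)   # causal pair strengthens the connection
print(stdp_dw(15.0, 10.0) < 0)   # anti-causal pair weakens it
```

Apply this rule across millions of spike pairs and the strengthening, weakening, and eventual pruning of connections described above falls out of it: no external programmer tells the network how to adapt.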

The Brain is a Collection of Filters

By now you may still be wondering what the picture above has to do with all of this. There are three important components to the image: an input signal (chicken parts), a system (the black box), and an output signal (a whole chicken). Inside the black box (our system in question) there must be some process that assembles chickens from their respective parts. Perhaps interestingly, the box is typically portrayed as black because when we first encounter a system we probably don’t yet understand how it works (we haven’t shined a light inside). When we consider the brain, we don’t immediately understand how it works either, but we do have a fairly fundamental understanding of how neurons work, and so we have some bit of light to help us see inside the black box of the brain.

Somewhat like the editors of a gossip magazine, neurons “listen” to the chattering of other neurons in the brain, and when a piece of information sounds particularly meaningful, a neuron presses publish on its own packet of information by firing an action potential. As nearly all good magazine editors do, neurons learn to determine what is good enough to publish and what is not. Over time they edit which inputs they prefer, responding preferentially to them by becoming more sensitive; conversely, they become less sensitive to inputs that are less important. In this way, neurons effectively filter out some signals and amplify others. Interestingly, neurons typically have many more connections with their neighbors than with neurons across town in other regions of the brain; this suggests that not only do neurons act as filters on their own, but they also let their neighbors offer an opinion on when it is the right time to fire an action potential. If you think these network characteristics sound eerily like suburban America, you aren’t alone. Neuroscientists have found many features of biological neural networks to be analogous to social networks…though this only brings home the reality that these unique features seem to exponentially increase the complexity of the functioning brain.
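The filter picture can itself be caricatured in a few lines: a neuron that has “learned to prefer” certain inputs weights them heavily and lets them drive a response, while attenuated inputs are effectively filtered out. Every name, weight, and threshold below is invented for illustration and has no biological precision.

```python
# A neuron as a filter: preferred inputs pass through, others are damped.
weights = {"preferred": 0.9, "neutral": 0.3, "ignored": 0.02}
threshold = 0.5   # total drive needed to fire an action potential

def neuron_fires(active_inputs):
    """Sum the weighted drive from the currently active inputs."""
    drive = sum(weights[name] for name in active_inputs)
    return drive >= threshold

print(neuron_fires(["preferred"]))           # → True
print(neuron_fires(["neutral", "ignored"]))  # → False
```

Plasticity, in this framing, is simply the slow editing of those weights, and the “neighborhood opinion” effect corresponds to neighbors’ activity being added into the same sum.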
