Connectomics, Neuroscience, and Computational Models of the Brain


When an idea in science is stamped a theory, most scientists just shrug. Unlike the public, they know that theory or not, an idea can't be taken as ground truth until it has been thoroughly vetted through extensive and redundant experimentation. In 2012, Dr. Sebastian Seung published a book advocating a theory that is neither entirely new nor entirely his own. The science behind some aspects of his idea is, however, becoming quite acceptable to the neuroscience community. The idea is that, aside from our DNA, it is the unique pattern and character of the connections between the neurons in our brains that makes us who we are. In all fairness, Dr. Seung is not the first to propose this idea, nor is he the only proponent of what has become known as connectomics, but he has found ample resistance to the claim that our "connectome" could play a role in establishing identity that rivals that of our individual genetic code. Many question whether the mere architecture of the connections in a brain could yield the rich functionality that we all enjoy. Another established expert in the field, Dr. Christof Koch, put it this way: "Even though we have known the connectome of the nematode worm for 25 years, we are far from reading its mind. We don't yet understand how its nerve cells work." As Dr. Koch and others have intimated, the more likely whole theory of the brain is a hybrid one, taking into account not only connections but also the chemical-laden soupy milieu that neurons sit in.

Connectomics As A Theory Is Great But Incomplete

Imagine yourself as a competitor in a wrestling match. Pretend that before the match you get to choose between two competitions: one option is to wrestle a thoroughly muscled man twice your size; the second is to wrestle 25 small, but very angry, eight-year-old children. It is likely that you will be overpowered in either case, but the analogy is useful for seeing the differences between connections of neurons. These connections determine how similar, or coupled, the behavior of two neurons is, and they are not all the same strength; some connections are weak, and others are strong. It would take many more weak connections to achieve a similar response from a neuron as you might expect from a few very strong connections. Connections between cells are called synapses; a synapse is essentially a gap across which neurons send chemicals. The upstream neuron typically does most of the sending, and the downstream neuron pays attention to how much the signaling neuron sends.

It is possible, however, for this process to be interrupted. Foreign chemicals, not usually found at the synapse, can block or replace those that belong…the results can be dramatic. The body also releases specific chemicals on a regular basis: dopamine, serotonin, glutamate, and many others routinely synthesized in the body, along with ions such as calcium, all play an important role in the way your neurons function. The role that these extra-neural chemicals play is an example of a crucial non-connectome feature of your brain that contributes to what makes you who you are. While the connectome forms the primary architectural framework on which these processes are possible, it cannot tell the whole story alone.

How To Measure the Importance of Connectomics?

The concept of experimental control is central to what makes scientific results verifiable at all. If you wanted to determine whether lavender oil cures cancer, you would need to isolate cancerous cells by controlling for all other potentially cancer-killing compounds or mechanisms that might also be nearby…otherwise, how could you prove that exposure to the lavender was what did the deed? How do you control for the contribution of connectomics to identity when the connectome is never the only variable that changes from person to person? To put it another way…how do you know that the differences between my connectome and yours are what make me walk, talk, and think differently than you? How do we know that factors such as environment, genetics, diet, and habits aren't also coming into play?

We obviously need some kind of experimental control…where we can observe the changes in behavior of a single connectome when exposed to different environments, or perhaps the differences in behavior between two connectomes exposed to the exact same environment. It turns out that the easiest, most practical, and most ethical method of doing this is to build what is called a computational model…essentially a version of the system in question reproduced via mathematical equivalents inside of a computer. If you are a bit mystified as to how this could be done…take the example of a pitcher throwing a baseball. If you knew the trajectory and initial velocity of the baseball, as well as a few essential details about the ball itself, you could predict its path with near-perfect precision. Similarly, if you know a few of the rules by which neurons behave, you can predict their behavior with very high accuracy. When you add to that a model of the behavior of connections between cells, you have all of the functional components of a connectome. Simulations of connectomes, real and hypothetical, have the power to yield incredibly valuable insights.
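To make the idea concrete, here is a minimal sketch of one such model, a leaky integrate-and-fire neuron, one of the simplest sets of "rules by which neurons behave." Every parameter value below is illustrative rather than taken from any particular cell type:

```python
import numpy as np

# A minimal leaky integrate-and-fire neuron. All values are illustrative.
dt = 0.1                     # time step (ms)
tau_m = 10.0                 # membrane time constant (ms)
v_rest, v_reset, v_thresh = -70.0, -75.0, -55.0   # membrane voltages (mV)
R = 10.0                     # membrane resistance (MOhm)

v = v_rest
spike_times = []
for step in range(int(500 / dt)):         # simulate 500 ms
    I = 2.0                               # constant input current (nA)
    dv = (-(v - v_rest) + R * I) / tau_m  # leaky integration of the input
    v += dv * dt
    if v >= v_thresh:                     # threshold crossed: fire a spike
        spike_times.append(step * dt)
        v = v_reset                       # reset after the action potential

print(f"{len(spike_times)} spikes in 500 ms")
```

Simple as it is, this captures the spirit of the approach: given rules and inputs, the behavior of the cell falls out of the equations, and the experimenter controls every variable.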

Connectomics, A Piece Of The Puzzle And A Clue For Further Investigation

While it is unlikely that a connectome holds all of the information necessary to reconstruct your cerebral identity, it is undoubtedly a crucial component. But how do we test the idea and gauge just how important it is? Computational models can shed some light into the black box of the brain: by building toy versions, simple and complex, we can explore how variations in the connectome impact the behavior of a network of neurons. Combined with models of the extracellular features of neural systems, we may be able to learn the balance of influence each structural component of our brains holds over our behavior. Like behavior, connectomics is very difficult to study via reductionist methods…it may very well be the completeness of the brain that makes us so special.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on modeling the response of hippocampal tissue to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer's, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.


Humanism and Brain Technology: The Borg, The Matrix, and The New Voices In Your Head

You don't have to have seen the kid flick Goonies to imagine a gadget-savvy adolescent whose one mission in their little life is to contraption-ize everything. I am, in fact, guilty of "booby"-trapping my room for fun in order to reenact my favorite MacGyver episodes (safely, of course). I was known to string up cords that crossed my little-boy bedroom with mostly unknown but very important functions…maybe one string flipped the light switch off when the doorknob was turned so my mom wouldn't catch me awake after my bedtime. I distinctly remember another set at a high angle so that I could send my dirty clothes across the room to the hamper when it was time to get into bed. While none of these contraptions were very practical (or sightly), my mom tolerated them for a few days because she knew it was important to let the little engineer in me experiment with ways to make my life easier or better. I probably won't surprise you by saying that none of my earliest inventions resulted in a significantly increased quality of life…but the spirit of that invention is more important than most of us are aware; that spirit has come to be called Humanism, or more particularly: Transhumanism. Transhumanists mostly go unidentified (those that are identified typically self-identify), but most find themselves up to their eyeballs in the world of technology. The express goal of this fascination with technology is always the same…make being human better.

In order to improve the human condition, Transhumanists must first understand and explore the opportunities for improvement. The improvements must be objective and measurable; this turns out to be harder than it seems. Consider the merits of the gas-powered car…never before have we been able to zoom to and from our destinations with such speed and ease. However, even after a nearly 100-year love affair with the automobile, there are still those who are dissatisfied with the improvement, complain about this or that flaw, and seem ready to abandon the technology altogether. Consider next the complexity of the human body…as we introduce drugs and implant devices, there are often unintended consequences. The drug that thins our blood makes us prone to bleeding, the drug that calms inflammation compromises the immune system, and the implanted insulin pump often causes serious infections because the tube that crosses the skin collects and delivers bacteria deep into the tissue. Complex systems abound in the body, and chief among them is the human brain: 100 billion neurons making upwards of 1,000 trillion connections. The unintended consequences of modifying such a system can be severe. For years we saw seemingly savage interventions in the brain, such as lobotomies and ablations (essentially cutting out offending parts of the brain), electroshock therapy, and extreme high-dose pharmaceutical therapies with opiates, cocaine, and other psychoactives like lithium and even cannabis. Despite the lack of sophistication, these more primitive therapies were only retired when something apparently better came along; this indicates that the benefits outweighed the known side effects. We can talk about the unknown side effects somewhere else. First, stop for a moment and answer this question of paramount importance: what is it about our bodies that makes us uniquely human? Maybe there is more than one answer, but it cannot be denied that our brain is the most unique thing about our species as well as our most powerfully evolved feature. Naturally, it is the most tempting target for Transhumanists and their ability- and experience-enhancing technologies. We are beginning to see these neurotechnologies in clinical settings as well as in the hands of the private consumer; they arrive in the form of implants, uploads and downloads, assistive intelligences, brain-computer interfaces, and many other creative modalities.


Implants and The Borg

The seemingly stray reference to the race of cybernetic humanoids from Star Trek is more apt than you know. The very laboratory that I work in was founded on the goal of implementing a computer chip to replace damaged parts of the brain, particularly the hippocampus (which is responsible for memory management). How much of your brain must you replace with computer componentry before you become Borg? That is a question for the philosophers; the reality is that there are members of our community who are severely hindered by dysfunctional brains, and implants are a popular proposed solution to the problem. In fact, there are thousands of these devices already deployed around the world: the cochlear implant and the deep brain stimulator are very effective at correcting some types of hearing and motor disorders. Other devices correcting more complicated disorders are in the pipeline.


Uploads, Downloads, and The Matrix

If sticking really big needles into your head or neck creeps you out, you can rest assured that there will likely be less invasive or scary ways to "upload" your brain or "download" things to your brain. Types of imaging using really powerful magnets and dyes have been developed that may soon have the resolution to peer into the tightest corners of your brain and extract information such as where neurons are, where they send their branches, whom they connect to, and perhaps even how strong those connections are. With this information, it is conceivable that engineers could reconstruct many of the important features of your brain in a computer model…essentially taking a snapshot of what it is that makes you you. Voilà: brain uploaded. The download is both much more complex and much simpler at the same time…information in the brain is encoded in individual neuron "spikes" and can therefore be modulated by inducing these spikes in a pre-programmed manner. By encoding an outside message into this spike-language and inducing that pattern of activity in the right region of the brain, you can effectively communicate directly to the brain any piece of information that you can reduce to spike-language. A lot of work is being done to do exactly this. Retinal implants and cochlear implants are still the most successful examples of this method of information delivery. They bypass the eyes and ears and communicate outside visual and auditory stimuli directly to the brain in a way that the brain understands. Other applications, such as those in my own lab, have more complicated hurdles to jump because we don't yet understand at an abstract level what the stimuli of other brain systems are. This makes translation into and out of spike-language difficult.
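To give a flavor of what translating into spike-language might involve, here is a toy sketch of a simple rate code. This is an assumption for illustration, not how any real implant encodes information: each value in a message becomes a spike train whose firing rate is proportional to that value, and counting spikes recovers the message.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_rate(values, duration_ms=1000, dt_ms=1.0, max_rate_hz=100):
    """Toy rate code: map each value in [0, 1] to a Poisson spike train
    whose firing rate is proportional to the value."""
    steps = int(duration_ms / dt_ms)
    rates = np.asarray(values) * max_rate_hz            # Hz per channel
    p_spike = rates * dt_ms / 1000.0                    # spike prob. per step
    return rng.random((len(values), steps)) < p_spike[:, None]

def decode_rate(spikes, duration_ms=1000, max_rate_hz=100):
    """Invert the code: count spikes and normalize back to [0, 1]."""
    rate = spikes.sum(axis=1) / (duration_ms / 1000.0)  # spikes per second
    return rate / max_rate_hz

message = [0.1, 0.5, 0.9]                 # the "outside message"
raster = encode_rate(message)
print(decode_rate(raster))                # roughly recovers the message
```

Real neural codes are far richer than this, which is exactly the translation problem described above.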


Assistive Technologies and Brain Computer Interfaces (Jarvis, The Red Queen, and the little voice in your head)

While Siri and "OK Google" are a far cry from what we hope to finally achieve in artificial intelligence, their application is very Transhumanist! Their primary purpose is to help us navigate the world of information and find the best answers quickly. They also operate as a more high-level interface with the device on which they live. Advances in neurotechnology suggest that it may someday be possible to merge this assistive technology with an implant and use it to directly modulate brain activity…uploading and downloading information to and from our brains constantly. This would allow you to keep a steady finger on the pulse of the information most relevant to your moment-to-moment interests. If you aren't convinced of the power that this could hold…imagine how test-taking would necessarily require more creativity instead of simple recall. Or how about all of those almost-acquaintances whose names you can't remember…wouldn't it be nice to have their name and pertinent details whispered to you discreetly in your time of need? All of this and more is imaginable given the current trajectory of neuroscience and technology research.

Humanism is Optimism and Materialism Combined

Let's briefly explore the motivations of someone who wishes to improve the human experience or condition with technology. They obviously don't envision the end of humanity within their lifetimes, nor do they struggle to see the value of easing some of the challenges humans face. You might say this describes most of the population…I would agree. Humanism, and Transhumanism by extension, is a nearly innate philosophical worldview that, in my opinion, one must try very hard to be talked out of. Optimism is at the very core of what makes a Transhumanist tick…they want only to imagine a world better, more awesome, and more interesting than the one they live in. Their means of bringing that about? Technology. But why not politics, or social activism, or journalism? Let me answer your question like this…can you think of anything in the past decade that has enacted more social change than Facebook, provided more educational opportunity than Google, or put more power (literally) into the hands of the people than Apple or Samsung? While we may not necessarily subscribe to the worldviews of these companies, we can immediately see the power for change provided by technological advancement. It opens the eyes of the public to new ideas and ways of living. Transhumanism may be optimism, and it may be materialism, but it is also one of the most truly modern and rational working philosophies.


Artificial Intelligence, Deep Learning, and Ray Kurzweil’s Singularity

Artificial Intelligence and the Singularity

What do Elon Musk, Bill Gates, and Stephen Hawking have in common? They are all deathly afraid of intelligent machines. This might seem a bit ironic, considering that to two of them it was long a fantastical idea and the third seems to be building them. Perhaps it isn't the self-driving kind of machine that most frightens Mr. Musk? Admittedly, of all of the smart tasks that computers will be capable of in the coming age of algorithms, self-driving is rather tame. Furthermore, it is unlikely that the first computers that learn how to drive a car safely will spontaneously learn other complex tasks. It is that feature of truly intelligent machines, spontaneous generalization, which seems to have the real Iron Man shaking in his rocket boots. It isn't quite clear yet whether computers will overtake humans in intelligence within a decade, but it is no longer an inconceivable event for the general public. Ray Kurzweil, a prominent thinker, calls this moment the "Singularity". But what will it be like?

Will the Singularity be a Disappointment?

Ray Kurzweil is a smart dude, but the moment that we achieve the Singularity may be a disappointment to him. When we think of machines as intelligent as humans, we tend to think of little robotic Haley Joel Osment, Robin Williams, or the more recent Ex Machina kind. But a brief glance at the literature on artificial intelligence will tell you that the intelligences in development are all highly specialized. "Intelligent" programs have been designed to navigate rough terrain, jump obstacles, and identify faces, voices, and other biometrics. Other algorithms are being used to make valuable predictions or classify objects. But as far as general intelligences go, none have achieved quite so much as the IBM supercomputers Deep Blue and Watson (you know, the ones that beat Garry Kasparov at chess and whooped Ken Jennings at Jeopardy). While Watson is constantly in development, learning new skills along the way, he is not self-conscious, nor is he capable of doing his own "development" without human guidance. He is largely just a question-and-answer robot with access to lots of information.

Independent Learning is a Hallmark of Intelligence

There are, however, computer programs that teach themselves things based on loosely defined learning rules. Deep learning neural networks are essentially simplified models of brains designed for very specific tasks. These simple little brains can learn things about their environment very similarly to the way you learn things about the world around you. The "neurons" in these algorithms change and adapt in order to learn a pre-determined pattern. Once the pattern has been learned, this piece of software can be deployed to recognize the pattern in any new chunk of input passed to it. For example, a deep learning system could be trained to look for cats in a couple of videos of your choosing…once it has been properly trained, this system could happily be sent off looking for cats in all of the videos on YouTube. While this is a somewhat silly example, you should be able to see the power that a system like this could have.
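As a toy illustration of pattern learning, here is a single artificial "neuron" (far simpler than a deep network, and with all data and numbers invented) trained by gradient descent until it recognizes noisy copies of a fixed pattern:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed 8-pixel target pattern the "neuron" must learn to detect.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)

# Training data: noisy copies of the target (label 1) and random noise (label 0).
X = np.vstack([target + rng.normal(0, 0.2, (50, 8)),
               rng.random((50, 8))])
y = np.array([1.0] * 50 + [0.0] * 50)

w, b = np.zeros(8), 0.0
for _ in range(2000):                          # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))         # sigmoid "firing probability"
    grad = p - y                               # cross-entropy gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

probe = target + rng.normal(0, 0.2, 8)         # a new, unseen noisy example
print(1 / (1 + np.exp(-(probe @ w + b))))      # close to 1: pattern detected
```

Stack thousands of such units in layers and let them learn their own intermediate features, and you have the essence of a deep network hunting for cats.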

But is this system what we call intelligent? Obviously the ability to learn in an unstructured manner is a crucial piece of the puzzle, but humans do so much more than that. We aren't restricted to a few simple learning tasks. Our diverse sensory inputs allow us to learn about anything we can perceive. Perhaps that is the limiting factor for these deep learning systems? If a system of the size and scale of the human brain were created and allowed to digest the sensory milieu that we partake of on a minute-by-minute basis, would it "wake up and ask for a smoke", or maybe something even more human? The reality is that, while theoretically possible, there are innumerable complexities missing, even from the most sophisticated deep learning systems, that make such complex behavior unlikely in the near future.

How About Whole Brain Simulations?

The Blue Brain Project is a primarily European effort based out of the École Polytechnique Fédérale de Lausanne and headed up by Henry Markram and Eilif Muller (among many other contributors). I had the opportunity to hear Dr. Muller speak about the effort to simulate whole brain structures just recently at the Society for Neuroscience's annual conference in Chicago. The Blue Brain Project has been tasked by various European science academies and funding agencies with constructing computational models of biologically accurate brain structures. This effort has been reasonably successful; the group has published 70 or so journal articles in the past 5 years, representing a growing corpus of work. Here's the catch to simulating brain tissue in this way: seconds of simulated time can take hours or days of real (computation) time. While it is very impressive that realistic neural tissues can be simulated at all, it is virtually impossible to evaluate the intelligence of a system with only a few seconds to work with.

My own effort to model large-scale regions of the brain structure called the hippocampus has not been without major challenges. Even with computing power equal to about 4,500 desktop computers, my latest 100-millisecond simulation of 100k cells can take as long as 8 hours. While this time-course is sufficient to ask many wonderfully practical questions about how neural tissue behaves at scale…it is not practicable at all for studies of the emergent intelligence of such networks.
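To put that in perspective, a quick back-of-the-envelope calculation with the numbers above:

```python
# Back-of-the-envelope numbers taken from the paragraph above.
sim_time_s = 0.1           # 100 ms of simulated activity
wall_time_s = 8 * 3600     # up to 8 hours of computation
slowdown = wall_time_s / sim_time_s
print(f"{slowdown:,.0f}x slower than real time")   # -> 288,000x
```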

It can and should be expected that computing power will continue to develop and improve the performance of these large-scale biological networks, but the likelihood that these systems will run in real time (that is, simulate 1 second of activity in 1 second of computer time) within the next decade is slim.

Which Solution Will Achieve Singularity (With Elegance)?

A little more than ten years ago, Eugene Izhikevich published a mathematical model of the neuron that can respond to stimuli in a manner highly similar to real neurons, in real time. If such a model could be implemented at scale in a way that reflected realistic brain systems in terms of connectivity…it is conceivable that we could have a living, breathing, albeit non-plastic, brain on our hands. If the plastic, learning aspect of the brain and deep learning algorithms could be implemented in an Izhikevich neural network, then perhaps we could truly train an intelligence that would become an elegant Singularity.
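For the curious, the model is compact enough to sketch in a dozen lines. This uses the standard "regular spiking" parameters from Izhikevich's 2003 paper; the input current and time step are chosen for illustration:

```python
# The Izhikevich (2003) simple model: two coupled equations that reproduce
# many real spiking behaviors cheaply. "Regular spiking" parameters below.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0     # membrane potential (mV) and recovery variable
dt = 0.5                    # time step (ms)
spikes = []

for step in range(int(1000 / dt)):          # simulate 1 second
    I = 10.0                                # constant input current
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                           # spike peak reached
        spikes.append(step * dt)
        v, u = c, u + d                     # reset per the model's rule

print(f"{len(spikes)} spikes in 1 s of simulated time")
```

The appeal is exactly the one described above: a handful of arithmetic operations per neuron per time step, cheap enough to contemplate running at scale.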

Without a solution such as the one I described above, it is difficult to tell what the real capabilities of such a system would be. Perhaps this would be the moment where we find out whether consciousness is an emergent phenomenon of large neural architectures or truly an endowment from above.


Brains Are Probability Ultra-Approximators (we are damned good guessers)


Your brain might be disagreeing with the title of this article right now, depending on how recently you've visited Las Vegas. You would be right to think that there are many ways in which our brains can be tricked into making poor choices. As interesting as those tricks are, they aren't half as impressive as all of the excellent predictions we make nearly instantaneously, each and every day.

For those of us who press snooze in the morning: have you ever made a blind stab at the button without ever opening your eyes? It turns out that your brain is able to transmute the mechanical signals from the alarm into neural signals through the cochlea in your inner ear; then associative connections between your auditory perception pathways and your motor cortex detect the source of the sound relative to your body and coordinate your movements to turn it off. While this is a serious simplification of how your brain accomplishes this feat, our ability to execute this action proves that we can perform very complicated motor and proprioceptive (referring to our body's position relative to itself and our environment) predictions that robotics labs have really struggled to recreate. But how is it that a group of neurons can make predictions at all?

Neurons are not as simple as you might think.

Your brain is made up of something like 100 billion neurons that connect 1,000 trillion times. That means that, on average, each little neuron has more than ten thousand other neurons talking to it. If you imagine that a neuron is something like a dance club without bouncers to turn people away, you can imagine that at different times of the week, or night, there might be varying numbers of people on the dance floor. On salsa night at 11 o'clock you might not be able to see the dance floor at all because the club is bursting with activity. You might further imagine that with so many people in the club at one time it could get pretty tiresome, and people would want to leave in order to relieve some of the crush inside. While this is far from a perfect analogy, you can see how a neuron might, similarly, use the number and "boisterousness" of incoming signals to determine when to relieve some pressure and pass on a signal of its own. That response signal is essentially all-or-none: it is always roughly the same magnitude, and once it starts, it doesn't stop until it has run its course. The effect that a single neuron has on its downstream connections, however, is not always the same…some it may excite, others it may inhibit, and always in varying degrees. It turns out that the type and variation of this strength of connectivity is the chief mechanism allowing learning, and consequently, pattern recognition and prediction.

Neurons learn together

When I was nineteen years old I had the opportunity to spend a few years of my life learning and performing public service in the Kingdom of Thailand. Unfortunately, I didn't spend much time relaxing on the beaches or getting massages; instead, I was tasked with teaching and serving the Thai people in the places where they lived. I learned Thai and learned their customs. This was incredibly difficult, and I still wonder how they ever understood me. Part of building understanding between two individuals is developing knowledge of customs and culture. One interesting custom that the Thai people have is the wai. Used as a greeting and an expression of respect and gratitude, it is performed by bringing both hands together, flattened, palm to palm, in front of yourself, and may be combined with a small bow of the head. While it may be difficult for Westerners to learn when and how it is most appropriate to perform the wai, it is nearly as difficult for Thais to learn the Western handshake. Improperly performed wais and handshakes, amazingly (and perhaps tragically), have a tremendous ability to create distrust between two people. Similarly, poorly formed and dysfunctional connections between neurons tend to fade away and eventually quit working altogether. However, just like a good handshake, one strong synapse can cause two neurons to strengthen their connection and grow in synchrony. It takes more than one person to make a good handshake, and it takes more than one neuron to complete a functionally meaningful circuit in the brain.
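That strengthen-with-use, weaken-with-disuse dynamic is the classic Hebbian picture, and it fits in a few lines of code. A toy sketch, with every constant invented for illustration:

```python
import numpy as np

# Toy Hebbian rule ("cells that fire together wire together"): a synapse
# strengthens when the two neurons it connects are active at the same time,
# and slowly decays otherwise.
rng = np.random.default_rng(2)
w = 0.1                      # starting synaptic strength
lr, decay = 0.05, 0.01

for t in range(200):
    pre = rng.random() < 0.5                          # presynaptic spike?
    post = rng.random() < (0.9 if pre else 0.5)       # correlated firing
    if pre and post:
        w += lr * (1 - w)    # coincident activity: strengthen (capped at 1)
    else:
        w -= decay * w       # otherwise: gradual weakening

print(f"synaptic strength after correlated activity: {w:.2f}")
```

Run it with the correlation removed (post firing independently of pre) and the weight settles much lower: the handshake only gets firmer when both parties keep showing up.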

Neurons make predictions by popular vote

My wife is a leader in a children's Sunday School class each week, and she spends quite a lot of time trying to think up ways to motivate the kids. I imagine she has tried many different kinds of treats, and while kids tend to like any treat, they always have their preferences. She has taken to buying assorted treats and just letting them pick for themselves when the time comes. Neurons have a way of preferring some inputs over others, just as the kids (and we ourselves, I suppose) prefer one kind of treat over another. This preference grows out of the strengthening-handshake phenomenon that we discussed earlier. As neurons strengthen some connections and weaken others, they eventually respond strongly to only a few types of stimulus. In this way, they display their choice and telegraph to other cells downstream what type of input they are receiving. Imagine you conducted a test on a room full of kids by holding up a kind of candy (say Skittles or M&M's) and asking the children to stand up to express their preference so that you could take a headcount. Suppose that in subsequent tests the experimenter held up candy without showing you which kind, but asked the kids to continue standing up to express their preference; if you knew the headcount, or paid attention to which kids preferred which candy, then without much effort you could deduce which candy the experimenter was holding up. Similarly, downstream groups of neurons may deduce from their input signals what triggered the activity in the first place…was something red presented to the eyes? Or something blue? Your brain is made up of collections of neurons that are constantly voting on inputs. By refining their synaptic connections (working on their handshakes) they reinforce or reform their predictions in order to achieve better and better outcomes.
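The headcount trick translates almost directly into code. Here is a minimal sketch of that kind of population vote (the neurons, their preferences, and the firing probabilities are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# The candy headcount as code: each "neuron" has a preferred stimulus, and a
# downstream observer guesses the stimulus from who stood up (who fired).
stimuli = ["skittles", "mnms", "twizzlers"]
n_neurons = 30
preferred = rng.integers(0, len(stimuli), n_neurons)  # each neuron's favorite

def present(stim_idx):
    # Neurons fire with high probability for their preferred stimulus,
    # low probability otherwise; their "vote" is noisy.
    p = np.where(preferred == stim_idx, 0.9, 0.1)
    return rng.random(n_neurons) < p

def decode(votes):
    # Tally spikes per preference group and take the popular vote.
    counts = [votes[preferred == i].sum() for i in range(len(stimuli))]
    return stimuli[int(np.argmax(counts))]

print(decode(present(0)))   # "skittles", recovered purely from the headcount
```

No single neuron knows the answer; the prediction lives in the tally.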

Brains are not democracies and neurons are not citizens

Neurons might be the stuff that makes us human, but it would be a silly anthropomorphism for me to give you the false impression that neurons actually do much thinking on their own. I say they vote, pick their own candy, and go around shaking everybody's hand as if they were at a town hall, but neural activity is subject to two things…you may call them nature and nurture, but there are two very real phenomena that influence how neurons determine their connections and how networks develop behavior.
Fifty-two card pickup is a game that I had the opportunity to learn at a very young age…this is a game (or trick, rather) where one person sprays all the cards of a deck in all directions and then leaves the other player to pick them up. While a bit more orderly, neurons find their eventual place in the developing brain in a much similar manner. Their exact location, orientation, and connectivity, while following some basic rules of architecture, are largely random. This neurodevelopmental variability adds up to what those who study biological systems call initial conditions. We each have nuanced initial conditions, resulting in differences in the ways that our brains are wired. The randomness in the determination of the initial conditions of the brain is part of what causes each of us to be more or less likely to develop certain behaviors or obtain certain skills. These initial conditions are an underappreciated source of individuality in human development, and they may be a major component of the "nature" that makes us who we are.

The "nurture" component of brain development shouldn't be much of a mystery at all: as we learn and make choices, we either reinforce or reform our initial conditions. Continual sculpting of neural networks by and through our sensory experiences and repeated behaviors leaves us with strong tendencies toward certain behaviors and preferences for particular stimuli.

In brief, our brains are full of complex functional units that, over time, develop increased or suppressed sensitivity to particular stimuli. When many of these functional units are strung together, many amazing phenomena emerge, including the ability to choose, predict, and classify stimuli. Whether you believe that we are just the sum of our parts or not, the reality is that brains are made up of parts that we are beginning to understand. So far, we understand that, in concert, neurons are powerful prediction machines.


The Brain is Not a Computer

[Image: a black box with chicken parts as the input signal and a whole chicken as the output signal.]

Of all of the question-and-answer websites on the internet, I have become particularly fond of Quora. I can't, however, go a day without being asked a question that compares the brain to computers. I find myself wanting to give the answers that are expected…"The brain has a memory capacity of approximately 2.5 petabytes", "The brain computes at a speed of 20 petaflops", "The brain uses less than 1/100,000th the power of the world's most powerful computer per unit of processing power," et cetera. The problem is that these answers are fundamentally wrong. As entertaining as it is to draw comparisons between two obvious computing machines, they compute by two unrecognizably different methods. Those differences make memory, processing power, and power consumption very difficult to compare in real quantities. The comparison is so tempting because of how easy it is to compare one model of a traditional computer to another…processor clock speed, bus capacity, RAM size, hard drive type and size, and graphics card are all easily compared features of traditional computers. But the brain is not a traditional computer (it turns out this isn't as obvious as it seems). In fact, it nearly does the brain, and all those trying to understand it, a disservice to compare it to modern computers. It would be better to describe the brain as a collection of filters. In the context of a filter, it is much easier to understand and perhaps even measure brain memory, speed, and power consumption.

Why not a Computer?

I don't intend to be misleading: we all know that our brains compute. I merely wish to help clear up common misconceptions that stem from our constant desire to draw comparisons between the brain and computers as we know them. Perhaps the problems are only semantic, but I believe the confusion is much more fundamental than that, and even learning some basics about the brain has failed, for most, to be illuminating. Let's identify a few characteristics of computers that are irrelevant to the kind of computing we see in a human brain.

Computer memory is like a bucket.

Not only is it like a bucket, but it is located apart from the processor. In neural structures, memory is "stored" in the connections between cells; the network adds and deletes these stores by changing the strengths of these connections, or by adding or removing them altogether. This is a stark contrast to how computer memory works…in brief, some process contains pieces of information. These pieces, depending on their type, require a certain number of buckets of computer memory. The process tells the machine this number and shuttles the information there as a series of binaries, which are written to a very particular place on a disk. When needed again, those bytes are read back into the machine. If our brain had to work like this, it would mean we would constantly need to write down things we needed to remember…including how to move, control our muscles, see, hear, etc. This simply wouldn't work quickly or efficiently enough for many of the things that we do with ease.
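The contrast is easy to demonstrate. A classic toy model of memory stored in connections is the Hopfield network: the pattern lives in a weight matrix rather than at an address, and it can be recalled by content, even from a corrupted cue. A minimal sketch:

```python
import numpy as np

# Memory stored "in the connections": a tiny Hopfield network. The pattern
# lives in the weight matrix, not at an address, and is recalled by content.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern)      # Hebbian outer product stores the memory
np.fill_diagonal(W, 0)              # no self-connections

# Recall from a corrupted cue: flip two of the eight "bits".
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1

state = cue.copy()
for _ in range(5):                  # iterate until the network settles
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))   # True: the memory is reconstructed
```

There is no lookup by location anywhere in that recall; the cue itself pulls the stored pattern out of the weights, which is much closer in spirit to how remembering feels.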

Computer processors are like an electric abacus.

Processors do very, very simple tasks very, very quickly. Perhaps a better analogy would be to compare processors to a horse race. Processors compute simple tasks by sending them (the horses) around a circuit. Because tasks can only move so quickly, companies like Intel, Qualcomm, and AMD have systematically found ways both to make many more tracks and to shrink the tracks down until they are tremendously small (so the horses don't have as far to run). While this is very fast, every time a computer needs to solve a problem (even the same problem) it must send the horses around the track again. Brains work in a very different manner. Imagine, instead of a horse race, an engineer who is attempting to build a million very complicated devices from a set of blueprints. The brain works very hard to understand the plans, but then constructs an efficient process (likely a factory) that doesn't require pulling apart the plans every time a new device needs to be made. Once the brain takes the time to learn how to do something, it doesn't forget easily, and the task becomes much, much easier to execute; in contrast, processors have to be told how to do everything, every time.

Computers don’t do anything until they are told.

Even computer programs that run forever need to be executed, or started, by a user. Your brain, however, is constantly buzzing with activity. Sensory information is constantly being pinged around the brain, and even when we shut out as much stimulation as possible (when we are asleep, for example) we see the same old areas light up in response to recombined or previously experienced stimuli.

Computers are not dynamic systems.

Perhaps this is just an extension of the idea that computers don't do anything until they are told, but computers don't adapt without being programmed to. Adaptation is not an inherent feature of computers. Brains, however, have ingrained and natural processes that allow them to adapt to stimuli over time. At the level of the neuron, this is expressed in spike-timing-dependent plasticity: neurons tend to strengthen connections that cause them to fire action potentials and weaken those that don't. Over time, weakened connections may fall off altogether, resulting in a phenomenon described as "neural sculpting".
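A sketch of the pair-based form of that rule, with illustrative (not measured) amplitudes and time constants:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).
    Pre before post (it helped cause the firing): potentiate.
    Post before pre: depress. Effect decays with the time gap."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # potentiation
    return -a_minus * np.exp(dt / tau)       # depression

print(stdp_dw(10.0, 15.0))   # pre 5 ms before post -> positive change
print(stdp_dw(15.0, 10.0))   # post 5 ms before pre -> negative change
```

The adaptation is built into the physics of the synapse itself; nobody has to call the function.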

The Brain is a Collection of Filters

By now you may still be wondering what the picture above has to do with all of this. There are three important components to that image: an input signal (chicken parts), a system (the black box), and an output signal (a whole chicken). Inside the black box (our system in question) there must be some process that assembles chickens from their respective parts. Perhaps interestingly, the box is typically portrayed as black because when we first encounter a system we probably don't immediately understand how it works (we haven't yet shined a light inside). When we consider the brain, we don't immediately understand how it works, but we have a fairly fundamental understanding of how neurons work, and so we have some bit of light to help us see inside the black box of the brain. Somewhat like the editors of a gossip magazine, neurons "listen" to the chattering of other neurons in the brain, and when one piece of information sounds particularly meaningful, they press publish on their own packet of information by firing an action potential. As nearly all good magazine editors do, they learn to determine what is good enough to publish and what is not. Neurons edit, over time, which inputs they prefer, and they tend to respond preferentially to them by becoming more sensitive; conversely, they become less sensitive to other inputs that are less important. In this way, neurons effectively filter out some signals and amplify others. Interestingly, neurons typically have many more connections with their neighbors than with neurons across town in other regions of the brain; this suggests that not only do neurons act as filters on their own, but they allow their neighbors to offer an opinion on when it is the right time to fire an action potential. If you think these network characteristics are sounding eerily like suburban America, you aren't alone. Neuroscientists have found many features of biological neural networks to be analogous to social networks…though this only brings home the reality that these unique features seem to exponentially increase the complexity of the functioning brain.


What is Neuromorphic Computing?

IBM's SyNAPSE "neuromorphic" processor chip embedded in a computer board.

It might be worthwhile to take a moment and inspect your current understanding of how computers work…probably some combination of zeros and ones and a series of electrical components to move those binaries back and forth. You would be surprised how far you could get down the path of building a modern computer (at least on paper) with that rudimentary an understanding. With an electrical engineer in tow, you could probably even build a primitive mainframe computer (the ticker-tape or punch-card variety). Would it shock you if I said that nearly all of the advances since then have been due to materials and manufacturing advances? Software has made interacting with this hardware much more comfortable, and computers have gotten incredibly powerful in the past few decades, but the underlying architecture has roughly maintained that "ticker-tape" mode of information transfer. This is convenient for lots of reasons…the functionality of a program relies entirely on the quality of the code that gives it instructions (read: "ticker-tape"). In some ways, all of that is about to change.

Neuromorphic computing is pretty close to what it sounds like…brain-like computing. Many of the core components of the brain can be implemented in analog hardware…resistors and capacitors in parallel and in series become neurons: their cell bodies, dendrites, and axons. When these analog neurons are connected together (at something like a synapse) into a network, they take on many of the same processing properties that the neurons in our brains have. When researchers figured out how to make the capacitance variable (primer on capacitance found here), they also figured out how to make the analog neurons "learn"; this mimics the natural changes in strength of the connections between neurons in a brain.

Now that you understand what it is, you might ask, "Why do we want brain-like computers?"

Traditional Computers Suck at Optimization

Have you ever heard of the "traveling salesman problem"? It goes kind of like this…you show up in a new town with a bunch of widgets to sell. You go to the local chamber of commerce and ask for a list of businesses that might be interested in purchasing some widgets. They give you names and addresses for ten businesses, as well as a town map. You obviously don't want to take too long making these sales calls, or you might not make it to the next town before dark. So you sit down to figure out in what order you should go see these ten businesses, and what path you should take through town, so that you spend the least amount of time traveling. Believe it or not, your brain is usually faster at coming up with a pretty good solution to these types of problems than computers are. The challenge of teaching traditional computers to solve "traveling salesman" problems has created a whole field of research called optimization. (More about traveling salesman problems here.)
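To see why this is hard for a machine, compare a cheap greedy guess with the exhaustive search an exact answer requires. The coordinates below are invented; the point is that brute force already means checking 362,880 orderings for just ten stops:

```python
import itertools
import math

# The ten businesses as made-up coordinates on the town map.
stops = [(0, 0), (2, 5), (6, 1), (5, 4), (8, 8),
         (1, 9), (9, 3), (4, 7), (7, 6), (3, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

def nearest_neighbor(stops):
    """The 'brain-like' move: always visit the nearest unvisited stop.
    Not guaranteed optimal, but good and nearly instant."""
    route, remaining = [stops[0]], set(stops[1:])
    while remaining:
        nxt = min(remaining, key=lambda s: dist(route[-1], s))
        route.append(nxt)
        remaining.remove(nxt)
    return route

# The exhaustive move: try every ordering of the 9 remaining stops.
# Exact, but factorial-time: 9! = 362,880 routes for ten stops.
best = min(itertools.permutations(stops[1:]),
           key=lambda p: tour_length([stops[0], *p]))

print(tour_length(nearest_neighbor(stops)))   # good guess, instant
print(tour_length([stops[0], *best]))         # optimum, slow brute force
```

Add a few more stops and the brute-force branch becomes hopeless, while the greedy guess (and your brain's glance at the map) barely notices.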

Brains Rock at Pattern Recognition, Vision, and Object Recognition

You didn't need any help recognizing this natural pattern as a giraffe. A traditional computer would likely be stumped.

There isn't a day that passes without your brain having to recognize new objects for what they are. You probably saw your first cat fairly early in life…did you ever stop to wonder how it is that you learned to recognize your second cat encounter as a version of the first? You may think that it is an algorithmic solution…four legs, a tail, and furry with whiskers, and you have a cat? That is how traditional computers have been programmed to identify cats, and for the most part they perform dismally. Humans are so good at identifying cats that we often outperform the best computer algorithms even when we are shown only part of the animal we are to identify. And it isn't just our accuracy that is astounding, but the speed at which we can recognize these features. This is all due to the fundamental nature of neural circuits as highly optimized complex filters, rather than the simple "plug-and-chug" processors we put in our traditional computers.

Brains Use a Fraction of The Power

The human brain consumes approximately one one-hundred-thousandth of the power that the average desktop computer does (per byte processed). Consider the implications of this difference…our brains do so much, and do it so much more efficiently than computers do. This is a feature of the filter functionality that I mentioned above. To provide an example of how this works…imagine you need to cut up a block of cheese into equally sized rectangles. You have two options: you can use a knife and a measuring tape to carefully cut the cheese a piece at a time, or you can use a file, the measuring tape, and a raw chunk of steel to shape a grid-like tool that cuts any size of cheese block into perfectly equal rectangles. Maybe you have deduced it, but the second solution is the "neuromorphic" one: you must teach a neural network the right way to cut the cheese, but after it has learned, you can use the tool much more quickly, without the need to stop and measure. Each time you use this tool in the future you save both time and energy. Similarly, neuromorphic computing is able to re-use solutions with vastly increased efficiency.

Neuromorphic Computing is Happening

Putting neuromorphic chips into phones and computers is probably not a silver bullet for all of the challenges that I outlined above…instead, they are a serious and creative improvement to the technologies that we are already so reliant on. A combination of traditional processing and neuromorphic computing is likely to be the long-term approach to applying these advancements. Very soon your phone will be that much better at telling you about the world…and helping you be a better traveling salesman.


Why Brain Engineering Will Spawn The New “Hot Jobs”

The hot jobs of this decade, almost without exception, have become "cerebral" in some way or another. Programmers build complex algorithms, quantitative financial analysts build equally complex models, and data analysts (with their myriad titles) are swimming in complex methods; even in the health industry you can see the trend toward an increased emphasis on problem-solving ability…physician's assistants who are capable of accurately diagnosing various conditions (long the exclusive domain of board-certified medical doctors) are more in demand than ever. How appropriate is it that brain technology would further this trend in the "cerebral"-ization of work?

In collaboration with computer scientists, brain researchers have poked holes in the veil of the future. Several technologies previously only possible in the pages of Isaac Asimov and other sci-fi writers, such as Deep Brain Stimulation, neuromorphic computing, and machine learning, have opened a new frontier for game-changing products and applications.

Deep Brain Stimulation (DBS)

DBS is essentially a pacemaker repurposed for the brain. While nearly all current applications of DBS involve the correction of disruptive electrical signals in the brain, it proves that it is possible to externally and locally trigger specific circuits in the brain responsible for motion, sensation, memory, emotion, and even abstract thought. Why might this lead to the creation of so-called hot jobs? Imagine being the engineer who implements a DBS system to help reduce cravings for food due to boredom, or a DBS system that helps you recognize individuals by stimulating circuits containing relevant information about that individual.

Neuromorphic Computing

A neuromorphic processor with a quarter-million synapses on a 16×16 node array.

You might have already pieced together what this means, but it is just what it sounds like: computers that are like brains in form. Now, they don't actually look like brains, but they utilize a fundamental architecture of nodes (neurons) connected (à la synapses) in a network with variable strengths. These variable strengths allow learning to happen (if you forgot how this works, look here). As you can imagine, this chip would be fundamentally different from the Intel processor your desktop or laptop probably has under the hood. The fundamental difference is that, like your brain, neuromorphic chips must be trained in order to perform a task. Another interesting feature of these types of chips is that the task also needs to be designed. I can't imagine a sexier job than thinking up tasks and training regimes for neuromorphic chips! If you aren't convinced this is possible or coming in the near future, you might be surprised to hear that Intel and Qualcomm already have working prototypes and are planning to put them into cell phones very soon (read about it here).

Machine Learning

If the concept of a machine learning doesn't sound totally anthropomorphic to you…it probably should. But once again, our understanding of how networks of neurons work has opened a huge can of worms for those who know how to hook them up and go fishing. Machine learning forms much of the theoretical framework underlying neuromorphic computing. The major difference is that not being implemented in hardware allows the user a ton of flexibility to build creative and novel solutions. The types of problems that are being solved with machine learning are crazy…there are many things that you and I are good at that would make your computer crash every time: face recognition, reading, writing, speaking, listening, and identifying objects are all within the domain of machine learning. As you can imagine, we have only begun to tap the well of interesting applications for machine learning, and there may be an inexhaustible need for engineers to come up with them.


3 Barriers to Brain Science

Electrodes, translational work, and human subjects. There, I said it…as far as I am concerned, these are the things that are holding back the advancement of neuroscience. Don't get me wrong, neuroscience is certainly moving forward, but at a snail's pace relative to the leaps and bounds seen in physics at the turn of the century, or in biologics in the past thirty years. As with most fields of study, neuroscience has been helped greatly by advancements in physics, which made it possible to image some of the biomechanics of the neuron; by math and statistics, which gave rise to computational neuroscience and many insights into how networks of neurons make "decisions" by classifying inputs; and by biology, where the surfer-dude Kary Mullis's discovery of the polymerase chain reaction (PCR) has provided us with many avenues to more easily inspect the traits of brain cells.

1. Electrodes

Electrodes are used to record from and to stimulate neurons in the brain. In essence, they are artificial dendrites and axons. Here is the problem: every neuron has anywhere from a few hundred inbound synapses to tens of thousands (an order of magnitude more than that in some cells of the cerebellum). Each of those synapses is doing something different at any point in time. By stimulating the same cell with a single electrode, we are imposing on the cell a non-distributed input of a single type and source. And for that matter, because the electrode is floating in the medium next to the cell, there are hundreds of thousands of nearby cells that are forced to receive the same non-specialized input. The most advanced electrodes are what are called multi-electrode arrays (MEAs), and they have up to a few hundred individual electrodes. That is fine, but we are a long way from reaching the resolution of natural neural input.

2. Translational Work

At first glance you might think I am talking about language barriers (also a problem, but not as big as you might think); in this case, I am specifically referring to work on translational technologies that would make neuroscientific discoveries valuable to the mass market in the form of a product or service. You might wonder how this could help increase the pace of innovation in the field of neuroscience. Well, money has a way of inspiring creativity…even if the influx of cash alone didn't inspire creativity among existing neuroscientists, it would certainly help by attracting a new cadre of smart young contributors.

3. Human Subjects

Other disciplines have the convenience of developing what is called a model organism to help them study pathology. Interestingly, and unfortunately, many human brain pathologies are unique to us because of the crucial evolutionary characteristics of our brains. Disorders of a cortical nature, specifically, may have no real analog in other animals. Another serious limitation specific to brains is that many of the traits we wish to study are difficult to characterize without the subject being able to express their own experience through speech…we being the only animals on the planet capable of advanced speech, you can see how this could severely limit access to test subjects. As a third dynamic (though one not specific to brain science), HIPAA (health information protection legislation) and the FDA (Food and Drug Administration) put stringent controls on who, how, when, and under what conditions tests can be performed. These factors add up to make securing human subjects for neurological studies incredibly difficult.

Wrap-up

So with all of the progress we have made, why is it that I chose to latch onto these three problems in particular? It is interesting to notice that only one of them is technical in nature…you might think that one or more of these problems is easy and should be simple to solve. But between human ethical considerations and even more human creative limitations, we are forced to inch our way along as always; breakthroughs are, more often than not, the result of many years of very hard work rather than simple genius or a stroke of good luck.
