Connectomics, Neuroscience, and Computational Models of the Brain

When an idea in science is stamped a theory, most scientists just shrug. Unlike the public, they know that theory or not, an idea can’t be taken as ground truth until it has been thoroughly vetted through extensive and redundant experimentation. In 2012 Dr. Sebastian Seung published a book advocating a theory that is neither entirely new nor entirely his own. The science behind some aspects of his idea is, however, becoming quite acceptable to the neuroscience community. His scientific claim is that, aside from our DNA, it is the uniqueness of the pattern and characteristics of connections between the neurons in our brains that makes us who we are. In all fairness, Dr. Seung is not the first to propose this idea, nor is he the only proponent of what has become known as Connectomics—but he has met ample resistance to the idea that our “connectome” could have a role in establishing identity that rivals the uniqueness of our individual genetic code. Many question whether the mere architecture of the connections in a brain could yield the rich functionality that we all enjoy. Another established expert in the field, Dr. Christof Koch, said, “Even though we have known the connectome of the nematode worm for 25 years, we are far from reading its mind. We don’t yet understand how its nerve cells work.” As Dr. Koch and others have intimated, the more likely whole theory of the brain is a hybrid one, taking into account not only connections but also the chemical-laden soupy milieu that neurons sit in.

Connectomics As A Theory Is Great But Incomplete

Imagine yourself as a competitor in a wrestling match. Pretend that before the match you get to choose between two competitions: one option is to wrestle a thoroughly muscled man twice your size, the second is to wrestle 25 small, but very angry, eight-year-old children. It is likely that you will be overpowered in either case, but it is a useful analogy for seeing the differences between connections of neurons. These connections determine how similar, or coupled, the behavior of two neurons is, and they are not all the same strength; some connections are weak, and others are strong. It would take many more weak connections to achieve a similar response from a neuron as you might expect from a few very strong connections. Connections between cells are called synapses, and a synapse is essentially a gap across which neurons send chemicals. The upstream neuron typically does most of the sending, and the downstream neuron pays attention to how much the signaling neuron sends.
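
To make the weak-versus-strong picture concrete, here is a minimal sketch of the idea (my own toy illustration, not anything from the connectomics literature): a neuron that fires once the summed input from its active partners crosses a threshold. The threshold and connection strengths are invented values, chosen only to show that one strong input and many weak ones can do the same job.

```python
# A minimal sketch (not a biophysical model): a neuron "fires" when the summed
# input from its active upstream partners crosses a threshold.
# The threshold and connection strengths below are illustrative values only.

THRESHOLD = 1.0

def neuron_fires(weights, active):
    """Return True if the summed weighted input of the active partners reaches threshold."""
    total = sum(w for w, is_active in zip(weights, active) if is_active)
    return total >= THRESHOLD

# One very strong connection can push the neuron over threshold on its own...
print(neuron_fires(weights=[1.2], active=[True]))                             # True

# ...and so can 25 weak connections, but only when most of them are active together.
print(neuron_fires(weights=[0.05] * 25, active=[True] * 25))                  # True
print(neuron_fires(weights=[0.05] * 25, active=[True] * 10 + [False] * 15))   # False
```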

It is possible, however, for this process to be interrupted. Foreign chemicals not usually found at the synapse can block or replace those that belong…the results can be dramatic. The body releases specific chemicals on a regular basis: dopamine, serotonin, glutamate, calcium, and many others that are routinely synthesized in the body and play an important role in the way your neurons function. The role that these extra-neural chemicals play is an example of a crucial non-Connectome feature of your brain which contributes to what makes you who you are. While the connectome forms the primary architectural framework on which these processes are possible, it cannot tell the whole story alone.

How To Measure the Importance of Connectomics?

The concept of experimental control is central to what makes scientific results at all verifiable. If you wanted to determine whether lavender oil cures cancer, you would need to isolate cancerous cells by controlling for all other potentially cancer-killing compounds or mechanisms that might also be nearby…otherwise how could you prove that exposure to the lavender was what did the deed? How do you control for the contribution of connectomics to identity when the connectome is never the only variable that changes from person to person? To put it in other words…how do you know that the differences between my connectome and yours are what make me walk, talk, and think differently than you? How do we know that factors such as environment, genetics, diet, and habits aren’t also coming into play?

We obviously need some kind of experimental control…where we can observe the changes in behavior of a single connectome when exposed to different environments, or perhaps the differences in behavior between two connectomes exposed to the exact same environment. It turns out that the easiest, most practical, and most ethical method of doing this is to build what is called a computational model…essentially a version of the system in question reproduced via mathematical equivalents inside of a computer. If you are a bit mystified as to how this could be done…take the example of a pitcher throwing a baseball. If you knew the initial position and velocity of the baseball, as well as a few essential details about the ball itself, you could predict its path with near-perfect precision. Similarly, if you know a few of the rules by which neurons behave, you can predict their behavior with very high accuracy. When you add to that a model of the behavior of connections between cells, you have all of the functional components of a connectome. Simulations of connectomes, real and hypothetical, have the power to yield incredibly valuable insights.
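
For readers who like to see the gears turn, here is a hedged sketch of what “a connectome in a computer” can look like at its very simplest: two leaky integrate-and-fire point neurons joined by one connection, simulated for a few dozen time steps. Every constant (threshold, leak, drive, connection strength) is an invented value chosen only to make the dynamics visible; real research models are vastly more detailed.

```python
import numpy as np

# A toy "connectome in a computer": two leaky integrate-and-fire neurons, the first
# driven by a constant external input and connected to the second. All constants are
# illustrative, chosen only to make the behavior easy to see.

W = np.array([[0.0, 0.8],    # connection strength from neuron 0 to neuron 1
              [0.0, 0.0]])   # neuron 1 projects nowhere

v = np.zeros(2)                  # membrane "voltage", arbitrary units
threshold, leak = 1.0, 0.9       # spike threshold and per-step decay
external = np.array([0.2, 0.0])  # steady drive to neuron 0 only
spike_times = {0: [], 1: []}

for t in range(60):
    spiked = v >= threshold
    for i in np.where(spiked)[0]:
        spike_times[int(i)].append(t)
    v[spiked] = 0.0                                       # reset neurons that just spiked
    v = leak * v + external + W.T @ spiked.astype(float)  # leak, drive, synaptic input

print(spike_times)  # neuron 0 fires regularly; neuron 1 fires only when driven hard enough
```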

Connectomics, A Piece Of The Puzzle And A Clue For Further Investigation

While it is unlikely that a connectome holds all of the information necessary to reconstruct your cerebral identity, it is undoubtedly a crucial component. But how do we test the idea and gauge just how important it is? Computational models can shed some light into the black box of the brain by letting us build toy versions, simple and complex, to explore how variations of the connectome impact the behavior of a network of neurons. Combined with models of extracellular features of neural systems, we may be able to learn the balance of influence each structural component of our brains holds over our behavior. Like behavior, connectomics is very difficult to study via reductionist methods…it may very well be the completeness of the brain that makes us so special.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. Currently, the emphasis is on the modeling of hippocampal tissue in response to electrical stimulation with the goal of optimizing the placement of stimulating electrodes in regions of the brain that are dysfunctional. These therapies can be used for a broad range of pathologies including Alzheimer’s, various motor disorders, depression, and Epilepsy.

If you would like to hear more about the work done by Clayton, and his colleagues, in the USC Center for Neural Engineering he can be reached at: csbingha-at-usc-dot-edu.

Humanism and Brain Technology: The Borg, The Matrix, and The New Voices In Your Head

You don’t have to have seen the kid-flick Goonies to imagine a gadget-savvy adolescent whose one mission in life is to contraption-ize everything. I am, in fact, guilty of “booby”-trapping my room for fun in order to reenact my favorite MacGyver episodes (safely, of course). I was known to string up cords that crossed my little boy bedroom with mostly unknown but very important functions…maybe one string flipped the light switch off when the doorknob was turned so my mom wouldn’t catch me awake after my bedtime. I distinctly remember another set at a high angle so that I could send my dirty clothes across the room to the hamper when it was time to get into bed. While none of these contraptions was very practical (or sightly), my mom tolerated them for a few days because she knew it was important to let the little engineer in me experiment with ways to make my life easier or better. It probably won’t surprise you that none of my earliest inventions resulted in a significantly increased quality of life…but the spirit of that invention is more important than most of us realize; that spirit has come to be called Humanism, or more particularly: Transhumanism. Transhumanists mostly go unidentified—those that are identified typically self-identify—but most find themselves up to their eyeballs in the world of technology. The express goal of this fascination with technology is always the same…make being human better.

In order to improve the human condition, Transhumanists must first understand and explore the opportunities for improvement. The improvements must be objective and measurable; this turns out to be harder than it seems. Consider the merits of the gas-powered car…never before have we been able to zoom to and from our destinations with such speed and ease. However, even after a nearly 100-year love affair with the automobile, there are still those who are dissatisfied with the improvement, complain about this or that flaw, and seem ready to abandon the technology altogether. Consider next the complexity of the human body…as we introduce drugs and implant devices, there are often unintended consequences. The drug that thins our blood makes us prone to bleeding, the drug that calms inflammation compromises the immune system, and the implanted insulin pump often causes serious infections because the tube that crosses the skin collects and delivers bacteria deep into the tissue. Complex systems abound in the body, and chief among them is the human brain: roughly 100 billion neurons making upwards of 1,000 trillion connections. The unintended consequences of modifying such a system can be severe. For years we saw seemingly savage interventions in the brain such as lobotomies and ablations (essentially cutting out offending parts of the brain), electroshock therapy, and extreme high-dose pharmaceutical therapies with opiates and cocaine, and other psychoactives like lithium and even cannabis. Despite the lack of sophistication, these more primitive therapies have only been retired when something apparently better came along; this indicates that the benefits outweighed the known side-effects. We can talk about the unknown side-effects somewhere else. First, stop for a moment and answer this question of paramount importance: what is it about our bodies that makes us uniquely human? Maybe there is more than one answer, but it cannot be denied that our brains are the most unique thing about our species as well as our most powerfully evolved feature. Naturally, the brain is the most tempting target for Transhumanists and their ability- and experience-enhancing technologies. We are beginning to see these neurotechnologies in clinical settings as well as in the hands of the private consumer—they arrive in the form of implants, uploads and downloads, assistive intelligences, brain-computer interfaces, and many other creative modalities.

Implants and The Borg

The seemingly stray reference to the evil race of cybernetic humanoids from Star Trek is more apt than you might think. The very laboratory that I work in is founded on the goal of developing a computer chip to replace damaged parts of the brain—particularly the hippocampus (which is responsible for memory management). How much of your brain must you replace with computer componentry before you become Borg? That is a question for the philosophers—the reality is that there are members of our community who are severely hindered by dysfunctional brains, and implants are a popular proposed solution to the problem. In fact, there are thousands of these devices already deployed around the world: the cochlear implant and the deep brain stimulator are very effective at correcting some types of hearing and motor disorders. Other devices correcting more complicated disorders are in the pipeline.

Uploads, Downloads, and The Matrix

If sticking really big needles into your head or neck creeps you out, you can rest assured that there will likely be less invasive or scary ways to “upload” your brain or “download” things to your brain. Types of imaging using really powerful magnets and dyes have been developed that may soon have the resolution to peer into the tightest corners of your brain and extract information such as where neurons are, where they send their branches, who they connect to, and perhaps even how strong those connections are. With this information, it is conceivable that engineers could reconstruct many of the important features of your brain in a computer model…essentially taking a snapshot of what it is that makes you you—voilà, brain uploaded. The download is both more complex and simpler at the same time…information in the brain is encoded in individual neuron “spikes” and can therefore be modulated by inducing these spikes in a pre-programmed manner. By encoding an outside message into this spike-language and inducing that pattern of activity in the right region of the brain, you can effectively communicate directly to the brain any piece of information which you can reduce to this spike-language. A lot of work is being done to do exactly this. Retinal implants and cochlear implants are still the most successful examples of this method of information delivery. They bypass the eyes and ears and communicate outside visual and auditory stimuli directly to the brain in a way that the brain understands. Other applications, such as the one in my own lab, have more complicated hurdles to jump because we don’t yet understand at an abstract level what the stimuli of other brain systems are. This makes translation into and out of spike-language difficult.
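
As a rough illustration of what “spike-language” means, here is a hedged sketch of the simplest possible encoding scheme, rate coding: the strength of each symbol in a message sets how often a neuron spikes, and a downstream reader recovers the message by counting spikes. The scale factor and bin counts are arbitrary choices for the example, not parameters of any real device.

```python
import numpy as np

# A hedged sketch of rate coding: each symbol of a "message" sets the probability of
# spiking in each time bin, and the message is recovered by counting spikes per symbol.
# The scale factor and number of bins are arbitrary choices for illustration.
np.random.seed(1)

message = [0.1, 0.8, 0.4, 0.9, 0.2]   # values scaled to 0..1
bins_per_symbol = 200
scale = 0.2                            # peak spiking probability per bin

encoded = [np.random.rand(bins_per_symbol) < value * scale for value in message]

# A downstream "decoder" recovers each value from the spike count.
decoded = [spikes.mean() / scale for spikes in encoded]
print([round(d, 2) for d in decoded])   # roughly 0.1, 0.8, 0.4, 0.9, 0.2
```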

Assistive Technologies and Brain Computer Interfaces (Jarvis, The Red Queen, and the little voice in your head)

While Siri and “Ok Google” are a far cry from what we hope to finally achieve in artificial intelligence, their application is very Transhumanist! Their primary purpose is to help us navigate the world of information and find the best answers quickly. They also operate as a higher-level interface with the device on which they live. Advances in neurotechnology suggest that it may someday be possible to merge this assistive technology with an implant and use it to directly modulate brain activity…uploading and downloading information to and from our brains constantly. This would allow you to keep a steady finger on the pulse of the information most relevant to your moment-to-moment interests. If you aren’t convinced of the power that this could hold…imagine how test-taking would necessarily require more creativity instead of simple recall, or think of all of those almost-acquaintances whose names you can’t remember…wouldn’t it be nice to have their name and pertinent details whispered to you discreetly in your time of need? All of this and more is imaginable given the current trajectory of neuroscience and technology research.

Humanism is Optimism and Materialism Combined

Let’s briefly explore the motivations of someone who wishes to improve the human experience or condition with technology. They obviously don’t envision the end of humanity within their lifetimes, nor do they struggle to see the value of easing some of the challenges humans face. You might say this describes most of the population…I would agree. Humanism, and Transhumanism by extension, is a nearly innate philosophical world-view that, in my opinion, one must try very hard to be talked out of. Optimism is at the very core of what makes a Transhumanist tick…they only want to imagine a world better, more awesome, and more interesting than the one they live in. Their means of bringing that about? Technology…But why not politics, or social activism, or journalism? Let me answer your question like this…can you think of anything in the past decade that has enacted more social change than Facebook, or provided more educational opportunity than Google, or put more power (literally) into the hands of the people than Apple or Samsung? While we may not necessarily subscribe to the world-view of these companies, we can immediately see the power for change provided by technological advancement. It opens the eyes of the public to new ideas and ways of living. Transhumanism may be optimism, and it may be materialism, but it is also one of the most truly modern and rational working philosophies.

Your Smart Phone Will Get An IQ Bump From Neuromorphic Chips

In 2008 I came back from having spent two years abroad. During this time many things happened that contributed greatly to the current technology landscape. Facebook became open to the public, YouTube hit its stride, and the iPhone was first released. While it wasn’t the first “smartphone” per se…the iPhone was the first truly popular one. Within a few years the market’s appetite for smartphones grew from non-existent to staggering; by the end of 2015 it is estimated that there will be 2 billion mobile devices in circulation. While this signals tremendous progress for computing, it is fair to wonder what it is about these phones that makes them “smart.” There is no doubt as to the reason they represent a huge advancement in mobile technology—the newest mobile microprocessors are virtually indistinguishable from their desktop counterparts in architecture, and nearly a match for many entry-level machines in performance. But does that make them smart? Not yet. We use the word smart to describe a human trait—the ability to learn, which may be synonymous with intelligence. While it wouldn’t be productive to get into the semantics of intelligence, it is clear that “smart” phones don’t pass the test. But there is a technology in the pipeline that will allow phones to learn, and adapt, in order to provide better solutions to our problems.

The Bridge Between Brains and Computing

Late last year IBM introduced its TrueNorth neuromorphic technology, and later this year Qualcomm will introduce the Kryo (a similar architecture) into production. These chips are not your typical grid of transistors. In fact, they are designed to mimic your brain. Engineers have found a way to reimplement neural networks by using resistors and capacitors (resistors provide resistance and capacitors are like miniature batteries) in parallel and in series. This hardware, while a recognizable simplification of the biological systems it is modeled after, is capable of learning in a manner reminiscent of the way that our own neurons learn.

The journey to a successful neuromorphic chip has not been a short one. The first artificial neural networks were being tinkered with in the 1940s by Warren McCulloch and Walter Pitts. Even then it was postulated that we might someday be able to build something of an artificial brain and harness its computational power as a sort of personal assistant, or perhaps let it loose to work on the biggest problems of the day. While actually achieving this is a long way off, and many complicated hurdles remain, some of these science fictions have become reality in very important ways. No one has yet managed to build a complete human brain, but we have been able to simulate large portions of it with biologically realistic features. When I joined the Center for Neural Engineering (CNE) at University of Southern California in late 2014, researchers there were already using thousands of computers to reconstruct and simulate up to a million neurons in a very biologically realistic computational model of the hippocampus. We called ourselves the multi-scale modeling group because we incorporated complex details of the brain at multiple scales: detailed models of synapses and beautiful, morphologically appropriate models of neurons, all arranged and connected according to what we had observed in experimental studies of the hippocampus. The primary purpose of this work, at the time, was to explore the possibility of replacing dysfunctional portions of the hippocampus with a computer chip. As of this writing, CNE has successfully tested such a device in rats and macaques, and has just completed preliminary testing in humans. Such a device, which incorporates complex math and analog electrical hardware, is able to function by mimicking the computation that might have been performed by a network of neurons.
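
For the curious, the kind of neuron McCulloch and Pitts described can be written down in a couple of lines: binary inputs, fixed weights, a hard threshold. The sketch below is a generic illustration of that idea (the weights and threshold are mine, picked so the unit computes a logical AND), not a reproduction of any particular chip’s circuitry.

```python
# A McCulloch-Pitts-style unit: binary inputs, fixed weights, a hard threshold.
# The weights and threshold below are illustrative; wired this way the unit acts
# as a logical AND of its two inputs.

def mp_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs reaches the threshold, else 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_unit([a, b], weights=[1, 1], threshold=2))
```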

Why Your Phone Needs a New Brain

You may have noticed a few interesting new features in Facebook’s photo tagging system in the past couple of years. Of particular interest is the ability of the site to recognize faces. While very impressive, humans still outperform all but the very best algorithms, and with much greater efficiency. How is it that your brain is so much better at this exercise than an algorithm? The answer lies in the architecture of your brain and how it learns. Your brain learns by crafting a network of cells that look for small features of a person’s face. Particular facial characteristics cause these networks to respond with a unique pattern, and that pattern is then identified as the facial response pattern of that individual. If trained adequately, this type of facial recognition can happen at nearly electric speed (a bit slower, because actual connections between neurons are mostly chemical, not electrical).
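
To make the “response pattern” idea a little more tangible, here is a hedged, deliberately tiny sketch: each known face is stored as a pattern of feature-detector activity, and a new observation is identified by finding the closest stored pattern. The names, the four “features,” and every number are invented for illustration; real systems (biological or artificial) use far richer representations.

```python
import numpy as np

# A toy version of "identify the face by its response pattern": store one activity
# pattern per known person, then match a new observation to the nearest pattern.
# The people, features, and numbers are all invented for illustration.

known_patterns = {
    "Alice": np.array([0.9, 0.1, 0.7, 0.3]),   # activity of four feature detectors
    "Bob":   np.array([0.2, 0.8, 0.4, 0.9]),
}

def identify(observed):
    """Return the name whose stored response pattern is closest to the observed one."""
    return min(known_patterns,
               key=lambda name: np.linalg.norm(known_patterns[name] - observed))

print(identify(np.array([0.85, 0.2, 0.65, 0.35])))   # -> Alice
```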

In general, pattern recognition problems are prime areas for improvement in computing. Because mobile devices are so often presented with pattern-heavy input (route planning, video, images, sound, etc.), they are the natural application for neuromorphic chips and the place where these chips can make the biggest impact. Look forward to your phone getting a lot smarter in the near future…it is gonna get a new brain.

Brains Are Probability Ultra-Approximators (we are damned good guessers)

Your brain might be disagreeing with the title of this article right now, depending on how recently you’ve visited Las Vegas. You would be right to think that there are many ways in which our brains can be tricked into making poor choices. As interesting as those tricks are, they aren’t half as impressive as all of the excellent predictions we make nearly instantaneously, each and every day.

For those of us who press snooze in the morning, have you ever made a blind stab at the button without ever opening your eyes? It turns out that your brain is able to transmute the mechanical signals from the alarm into neural signals through the cochlea in your inner ear, and then associative connections between your auditory perception pathways and your motor cortex are able to detect the source of the sound relative to your body and coordinate your movements to turn it off. While this is a serious simplification of how your brain accomplishes this feat, our ability to execute this action shows that we can perform very complicated motor and proprioceptive (referring to our body’s position relative to itself and our environment) predictions that robotics labs have really struggled to recreate. But how is it that a group of neurons can make predictions at all?

Neurons are not as simple as you might think.

Your brain is made up of something like 100 billion neurons that connect roughly 1,000 trillion times. That means that, on average, each little neuron has something like ten thousand other neurons talking to it. If you imagine that a neuron is something like a dance club without bouncers to turn people away, you can imagine that at different times of the week, or night, there might be a varying number of people on the dance floor. On Salsa night at 11 o’clock you might not be able to see the dance floor at all because the club is bursting with activity. You might further imagine that with so many people in the club at one time it could get pretty tiresome, and people would want to leave in order to relieve some of the crush inside. While this is far from a perfect analogy, you can see how a neuron might, similarly, use the number and “boisterousness” of incoming signals to determine when to relieve some pressure and pass on a signal of its own. That response signal is not always the same magnitude, but once it starts, it runs to completion. The effect that a single neuron has on its downstream connections is not always the same…some it may excite, others it may inhibit, and always in varying degrees. It turns out that the type and variation of this strength of connectivity is the chief mechanism allowing learning, and consequently, pattern recognition and prediction.

Neurons learn together

When I was nineteen years old I had the opportunity to spend a few years of my life learning and performing public service in the Kingdom of Thailand. Unfortunately, I didn’t spend much time relaxing on the beaches or getting massages; instead, I was tasked with teaching and serving the Thai people in the places where they lived. I learned Thai and learned their customs. This was incredibly difficult, and I still wonder how they ever understood me. Part of building understanding between two individuals is developing knowledge of customs and culture—one interesting custom that the Thai people have is the Wai. Used as a greeting and an expression of respect and gratitude, it is performed by bringing both hands together, flattened, palm to palm, in front of yourself and may be combined with a small bowing of the head. While it may be difficult for westerners to learn when and how it is most appropriate to perform the Wai, it is nearly as difficult for Thais to learn the western handshake. Improperly performed Wais and handshakes, amazingly (and perhaps tragically), have a tremendous ability to create distrust between two people. Similarly, poorly formed and dysfunctional connections between neurons tend to fade away and eventually quit working altogether. However, just like a good handshake, one strong synapse can cause two neurons to strengthen their connection and grow in synchrony. It takes more than one person to make a good handshake, and it takes more than one neuron to complete a functionally meaningful circuit in the brain.
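
The “handshake” story has a textbook name, Hebbian plasticity, and a minimal version of it fits in a few lines. In the hedged sketch below, a single synapse strengthens whenever the two neurons it joins happen to be active together and slowly fades otherwise; the activity probabilities, learning rate, and decay rate are invented values, not measurements.

```python
import random

# A hedged Hebbian sketch: strengthen the synapse when pre- and post-synaptic neurons
# are active together ("fire together, wire together"), and let it decay otherwise.
# Activity probabilities, learning rate, and decay are illustrative values only.
random.seed(2)

w = 0.2                    # starting synaptic strength
lr, decay = 0.05, 0.01     # learning rate and passive decay

for step in range(300):
    pre = random.random() < 0.5                           # upstream neuron active?
    post = random.random() < (0.2 + w if pre else 0.05)   # downstream more likely to fire if driven
    if pre and post:
        w += lr * (1.0 - w)    # co-activity strengthens the connection (bounded at 1)
    else:
        w -= decay * w         # unused connections slowly fade

print(round(w, 2))   # the repeatedly co-active pair tends to end up strongly connected
```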

Neurons make predictions by popular vote

My wife is a leader in a children’s Sunday School class each week, and she spends quite a lot of time trying to think up ways to motivate the kids. I imagine she has tried many different kinds of treats, and while kids tend to like any treat, they always have their preferences. She has taken to buying assorted treats and just letting them pick for themselves when the time comes. Neurons have a way of preferring some inputs over others, just as the kids (and ourselves, I suppose) prefer one kind of treat over another. This preference grows out of the strengthening-handshake phenomenon that we discussed earlier. As neurons strengthen some connections and weaken others, they eventually respond strongly to only a few types of stimulus. In this way, they display their choice and telegraph to other cells downstream what type of input they are receiving. Imagine you conducted a test on a room full of kids: an experimenter holds up a kind of candy (say Skittles or M&M’s) and asks the children who prefer it to stand up so that you can take a headcount. In subsequent tests the experimenter holds up candy without showing you which kind it is, but keeps asking the kids to stand up to express their preference; if you knew the headcounts, or paid attention to which kids preferred which candy, then without much effort you could deduce which candy the experimenter was holding up. Similarly, downstream groups of neurons may deduce from their input signals what triggered the activity in the first place…was something red presented to the eyes? Or something blue? Your brain is made up of collections of neurons that are constantly voting on inputs. By refining their synaptic connections (working on their handshakes) they reinforce or reform their predictions in order to achieve better and better outcomes.
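
Here is a hedged sketch of that voting scheme in code: a handful of cells each “prefer” one stimulus, and a downstream readout guesses the stimulus by counting which group contributed the most activity. The cell names, groupings, and the stray vote are all made up for the example.

```python
# A toy readout by "popular vote": each cell prefers one stimulus, and the stimulus
# is decoded by counting how many cells from each preference group are active.
# The cells, groups, and example activity are invented for illustration.

preferences = {
    "red":  {"n1", "n2", "n3"},
    "blue": {"n4", "n5"},
}

def decode(active_cells):
    """Guess the stimulus whose preferring cells contributed the most activity."""
    votes = {stimulus: len(cells & active_cells) for stimulus, cells in preferences.items()}
    return max(votes, key=votes.get)

# Something red is shown: mostly red-preferring cells respond, plus one stray vote.
print(decode({"n1", "n2", "n4"}))   # -> red
```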

Brains are not democracies and neurons are not citizens

Neurons might be the stuff that makes us human, but it would be a silly anthropomorphism for me to give you the false impression that neurons actually do much thinking on their own. I say they vote, pick their own candy, and go around shaking everybody’s hand as if they were at a town hall, but neural activity is subject to two things…you may call them nature and nurture, but there are two very real phenomena that influence how neurons determine their connections and how networks develop behavior.
Fifty-two card pickup is a game I had the opportunity to learn at a very young age…this is a game (or trick, rather) where one person sprays all the cards of a deck in all directions and then leaves the other player to pick them up. While a bit more orderly, neurons find their eventual places in the developing brain in much the same manner. Their exact location, orientation, and connectivity, while following some basic rules of architecture, are largely random. This neurodevelopmental variability adds up to what those who study biological systems call initial conditions. We each have nuanced initial conditions resulting in differences in the ways that our brains are wired. The randomness in the determination of the initial conditions in the brain is part of what causes each of us to be more or less likely to develop certain behaviors or obtain certain skills. These initial conditions are an underappreciated source of individuality in human development; they may be a major component of the “nature” that makes us who we are.

The “nurture” component of brain development shouldn’t be much of a mystery at all: as we learn and make choices, we either reinforce or reform our initial conditions. Continual sculpting of neural networks by and through our sensory experiences and repeated behaviors leaves us with strong tendencies toward certain behaviors and preferences for particular stimuli.

In brief, our brains are full of complex functional units that, over time, develop increased or suppressed sensitivity to particular stimuli. When many of these functional units are strung together, amazing emergent phenomena appear, including the ability to choose, predict, and classify stimuli. Whether you believe that we are just the sum of our parts or not, the reality is that brains are made up of parts that we are beginning to understand. So far, we understand that, in concert, neurons are powerful prediction machines.

How do you tell the future? Build a model.

E = mc^2 is a model. Einstein used this model to describe the relationship between mass and energy. It is perhaps the most famous model among those who don’t have to look at models on a daily basis. For those who do, the word model may take on a more nuanced meaning, but to put it into as simple a language as possible, a model is a description of a pattern. This should be your “aha” moment, where you put the pieces together and can see how a model might be used to predict the future. It turns out there are a lot of products and even whole industries that rely on models of this sort. If you aren’t convinced of the value of models, consider some examples of industries that rely on the predictions made by models: insurance and models of the occurrence of flooding, hedge funds and models of the price of a stock, or transportation companies and models of rush-hour traffic. These models could respectively help you determine the price of home-owner’s insurance, know when to buy or sell equity, or pick the ideal time to start your commute and whether to take a train, plane, or automobile. The value of models is undeniable, but they can be tricky to build and they come in a variety of forms. The form of a model depends largely on the goals of the modeling effort and the preciseness with which we can describe the phenomenon. Many models are merely estimations, while others are intended to be governing theories.

Governing Theories

Many of the discoveries in science that later became laws are what we might call a Governing Theory. They describe a natural phenomenon so accurately that you could say the model “governs” the behavior. By following the model you can predict the exact amount of the behavior you will observe. For example, you probably built or saw a science fair volcano at some point in your primary school days—if you used vinegar, baking soda, and food coloring for lava, then you were relying on a very well defined model of the chemical reaction between the materials. If, before you poured in the vinegar, you had put pen to paper, you could have come up with a prediction (a very accurate one) of how much carbon dioxide would be released and, with a few other useful models, perhaps how high your “lava” would shoot.
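
If you want to see how far that governing model will take you, here is a back-of-the-envelope version of the calculation, treating the reaction NaHCO3 + CH3COOH → NaCH3COO + H2O + CO2 as going to completion with vinegar to spare. The spoonful size is a guess for illustration; the molar masses and the one-to-one mole ratio are standard chemistry.

```python
# Predicting the volcano from the governing model of the reaction
#   NaHCO3 + CH3COOH -> NaCH3COO + H2O + CO2
# assuming excess vinegar, so every mole of baking soda yields one mole of CO2.

baking_soda_g = 10.0          # a heaping spoonful (a guess for illustration)
molar_mass_nahco3 = 84.0      # g/mol
molar_mass_co2 = 44.0         # g/mol
molar_volume_gas = 24.0       # L/mol near room temperature and pressure

moles = baking_soda_g / molar_mass_nahco3   # ~0.12 mol of baking soda
co2_grams = moles * molar_mass_co2          # ~5.2 g of CO2
co2_liters = moles * molar_volume_gas       # ~2.9 L of gas to power the "lava"

print(f"{co2_grams:.1f} g of CO2, roughly {co2_liters:.1f} L of gas")
```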

Governing theories usually involve extremely precise measurements and validation by several different methods. In the case of the vinegar and baking soda, a chemist would have captured the gas products of the reaction and measured them carefully, then used those measurements, along with a knowledge of the other reactions those substances undergo, to figure out exactly what the chemical makeup of the starting reactants was. Pulling together all of the crucial information needed to establish a governing theory can be exhausting; for this reason, it is more common for ideas to become theories over the course of decades of observation and experiment, with dozens of scientists contributing to their discovery.

Estimations, Approximations, and Simplifications

Estimation is less like guessing than it sounds…the mere fact that I indicated that estimation models and governing theories are ends of the same spectrum may have given you the impression that estimations are not to be trusted. On the contrary, the insights that can be gained by building these models can be priceless. Occasionally, I consult for a company whose product is an approximation of the liquidation value of heavy equipment. Their clients are major rental companies such as United, Hertz, and Sunbelt Rentals as well as a host of private equity companies and banks scoping out new deals—at any time, each company might own thousands of loader backhoes, cranes, and dump trucks. As their equipment gets older they are very concerned with when, where, and for how much they should sell. Before working with us, many of these companies had a process that was rather more like a wizened man with a camera and a clipboard than a polished model. Using as much of their prior sales data and public auction sales data as we could get our hands on, we built a model of the ideal times and places to sell equipment. While multiple advanced statistical and mathematical methods go into making an excellent value-recovery or pricing model for this equipment, simply plotting average sale prices for each month over the prior year, for each item or category of equipment, can provide a much better idea of the real present value of a piece of equipment, as well as what kind of depreciation you might expect in the coming months.
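
To give a flavor of what even the “simple plotting” version of such a model looks like, here is a hedged sketch: group sale records by month, average them, and read off a rough month-over-month depreciation rate. The sale records, prices, and category are entirely made up; the real models use far more data and more careful statistics.

```python
from collections import defaultdict

# The simplest version of the pricing idea described above: average realized sale
# prices by month for one equipment category, then estimate a rough month-over-month
# change. The sale records below are made-up numbers for illustration only.

sales = [  # (month, sale_price) for one hypothetical category of loader backhoes
    (1, 52000), (1, 49500), (2, 51000), (3, 50200), (3, 48800),
    (4, 49000), (5, 47500), (6, 47800), (7, 46200), (8, 45900),
]

by_month = defaultdict(list)
for month, price in sales:
    by_month[month].append(price)

monthly_avg = {m: sum(prices) / len(prices) for m, prices in sorted(by_month.items())}
print(monthly_avg)

# Crude depreciation estimate: the average month-over-month change in average price.
months = sorted(monthly_avg)
changes = [monthly_avg[b] / monthly_avg[a] - 1 for a, b in zip(months, months[1:])]
print(f"average monthly change: {100 * sum(changes) / len(changes):+.1f}%")
```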

Silly Wand Waving

It would be irresponsible of me not to admit that some problems don’t actually lend themselves to the development of models and can actually lead to some truly ridiculous conclusions. Pretty high on the ridiculous list is a modeling practice best described as a “Fishing Expedition.” Now, I could let you just use your imagination as to what that might mean, but I’ll give you a truly heinous example just so that we are entirely clear about what this means for a modeling effort. Suppose you are given a pair of dice. Let’s further assume that this is the first time you have ever seen dice and you want to figure out the probability of rolling “snake-eyes” when you toss the pair. You decide to roll the dice ten times and count the number of snake-eyes that come up—the probability of snake-eyes you report would be the number of times two ones were rolled as a ratio of the total number of rolls. If you know nothing about the dice beforehand, there is nothing inherently wrong with this approach…in fact this might be the most common modeling approach used to describe a natural phenomenon over which we can exert some control. Getting back to the dice example, let me ask you a question: given what you know about dice (six sides, etc.), how many times would I have to conduct this ten-roll test before you would expect one run to come up snake-eyes all ten times?

  • Odds of rolling a one on a single die = 1/6
  • Odds of rolling snake-eyes (two ones) = 1/36
  • Odds of rolling snake-eyes ten times in a row = (1/36)^10

(1/36)^10 is a very small number—but if I attempt the test enough times, I will inevitably end up with a run of ten snake-eyes, even though any single run is overwhelmingly likely to give a very different answer. If I were the tester and I desperately wanted to be able to say that the odds of rolling snake-eyes were really high, I might conduct this test over and over, increasing my chance of eventually getting ten snake-eyes. Upon achieving my goal, I might even feel validated by this final outcome and run and tell the town. However, I would be guilty of having conducted a “Fishing Expedition.” If we were to test the odds of multiple unlikely pairs in series (ones, twos, threes, etc.) we would still find it increasingly likely that one of the tests would have an unlikely outcome. The mere fact that we repeatedly ask questions of the same system (the dice) increases the likelihood that we accidentally reach an inaccurate conclusion. In this example it is quite clear that we couldn’t trust the conclusion of such a clumsy experiment, but in other real-world examples the opportunity to go fishing is much more tempting. There is a new field of business intelligence that is especially prone to this kind of bias: Big Data analytics. You may have heard of the supposed phenomenon of “Big Data”; the truth is that there is nothing inherently new about it…as the name suggests, it is simply when an analyst attempts to gain some insights from a large pool of data with varying degrees of organization. The primary reason that big data analyses are particularly likely to carry this sort of bias is that analysts typically don’t know what they are looking for in advance, and so they tend to ask multiple questions, one after another, until an answer pops out of the data that appears interesting. The problem with this approach doesn’t become serious until the analyst decides to trust that conclusion as if it were the only test they ran, when they should really be adjusting their level of confidence to account for having conducted multiple tests on the same data. Big Data analysis and business analyses, because of their reliance on largely unreproducible legacy data, are prone to mistaking coincidence for causality.
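
Because the fishing-expedition trap is easier to feel than to describe, here is a hedged simulation of it: the honest ten-snake-eyes test essentially never succeeds, but an analyst who quietly swaps in an easier criterion and reruns it thousands of times will almost certainly “discover” something. The criteria and repeat counts are arbitrary choices for the demonstration.

```python
import random

# A hedged simulation of the fishing expedition. One "experiment" is ten rolls of a
# pair of dice. The honest criterion (all ten rolls are snake-eyes) has probability
# (1/36)**10 and essentially never succeeds; a softer criterion, retried over and
# over, produces spurious "discoveries" almost by construction.
random.seed(0)

def roll_pair():
    return (random.randint(1, 6), random.randint(1, 6))

def all_snake_eyes(n_rolls=10):
    """The honest test: every one of n_rolls comes up (1, 1)."""
    return all(roll_pair() == (1, 1) for _ in range(n_rolls))

def at_least_three_snake_eyes(n_rolls=10):
    """A much weaker criterion an eager analyst might settle for."""
    return sum(roll_pair() == (1, 1) for _ in range(n_rolls)) >= 3

print(any(all_snake_eyes() for _ in range(10_000)))              # False, as expected
print(sum(at_least_three_snake_eyes() for _ in range(10_000)))   # a couple dozen spurious "hits"
```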

Conclusion: Modeling Tells the Future by Describing the Past

One of the things that keeps me waking up in the morning is the endless possibilities that each day holds. As humans, we are born with a certain degree of curiosity which keeps us interacting with the world around us. As a scientist, I find that I am especially interested in things that I don’t understand or have never seen before. As an engineer, I see value in understanding how things work. As we continue to develop an understanding of the world around us, fewer and fewer things remain a surprise…that is, we learn to expect certain outcomes and we develop rules to predict the likelihood of those outcomes. Modeling, in its many forms, provides value to us as we seek to improve our quality of life. Whether we are avoiding catastrophic events, curing diseases, or creating technology that allows us to communicate across the globe in real-time, models are at the heart of our future.
