How do you tell the future? Build a model.

E = mc^2 is a model. Einstein used it to describe the equivalence of mass and energy. It is perhaps the most famous model among people who don't have to work with models on a daily basis. For those who do, the word takes on a more nuanced meaning, but to put it as simply as possible: a model is a description of a pattern. This should be your "aha" moment, where the pieces come together and you can see how a model might be used to predict the future. It turns out that many products, and even whole industries, rely on models of this sort. If you aren't convinced of their value, consider a few examples: insurance companies rely on models of flooding, hedge funds rely on models of stock prices, and transportation companies rely on models of rush-hour traffic. These models could respectively help you price homeowner's insurance, decide when to buy or sell equity, or choose the ideal time to start your commute and whether to take a train, plane, or automobile. The value of models is undeniable, but they can be tricky to build and they come in a variety of forms. The form a model takes depends largely on the goals of the modeling effort and the precision with which we can describe the phenomenon. Many models are merely estimations, while others are intended to be governing theories.
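To make "a model is a description of a pattern" concrete, here is a minimal sketch in Python, my own illustration rather than anything from Einstein's papers. It treats E = mc^2 as exactly what a model is: a rule that takes an input (mass) and predicts an output (energy).

    # E = mc^2 as a one-line model: mass in, energy out.
    c = 299_792_458  # speed of light in m/s (exact, by definition)

    def mass_to_energy(mass_kg):
        """Energy (joules) equivalent to a given mass, per Einstein's model."""
        return mass_kg * c ** 2

    # One gram of matter is equivalent to roughly 9e13 J, on the order of
    # the energy released by a ~20 kiloton atomic bomb.
    print(f"{mass_to_energy(0.001):.3e} J")  # ~8.988e+13 J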

Governing Theories

Many of the discoveries in science that later became laws are what we might call governing theories. They describe a natural phenomenon so accurately that you could say the model "governs" the behavior. By following the model you can predict exactly how much of the behavior you will observe. For example, you probably built or saw a science-fair volcano at some point in your primary-school days; if you used vinegar, baking soda, and food coloring for lava, then you were relying on a very well-defined model of the chemical reaction between the materials. If you had put pen to paper before you poured in the vinegar, you could have come up with a prediction (a very accurate one) of how much carbon dioxide would be released and, with a few other useful models, perhaps how high your "lava" would shoot.
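That pen-to-paper prediction is straightforward stoichiometry. Here is a minimal sketch in Python; the one-mole-of-gas-per-mole-of-soda ratio comes from the balanced reaction (NaHCO3 + CH3COOH -> CH3COONa + H2O + CO2), while the tablespoon figure and the assumption of excess vinegar are my own.

    # Predicting the volcano: how much CO2 does a scoop of baking soda make?
    MOLAR_MASS_NAHCO3 = 84.01  # g/mol, sodium bicarbonate
    R = 0.08206                # L*atm/(mol*K), ideal gas constant

    def co2_volume_liters(baking_soda_g, temp_k=298.0, pressure_atm=1.0):
        """Volume of CO2 released, assuming excess vinegar and ideal-gas behavior."""
        moles_co2 = baking_soda_g / MOLAR_MASS_NAHCO3  # 1:1 stoichiometry
        return moles_co2 * R * temp_k / pressure_atm

    # A heaping tablespoon of baking soda (~14 g) yields about 4 liters of gas.
    print(f"{co2_volume_liters(14):.1f} L")  # ~4.1 L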

Governing theories usually involve extremely precise measurements and validation by several different methods. In the case of the vinegar and baking soda, a chemist would have captured the gaseous products of the reaction and measured them carefully, then used those measurements, along with a knowledge of the other reactions those substances undergo, to work out the exact chemical makeup of the starting reactants. Pulling together all of the crucial information needed to establish a governing theory can be exhausting; for this reason, it is more common for ideas to become theories over decades of observation and experiment, with dozens of scientists contributing to their discovery.

Estimations, Approximations, and Simplifications

Estimation is less like guessing than it sounds. The mere fact that I placed estimation models and governing theories at opposite ends of the same spectrum may have given you the impression that estimations are not to be trusted. On the contrary, the insights gained by building these models can be priceless. Occasionally, I consult for a company whose product is an approximation of the liquidation value of heavy equipment. Their clients are major rental companies such as United, Hertz, and Sunbelt Rentals, as well as a host of private equity companies and banks scoping out new deals; at any time, each company might own thousands of loader backhoes, cranes, and dump trucks. As this equipment ages, the owners become very concerned with when, where, and for how much they should sell. Before working with us, many of these companies had a process that looked more like a wizened man with a camera and a clipboard than a polished model. Using as much of their prior sales data and public auction data as we could get our hands on, we built a model of the ideal times and places to sell equipment. While multiple advanced statistical and mathematical methods go into an excellent value-recovery or pricing model for this equipment, simply plotting the average sale price for each month of the prior year, for each item or category of equipment, gives a much better idea of the real present value of a piece of equipment, as well as the depreciation you might expect in the coming months (see the sketch below).
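Here is a minimal pandas sketch of that baseline. The file name and column names (auction_sales.csv, category, sale_date, sale_price) are invented for illustration; any sales ledger with an item category, a date, and a price would do.

    import pandas as pd

    sales = pd.read_csv("auction_sales.csv", parse_dates=["sale_date"])

    # Keep only the prior year of sales.
    cutoff = sales["sale_date"].max() - pd.DateOffset(years=1)
    last_year = sales[sales["sale_date"] >= cutoff]

    # Average sale price per equipment category, per month.
    monthly_avg = (
        last_year
        .groupby(["category", pd.Grouper(key="sale_date", freq="MS")])["sale_price"]
        .mean()
        .unstack("category")
    )

    # The average month-over-month change is a crude depreciation estimate.
    depreciation = monthly_avg.diff().mean()
    print(monthly_avg.tail())
    print(depreciation)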

Silly Wand Waving

It would be irresponsible of me not to admit that some problems don't actually lend themselves to the development of models, and modeling them anyway can lead to some truly ridiculous conclusions. Pretty high on the ridiculous list is a practice best described as a "fishing expedition". Now, I could let you use your imagination as to what that might mean, but I'll give you a truly heinous example just so we are entirely clear what it means for a modeling effort. Suppose you are given a pair of dice. Suppose further that this is the first time you have ever seen dice, and you want to figure out the probability of rolling "snake eyes" when you toss them. You decide to roll the dice ten times and count the number of snake eyes; the probability you report would be the number of times two ones were rolled as a ratio of the total rolls. If you know nothing about the dice beforehand, there is nothing inherently wrong with this approach; in fact, it may be the most common modeling approach for describing a natural phenomenon over which we can exert some control. Getting back to the dice, let me ask you a question: given what you know about dice (six sides, etc.), how many times would I have to conduct this test before you would expect one run to come up snake eyes all ten times?

  • Probability of rolling a one: 1/6
  • Probability of rolling snake eyes: (1/6)^2 = 1/36
  • Probability of rolling snake eyes ten times consecutively: (1/36)^10 ≈ 2.7 × 10^-16

(1/36)^10 is a very small number, but if I attempt the test enough times I will eventually end up with ten snake eyes. That tiny number is the probability that any single run of the test yields this wildly wrong result. If I were the tester, and I desperately wanted to claim that the odds of rolling snake eyes were really high, I might conduct the test over and over, increasing my chances of eventually seeing ten snake eyes. Upon achieving my goal, I might even feel validated by the final outcome and run and tell the town. But I would be guilty of having conducted a fishing expedition. If we were to test the odds of several unlikely pairs in series (ones, twos, threes, etc.), we would likewise find it increasingly likely that one of the tests would have an unlikely outcome. The mere fact that we repeatedly ask questions of the same system (the dice) increases the likelihood that we accidentally draw an inaccurate conclusion.

In this example it is quite clear that we couldn't trust the conclusion of such a clumsy experiment, but in other real-world settings the opportunity to go fishing is much more tempting. There is a newer field of business intelligence that is especially prone to this kind of bias: big data analytics. You may have heard "big data" described as a phenomenon; the truth is that there is nothing inherently new about it. As the name suggests, it is simply an analyst attempting to gain insight from a large pool of data with varying degrees of organization. The primary reason big data analyses are particularly vulnerable to this bias is that analysts typically don't know what they are looking for in advance, so they ask question after question until an answer pops out of the data that appears interesting. This approach only becomes a serious problem when the analyst trusts that conclusion as if it were the only test they ran, when they should really be adjusting their level of confidence to account for having conducted multiple tests on the same data. Big data and business analyses, because of their reliance on largely unreproducible legacy data, are prone to mistaking coincidence for causality.
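A minimal sketch of the arithmetic, assuming independent tests: a single run of the dice test almost never lies, but the chance that at least one of many runs produces a fluke grows alarmingly fast.

    # Fishing expeditions by the numbers.
    p_single = (1 / 36) ** 10
    print(f"ten snake eyes in one run: {p_single:.1e}")  # ~2.7e-16

    def p_at_least_one_fluke(p, num_tests):
        """Probability that at least one of num_tests independent tests flukes."""
        return 1 - (1 - p) ** num_tests

    # The same inflation bites any analysis: a "1 in 20" fluke (p = 0.05)
    # becomes more likely than not after just 14 questions of the same data.
    for k in (1, 10, 20, 100):
        print(k, round(p_at_least_one_fluke(0.05, k), 3))
    # 1 0.05 | 10 0.401 | 20 0.642 | 100 0.994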

Conclusion: Modeling Tells the Future by Describing the Past

One of the things that gets me up in the morning is the endless possibility that each day holds. As humans, we are born with a degree of curiosity that keeps us interacting with the world around us. As a scientist, I find I am especially interested in things I don't understand or have never seen before. As an engineer, I see value in understanding how things work. As we continue to develop an understanding of the world around us, fewer and fewer things remain a surprise; that is, we learn to expect certain outcomes, and we develop rules to predict the likelihood of those outcomes. Modeling, in its many forms, provides value as we seek to improve our quality of life. Whether we are avoiding catastrophic events, curing diseases, or creating technology that lets us communicate across the globe in real time, models are at the heart of our future.

Add some meat to your social media feed…follow The Public Brain Journal on Twitter

Clayton S. Bingham is a Biomedical Engineer working at the Center for Neural Engineering at the University of Southern California. Under the direction of Drs. Theodore Berger and Dong Song, Clayton builds large-scale computational models of neurological systems. The current emphasis is on modeling hippocampal tissue in response to electrical stimulation, with the goal of optimizing the placement of stimulating electrodes in dysfunctional regions of the brain. These therapies can be used for a broad range of pathologies, including Alzheimer's, various motor disorders, depression, and epilepsy.

If you would like to hear more about the work done by Clayton and his colleagues in the USC Center for Neural Engineering, he can be reached at: csbingha-at-usc-dot-edu.

4 Reasons Deep Brain Stimulation Is The Worst Best Therapy Out There

(Image: Deep Brain Stimulation can often restore near-total function to those suffering from severe neural disorders.)

Worst best, best awful, best worst… the point is that Deep Brain Stimulation (DBS) is the best we have, but it isn't that great. There are many reasons why DBS is not the panacea for brain disorders; I only picked four. If you don't already know what DBS is, you can probably figure it out from the name: neurophysiologists and neurosurgeons take a wire, hook it up to a battery, and stick it into a dysfunctional region of a patient's brain. It is surprisingly barbaric, though measurably more humane than its parent method, electroshock therapy, which is only a hop, skip, and a jump away from death by electric chair. The problem with this sort of criticism is that, in spite of whatever ethical breaches led to the discovery of DBS as a therapy, it seems to work.

DBS has been used with meaningful success to treat a variety of conditions, most notably Parkinson's disease, types of dystonia and other neuromuscular disorders, epilepsy, and even severe depression. DBS stimulators are typically implanted in the chest (similar to a cardiac pacemaker), with a wire electrode fed up the neck, through the base of the skull, and directly into the focal point of the dysfunctional tissue.

Yet such a simple device, one that can and does work wonders, can't be without its flaws.

Some Of The Serious Problems

1. Scarring

Brain tissue is more like paste than the spongy, bouncy-looking thing you see floating in formaldehyde, and aside from neurons it contains billions of other cells too. Supportive cells called glial cells provide structure and perform several support functions, and there are also loads of immune cells there to protect against infection and disease. When any foreign object is stuck into the brain, these cells congregate around it and stick to it in layers, insulating the object from the vulnerable neurons in the area. Stimulating electrodes have a really short effective life in the brain because of the gunk the body throws at them. Not only do DBS devices stop working as well, but this can mean a DBS patient goes under the knife every six to twelve months to have the electrode replaced.

2. Unintended Side-effects

With current technology, it is impossible to activate a few neurons without also stimulating other neurons nearby. This makes sense: if you are struck by lightning while holding hands with your buddy, your buddy is struck by lightning too. Those other neurons are connected to other circuits that do other things, things you didn't intend to do. To make the point concrete: the strip of your brain that helps you make sense of touch sits right next to the strip that helps you coordinate your movements. If I were to electrically shock the one in an attempt to make you feel something, I could unintentionally be shocking the other, perhaps causing you to move spasmodically. It turns out that many brain circuits are interconnected and share real-world functionality; this is especially true in the neocortex, where your decision-making centers associate with your emotion centers, your sensory regions, and your motor regions.

3. Burns

Anyone who has seen The Green Mile, or been talked into licking a nine-volt battery, knows that electricity burns. What you might not know is how easy it is to kill a few cells with a little bit of electricity. You might be thinking, "a few cells… no big deal." Stop and consider that once you are a few years old, most of your brain has already stopped making new cells. Add to that the fact that, for most patients, DBS delivers electrical current to the brain constantly. While scientists have a pretty good idea of how strong a shock it takes to kill a neuron quickly, they have a much less clear idea of what might kill neurons slowly. If the presumed-safe usage of DBS already poses potential risks to the patient, imagine what could happen if a surgeon slips up on the settings.
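To give a sense of how researchers reason about "safe" stimulation, here is a sketch built on the empirical Shannon (1992) criterion, which bounds charge per phase and charge density for neural tissue. The device parameters below are illustrative values I chose, not clinical settings.

    import math

    def shannon_safe(current_ma, pulse_width_ms, electrode_area_cm2, k=1.85):
        """Check log10(D) + log10(Q) <= k, the Shannon damage threshold.

        Q: charge per phase in microcoulombs (mA * ms = uC).
        D: charge density in uC/cm^2 per phase.
        """
        q = current_ma * pulse_width_ms
        d = q / electrode_area_cm2
        return math.log10(d) + math.log10(q) <= k

    # Plausible-looking settings: 3 mA, 0.09 ms pulses, ~0.06 cm^2 contact.
    print(shannon_safe(3.0, 0.09, 0.06))  # True -> under the empirical limit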

4. One-Size Fits All

The title of this section is a little misleading: the physical size of the device isn't really a concern anymore, since devices have shrunk considerably since they first arrived in the clinic. This has more to do with the "unintended side-effects" problem than anything else. It turns out that we don't do a very good job of diagnosing brain disorders. Even ones we have studied for years, like Alzheimer's, have several different stages, degrees, and forms, which calls into question the wisdom of a single type of therapy pre-programmed to deliver an out-of-the-box protocol. While it is intuitive to calibrate the device to an individual patient's condition, there is little to no understanding of what is optimal. Typically the surgeon is instructed to root around and "turn up" the stimulation until the pathological tissue behaves more "normally". Given the special set of conditions every patient presents, it is unlikely that this degree of customization is enough.

But, Deep Brain Stimulation Is Still The Best We Have

Despite these looming concerns, DBS does amazing things for those who have no other recourse. The internet is full of testimonials to the increased quality of life that DBS brings; many patients couldn't control their tremors or seizures long enough even to look where they wanted to. Walking, writing, feeding yourself, speaking: these are all activities we take for granted, but they are restored (albeit temporarily, and possibly at a cost) to those lucky enough to qualify for a DBS device. You might expect the best therapy to be a little less awful than what I have described. As bad as it is, it is still the best we have, and for many it is the only thing that works. We can't take that away, even if it is just a placeholder for something better.

Why Brain Engineering Will Spawn The New “Hot Jobs”

The hot jobs of this decade have, almost without exception, become "cerebral" in some way or another. Programmers build complex algorithms, quantitative financial analysts build equally complex models, and data analysts (with their myriad titles) are swimming in complex methods. Even in the health industry you can see the trend toward an increased emphasis on problem-solving ability: physician assistants capable of accurately diagnosing various conditions, long the exclusive domain of board-certified medical doctors, are more in demand than ever. How appropriate, then, that brain technology would further this "cerebral"-ization of work.

In collaboration with computer scientists, brain researchers have poked holes in the veil of the future. Several technologies previously possible only in the pages of Isaac Asimov and other sci-fi writers, such as Deep Brain Stimulation, Neuromorphic Computing, and Machine Learning, have opened a new frontier for game-changing products and applications.

Deep Brain Stimulation (DBS)

DBS is essentially a pacemaker repurposed for the brain. While nearly all current applications of DBS involve correcting disruptive electrical signals in the brain, it proves that it is possible to externally and locally trigger specific brain circuits responsible for motion, sensation, memory, emotion, and even abstract thought. Why might this lead to the creation of so-called hot jobs? Imagine being the engineer who implements a DBS system that helps reduce boredom-driven food cravings, or one that helps you recognize individuals by stimulating circuits containing relevant information about them.

Neuromorphic Computing

(Image: a neuromorphic processor with a quarter-million synapses on a 16×16 node array.)

You might have already pieced together what this means, but it is just what it sounds like: computers that are like brains in form. They don't actually look like brains, but they use a fundamental architecture of nodes (neurons) connected, à la synapses, in a network with variable connection strengths. Those variable strengths are what allow learning to happen. As you can imagine, such a chip is fundamentally different from the Intel processor your desktop or laptop probably has under the hood. The key difference is that, like your brain, a neuromorphic chip must be trained in order to perform a task. Another interesting feature of these chips is that the task itself also needs to be designed; I can't imagine a sexier job than thinking up tasks and training regimes for neuromorphic chips (see the sketch below). If you aren't convinced this is possible or coming in the near future, you might be surprised to hear that Intel and Qualcomm already have working prototypes and are planning to put them into cell phones very soon.
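For flavor, here is a toy sketch of the nodes-with-variable-strengths idea in Python with NumPy; real neuromorphic chips implement this in silicon with spiking dynamics, and all of the sizes and constants below are arbitrary choices of mine.

    import numpy as np

    # "Fire together, wire together": a Hebbian rule on a tiny node array.
    rng = np.random.default_rng(0)
    n_nodes = 16
    weights = rng.normal(0, 0.1, size=(n_nodes, n_nodes))  # synaptic strengths
    np.fill_diagonal(weights, 0.0)                         # no self-connections

    def recall(activity, threshold=0.5):
        """Each node sums its weighted inputs and fires past a threshold."""
        return (weights @ activity > threshold).astype(float)

    def train(pattern, lr=0.05, epochs=50):
        """Strengthen synapses between co-active nodes (Hebbian learning)."""
        global weights
        for _ in range(epochs):
            weights += lr * np.outer(pattern, pattern)
            np.fill_diagonal(weights, 0.0)

    pattern = (rng.random(n_nodes) > 0.5).astype(float)
    train(pattern)
    noisy = pattern.copy()
    noisy[0] = 1 - noisy[0]                   # corrupt one node
    print((recall(noisy) == pattern).mean())  # mostly recovers the trained pattern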

Machine Learning

If the concept of a machine learning doesn't sound totally anthropomorphic to you, it probably should. Once again, our understanding of how networks of neurons work has opened a huge can of worms for those who know how to hook them up and go fishing. Machine learning forms much of the theoretical framework underlying neuromorphic computing; the major difference is that, not being implemented in hardware, it allows the user a ton of flexibility to build creative and novel solutions. The types of problems being solved with machine learning are crazy: there are many things that you and I are good at but that would stump a conventional program every time. Face recognition, reading, writing, speaking, listening, and identifying objects are all within the domain of machine learning. As you can imagine, we have only begun to tap the well of interesting applications, and there may be an inexhaustible need for engineers to come up with them.
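Here is a minimal sketch of one of those tasks, recognizing handwritten digits, using scikit-learn's bundled dataset and a small neural network. The layer size and iteration count are arbitrary choices for illustration.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # The model learns from labeled examples instead of hand-written rules.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    model.fit(X_train, y_train)
    print(f"accuracy: {model.score(X_test, y_test):.2%}")  # typically ~97%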

If you have any interest in writing here, or would like to hear more about the work done by Clayton in the USC Center for Neural Engineering, he can be reached at: clayton dot bingham at gmail dot com.