The curse of dimensionality reduction, or how (not) to reverse engineer the brain.

I have been working in neuroscience for seven years now, but my PhD, somewhat tangentially, was in molecular biophysics / protein mechanics. It was about figuring out the folding dynamics of protein molecules so small that nobody yet knew how to measure them properly. Soon after I started, I realized I had landed myself in an interesting situation. Although most biophysicists / physical chemists work in a molecular biology / biochemistry lab, I ended up working in the Molecular and Nanoscale Physics group of the Department of Physics and Astronomy of my university. My project demanded a close collaboration with the biochemists and molecular biologists across the road from us (someone had to mutate and purify the proteins I was all too happy to blast with very strong laser beams), but I also spent large chunks of my time interacting with the engineers and physicists in my group. It was maybe three years into my studies when I started piecing together the reasons I often felt as if I were suffering from a mild case of split personality. I especially remember the monthly lab meetings where the physicists would visit the molecular biologists’ offices (or vice versa) and we would all sit down to discuss the progress of a common project (mine being one of those). We all seemed to share the same overarching goal, i.e. to understand how proteins fold and how structure leads to function. Everybody agreed that this was how we would be able to prevent and correct misfoldings in the cell and create artificial proteins with novel functionalities. Yet the question of how to go about that goal was what appeared to divide us, at a level that went all the way down to our most basic and assumed notions of how to do science, or even of what science is in the first place. Over the course of my degree it dawned on me that there was an oft-used word which appeared to possess two very different meanings according to which group you asked.
The magic word, and in my view the cause of many a misunderstanding, was ’model’. To the molecular biologist, to model a protein was to collect all the data there was to collect for it (structural, dynamic, thermodynamic and what have you), maybe draw some qualitative and descriptive (i.e. language-based) conclusions about how that protein did what it did, and then collate it all together in a nice publication. For the physicist, modeling a protein meant collecting the least amount of data possible and using it to create, expand, tweak or disprove one of the caricaturish but quantitative descriptions of how all proteins under all conditions fold and function. And there you had it: model as an exhaustive collection of true information, collated into an easy-to-use look-up table (written in English or another natural language), or model as a summarizing group of principles (written in maths) that are general but always a bit wrong. Or, to put it differently, models driven by data, or data driven by models.

Reading Lazebnik’s ’Can a biologist fix a radio?’ (Lazebnik 2002) felt like someone had summarized and put on paper all my frustrations with the biologists during my PhD years. Yet I always wondered why that dichotomy exists. Why are sciences like biology and psychology descriptive and data driven, when physics, chemistry and engineering are quantitative and model driven? Answering this, I often heard biologists say, is trivial. The phenomena that biology or psychology or the social sciences study are complex, they professed; physics, chemistry, etc. study simple ones. But with my original conception of complex and simple systems I had a hard time assigning Earth’s weather system or a modern computer’s CPU, for example, to the category of simple phenomena while assigning a bacterium or a virus to the category of complex ones. So what did they mean by complex and simple? Was it that simple phenomena could be described by a number (sometimes large) of differential equations, ordinary or partial, while for complex ones that was close to impossible? Or, as Krioukov suggests (Krioukov 2014) in his excellent critique of data-driven neuroscience, are the simple systems those for which one can define a group of symmetries, a scalar invariant and a set of Euler-Lagrange equations, while complex systems are all the rest? Or was there a number of model parameters that assigned a phenomenon to one of these two categories? Or maybe it was the sheer number of outliers from any conceivable general model that differentiated complex from simple? Yet 20th-century physics and engineering had quantitatively attacked, and even themselves constructed, systems that were definitely highly complex by all of the above definitions. So where was the catch? Both Lazebnik and Krioukov seem to agree that there isn’t really one; it is just that the biologists need to catch up with the rest of the natural scientists in their use of calculus.

Here, and despite my original frustrations, I will argue for the views of Edelman and Gally (Edelman 2001) and of Ehresmann and Vanbremeersch (Ehresmann 2007). In these works the idea of a complex system, i.e. one for which it is in a fundamental way much harder to construct a general-principles model, is presented not as a fantasy of biologists and social scientists but as something that must be recognized as real. Such systems, according to the above works, have three characteristics that, combined, qualitatively change their nature compared to the large-number-of-parts systems that an engineer, for example, deals with (like a radio). They are hierarchical, they show degeneracy in their constituent parts, and they show emergent properties as one traverses their levels from the more basic to the more encompassing. Let’s contrast two examples to demonstrate the above idea better: let’s take a CPU and a cell and try to describe their complexity using the above three concepts. Both CPU and cell are definitely complicated, multi-part systems. They show a strong hierarchy from basic components all the way up to the final system. Yet the cell shows degeneracy of parts, i.e. at any level of the system there is a number of parts that, although different (both structurally and often functionally), can perform the same function as far as the level above them is concerned. In the case of the CPU, degeneracy, when and if it exists, is actually a design flaw: every function at any level is implemented by one and only one structure of parts in the level below. Now let’s get a bit more theoretical and think of these two systems as conglomerates of mathematical structures, each representing a part of a level (think of sets, or types, or categories if you are mathematically inclined).
The lack of degeneracy in the CPU means that such a complicated system can still be described by a master function that combines all its levels in a single step, taking as input its most basic parts and outputting the system’s behavior. The degeneracy of a cell, on the other hand, stops us from generating such a one-step function. Each level of the cell’s hierarchy requires its own function to run and provide that level’s output before we can use this to move on to the next level, and so on. This degeneracy leads to the phenomenon of emergence, where a level shows dynamics and follows laws that are not just the sum of its constituent parts in the lower level; parts that are often interchangeable with structurally and functionally different others, very much unlike the parts of a CPU or a radio. One might think of this as each level of a complex system performing a dimensionality reduction (or lossy zipping) on its inputs, so that by looking at its outputs it is impossible to know exactly what the inputs were. And really complex systems, like the brain, have a large number of these layers, each losing information from its input and generating emergent functionality as its output.
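This many-to-one character of degeneracy can be made concrete with a toy sketch (my own construction, not taken from the cited works; the pathway names and arithmetic are purely illustrative). Two structurally different "pathways" implement the same input-output function, so the level above cannot tell from the output which structure produced it:

```python
# Toy illustration of degeneracy: two structurally different lower-level
# "parts" that implement the same function, as seen from the level above.
# (Hypothetical example; the pathways and their arithmetic are made up.)

def pathway_a(substrate):
    # one route: double the substrate, then add one unit
    return substrate * 2 + 1

def pathway_b(substrate):
    # a structurally different route that yields the same product
    return (substrate + 0.5) * 2

# The level above only observes the output, so the two pathways are
# degenerate: interchangeable as far as the next level is concerned.
for s in [0, 1, 5]:
    assert pathway_a(s) == pathway_b(s)

# Consequently the mapping from lower-level structure to upper-level
# function is many-to-one: observing the function loses information
# about the structure, which is the "lossy zipping" described above.
```

Inverting such a level from its outputs alone is impossible in principle, not just in practice, which is exactly why a single master function spanning all levels cannot be recovered.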

As a neuroscientist, what I care about is whether the above notions can help me (or anyone else for that matter) understand the brain. Why have our tools failed to produce the hoped-for results? How come attacking the brain both from the point of view of the biologist and from that of the physicist has proven very successful in providing data (and high-impact publications) but not so successful in providing a real understanding of the system? In the neuroscientific version of Lazebnik’s letter, Jonas and Kording (Jonas 2016) address, this time experimentally, exactly whether the bread-and-butter techniques of modern neuroscience can help someone reverse engineer a fully known but really complicated system like a CPU. It makes for sober reading when the authors conclude that none of the ways we use to collect and analyze neuroscientific data would be of any help in reverse engineering a CPU. And if the arguments above have any merit, the CPU is a much simpler system than the brain (a statement one may also be able to substantiate by pointing at the complexity of their respective outputs). So is there any hope? I believe some pointers may lie in trying to understand why techniques like Granger causality of ’LFPs’ and peristimulus ’spike’ histograms, among others, offer pretty much zero information as to what exactly the CPU is doing when one plays a game, for example. The argument I will put forward here is that the issue lies with the effort of connecting, correlationally and/or causally, levels of activity that are far away from each other. In the case of Jonas and Kording that would be connecting the switching of transistors with the ’behavior’ of the pixels on the screen during a game. In the case of most neuroscience experiments it would be connecting some genetic or physiological function of one or a few cells with the movement of the muscles of an animal behaving in an environment.
Especially in systems with degeneracy and emergence, such an attack seems unlikely to offer any poignant pointers as to what the system is actually doing. If we were instead to probe each level not to understand the behavior of the whole but just to understand the behavior of the level above it, then one can imagine experiments that would reverse engineer, from the functionality of the switching transistors, the way the logic gates are constructed in the chip, and from those the structure and functionality of its adders, etc., all the way up to understanding how the implemented software algorithms generate a game’s behavior. But how can you transfer such an approach to the brain? In the reverse engineering of the CPU one already knows of the concepts of gates and adders and memory and software functions, but in the brain we seem to be missing vital information about its levels of organization, especially between the single cell and the whole brain. Concepts like canonical (micro)columns, sparse networks, concept cells and whatever else are all ideas that come from long, hard looks into giant libraries of data, but they do not seem able to play the role of the levels of functionality that could bridge the gap between the cell and the organ.
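The level-by-level picture for the CPU can be sketched in a few lines (a minimal illustration, not a claim about how any real chip or reverse-engineering effort works): the arithmetic level is expressed only in the vocabulary of the gate level, which in turn rests on an idealized transistor primitive, with no step jumping straight from transistors to arithmetic.

```python
# A minimal sketch of levels of description in a CPU: gates from a
# NAND primitive, a half-adder from gates. Each level speaks only the
# vocabulary of the level directly below it.

def nand(a, b):
    # gate level: the only primitive assumed from the transistor level
    return 1 - (a & b)

def xor(a, b):
    # XOR built from four NANDs (standard construction)
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    # AND is NAND followed by an inversion (itself a NAND)
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    # arithmetic level: defined purely in terms of the gate level
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

# 1 + 1 = binary 10: sum bit 0, carry bit 1
assert half_adder(1, 1) == (0, 1)
```

Understanding `half_adder` requires the notions of `xor` and carry, not transistor voltages; the essay's point is that for the brain we do not yet know the analogous intermediate vocabulary.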

As Krioukov points out, historically in the natural sciences such understanding came not from collecting and collating large data sets but from generating often ludicrous-sounding general ideas (models) and then trying to see if the data fitted them. I will follow his conviction here in arguing that the elucidation of the brain requires the proposal and testing of quantitative models of all levels of the brain’s hierarchy, and maybe even the proposal of new such levels that we may be missing. I will argue at the same time that although these models need to be quantitative and general, they should focus on one level at a time. Each should try to provide the general principles under which that level of the brain’s hierarchy operates, i.e. construct the brain’s equivalent of general notions/principles like ’logic gate’ or ’software function’ (mentioned here as examples of notions, not as things whose direct equivalents the brain should implement). If people think that cells, for example, combine into networks to perform computations, then let’s elucidate all the principles under which these networks operate and derive the principles of the computations they run. We do not need to do this under the light of this or that behavior: transistors implement logic gates no matter what software the CPU is running. Linking straight to behavior at this level is like correlating transistors’ voltages with pixels turned on or off on the screen. If we think that simple computations get combined into more complicated ones, let’s figure out exactly how this happens at this level and what principles those more complicated computations encompass. And so on, all the way up to the animal’s behavior, one level at a time. Unfortunately we cannot even tell at the moment whether it is only one level (maybe that of networks of cells) that we are missing, or maybe three or five.
At the same time we are not sure we have fully understood the full spectrum of cellular and subcellular functionality that is relevant. Is dendritic computation a thing, for example, and if yes, at which level does it play a role? In complex systems, input to a level can come from any of the levels below it, so, for example, a computational functionality at the subcellular level might be one very important input to the computational capabilities of the network of cells. Where are the degeneracies at each level, from the molecular to the subcellular to the cellular? Quantitative models at all the currently recognized levels of the brain’s hierarchy are sorely required. At the same time, the current state of research in artificial intelligence might be hinting that we might even be missing a few levels altogether.

In conclusion, this researcher’s opinion, for what it is worth, is that what the field lacks is not more correlations between cell functionality and behavior but ideas that are brave in their irreverence to the status quo while being quantitative, testable and actually useful if proven correct (i.e. able to explain at once vast swaths of data and to generate the right kind of next questions). I believe that such ideas could be generated more freely if people looked more carefully at the notions of hierarchy, degeneracy and emergence, especially under the rigor of mathematical constructs and ideas, but I would be willing to bet less money on that second assertion than on the first.


  1. Yuri Lazebnik. Can a biologist fix a radio? Or, what I learned while studying apoptosis. Cancer Cell 2, 179–182. Elsevier BV, 2002. Link

  2. Dmitri Krioukov. Brain theory. Front. Comput. Neurosci. 8 Frontiers Media SA, 2014. Link

  3. G. M. Edelman, J. A. Gally. Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences 98, 13763–13768, 2001. Link

  4. Andrée Charles Ehresmann, Jean-Paul Vanbremeersch. Memory evolutive systems; hierarchy, emergence, cognition. 4 Elsevier, 2007. Link

  5. Eric Jonas, Konrad Kording. Could a neuroscientist understand a microprocessor? 2016. Link
