The curse of dimensionality reduction, or how (not) to reverse engineer the brain.

I have been working in neuroscience for seven years now, but my PhD, somewhat tangentially, was in molecular biophysics / protein mechanics. It was about figuring out the folding dynamics of protein molecules so small that nobody yet knew how to measure them properly. Soon after I started, I realized I had landed myself in an interesting situation. Although most biophysicists / physical chemists work in a molecular biology / biochemistry lab, I ended up working in the Molecular and Nanoscale Physics group of my university's Department of Physics and Astronomy. My project demanded a strong collaboration with the biochemists and molecular biologists across the road from us (someone had to mutate and purify the proteins I was all too happy to blast with very strong laser beams), but I also spent large chunks of my time interacting with the engineers and physicists in my own group.

It was maybe three years into my studies when I started piecing together the reasons I often felt like I was suffering from a mild case of scientific split personality. I especially remember the monthly lab meetings where the physicists would visit the molecular biologists' offices (or vice versa) and we would all sit down to discuss the progress of a common project (mine being one of those). We all seemed to share the same overarching goal: to understand how proteins fold and how structure leads to function. Everybody agreed that this was how we would be able to prevent and correct misfoldings in the cell and create artificial proteins with novel functionalities. Yet the question of how to go about that goal was what appeared to divide us, at a level that went all the way down to our most basic and assumed notions of how to do science, or even of what science is in the first place. Over the course of my degree it dawned on me that there was one oft-used word which appeared to possess two very different meanings depending on which group you asked.
The magic word, and in my view the cause of many a misunderstanding, was 'model'. To the molecular biologist, to model a protein was to collect all the data there was to collect about it (structural, dynamic, thermodynamic and what have you), maybe draw some qualitative and descriptive (i.e. language-based) conclusions about how that protein did what it did, and then collate it all together in a nice publication. For the physicist, modeling a protein meant collecting the least amount of data possible and using it to create, expand, tweak, or disprove one of the caricaturish but quantitative descriptions of how all proteins under all conditions fold and function. And there you had it: model as an exhaustive collection of true information, collated into an easy-to-use lookup table (written in English or another natural language), or model as a summarizing set of principles (written in maths) that are general but always a bit wrong. Or, to put it differently, models driven by data, or data driven by models.

Reading Lazebnik's 'Can a biologist fix a radio?' (Lazebnik 2002) felt like someone had summarized and put on paper all my frustrations with the biologists during my PhD years. Yet I always wondered why that dichotomy exists. Why are sciences like biology and psychology descriptive and data-driven, when physics, chemistry and engineering are quantitative and model-driven? Answering this, I often heard biologists say, is trivial. The phenomena that biology or psychology or the social sciences study are complex, they professed; physics, chemistry, etc. study simple ones. But with my original conception of complex and simple systems I had a hard time assigning Earth's weather system or a modern computer's CPU, for example, to the category of simple phenomena while assigning a bacterium or a virus to the category of complex ones. So what did they mean by complex and simple? Was it that simple phenomena could be described by a number (sometimes large) of differential equations, ordinary or partial, while for complex ones that was close to impossible? Or, as Krioukov suggests (Krioukov 2014) in his excellent critique of data-driven neuroscience, are simple systems those for which one can define a group of symmetries, a scalar invariant and a set of Euler-Lagrange equations, while complex systems are all the rest? Or was there some number of model parameters that assigned a phenomenon to one of the two categories? Or maybe it was the sheer number of outliers from any conceivable general model that differentiated complex from simple? Yet 20th-century physics and engineering had attacked, and even themselves constructed, in a fully quantitative way, systems that were definitely highly complex by all of the above definitions. So where was the catch? Both Lazebnik and Krioukov seem to agree that there isn't really one; it is just that the biologists need to catch up with the rest of the natural scientists in their use of calculus.

Here, and despite my original frustrations, I will argue for the views of Edelman (Edelman 2001) and of Ehresmann and Vanbremeersch (Ehresmann 2007). In these works, the idea of a complex system, i.e. one for which it is in a fundamental way much harder to construct a general-principles model, is presented not as a fantasy of biologists and social scientists but as something that must be recognized as real. Such systems, according to the above works, have three characteristics that, combined, qualitatively change their nature compared to the large-number-of-parts systems that, say, an engineer deals with (like a radio). They are hierarchical, they show degeneracy in their constituent parts, and they show emergent properties as one traverses their levels from the more basic to the more encompassing. Let's set two examples against each other to demonstrate the idea: take a CPU and a cell and try to describe their complexity using the above three concepts. Both CPU and cell are definitely complicated, multi-part systems. Both show a strong hierarchy from basic components all the way up to the final system. Yet the cell shows degeneracy of parts, i.e. at any level of the system there are parts that, although different (both structurally and often functionally), can perform the same function as far as the level above them is concerned. In the case of the CPU, degeneracy, when and if it exists, is actually a design flaw: every function at any level is implemented by one and only one structure of parts in the level below. Now let's get a bit more theoretical and think of these two systems as conglomerates of mathematical structures, each representing a part of a level (think of sets, or types, or categories if you are mathematically inclined).
The lack of degeneracy in the CPU means that such a complicated system can still be described by a master function that combines all its levels in a single step: one that takes as input its most basic parts and outputs the system's behavior. The degeneracy of a cell, on the other hand, stops us from writing such a one-step function. Each level of the cell's hierarchy requires its own function to run and produce that level's output before we can use it to move on to the next level, and so on. This degeneracy leads to the phenomenon of emergence, where a level shows dynamics and follows laws that are not just the sum of its constituent parts in the level below, parts that are often interchangeable with others that are structurally and functionally different, very much unlike a CPU or a radio. One might think of each level of a complex system as performing a dimensionality reduction (or lossy compression) on its inputs, so that by looking at its outputs it is impossible to know exactly what the inputs were. And really complex systems, like the brain, have a large number of these levels, each losing information from its input and generating emergent functionality as its output.
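The many-to-one mapping that degeneracy creates can be sketched in a few lines of code. This is purely illustrative: the part names and functions below are invented for the example, not taken from any real biological database.

```python
# Illustrative sketch: degeneracy makes a level's mapping many-to-one,
# so the level's output "loses" information about its inputs.

# Hypothetical lower-level parts, grouped by the function they perform
# for the level above. Several distinct parts map to the same function:
# that is degeneracy.
PART_TO_FUNCTION = {
    "enzyme_A": "glycolysis",
    "enzyme_B": "glycolysis",     # structurally different, same role
    "pump_X": "ion_transport",
    "pump_Y": "ion_transport",
}

def level_output(parts):
    """Lossy 'zipping': report only which functions are realised,
    not which specific parts realised them."""
    return sorted({PART_TO_FUNCTION[p] for p in parts})

# Two different lower-level configurations...
config_1 = ["enzyme_A", "pump_X"]
config_2 = ["enzyme_B", "pump_Y"]

# ...produce identical outputs at the level above:
print(level_output(config_1))  # ['glycolysis', 'ion_transport']
print(level_output(config_2))  # ['glycolysis', 'ion_transport']

# The mapping is not invertible: from the output alone we cannot
# recover which parts produced it, which is exactly why no single
# "master function" from basic parts to system behavior exists.
```

A CPU, in this picture, would be the special case where `PART_TO_FUNCTION` is one-to-one, so each level's mapping is invertible and the whole hierarchy collapses into a single composed function.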