The goal of neural system identification is to construct quantitative models that describe how a neuron responds to arbitrary stimuli [WuDavidGallant06]. In sensory neuroscience, the standard way of doing this is with a generalized linear model (GLM), which in its most basic form consists of a linear filter followed by a point-wise nonlinearity [WeberPillow16] (sketched below). This model is widely used to describe the responses of sensory neurons across modalities, including vision [Pillow08], audition []… The reason for this popularity is the model's simplicity and its success in describing a neuron's computation even when relatively few data are available for fitting []. However, this simplicity also constrains the space of functions that a GLM can model. In Bayesian terms, a GLM places a strong prior constraint on the space of computations that a neuron can perform [WuDavidGallant06]. A computation close to that of the idealized neuron, which weights its inputs and fires through a nonlinear activation function, will be well described by a GLM even with relatively few data. But if the neuron's computation is more complex, e.g. direction selectivity in vision [], a GLM will describe the neuron's input-output transformation poorly. Naturally, this depends on the stimuli used to probe the neuron. For instance, GLMs successfully describe the responses of retinal ganglion cells to white noise [Pillow08, chichilnisky?] but not to complex natural stimuli [Heitman16] (indicating that natural images trigger nonlinear computations even at these early stages of visual processing).
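For concreteness, the basic GLM form referred to above can be sketched as a single linear filter $\mathbf{k}$ applied to the stimulus $\mathbf{x}(t)$, followed by a point-wise nonlinearity $f$; the notation here is only illustrative, and the Poisson spiking stage is one common (but not the only) choice of noise model:

$$
r(t) = f\!\left(\mathbf{k}^{\top}\mathbf{x}(t) + b\right), \qquad
y(t) \mid \mathbf{x}(t) \sim \mathrm{Poisson}\!\left(r(t)\,\Delta t\right),
$$

where $b$ is an offset, $r(t)$ is the predicted firing rate, and $y(t)$ is the observed spike count in a time bin of width $\Delta t$.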