There are multiple reasons to regularize system identification models across neurons of the same type or at the same location in the visual hierarchy. Such neurons receive similar inputs from previous processing steps, on which they perform stereotyped computations defined by their genetic, morphological and other type-specific properties [Baden, Franke, Berens, Schubert et al. 2016]. They need to detect the same features in a convolutional fashion, for instance the presence of movement across visual space or the identity of phonemes at different auditory frequencies. Populations of visual neurons, e.g. in the retina, can be grouped into a limited number of types, each implementing a specific computation; the neurons of each type tile visual space and thus effectively convolve the input image with a number of feature channels [Euler BC review] {Footnote: Analogously, CNNs introduced a form of regularization through shared convolutional features that contributed to the success of deep learning [Bengio-DL review, https://arxiv.org/abs/1606.02580]}. Finally, large (wide) but short (shallow) population recordings are becoming more available [Peron et al. 2015; Kim et al. 2016], where the amount of data may not suffice to discover the complex computations of individual neurons, but pooling the estimation under the biological constraint of typed computations could allow us to understand nonlinear neural computations from these high-dimensional datasets [Ganguli review].
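The shared-feature constraint described above can be sketched in code. The following is a minimal illustrative example, not a specific model from the literature: all shapes, the rectifying nonlinearity, and the per-neuron readout are assumptions chosen for clarity. The key point is that the convolutional filters are shared by all neurons, so the data from the whole population constrains a single feature bank, while each neuron only fits its own readout weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small grayscale stimulus, a shared filter bank,
# and a population of neurons that all reuse the same features.
H, W = 16, 16   # stimulus height and width
K, F = 5, 4     # filter size and number of shared feature channels
N = 10          # number of recorded neurons

stimulus = rng.standard_normal((H, W))
filters = rng.standard_normal((F, K, K))  # shared across ALL neurons

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D cross-correlation, written out for illustration."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Shared rectified feature maps: every neuron sees the same channels,
# analogous to one cell type tiling visual space.
feature_maps = np.stack([np.maximum(conv2d_valid(stimulus, f), 0.0)
                         for f in filters])        # shape (F, H-K+1, W-K+1)

# Per-neuron readout: each neuron learns only where and how strongly it
# reads out the shared features, not the features themselves.
readout = 0.1 * rng.standard_normal((N,) + feature_maps.shape)
responses = np.einsum('nfij,fij->n', readout, feature_maps)  # one value per neuron
```

Because the number of parameters in the shared filter bank does not grow with the number of neurons, adding more (shallowly recorded) neurons improves the estimate of the common features rather than multiplying the parameter count, which is exactly the pooling argument made in the paragraph above.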