The obvious next step is to use more expressive models, e.g. extended or stacked GLMs [Pillow..Meister&Gollisch], spike-triggered covariance models […], or general machine learning techniques [Benjamin…Kording]. Unfortunately, all of these models trade flexibility against an increased hunger for training samples to avoid overfitting [WuDavidGallant06]. Experimental data, however, will remain scarce, expensive, or ethically prohibitive to obtain, so sensory neuroscience will continue to operate in a regime where flexible models require smart regularization to identify complex neural computations from limited data. While smart attempts at regularization have been made in the past [e.g. Park&Pillow…], we believe that the most important regularization has been overlooked: the similarity of computations across neurons of the same type [FrankeBerensSchubert et al.2017].