A collection of Jupyter notebooks that link topics from information theory and optimization to analyses of neural codes.
Contributors: Johan Westö
- Part 1: Uncertainty and information
- Part 2: Entropy estimation and bias correction
- Part 3: Maximum entropy models
- Part 4: Entropy and information in spike trains
Contributors: Johan Westö & Joel Honkamaa.
Based upon the lecture series given by Ryan T.
Modified to fit the context of receptive field models.
- Part 1: Basic ideas and gradient descent
- Part 2: Proximal gradient method and L1 regularization
- Part 3: Matrix completion and nuclear norm regularization
- Part 4: Low-rank receptive field models
Contributors: Johan Westö.
The temporal filters of receptive field models are sometimes presented with a negative time axis and sometimes with a positive time axis. Here, we show that the difference lies in whether you interpret the filter as a stimulus template or as an impulse response.
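The equivalence of the two interpretations can be sketched numerically: convolving the stimulus with the filter read as an impulse response gives the same output as correlating (sliding) the time-reversed filter, read as a stimulus template, along the stimulus. This is an illustrative sketch, not the notebook's code; the filter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(100)

# Filter as an impulse response: w[0] weights the most recent
# stimulus sample, later entries weight the more distant past.
w_impulse = np.array([1.0, 0.6, 0.2, -0.1])

# The same filter as a stimulus template is its time-reversed copy,
# read left-to-right along the (forward) time axis of the stimulus.
w_template = w_impulse[::-1]

# Filtering (convolution with the impulse response) equals sliding
# the template along the stimulus (correlation).
r_conv = np.convolve(stimulus, w_impulse, mode="valid")
r_corr = np.correlate(stimulus, w_template, mode="valid")
assert np.allclose(r_conv, r_corr)
```

Flipping between the two views is exactly a time reversal, which is why the same filter can appear with either a negative or a positive time axis.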
Linear regression is the standard initial tool for approximating a function that maps input data to output data. Here we show how three different approaches/interpretations all lead to the same linear regression solution.
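The notebook's three interpretations may differ from the ones below, but as an illustrative sketch, here are three common routes to linear regression that yield the same coefficients on synthetic data: direct least squares, the normal equations, and the orthogonal-projection (pseudoinverse) view.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(n)

# 1) Least squares: minimize ||y - Xw||^2 numerically.
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2) Normal equations: set the gradient to zero, w = (X^T X)^{-1} X^T y.
w_normal = np.linalg.solve(X.T @ X, X.T @ y)

# 3) Projection view: Xw is the orthogonal projection of y onto the
#    column space of X, obtained via the Moore-Penrose pseudoinverse.
w_proj = np.linalg.pinv(X) @ y

assert np.allclose(w_lstsq, w_normal)
assert np.allclose(w_lstsq, w_proj)
```

All three are different derivations of the same optimality condition, so they agree up to numerical precision.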