
Estimating state and parameters in state space models of spike trains

from Part I - State space methods for neural data

Published online by Cambridge University Press: 05 October 2015

J. H. Macke, University College London
L. Buesing, University College London
M. Sahani, University College London

Edited by Zhe Chen, New York University

Summary

Introduction

State space models for neural population spike trains

Neural computations at all scales of evolutionary and behavioural complexity are carried out by recurrently connected networks of neurons that communicate with each other, with neurons elsewhere in the brain, and with muscles through the firing of action potentials or “spikes.” To understand how nervous tissue computes, it is therefore necessary to understand how the spiking of neurons is shaped both by inputs to the network and by the recurrent action of existing network activity. Whereas most historical spike data were collected one neuron at a time, new techniques including silicon multielectrode array recording and scanning 2-photon, light-sheet or light-field fluorescence calcium imaging increasingly make it possible to record spikes from dozens, hundreds and potentially thousands of individual neurons simultaneously. These new data offer unprecedented empirical access to network computation, promising breakthroughs both in our understanding of neural coding and computation (Stevenson & Kording 2011) and in our ability to build prosthetic neural interfaces (Santhanam et al. 2006). Fulfilling this promise will require powerful methods for data modeling and analysis, able to capture the structure of statistical dependence of network activity across neurons and time.

Probabilistic latent state space models (SSMs) are particularly well-suited to this task. Neural activity often appears stochastic, in that repeated trials under the same controlled experimental conditions can evoke quite different patterns of firing. Some part of this variation may reflect differences in the way the computation unfolds on each trial. Another part might reflect noisy creation and transmission of neural signals. Yet more may come from chaotic amplification of small perturbations. As computational signals are thought to be distributed across the population (in a “population code”), variation in the computation may be distinguished by its common impact on different neurons and the systematic evolution of these common effects in time.

An SSM is able to capture such structured variation through the evolution of its latent state trajectory. This latent state provides a summary description of all factors modulating neural activity that are not observed directly. These factors could include processes such as arousal, attention, cortical state (Harris & Thiele 2011) or behavioural states of the animal (Niell & Stryker 2010; Maimon 2011).
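To make the model class concrete, one widely used instantiation in this literature (see, e.g., Macke et al. 2012; Smith & Brown 2003) couples linear-Gaussian latent dynamics with Poisson spike-count observations:

x_t = A x_{t-1} + ε_t,   ε_t ~ N(0, Q)
y_{k,t} | x_t ~ Poisson(exp(c_k' x_t + d_k)),

where x_t is the latent state in time bin t, A governs its evolution, and each neuron k maps the shared state to a firing rate through a loading vector c_k and baseline log rate d_k. The sketch below samples spike counts from such a model; all dimensions and parameter values are illustrative assumptions chosen here, not values from the chapter.

```python
import numpy as np

# Illustrative sizes (assumed): 2 latent dimensions, 20 neurons, 100 time bins.
rng = np.random.default_rng(0)
d_latent, n_neurons, T = 2, 20, 100

A = np.array([[0.95, 0.05],
              [-0.05, 0.95]])                  # stable rotational latent dynamics
Q = 0.01 * np.eye(d_latent)                    # innovation (state noise) covariance
C = rng.normal(scale=0.5, size=(n_neurons, d_latent))  # per-neuron loading vectors
d = np.full(n_neurons, -1.5)                   # baseline log firing rates

x = np.zeros((T, d_latent))                    # latent state trajectory
y = np.zeros((T, n_neurons), dtype=int)        # spike counts per bin and neuron
for t in range(T):
    if t > 0:
        x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(d_latent), Q)
    rate = np.exp(C @ x[t] + d)                # conditional intensity given the state
    y[t] = rng.poisson(rate)                   # spiking is Poisson given the latent state
```

Because every neuron's rate depends on the same latent state x_t, samples from this model exhibit exactly the kind of structured, temporally coherent co-variability across the population described above.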

Chapter information

Publisher: Cambridge University Press
Print publication year: 2015


References

Archer, E., Koester, U., Pillow, J. W. & Macke, J. H. (2015). Low-dimensional models of neural population activity in sensory cortical circuits. In Advances in Neural Information Processing Systems 27, New York: Curran Associates, Inc.
Beal, M. J. (2003). Variational algorithms for approximate Bayesian inference. PhD thesis, Gatsby Unit, University College London.
Bishop, C. (2006). Pattern Recognition and Machine Learning, New York: Springer.
Boyd, S. P. & Vandenberghe, L. (2004). Convex Optimization, Cambridge: Cambridge University Press.
Buesing, L., Machado, T., Cunningham, J. P. & Paninski, L. (2015). Clustered factor analysis of multineuronal spike data. In Advances in Neural Information Processing Systems 27, New York: Curran Associates, Inc.
Buesing, L., Macke, J. H. & Sahani, M. (2012). Learning stable, regularised latent models of neural population dynamics. Network: Computation in Neural Systems 23, 24–47.
Buesing, L., Macke, J. H. & Sahani, M. (2013). Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. In Advances in Neural Information Processing Systems 25, New York: Curran Associates, Inc., pp. 1691–1699.
Chen, Z. (2003). Bayesian filtering: from Kalman filters to particle filters, and beyond. Statistics 182(1), 1–69.
Chen, Z. & Brown, E. N. (2013). State space model. Scholarpedia 8(3), 30868.
Chornoboy, E., Schramm, L. & Karr, A. (1988). Maximum likelihood identification of neural point process systems. Biological Cybernetics 59(4), 265–275.
Churchland, M. M., Yu, B. M., Sahani, M. & Shenoy, K. V. (2007). Techniques for extracting single-trial activity patterns from large-scale neural recordings. Current Opinion in Neurobiology 17(5), 609–618.
Cunningham, J. P. & Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience 17(11), 1500–1509.
Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) 39(1), 1–38.
Doucet, A. & Johansen, A. M. (2009). A tutorial on particle filtering and smoothing: fifteen years later. Handbook of Nonlinear Filtering 12, 656–704.
Durbin, J., Koopman, S. J. & Atkinson, A. C. (2001). Time Series Analysis by State Space Methods, Oxford: Oxford University Press.
Ecker, A. S., Berens, P., Cotton, R. J., Subramaniyan, M., Denfield, G. H., Cadwell, C. R., Smirnakis, S. M., Bethge, M. & Tolias, A. S. (2014). State dependence of noise correlations in macaque primary visual cortex. Neuron 82(1), 235–248.
Eden, U. T., Frank, L. M., Barbieri, R., Solo, V. & Brown, E. N. (2004). Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation 16(5), 971–998.
Emtiyaz Khan, M., Aravkin, A., Friedlander, M. & Seeger, M. (2013). Fast dual variational inference for non-conjugate latent Gaussian models. In Proceedings of the 30th International Conference on Machine Learning, pp. 951–959.
Ghahramani, Z. & Hinton, G. E. (1996). Parameter estimation for linear dynamical systems. Technical Report CRG-TR-96-2, University of Toronto.
Ghahramani, Z. & Roweis, S. T. (1999). Learning nonlinear dynamical systems using an EM algorithm. In Advances in Neural Information Processing Systems 11, Cambridge, MA: MIT Press, pp. 431–437.
Goris, R. L. T., Movshon, J. A. & Simoncelli, E. P. (2014). Partitioning neuronal variability. Nature Neuroscience 17(6), 858–865.
Harris, K. D. & Thiele, A. (2011). Cortical state and attention. Nature Reviews Neuroscience 12(9), 509–523.
Ho, B. L. & Kalman, R. E. (1966). Effective construction of linear state-variable models from input/output functions. Regelungstechnik 14(12), 545–548.
Kalman, R. E. & Bucy, R. S. (1961). New results in linear filtering and prediction theory. Transactions of the ASME–Journal of Basic Engineering 83, 95–108.
Kass, R. E., Eden, U. & Brown, E. (2014). Analysis of Neural Data, New York: Springer.
Katayama, T. (2005). Subspace Methods for System Identification, New York: Springer.
Krumin, M. & Shoham, S. (2009). Generation of spike trains with controlled auto- and cross-correlation functions. Neural Computation 21, 1–23.
Kulkarni, J. E. & Paninski, L. (2007). Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems 18(4), 375–407.
Lawhern, V., Wu, W., Hatsopoulos, N. & Paninski, L. (2010). Population decoding of motor cortical activity using a generalized linear model with hidden states. Journal of Neuroscience Methods 189(2), 267–280.
Macke, J., Berens, P., Ecker, A., Tolias, A. & Bethge, M. (2009). Generating spike trains with specified correlation coefficients. Neural Computation 21(2), 397–423.
Macke, J. H., Buesing, L., Cunningham, J. P., Yu, B. M., Shenoy, K. V. & Sahani, M. (2012). Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems 24, New York: Curran Associates, Inc.
Maimon, G. (2011). Modulation of visual physiology by behavioral state in monkeys, mice, and flies. Current Opinion in Neurobiology 21(4), 559–564.
Mangion, A. Z., Yuan, K., Kadirkamanathan, V., Niranjan, M. & Sanguinetti, G. (2011). Online variational inference for state-space models with point-process observations. Neural Computation 23(8), 1967–1999.
Møller, J., Syversveen, A. & Waagepetersen, R. (1998). Log Gaussian Cox processes. Scandinavian Journal of Statistics 25(3), 451–482.
Nickisch, H. & Rasmussen, C. E. (2008). Approximations for binary Gaussian process classification. Journal of Machine Learning Research 9(10), 2035–2078.
Niell, C. M. & Stryker, M. P. (2010). Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65(4), 472–479.
Opper, M. & Archambeau, C. (2009). The variational Gaussian approximation revisited. Neural Computation 21(3), 786–792.
Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems 15(4), 243–262.
Paninski, L., Ahmadian, Y., Ferreira, D., Koyama, S., Rahnama Rad, K., Vidne, M., Vogelstein, J. & Wu, W. (2010). A new look at state-space models for neural data. Journal of Computational Neuroscience 29, 107–126.
Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., Chichilnisky, E. J. & Simoncelli, E. P. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999.
Santhanam, G., Ryu, S. I., Yu, B. M., Afshar, A. & Shenoy, K. V. (2006). A high-performance brain-computer interface. Nature 442, 195–198.
Smith, A. C. & Brown, E. N. (2003). Estimating a state-space model from point process observations. Neural Computation 15(5), 965–991.
Stevenson, I. H. & Kording, K. P. (2011). How advances in neural recording affect data analysis. Nature Neuroscience 14(2), 139–142.
Truccolo, W., Hochberg, L. R. & Donoghue, J. P. (2010). Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nature Neuroscience 13(1), 105–111.
Turner, R. E. & Sahani, M. (2011). Two problems with variational expectation maximisation for time-series models. In D. Barber, A. T. Cemgil & S. Chiappa, eds, Inference and Learning in Dynamic Models, Cambridge: Cambridge University Press.
Vidne, M., Ahmadian, Y., Shlens, J., Pillow, J., Kulkarni, J., Litke, A., Chichilnisky, E., Simoncelli, E. & Paninski, L. (2012). Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. Journal of Computational Neuroscience 33, 97–121.
Yu, B. M., Afshar, A., Santhanam, G., Ryu, S. I., Shenoy, K. & Sahani, M. (2006). Extracting dynamical structure embedded in neural activity. In Y. Weiss, B. Schölkopf & J. Platt, eds, Advances in Neural Information Processing Systems 18, Cambridge, MA: MIT Press, pp. 1545–1552.
Yu, B. M., Cunningham, J. P., Shenoy, K. V. & Sahani, M. (2008). Neural decoding of movements: from linear to nonlinear trajectory models. In Neural Information Processing, New York: Springer, pp. 586–595.
Yuan, K., Girolami, M. & Niranjan, M. (2012). Markov chain Monte Carlo methods for state-space models with point process observations. Neural Computation 24(6), 1462–1486.
