
Identifying outcome-discriminative dynamics in multivariate physiological cohort time series

from Part II - State space methods for clinical data

Published online by Cambridge University Press: 05 October 2015

S. Nemati, Harvard University
R. P. Adams, Harvard University
Zhe Chen, New York University

Summary

Background

Physiological control systems typically involve multiple interacting variables operating in feedback loops that enhance an organism's ability to self-regulate and respond to internal and external disturbances. The resulting multivariate time series often exhibit rich dynamical patterns that are altered under pathological conditions and are therefore informative of health and disease (Ivanov et al. 1996; Costa et al. 2002; Stein et al. 2005; Nemati et al. 2011). Previous studies using nonlinear indices of heart rate (HR) variability, i.e., beat-to-beat fluctuations in HR (Ivanov et al. 1996; Costa et al. 2002), have shown that subtle changes in HR dynamics can act as an early sign of adverse cardiovascular outcomes in large patient cohorts, e.g., mortality after myocardial infarction (Stein et al. 2005). However, these studies fall short of assessing the multivariate dynamics of the vital signs (such as HR, blood pressure, and respiration), and do not yield mechanistic hypotheses for the observed deterioration of normal variability. This shortcoming is in part due to the inherent difficulty of parameter estimation in physiological time series, where one is confronted by nonlinearities (including rapid regime changes), measurement artifacts, and missing data, which are particularly prominent in ambulatory recordings (due to patient movement) and bedside monitoring (due to equipment malfunction).
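Among the nonlinear HR-variability indices cited above is multiscale entropy (Costa et al. 2002), which computes the sample entropy of progressively coarse-grained versions of a series. The NumPy sketch below is a minimal illustration, not the authors' code; function names are our own, and as a simplification the tolerance is recomputed from each coarse-grained series' standard deviation (Costa et al. fix it from the scale-1 series).

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A of length m + 1 (Chebyshev distance < r * std)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to every later template (no self-matches).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < tol))
        return count

    a, b = count_matches(m + 1), count_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Average non-overlapping windows of the given scale."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Sample entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

# Demo on white noise, whose entropy stays roughly flat across scales.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
se = sample_entropy(x)
mse = multiscale_entropy(x)
```

Structured physiological signals (e.g., healthy HR series) keep relatively high entropy across scales, whereas uncorrelated noise or overly regular pathological dynamics do not, which is what makes the multiscale profile discriminative.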

In Chapter 11, a framework was described for unsupervised discovery of shared dynamics in multivariate physiological time series from large patient cohorts. A central premise of our approach was that even within cohorts that are heterogeneous with respect to demographics, genetic factors, etc., there are common phenotypic dynamics that a patient's vital signs may exhibit, reflecting underlying pathologies (e.g., impairment of the baroreflex system) or temporary physiological state changes (e.g., postural changes or sleep/wake-related changes in physiology). We used a switching state space model (SSM), specifically a switching vector autoregressive (VAR) model, to automatically segment the time series into regions with similar dynamics, i.e., time-dependent rules describing the evolution of the system state. The state space modeling approach allows for the incorporation of physiologically constrained linear models (e.g., via linearization of the nonlinear dynamics around equilibrium points of interest) to derive mechanistic explanations of the observed dynamical patterns, for instance in terms of directional influences among the interacting variables (e.g., baroreflex gain or chemoreflex sensitivity).
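The intuition behind the switching VAR approach can be illustrated without the full switching-SSM machinery: fit a VAR(1) model in short windows and cluster windows by their estimated coefficient matrices, so that windows governed by similar dynamical rules group together. The sketch below is our own simplified stand-in (the chapter's actual model infers regime labels jointly via switching-SSM inference rather than windowed clustering); all function names and the toy two-regime simulation are illustrative assumptions.

```python
import numpy as np

def fit_var1(y):
    """Least-squares fit of y_t = A @ y_{t-1} + e_t; returns A."""
    Y, X = y[1:], y[:-1]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X @ B ≈ Y
    return B.T

def windowed_var_features(y, win=100):
    """Fit a VAR(1) in non-overlapping windows; the flattened coefficient
    matrix of each window is its 'dynamics' feature vector."""
    starts = range(0, len(y) - win + 1, win)
    return np.array([fit_var1(y[s:s + win]).ravel() for s in starts])

def kmeans(feats, k=2, iters=50, seed=0):
    """Plain k-means on the per-window VAR coefficients."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

# Toy cohort signal: two regimes with different VAR(1) dynamics.
rng = np.random.default_rng(1)

def simulate(A, n):
    y = np.zeros((n, 2))
    for t in range(1, n):
        y[t] = A @ y[t - 1] + 0.1 * rng.standard_normal(2)
    return y

A1 = np.array([[0.9, 0.0], [0.0, 0.9]])   # slow, uncoupled dynamics
A2 = np.array([[0.2, -0.5], [0.5, 0.2]])  # faster, rotational coupling
y = np.vstack([simulate(A1, 500), simulate(A2, 500)])

labels = kmeans(windowed_var_features(y, win=100))
```

A full switching SSM improves on this in two ways: regime labels are inferred per time step (not per fixed window), and a Markov prior on the switching variable discourages implausibly rapid regime changes.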

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2015


References

Bahl, L., Brown, P., De Souza, P. & Mercer, R. (1986). Maximum mutual information estimation of hidden Markov model parameters for speech recognition. In Proceedings of IEEE ICASSP, pp. 49–52.
Bengio, Y., Courville, A. & Vincent, P. (2013). Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8), 1798–1828.
Costa, M., Goldberger, A. L. & Peng, C. K. (2002). Multiscale entropy analysis of complex physiologic time series. Physical Review Letters 89(6), 068102.
Domke, J. (2013). Learning graphical model parameters with approximate marginal inference. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(10), 2454–2467.
Eaton, F. & Ghahramani, Z. (2009). Choosing a variable to clamp: approximate inference using conditioned belief propagation. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics 5, pp. 145–152.
Erhan, D., Bengio, Y., Courville, A., Manzagol, P., Vincent, P. & Bengio, S. (2010). Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research 11, 625–660.
Graves, A. (2012). Supervised Sequence Labelling with Recurrent Neural Networks, New York: Springer.
Graves, A., Fernández, S., Gomez, F. & Schmidhuber, J. (2006). Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369–376.
Guzzetti, S., Piccaluga, E., Casati, R., Cerutti, S., Lombardi, F., Pagani, M. & Malliani, A. (1988). Sympathetic predominance in essential hypertension: a study employing spectral analysis of heart rate variability. Journal of Hypertension 6(9), 711–717.
Heldt, T., Oefinger, M. B., Hoshiyama, M. & Mark, R. G. (2003). Circulatory response to passive and active changes in posture. In Proceedings of Computers in Cardiology, pp. 263–266.
Heskes, T. & Zoeter, O. (2002). Expectation propagation for approximate inference in dynamic Bayesian networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pp. 216–223.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N. & Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Processing Magazine 29(6), 82–97.
Hinton, G., Osindero, S. & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation 18(7), 1527–1554.
Ivanov, P. C., Rosenblum, M. G., Peng, C. K., Mietus, J., Havlin, S., Stanley, H. E. & Goldberger, A. L. (1996). Scaling behaviour of heartbeat intervals obtained by wavelet-based time-series analysis. Nature 383, 323–327.
Kim, M. & Pavlovic, V. (2009). Discriminative learning for dynamic state prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(10), 1847–1861.
Lafferty, J., McCallum, A. & Pereira, F. C. (2001). Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pp. 282–289.
Lasko, T. A., Denny, J. C. & Levy, M. A. (2013). Computational phenotype discovery using unsupervised feature learning over noisy, sparse, and irregular clinical data. PLoS One 8(6), e66341.
Lehman, L., Adams, R., Mayaud, L., Moody, G., Malhotra, A., Mark, R. & Nemati, S. (2014). A physiological time series dynamics-based approach to patient monitoring and outcome prediction. IEEE Journal of Biomedical and Health Informatics 18, eprint.
Marlin, B. M., Kale, D. C., Khemani, R. G. & Wetzel, R. C. (2012). Unsupervised pattern discovery in electronic health care data using probabilistic clustering models. In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, pp. 389–398.
McCallum, A., Freitag, D. & Pereira, F. C. N. (2000). Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, pp. 591–598.
Memisevic, R. (2006). An introduction to structured discriminative learning. Technical report, University of Toronto, Toronto, Canada.
Murphy, K. P. (1998). Switching Kalman filter. Technical report 98-10, Compaq Cambridge Research Laboratory.
Nemati, S., Edwards, B. A., Sands, S. A., Berger, P. J., Wellman, A., Verghese, G. C., Malhotra, A. & Butler, J. P. (2011). Model-based characterization of ventilatory stability using spontaneous breathing. Journal of Applied Physiology 111(1), 55–67.
Nemati, S., Lehman, L.-W. H., Adams, R. P. & Malhotra, A. (2012). Discovering shared cardiovascular dynamics within a patient cohort. In Proceedings of IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6526–6529.
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986). Learning representations by backpropagating errors. Nature 323, 533–536.
Stein, P. K., Domitrovich, P. P., Huikuri, H. V., Kleiger, R. E. & Investigators, C. (2005). Traditional and nonlinear heart rate variability are each independently associated with mortality after myocardial infarction. Journal of Cardiovascular Electrophysiology 16(1), 13–20.
Stoyanov, V., Ropson, A. & Eisner, J. (2011). Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In Proceedings of AISTATS, pp. 725–733.
Sutskever, I. (2013). Training recurrent neural networks. PhD thesis, University of Toronto, Toronto, Canada.
Woodland, P. C. & Povey, D. (2002). Large scale discriminative training of hidden Markov models for speech recognition. Computer Speech & Language 16(1), 25–47.
Wu, W., Black, M. J., Mumford, D., Gao, Y., Bienenstock, E. & Donoghue, J. P. (2004). Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Transactions on Biomedical Engineering 51(6), 933–942.
