Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 Sets
- 3 Probability
- 4 Visualising and quantifying the properties of data
- 5 Useful distributions
- 6 Uncertainty and errors
- 7 Confidence intervals
- 8 Hypothesis testing
- 9 Fitting
- 10 Multivariate analysis
- Appendix A Glossary
- Appendix B Probability density functions
- Appendix C Numerical integration methods
- Appendix D Solutions
- Appendix E Reference tables
- References
- Index
10 - Multivariate analysis
Published online by Cambridge University Press: 05 July 2013
Summary
Consider a data sample Ω, described by a set of variables x, that is composed of two (or more) populations. We are often faced with the task of identifying or separating one sub-sample from another, as these correspond to different classes or types of events. In practice it is often not possible to completely separate a sample of one class A from another class B, as was seen in the case of likelihood fits to data. A number of techniques can be used to try to optimally identify or separate a sub-sample of data from the whole, and some of these are described below. Each technique has its own benefits and disadvantages, and the final choice of the 'optimal' way to separate A from B can require subjective input from the analyst. In general this type of problem calls for multivariate analysis (MVA).
The simplest approach is to cut on the data to improve the purity of a class of events, as described in Section 10.1. More advanced classifiers, such as Bayesian classifiers, Fisher discriminants, neural networks, and decision trees, are discussed subsequently. The Fisher discriminant, described in Section 10.3, has the advantage that the coefficients required to optimally separate two populations of events are determined analytically, up to an arbitrary scale factor. The neural network (Section 10.4) and decision tree (Section 10.5) algorithms described here require a numerical optimisation to be performed.
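The analytic nature of the Fisher discriminant mentioned above can be illustrated with a short sketch: the coefficients are proportional to the inverse of the summed within-class covariance matrix applied to the difference of the class means, so no iterative optimisation is needed. The two Gaussian toy populations below are invented purely for illustration and are not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical populations A and B (assumed toy data, not from the text)
A = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=500)
B = rng.multivariate_normal([2.0, 1.0], [[1.0, 0.5], [0.5, 1.0]], size=500)

mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)
# Sum of the within-class covariance matrices
W = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
# Fisher coefficients, defined only up to an arbitrary scale factor
w = np.linalg.solve(W, mu_B - mu_A)
# Project each event onto the single discriminant axis
t_A, t_B = A @ w, B @ w
```

Projecting each event onto w collapses the multivariate problem to a one-dimensional separation variable, on which a simple cut (Section 10.1) can then be placed.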
- Statistical Data Analysis for the Physical Sciences, pp. 153-180. Publisher: Cambridge University Press. Print publication year: 2013.