
9 - Maximum Likelihood and Nonlinear Regression

Published online by Cambridge University Press: 01 June 2011

John F. Monahan, North Carolina State University

Summary

Introduction

Maximum likelihood is generally regarded as the best all-purpose approach for statistical analysis. Outside of the most common statistical procedures, when the “optimal” or “usual” method is unknown, most statisticians follow the principle of maximum likelihood for parameter estimation and statistical hypothesis tests. Bayesian statistical methods also rely heavily on maximum likelihood. The main reason for this reliance is that following the principle of maximum likelihood usually leads to very reasonable and effective estimators and tests. From a theoretical viewpoint, under very mild conditions, maximum likelihood estimators (MLEs) are consistent, asymptotically unbiased, and efficient. Moreover, MLEs are invariant under reparameterizations or transformations: the MLE of a function of the parameter is the function of the MLE. From a practical viewpoint, the estimates and test statistics can be constructed without a great deal of analysis, and large-sample standard errors can be computed. Overall, experience has shown that maximum likelihood works well most of the time.
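The invariance property mentioned above can be made concrete with a small toy example (my own illustration, not from the chapter): for i.i.d. exponential data with rate λ, the MLE is λ̂ = n / Σxᵢ, and by invariance the MLE of the mean μ = 1/λ is simply 1/λ̂, the sample mean — no separate maximization is needed.

```python
# Hypothetical illustration of MLE invariance (not taken from the text).
# Exponential model: L(lambda) = lambda^n * exp(-lambda * sum(x)),
# so the MLE is lambda_hat = n / sum(x).

def exp_rate_mle(x):
    """MLE of the exponential rate parameter for i.i.d. data x."""
    return len(x) / sum(x)

x = [0.5, 1.2, 0.8, 2.0, 1.5]          # made-up sample
lam_hat = exp_rate_mle(x)               # MLE of the rate
mu_hat = 1.0 / lam_hat                  # MLE of the mean 1/lambda, by invariance
# By invariance this coincides with the sample mean:
assert abs(mu_hat - sum(x) / len(x)) < 1e-12
```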

The biggest computational challenge comes from the naive expectation that any statistical problem can be solved if the maximum of some function is found. Instead of relying solely on the unconstrained optimization methods presented in Chapter 8 to meet this unrealistic expectation, the nature of the likelihood function can be exploited in ways that are more effective for computing MLEs. Since the exploitable properties of likelihood functions follow from the large-sample theory, this chapter will begin with a summary of the consistency and asymptotic normality properties of MLEs.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2011


