
Part III - Perception Metrology

Published online by Cambridge University Press: 20 December 2018

Ehsan Samei, Duke University Medical Center, Durham
Elizabeth A. Krupinski, Emory University, Atlanta

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018


