  • Print publication year: 2012
  • Online publication date: November 2012

Part II - Statistical Problems, Approaches, and Solutions in Real-World Contexts

  • Edited by Barbara Kelly, University of Strathclyde, Daniel F. Perkins, Pennsylvania State University
  • Publisher: Cambridge University Press
  • pp. 35–108

References

Altman, D. G., & Doré, C. J. (1990). Randomisation and baseline comparisons in clinical trials. Lancet 335, 149–53.
Bamberger, M. (2009). Strengthening impact evaluation designs through the reconstruction of baseline data. Journal of Development Effectiveness 1(1), 37–59.
Belsky, J., Melhuish, E., Barnes, J., Leyland, A. H. & Romaniuk, H. (2006). Effects of Sure Start local programmes on children and families: Early findings from a quasi-experimental, cross-sectional study. British Medical Journal 332, 1476–8.
Bonell, C., Oakley, A., Hargreaves, J., Strange, V. & Rees, R. (2006). Assessment of generalisability in trials of health interventions: Suggested framework and systematic review. British Medical Journal 333, 346–9.
Brown, C. A., & Lilford, R. J. (2006). The stepped wedge trial design: A systematic review [doi:10.1186/1471-2288-6-54]. BMC Medical Research Methodology 6, 54.
Bumbarger, B., & Perkins, D. (2008). After randomised trials: Issues related to the dissemination of evidence-based interventions. Journal of Children’s Services 3(2), 55–64.
Bywater, T., & Axford, N. (2010). Strategies for targeting and recruiting families to randomised controlled trials of evidenced based parent programmes in community settings. Paper presented at Society for Prevention Research, 18th Annual Meeting, ‘Cells to Society: Prevention at All Levels’, June 1–4, 2010, Denver, CO.
Bywater, T., Hutchings, J., Daley, D., Whitaker, C., Yeo, S. T., Jones, K., Eames, C. & Tudor Edwards, R. (2009). Long-term effectiveness of a parenting intervention in Sure Start services in Wales for children at risk of developing conduct disorder [doi:10.1192/bjp.bp.108.056531]. British Journal of Psychiatry 195, 318–24.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences, 2nd ed. Hillsdale, NJ: Erlbaum.
Collins, R., & MacMahon, S. (2001). Reliable assessment of the effects of treatment on mortality and major morbidity. Lancet 357, 373–80.
Donner, A., & Klar, N. (2000). Design and analysis of cluster randomization trials in health research. London: Arnold.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology 41, 327–50.
Eames, C., Daley, D., Hutchings, J., Hughes, C., Jones, K., Martin, P. & Bywater, T. (2008). The Leader Observation Tool (LOT): A process skills treatment fidelity measure for the Incredible Years Parenting Programme. Child: Care, Health and Development 34(3), 391–400.
Eccles, M., Grimshaw, J., Campbell, M. & Ramsay, C. (2003). Research designs for studies evaluating the effectiveness of change and improvement strategies. Quality and Safety in Health Care 12, 47–52.
Edwards, R. T., Ó Céilleachair, A., Bywater, T., Hughes, D. & Hutchings, J. (2007). A parenting programme for children at risk of developing conduct disorder: A cost-effectiveness analysis. British Medical Journal 334, 682.
Epstein, M. H., Kutash, K. & Duchnowski, A. J. (2005). Outcomes for children and youth with emotional and behavioural disorders and their families: Programs and evaluation best practices, 2nd ed. Austin, TX: PRO-ED, Inc.
Evans, C., Margison, F. & Barkham, M. (1998). The contribution of reliable and clinically significant change methods to evidence-based mental health. Evidence Based Mental Health 1, 70–2.
Forgatch, M., & DeGarmo, D. (1999). Parenting through change: An effective parenting training program for single mothers. Journal of Consulting and Clinical Psychology 67, 711–24.
Gardner, F., Burton, J. & Klimes, I. (2006). Randomised controlled trial of a parenting intervention in the voluntary sector for reducing child conduct problems: Outcomes and mechanisms of change. Journal of Child Psychology and Psychiatry 47, 1123–32.
Gardner, F., Hutchings, J., Bywater, T. & Whitaker, C. (2010). Who benefits and how does it work? Moderators and mediators of outcome in an effectiveness trial of a parenting intervention. Journal of Clinical Child and Adolescent Psychology 39(4), 1–13.
Gitlin, A., & Smyth, J. (1989). Teacher evaluation: Critical education and transformative alternatives. Lewes, UK: Falmer Press.
Goodman, R. (1997). The Strengths and Difficulties Questionnaire: A research note. Journal of Child Psychology, Psychiatry, and Allied Disciplines 38(5), 581–6.
Greenberg, M. T., Kusche, C. A., Cook, E. T. & Quamma, J. P. (1995). Promoting emotional competence in school-aged children: The effects of the PATHS Curriculum. Development and Psychopathology 7, 117–36.
Harris, D. N. (2009). Toward policy-relevant benchmarks for interpreting effect sizes: Combining effects with costs. Educational Evaluation and Policy Analysis 31, 3–29.
Hill, C. J., Bloom, H. S., Black, A. R. & Lipsey, M. W. (2007). Empirical benchmarks for interpreting effect sizes in research. New York: Manpower Demonstration Research Corporation.
Hinshaw, S. P. (2002). Intervention research, theoretical mechanisms, and causal processes related to externalizing behavior patterns. Development and Psychopathology 14, 789–818.
Hollis, S., & Campbell, F. (1999). What is meant by intention to treat analysis? Survey of published randomised, controlled trials. British Medical Journal 319, 670–4.
Hubble, M. A., Duncan, B. L. & Miller, S. D. (1999). The heart and soul of change: What works in therapy. Washington: American Psychological Association.
Hutchings, J., Bywater, T. & Daley, D. (2007). Pragmatic randomised, controlled trial of a parenting intervention in Sure Start services for pre-school children at risk of developing conduct disorder: How and why did it work? Journal of Children’s Services 2, 4–14.
Hutchings, J., Bywater, T., Eames, C. & Martin, P. (2008). Implementing child mental health interventions in service settings: Lessons from three pragmatic randomised, controlled trials in Wales. Journal of Children’s Services 3(2), 17–27.
Hutchings, J., Bywater, T., Daley, D., Gardner, F., Whitaker, C., Jones, K., Eames, C. & Edwards, R. T. (2007). A pragmatic randomised, controlled trial of a parenting intervention in Sure Start services for children at risk of developing conduct disorder [doi:10.1136/bmj.39126.620799.55]. British Medical Journal, March 7, 2007.
Kinsman, J., Nakiyingi, J., Kamali, A., Carpenter, L., Quigley, M., Pool, R. & Whitworth, J. (2001). Evaluation of a comprehensive school-based AIDS education programme in rural Masaka, Uganda. Health Education Research 16, 85–100.
Kraemer, H., Wilson, G., Fairburn, C. & Agras, W. (2002). Mediators and moderators of treatment effects. Archives of General Psychiatry 59, 877–83.
Lambert, M. J. (1992). Psychotherapy outcome research: Implications for integrative and eclectic therapists, in J. C. Norcross & M. Goldfried (eds.), Handbook of psychotherapy integration. New York: Basic Books.
Levin, H. M., & McEwan, P. J. (2001). Cost-effectiveness analysis, 2nd ed. London: Sage.
McKee, M., Britton, A., Black, N., McPherson, K., Sanderson, C. & Bain, C. (1999). Interpreting the evidence: Choosing between randomised and non-randomised studies. British Medical Journal 319, 312–5.
Meadows, P. (2006). Cost-effectiveness of implementing SSLPs: An interim research report (NESS/2006/FR/015). London: Sure Start: Evidence & Research.
Medical Research Council (2009). Developing and evaluating complex interventions: New guidance. Retrievable at www.mrc.ac.uk/complexinterventionsguidance.
Mihalic, S., Fagan, A., Irwin, K., Ballard, D. & Elliott, D. (2002). Blueprints for violence prevention replications: Factors for implementation success. Center for the Study and Prevention of Violence, University of Colorado, Boulder.
Network of Networks on Impact Evaluation (NONIE) (2009). Impact evaluations and development: NONIE guidance on impact evaluation. Cairo International Evaluation Conference, Cairo, Egypt.
Nutley, S. M., & Davies, H. T. O. (2000). Making a reality of evidence-based practice: Some lessons from the diffusion of innovations. Public Money and Management 20(4), 35–42.
Oakley, A., Strange, V., Bonell, C., Allen, E., Stephenson, J. & the RIPPLE Study Team (2006). Process evaluation in randomised, controlled trials of complex interventions. British Medical Journal 332, 413–16.
Reid, M., Webster-Stratton, C. & Baydar, N. (2004). Halting the development of conduct problems in Head Start children: The effects of parent training. Journal of Clinical Child and Adolescent Psychology 33, 279–91.
Rice, J. K. (2002). Cost analysis in education policy research: A comparative analysis across fields of public policy. In H. M. Levin & P. J. McEwan (eds.), Cost-effectiveness and educational policy (pp. 21–35). Larchmont, NY: Eye on Education.
Rogers, A., & Smith, M. K. (2006). Evaluation: Learning what matters. London: Rank Foundation/YMCA George Williams College. Retrievable as a pdf at www.ymca.org.uk/rank/conference/evaluation_learning_what_matters.pdf.
Torgerson, D. J., & Torgerson, C. J. (2008). Designing randomised trials in health, education and the social sciences. Hampshire, UK: Palgrave Macmillan.
Webster-Stratton, C. (1998). Preventing conduct problems in Head Start children: Strengthening parenting competencies. Journal of Consulting and Clinical Psychology 66, 715–30.
Webster-Stratton, C. (1981). The incredible years: The parents and children series. Retrievable at http://www.incredibleyears.com/.
Welsh Assembly Government (2005). Parenting action plan: Supporting mothers, fathers and carers with raising children in Wales (DfTE Information Document No. 054–05). Swansea: Welsh Assembly Government.

References

Abramson, J. H. (1990). Survey methods in community medicine, 4th ed. Edinburgh: Churchill Livingstone.
Abramson, J. H., & Abramson, Z. H. (2008). Research methods in community medicine: Surveys, epidemiological research, programme evaluation, clinical trials, 6th ed. Chichester: Wiley.
Anastas, J. W., & MacDonald, M. L. (1994). Research design for social work and the human services. Lexington, MA: Lexington Books.
Barlow, D. H., & Hersen, M. (1984). Single case experimental designs: Strategies for studying behaviour change, 2nd ed. London: Allyn & Bacon.
Begg, C., Cho, M., Eastwood, S., Horton, R., Moher, D., Olkin, I., Pitkin, R., Rennie, D., Schulz, K. F., Simel, D. & Stroup, D. F. (1996). Improving the quality of reporting randomised controlled trials: The CONSORT statement. Journal of the American Medical Association 276(8), 637–9.
Blank, M., Rose, S. A. & Berlin, L. J. (1978). Preschool Language Assessment Instrument: The language of learning in practice. Orlando, FL: Grune and Stratton.
Boyle, J., & McLellan, E. (1998). The Early Language Skills Checklist: Observation-based assessment for early education. London: Hodder & Stoughton.
Boyle, J., McCartney, E., O’Hare, A. & Forbes, J. (2009). Direct versus indirect and individual versus group modes of language therapy for children with primary language impairment. International Journal of Language and Communication Disorders 44(6), 826–46.
Brewer, J., & Hunter, A. (1989). Multi-method research: A synthesis of styles. Newbury Park, CA: Sage.
Butterfield, L. D., Borgen, W. A., Amundson, N. E. & Maglio, A. S. (2005). Fifty years of the critical incident technique: 1954–2004 and beyond. Qualitative Research 5, 475–97.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (ed.), Handbook of research on teaching. Chicago: Rand McNally.
Cook, T. D., & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I. & Petticrew, M. (2008). Developing and evaluating complex interventions: The new Medical Research Council guidance. British Medical Journal 337, 979–83.
Davis, P. (2006). Critical incident technique: A learning intervention for organizational problem solving. Development and Learning in Organizations 20(2), 13–16.
Dickson, K., Marshall, M., Boyle, J., McCartney, E., O’Hare, A. & Forbes, J. F. (2009). Cost analysis of direct versus indirect and individual versus group modes of manual based speech and language therapy for primary aged school children with primary language impairment. International Journal of Language and Communication Disorders 44(3), 369–81.
Ehri, L., Nunes, S., Stahl, S. & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel’s meta-analysis. Review of Educational Research 71, 393–447.
Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing 12, 77–84.
Field, A. P. (2009). Discovering statistics using SPSS: And sex and drugs and rock ‘n’ roll, 3rd ed. London: Sage.
Fixsen, D. L., Blase, K. A., Naoom, S. F. & Wallace, F. (2009). Core implementation components. Research on Social Work Practice 19(5), 531–40.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin 51(4), 327–58.
Fox, M. (2003). Opening Pandora’s box: Evidence-based practice for educational psychologists. Educational Psychology in Practice 19(2), 91–102.
Hammersley, M. (2001). Some questions about evidence-based practice in education. Paper presented at the symposium on ‘Evidence-based practice in education’ at the Annual Conference of the British Educational Research Association, University of Leeds, UK, September 13–15, 2001.
Jadad, A. R., & Enkin, M. W. (2007). Randomised, controlled trials: Questions, answers and musings, 2nd ed. Oxford, UK: Blackwell Publishing/BMJ Books.
Kazdin, A. E. (1984). Statistical analyses for single-case experimental designs. In D. H. Barlow and M. Hersen (eds.), Single case experimental designs: Strategies for studying behavior change, 2nd ed. (Chap. 9, pp. 285–324). London: Allyn & Bacon.
Kratochwill, T. R. (ed.) (1978). Single subject research: Strategies for evaluating change. New York: Academic Press.
Lindquist, E. F. (1940). Statistical analysis in educational research. Boston: Houghton Mifflin.
Mackenzie, M., O’Donnell, C., Halliday, E., Sridharan, S. & Platt, S. (2010). Evaluating complex interventions: One size does not fit all. British Medical Journal 340, 401–3.
McCall, W. A. (1923). How to experiment in education. New York: Macmillan.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
Robinson, P. W., & Foster, D. F. (1979). Experimental psychology: A small-n approach. New York: Harper & Row.
Robson, C. (2002). Real world research, 2nd ed. Oxford, UK: Blackwell Publishing.
Rock, M. L. (2005). Use of strategic self-monitoring to enhance academic engagement, productivity, and accuracy of students with and without exceptionalities. Journal of Positive Behavior Interventions 7(1), 3–17.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W. & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM. Edinburgh: Churchill Livingstone.
Todman, J. B., & Dugard, P. (2001). Single-case and small-n experimental designs: A practical guide to randomization tests. Mahwah, NJ: Erlbaum.
Torgerson, C. J., Brooks, G. & Hall, J. (2006). A systematic review of the research literature on the use of phonics in the teaching of reading and spelling. London: DfES Research Report 711.
Torgerson, D. J., & Torgerson, C. J. (2008). Designing randomised trials in health, education and the social sciences: An introduction. Basingstoke, Hampshire: Palgrave Macmillan.
Zhang, X., & Tomblin, J. B. (2003). Explaining and controlling regression to the mean in longitudinal research designs: Tutorial. Journal of Speech, Language, and Hearing Research 46(6), 1340–51.
Zwarenstein, M., Treweek, S., Gagnier, J. J., Altman, D. G., Tunis, S., Haynes, B., Oxman, A. D. & Moher, D., for the CONSORT and Pragmatic Trials in Healthcare (Practihc) Groups (2008). Improving the reporting of pragmatic trials: An extension of the CONSORT statement. British Medical Journal 337, 1–8.

References

Andrews, G. (1999). Efficacy, effectiveness and efficiency in mental health service delivery. Australian and New Zealand Journal of Psychiatry 33, 316–22.
Babbie, E. (2004). The practice of social research, 10th ed. Belmont, CA: Wadsworth.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology 51, 1173–82.
Beckett, D. (2000). Just-in-time training as anticipative action and as inferential understanding. In C. Symes (ed.), Proceedings [of the] International Conference on Working Knowledge: Productive learning at work (pp. 15–20). Sydney, Australia: University of Technology, Research Centre for Vocational Education and Training.
Bero, L. A., Grilli, R., Grimshaw, J. M., Harvey, E., Oxman, A. D. & Thomson, M. A. (1998). Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal 317, 465–8.
Bickman, L., & Rog, D. J. (eds.). (1998). Handbook of applied social research methods. Thousand Oaks, CA: Sage.
Blettner, M., Sauerbrei, W., Schlehofer, B., Scheuchenpflug, T. & Friedenreich, C. (1999). Traditional reviews, meta-analyses and pooled analyses in epidemiology. International Journal of Epidemiology 28, 1–9.
Bransford, J. D., Brown, A. L., Cocking, R. R., Donovan, M. S. & Pellegrino, J. W. (eds.). (2000). How people learn: Brain, mind, experience, and school. Washington: National Academy Press.
Britten, N., Campbell, R., Pope, C., Donovan, J., Morgan, M. & Pill, R. (2002). Using meta-ethnography to synthesize qualitative research: A worked example. Journal of Health Services Research and Policy 7, 209–15.
Brown, C. H., Wang, W., Kellam, S. G., Muthén, B. O., Petras, H., Toyinbo, P., Poduska, J., Ialongo, N., Wyman, P. A., Chamberlain, P., Sloboda, Z., MacKinnon, D. P., Windham, A. & The Prevention Science and Methodology Group (2008). Methods for testing theory and evaluating impact in randomized field trials: Intent-to-treat analyses for integrating the perspectives of person, place, and time. Drug and Alcohol Dependence 95, S74–104.
Brown, R. (1986). Suggestive-accelerative learning and teaching in special education. Journal of the Society for Accelerative Learning and Teaching 11, 13–22.
Bushman, B. J., & Wang, M. C. (2009). Vote-counting procedures in meta-analysis. In H. Cooper, L. V. Hedges and J. C. Valentine (eds.), The handbook of research synthesis and meta-analysis (pp. 207–20). New York: Russell Sage Foundation.
Cain, D. W., Rudd, L. C. & Saxon, T. F. (2007). Effects of professional development training on joint attention engagement in low-quality childcare centers. Early Child Development and Care 177, 159–85.
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J. & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science 2, 40. Retrieved February 19, 2008, from http://www.implementationscience.com/content/pdf/1748-5908-2-40.pdf.
Chalmers, I., & Altman, D. G. (eds.). (1995). Systematic reviews. London: BMJ Publishing.
Chalmers, I., Hedges, L. V. & Cooper, H. (2002). A brief history of research synthesis. Evaluation and the Health Professions 25, 12–37.
Clark, R. E. (2009). Translating research into new instructional technologies for higher education: The active ingredient process. Journal of Computing in Higher Education 21, 4–18.
Clow, P., Dunst, C. J., Trivette, C. M. & Hamby, D. W. (2005). Educational outreach (academic detailing) and physician prescribing practices. Cornerstones 1(1), 1–9. Available at http://tracecenter.info/cornerstones/cornerstones_vol1_no1.pdf.
Cole, T. B., & Glass, R. M. (2004). Learning associated with participation in journal-based continuing medical education. Journal of Continuing Education in the Health Professions 24, 205–12.
Cooper, H., Hedges, L. V. & Valentine, J. C. (2009). The handbook of research synthesis and meta-analysis, 2nd ed. New York: Russell Sage Foundation.
Cooper, H. M. (1984). The integrative research review: A systematic approach. Beverly Hills, CA: Sage.
Coscarelli, W. C., & White, G. P. (1982). Applying the ID process to the guided design teaching strategy. Journal of Instructional Development 5(4), 2–6.
Craven, H. H. (1990). The relationship of peer coaching to the frequency of use of effective instructional behaviors in in-service teachers in three selected junior high schools (UMI No. 9028508). Dissertation Abstracts International 51(5), 1491A.
Davies, P. (2000). The relevance of systematic reviews to educational policy and practice. Oxford Review of Education 26, 365–78.
Davies, P. (2006). What is needed from research synthesis from a policy-making perspective? In J. Popay (ed.), Moving beyond effectiveness in evidence synthesis: Methodological issues in the synthesis of diverse sources of evidence (pp. 97–103). London: National Institute for Health and Clinical Excellence.
Davis, D., O’Brien, M. A. T., Freemantle, N., Wolf, F. M., Mazmanian, P. E. & Taylor-Vaisey, A. (1999). Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? Journal of the American Medical Association 282, 867–74.
Davis, N. (2005). Just-in-time support for educational leadership. In T. J. van Weert (ed.), Education and the knowledge society: Information technology supporting human development (pp. 271–7). Boston: Kluwer Academic.
Dixon, W. (ed.) (1992). BMDP statistical software manual, Vol. 2. Berkeley: University of California Press.
Dochy, F., Segers, M., Van den Bossche, P. & Gijbels, D. (2003). Effects of problem-based learning: A meta-analysis. Learning and Instruction 13, 533–68.
Donovan, M. S., Bransford, J. D. & Pellegrino, J. W. (eds.). (1999). How people learn: Bridging research and practice. Washington: National Academy Press.
DuBay, W. H. (2007). Smart language: Readers, readability, and the grading of text. Costa Mesa, CA: Impact Information.
Dunning, M., Abi-Aad, G., Gilbert, D., Hutton, H. & Brown, C. (1999). Experience, evidence and everyday practice: Creating systems for delivering effective healthcare. London: King’s Fund.
Dunst, C. J., & Raab, M. (2010). Practitioners’ self-evaluations of contrasting types of professional development. Journal of Early Intervention 32(4), 239–54.
Dunst, C. J., & Trivette, C. M. (2009a). Let’s be PALS: An evidence-based approach to professional development. Infants and Young Children 22(3), 164–75.
Dunst, C. J., & Trivette, C. M. (2009b). Using research evidence to inform and evaluate early childhood intervention practices. Topics in Early Childhood Special Education 29, 40–52.
Dunst, C. J., Trivette, C. M. & Cutspec, P. A. (2007). Toward an operational definition of evidence-based practices (Winterberry Research Perspectives Vol. 1, No. 1). Asheville, NC: Winterberry Press.
Dunst, C. J., Trivette, C. M. & Deal, A. G. (2011). Effects of in-service training on early intervention practitioners’ use of family systems intervention practices in the USA. Professional Development in Education 37(2), 181–96.
Dunst, C. J., Trivette, C. M. & Hamby, D. W. (2006). Family support program quality and parent, family and child benefits (Winterberry Monograph Series). Asheville, NC: Winterberry Press.
Dunst, C. J., Trivette, C. M. & Hamby, D. W. (2008). Research synthesis and meta-analysis of studies of family-centered practices (Winterberry Monograph Series). Asheville, NC: Winterberry Press.
Dunst, C. J., Trivette, C. M. & Hamby, D. W. (2010). Meta-analysis of the effectiveness of four adult learning methods and strategies. International Journal of Continuing Education and Lifelong Learning 3(1), 91–112.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology 41, 327–50.
Eby, L. T., Allen, T. D., Evans, S. C., Ng, T. & DuBois, D. L. (2008). Does mentoring matter? A multidisciplinary meta-analysis comparing mentored and non-mentored individuals. Journal of Vocational Behavior 72, 254–67.
Eccles, M. P., Armstrong, D., Baker, R., Cleary, K., Davies, H., Davies, S., Glasziou, P., Ilott, I., Kinmonth, A.-L., Leng, G., Logan, S., Marteau, T., Michie, S., Rogers, H., Rycroft-Malone, J. & Sibbald, B. (2009). An implementation research agenda. Implementation Science 4, 18–25.
Elmes, D. G., Kantowitz, B. H. & Roediger, H. L., III. (1992). Research methods in psychology, 4th ed. St. Paul, MN: West Publishing.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M. & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida. Retrieved October 8, 2007, from http://www.fpg.unc.edu/~nirn/resources/publications/Monograph/pdf/Monograph_full.pdf.
Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., Moscicki, E. K., Schinke, S., Valentine, J. C. & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science 6, 151–75.
Fordis, M., King, J. E., Ballantyne, C. M., Jones, P. H., Schneider, K. H., Spann, S. J., Greenberg, S. B. & Greisinger, A. J. (2005). Comparison of the instructional efficacy of Internet-based CME with live interactive CME workshops: A randomized, controlled trial. Journal of the American Medical Association 294, 1043–51.
Foster, S. L., & Mash, E. J. (1999). Assessing social validity in clinical treatment research: Issues and procedures. Journal of Consulting and Clinical Psychology 67, 308–19.
Glasgow, R. E., Lichtenstein, E. & Marcus, A. C. (2003). Why don’t we see more translation of health promotion research into practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health 93, 1261–7.
Graham, I. D., Logan, J., Harrison, M. B., Straus, S. E., Tetroe, J., Caswell, W. & Robinson, N. (2006). Lost in knowledge translation: Time for a map? Journal of Continuing Education in the Health Professions 26, 13–24.
Gutiérrez, R., & Slavin, R. E. (1992). Achievement effects of the non-graded elementary school: A best evidence synthesis. Review of Educational Research 62, 333–76.
Hancock, B. W., Coscarelli, W. C. & White, G. P. (1983). Critical thinking and content acquisition using a modified guided design process for large course sections. Educational and Psychological Research 3, 139–49.
Hannes, K. (2010, June). Qualitative evidence synthesis (QES). Presentation at the Systematic Review Workshop of the Campbell Collaboration, Leuven, Belgium.
Hargreaves, A., & Dawe, R. (1990). Paths of professional development: Contrived collegiality, collaborative culture, and the case of peer coaching. Teaching and Teacher Education 6, 227–41.
Hedges, L. V. (1994). Fixed effects models. In H. Cooper & L. V. Hedges (eds.), The handbook of research synthesis (pp. 285–99). New York: Russell Sage Foundation.
Hedges, L. V. (2008). What are effect sizes, and why do we need them? Child Development Perspectives 2, 167–71.
Hobma, S. O., Ram, P. M., van Merode, F., van der Vleuten, C. & Grol, R. (2004). Feasibility, appreciation and costs of a tailored continuing professional development approach for general practitioners. Quality in Primary Care 12, 271–8.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.
Jones, M. J. (2004). Application of systematic review methods to qualitative research: Practical issues. Journal of Advanced Nursing 48, 271–8.
Kavale, K. A. (2001). Meta-analysis: A primer. Exceptionality 9, 177–83.
Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist 63, 146–59.
King, W. R., & He, J. (2005). Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems 16, 665–86.
Lacoursiere, Y., Snell, L., McClaran, J. & Duarte-Franco, E. (1997). Workshop versus lecture in CME: Does physician learning method preference make a difference? Journal of Continuing Education in the Health Professions 17, 141–7.
Leat, D., Lofthouse, R. & Wilcock, A. (2006). Teacher coaching: Connecting research and practice. Teaching Education 17, 329–39.
Leviton, L. C., & Lipsey, M. W. (2007). A big chapter about small theories: Theory as method: Small theories of treatments. New Directions for Evaluation 114, 27–62.
Light, R. J., & Smith, P. V. (1971). Accumulating evidence: Procedures for resolving contradictions among different research studies. Harvard Educational Review 41, 429–71.
Lipsey, M. W. (1993). Theory as method: Small theories of treatments. New Directions for Program Evaluation 57, 5–38.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (Applied Social Research Methods Series Vol. 49). Thousand Oaks, CA: Sage.
Lucas, S. M., & Cutspec, P. A. (2007). The role and process of literature searching in the preparation of a research synthesis (Winterberry Research Perspectives Vol. 1, No. 10). Asheville, NC: Winterberry Press.
Ludlow, B. L., Faieta, J. C. & Wienke, W. D. (1989). Training teachers to supervise their peers. Teacher Education and Special Education 12, 27–32.
Marley, J. (2000). Efficacy, effectiveness, efficiency. Australian Prescriber 23, 114–15.
Meehan, M. L., Wood, C. L., Hughes, G. K., Cowley, K. S. & Thompson, J. A. (2004, November 3–6). Measuring treatment integrity: Testing a multiple-component, multiple-method intervention implementation evaluation model. Paper presented at the Annual Conference of the American Evaluation Association, Atlanta, GA.
Meier, D. (2000). The accelerated learning handbook: A creative guide to designing and delivering faster, more effective training programs. New York: McGraw-Hill.
Metzger, D. S., Koblin, B., Turner, C., Navaline, H., Valenti, F., Holte, S., Gross, M., Sheon, A., Miller, H., Cooley, P. & Seage, G. R., III. (2000). Randomized, controlled trial of audio computer-assisted self-interviewing: Utility and acceptability in longitudinal studies. American Journal of Epidemiology 152, 99–106.
Miller, D. C., & Salkind, N. J. (2002). Handbook of research design and social measurement, 6th ed. Thousand Oaks, CA: Sage.
National Institutes of Health (2007). Dissemination and implementation research in health. Rockville, MD: NIH. Retrieved September 13, 2010, from http://grants.nih.gov/grants/guide/pa-files/PAR-07-086.html.
National Research Council (2002). Scientific research in education. Washington: National Academy Press.
Paul, C. L., Redman, S. & Sanson-Fisher, R. W. (2004). A cost-effective approach to the development of printed materials: A randomized, controlled trial of three strategies. Health Education Research 19, 698–706.
Pazirandeh, M. (2000). Measuring continuing medical education effectiveness and its ramification in a community hospital. Journal of Continuing Education in the Health Professions 20, 176–80.
Popay, J. (ed.) (2006). Moving beyond effectiveness in evidence synthesis: Methodological issues in the synthesis of diverse sources of evidence. London: National Institute for Health and Clinical Excellence.
Raab, M., Dunst, C. J. & Trivette, C. M. (2010). Adult learning process for promoting caregiver adoption of everyday child language learning practices: Revised and updated. Practically Speaking 2(1), 1–8.
Redding, J. C., & Kamm, R. M. (1999). Just-in-time staff development: One step to the learning organization. NASSP Bulletin 83(604), 28–34.
Rohrbach, L. A., Grana, R., Sussman, S. & Valente, T. W. (2006). Type II translation: Transporting prevention interventions from research to real-world settings. Evaluation and the Health Professions 29, 302–33.
Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (eds.), The handbook of research synthesis (pp. 231–44). New York: Russell Sage Foundation.
Rosenthal, R., Rosnow, R. L. & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. New York: Cambridge University Press.
Schöpfel, J., & Farace, D. J. (2010). Grey literature. In M. J. Bates & M. N. Maack (eds.), Encyclopedia of Library and Information Sciences, 3rd ed. (pp. 2029–39). Boca Raton, FL: Taylor & Francis.
Shadish, W. R., Cook, T. D. & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Shadish, W. R., & Haddock, C. K. (1994). Combining estimates of effect size. In H. Cooper & L. V. Hedges (eds.), The handbook of research synthesis (pp. 261–81). New York: Russell Sage Foundation.
Shadish, W. R., Jr., & Sweeney, R. B. (1991). Mediators and moderators in meta-analysis: There’s a reason we don’t let dodo birds tell us which psychotherapies should have prizes. Journal of Consulting and Clinical Psychology 59, 883–93.
Sibley, J. C., Sackett, D. L., Neufeld, V., Gerrard, B., Rudnick, K. V. & Fraser, W. (1982). A randomized trial of continuing medical education. New England Journal of Medicine 306, 511–15.
Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews. Educational Researcher 15(9), 5–11.
Slavin, R. E. (1995). Best evidence synthesis: An intelligent alternative to meta-analysis. Journal of Clinical Epidemiology 48, 9–18.
Slavin, R. E., & Cheung, A. (2005). A synthesis of research on language of reading instruction for English language learners. Review of Educational Research 75, 247–84.
Spillane, J. P., Reiser, B. J. & Reimer, T. (2002). Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research 72, 387–431.
Spoth, R., Rohrbach, L., Hawkins, D., Greenberg, M., Pentz, M., Robertson, E. & Sloboda, Z. (2008, May 19). Type 2 translational research: Overview and definitions: Introduction to Mapping Advances in Prevention Science (MAPS) II. Fairfax, VA: Society for Prevention Research. Retrieved May 23, 2010, from http://www.preventionscience.org/commlmon.php.
Suri, H. (2000). A critique of contemporary methods of research synthesis. Post-Script 1, 49–55.
Tansella, M., & Thornicroft, G. (2009). Implementation science: Understanding the translation of evidence into practice. British Journal of Psychiatry 195, 283–5.
Thompson, B. (2005). Replicability of results. In B. Everitt & D. C. Howell (eds.), Encyclopedia of statistics in behavioral science. Hoboken, NJ: Wiley.
Tornatzky, L. G., & Klein, K. J. (1982). Innovation characteristics and innovation adoption-implementation: A meta-analysis of findings. IEEE Transactions on Engineering Management EM-29, 28–43.
Trivette, C. M., Dunst, C. J., Hamby, D. W. & O’Herin, C. E. (2009). Characteristics and consequences of adult learning methods and strategies (Winterberry Research Syntheses Vol. 2, No. 2). Asheville, NC: Winterberry Press.
van Doesum, K. T. M., Riksen-Walraven, J. M., Hosman, C. M. H. & Hoefnagels, C. (2008). A randomized, controlled trial of a home-visiting intervention aimed at preventing relationship problems in depressed mothers and their infants. Child Development 79, 547–61.
Wales, C. E., & Stager, R. A. (1978). The guided design approach. Englewood Cliffs, NJ: Educational Technology Publications.
Whitten, P., Ford, D. J., Davis, N., Speicher, R. & Collins, B. (1998). Comparison of face-to-face versus interactive video continuing medical education delivery modalities. Journal of Continuing Education in the Health Professions 18, 93–9.
Woolf, S. H. (2008). The meaning of translational research and why it matters. Journal of the American Medical Association 299, 211–13.

References

Bennett, J., Hogarth, S., Lubben, F., Campbell, B. & Robinson, A. (2010). Talking science: The research evidence on the use of small group discussions in science teaching. International Journal of Science Education 32(1), 69–95.
Bennett, J., Lubben, F. & Hogarth, S. (2007). Bringing science to life: A synthesis of the research evidence on the effects of context-based and STS approaches to science teaching. Science Education 91(3), 347–70.
Bennett, J., Lubben, F., Hogarth, S. & Campbell, B. (2005). Systematic reviews of research in science education: Rigour or rigidity? International Journal of Science Education 27(4), 387–406.
Bennett, J., Lubben, F., Hogarth, S., Campbell, B. & Robinson, A. (2004a). A systematic review of the nature of small-group discussions in science teaching aimed at improving students’ understanding of evidence. In Research evidence in education library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Bennett, J., Lubben, F., Hogarth, S. & Campbell, B. (2004b). A systematic review of the use of small-group discussions in science teaching with students aged 11–18, and their effects on students’ understanding in science or attitude to science. In Research evidence in education library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Bennett, J., Lubben, F. & Hogarth, S. (2003). A systematic review of the effects of context-based and Science-Technology-Society (STS) approaches to the teaching of secondary science. In Research evidence in education library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Brown, A. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences 2(2), 141–78.
Cobb, P., Confrey, J., diSessa, A., Lehrer, R. & Schauble, L. (2003). Design experiments in educational research. Educational Researcher 32(1), 9–13.
Cochrane, A. (1972). Effectiveness and efficiency: Random reflections on the health services. London: Nuffield Provincial Hospitals Trust.
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York: Academic Press.
Collins, A. (1993). Toward a design science of education. In E. Scanlon and T. O’Shea (eds.), New directions in educational technology. New York: Springer-Verlag.
Cooper, H. (1998). Synthesizing research: A guide for literature reviews, 3rd ed. Thousand Oaks, CA: Sage.
Davies, P. (2000). The relevance of systematic reviews to educational policy and practice. Oxford Review of Education 26, 365–78.
Davies, H., Nutley, S. & Smith, P. (eds.) (2000). What works? Evidence-based policy and practice in public services. Bristol, UK: Policy Press.
Driver, R., & Bell, B. (1985). Students’ thinking and the learning of science: A constructivist view. School Science Review 67(240), 443–56.
Evans, J., & Benefield, P. (2001). Systematic reviews of educational research: Does the medical model fit? British Educational Research Journal 27(5), 527–41.
Hammersley, M. (2001). On ‘systematic’ reviews of research literature: A ‘narrative’ response to Evans and Benefield. British Educational Research Journal 27(5), 543–54.
Hargreaves, D. (1996). Teaching as a research-based profession: Possibilities and prospects. Teacher Training Agency Annual Lecture. Teacher Training Agency (TTA), London.
Hillage, J., Pearson, R., Anderson, A. & Tamkin, P. (1998). Excellence in research on schools. Brighton, UK: Institute for Employment Studies.
Hogarth, S., Bennett, J., Lubben, F. & Robinson, A. (2006). The effect of ICT teaching activities in science lessons on students’ understanding of science ideas. In Research evidence in education library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Hogarth, S., Bennett, J., Campbell, B., Lubben, F. & Robinson, A. (2004). A systematic review of the use of small-group discussions in science teaching with students aged 11–18, and the effect of different stimuli (print materials, practical work, ICT, video/film) on students’ understanding of evidence. In Research evidence in education library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Levinson, R., & Turner, S. (2001). Valuable lessons: Engaging with the social context of science in schools. London: The Wellcome Trust.
Lubben, F., Bennett, J., Hogarth, S. & Robinson, A. (2004). A systematic review of the effects of context-based and Science-Technology-Society (STS) approaches in the teaching of secondary science on boys and girls, and on lower ability pupils. In Research evidence in education library. London: EPPI-Centre, Social Science Research Unit, Institute of Education.
Millar, R., & Osborne, J. (eds.) (1998). Beyond 2000: Science education for the future. London: King’s College/The Nuffield Foundation.
Norris, N. (1990). Understanding educational evaluation. London: Kogan Page.
Oakley, A. (2000). Experiments in knowing. Cambridge, UK: Polity Press.
Oakley, A. (2002). Social science and evidence-based everything: The case of education. Educational Review 54(3), 21–33.
OECD (2002). Educational research and development in England: Examiners’ report. Organisation for Economic Co-operation and Development. Retrievable at www.oecd.org/dataoecd/17/56.
Osborne, J., Duschl, R. & Fairbrother, R. (2002). Breaking the mould? Teaching science for public understanding. London: Nuffield Foundation.
Parlett, M., & Hamilton, D. (1972). Evaluation as illumination: A new approach to the study of innovative programmes (Occasional Paper No. 9). Centre for Research in the Educational Sciences, University of Edinburgh, Edinburgh, Scotland.
Petrosino, A., Boruch, R., Rounding, C., McDonald, S. & Chalmers, I. (2000). The Campbell Collaboration: Social, Psychological, Educational and Criminal Trials Register (C2-SPECTR). Evaluation and Research in Education 14(3), 206–19.
Shavelson, R., & Towne, L. (eds.) (2002). Scientific research in education. Washington: National Academy Press.
Slavin, R. (2002). Evidence-based educational policies: Transforming educational practice and research. Educational Researcher 31(7), 15–21.
Spencer, L., Ritchie, J., Lewis, J. & Dillon, L. (2003). Quality in qualitative evaluation: A framework for assessing research evidence. London: The Strategy Unit.
Tooley, J., & Darby, D. (1998). Educational research: A critique. A survey of published educational research. London: Office for Standards in Education (Ofsted).
Torgerson, C., & Torgerson, D. (2001). The need for randomised controlled trials in educational research. British Journal of Educational Studies 49(3), 316–28.
Torgerson, C. (2003). Systematic reviews. London: Continuum.
Vulliamy, G. (2004). The impact of globalisation on qualitative research in comparative and international education. Compare 34(3), 261–84.
What Works Clearinghouse (2002). A trusted source of evidence of what works in education. Retrievable at http://ies.ed.gov/ncee/wwc/.