Gain a thorough understanding of the entire research process – developing ideas, selecting methods, analyzing and communicating results – in this fully revised and updated textbook. The sixth edition covers the latest developments in the field, including the use of technology and web-based methods to conduct studies, the role of robots and artificial intelligence in designing and evaluating research, and the importance of diversity in research to inform results that reflect the society we live in. Designed to inspire the development of future research processes, this is the perfect textbook for graduate students and professionals in research methods and research design in clinical psychology.
We focus on preparing a manuscript for publication, emphasizing description, explanation, and contextualization. Description is the most straightforward, providing study details. Explanation is more demanding, as it refers to presenting the rationale of several facets of the study. Contextualization moves a step further from description and addresses how the study fits in the context of other studies and the knowledge base more generally. Authors are often frustrated that their study and manuscript were not understood or appreciated. No doubt this is often true, but the responsibility lies with us as authors. There is much an author can do to make a compelling case for the study. One summary question to guide an author in preparing a manuscript might be, “Why is this study important, needed, or especially interesting?” The answer serves as the basis of the Introduction and Discussion. We discuss each section of a manuscript, what to include and explain, and how the gestalt can present a storyline that connects the sections. The chapter also discusses guidelines for conducting, evaluating, and reporting research and the journal submission and manuscript review process.
The most common treatment for major depressive disorder (MDD) is antidepressant medication (ADM). Results are reported on frequency of ADM use, reasons for use, and perceived effectiveness of use in general population surveys across 20 countries.
Methods
Face-to-face interviews with community samples totaling n = 49 919 respondents in the World Health Organization (WHO) World Mental Health (WMH) Surveys asked about ADM use anytime in the prior 12 months in conjunction with validated fully structured diagnostic interviews. Treatment questions were administered independently of diagnoses and asked of all respondents.
Results
3.1% of respondents reported ADM use within the past 12 months. In high-income countries (HICs), depression (49.2%) and anxiety (36.4%) were the most common reasons for use. In low- and middle-income countries (LMICs), depression (38.4%) and sleep problems (31.9%) were the most common reasons for use. Prevalence of use was 2–4 times as high in HICs as LMICs across all examined diagnoses. Newer ADMs were proportionally used more often in HICs than LMICs. Across all conditions, ADMs were reported as very effective by 58.8% of users and somewhat effective by an additional 28.3% of users, with both proportions higher in LMICs than HICs. Neither ADM class nor reason for use was a significant predictor of perceived effectiveness.
Conclusion
ADMs are in widespread use for a variety of conditions, including but going beyond depression and anxiety. In a general population sample from multiple LMICs and HICs, ADMs were widely perceived to be either very or somewhat effective by the people who use them.
Systematic assessment is fundamental across all of the sciences and accounts for enormous advances. Consider some familiar and unfamiliar examples in the natural, biological, and social sciences.
This chapter focuses on practical issues in how to evaluate and present one’s results. This information can be used to complement tests of statistical significance and decision making in evaluating and presenting the data. The chapter begins with data evaluation. Despite the many objections to it, null hypothesis significance testing (NHST) still dominates, and as such the researcher (and reader) ought to be skilled in the approach, mindful of its liabilities, and equipped with an overflowing quiver of options to improve the yield from one’s research. In this chapter, we discuss practical issues rather than specific statistical tests and options and when to use them.
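As a rough illustration of the kind of supplementary information that can accompany a significance test (this is a sketch, not a procedure drawn from the chapter), the Python example below pairs a conventional t-test with an effect size and a confidence interval. The data, group sizes, and values are simulated and purely hypothetical.

```python
# Hypothetical illustration: pairing an NHST result with an effect size
# and a confidence interval. Data are simulated, not from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=0.5, scale=1.0, size=40)  # simulated treatment scores
control = rng.normal(loc=0.0, scale=1.0, size=40)    # simulated control scores

# Conventional significance test
t_stat, p_value = stats.ttest_ind(treatment, control)

# Cohen's d using a pooled standard deviation
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# 95% confidence interval for the mean difference
diff = treatment.mean() - control.mean()
se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
ci_low, ci_high = stats.t.interval(0.95, df=n1 + n2 - 2, loc=diff, scale=se_diff)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the effect size and interval alongside the p value is one simple way to go beyond a bare reject/fail-to-reject decision.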
We previously discussed several considerations that are used to guide the evaluation and selection of measures, as well as the use of multiple measures to assess constructs. The reason is the inherent limitation of any single measure in capturing all facets of a given construct. Also, there is a “method factor,” which means that the findings obtained by measuring a construct in a particular way (e.g., self-report) can be due to the content or construct of the measure as well as to the method or way in which that construct was assessed. In most circumstances, we would like to know that the results are not restricted to one way of assessing the construct of interest.
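To make the idea of converging evidence across assessment methods concrete, here is a hypothetical Python sketch (not taken from the text): it correlates two simulated measures of the same construct obtained by different methods. The variable names and data are invented for illustration only.

```python
# Hypothetical sketch of checking convergence across two measurement methods
# (e.g., self-report vs. clinician rating of the same construct).
# All data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_construct = rng.normal(size=100)                                 # latent construct scores
self_report = true_construct + rng.normal(scale=0.5, size=100)        # method 1
clinician_rating = true_construct + rng.normal(scale=0.7, size=100)   # method 2

# A substantial convergent correlation suggests the results are not
# restricted to a single way of assessing the construct.
r, p = stats.pearsonr(self_report, clinician_rating)
print(f"Convergent correlation r = {r:.2f} (p = {p:.3f})")
```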
We have now covered a variety of concepts, including threats to validity and various sources of bias, that guide thinking when designing, executing, and evaluating research. All of that will be critical to keep in mind as we move forward to elaborate research design issues. Yet we begin here with the first step of research, namely, what will be studied? How and where does one get an idea for an actual study?
In many ways, methodology is all about the interpretation of findings in a study. As scientists we engage in special methodological practices so that the results can be interpreted in one way rather than another. Thus, we want to interpret the findings by explaining how a particular variable of interest to us, rather than some other influence, artifact, or bias (e.g., pre-existing group differences, “chance”), is the basis for the results. Also, when the data are collected and analyzed, we want to explain the results in ways that are consistent with what we actually found. Often an investigator makes a small leap in moving from the data analysis to the interpretation of what was found, and the study is then revealed to have been poorly designed for that purpose. That is, in reading a report of the study, we say, “If this is what the investigator wished to conclude, then this was not quite the right way to design the study.” Thus, data interpretation issues are squarely within the realm of methodology. Indeed, it is helpful before designing a study to know exactly what you would like to conclude if your theory or hypothesis is supported. The design is then built so that you can reach that conclusion.
Research Design in Clinical Psychology helps students to achieve a thorough understanding of the entire research process – developing the idea, selecting methods, analyzing the results, and preparing the written scientific report. Drawing examples from clinical research, health, and medicine, author Alan E. Kazdin offers detailed coverage of experimental design, assessment, data evaluation and interpretation, case-control and cohort designs, and qualitative research methods. In addition to new pedagogical tools that guide students through the text, the Fifth Edition offers expanded coverage of key topic areas, such as cultural issues, scientific integrity, and recent changes in the publication and communication of research.
We have discussed internal and external validity, which are fundamental to research. Two other types of validity, referred to as construct validity and data-evaluation validity, also must be addressed to draw valid inferences. All four types relate to the conclusions that can be reached about a study. Construct and data-evaluation validity are slightly more nuanced than are internal and external validity. They are more likely to be neglected in the design of research in part because they are not handled by commonly used practices. For example, random assignment of subjects to various experimental and control conditions nicely handles a handful of internal validity threats (e.g., history, maturation, testing, and selection biases), and we are all generally aware of this.
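As a simple, hypothetical sketch of what random assignment to conditions can look like in practice (the participant IDs and group labels below are invented, and this is not code from the book), consider:

```python
# Hypothetical sketch of simple random assignment of participants to
# experimental and control conditions. Participant IDs are made up.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical participants
conditions = ["experimental", "control"]

random.seed(42)  # fixed seed so the illustration is reproducible
shuffled = participants[:]
random.shuffle(shuffled)

# Alternating assignment after shuffling yields equal group sizes
assignment = {pid: conditions[i % 2] for i, pid in enumerate(shuffled)}

n_exp = sum(1 for c in assignment.values() if c == "experimental")
print(f"Experimental: {n_exp}, Control: {len(assignment) - n_exp}")
```

Because every participant has the same chance of landing in either condition, pre-existing differences (e.g., selection biases) are distributed by chance rather than systematically, which is why random assignment addresses several internal validity threats at once.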
By far the most common research designs within psychology compare groups of subjects who are exposed to different conditions that are controlled by the investigator. The general strategy can entail a variety of different arrangements depending on the groups included in the design, how assessment is planned, and when and to whom the experimental condition is presented. This chapter considers fundamentals of group designs and various options when the investigator manipulates or systematically varies conditions and controls the assignment of subjects to different conditions. We begin with discussing individuals who are to participate in the study, their assignment to groups, and specific arrangements that constitute the experimental manipulation.
When we discuss or consider empirical research, there is a specific methodological paradigm we have in mind. That paradigm or approach is within the positivist tradition and includes the whole package of concepts and practices (e.g., theory, hypothesis testing, operational definitions, careful control of the subject matter, isolation of the variables of interest, quantification of constructs, and statistical analyses).
We have distinguished ethical issues as the responsibilities of researchers in relation to participants from scientific integrity as those responsibilities related to the standards and obligations in conducting and reporting research. These can be distinguished at the margins (e.g., ethical issues related to ensuring that informed consent is voluntary for each subject vs. integrity issues such as plagiarizing or fabricating data). Yet ethical issues and integrity overlap in core values (e.g., transparency, honesty) and in how one represents one’s work (e.g., as we describe the project to the participants and as we describe our work to other professionals and the scientific community).
The important concept of plausible rival hypothesis addresses those competing interpretations that might be posed to explain the findings of a particular study. Methodology helps rule out, or at least make implausible, competing interpretations. An experiment does not necessarily rule out all possible explanations; the extent to which it succeeds in ruling out alternative explanations is a matter of degree. From a methodological standpoint, the better the design of an investigation, the more implausible it makes competing explanations of the results. There are a number of specific concepts that reflect many of the interpretations that can interfere with and explain the results of a study. These concepts are critical, too, as they serve as a methodological checklist, so to speak. When planning a study or evaluating the results of a completed study, it is extremely useful to know the many concepts we cover and how they will be or were handled in the design of the study.