Background
Despite their documented efficacy, substantial proportions of patients discontinue antidepressant medication (ADM) without a doctor's recommendation. The current report integrates data on patient-reported reasons into an investigation of patterns and predictors of ADM discontinuation.
Methods
Face-to-face interviews with community samples from 13 countries (n = 30 697) in the World Mental Health (WMH) Surveys included n = 1890 respondents who used ADMs within the past 12 months.
Results
10.9% of 12-month ADM users reported discontinuation based on the recommendation of the prescriber, while 15.7% discontinued in the absence of a prescriber recommendation. The main patient-reported reason for discontinuation was feeling better (46.6%), which was reported by a higher proportion of patients who discontinued within the first 2 weeks of treatment than later. Perceived ineffectiveness (18.5%), predisposing factors (e.g. fear of dependence) (20.0%), and enabling factors (e.g. inability to afford treatment cost) (5.0%) were much less commonly reported reasons. Discontinuation in the absence of a prescriber recommendation was associated with low country income level, being employed, and having above-average personal income. In comparison, age, a prior history of psychotropic medication use, and being prescribed treatment by a psychiatrist rather than by a general medical practitioner were associated with a lower probability of this type of discontinuation. However, these predictors varied substantially depending on patient-reported reasons for discontinuation.
Conclusion
Dropping out early is not necessarily negative, with almost half of individuals noting that they felt better. The study underscores the diverse reasons given for dropping out and the need to evaluate whether and how dropping out influences short- or long-term functioning.
Ethical issues, discussed in the previous chapter, focus primarily on the responsibilities of investigators in relation to participants; scientific integrity focuses primarily on responsibilities to science and the profession more broadly. This includes adherence to the standards, responsibilities, and obligations of conducting and reporting research. The core values include transparency, honesty, accountability, commitment to empirical findings, addressing or avoiding conflict of interest, and commitment to the public's interest. Several specific topics were discussed in detail, including fraud in science, questionable research practices, plagiarism, allocation of credit to collaborators, conflict of interest, and sexual harassment. Many concepts were introduced along the way, including honorary or gift authorship, ghost authorship, and self-plagiarism. These concepts convey that scientific research is not just a matter of methodology, designs, and statistics. Science is designed to serve the public, and our understanding of phenomena is meant to increase the knowledge base in ways that will improve conditions in the world. That is a huge challenge and responsibility, and it makes ethical issues and scientific integrity critically important. There are protections in place to minimize lapses in ethical behavior and scientific integrity, and remedies once such lapses are identified. These are constantly being revised to keep up with new circumstances (e.g., big data, tracking social media, dual-use research) and the challenges they present for protecting the public.
Selection of measures for research is based on several considerations, including construct validity, psychometric properties (evidence for different types of reliability and validity), and the sensitivity of the measures to reflect change or differences. It is also important to consider the sample with which the measure will be used and whether the psychometric properties apply to the use (sample, context) one intends. Culture and ethnicity were discussed as among the relevant domains to consider when evaluating use of a test. Use of multiple measures rather than a single measure was recommended because constructs of interest (e.g., clinical problems, personality, social functioning) tend to be multifaceted, and no single measure can be expected to address all the components. Brief, shortened, and single-item measures were discussed. Many cautions were presented as well, because the primary criteria remain critical (evidence for construct validity, psychometric properties, and measurement sensitivity), and shortened or one- or few-item measures often do not have the requisite data to recommend their use or to allow their interpretation. Special issues that also guide selection of measures were discussed. Awareness of being assessed may operate as a common method factor across all measures within a study and influence the findings. Response sets were also mentioned; among these, socially desirable responding is one influence that may be prompted by awareness that one is being assessed.
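As an illustration of the kind of psychometric evidence mentioned above, the sketch below computes one common index of internal-consistency reliability, Cronbach's alpha, on simulated item data. The function and the simulated 10-item scale are hypothetical, offered only to make the concept concrete; they are not drawn from the chapter.

```python
# Minimal sketch: Cronbach's alpha for a hypothetical multi-item measure.
# All data are simulated purely for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulate 200 respondents on a 10-item scale sharing one latent factor,
# so items correlate and alpha should be reasonably high.
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=1.0, size=(200, 10))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```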
Single-case experimental designs refer to arrangements that allow experimentation with the individual subject as well as with groups. The methodology differs from the usual group research and relies on ongoing assessment over time, assessment of baseline (pre-intervention) functioning, and the use of multiple phases in which performance is evaluated and conditions are altered. Three major design strategies, ABAB, multiple-baseline, and changing-criterion designs, are highlighted. The designs vary in the way in which intervention effects are demonstrated and in the requirements for experimental evaluation. However, the logic of the designs in demonstrating causal relations is the same: ongoing assessment across different phases is used to describe, predict, and test predictions as changes are made in the conditions to which participants are exposed. Evaluation of the results of single-case designs usually relies on nonstatistical methods referred to as visual inspection, and the multiple criteria for invoking this method are detailed. Single-case designs have special strengths. These include the ability to evaluate interventions in everyday settings without the need for control groups or random assignment, provide feedback on the effectiveness of an intervention while that intervention is underway, permit beginning the intervention on a small scale before any larger-scale extension, permit evaluation of whether an intervention genuinely is effective with a given individual, and study rare conditions for which group studies are not feasible.
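To make the logic of the ABAB design concrete, here is a small sketch with simulated observations across the four phases. The phase means and trends it prints are simple numerical summaries of the kind weighed in visual inspection; all values are invented for illustration and do not come from the chapter.

```python
# Hedged illustration of ABAB single-case data: baseline (A1), intervention
# (B1), withdrawal (A2), and reintroduction (B2). Simulated values only.
import numpy as np

rng = np.random.default_rng(1)
phases = {
    "A1": rng.normal(10, 1.5, 10),  # baseline level of the target behavior
    "B1": rng.normal(5, 1.5, 10),   # intervention lowers the behavior
    "A2": rng.normal(10, 1.5, 10),  # withdrawal: behavior reverts toward baseline
    "B2": rng.normal(5, 1.5, 10),   # reintroduction: the effect recurs
}
for name, data in phases.items():
    slope = np.polyfit(range(len(data)), data, 1)[0]  # within-phase trend
    print(f"{name}: mean={data.mean():.1f}, trend={slope:+.2f}")
```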
In observational studies, the investigator evaluates the variables of interest by selecting groups rather than experimentally manipulating the variable of interest. Case-control studies were identified as investigations in which groups that vary in the outcome or characteristic of interest are delineated and compared. Cohort studies are quite useful in delineating the timeline, that is, showing that some conditions are antecedent to and in fact predict occurrence of the later outcome. Birth-cohort studies are a special case that has generated fascinating results related to physical and mental health, educational outcomes, and criminal and social behavior. The cohort usually is followed for decades, which allows investigators to evaluate outcomes at different developmental periods. Data from cohort studies often are used to classify, select, and predict an outcome. Sensitivity and specificity were discussed as key concepts related to the accurate identification of individuals who will show an outcome (sensitivity, or true positives) and of individuals who will not show an outcome (specificity, or true negatives). Critical issues in designing and interpreting observational studies were discussed, including the importance of specifying the construct that will guide the study, selecting case and control groups, addressing possible confounds in the design and data analyses, and drawing causal inferences.
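A small worked example may help fix the definitions of sensitivity and specificity. The counts below describe a hypothetical cohort and are invented purely for illustration.

```python
# Sensitivity and specificity from a 2x2 classification table for a
# hypothetical cohort screening measure. Counts are invented.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)  # true positives among those with the outcome
    specificity = tn / (tn + fp)  # true negatives among those without it
    return sensitivity, specificity

# Hypothetical cohort: 100 people develop the outcome, 900 do not.
sens, spec = sensitivity_specificity(tp=80, fn=20, tn=810, fp=90)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.80, 0.90
```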
The extent to which an experiment rules out, as explanations, those factors that otherwise might account for the results is referred to as internal validity. Aside from evaluating the internal validity of an experiment, it is important to understand the extent to which the findings can be generalized to populations, settings, measures, experimenters, and circumstances other than those used in the original investigation. The generality of findings is referred to as external validity. Internal and external validity address central aspects of the logic of experimentation and of scientific research more generally. The purpose of research is to structure the situation in such a way that inferences can be drawn about the effects of the variable of interest (internal validity) and to establish relations that extend beyond the highly specific circumstances in which the variable was examined (external validity). Internal and external validity are concepts to include in a methodological thinking tool kit; they are central to the evaluation of any study in all areas of scientific research.
Several types of measures used in clinical psychological research were covered. These included inventories, questionnaires, scales, global ratings, interviews, projective measures, direct observations of behavior, psychobiological measures, computerized, technology-based, and web-based assessment, and unobtrusive measures. Reactivity of assessment was discussed, along with unobtrusive measures and their advantages. In general, it is useful to rely on multiple measures rather than a single measure and to demonstrate that changes in the construct of interest (e.g., anxiety) are not restricted to only one method of assessment.
The chapter discusses three assessment topics: assessing the impact of experimental manipulations, assessing the clinical significance of change, and the use of ongoing assessment during the course of interventions. Each has direct implications for what one can conclude from a study. Assessing the impact of the experimental manipulation is a check on whether the independent variable was manipulated as intended and how it was received by the participant. Assessment of clinical significance reflects concern about the importance of the therapeutic changes that were achieved; statistical significance on the usual measures and indices of effect size do not necessarily reflect whether the treatment had any practical or palpable value in the lives of the clients. Several indices of clinical significance are discussed. Finally, the value of ongoing assessment in intervention studies was elaborated. In randomized controlled trials, the usual assessment consists of pre- and post-treatment assessment to evaluate improvement. If the goal is to understand the course or mediators of treatment, or to see whether the client is improving while the intervention is in effect, ongoing assessment (on multiple occasions over the course of treatment) can be very valuable. The course of change in both mediators and symptoms is likely to vary among individuals. Currently, when studies evaluate mediators, only one or two assessment occasions during treatment are used to assess the mediator (e.g., cognitive processes, alliance). Ongoing assessment on multiple occasions provides a better way to capture whether the proposed mediators and the outcome in fact are related.
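One widely cited index of clinical significance is the Jacobson–Truax reliable change index (RCI), which scales pre-to-post change by the measure's error of measurement. The sketch below uses invented scores and a hypothetical test-retest reliability purely to illustrate the computation; it is not presented as the chapter's own example.

```python
# Reliable change index (Jacobson & Truax). All inputs are hypothetical.
import math

def reliable_change_index(pre: float, post: float,
                          sd_pre: float, reliability: float) -> float:
    sem = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)           # SE of the difference score
    return (post - pre) / s_diff

rci = reliable_change_index(pre=30, post=18, sd_pre=7.5, reliability=0.85)
print(f"RCI = {rci:.2f}")  # |RCI| > 1.96 suggests change beyond measurement error
```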
Qualitative research is designed to describe, interpret, and understand human experience and to elaborate the meaning that the experience has to the participants. The data usually derive from in-depth interviews of individuals to capture their experiences. A key feature of the approach is obtaining detailed description without presupposing specific measures, categories, or a narrow range of constructs at the outset of the study. Rather, from the data, interpretations, overarching constructs, and theory are generated to better explain and understand how the participants experience the phenomenon of interest. Mixed-methods research is discussed and consists of the combination of quantitative and qualitative research within the same study. Among the obvious benefits is bringing to bear multiple levels of understanding of a phenomenon of interest. Also, the methods allow opportunities for each part of the investigation to influence the other (e.g., qualitative findings can be used to develop measures that will be used later in quantitative research). It is important to be familiar with qualitative and mixed-methods research because of the rich opportunities they provide and because these methodological approaches are much less frequently taught and used in psychological research in comparison to quantitative methods.
Control groups rule out or weaken rival hypotheses or alternative explanations of the results. The control group appropriate for an experiment depends on precisely what the investigator wishes to conclude at the end of the investigation. No-treatment, wait-list, treatment-as-usual, nonspecific control conditions, and yoked controls were discussed. The progression of research and the different control and comparison groups that are used were illustrated in the context of psychotherapy research. Several treatment evaluation strategies were discussed, focused on the treatment package, extensions of the treatment, dismantling or constructing treatments, comparisons of different treatments, noninferiority of treatments, and the evaluation of moderators and mediators. These strategies convey the rich opportunities and range of questions that can guide intervention research.
In discussing the results of one's study, small inferential leaps often are made that can misrepresent or overinterpret what actually was found in the data analyses. Common examples from clinical research were mentioned, such as slippage in the use and meaning of "significant" and "predictor." Another topic critical to data interpretation is the notion of negative results, a concept that has come to mean that no statistically significant differences were found in the experiment. The publication bias favoring statistically significant effects can greatly distort our understanding of relations in the world and capitalizes on chance findings as well as on questionable research practices (e.g., searching for significant effects in the data, whether predicted or not). Replication is a critical topic in all of science because it is the most reliable test of whether a finding is veridical. The logic of statistical analyses implies that occasionally statistical significance will be achieved even when there are no group differences in the population. Since such findings are likely to be published because of the bias, there could well be a great many findings that would not stand up under any replication effort. Thus, distinguishing the findings in the field that have a sound basis requires replication research. Replication is facilitated by making available information about how a study was done. Open science practices aim to improve the quality of research and foster greater transparency, including access to procedures, data, and facets of decision making.
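The point that "significant" results arise by chance even under a true null is easy to demonstrate by simulation. The sketch below (illustrative only, not from the chapter) runs repeated t tests on two groups drawn from the same population; at the conventional alpha of .05, roughly 5% of comparisons reach significance.

```python
# Simulating the false-positive rate under the null hypothesis: both groups
# come from the same population, yet ~5% of t tests yield p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group = 10_000, 30
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=n_per_group)  # no true group difference
    b = rng.normal(size=n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(f"false-positive rate: {false_positives / n_sims:.3f}")  # about 0.05
```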