
Service user and clinician perspectives on the use of outcome measures in psychological therapy

Published online by Cambridge University Press:  28 October 2015

Graham R. Thew*
Affiliation:
Department of Psychology, University of Bath, Bath, UK
Louise Fountain
Affiliation:
Avon and Wiltshire Mental Health Partnership NHS Trust, Salisbury, Wiltshire, UK
Paul M. Salkovskis
Affiliation:
Department of Psychology, University of Bath, UK
*
*Author for correspondence: Dr G. R. Thew, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UB, UK (email: graham.thew@msdtc.ox.ac.uk).

Abstract

While the benefits of routine outcome measurement have been extolled and to some degree researched, service user opinions on this common therapy practice have received surprisingly little attention. This study aimed to assess service users’ experiences of completing measures during psychological therapy, with a view to exploring how therapists can maximize the helpfulness of measures in therapy. Fifteen participants completed surveys about the use of measures in their current episode of care. Ten clinicians also completed a survey about their use of, and views about, measures. Results showed that despite mixed experiences of how measures were explained and used, service users held generally favourable attitudes towards their use in therapy, perceiving measures as most helpful when they were well integrated into sessions by their therapists. Clinicians reported using a wide range of measures, and generally endorsed positive beliefs about measures more strongly than negative ones. Implications for clinical practice, service development, and further research are discussed.

Type
Original Research
Copyright
Copyright © British Association for Behavioural and Cognitive Psychotherapies 2015 

Introduction

In health settings, an outcome has been defined as ‘the change in a patient's current and future health status that can be attributed to antecedent healthcare’ (Hunter et al. 1996). The monitoring of outcomes is becoming a routine part of healthcare in various settings, and can draw upon a wide range of data sources, such as hospital admissions, medication use, and mortality rates. Many of these sources are perhaps more suited to physical health interventions, and as a result services providing psychological interventions may have lagged behind in obtaining outcome data (see also Salkovskis, 1984 for discussion of some psychologists’ reservations about evaluating interventions and research, which may also explain this discrepancy).

Outcome data for psychological interventions come principally in the form of questionnaire-based measures. These are frequently given to service users across a wide range of healthcare settings and are used to assess current symptoms, difficulties, or general functioning, along with assessing the effectiveness of interventions from the service user's perspective (Dawson et al. 2009). These measures may be used early in therapy to gather information as part of an assessment process, and they may also be repeated or revisited later to explore changes and evaluate the impact of the psychological intervention. There is some evidence to suggest that the use of standardized outcome measures in psychological therapies can increase the detection of psychological problems (Greenhalgh & Meadows, 1999) and potentially improve therapy outcomes (Lambert et al. 2001, 2002, 2003; Hawkins et al. 2004; Harmon et al. 2005).

As a result, the use of outcome measures is recommended by a number of guidelines and empirical papers (Department of Health, 1999, 2008; Nordal, 2012). The National Institute for Health and Care Excellence (NICE) guidelines for depression recommend that clinicians ‘use routine outcome measures and ensure that the person with depression is involved in reviewing the efficacy of the treatment’ (NICE, 2009, p. 8; see also the guideline for common mental health disorders; NICE, 2011), and the Health and Care Professions Council (HCPC) standards of proficiency for psychologists state that they must ‘be able to evaluate intervention plans using recognized outcome measures and revise the plans as necessary in conjunction with the service user’ (HCPC, 2012, p. 24). These documents therefore suggest a role for measures in recognizing when it may be appropriate to change, adapt, or discontinue therapy.

Along with these recommendations, a major driver of change regarding the use of measures is the increasing tendency for commissioners to seek demonstrable evidence of service effectiveness. In some services, such as those in the Improving Access to Psychological Therapies (IAPT) initiative, routine outcome measurement is built into basic service provision and high levels of data completeness are achieved as a result. By contrast, secondary-care services were generally not designed with the infrastructure for routine data collection in place, which, coupled with greater clinician choice about the use of measures and variation in the level of commitment to this, means that service users’ experiences of how outcome measures are used may be more variable.

Surprisingly, the impact of completing measures on clients themselves is significantly under-researched, and very few studies have attempted to seek service user perspectives. Where this does occur, studies tend to use focus group methodologies to gather opinions on particular measures themselves (Mental Health Research Network, 2010), or ideas about which outcomes are appropriate to measure (Perry & Gilbody, 2009; Beale et al. 2011). Another study compared therapists’ and clients’ experiences of trialling the CORE-Net outcome measurement system routinely in every therapy session (Unsworth et al. 2012), finding that clients were generally happier than therapists about using the measures, and that measures helped the therapeutic relationship. There is little information as yet on service user views about how, and not just which, measures are used by therapists, or about the use of measures in general clinical practice rather than in specific focus groups or trials. Given the prominence of outcome measurement work in recent years, it is concerning that so little attention has been paid to the experiences of those who actually complete the measures.

Given the overrepresentation of studies focusing on how services can benefit from outcome measurement, it is understandable that clinicians may feel that completing measures is something done purely for management purposes, and that clients simply have to endure it as an ‘add-on’ to therapy rather than an integrated part of it. Perhaps because of this perceived conflict of interest between services and clients, clinicians themselves tend to hold quite strong opinions, and voice anxieties and concerns, about outcome measures and their use in therapy (Hatfield & Ogles, 2004, 2007; Unsworth et al. 2012). It is perhaps not surprising, therefore, that implementing standardized outcome measurement procedures within services is associated with many complexities and challenges (McInnes, 2006; Rao et al. 2010), and that even in services using ‘routine outcome measurement’ clinicians may not be using measures routinely (James et al. in press). Criticisms from clinicians about using measures include practical issues, such as the time needed to complete them or a negative impact on the therapeutic relationship (see McInnes, 2006), and methodological issues, such as the validity of the measures used (Greenhalgh & Meadows, 1999).

Importantly, these criticisms may or may not be valid, but at present there is barely any evidence from which to draw conclusions. It seems that both the favourable and unfavourable views of measures held by clinicians are based on assumptions or anecdotal accounts about how service users experience measures in therapy and whether they find them a helpful or unhelpful part of the process. This, along with evidence that clinicians’ perspectives on what they think is helpful for their clients may not match those of the clients themselves (Beale et al. 2011), demonstrates a clear need to investigate both service user perspectives and clinician attitudes towards outcome measurement.

The present study aimed to address this gap by investigating the opinions and attitudes towards the use of measures in therapy held by both the users of a secondary-care psychological therapies service and the clinicians working in that service. It aimed to explore whether and how measures were used, and what suggestions people would make to improve their helpfulness. Given that therapists, particularly in secondary care, are being encouraged to increase their use of measures, a deeper understanding of service users’ experiences and opinions may be valuable to them.

Method

Design

This study had two main components. First, service user perspectives on outcome measurement were explored using a survey employing predominantly quantitative approaches, with some qualitative data also collected through the use of free response questions. Second, clinicians’ views towards measures and their value in therapy were obtained using a different brief survey.

This study was approved by both the NHS Research Ethics Committee (Study Reference 12/SC/0517) and the local NHS Research and Development Office.

Service context

The study was conducted within an NHS secondary-care psychological therapies service, serving a countywide population of approximately 290,000 adults of working age living in mixed rural and urban settings. Routine outcome monitoring using measures was not taking place within the service at the time of the study, although more standardized use of measures was being discussed at management level, and clinicians were using measures when they felt it was clinically appropriate.

Clinician perspective

Materials

A brief survey was developed to ask clinicians about their use of measures. It addressed the following:

  • The percentage of service users with whom they use measures.

  • The names of outcome measures they most commonly use.

  • A set of beliefs about measures, drawn from discussions with consultees during project development and from the authors’ clinical experience. On the basis of face validity, the statements were grouped a priori into two subscales representing positive and negative views about measures, with four additional statements representing practical and contextual factors around the use of measures. Clinicians were asked to rate how well each statement applied to them using the Likert scale described in Figure 1.

Fig. 1. Mean clinician ratings of the extent to which each statement applies to them, rated as 0 (does not apply to me), 1 (somewhat applies to me), 2 (strongly applies to me), or 3 (completely applies to me). Error bars represent 1 standard deviation. The statements are presented in three groups: Positive beliefs about measures, Negative beliefs about measures, and Practical considerations. * ‘…beyond what I can find out through questioning’. ** ‘…but would not choose to otherwise’.

Participants

Ten clinicians working in the service responded to the survey, out of 20 eligible members of staff in the following professions:

  • art psychotherapist;

  • clinical psychologist;

  • nurse practitioner/clinical nurse specialist;

  • occupational therapist.

Procedure

All eligible clinicians in the service were told about the study by the research team and sent a copy of the survey to complete, which could be returned in hard copy or by email.

Service user perspective

Materials

A survey was developed to explore participants’ general impressions and thoughts about how measures are used in therapy. Two versions of the survey (A and B) were produced, for those participants who had and had not completed measures during their therapy sessions, respectively.

The following areas were addressed (response options and Likert scale anchors are provided in Tables 1–3 and Fig. 3):

  • How and when measures were used with the service user (version A; categorical responses).

  • Whether service users who were not given measures were expecting to receive these (version B; binary response).

  • How completing measures made service users feel. These were rated on 0–10 Likert scales (version A).

  • How service users feel measures impact on therapy. These were drawn from the authors’ clinical experience, suggestions from discussions with consultees during project development, and some additional hypotheses. They were rated on -3 to +3 Likert scales (versions A and B).

  • Service user perceptions of the therapeutic relationship, and the helpfulness of measures for them and others. These were rated on -5 to +5 Likert scales (versions A and B).

  • Free response items, such as asking whether service users had suggestions of how to improve how measures can be used (versions A and B).

  • Brief demographic questions (versions A and B).

Table 1. The locations and frequencies of questionnaire completion reported by respondents

Table 2. The mean scores given on each -3 to +3 Likert scale, together with the labels given at each end of the scale

Table 3. The mean scores given on the questions listed, rated on -5 to +5 Likert scales with their respective labels

Potential participants were each given an envelope containing the following:

  • study information sheet;

  • consent form;

  • survey A;

  • survey B;

  • freepost envelope.

Participants

From a total of 42 distributed survey packs, 15 people participated in the study (13 females, mean age 38.9), giving a postal response rate of 36%. All participants were aged ≥18 years, were currently accessing the psychological therapies service and had attended at least three sessions in their current episode of care.

Procedure

Twenty clinicians within the service were approached regarding the study, and were asked to identify eligible participants from their current caseloads and distribute a survey pack to each person at their next appointment. Nine clinicians distributed at least one pack, with a total of 42 packs distributed. Service users were free to read the study information in their own time and decide whether they wished to participate. Participation involved completion of the appropriate survey and the consent form, and its return in the freepost envelope provided.

Data analysis

Data from both clinicians and service users were analysed descriptively to explore trends and relationships in the use of and views about measures. The lack of previous literature in this area meant that there was no basis on which to conduct power calculations to determine appropriate sample sizes, but given the limited sizes of the samples obtained, analysis proceeded cautiously to avoid overstating the findings.

To obtain a summary indicator of how well service users felt measures had been explained and integrated into therapy, the sum of responses to the following four items was calculated (a minimal sketch of this computation is given after the list):

  • How well were the reasons for using questionnaires explained to you?

  • How well was it explained what you needed to do to complete them?

  • How well were your responses discussed with you?

  • How well were changes in your responses over time discussed with you?
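For readers wishing to reproduce this kind of composite score, a minimal sketch is given below. This is not the analysis code used in the study, and the column names are hypothetical stand-ins for the four items listed above.

```python
import pandas as pd

# Hypothetical column names for the four explanation/integration items;
# the survey's actual variable names are not reported in this paper.
INTEGRATION_ITEMS = [
    "reasons_explained",
    "completion_explained",
    "responses_discussed",
    "changes_discussed",
]

def integration_score(responses: pd.DataFrame) -> pd.Series:
    """Sum the four explanation/integration items into one composite score per respondent."""
    return responses[INTEGRATION_ITEMS].sum(axis=1)
```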

Results

Clinician data

Of the ten clinicians who responded to the survey, six (60%) were clinical psychologists, and four (40%) were psychological therapists with nursing backgrounds.

Clinicians reported using measures with an average of 72% of service users, with individual scores ranging from 17% to 100%. An independent-samples t test (equal variances not assumed) showed that psychological therapists used measures with a significantly higher proportion of service users (93%) compared to clinical psychologists (57.5%) (t(6.15) = 2.7, p = 0.035).
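As a guide to how such a comparison can be run, a minimal sketch using SciPy's Welch t test (i.e. equal variances not assumed) is shown below. The per-clinician percentages are placeholders only, not the study data.

```python
from scipy import stats

# Placeholder values: the individual clinicians' percentages are not reported in the paper.
psychological_therapists = [88, 90, 95, 100]        # hypothetical
clinical_psychologists = [17, 45, 55, 60, 70, 98]   # hypothetical

# Welch's independent-samples t test (equal variances not assumed).
t_stat, p_value = stats.ttest_ind(
    psychological_therapists, clinical_psychologists, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```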

A total of 33 different measures were listed by clinicians as tools they tend to use with their clients. (For frequency of the different measures reported see Supplementary Table S1.)

The ratings of the extent to which clinicians felt the statements about measures applied to them are shown in Figure 1. The two subscales representing positive and negative views about measures showed good internal consistency (Cronbach's α = 0.79 and 0.74, respectively). The highest mean rating was for the statement ‘Measures help with assessment and diagnosis’, and there was a general tendency for clinicians to rate positive beliefs about measures as more applicable to them compared to negative ones.
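For reference, Cronbach's α for a subscale can be computed from a respondents-by-items table using the standard formula, as in the minimal sketch below. This is illustrative only; the clinicians' item-level ratings are not reproduced here.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items table of Likert ratings.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of the total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)
```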

Service user data

Of the 15 survey respondents, 14 (93%) completed version A of the survey, indicating they had completed measures as part of therapy. The number of sessions respondents had attended varied widely, ranging from six sessions to many over a 3-year period. Sessions occurred most commonly on an individual basis (46%), with the remainder in group format or a mixture of both.

Practical experience of measures

Regarding how frequently measures were used, five (36%) respondents reported having completed questionnaires once during therapy, six (43%) every few sessions, and three (21%) every session. Questionnaires were most commonly completed at home (57%), with 43% completed in session (see Table 1). No respondents reported completing questionnaires in the waiting room. The same questionnaires had been completed at more than one time-point by 64% of respondents.

Six (43%) participants reported that the questionnaires took less than 5 min to complete, with four (29%) taking 5–10 min, and three (21%) between 11 and 20 min. One (7%) respondent did not answer this question.

Ten (71%) respondents felt that the reasons for using questionnaires had been explained well by their clinician, two (14%) reasonably well, and two (14%) poorly. Similar results were found for how well respondents felt therapists had explained how to complete the questionnaires (64%, 21%, and 14%, respectively). No one reported that these aspects were not explained.

Half the respondents felt that their responses to the questionnaires had been discussed well, with 21% reasonably well and 7% poorly; two (21%) people reported that their responses had not been discussed with them. Regarding how well any changes in their responses over time had been discussed, four (29%) respondents felt this was done well, three (21%) reasonably, and one (7%) poorly, with the remaining six (43%) reporting that this was not done or was not applicable to them.

Impact of measures on therapy

Respondents' ratings indicated that the questionnaires used in therapy were generally relevant to them, and led to helpful discussions with their therapist (see Table 2).

Respondents felt that their therapists understood them and their difficulties well, and that using questionnaires as part of therapy is generally a good idea. Respondents gave more mixed views as to whether they had personally found questionnaires helpful in their therapy, though the mean rating was positive (see Table 3).

Therapist understanding showed positive correlations with both perceived helpfulness (r = 0.65, p = 0.012) and positive feelings about the routine use of measures (r = 0.74, p = 0.003; Pearson product-moment correlations, parametric assumptions met). Responses to the latter two questions were also correlated (r = 0.89, p < 0.001). All correlations were significant, with p values below the Bonferroni-corrected critical value of 0.017.
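A minimal sketch of this analysis, computing the three pairwise Pearson correlations and comparing each p value with the Bonferroni-corrected threshold (0.05 / 3 ≈ 0.017), is given below. The column names are hypothetical and the ratings data are not reproduced here.

```python
from itertools import combinations

import pandas as pd
from scipy import stats

# Hypothetical column names for the three rated questions.
COLUMNS = ["therapist_understanding", "helpfulness_of_measures", "routine_use_good_idea"]

def bonferroni_correlations(df: pd.DataFrame, columns=COLUMNS, alpha: float = 0.05):
    """Pairwise Pearson correlations with a Bonferroni-corrected critical p value."""
    pairs = list(combinations(columns, 2))
    corrected_alpha = alpha / len(pairs)  # 0.05 / 3 ≈ 0.017 for three comparisons
    results = []
    for a, b in pairs:
        r, p = stats.pearsonr(df[a], df[b])
        results.append({"pair": (a, b), "r": r, "p": p, "significant": p < corrected_alpha})
    return corrected_alpha, results
```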

The composite variable of how well service users felt measures had been explained and integrated into therapy showed a significant positive correlation with respondents’ overall ratings of the helpfulness of measures (r = 0.77, p = 0.001). These data are shown in Figure 2.

Fig. 2. Scatterplot showing the association between respondents’ perceptions of how well measures were integrated into the therapy, and how helpful they rated their use overall within their therapy.

Emotional experience of completing measures

The mean ratings of how completing the questionnaires made respondents feel are shown in Figure 3. ‘Anxious’ was the most highly rated item (mean = 5.9, s.d. = 2.5), followed by ‘Down/depressed’ (mean = 5.6, s.d. = 2.4) and ‘Interested’ (mean = 5.1, s.d. = 2.7).

Fig. 3. Respondents’ mean ratings of how completing questionnaires made them feel, where 1 = not at all, and 10 = extremely. Error bars represent ±1 standard deviation.

Comments and suggestions

Six (43%) respondents provided comments on the use of questionnaires in therapy and/or suggestions on how this process could be improved. These were reviewed and the themes identified are presented below with representative extracts.

The main theme (four participants) present in the responses related to the need for service users’ responses to be discussed with them by their therapist, suggesting this does not occur routinely:

It might be helpful to go through my responses over time, I have completed numerous mood questionnaires but they have never been discussed or mentioned. (Participant 1)

It may be that service users have not had the opportunity to discuss their feelings about what the measures show:

When a comparison between initial and later questionnaires was calculated, I felt that the results were not accurate, i.e. an ‘improvement’ was indicated which did not correspond with my feelings. (Participant 6)

Another theme (two participants) related to the way response options are presented within measures, with service users expressing that the options sometimes feel too broad:

I often find it difficult to limit my considered reply to the ‘one answer’ choice and with my therapist often found I felt I had to make notes to make my reply more accurate so as not to be misunderstood. A more accurate view of the patient's feelings could be achieved if it was possible to give options to briefly clarify or explain replies. (Participant 15)

In the forms that I have been asked to complete, I felt the scales had insufficient grades to allow a subtle enough response. (Participant 6)

Other pertinent comments and suggestions made by individual respondents are presented below:

I do not struggle to talk about mental health problems greatly, whereas a questionnaire could be the voice of someone who does. (Participant 12)

I don't know whether the results are recorded on the computer records but I feel that if they were, then periodic completion of the questionnaires may show any significant change, and anyone involved in the care could gain access to these records then they could prove to be useful to both patient and therapist. (Participant 10)

Personally I have lied on them as there feels like a pressure to improve and you don't want the services to be dropped or lose funding. (Participant 12)

As there was only one respondent who completed version B of the survey, indicating they had not completed measures as part of therapy, these data were not included in the above analyses. This respondent had not been asked to complete measures and had not expected to be, and expressed some doubts about their value and helpfulness in therapy.

Discussion

This study has shown that service users’ perceptions of how well measures were used and integrated into therapy were strongly associated with how helpful they rated measures overall as part of therapy. Service users indicated that the act of completing measures can be difficult, raising feelings such as anxiety or low mood, but that measures can also provoke interest. They highlighted that the measures they completed seemed relevant to them, and that measures generally led to helpful discussions with their therapists. These findings were reinforced by service users’ comments and suggestions about the need for therapists to discuss the responses given and to seek service users’ perceptions of what the measures may indicate.

The clinicians surveyed reported using a wide range of measures and generally endorsed positive beliefs about measures more strongly than negative ones; perhaps as a result, they used measures with the majority of the service users they work with (72% on average). Most service users in the study reported completing measures as part of their therapy, but had varied experiences regarding how these were used and how well they were explained by their therapists. The clinicians' stronger endorsement of the positive items indicates a fairly positive attitude towards the use of measures, and this in itself may have made clinicians more likely to complete the survey.

The responses from service users suggest a degree of variation in how well measures were explained, used, and integrated into therapy sessions by their therapists. It is interesting to note that despite any difficulties with this, and despite the fact that completing the measures evoked, on average, some feelings of anxiety and low mood, service users generally reported that measures led to helpful discussions with their therapists and that they would recommend their use as a routine part of therapy.

These findings must be interpreted tentatively given the limited size of the sample and the large degree of variance in the data. Selection bias may have been present on the part of both clinicians and service users, who may have been more likely to hand out, or complete, surveys if measures had been used successfully. This may be a particularly pertinent issue in the present study given that the surveys themselves asked about questionnaire completion. Both clinicians and service users were encouraged to participate in the study even if measures had not been used, minimizing this bias where possible, but research into why measures may not be completed by service users is much needed. Again, the lack of literature in this field limits the ability to interpret these results in context, but it is hoped that this study will begin the process of developing our knowledge in this area.

The present results point to a number of implications and recommendations for therapists, services, and future research. Perhaps the clearest of these is that therapists need to consider carefully the explanations they provide to service users about the purpose and process of using measures and how they should be completed, which may include encouraging note-writing to clarify responses. Having this ‘foundation’ in place seems fundamental, along with the subsequent tasks of discussing service users’ responses, seeking their experience of completing them, and exploring their perceptions and opinions of the results. Finally, where questionnaires are repeated over time, previous responses should be revisited and both subjective and objective changes discussed. It appears that this careful and thoughtful approach to using measures in therapy sessions drives their perceived helpfulness overall.

In light of this, it is perhaps appropriate to recommend that services consider the training available to staff, particularly where more routine approaches to collecting outcome data are being implemented or planned. It is suggested that service managers and clinicians should place greater emphasis on how, and not simply whether, measures are being used. It could be hypothesized that if clinicians hold generally negative beliefs about measures this may lead to more tokenistic use, which the present findings indicate is perceived as unhelpful by service users. Additionally, the findings suggest that services need to ensure the availability of appropriate measures, and as highlighted by one of the suggestions made, to consider whether and how responses might be documented and/or recorded centrally, and if so how this is communicated to service users.

Clearly there are opportunities and requirements for further research in this area. As mentioned above, exploring staff training interventions and how clinicians’ beliefs about measures may influence how they use them in therapy is an important next step given the present results. Understanding clinicians’ decision making about whether or not to use measures with a particular person may be beneficial, along with exploring the views of service users who are not given measures, which was not possible in the present study owing to the lack of such responses. Linking clinician and service user views using approaches such as multilevel modelling, and drawing direct comparisons between therapy sessions that include and exclude measures, may prove helpful in addressing questions such as how measures affect the therapeutic relationship. Research is also required into the role that new technologies such as mood tracking applications (e.g. Drake et al. 2013) can play in supporting outcome measurement in therapy, and qualitative and case study approaches would help develop our understanding of service user experiences at an individual level.

Summary

  • There is an alarming lack of literature investigating service user perspectives on the use of outcome measures in therapy.

  • Service users in the current study reported varied experiences regarding how well measures were explained and used.

  • Measures were rated as more helpful when they were effectively integrated into therapy by their clinician.

  • Service users emphasized the need for clinicians to discuss their responses, highlighting that this does not occur routinely.

Ethical standards

The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.

Supplementary material

For supplementary material accompanying this paper visit http://dx.doi.org/10.1017/S1754470X15000598.

Acknowledgements

The authors would like to thank the participants and clinicians involved in the study, along with those who provided valuable contributions in the consultation phase.

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Declaration of Interest

None.

Learning objectives

  • To understand service users’ experiences and opinions on the use of measures in therapy.

  • To consider clinicians’ beliefs about measures and how this may affect their use.

  • To consider how the perceived helpfulness of measures might be improved.

References

Recommended follow-up reading

Hatfield, DR, Ogles, BM (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice 35, 485.
Unsworth, G, Cowie, H, Green, A (2012). Therapists’ and clients’ perceptions of routine outcome measurement in the NHS: a qualitative study. Counselling and Psychotherapy Research 12, 71–80.

References

Beale, M, Cella, M, de C Williams, AC (2011). Comparing patients’ and clinician-researchers’ outcome choice for psychological treatment of chronic pain. Pain 152, 2283–2286.
Dawson, J, Doll, H, Fitzpatrick, R, Jenkinson, C, Carr, AJ (2009). The routine use of patient reported outcome measures in healthcare settings. BMJ (Clinical Research Edition) 340, c186.
Department of Health (1999). National service framework for mental health: modern standards and service models (https://www.gov.uk/government/publications/quality-standards-for-mental-health-services).
Department of Health (2008). IAPT Implementation Plan: National Guidelines for Regional Delivery (http://www.iapt.nhs.uk/silo/files/implementation-plan-national-guidelines-for-regional-delivery.pdf).
Drake, G, Csipke, E, Wykes, T (2013). Assessing your mood online: acceptability and use of Moodscope. Psychological Medicine 43, 1455–1464.
Greenhalgh, J, Meadows, K (1999). The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. Journal of Evaluation in Clinical Practice 5, 401–416.
Harmon, C, Hawkins, EJ, Lambert, MJ, Slade, K, Whipple, JS (2005). Improving outcomes for poorly responding clients: the use of clinical support tools and feedback to clients. Journal of Clinical Psychology 61, 175–185.
Hatfield, DR, Ogles, BM (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice 35, 485.
Hatfield, DR, Ogles, BM (2007). Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research 34, 283–291.
Hawkins, EJ, Lambert, MJ, Vermeersch, DA, Slade, KL, Tuttle, KC (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research 14, 308–327.
HCPC (2012). Standards of proficiency: practitioner psychologists. Health and Care Professions Council.
Hunter, J, Higginson, I, Garralda, E (1996). Systematic literature review: outcome measures for child and adolescent mental health services. Journal of Public Health 18, 197–206.
James, K, Elgie, S, Adams, J, Henderson, T, Salkovskis, P (in press). Session-by-session outcome monitoring in CAMHS: clinicians' beliefs. The Cognitive Behaviour Therapist.
Lambert, MJ, Whipple, JL, Hawkins, EJ (2003). Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice 10, 288–301.
Lambert, MJ, Whipple, JL, Smart, DW, Vermeersch, DA, Nielsen, SL (2001). The effects of providing therapists with feedback on patient progress during psychotherapy: are outcomes enhanced? Psychotherapy Research 11, 49–68.
Lambert, MJ, Whipple, JL, Vermeersch, DA, Smart, DW, Hawkins, EJ, Nielsen, SL (2002). Enhancing psychotherapy outcomes via providing feedback on client progress: a replication. Clinical Psychology & Psychotherapy 9, 91–103.
McInnes, B (2006). Management at a crossroads: the service management challenge of implementing routine evaluation and performance management in psychological therapy and counselling services. European Journal of Psychotherapy and Counselling 8, 163–176.
Mental Health Research Network (2010). Outcome measurement in mental health: the views of service users (http://www.mhrn.info/data/files/MHRN_PUBLICATIONS/REPORTS/outcome_measures_report.pdf).
NICE (2009). Depression in adults: the treatment and management of depression in adults. Clinical Guideline 90. National Institute for Health and Care Excellence.
NICE (2011). Common mental health disorders. Clinical Guideline 123. National Institute for Health and Care Excellence.
Nordal, KC (2012). Outcomes measurement benefits psychology. Monitor on Psychology 43, 51.
Perry, A, Gilbody, S (2009). User-defined outcomes in mental health: a qualitative study and consensus development exercise. Journal of Mental Health 18, 415–423.
Rao, AS, Hendry, G, Watson, R (2010). The implementation of routine outcome measures in a Tier 3 Psychological Therapies Service: the process of enhancing data quality and reflections of implementation challenges. Counselling and Psychotherapy Research 10, 32–38.
Salkovskis, PM (1984). Psychological research by NHS clinical psychologists: an analysis and some suggestions. Bulletin of the British Psychological Society 37, 375–377.
Unsworth, G, Cowie, H, Green, A (2012). Therapists’ and clients’ perceptions of routine outcome measurement in the NHS: a qualitative study. Counselling and Psychotherapy Research 12, 71–80.
