
Simulated patients and objective structured clinical examinations: review of their use in medical education



Copyright © The Royal College of Psychiatrists 2002

Simulated or standardised patients have been used in medical education and other medical settings for some 30 years (Box 1). Their use encompasses undergraduate and postgraduate learning, the monitoring of doctors’ performance and standardisation of clinical examinations. Simulation has been used for instruction in industry and the military for much longer (Jason et al, 1971) but the first known effective use of simulated patients was by Barrows & Abrahamson (1964), who used them to appraise students’ performance in clinical neurology examinations.

Box 1 Uses of simulated patients in educational settings

Teaching communication skills

Teaching clinical skills

Monitoring the performance of doctors

Clinical examinations

The objective structured clinical examination (OSCE) was first described by Harden & Gleeson (1979) as ‘a timed examination in which medical students interact with a series of simulated patients in stations that may involve history-taking, physical examination, counselling or patient management’. Because OSCEs have been shown to be feasible and to have good reliability and validity (Hodges et al, 1998), their use has become widespread as the standard for performance-based assessment, particularly in undergraduate examinations.

The use of OSCEs in undergraduate examinations (‘summative’ use) occurs in every London medical college and was pioneered by the Royal London and St Bartholomew's Hospitals. Many colleges across the UK have now adapted their examinations to include these components. In addition, there is considerable uptake in the use of simulated patients for medical student training (‘formative’ use).

Several of the medical Royal Colleges have introduced an OSCE component into their postgraduate membership examinations. For example, the Royal College of Anaesthetists includes an OSCE in Part I of the fellowship examination, and the Royal College of Obstetricians and Gynaecologists has an OSCE in Part II of their examinations. The Royal College of Surgeons, London, is introducing OSCEs, and a pilot is being planned for this year. The Royal College of Physicians has a Practical Assessment of Clinical Examination Skills (PACES) in their clinical examinations, part of which comprises a communication and ethics station, in which simulated patients are used. The Royal College of General Practitioners prefers to examine using videotapes of real consultations, but it also offers a ‘simulated surgery’ that about 5% of candidates use.

The Royal College of Psychiatrists has recently proposed changes to the existing membership examinations with a view to increasing their reliability and validity (http://www.rcpsych.ac.uk/traindev/exams/exam_recent.htm). The main changes are to the Part I examination, and from spring 2003 the existing individual patient assessment will be replaced by an OSCE, in which simulated patients are likely to be used. The Part II examination will essentially retain the same format.

In view of the proposed changes we feel that a review of the literature surrounding the use of simulated patients and OSCEs would be both useful and timely. In this paper we focus on the following issues. First, the definition of simulated patients; second, the feasibility of standardising simulators; third, a historical overview of simulated patients in medical settings; and fourth, the reliability and validity of OSCEs. We also comment on the possible burden to the people who are playing the part of patients.

Who are simulated patients?

Despite the increasing use of simulated patients in medical education, the literature remains inconsistent about what is meant by a simulated or standardised patient. Some authors use the term simulated patients (Sanson-Fisher & Poole, 1980; Norman et al, 1982), but others use the term standardised patients (Rubin & Philp, 1998). Although having quite different meanings, these two terms are often used interchangeably. Others have used the terms pseudo- or surrogate patients (Badger et al, 1995). Vu & Barrows’s (1994) definition of standardised patients includes ‘real or simulated patients who have been coached to present a clinical problem’. We found only one reference relating to the use of real patients (McClure et al, 1985). Therefore, in the rest of this review we use the term ‘simulated patients’, as we feel it encompasses all definitions.

In some cases simulated patients are professionally trained actors playing the part of patients (Norman et al, 1982). However, Sanson-Fisher & Poole (1980), when comparing medical students’ performance with real and with simulated patients, used volunteer simulators who were not ‘members of the acting profession’. In a description of new medical student teaching at Michigan State University, Jason et al (1971) write that their simulated patients were ‘primarily drama students from our campus’ and additionally ‘several housewives, some of whom had no previous acting experience’. Rubin & Philp (1998), in a study of the health perceptions of simulators, state that simulated patients were recruited from ‘the allied health programmes at local colleges, community volunteer programmes, the community senior citizen programme and from clinics at a university-based department of family medicine’. Others, for example Hodges et al (1996), do not clearly state where they recruited their simulated patients. Therefore, simulated patients are not a homogeneous group and their only common characteristic is that of simulating real patients. This raises the question of whether such a diverse group can be trained to behave in a ‘standardised’ way.

Uses of simulated patients in clinical and educational settings

Teaching communication skills

This is the main use of simulated patients in medical education: simulation gives students the opportunity to work in approximations of real-world settings, where they are confronted with the challenging task of establishing a relationship while eliciting clinical information. The major advantage of effectively devised simulations is that they can simultaneously have most of the engaging qualities of reality while being explicitly controlled and safe (Jason et al, 1971). Other advantages include the role simulators have in giving direct feedback to medical students on their performance and their ready availability for teaching purposes (Sanson-Fisher & Poole, 1980).

In a well-controlled study, Sanson-Fisher & Poole (1980) demonstrated the validity of using simulated patients in the assessment of medical students’ interpersonal skills. The simulators played psychiatric out-patients and the dependent measure was a rating of empathy based on a retrospective review of audio recordings. Student performance on this rating was not significantly different with genuine as opposed to simulated patients. In addition, they reported that students were unable to discriminate between persons simulating a patient role and those who presented a real history.

Teaching clinical skills

Norman et al (1982), in a study within a postgraduate setting, showed that simulated patients could be used in areas beyond communication skills alone. Using a sample of 10 residents in hospital and family medicine, they compared residents’ performance on four real patients with chronic stable conditions and on four simulators coached to present the same problems. They found no significant differences in the number of questions asked in the history, the physical examination findings, the diagnoses considered or the investigations proposed by the residents. Interestingly, the residents elicited more historical information from the simulated patients, but this was found to be due to a single case in which the real patient, a woman with multiple sclerosis, had memory loss. Residents also correctly identified 67% of the patients as real or simulated (against a chance rate of 50%). Arguably, a problem with the studies by both Norman et al (1982) and Sanson-Fisher & Poole (1980) is whether they can be generalised beyond the settings and patient problems involved.
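As an aside, whether a detection rate such as 67% genuinely exceeds the 50% expected by chance can be checked with a simple binomial test. The sketch below is illustrative only: the encounter count is our assumption (10 residents × 8 patients), not a figure reported by Norman et al.

```python
# Hedged sketch: does a ~67% detection rate beat the 50% chance level?
# The counts below are ASSUMED for illustration (10 residents x 8 patients),
# not taken from Norman et al (1982).

from scipy.stats import binomtest

n_encounters = 80   # assumed: 10 residents x (4 real + 4 simulated) patients
n_correct = 54      # ~67% of 80

result = binomtest(n_correct, n_encounters, p=0.5, alternative="greater")
print(f"Detection rate: {n_correct / n_encounters:.0%}")
print(f"One-sided p-value vs chance (50%): {result.pvalue:.4f}")
```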

Monitoring doctors’ performance

Another use of simulated patients is in the monitoring of doctors’ performance. For example, Rethans & Boven (1987) in The Netherlands introduced simulated patients into 48 general practitioners’ (GPs’) surgeries and collected data about their performance. The simulators described symptoms of a urinary tract infection, and the researchers were interested in whether the GPs acted according to a consensus standard for managing a patient presenting with this condition. The GPs did not detect the simulators, and the method was shown on the whole to give a more accurate reflection of actual practice than written simulations elicit.

Rethans et al (1991) repeated this study for other clinical problems, such as headaches, diarrhoea, diabetes and shoulder pain, and found similar results, which led them to conclude that ‘standardised patients may be the method of choice in the assessment of quality of actual care of doctors’. They state that existing methods for performance assessment, such as written tests and clinical examinations, have doubtful reliability and validity. Although audio and video recordings of consultations can be reliable and valid, they feel that the limited control over patient characteristics makes it difficult to compare performance between doctors.

A study by McClure et al (1985) introduced trained patients with uncomplicated rheumatic disease into the consulting rooms of 26 family physicians in Arizona. The physicians’ ability to collect diagnostic information and formulate a plan was investigated. Although most of the physicians neglected areas such as psychosocial impact and mental health, the majority made an adequate assessment and virtually all developed an adequate care plan. The authors felt that the simulated patient method provided an unobtrusive way of auditing physicians’ encounters and that the technique could be used in peer review and the planning of training programmes. It is interesting to note, however, that additional postgraduate education does not seem to change the practice of doctors (Sibley et al, 1982).

Simulated patients and clinical examinations

The traditional oral examination has been criticised for its inherent unreliability and poor validity (Hodges et al, 1997). Data gathered by the National Board of Medical Examiners in the USA (1960–1963), involving over 10 000 medical students, showed that the correlation of independent evaluations by two examiners was less than 0.25 (Hubbard et al, 1963). It has also been demonstrated that the luck of the draw in the selection of examiner and patient played a significant role in the outcome of postgraduate oral examinations in psychiatry (Leichner et al, 1984). Indeed, Leichner et al went on to suggest that the use of OSCEs may be a feasible means of improving the validity and reliability of oral examinations (Leichner et al, 1986).
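To make that statistic concrete, the figure quoted is essentially the Pearson correlation between two examiners’ independent marks for the same candidates. A minimal sketch with synthetic data (not the actual examination figures) shows how noisy examiner judgement drives this correlation down:

```python
# Hedged sketch with SYNTHETIC data: two examiners independently mark the
# same candidates; each mark is the candidate's true ability plus examiner
# noise. Large noise relative to real ability differences yields a low
# inter-examiner correlation, of the order quoted above.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

true_ability = rng.normal(60, 10, size=200)          # candidates' underlying skill
examiner_a = true_ability + rng.normal(0, 15, 200)   # examiner A's noisy marks
examiner_b = true_ability + rng.normal(0, 15, 200)   # examiner B's noisy marks

r, _ = pearsonr(examiner_a, examiner_b)
print(f"Inter-examiner correlation: r = {r:.2f}")
```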

An advantage of simulated patients over real patients is that of allowing different students to be presented with a similar challenge, thereby reducing an important source of variability (Norman et al, 1985). Other advantages include their reliable availability and adaptability, which enable the reproduction of a wide range of clinical phenomena tailored to the student's level of skill. In addition, they can simulate scenarios that may be distressing for a real patient, such as bereavement or terminal illness (Sanson-Fisher & Poole, 1980).

A disadvantage of simulated patients is that they may become ideal ‘textbook cases’, to which real patients, with all their idiosyncrasies, do not often conform. Another disadvantage, discussed by Hodges et al (1997), is the expense of simulated patients, whose training and time spent performing accounted for the largest proportion of the direct cost of setting up OSCEs. Hodges et al do, however, go on to point out that this cost is more than offset by the reduced faculty hours per annum needed to run OSCEs compared with traditional oral examinations (Box 2).

Box 2 Advantages and disadvantages of objective structured clinical examinations

Advantages

Simulations of real-life situations

Close to reality

Controlled and safe

Feedback from the actors (simulators)

Ready availability when required

Stations can be tailored to level of skill to be assessed

Scenarios that are distressing to real patients can be simulated

The patient variable in examination is uniform across trainees

Disadvantages

The idealised ‘textbook’ scenarios may not mimic real-life situations

May not allow assessment of complex skills

Cost

Training issues in setting up the stations

Standardisation and training of simulated patients

The range of clinical problems that simulators can reproduce is wide and varied, but training is essential to making their performance as lifelike as possible. Arguably, this is of key importance with respect to the issue of standardisation. However, we have found little in the literature on what is meant by standardisation or on the benchmark standards to which the simulators are trained.

We feel that standardisation has two components: the validity or accuracy of performance, and the reliability or consistency of performance when faced with different examinees. The first of these has been determined in a number of ways. Both content and face validity have been addressed by having an expert review the simulations and determine their accuracy. For example, Hodges et al (1997) mention that each simulated patient was observed performing the role by the OSCE station author to verify the realism of the performance. Indirect indicators of validity might include the fact that simulators are rarely distinguished from real patients (Sanson-Fisher & Poole, 1980; Baerheim & Malterud, 1995). This non-detectability suggests that their behaviour is similar to that of real patients. This was further explored in a double-blind study in which simulated patients were substituted for real patients in the individual patient assessment of mock clinical examinations in psychiatry (further details available from the author upon request). Neither the examiners nor the students could detect the presence of simulated patients among the real patients. Hodges et al (1997) demonstrated the construct validity of OSCE psychiatry clerkship examinations by comparing the performance of residents and medical students. Concurrent validity was demonstrated by asking tutors responsible for the students to rank them with respect to their interviewing skills: the tutors’ rankings predicted the rankings generated by performance on the OSCE stations. Content validity was assessed by asking residents how real the simulations were: 80% described the scenarios as real or very real.
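The concurrent-validity check described above amounts to comparing two rank orders, for which Spearman's rank correlation is the natural statistic. The sketch below uses synthetic placeholder data, not Hodges et al's results:

```python
# Hedged sketch with SYNTHETIC data: do tutors' rankings of interviewing
# skill agree with the rankings implied by OSCE scores? Spearman's rho
# compares the two rank orders; values near 1 indicate strong agreement.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

skill = rng.normal(0, 1, size=30)               # 30 students' underlying skill
tutor_scores = skill + rng.normal(0, 0.5, 30)   # tutors' imperfect judgements
osce_scores = skill + rng.normal(0, 0.5, 30)    # OSCE's imperfect measurement

rho, p = spearmanr(tutor_scores, osce_scores)
print(f"Spearman's rho = {rho:.2f} (p = {p:.3g})")
```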

Badger et al (1995) investigated the reliability of simulated patients: their consistency of performance over time and between trainees. Their study analysed the performance of 13 simulated patients during 228 doctor–patient encounters in a year-long study related to the diagnosis of depression. Results revealed high intra- and inter-performance reliabilities, even when intervals between performances were as long as 3 months. The doctors detected depression in 30% of simulators, roughly the same detection rate as in real patients. In a review of simulated patients, Vu & Barrows (1994) state that ‘with good training, the simulated patients can be accurate and consistent in the essential features of their simulations’. Vu et al (1987) have also reported that simulated patients are able to enact their roles reliably up to 12 times a day.

A key requisite for achieving both accuracy and consistency of simulated patient performance is good training. Much of the literature refers to the standard methods described by Barrows (1971). For example, in a study to test the validity of psychiatric undergraduate OSCEs, Hodges et al (1997) describe the work of an experienced simulated patient trainer: ‘training for a role begins with the presentation of written material and where possible, video footage of real patients. Each simulated patient is then observed performing the role by the station's author to verify the realism of the portrayal and to ensure consistency across the simulated patients in their presentation of affect and in their response to questions.’

Objective structured clinical examinations and psychiatry

In many disciplines and specialities OSCEs have been studied extensively and their reliability and validity established (Hodges et al, 1997). This has been more difficult to achieve in psychiatry examinations, and there are several reasons to believe that an OSCE might not be as valid in the assessment of psychiatric skills. First, the binary checklists typically used in most OSCEs are insufficiently sensitive to detect higher-order clinical skills such as empathy, rapport and ethics (Cox, 1990). Second, although a typical OSCE station lasts up to 15 minutes, a traditional psychiatric interview lasts 50 minutes, raising questions about the content validity of the short OSCE station. Finally, some argue that complex psychiatric presentations such as thought disorder are difficult to simulate (Hodges et al, 1998).

Famuyiwa et al (1991), at the University of Lagos in Nigeria, investigated whether the OSCE examination in psychiatry was effective and valid. Comparing the OSCE scores of 123 students with criterion-based reference scores on multiple choice questions, they found that the multiple choice marks correlated significantly with the OSCE marks.

Much of the work in this area, however, has been carried out by Hodges and colleagues at the University of Toronto. For example, in 1995, following the success of a number of pilot OSCEs, they introduced an eight-station OSCE into the curriculum as a criterion for medical students to pass the psychiatry clerkship. Subsequent analysis of this OSCE suggested that a psychiatric OSCE is a feasible method for assessing complex psychiatric skills. These skills were evaluated in three ways. First, behaviours identified as important to a successful interview were scored as ‘done’ or ‘not done’ on a binary checklist. For example, a station on depression might typically include the items ‘Asks about sleep’ and ‘Asks about suicidal ideation’. Box 3 shows amended extracts from Guy's, King's and St Thomas’ psychiatric OSCE rating form. Second, a global rating scale for performance was scored using four 5-point scales that addressed each student's organisation, rapport, interview-building skills and control of his or her emotions during the interview. Third, each examiner recorded a global impression of each student's performance. They examined 192 medical students on two parallel forms (A and B) of the examination and found it to have a reasonably high interstation reliability (Form A: Cronbach's α = 0.64; Form B: α = 0.66). However, they add that ‘Whilst it is generally desirable to have a reliability of 0.80 or greater for high stakes examinations … [t]he reliability we have found is adequate for decisions regarding a clerkship rotation’ (Hodges et al, 1997).
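For readers unfamiliar with the statistic, Cronbach's α treats each OSCE station as an ‘item’ and measures how consistently the stations rank the same students. A minimal sketch of the calculation, using synthetic scores rather than the Toronto data:

```python
# Hedged sketch: Cronbach's alpha for a (students x stations) score matrix.
# The 192-student, 8-station shape mirrors the examination described above,
# but the scores themselves are SYNTHETIC placeholders.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    k = scores.shape[1]                              # number of stations
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of per-station variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of students' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(0, 1, size=(192, 1))            # each student's underlying skill
scores = ability + rng.normal(0, 2.0, size=(192, 8)) # 8 stations with station noise

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```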

Box 3 Extracts from a binary checklist (Yes/No) for the core features of depression in a psychiatric undergraduate examination (unpublished; reproduced with permission of Teifon Davis)

[Candidate] elicits the following aspects of patient's main complaint

Mood low/tired

Feelings of loss of self-esteem

Early morning wakening

Loss of appetite

Loss of concentration

Loss of libido

Work circumstance/stress

Past psychiatric history

Family history of mental disorder

Patient's response to mental disorder: concerns about medication/self-weakness

Home circumstances: concerns about losing job
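To make the scoring mechanics concrete, a binary checklist such as Box 3 reduces to marking each item as done or not done and reporting the proportion completed; the global ratings are scored separately. A hypothetical sketch (items paraphrased from Box 3; the scoring rule is our assumption, not the examination's actual marking scheme):

```python
# Hedged sketch: scoring a binary checklist station. Items are paraphrased
# from Box 3; the percentage score is an ASSUMED illustrative rule.

checklist = {
    "mood low/tired": True,
    "loss of self-esteem": True,
    "early morning wakening": False,
    "loss of appetite": True,
    "loss of concentration": True,
    "loss of libido": False,
    "work circumstances/stress": True,
    "past psychiatric history": True,
    "family history of mental disorder": False,
    "response to mental disorder": True,
    "home circumstances": True,
}

done = sum(checklist.values())
print(f"Checklist items elicited: {done}/{len(checklist)} ({done / len(checklist):.0%})")
```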

We therefore feel that the following issues need consideration with regard to the introduction of OSCEs into Part I of the Royal College of Psychiatrists’ membership examinations:

  (a) In view of this being a ‘high-stakes’ postgraduate examination, how are reliability and validity being established?

  (b) What is the added value of OSCEs over the current method of examination?

  (c) What skills can be adequately tested by binary checklists?

  (d) How can key psychiatric skills (e.g. empathy and building rapport with patients) be assessed?

Is there a burden to simulated patients?

It is worth bearing in mind that the often highly emotional nature of simulated patients’ roles can have a residual effect on the simulators. Hodges et al (1997) note significant sequelae when simulated patients are required to play difficult roles. These include difficulty emerging from their characters, exhaustion, euphoria and, more seriously, sleep disturbance and heightened levels of anxiety, anger or sadness. They suggest that great care be taken in the selection of simulated patients and that debriefing and monitoring of simulated patients are essential. In a 5-year longitudinal study examining the impact of participation as a simulated patient on the simulators’ own health care perceptions, Rubin & Philp (1998) found that, although overall the simulated patients’ perceptions of their interactions with their own doctors were positive both before and after participation in OSCEs, perceptions of their own health care were significantly worse at 1 year post-OSCE. Again, they suggest the need for debriefing post-OSCE.

Conclusion

The use of simulated patients has a relatively long history. Standardisation of simulated patients, however, remains poorly defined; it involves both accuracy and consistency of performance. Research on OSCEs has shown that reliability and validity can be achieved, but this depends on adequate, standardised training methods.

With careful planning, it does seem that OSCEs are a feasible means of testing postgraduates in psychiatry, and attempts to make the examination more valid and reliable are to be welcomed. There is the possibility of a burden to those who simulate patients, but this can be minimised by careful selection and debriefing following the examination.

When considering the use of OSCEs to test postgraduates we feel it is important to remember that their nature is to break down clinical skills into small ‘testable’ tasks. This runs the risk of training doctors who are very good at performing these piecemeal tasks without being able to assimilate them into a coherent assessment. An analogy might be a pianist who can play beautiful scales and arpeggios but cannot play a complete sonata. We would hope that senior house officers preparing for OSCEs do not forget how to take a history, make a diagnosis and formulate a management plan. Even though this will be tested in Part II of the membership examinations, we feel these are essential skills for all doctors, from house officer to consultant.

Multiple choice questions

  1. With regard to the selection of simulated patients:

    a. they are always professional actors

    b. they may be real patients

    c. careful selection is unnecessary

    d. housewives have been used

    e. it is inadvisable to use students of acting.

  2. The following are interchangeable with the term ‘simulated patient’:

    a. surrogate patient

    b. pseudopatient

    c. pretend patient

    d. standardised patient

    e. real patient.

  3. Uses of simulated patients include:

    a. teaching communication skills to medical students

    b. reducing variability in examinations by presenting different students with the same challenge

    c. teaching clinical skills to postgraduates

    d. psychiatry examinations

    e. monitoring the performance of doctors.

  4. Simulated patients have the advantage of:

    a. never being detected

    b. being more intelligent than real patients

    c. being able to play scenarios which real patients may find distressing

    d. being cheaper than real patients

    e. having no personal emotions that might influence the doctor–patient relationship.

  5. Objective structured clinical examinations (OSCEs):

    a. have no proven reliability or validity in psychiatry examinations

    b. test only communication skills

    c. lend themselves easily to the testing of skills such as empathy or building rapport with the patient

    d. can be used to test ethics

    e. are difficult to set up.

MCQ answers

  1. a F, b T, c F, d T, e F
  2. a T, b T, c F, d T, e F
  3. a T, b T, c T, d T, e T
  4. a F, b F, c T, d F, e F
  5. a F, b F, c F, d T, e F

References

Badger, L. W., DeGruy, F., Hartman, J. et al (1995) Stability of standardized patients' performance in a study of clinical decision making. Family Medicine, 27, 126–133.
Baerheim, A. & Malterud, K. (1995) Simulated patients for the practical examination of medical students: intentions, procedures and experiences. Medical Education, 29, 410–413.
Barrows, H. S. (1971) Simulated Patients: The Development and Use of a New Technique in Medical Education. Springfield, IL: Charles C. Thomas.
Barrows, H. S. & Abrahamson, S. (1964) The programmed patient: a technique for appraising student performance in clinical neurology. Journal of Medical Education, 39, 802–805.
Cox, K. (1990) No Oscar for OSCE. Medical Education, 24, 540–545.
Famuyiwa, O. O., Zachariah, M. P. & Ilechukwu, S. T. C. (1991) The objective structured clinical examination in undergraduate psychiatry. Medical Education, 25, 45–50.
Harden, R. M. & Gleeson, F. A. (1979) Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education, 13, 41–54.
Hodges, B., Turnbull, J., Cohen, R. et al (1996) Evaluating communication skills in the objective structured clinical examination format: reliability and generalizability. Medical Education, 30, 38–43.
Hodges, B., Regehr, G., Hanson, M. et al (1997) An objective structured clinical examination for evaluating psychiatric clinical clerks. Academic Medicine, 72, 715–721.
Hodges, B., Regehr, G., Hanson, M. et al (1998) Validation of an objective structured clinical examination in psychiatry. Academic Medicine, 73, 910–912.
Hubbard, J. P., Levitt, E. J., Schumacher, C. F. et al (1963) An objective evaluation of clinical competence. New England Journal of Medicine, 272, 1321–1328.
Jason, H., Kagan, N., Werner, A. et al (1971) New approaches to teaching basic interview skills to medical students. American Journal of Psychiatry, 127, 140–142.
Leichner, P., Sisler, G. C. & Harper, D. (1984) A study of the reliability of the clinical oral examination in psychiatry. Canadian Journal of Psychiatry, 29, 394–397.
Leichner, P., Sisler, G. C. & Harper, D. (1986) The clinical oral examination in psychiatry: the patient variable. Annals of the Royal College of Physicians and Surgeons, Canada, 19, 283–284.
McClure, C. L., Gall, E., Meredith, K. E. et al (1985) Assessing clinical judgement with standardized patients. Journal of Family Practice, 20, 457–464.
Norman, G. R., Tugwell, P. & Feightner, J. W. (1982) A comparison of resident performance on real and simulated patients. Journal of Medical Education, 27, 708–715.
Norman, G. R., Barrows, H. S., Gliva, G. et al (1985) Simulated patients. In Assessing Clinical Competence (eds Neufield, V. R. & Norman, G. R.). New York: Springer.
Rethans, J. J. E. & Boven, C. P. A. (1987) Simulated patients in general practice: a different look at the consultation. British Medical Journal, 294, 809–812.
Rethans, J. J. E., Sturmans, F., Drop, R. et al (1991) Assessment of the performance of general practitioners by the use of standardized (simulated) patients. British Journal of General Practice, 41, 97–99.
Rubin, N. J. & Philp, E. B. (1998) Health care perceptions of the standardized patient. Medical Education, 32, 538–542.
Sanson-Fisher, R. W. & Poole, A. D. (1980) Simulated patients and the assessment of medical students' interpersonal skills. Medical Education, 14, 249–253.
Sibley, J. C., Sackett, D. L., Neufeld, V. et al (1982) A randomized trial of continuing medical education. New England Journal of Medicine, 306, 511–515.
Vu, N. V. & Barrows, H. S. (1994) Use of standardized patients in clinical assessments: recent developments and measurement findings. Educational Researcher, 23, 23–30.
Vu, N. V., Steward, D. E. & Marcy, M. (1987) An assessment of the consistency and accuracy of standardized patients' simulations. Journal of Medical Education, 62, 1000–1002.