
Validity of a short clinical interview for psychiatric diagnosis: the mini-SCAN

Published online by Cambridge University Press:  02 January 2018

F. J. Nienhuis*
Affiliation:
University Medical Centre Groningen, Department of Psychiatry, Groningen, The Netherlands
G. van de Willige
Affiliation:
University Medical Centre Groningen, Department of Psychiatry, Groningen, The Netherlands
C. A. Th. Rijnders
Affiliation:
GGZ Breburg Groep, Tilburg and Radboud University Nijmegen Medical Center, Department of Social Medicine, Nijmegen, The Netherlands
P. de Jonge
Affiliation:
University Medical Centre Groningen, Department of Psychiatry, Groningen, The Netherlands
D. Wiersma
Affiliation:
University Medical Centre Groningen, Department of Psychiatry, Groningen, The Netherlands
*
F. J. Nienhuis, University Medical Centre Groningen, Department of Psychiatry, P.O. Box 30.01, 9700 RB Groningen, The Netherlands. Email: f.j.nienhuis@med.umcg.nl

Abstract

Background

To promote clinical application of the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) system, a shorter version (the mini-SCAN) was devised. Its psychometric properties were unknown.

Aims

To establish the validity and practical properties of the mini-SCAN.

Method

One hundred and six participants were interviewed twice, once with the SCAN and once with the mini-SCAN. The level of agreement was established for the categories: no disorder, affective disorders, anxiety disorders, non-affective psychotic disorders, affective psychotic disorders.

Results

The mini-SCAN is a valid instrument. Most kappa values were around 0.90. Only for the class of affective psychotic disorders was the agreement moderate. Mean duration of the mini-SCAN interviews was 25 min shorter than the SCAN interviews. Participants and interviewers were generally satisfied with the interview format and questions.

Conclusions

The mini-SCAN can be used as a diagnostic instrument for clinical purposes and for clinical studies when the present episode is the focus of attention.

Copyright © Royal College of Psychiatrists, 2010

Psychiatric disorders contribute heavily to the global burden of disease.1–3 However, their detection and standardised treatment fall behind those of the major somatic disorders.4 The standard application of (structured and semi-structured) psychiatric interviews might help to improve the quality of psychiatric diagnosis in clinical practice and consequently improve allocation to effective treatment. Despite this possible benefit, the application of such interviews in patient care is the exception rather than the rule, partly because of their design, user interface and length.5

The Schedules for Clinical Assessment in Neuropsychiatry (SCAN) is a semi-structured psychiatric interview6 with a long tradition in psychiatric research. Unlike the Composite International Diagnostic Interview (CIDI)7 and the Structured Clinical Interview for DSM (SCID),8 the SCAN has retained its roots in Anglo-Saxon psychiatry and the phenomenological tradition, which emphasises the personal experience described by the individual. The SCAN and its predecessor, the Present State Examination (PSE),9 have been diagnostic touchstones for several decades. The psychometric properties of the SCAN have been investigated in several studies,10–16 and the SCAN has also been used as a standard against which to assess the validity of other instruments.17,18 The technique of cross-examination has become a standard in clinical practice and the SCAN's definitions of symptoms are quoted in many textbooks of psychiatry. Cross-examination entails in-depth exploration of symptoms in terms of severity, frequency and interference (often with the use of free-form questions) until the interviewer is satisfied that the criteria for the symptom are met (or not). Thus, unlike in interviews such as the Mini International Neuropsychiatric Interview (MINI)19 or the CIDI, a yes or no answer given by the individual is only the starting point for further probing of severity, persistence and interference.

In spite of its strengths, the SCAN is not routinely used in clinical practice, possibly because of its detail, length and relatively extensive training requirements. An abbreviated version might improve the acceptability of the SCAN method in daily clinical practice; however, to date, such a routinely applicable, practical version of the SCAN has not been available.

The SCID faced a similar problem: it was too elaborate for daily use. The MINI-Plus was developed as a shorter, more practical version of the SCID, intended for routine application in clinical practice. In the MINI, however, the clinical approach of probing and exploring symptoms was largely lost. Sheehan and Lecrubier19,20 have shown that the MINI yields sufficiently congruent diagnostic information when compared with the CIDI and SCID, while reducing the length of the interview by approximately 50%. We therefore hypothesised that, parallel to the MINI, a shortened version of the SCAN based on largely similar algorithms might also result in a substantial time gain without loss of diagnostic precision.

Thus the mini-SCAN was developed as a more practical and shorter version of the SCAN, retaining its principle of cross-examination. It was developed under the auspices of the World Health Organization Advisory Committee. The aim of the current study was to establish the validity of the mini-SCAN, with the SCAN as the criterion. Practical properties such as patient acceptance and duration of administration were also studied.

Method

Instruments

Both the SCAN and the mini-SCAN are intended for clinicians. They are interviewer based and involve clinical judgement. The interviewer has to make sure that sufficient information is gathered through cross-examination before the rating is given and should probe further using his or her own questions when needed. This type of interview is called semi-structured.

SCAN

The SCAN6 is the successor of the PSE.9 The core of the SCAN is PSE–10, consisting of an interview and a glossary with definitions of symptoms; the interested reader is referred to Wing et al (1990)6 for further details. We used the computer-assisted version of the SCAN 2.1 (I-shell 1.0.4.6) in the present study. There is also a paper and pencil version of the SCAN.

Training typically takes 4–5 days, with two mornings of lectures and 3–4 days spent on live interviews.

Mini-SCAN

The mini-SCAN covers a wide range of Axis I disorders (Appendix). The first version (called the 'Present State Examination for clinical use') was developed by Dr Bertelsen of the Danish Training and Reference Center (TRC) in Aarhus. It was published as a pocket-sized booklet, containing an abridged version of the symptom questions on the right-hand page and the definitions of symptoms on the left-hand page. Its aim was to provide a training tool for registrars and interns, and it contained only the queries and definitions of symptoms; it offered no classification items such as questions about the duration of symptoms, interference with functioning or course. Danish, English and Dutch versions of the booklet are available, and the Danish version has been introduced widely and is used clinically.

This first version was expanded and computerised by the first author (F.J.N., who heads the Dutch TRC in Groningen) and named mini-SCAN. There were several aims in the development and computerisation of this instrument. First, symptom and classification items were made compatible with current classification systems. Such items included screening questions, questions about duration of symptoms and interference with functioning. Furthermore, sophisticated algorithms (diagnostic rules) were developed, which will be described in detail below. The overall idea behind the computer-assisted version was to make a user-friendly clinical interview based on the SCAN, producing a diagnosis and a clinically useful report after administration. There is no paper and pencil version of the mini-SCAN.

Training took 1 day, comprising a lecture on symptoms and interview techniques, a pre-recorded demonstration of the interview and hands-on training by means of live interviews.

Software and algorithms

The mini-SCAN software (computerised interview) is the next stage of I-shell, the software written by the World Health Organization (WHO). I-shell (made by members of the WHO Advisory Committee) is an interview shell containing the whole SCAN text and has a passive database: the interviewer administers it as if it were the paper version, meaning that the choice of sections and items to be administered is left completely to the user. Items or sections can be skipped at the user's discretion, and the user is not prompted for missing information. At the end of the interview, diagnostic algorithms can be run, resulting in a diagnosis (if any).

The mini-SCAN is entirely web-based and can therefore be used on any computer with access to the internet. It has 'active' software. Positively rated screening questions (one for each section) activate the corresponding sections (e.g. depressive symptoms), where the individual symptom questions follow. Each symptom screen shows the question with the definition of the symptom below it. The rating options for symptoms are '0' (absent or subsymptom level), '1' (symptom level) and '?' (cannot rate/rating deferred). Sections have dynamic skips, avoiding superfluous questions, and the program does not allow the user to leave a section without completing it. If all core symptoms (which come first in the section) are rated absent, the user has the choice either to move to the next section or to complete the present one. The reason for changing the 'passive' approach was that, in our experience with the SCAN, a diagnosis was sometimes not generated because of missing information, often just one or a few items. This can no longer happen in the mini-SCAN.
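
As an illustration of this 'active' behaviour (and not the actual mini-SCAN implementation), the sketch below shows, in Python, how a positively rated screening question could open a section, how core symptoms are asked first and how the remainder of a section can only be skipped when all core symptoms are rated absent; all names, prompts and data structures are hypothetical.

```python
# Illustrative sketch only (hypothetical names); not the mini-SCAN code itself.
ABSENT, PRESENT, DEFERRED = "0", "1", "?"   # the three rating options described above

def ask(question):
    """Obtain a rating from the interviewer after cross-examination."""
    answer = input(f"{question} [0/1/?]: ").strip()
    return answer if answer in (ABSENT, PRESENT, DEFERRED) else DEFERRED

def run_section(core_symptoms, other_symptoms):
    """Administer one symptom section; every item receives a rating."""
    ratings = {q: ask(q) for q in core_symptoms}
    if all(r == ABSENT for r in ratings.values()):
        # All core symptoms absent: the user may skip the rest of the section,
        # which is then recorded as absent rather than left missing.
        if input("Skip remainder of this section? [y/n]: ").lower() == "y":
            ratings.update({q: ABSENT for q in other_symptoms})
            return ratings
    ratings.update({q: ask(q) for q in other_symptoms})
    return ratings

def run_interview(sections):
    """sections: list of (screening question, core symptoms, other symptoms)."""
    all_ratings = {}
    for screen, core, other in sections:
        if ask(screen) == PRESENT:          # a positive screen activates the section
            all_ratings.update(run_section(core, other))
    return all_ratings
```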

After the symptoms have been rated, the program enters a decision phase, comparable to making a differential diagnosis. In this phase 'prompts' are presented to the user: usually history questions, questions requiring clinical judgement or questions about interference with functioning. The prompts depend on the combination of positively rated symptoms and are decisive for the final diagnosis. If, for example, the criteria for a depressive episode are met, the program presents prompts about prior episodes of depression, mania or mixed episodes. It then produces the diagnosis; in this example the diagnosis can be DSM–IV21 major depression (single episode or recurrent) or bipolar disorder, depending on the answers to the prompts. In this prompt phase, symptoms from all sections are considered, much as happens in the clinical diagnostic process. This prompt mechanism makes it possible to handle complex clinical cases with a wide variety of symptoms.
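
A deliberately simplified sketch of such a decision phase, using the depression example above, is given below; the symptom criterion and prompt wording are illustrative assumptions rather than the actual mini-SCAN algorithms.

```python
# Illustrative prompt phase: criteria and prompt texts are assumptions, not the
# mini-SCAN's real diagnostic rules.

def meets_depressive_episode(ratings):
    """Illustrative criterion: a core mood symptom plus enough symptoms overall."""
    core = ratings.get("depressed mood") == "1" or ratings.get("loss of interest") == "1"
    count = sum(1 for v in ratings.values() if v == "1")
    return core and count >= 5

def decide_mood_diagnosis(ratings, ask_prompt):
    """ask_prompt(text) -> bool; returns a diagnosis label or None."""
    if not meets_depressive_episode(ratings):
        return None
    if ask_prompt("Has there ever been a manic or mixed episode?"):
        return "Bipolar disorder"
    if ask_prompt("Has there been a previous depressive episode?"):
        return "Major depressive disorder, recurrent"
    return "Major depressive disorder, single episode"

# Example use with canned answers instead of a live interviewer:
ratings = {"depressed mood": "1", "loss of interest": "1", "fatigue": "1",
           "insomnia": "1", "worthlessness": "1"}
answers = iter([False, True])                  # no mania, one prior episode
print(decide_mood_diagnosis(ratings, lambda _q: next(answers)))
# -> Major depressive disorder, recurrent
```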

The algorithms produce DSM–IV diagnoses. The results are shown on screen in a report containing the personal data of the individual and the interviewer, the diagnosis, the prompts and the rated items of the administered sections. Observed behaviour is also included in the report. This report makes the diagnostic process more transparent.

The raw data can be exported to statistical programs for further analyses. Although the algorithms were developed from scratch, they followed the structure of those made for SCAN 2.1; both are based on the DSM–IV criteria. For SCAN 2.1, ICD–1022 algorithms are also available.

Design

All consenting participants were interviewed twice within a week, with a minimum interval of 2 days. The instruments were administered in counterbalanced order. Individuals were not made aware of the outcome by the interviewer. After completion of both interviews the diagnosis (if any) was communicated to the attending psychiatrist or resident, who could discuss the outcome with the participant.

Sample

Seventy-six participants were recruited from the Groningen University Medical Centre (UMCG). The University Centre for Psychiatry has in-patient units for emotional (anxiety and affective) disorders and for psychotic disorders, and several out-patient clinics (bipolar disorder, first psychosis, general out-patient clinic). Another 33 participants were recruited from a community mental health centre in the south of the country (GGZ midden-Brabant).

Participants were in episode when interviewed. Most of them had recently been admitted or enrolled for treatment. Respondents were asked to participate at the discretion of the nursing staff or doctors. It was pointed out that the results would be used for the benefit of their treatment and that non-participation was without consequence. The only prerequisite for participation was that the person had to be able to understand and answer the questions. Of the 109 people we approached, 106 (97%) consented to being interviewed for the validity study. The study was approved by the Ethical Committee of the UMCG.

The interview

The interviewers had no knowledge of the participant prior to the interview. In the introductory part of the interview the participant could talk about their symptoms and reasons for treatment. Subsequently the interviewer would ask the screening questions of either the SCAN or mini-SCAN and administer the corresponding symptom sections.

With both instruments the whole episode was covered, which could range from several weeks to months. An episode was defined as a period with clinically significant symptoms, i.e. symptoms with a certain persistence and severity, causing distress or interference with functioning. This is known as the present episode within the SCAN system.

Interviews were administered by well-trained and highly experienced interviewers: two clinical psychologists and six psychiatrists in training, each completing between 8 and 25 interviews.

Statistical analysis

All analyses were performed with SPSS 15.0 and AGREE 6 for Windows.23 The unweighted Cohen's kappa for the assignment of participants to diagnostic classes (i.e. no disorder, affective disorder, affective psychosis, non-affective psychosis, anxiety disorder) was the outcome measure for all analyses. We used Landis & Koch's24 division into classes of agreement: 0.00–0.20, slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; 0.81–1.00, almost perfect agreement.
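
For illustration, the unweighted Cohen's kappa and the Landis & Koch labelling can be computed as in the following sketch, using invented example data; the study itself used SPSS 15.0 and AGREE 6 rather than this code.

```python
# Unweighted Cohen's kappa for two sets of categorical assignments (sketch only).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / (n * n)
    return (observed - expected) / (1 - expected)

def landis_koch(kappa):
    """Agreement labels as cited in the text (values below 0 not distinguished)."""
    if kappa <= 0.20: return "slight"
    if kappa <= 0.40: return "fair"
    if kappa <= 0.60: return "moderate"
    if kappa <= 0.80: return "substantial"
    return "almost perfect"

# Invented example: diagnostic classes assigned by the two instruments.
scan      = ["affective", "anxiety", "none", "anxiety", "psychosis", "affective"]
mini_scan = ["affective", "anxiety", "none", "affective", "psychosis", "affective"]
k = cohens_kappa(scan, mini_scan)
print(f"kappa = {k:.2f} ({landis_koch(k)})")
```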

In order to provide a more detailed description of the performance of the mini-SCAN, we calculated its sensitivity, specificity and positive and negative predictive value, using the SCAN as gold standard. Sensitivity is the ability to identify a true case (a case according to the SCAN); specificity is the ability to identify a true non-case (a non-case according to the SCAN). Positive predictive value is the percentage of mini-SCAN cases that are true (SCAN) cases; negative predictive value is the percentage of mini-SCAN non-cases that are true (SCAN) non-cases. Efficiency expresses the overall proportion of true (SCAN) cases and true (SCAN) non-cases correctly classified in that class. All parameters are reported with 95% confidence intervals.
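
These per-class measures can be derived from a 2×2 cross-classification against the SCAN, as sketched below; the normal-approximation confidence intervals are an assumption for illustration, since the exact interval method is not specified here.

```python
# Per-class sensitivity, specificity, PPV, NPV and efficiency against a gold
# standard (sketch; CIs use a normal approximation as a stand-in).
from math import sqrt

def proportion_ci(k, n, z=1.96):
    """Point estimate and approximate 95% CI for a proportion k/n."""
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

def class_measures(scan, mini, diagnosis):
    """scan, mini: lists of diagnostic classes per participant."""
    tp = sum(s == diagnosis and m == diagnosis for s, m in zip(scan, mini))
    fp = sum(s != diagnosis and m == diagnosis for s, m in zip(scan, mini))
    fn = sum(s == diagnosis and m != diagnosis for s, m in zip(scan, mini))
    tn = sum(s != diagnosis and m != diagnosis for s, m in zip(scan, mini))
    return {
        "sensitivity": proportion_ci(tp, tp + fn),   # true SCAN cases detected
        "specificity": proportion_ci(tn, tn + fp),   # true SCAN non-cases identified
        "PPV":         proportion_ci(tp, tp + fp),   # mini-SCAN cases that are true cases
        "NPV":         proportion_ci(tn, tn + fn),   # mini-SCAN non-cases that are true non-cases
        "efficiency":  proportion_ci(tp + tn, tp + fp + fn + tn),  # overall correct classification
    }

# Invented example data:
scan = ["anxiety", "none", "affective", "anxiety"]
mini = ["anxiety", "anxiety", "affective", "anxiety"]
print(class_measures(scan, mini, "anxiety"))
```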

Results

Table 1 shows the prevalence of the five diagnostic classes assigned by the SCAN and the mini-SCAN. The prevalence rates were highly comparable and showed a wide distribution over the different classes. The assessment with the SCAN and the mini-SCAN resulted in the same diagnostic class for 91 out of 106 participants (86%). The Cohen's kappa for concurrent validity of the mini-SCAN was 0.802 (s.e. = 0.045), indicating substantial agreement.

Table 1 Prevalence of diagnostic classes according to the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) and mini-SCAN in the sample (n = 106)

Diagnostic class    SCAN, n (%)    mini-SCAN, n (%)
No disorder 15 (14) 15 (14)
Affective disorder 30 (28) 32 (30)
Affective psychosis 12 (11) 11 (10)
Anxiety disorder 34 (32) 33 (31)
Non-affective psychosis 15 (14) 15 (14)

Table 2 shows the sensitivity, specificity, predictive values and efficiency of the mini-SCAN. Sensitivity, specificity, positive predictive value, negative predictive value and efficiency were all very good for most diagnostic classes; only the sensitivity and positive predictive value for affective psychoses were lower.

Table 2 Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and efficiency for the mini-SCAN (mini-Schedules for Clinical Assessment in Neuropsychiatry) per diagnostic class, using the SCAN as gold standard

Diagnostic class    Sensitivity (95% CI)    Specificity (95% CI)    PPV (95% CI)    NPV (95% CI)    Efficiency (95% CI)
No disorder 0.80 (0.52–0.96) 0.97 (0.91–0.99) 0.80 (0.52–0.96) 0.97 (0.91–0.99) 0.94 (0.94–0.99)
Affective disorder 0.87 (0.70–0.96) 0.92 (0.84–0.97) 0.81 (0.64–0.93) 0.96 (0.87–0.99) 0.90 (0.83–0.95)
Affective psychosis 0.67 (0.35–0.90) 0.97 (0.91–0.99) 0.73 (0.39–0.94) 0.96 (0.90–0.99) 0.93 (0.87–0.97)
Anxiety disorder 0.94 (0.80–0.99) 0.99 (0.93–0.99) 0.97 (0.84–0.99) 0.97 (0.90–0.99) 0.97 (0.92–0.99)
Non-affective psychosis 0.80 (0.52–0.95) 0.97 (0.88–0.99) 0.80 (0.52–0.96) 0.97 (0.91–0.99) 0.94 (0.88–0.97)

The duration of the interview was timed by the interviewer for 68 of the participants. For the SCAN, the mean duration was 73 min (minimum 30, maximum 140). For the mini-SCAN, the mean duration was about a third shorter, namely 48 min (minimum 15, maximum 90). At the end of each interview the participants were asked to rate how pleasant or unpleasant the interview was. With respect to the SCAN, 77% rated the interview as pleasant or very pleasant; for the mini-SCAN this percentage was 79%.

Discussion

Psychometry of the mini-SCAN

This study investigated the psychometric properties of the mini-SCAN. These properties are sufficient to have confidence in this offshoot of the SCAN.

With the mini-SCAN it is possible to make a valid psychiatric diagnosis when the SCAN is taken as the criterion. Concurrent validity, reliability, sensitivity, positive predictive value and negative predictive value are all in the excellent range. This means that the mini-SCAN identifies most of the true (SCAN) cases correctly and that the risk of including false positives is relatively low. The only exception is psychotic depression, for which the agreement was somewhat lower. The interviewers discussed the possible sources of the lower agreement for psychotic depression. There was disagreement about symptoms such as an individual's conviction of being worthless: some interviewers felt this was no more than a severe form of a depressive symptom, whereas others deemed it a psychotic phenomenon. This is a general problem in clinical practice and is not limited to the SCAN or mini-SCAN.

Practical properties

The instrument was judged to be user-friendly by all users. Those who also knew the SCAN I-shell interview found the mini-SCAN software easier to use and less prone to omissions. The interview was acceptable to virtually all respondents as well; there were no mentions of the mini-SCAN being unpleasant or too long. Its duration is significantly shorter than that of the SCAN, without loss of validity. This is an important gain if the interview is to be applied regularly in clinical settings. However, it takes between 15 and 90 min, with a mean duration of about 50 min, and this could still be viewed as too long by some potential users. It is our conviction that any in-depth diagnostic interview covering Axis I diagnoses cannot be administered in a very short time in all cases. Day-to-day practice shows that the diagnostic process without a structured interview takes just as long and depends largely on the variety of pathology in a given individual and his or her ability to communicate about the symptoms. In fact, taking a symptom history without a structured interview may take even longer. The aim was not to make the shortest interview, but a valid and reliable one.

Mini-SCAN and MINI

A justified question remains: why another diagnostic instrument? The existing diagnostic instruments with comparable coverage of disorders are the CIDI, SCID, MINI-Plus and SCAN. These instruments vary in their scope, rater requirements, duration and output.5 The only other interview geared towards daily application is the MINI-Plus. Since it has the most in common with the mini-SCAN, we compare the two interviews here. The MINI-Plus is comparable with the mini-SCAN in coverage of disorders, although differences exist. The method, however, is different: the questions of the MINI-Plus are more checklist-like, and it lacks the definitions of symptoms and the cross-examination technique. These differences in principles and method are not trivial and can lead to different results.25

The duration of administration is comparable (MINI-Plus: 15–60 min; mini-SCAN: 15–90 min), although the mini-SCAN will on average take more time. This is partly because it covers more symptoms (e.g. psychotic symptoms) that are not necessarily required for classification but may have clinical significance, and partly because of the cross-examination. The MINI-Plus yields lifetime and current diagnoses, whereas the mini-SCAN yields only current diagnoses (although a representative episode can also be investigated). The mini-SCAN offers a full report of all data after administration.

Which to choose: SCAN or mini-SCAN?

The traditional SCAN user is now faced with a choice: SCAN or mini-SCAN. Both instruments rely on clinical cross-examination and clinical judgement and have a glossary of definitions. There are differences, however. First, the SCAN allows assessment of lifetime, representative-episode and present-state symptoms. Second, the SCAN allows four severity ratings: absent (0), subclinical level (1), symptom level moderate (2) and symptom level severe (3); the mini-SCAN has only two ratings: absent or subclinical level (0) and symptom level (1). Third, the SCAN has far more elaborate definitions of symptoms, more questions per symptom and a wider coverage of symptoms. Last, it has modules such as the clinical history schedule, making it possible to record earlier episodes and diagnoses; this may be particularly important in epidemiological studies.

For research purposes the SCAN is the first choice, particularly if earlier or lifetime pathology is assessed or if comparison with other studies using the SCAN is pursued. For clinical purposes the mini-SCAN might be the right choice, if an assessment of the present episode is required. For clinical studies where the present episode is the only focus of the interview, the mini-SCAN could be considered as well.

Limitations and future direction

The present study has some limitations. The most significant is the number of interviews, which was limited by workforce and time constraints. Therefore, as a second limitation, only a limited number of disorders could be investigated, in order to retain enough observations per diagnostic class. The reported psychometric properties apply to the investigated disorders. However, since other diagnoses were operationalised in the same way, there is no reason to assume that the psychometric properties of the sections covering these diagnoses should deviate from those of the investigated ones. This is an avenue for further research.

Another limitation was that the mini-SCAN interviewers, though not formally SCAN trained, were to some degree familiar with the SCAN system and principles. This raises two issues. First, the validity found in this study may have been positively biased by this familiarity with the SCAN, although the principles of interviewing (screening, cross-examination, probing) are identical. Second, it remains to be studied whether the 1-day training format used in this study is sufficient for SCAN-naive interviewers. More experience and research are needed to determine the optimum training format for users without familiarity with the SCAN.

The instrument was developed to allow clinicians to benefit from the merits of (semi-)structured interviews while retaining the SCAN system and principles. Several trainees on SCAN training courses have expressed the need for an instrument that preserves the principles of the SCAN and is practical at the same time. Such an instrument is now available.

Funding

The study was funded by the Medical Technology Assessment Bureau of the University Medical Centre Groningen with an unconditional grant.

Appendix
Available diagnoses in the mini-SCAN
Depressive disorders, including subtypes
Bipolar disorders, including subtypes
Dysthymic disorder
Abuse and dependence, any substance
Social phobia
Cognitive disorder (Mini-Mental State Examination)
Agoraphobia
Specific phobia
Obsessive–compulsive disorder
Generalised anxiety disorder
Post-traumatic stress disorder
Schizophrenia and related disorders
Anorexia nervosa
Bulimia nervosa
Attention-deficit hyperactivity disorder

Footnotes


Declaration of interest

None.

References

1 Üstün, TB, Ayuso-Mateos, JL, Chatterji, S, Mathers, C, Murray, CJL. Global burden of depressive disorders in the year 2000. Br J Psychiatry 2004; 184: 386–92.
2 Lecrubier, Y. The burden of depression and anxiety in general medicine. J Clin Psychiatry 2001; 62 (suppl 8): 4–9.
3 Prince, M, Patel, V, Saxena, S, Maj, M, Maselko, J, Phillips, MR, et al. No health without mental health. Lancet 2007; 370: 859–77.
4 Demyttenaere, K, Bruffaerts, R, Posada-Villa, J, Gasquet, I, Kovess, V, Lepine, JP, et al. Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys. JAMA 2004; 291: 2581–90.
5 Sheehan, DV, Lecrubier, Y, Sheehan, KH, Amorim, P, Janavs, J, Weiller, E, et al. The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM–IV and ICD–10. J Clin Psychiatry 1998; 59 (suppl 20): 22–33.
6 Wing, JK, Babor, T, Brugha, T, Burke, J, Cooper, JE, Giel, R, et al. SCAN. Schedules for Clinical Assessment in Neuropsychiatry. Arch Gen Psychiatry 1990; 47: 589–93.
7 Haro, JM, Arbabzadeh-Bouchez, S, Brugha, TS, de Girolamo, G, Guyer, ME, Jin, R, et al. Concordance of the Composite International Diagnostic Interview Version 3.0 (CIDI 3.0) with standardized clinical assessments in the WHO World Mental Health surveys. Int J Methods Psychiatr Res 2006; 15: 167–80.
8 Riskind, JH, Beck, AT, Berchick, RJ, Brown, G, Steer, RA. Reliability of DSM–III diagnoses for major depression and generalized anxiety disorder using the structured clinical interview for DSM–III. Arch Gen Psychiatry 1987; 44: 817–20.
9 Wing, JK, Cooper, JE, Sartorius, N. The Measurement and Classification of Psychiatric Symptoms. Cambridge University Press, 1974.
10 Rijnders, CA, van den Berg, JF, Hodiamont, PP, Nienhuis, FJ, Furer, JW, Mulder, J, et al. Psychometric properties of the Schedules for Clinical Assessment in Neuropsychiatry (SCAN-2.1). Soc Psychiatry Psychiatr Epidemiol 2000; 35: 348–52.
11 Brugha, TS, Nienhuis, F, Bagchi, D, Smith, J, Meltzer, H. The survey form of SCAN: the feasibility of using experienced lay survey interviewers to administer a semi-structured systematic clinical assessment of psychotic and non-psychotic disorders. Psychol Med 1999; 29: 703–11.
12 Andrews, G, Peters, L, Guzman, AM, Bird, K. A comparison of two structured diagnostic interviews: CIDI and SCAN. Aust N Z J Psychiatry 1995; 29: 124–32.
13 Schutzwohl, M, Kallert, T, Jurjanz, L. Using the Schedules for Clinical Assessment in Neuropsychiatry (SCAN 2.1) as a diagnostic interview providing dimensional measures: cross-national findings on the psychometric properties of psychopathology scales. Eur Psychiatry 2007; 22: 229–38.
14 Paholpak, S, Arunpongpaisal, S, Krisanaprakornkit, T, Khiewyoo, J. Validity and reliability study of the Thai version of WHO Schedules for Clinical Assessment in Neuropsychiatry: sections on psychotic disorders. J Med Assoc Thai 2008; 91: 408–16.
15 Krisanaprakornkit, T, Rangseekajee, P, Paholpak, S, Khiewyoo, J. The validity and reliability of the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN Thai Version): anxiety disorders section. J Med Assoc Thai 2007; 90: 341–7.
16 Krisanaprakornkit, T, Paholpak, S, Piyavhatkul, N. The validity and reliability of the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN Thai Version): mood disorders section. J Med Assoc Thai 2006; 89: 205–11.
17 Brugha, T, Jenkins, R, Taub, N, Bebbington, PE. A general population comparison of the Composite International Diagnostic Interview (CIDI) and the Schedules for Clinical Assessment in Neuropsychiatry (SCAN). Psychol Med 2001; 31: 1001–13.
18 Brugha, TS, Bebbington, PE, Jenkins, R, Meltzer, H, Taub, NA, Janas, M, et al. Cross validation of a general population survey diagnostic interview: a comparison of CIS-R with SCAN diagnostic categories. Psychol Med 1999; 29: 1029–42.
19 Lecrubier, Y, Sheehan, DV, Weiller, E, Amorim, P, Bonora, I, Sheehan, K, et al. The Mini International Neuropsychiatric Interview (MINI). A short diagnostic structured interview: reliability and validity according to the CIDI. Eur Psychiatry 1997; 12: 224–31.
20 Sheehan, DV, Lecrubier, Y, Sheehan, KH, Janavs, J, Weiller, E, Keskiner, A, et al. The validity of the Mini International Neuropsychiatric Interview (MINI) according to the SCID-P and its reliability. Eur Psychiatry 1997; 12: 232–41.
21 American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (4th edn) (DSM–IV). APA, 1994.
22 World Health Organization. The ICD–10 Classification of Mental and Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines. WHO, 1992.
23 Popping, R. Measures of Agreement for Nominal Data. University of Groningen, 1983.
24 Landis, JR, Koch, GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159–74.
25 Brugha, TS, Bebbington, PE, Jenkins, R. A difference that matters: comparisons of structured and semi-structured psychiatric diagnostic interviews in the general population. Psychol Med 1999; 29: 1013–20.
