
Precision psychiatry: predicting predictability

Published online by Cambridge University Press:  18 March 2024

Edwin van Dellen*
Affiliation:
Department of Psychiatry and University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, the Netherlands Department of Neurology, UZ Brussel and Vrije Universiteit Brussel, Brussels, Belgium
Corresponding author: Edwin van Dellen; Email: e.vandellen@umcutrecht.nl

Abstract

Precision psychiatry is an emerging field that aims to provide individualized approaches to mental health care. An important strategy to achieve this precision is to reduce uncertainty about prognosis and treatment response. Multivariate analysis and machine learning are used to create outcome prediction models based on clinical data such as demographics, symptom assessments, genetic information, and brain imaging. While much emphasis has been placed on technical innovation, the complex and varied nature of mental health presents significant challenges to the successful implementation of these models. From this perspective, I review ten challenges in the field of precision psychiatry, including the need for studies on real-world populations and realistic clinical outcome definitions, and consideration of treatment-related factors such as placebo effects and non-adherence to prescriptions. Fairness, prospective validation in comparison to current practice and implementation studies of prediction models are other key issues that are currently understudied. A shift is proposed from retrospective studies based on linear and static concepts of disease towards prospective research that considers the importance of contextual factors and the dynamic and complex nature of mental health.

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Introduction

Predicting treatment outcomes and prognosis for psychiatric patients remains a daunting task. Precision psychiatry is a branch of research focused on this problem (Fernandes et al., Reference Fernandes, Williams, Steiner, Leboyer, Carvalho and Berk2017; Vieta, Reference Vieta2015). This field aims to improve the lives of people suffering from mental illness through ‘the development of tools capable of providing better and more accurate diagnosis, of ascertaining prognosis, guiding treatment and predicting response to treatment, and aiding the development of new and better pharmacological and non-pharmacological treatments’ (Fernandes et al., Reference Fernandes, Williams, Steiner, Leboyer, Carvalho and Berk2017). It has been suggested that tailoring treatments in psychiatry requires increasing the predictability of outcomes for individual patients (Bzdok, Varoquaux, & Steyerberg, Reference Bzdok, Varoquaux and Steyerberg2021). The approach is inspired by precision medicine research based on data-driven analyses; machine learning algorithms are trained on multiple variables to make diagnostic classifications or predictions. The question then arises: when can we reap the benefits of prediction algorithms in clinical practice (Chekroud et al., Reference Chekroud, Bondar, Delgadillo, Doherty, Wasil, Fokkema and Choi2021; Stein et al., Reference Stein, Shoptaw, Vigo, Lund, Cuijpers, Bantjes and Maj2022)?

Methodologically, the precision approach builds on the foundation of statistical prediction models. In the 1950s, Paul Meehl questioned clinicians' ability to make predictions based on their clinical assessments. He posited that statistical predictions outperform clinical judgments when it comes to diagnosis and treatment indication (Meehl, Reference Meehl1956). However, the integration of clinical assessments of an individual with group-level statistical information remained an unsolved problem. The first attempts to solve this issue with artificial intelligence date back to the 1970s, when so-called expert systems were introduced. Expert systems were computer programs tasked with mimicking human decision-making, including clinical decisions (Kassirer & Gorry, Reference Kassirer and Gorry1978). Although promising at the time, this work failed to transform clinical practice. The interest in biological psychiatry later shifted toward biomarker studies and biological subtyping (Kapur, Phillips, & Insel, Reference Kapur, Phillips and Insel2012). With advances in machine learning methodology in the last decade, and the success of precision medicine approaches in other fields such as oncology, precision psychiatry gained interest. It has an advantage over the expert systems of the 1970s in that the technology is more sophisticated, while big datasets containing a range of information sources are now available, as described by Topol: ‘The ability to digitize the medical essence of a human being is predicated on the integration of multi-scale data, akin to a Google map, which consists of superimposed layers of data such as street, traffic and satellite views. For a human being, these layers include demographics and the social graph, biosensors to capture the individual's physiome, imaging to depict the anatomy (often along with physiologic data), and the biology from the various omics [..]. 
In addition to all these layers, there is one's important environmental exposure data’ (Topol, Reference Topol2014). Data-driven approaches may shed new light on pathophysiological pathways (Bzdok & Meyer-Lindenberg, Reference Bzdok and Meyer-Lindenberg2018; Grzenda et al., Reference Grzenda, Kraguljac, McDonald, Nemeroff, Torous, Alpert and Widge2021).

Recent reviews emphasize that the field is in an early stage, and suggest that attempts to move beyond trial-and-error treatments are leading to emerging new therapies – for example using brain-circuit-based approaches (Coutts, Koutsouleris, & McGuire, Reference Coutts, Koutsouleris and McGuire2023; Scangos, State, Miller, Baker, & Williams, Reference Scangos, State, Miller, Baker and Williams2023). Several recent studies report promising results (Chekroud et al., Reference Chekroud, Bondar, Delgadillo, Doherty, Wasil, Fokkema and Choi2021; Dwyer, Falkai, & Koutsouleris, Reference Dwyer, Falkai and Koutsouleris2018; Fernandes et al., Reference Fernandes, Williams, Steiner, Leboyer, Carvalho and Berk2017; Williams, Reference Williams2016), for example by predicting antipsychotic treatment response and side-effects with high accuracy (Coutts et al., Reference Coutts, Koutsouleris and McGuire2023; Dominicus et al., Reference Dominicus, Oranje, Otte, Ambrosen, Düring, Scheepers and van Dellen2023; Koutsouleris et al., Reference Koutsouleris, Kahn, Chekroud, Leucht, Falkai, Wobrock and Hasan2016). Unfortunately, validation and implementation largely remain unconsidered, and a closer look at the data currently used for such studies from a clinical point of view suggests that the desired clinical breakthrough is not yet within reach (Fountoulakis, Reference Fountoulakis2021). From this perspective, ten clinical and statistical issues in the precision psychiatry literature are discussed (Table 1). First, I will argue that the lack of a valid gold standard in psychiatric diagnoses makes prediction approaches the most promising way forward (challenge 1). I will then consider limitations of commonly used datasets (challenges 2–6) and outcome definitions (challenge 7) for the development of such models. 
I discuss why the focus of the field needs to shift from technical model development to real-world applicability (challenges 8 and 9), and conclude that complex dynamical systems approaches are the most promising way forward (challenge 10). References to relevant literature on these challenges are provided where available, while newly identified issues (in particular the challenges of treatment (non-)response) are discussed in more detail. Examples used in this paper mainly focus on schizophrenia spectrum disorders because this is the most studied population in precision psychiatry, but the topics discussed here generalize to other disorders. Some issues for model development based on retrospective datasets from clinical trials are identified, and promising ways forward are highlighted to make the translation to clinical implementation.

Table 1. Precision psychiatry: challenges and possible solutions

Classification or prediction: precisely what?

Prediction of future outcomes is the most clinically relevant application of the precision approach. Data-driven classification of patients compared to a ‘gold standard’ such as the Diagnostic and Statistical Manual of Mental Disorders (DSM) (American Psychiatric Association, 2013) is of little value because the specific mix of an individual's symptoms and their evolution over time often fit poorly into one classification (Plana-Ripoll et al., Reference Plana-Ripoll, Pedersen, Holtz, Benros, Dalsgaard, De Jonge and McGrath2019; Romero et al., Reference Romero, Werme, Jansen, Gelernter, Stein, Levey and van der Sluis2022; Van Os et al., Reference Van Os, Gilvarry, Bale, Van Horn, Tattan, White and Murray2000). Heterogeneity in symptoms also exists between patients with the same classification, the classification itself is a poor indicator of treatment susceptibility, and while some possible pathophysiological associations have been identified, these do not form the basis of the diagnosis as they invariably have low diagnostic likelihood ratios (van Os, Guloksuz, Vijn, Hafkenscheid, & Delespaul, Reference van Os, Guloksuz, Vijn, Hafkenscheid and Delespaul2019).

Precision studies therefore focus on data-driven subtyping of patients based on existing datasets, subsequently comparing the prognosis or treatment susceptibility between categorical subtypes. Alternatively, retrospective studies may attempt to predict outcomes using data from completed treatment trials; prediction models based on randomized controlled trials (RCTs) often use data from the active treatment group to identify patient characteristics that may predict response. However, this approach also has several potential pitfalls that limit translation to clinical practice, as will be discussed in the following (Box 1).

Box 1. Nomenclature in precision psychiatry

Precision psychiatry – Precision in the context of precision medicine refers to similar outcomes with repeated measurements (Ashley, Reference Ashley2016). Interventions may be targeted with more precision when they are based on better characterization of similarities with other patients.

Personalized psychiatry – The term precision psychiatry is sometimes used as an interchangeable term for personalized psychiatry, but they have slightly different meanings. Personalized psychiatry aims to tailor interventions to specific individuals. Precision psychiatry may thus be used to develop models that help to inform patients more accurately about expected outcomes of interventions, and this information can aid personalized clinical decisions (National Research Council, 2011).

Biomarker – A biomarker is a measurable indicator of a biological state or condition. In the context of precision psychiatry, a biomarker could be used as an indicator of treatment response or prognosis (First et al., Reference First, Botteron, Castellanos, Dickstein and Hospital2012).

Machine learning – a form of artificial intelligence where data and algorithms are used to imitate human learning, thereby improving task performance.

Predictor – an independent variable in a statistical model that contains information about the occurrence of an event.

Accuracy - Accuracy refers to the extent to which an outcome reflects the true state of the targeted construct or condition under investigation. An example is the fraction of correctly predicted outcomes of a prediction model. Accuracy may thus be used to evaluate the merit of precision approaches as compared to a randomized, one-size-fits-all approach.

Patient selection

Patients with psychiatric disorders described in the scientific literature on treatment response and/or prognosis were mostly required to give informed consent for study participation, and for good reasons. However, patients with certain characteristics, for example, those who are severely paranoid at the time of assessment, are systematically undersampled as a consequence (Taipale et al., Reference Taipale, Schneider-Thoma, Pinzón-Espinosa, Radua, Efthimiou, Vinkers and Luykx2022). Similarly, patients are often excluded if treated under judicial coercive measures (Luciano et al., Reference Luciano, Sampogna, Del Vecchio, Pingani, Palumbo, De Rosa and Fiorillo2014). Studies based on these data will thus consider, on average, moderately ill patients (Taipale et al., Reference Taipale, Schneider-Thoma, Pinzón-Espinosa, Radua, Efthimiou, Vinkers and Luykx2022). This is a well-known limitation of clinical trials for the generalizability of findings to other populations and settings, such as patients with severe psychosis. When clinical information is used as an input variable for a prediction model of, for example, treatment outcome, this selection has an additional negative impact: the (distribution of) input information deviates from the data in real-world clinical settings, further reducing the generalizability of findings (Brand, de Boer, Dazzan, & Sommer, Reference Brand, de Boer, Dazzan and Sommer2022). For psychosis treatment, male sex, unmet psychosocial needs, and functional deficits are examples of predictors of worse clinical outcome that also increase the likelihood of coercive measures being applied (Koutsouleris et al., Reference Koutsouleris, Kahn, Chekroud, Leucht, Falkai, Wobrock and Hasan2016). As coercive measures are often an exclusion criterion of clinical trials, this will negatively impact prediction model performance in clinical practice.

Future studies should therefore train models based on real-world data with limited exclusion criteria where possible. Data harmonization initiatives that are currently being developed are crucial to ensure that naturalistic data are of sufficient quality to make generalizable inferences (‘Research Harmonisation Award Schizophrenia International Research Society’, n.d.).

Fairness

Diversity and inclusion are essential to consider in precision medicine approaches. This is particularly relevant in the field of psychiatry, as societal exclusion and discrimination are directly linked to the development of psychiatric disorders. Representation of groups sensitive to exclusion, for example based on gender, ethnicity, or sexual orientation, is therefore particularly relevant. Non-native speakers may have been excluded from studies because standardized interviews are not available in their language, or data have been obtained in psychiatric hospitals that are less accessible to specific groups due to insurance discrimination (Mamun et al., Reference Mamun, Nsiah, Srinivasan, Chaturvedula, Basha, Cross and Vishwanatha2019). Geographic underrepresentation of included samples is another factor that has been shown to limit the generalizability of precision prediction models (Meehan et al., Reference Meehan, Lewis, Fazel, Fusar-Poli, Steyerberg, Stahl and Danese2022).

In the machine learning field, inclusion is closely related to the concept of fairness, which refers to the idea that machine learning models should not be biased or discriminatory (Mitchell, Potash, Barocas, D'Amour, & Lum, Reference Mitchell, Potash, Barocas, D'Amour and Lum2021). To address fairness, one approach is to ensure that the algorithms themselves are not biased or based on discriminatory variables (algorithmic fairness). Algorithms may be systematically biased toward assigning less favorable outcomes to specific groups (group fairness), such as patients with lower educational attainment, both in prediction models and clinical judgment (Sahin et al., Reference Sahin, Kambeitz-Ilankovic, Wood, Dwyer, Upthegrove, Salokangas and Kambeitz2024). Another approach is to consider the impact of the model on different groups of people, such as the excluded groups described above. In addition to group fairness, individual fairness involves treating individual instances of data (i.e. similar individuals) equally. By ensuring that precision models are fair and unbiased, we can use them ethically and responsibly. There may be unresolved or unidentified issues related to diversity and inclusivity in precision psychiatry research. To address these issues and promote an inclusive approach, it is recommendable to include a diversity and fairness statement in precision psychiatry papers for transparency, as has been suggested for citations (Zurn, Bassett, & Rust, Reference Zurn, Bassett and Rust2020).

Treatment dose and duration

A substantial number of medication trials treated patients with a dose or duration that is insufficient for the evaluation of treatment efficacy (Howes et al., Reference Howes, McCutcheon, Agid, De Bartolomeis, Van Beveren, Birnbaum and Correll2017). Many clinical trials were designed to demonstrate the efficacy of an agent rather than to determine the optimal dose and duration of treatment. Importantly, the optimal dose and minimal treatment duration to reach an effect may vary across subjects, while the optimal dose for treatment effects may often not be reached due to intolerable side effects (Kahn et al., Reference Kahn, Winter van Rossum, Leucht, McGuire, Lewis, Leboyer and Sommer2018; Leucht et al., Reference Leucht, Cipriani, Spineli, Mavridis, Örey, Richter and Davis2013; Zhu et al., Reference Zhu, Krause, Huhn, Rothe, Schneider-Thoma, Chaimani and Leucht2017). Treatment tolerability is a very important but distinct issue from treatment effectiveness. Patients can therefore be labeled non-responders to a treatment that is in fact potentially effective because the minimally effective dose is never reached due to intolerability. Finally, many trials have a relatively short follow-up. This may lead to underestimation of the effectiveness (and overestimation of the tolerability) of the treatment in patients for whom a longer follow-up was needed. It may also lead to overestimation of the effectiveness in others because the treatment effects were only evaluated under strict conditions (e.g. during hospital admission), which may not represent the real-world functioning of the patient (Fig. 2). Note that while these issues are addressed here for medication trials, similar issues can occur in studies of other interventions such as psychotherapy or brain stimulation. 
Minimally effective dose and duration should therefore be defined in outcome prediction studies but are currently rarely reported, and personalized estimates of dose and duration appropriateness should be obtained in prospective studies where possible.

Figure 1. Expected model performance for a ‘gold standard’ model tested on clinical data of patients with schizophrenia spectrum disorders.

Clinical distribution (left panel) based on (Lacro et al., Reference Lacro, Dunn, Dolder, Leckband and Jeste2002; Leucht et al., Reference Leucht, Leucht, Huhn, Chaimani, Mavridis, Helfer and Davis2017; Marsman et al., Reference Marsman, Pries, Ten Have, De Graaf, Van Dorsselaer, Bak and Van Os2020). In a clinical dataset, for example, obtained in a randomized controlled trial of an intervention such as antipsychotic medication, patients are classified as responders or non-responders based on a clinical evaluation at follow-up. Baseline information may be used to predict such outcomes retrospectively and tested against this clinical classification. This is visualized for a theoretical ‘perfect predictor’ (right panel), which will have low accuracy in practice. Patients may have achieved remission due to factors unrelated to the active treatment (e.g. placebo effects), and meta-analyses suggest this is the case for 30/51 responders. Similarly, non-response may be the result of non-treatment-related factors, such as treatment non-adherence or social factors (~25/49 non-responders). As a result, prediction models based on such study designs will have false positive assignments to a response group and false negative assignments to a non-response group. Models based on this approach are therefore unlikely to reach the accuracy needed for implementation in clinical practice. Abbreviations: TP, true positive; TN, true negative; FP, false positive; FN, false negative.

Treatment response

A major limitation of retrospective prediction studies on clinical trial data is the lack of consideration of the placebo effect, the Hawthorne effect (the phenomenon where people modify their behavior and may experience symptom reduction due to the fact that they are being observed or studied), and the natural course of the disorder (Howick et al., Reference Howick, Friedemann, Tsakok, Watson, Tsakok, Thomas and Heneghan2013). Psychotropic medication or psychotherapeutic effects are likely at least partially based on separate (biological) mechanisms (Chopra et al., Reference Chopra, Francey, O'Donoghue, Sabaroedin, Arnatkeviciute, Cropley and Fornito2021). In psychiatry, placebo effects are large relative to the additional effects of active treatment (Leucht et al., Reference Leucht, Leucht, Huhn, Chaimani, Mavridis, Helfer and Davis2017; van Os et al., Reference van Os, Guloksuz, Vijn, Hafkenscheid and Delespaul2019). For precision psychiatry studies aiming to predict treatment response, especially when based on biological data, this becomes a major problem.

A thought experiment of a study with a theoretical ‘perfect predictor’ shows the implications of placebo-induced bias. A perfect predictor will only label responders due to active treatment effects with a deviant prediction score, while all other patients will be labeled non-responder. If this predictor is truly specific to active treatment effects, this means that it will categorize ‘placebo-responders’ as non-responders: in these patients, there is no relationship between active treatment and reduction of symptoms.

According to an American Psychiatric Association (APA) consensus statement for (neuroimaging) markers, a biomarker must be at least 80% sensitive, 80% specific, and 80% accurate in order to be considered reliable (First, Botteron, Castellanos, Dickstein, & Hospital, Reference First, Botteron, Castellanos, Dickstein and Hospital2012). For a perfectly reliable predictor to meet these requirements – be it a biomarker or a predictor of any other nature – a treatment would need to be at least four times (80%/20%) more effective than placebo in order to account for placebo response in the ‘gold standard’ data.

This level of effectiveness is far from reality for psychiatric treatments. For example, 51% of patients suffering from psychosis are estimated to show minimal response to antipsychotic treatment, in comparison to 30% with placebo treatment (Leucht et al., Reference Leucht, Leucht, Huhn, Chaimani, Mavridis, Helfer and Davis2017). Thus, for every 51 patients classified as a responder, 30 may have recovered due to effects unrelated to the pharmacological antipsychotic treatment response (and would be labeled false negatives by our perfect predictor). As a result, the sensitivity of the predictor will drop to 41% (21 true positives out of 51 responders) in the trial, and its accuracy will be 70% (21 true positives + 49 true negatives), failing the APA requirements (Fig. 1).
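Per 100 trial participants, the arithmetic of this thought experiment can be sketched in a few lines (an illustrative sketch only; the function name and the round sample of 100 are my own, while the 51%/30% response rates come from the meta-analysis cited above):

```python
# Thought experiment: a hypothetical "perfect predictor" that flags only
# true pharmacological responders, evaluated against clinical response
# labels that also contain placebo responders.

def perfect_predictor_metrics(n, response_rate, placebo_rate):
    """Confusion-matrix metrics per n trial participants."""
    responders = round(n * response_rate)         # clinically labeled responders
    non_responders = n - responders
    placebo_responders = round(n * placebo_rate)  # improved for non-drug reasons
    tp = responders - placebo_responders          # true drug responders, correctly flagged
    fn = placebo_responders                       # predictor calls them non-responders
    tn = non_responders                           # true non-responders, correctly flagged
    sensitivity = tp / responders
    accuracy = (tp + tn) / n
    return sensitivity, accuracy

# Minimal response to antipsychotics: 51% active vs. 30% placebo
sens, acc = perfect_predictor_metrics(100, 0.51, 0.30)
print(f"sensitivity={sens:.0%}, accuracy={acc:.0%}")  # sensitivity=41%, accuracy=70%
```

Applying the same function to the stricter 50% symptom-reduction threshold, `perfect_predictor_metrics(100, 0.23, 0.14)`, yields a sensitivity of about 39%.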

Figure 2. Distribution of non-response and remission classification as a function of treatment dose and duration.

Patients treated with medication (or other interventions such as psychotherapy) in treatment response prediction studies are often classified as responder/remitter or non-responder. Treatment dosing and duration, however, vary across clinical trials, and the chosen regimen may lead to inaccurate classifications due to underdosing or too-short treatment durations. In addition, patients may withdraw from treatment due to intolerable side effects before reaching an optimal dose for treatment effects. These factors limit the validity of clinical data to be used as a ‘gold standard’ for treatment response prediction.

Setting a more stringent threshold for treatment response (which could be done because this threshold is arbitrary, as will be discussed later) cannot help to overcome this problem. In antipsychotic treatment trials, the response-ratio between active treatment (23%) and placebo treatment (14%) for 50% symptom reduction was similar to that for minimal response (defined as 20% symptom reduction) (Leucht et al., Reference Leucht, Leucht, Huhn, Chaimani, Mavridis, Helfer and Davis2017). With this more stringent threshold for response, sensitivity will even drop to 39%.

To summarize: in psychiatric treatment conditions where placebo effects and natural course of the disorder cannot be disentangled at the individual level, any theoretically perfect predictor will fail the reliability test in clinical trials. Studies reporting predictors of treatment response with high-performance levels without accounting for these issues should caution readers that the reliability of the model may be overestimated.

Treatment non-response

It may be argued that the effects of placebo and natural fluctuations in mental health can be circumvented by making non-response instead of response the target of our outcome predictor. However, several factors may cause false negatives (i.e. treatment is labeled ineffective for a person, even though it could have been beneficial) in the group of non-responders. For example, in patients with schizophrenia spectrum disorders, non-adherence to treatment is estimated at 50% (adherence here defined as medication taken as prescribed at least 75% of the time) (Lacro, Dunn, Dolder, Leckband, & Jeste, Reference Lacro, Dunn, Dolder, Leckband and Jeste2002). In a study of our perfect predictor, these participants may be classified as responders while they are clinically classified as non-responders, and will therefore be considered ‘false positives’. Even when placebo effects are not considered, the accuracy in such a study would be around 75% (24 true negatives + 51 true positives), again failing the APA criteria. The Treatment Response and Resistance in Psychosis (TRRIP) Working Group made recommendations for adherence monitoring, but excluding non-adhering patients from trials will likely induce selection bias and, in the best-case scenario, will lead to 72% adherence (Howes et al., Reference Howes, McCutcheon, Agid, De Bartolomeis, Van Beveren, Birnbaum and Correll2017).
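The non-response arithmetic can be sketched the same way (again an illustrative sketch; the 25/49 split follows the figures cited above, and the sample of 100 is my own choice):

```python
# Non-adherence side of the thought experiment: about half of patients do
# not take medication as prescribed, so roughly 25 of the 49 clinical
# non-responders might have responded had the drug been taken. A predictor
# specific to drug effects may flag them as responders, which the trial
# labels count as false positives.

n = 100
responders = 51                     # clinically labeled responders (placebo effects ignored here)
non_responders = n - responders     # 49
non_adherent = 25                   # potentially drug-sensitive but mislabeled
tp = responders                     # predictor agrees with the clinical response labels
fp = non_adherent                   # flagged responder, labeled non-responder
tn = non_responders - non_adherent  # 24 true non-responders
accuracy = (tp + tn) / n
print(f"accuracy={accuracy:.0%}")   # accuracy=75%
```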

Social circumstances and external factors such as ongoing exposure to cannabis or (traumatic) stressors during treatment may further contribute to treatment ineffectiveness (Marsman et al., Reference Marsman, Pries, Ten Have, De Graaf, Van Dorsselaer, Bak and Van Os2020; Patel et al., Reference Patel, Wilson, Jackson, Ball, Shetty, Broadbent and Bhattacharyya2016). In clinical trials, these factors may be considered random noise in comparisons between active and placebo interventions, but this assumption is not necessarily helpful for the validation of outcome prediction models.

Possible ways forward are the additional inclusion of placebo-treatment data in prediction studies where ethically defensible and feasible, or to perform open-label trials with blinded discontinuation. This would make it possible to predict the proportional improvement due to ‘true’ treatment effects (Hafliðadóttir et al., Reference Hafliðadóttir, Juhl, Nielsen, Henriksen, Harris, Bliddal and Christensen2021). Similar approaches could be used to incorporate estimates of the natural course of the disorder or non-adherence, in order to improve the real-world performance of the model. Another promising approach in patients with relatively stable states of disorder and a focus on short-term treatment effects is the incorporation of information from multiple N = 1 trials, and subsequent meta-analysis thereof, where the impact of treatment is randomized within an individual (Hendrickson, Thomas, Schork, & Raskind, Reference Hendrickson, Thomas, Schork and Raskind2020).

Outcome definitions

Psychiatric disorders such as psychosis form a spectrum or continuum, ranging from chronically disabling illness to brief, transient, and non-clinical experiences (Guloksuz & Van Os, Reference Guloksuz and Van Os2018). The spectrum is expressed at multiple levels, including symptom severity, genetic liability, neuroanatomical correlates, and functional outcomes after a psychotic episode (Guloksuz & Van Os, Reference Guloksuz and Van Os2018; Ripke et al., Reference Ripke, Neale, Corvin, Walters, Farh, Holmans and O'Donovan2014; Van Dellen et al., Reference Van Dellen, Bohlken, Draaisma, Tewarie, Van Lutterveld, Mandl and Sommer2016; Van Os, Linscott, Myin-Germeys, Delespaul, & Krabbendam, Reference Van Os, Linscott, Myin-Germeys, Delespaul and Krabbendam2009). Clinical translation of these insights remains an unsolved problem. Guidelines for clinical decisions in patients with psychosis are still largely based on research that uses the categorical concept of schizophrenia (van Os et al., Reference van Os, Guloksuz, Vijn, Hafkenscheid and Delespaul2019). The state-of-the-art consensus criteria for remission after treatment in psychosis research are the Andreasen remission criteria, which are based on a subset of Positive and Negative Symptom Scale (PANSS) items (Andreasen et al., Reference Andreasen, Carpenter, Kane, Lasser, Marder and Weinberger2005). Patients diagnosed with psychosis may, however, already fulfill the remission criteria at baseline (Kahn et al., Reference Kahn, Winter van Rossum, Leucht, McGuire, Lewis, Leboyer and Sommer2018). Alternatively, treatment response may be defined as an (arbitrarily defined) cut-off point in the reduction of symptom severity (e.g. 20% reduction on the PANSS) (Howes et al., Reference Howes, McCutcheon, Agid, De Bartolomeis, Van Beveren, Birnbaum and Correll2017; Leucht et al., Reference Leucht, Leucht, Huhn, Chaimani, Mavridis, Helfer and Davis2017). 
Recent trial data show that this will roughly result in a ‘median split’ dichotomization of the sample into treatment responders and non-responders (Kahn et al., Reference Kahn, Winter van Rossum, Leucht, McGuire, Lewis, Leboyer and Sommer2018). This approach may help to gain statistical power and contrast but is unlikely to represent a (biologically or epidemiologically) plausible contrast between patients, as symptom reduction follows a Gaussian distribution (Fig. 3) (Fried, Flake, & Robinaugh, Reference Fried, Flake and Robinaugh2022; MacCallum, Zhang, Preacher, & Rucker, Reference MacCallum, Zhang, Preacher and Rucker2002). Prediction models of treatment response based on this approach are therefore unlikely to lead to meaningful insights that can be directly implemented in clinical practice. Continuous treatment outcome measures are more realistic, and estimating change in symptom severity may be a way forward. Furthermore, absolute rather than relative reductions in symptoms may be used as outcome measures, because treatment may be more effective in patients with more severe symptoms (Furukawa et al., Reference Furukawa, Levine, Tanaka, Goldberg, Samara, Davis and Leucht2015). At another level, outcomes are often defined based on symptom severity scores. Other outcomes – such as social and existential outcomes – are more relevant for patients, and therefore should be prioritized when an algorithm is used to indicate whether a treatment would be suitable for the individual (Maj et al., Reference Maj, van Os, De Hert, Gaebel, Galderisi, Green and Ventura2021). A possible mismatch between modeled and desired outcome measures should therefore be considered.
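The information loss caused by a median split (MacCallum et al., Reference MacCallum, Zhang, Preacher and Rucker2002) can be illustrated with a small simulation (a sketch under arbitrary assumptions; the effect size, sample size, and seed are illustrative choices, not values from any cited trial):

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
x = [random.gauss(0, 1) for _ in range(5000)]    # baseline predictor
y = [0.5 * xi + random.gauss(0, 1) for xi in x]  # continuous symptom reduction
cut = statistics.median(y)
y_bin = [1 if yi > cut else 0 for yi in y]       # responder / non-responder median split

# The dichotomized correlation is attenuated (roughly 0.8x for a median
# split of a Gaussian outcome): part of the predictive signal is discarded.
print(round(pearson(x, y), 2), round(pearson(x, y_bin), 2))
```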

Figure 3. Visualization of the effect of an arbitrary symptom-reduction cut-off on the distribution of responders and non-responders in clinical data.

Treatment outcome studies often define treatment response as a relative symptom reduction after treatment that exceeds an arbitrary cut-off point (e.g. a 20% or 50% reduction compared to the individual baseline symptom severity score). The implicit assumption of this approach is that patients can be dichotomized into responders and non-responders. Clinical data from treatment studies, however, often show a Gaussian distribution in both absolute and relative symptom reduction. As a result, the arbitrary cut-off limits the (pathophysiological) plausibility of such prediction models (Fried et al., Reference Fried, Flake and Robinaugh2022). The use of continuous outcomes would therefore be preferable.
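The information loss caused by dichotomizing a Gaussian outcome can be illustrated with a minimal simulation. The data, effect sizes, and cut-offs below are invented for illustration only and are not taken from any trial:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical illustration: a latent improvement score drives both a baseline
# predictor and the observed relative symptom reduction (roughly Gaussian, as
# commonly seen in trial data).
latent = rng.normal(size=n)
predictor = latent + rng.normal(scale=1.0, size=n)
reduction = 30 + 20 * latent + rng.normal(scale=10, size=n)  # % symptom reduction

# The continuous outcome retains the full association with the predictor...
r_cont = np.corrcoef(predictor, reduction)[0, 1]

# ...while an arbitrary cut-off (e.g. 20% or 50%) discards part of it.
for cut in (20, 50):
    responder = (reduction >= cut).astype(float)
    r_dich = np.corrcoef(predictor, responder)[0, 1]
    print(f"cut-off {cut}%: responders {responder.mean():.0%}, "
          f"correlation drops from {r_cont:.2f} to {r_dich:.2f}")
```

Note also that shifting the cut-off from 20% to 50% relabels a large share of the sample, even though nothing about the patients has changed, which is exactly the arbitrariness the paragraph above describes.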

Validation and implementation

External validation of prediction models in independent, naturalistic cohorts across multiple settings is required to establish the generalizability of findings. In practice, validation studies rarely use the same methods as the original work they aim to replicate (if attempts to do so are made at all). Moreover, prediction algorithms need to be tested prospectively (and in multiple n = 1 studies where possible) before they can be clinically implemented. The current literature not only lacks such rigorous testing but also lacks comparisons of model performance to existing standards of care (Salazar De Pablo et al., Reference Salazar De Pablo, Studerus, Vaquerizo-Serrano, Irving, Catalan, Oliver and Fusar-Poli2021). Evaluating these models on symptom severity questionnaires alone may also produce a mismatch with patient outcomes if factors such as treatment tolerability are not taken into account (Chen & Asch, Reference Chen and Asch2017). Prospective validation of prediction models across real-life outcomes and settings is thus crucial but rarely performed.

While a lot of research is devoted to the development of new outcome prediction models, few studies address how these models should be implemented in clinical care (Salazar De Pablo et al., Reference Salazar De Pablo, Studerus, Vaquerizo-Serrano, Irving, Catalan, Oliver and Fusar-Poli2021). Factors that may hamper implementation include potential harm to the service user, limited access to data from the local setting, and unfamiliarity with prediction models among practitioners and patients (Baldwin et al., Reference Baldwin, Loebel-Davidsohn, Oliver, Salazar de Pablo, Stahl, Riper and Fusar-Poli2022). These barriers grow as models become more complex and their implications and assumptions become less transparent.

Finally, the implementation of prediction models may itself shape clinical practice, for example by causing a shift in the composition of the patient population, which can in turn undermine the validity of the model. Certain treatment options can become more attractive when their outcome is more predictable (for example, if potentially severe side-effects of treatment can be ruled out in advance). This will change the population treated with the intervention, as the treatment may be considered earlier in the treatment protocol. Adaptive modeling approaches are therefore required, but these introduce new challenges, for example regarding privacy (Garralda et al., Reference Garralda, Dienstmann, Piris-Giménez, Braña, Rodon and Tabernero2019). Federated learning – a learning paradigm in which algorithms are collectively trained in local settings without exchanging the data itself – is an attractive approach to solving such issues. With this approach, models are dispatched to individual healthcare facilities without exchanging personal data. Parameters are optimized in the local setting and sent back to a central server for aggregation, which addresses privacy concerns by minimizing exposure to personal data. Healthcare information processing systems should be transformed to facilitate such approaches (McMahan, Moore, Ramage, Hampson, & Arcas, Reference McMahan, Moore, Ramage, Hampson and Arcas2017; Rieke et al., Reference Rieke, Hancox, Li, Milletarì, Roth, Albarqouni and Cardoso2020).
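As a sketch of the aggregation step described above, the federated averaging scheme of McMahan et al. (2017) can be reduced to a few lines. The sites, data, and linear model below are hypothetical stand-ins for local healthcare facilities, not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One site fits a linear model on its own data; only weights leave the site."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

# Hypothetical sites: each holds private (features, outcome) data that never
# leaves the facility.
true_w = np.array([1.5, -2.0])
sites = []
for n in (80, 120, 200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    # The central server aggregates parameters, weighted by local sample size,
    # without ever seeing the underlying patient-level data.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # approaches the pooled-data solution without pooling the data
```

The design choice to exchange only parameters (never rows of data) is what makes this attractive for the privacy constraints discussed above; in real deployments, secure aggregation and differential privacy are typically layered on top.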

Contextual behavioral factors

From a contextual behavioral perspective, mental health emerges from the dynamic interaction between the individual and the environment (Ford & Urban, Reference Ford and Urban1998). For studies aiming to predict treatment outcomes, this means that the effectiveness of interventions may vary within an individual depending on the setting and circumstances in which the intervention is provided. For example, the response to medication may be (non-linearly) influenced by the setting: it could differ between clinical and outpatient care settings, with or without community treatment facilities in place. Other factors include the system of friends and family surrounding the patient, the local mental health care system (e.g. private v. public insurance systems), concomitant treatments (e.g. pharmacological treatment with or without parallel psychotherapy), and judiciary status (e.g. voluntary or coercive treatment) (Glick, Stekoll, & Hays, Reference Glick, Stekoll and Hays2011; Kessing et al., Reference Kessing, Hansen, Hvenegaard, Christensen, Dam, Gluud and Wetterslev2013; Koutsouleris et al., Reference Koutsouleris, Kahn, Chekroud, Leucht, Falkai, Wobrock and Hasan2016; Polese, Fornaro, Palermo, De Luca, & De Bartolomeis, Reference Polese, Fornaro, Palermo, De Luca and De Bartolomeis2019; Taipale et al., Reference Taipale, Schneider-Thoma, Pinzón-Espinosa, Radua, Efthimiou, Vinkers and Luykx2022). That the impact of interventions is context-dependent is further illustrated by the increase in placebo response over time in psychiatric clinical trial data (Weimer, Colloca, & Enck, Reference Weimer, Colloca and Enck2015).

Precision psychiatry studies often (implicitly) assume that treatment response markers are stable over time and context, which may not be the case; all the factors mentioned above may change over time within an individual. Cultural factors, beliefs, expectations, and values of the individual who is to be treated within a precision framework may also contribute to the distress caused by mental health symptoms, in both positive and negative ways (de Andino & de Mamani, Reference de Andino and de Mamani2022). Integrating the variability of mental health and behavior into the (often biologically oriented) precision framework is a major challenge (Köhne & Van Os, Reference Köhne and Van Os2021). A possible solution is an integrative approach during model development, in which static and dynamic factors contributing to outcomes are combined. Contextual predictive factors of interest that change dynamically over time (and therefore could be improved with targeted interventions) include the impact of discrimination, (self-)stigma, and the alignment of the therapy with the values of the familial, social, and cultural context. In addition, combining quantitative and qualitative research within the same study sample, in co-creation with patients, may strengthen model validity and lead to additional insights.

From linear predictions to complex dynamics

Taken together, linear prediction models of outcomes in precision studies are unlikely to lead to improvements in clinical care. Even with complex machine learning approaches, the underlying assumption remains that a combination of factors at baseline will linearly lead to a predictable outcome (Van Os & Kohne, Reference Van Os and Kohne2021). It has also been argued that even successful implementation of precision medicine may only have a limited impact from a public health perspective (Joyner & Paneth, Reference Joyner and Paneth2015). So how to move forward?

There is compelling evidence that mental health is better understood as a complex dynamical system (Borsboom, Haslbeck, & Robinaugh, Reference Borsboom, Haslbeck and Robinaugh2022; Fried & Robinaugh, Reference Fried and Robinaugh2020). Complexity theory suggests that systems are unique and should be approached individually. There is rich diversity in the clinical symptoms of patients and in the contributing factors to their mental health (Fried et al., Reference Fried, Flake and Robinaugh2022; van Os et al., Reference van Os, Guloksuz, Vijn, Hafkenscheid and Delespaul2019). These factors include positive contributing factors in addition to psychiatric vulnerabilities (Huber et al., Reference Huber, Van Vliet, Giezenberg, Winkens, Heerkens, Dagnelie and Knottnerus2016). All these factors are interconnected in systems, and their interactions influence outcomes (Borsboom, Reference Borsboom2017).

Advances in psychiatric symptom network analysis are therefore promising and require further integration with biological, psychological, and social factors. Symptom network theory reconceptualizes mental disorders as intricate networks of interconnected nodes and edges rather than collections of co-occurring symptoms (Borsboom, Reference Borsboom2017). In such a network, each symptom acts as a node, and edges describe the interrelationships between symptoms (Epskamp & Fried, Reference Epskamp and Fried2018). An example of the potential value of symptom network analysis is a study that revealed how childhood trauma may be linked to psychosis through different paths: for some individuals, childhood trauma was connected to psychosis via depression, while in others, it was linked to impulse control (Isvoranu et al., Reference Isvoranu, Van Borkulo, Boyette, Wigman, Vinkers, Borsboom and Myin-Germeys2017). Symptom network analysis may also be used to capture dynamics in mental health. For example, delusions are often a core (central) symptom of psychosis in acute phases, but a few months later this is no longer the case (Demyttenaere et al., Reference Demyttenaere, Leenaerts, Acsai, Sebe, Laszlovszky, Barabássy and Correll2022). Different antipsychotic treatments can uniquely modulate these symptom nodes, providing further evidence that the network approach offers a potential roadmap for dynamic, personalized treatments (Sun et al., Reference Sun, Zhang, Lu, Yan, Guo, Liao and Yue2023).
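A minimal sketch of how such a network can be estimated from cross-sectional data uses the partial correlations encoded in the precision (inverse covariance) matrix. The symptom labels and simulated data below are illustrative, and the hard threshold is a crude stand-in for the lasso-regularized estimation described by Epskamp & Fried (2018):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symptom ratings (rows = patients); labels are invented, not PANSS items.
symptoms = ["delusions", "hallucinations", "anxiety", "insomnia", "low_mood"]
n = 500
# Simulate a known chain of influences: insomnia -> anxiety -> delusions, etc.
insomnia = rng.normal(size=n)
anxiety = 0.6 * insomnia + rng.normal(scale=0.8, size=n)
delusions = 0.6 * anxiety + rng.normal(scale=0.8, size=n)
hallucinations = 0.5 * delusions + rng.normal(scale=0.9, size=n)
low_mood = 0.5 * insomnia + rng.normal(scale=0.9, size=n)
X = np.column_stack([delusions, hallucinations, anxiety, insomnia, low_mood])
p = len(symptoms)

# Partial correlations follow from the precision matrix P:
# edge(i, j) = -P[i, j] / sqrt(P[i, i] * P[j, j]).
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 0.0)

# Keep only meaningful edges; direct links survive, indirect ones vanish.
edges = [(symptoms[i], symptoms[j], round(pcorr[i, j], 2))
         for i in range(p) for j in range(i + 1, p) if abs(pcorr[i, j]) > 0.1]
for edge in edges:
    print(edge)
```

The point of conditioning on all other symptoms is that the recovered edges reflect direct relationships (e.g. anxiety–insomnia) while merely correlated but non-adjacent pairs in the simulated chain drop out.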

In complex dynamical systems, the history of individual elements is crucial for the probability distribution of future outcomes. This again contrasts with the idea that outcomes of future patients can be made predictable based on retrospective analysis of data from others. It stresses the importance of prevention in mental health care and fits naturally with the descriptive approach used in clinical practice when we take a patient's personal history (Psaty, Dekkers, & Cooper, Reference Psaty, Dekkers and Cooper2018). Computational psychiatry and implementations of virtual trials based on personal data, as currently under development in neuroscience, may be important steps forward (de Haan, Reference de Haan2017; Huys, Maia, & Frank, Reference Huys, Maia and Frank2016). For example, virtual brain models are being developed to model the impact of resective surgery in epilepsy and brain tumors and thereby inform surgical planning (Jirsa et al., Reference Jirsa, Wang, Triebkorn, Hashemi, Jha, Gonzalez-Martinez and Bartolomei2023; van Dellen et al., Reference van Dellen, Hillebrand, Douw, Heimans, Reijneveld and Stam2013). Similar approaches could be used to model the impact of interventions in psychiatry. Finally, as the survival of complex dynamical (eco)systems depends on their adaptivity or resilience (Gao, Barzel, & Barabási, Reference Gao, Barzel and Barabási2016), specific interventions should go hand in hand with interventions that increase resilience and flexibility (Davydov, Stewart, Ritchie, & Chaudieu, Reference Davydov, Stewart, Ritchie and Chaudieu2010).
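The claim that history shapes outcomes can be made concrete with a toy bistable (two-attractor) model in the spirit of the dynamical-systems literature cited above. The equation and parameters are purely illustrative and not a validated clinical model:

```python
def simulate(x0, pulse=0.35, dt=0.01, steps=2000):
    """Toy bistable state model: the attractor near x = +1 stands for a healthy
    state, the one near x = -1 for a symptomatic state. An identical, time-limited
    intervention (the pulse) is applied to every simulated individual."""
    x = x0
    for t in range(steps):
        drive = pulse if t < 400 else 0.0  # same brief intervention for everyone
        x += dt * (x - x**3 + drive)       # Euler step of dx/dt = x - x^3 + drive
    return x

# Same intervention, different histories (starting states), opposite outcomes:
# a mildly symptomatic start is tipped into the healthy basin, a deeply
# symptomatic start relaxes back to the symptomatic attractor.
for x0 in (-0.2, -0.9):
    print(f"start {x0:+.1f} -> ends near {simulate(x0):+.2f}")
```

This history dependence is why a prediction trained on a snapshot of other patients can misjudge an individual trajectory, and why interventions that deepen the healthy basin (i.e. increase resilience) matter alongside symptom-targeted ones.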

Conclusion

By leveraging advances in technology and the availability of large datasets, precision psychiatry approaches may improve the predictability of prognosis and of response to prevention or treatment. Future research should consider the limitations of currently available datasets, including selection bias, fairness, and the noisy reality of treatment data from clinical trials, and should incorporate contextual behavioral factors in a broader framework of mental health as a complex dynamical system. Research on methodological innovations should consider implementation in real-world settings early in the process.

Acknowledgements

I thank Jim van Os and Arjen Slooter for their insightful comments on an earlier version of this manuscript.

Funding statement

This work was supported by The Netherlands Organization for Health Research and Development (ZonMW) GGZ fellowship, Award ID: 60-63600-98-711, and a Rudolf Magnus Fellowship from the UMC Utrecht Brain Center.

Competing interests

None.

References

American Psychiatric Association. (2013). DSM 5. American Journal of Psychiatry. https://doi.org/10.1176/appi.books.9780890425596.744053
Andreasen, N. C., Carpenter, W. T., Kane, J. M., Lasser, R. A., Marder, S. R., & Weinberger, D. R. (2005). Remission in schizophrenia: Proposed criteria and rationale for consensus. American Journal of Psychiatry, 162(3), 441–449. https://doi.org/10.1176/appi.ajp.162.3.441
Ashley, E. A. (2016). Towards precision medicine. Nature Reviews Genetics, 17(9), 507–522. https://doi.org/10.1038/nrg.2016.86
Baldwin, H., Loebel-Davidsohn, L., Oliver, D., Salazar de Pablo, G., Stahl, D., Riper, H., … Fusar-Poli, P. (2022). Real-world implementation of precision psychiatry: A systematic review of barriers and facilitators. Brain Sciences, 12(7), 934. https://doi.org/10.3390/BRAINSCI12070934
Borsboom, D. (2017). A network theory of mental disorders. World Psychiatry: Official Journal of the World Psychiatric Association (WPA), 16(1), 5–13. https://doi.org/10.1002/WPS.20375
Borsboom, D., Haslbeck, J. M. B., & Robinaugh, D. J. (2022). Systems-based approaches to mental disorders are the only game in town. World Psychiatry: Official Journal of the World Psychiatric Association (WPA), 21(3), 420–422. https://doi.org/10.1002/WPS.21004
Brand, B. A., de Boer, J. N., Dazzan, P., & Sommer, I. E. (2022). Towards better care for women with schizophrenia-spectrum disorders. The Lancet Psychiatry, 9(4), 330–336. https://doi.org/10.1016/S2215-0366(21)00383-7
Bzdok, D., & Meyer-Lindenberg, A. (2018). Machine learning for precision psychiatry: Opportunities and challenges. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(3), 223–230. https://doi.org/10.1016/J.BPSC.2017.11.007
Bzdok, D., Varoquaux, G., & Steyerberg, E. W. (2021). Prediction, not association, paves the road to precision medicine. JAMA Psychiatry, 78(2), 127–128. https://doi.org/10.1001/JAMAPSYCHIATRY.2020.2549
Chekroud, A. M., Bondar, J., Delgadillo, J., Doherty, G., Wasil, A., Fokkema, M., … Choi, K. (2021). The promise of machine learning in predicting treatment outcomes in psychiatry. World Psychiatry, 20(2), 154–170. https://doi.org/10.1002/wps.20882
Chen, J. H., & Asch, S. M. (2017). Machine learning and prediction in medicine – beyond the peak of inflated expectations. The New England Journal of Medicine, 376(26), 2507. https://doi.org/10.1056/NEJMP1702071
Chopra, S., Francey, S. M., O'Donoghue, B., Sabaroedin, K., Arnatkeviciute, A., Cropley, V., … Fornito, A. (2021). Functional connectivity in antipsychotic-treated and antipsychotic-naive patients with first-episode psychosis and low risk of self-harm or aggression: A secondary analysis of a randomized clinical trial. JAMA Psychiatry, 78(9), 994–1004. https://doi.org/10.1001/JAMAPSYCHIATRY.2021.1422
Coutts, F., Koutsouleris, N., & McGuire, P. (2023). Psychotic disorders as a framework for precision psychiatry. Nature Reviews Neurology, 19(4), 221–234. https://doi.org/10.1038/s41582-023-00779-1
Davydov, D. M., Stewart, R., Ritchie, K., & Chaudieu, I. (2010). Resilience and mental health. Clinical Psychology Review, 30(5), 479–495. https://doi.org/10.1016/J.CPR.2010.03.003
de Andino, A. M., & de Mamani, A. W. (2022). The moderating role of cultural factors and subclinical psychosis on the relationship between internalized stigma, discrimination, and mental help-seeking attitudes. Stigma and Health, 7(2), 214–225. https://doi.org/10.1037/SAH0000377
de Haan, W. (2017). The virtual trial. Frontiers in Neuroscience, 11, 110. https://doi.org/10.3389/fnins.2017.00110
Demyttenaere, K., Leenaerts, N., Acsai, K., Sebe, B., Laszlovszky, I., Barabássy, Á., … Correll, C. U. (2022). Disentangling the symptoms of schizophrenia: Network analysis in acute phase patients and in patients with predominant negative symptoms. European Psychiatry, 65(1), e18. https://doi.org/10.1192/j.eurpsy.2021.2241
Dominicus, L. S., Oranje, B., Otte, W. M., Ambrosen, K. S., Düring, S., Scheepers, F. E., … van Dellen, E. (2023). Macroscale EEG characteristics in antipsychotic-naïve patients with first-episode psychosis and healthy controls. Schizophrenia, 9(1), 110. https://doi.org/10.1038/s41537-022-00329-6
Dwyer, D. B., Falkai, P., & Koutsouleris, N. (2018). Machine learning approaches for clinical psychology and psychiatry. Annual Review of Clinical Psychology, 14, 91–118. https://doi.org/10.1146/ANNUREV-CLINPSY-032816-045037
Epskamp, S., & Fried, E. I. (2018). A tutorial on regularized partial correlation networks. Psychological Methods, 23(4), 617–634. https://doi.org/10.1037/met0000167
Fernandes, B. S., Williams, L. M., Steiner, J., Leboyer, M., Carvalho, A. F., & Berk, M. (2017). The new field of ‘precision psychiatry’. BMC Medicine, 15(1), 17. https://doi.org/10.1186/S12916-017-0849-X
First, M. B., Botteron, K. N., Castellanos, F. X., Dickstein, D., & Hospital, M. (2012). Consensus report of the APA work group on neuroimaging markers of psychiatric disorders. Retrieved from https://www.researchgate.net/publication/261507750
Ford, D. H., & Urban, H. B. (1998). Contemporary models of psychotherapy: A comparative analysis. Retrieved from https://books.google.com/books/about/Contemporary_Models_of_Psychotherapy.html?hl=nl&id=49OUyshEDhYC
Fountoulakis, K. N. (2021). Psychiatry: From its historical and philosophical roots to the modern face. Cham: Springer. https://doi.org/10.1007/978-3-030-86541-2
Fried, E. I., Flake, J. K., & Robinaugh, D. J. (2022). Revisiting the theoretical and methodological foundations of depression measurement. Nature Reviews Psychology, 1(6), 358–368. https://doi.org/10.1038/s44159-022-00050-2
Fried, E. I., & Robinaugh, D. J. (2020). Systems all the way down: Embracing complexity in mental health research. BMC Medicine, 18(1), 14. https://doi.org/10.1186/S12916-020-01668-W
Furukawa, T. A., Levine, S. Z., Tanaka, S., Goldberg, Y., Samara, M., Davis, J. M., … Leucht, S. (2015). Initial severity of schizophrenia and efficacy of antipsychotics: Participant-level meta-analysis of 6 placebo-controlled studies. JAMA Psychiatry, 72(1), 14–21. https://doi.org/10.1001/JAMAPSYCHIATRY.2014.2127
Gao, J., Barzel, B., & Barabási, A. L. (2016). Universal resilience patterns in complex networks. Nature, 530(7590), 307–312. https://doi.org/10.1038/nature16948
Garralda, E., Dienstmann, R., Piris-Giménez, A., Braña, I., Rodon, J., & Tabernero, J. (2019). New clinical trial designs in the era of precision medicine. Molecular Oncology, 13(3), 549–557. https://doi.org/10.1002/1878-0261.12465
Glick, I. D., Stekoll, A. H., & Hays, S. (2011). The role of the family and improvement in treatment maintenance, adherence, and outcome for schizophrenia. Journal of Clinical Psychopharmacology, 31(1), 82–85. https://doi.org/10.1097/JCP.0B013E31820597FA
Grzenda, A., Kraguljac, N. V., McDonald, W. M., Nemeroff, C., Torous, J., Alpert, J. E., … Widge, A. S. (2021). Evaluating the machine learning literature: A primer and user's guide for psychiatrists. The American Journal of Psychiatry, 178(8), 715–729. https://doi.org/10.1176/APPI.AJP.2020.20030250
Guloksuz, S., & Van Os, J. (2018). The slow death of the concept of schizophrenia and the painful birth of the psychosis spectrum. Psychological Medicine, 48(2), 229–244. https://doi.org/10.1017/S0033291717001775
Hafliðadóttir, S. H., Juhl, C. B., Nielsen, S. M., Henriksen, M., Harris, I. A., Bliddal, H., … Christensen, R. (2021). Placebo response and effect in randomized clinical trials: Meta-research with focus on contextual effects. Trials, 22, 115. https://doi.org/10.1186/s13063-021-05454-8
Hendrickson, R. C., Thomas, R. G., Schork, N. J., & Raskind, M. A. (2020). Optimizing aggregated N-of-1 trial designs for predictive biomarker validation: Statistical methods and theoretical findings. Frontiers in Digital Health, 2, 13. https://doi.org/10.3389/FDGTH.2020.00013
Howes, O. D., McCutcheon, R., Agid, O., De Bartolomeis, A., Van Beveren, N. J. M., Birnbaum, M. L., … Correll, C. U. (2017). Treatment-Resistant Schizophrenia: Treatment Response and Resistance in Psychosis (TRRIP) working group consensus guidelines on diagnosis and terminology. American Journal of Psychiatry, 174(3), 216–229. https://doi.org/10.1176/APPI.AJP.2016.16050503
Howick, J., Friedemann, C., Tsakok, M., Watson, R., Tsakok, T., Thomas, J., … Heneghan, C. (2013). Are treatments more effective than placebos? A systematic review and meta-analysis. PLOS ONE, 8(5), e62599. https://doi.org/10.1371/JOURNAL.PONE.0062599
Huber, M., Van Vliet, M., Giezenberg, M., Winkens, B., Heerkens, Y., Dagnelie, P. C., … Knottnerus, J. A. (2016). Towards a ‘patient-centred’ operationalisation of the new dynamic concept of health: A mixed methods study. BMJ Open, 6(1), e010091. https://doi.org/10.1136/BMJOPEN-2015-010091
Huys, Q. J. M., Maia, T. V., & Frank, M. J. (2016). Computational psychiatry as a bridge from neuroscience to clinical applications. Nature Neuroscience, 19(3), 404–413. https://doi.org/10.1038/nn.4238
Isvoranu, A. M., Van Borkulo, C. D., Boyette, L. Lou, Wigman, J. T. W., Vinkers, C. H., Borsboom, D., … Myin-Germeys, I. (2017). A network approach to psychosis: Pathways between childhood trauma and psychotic symptoms. Schizophrenia Bulletin, 43(1), 187–196. https://doi.org/10.1093/SCHBUL/SBW055
Jirsa, V., Wang, H., Triebkorn, P., Hashemi, M., Jha, J., Gonzalez-Martinez, J., … Bartolomei, F. (2023). Personalised virtual brain models in epilepsy. The Lancet Neurology, 22(5), 443–454. https://doi.org/10.1016/S1474-4422(23)00008-X
Joyner, M. J., & Paneth, N. (2015). Seven questions for personalized medicine. JAMA, 314(10), 999–1000. https://doi.org/10.1001/JAMA.2015.7725
Kahn, R. S., Winter van Rossum, I., Leucht, S., McGuire, P., Lewis, S. W., Leboyer, M., … Sommer, I. E. (2018). Amisulpride and olanzapine followed by open-label treatment with clozapine in first-episode schizophrenia and schizophreniform disorder (OPTiMiSE): A three-phase switching study. The Lancet Psychiatry, 5(10), 797–807. https://doi.org/10.1016/S2215-0366(18)30252-9
Kapur, S., Phillips, A. G., & Insel, T. R. (2012). Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it? Molecular Psychiatry, 17(12), 1174–1179. https://doi.org/10.1038/mp.2012.105
Kassirer, J. P., & Gorry, G. A. (1978). Clinical problem solving: A behavioral analysis. Annals of Internal Medicine, 89(2), 245–255. https://doi.org/10.7326/0003-4819-89-2-245
Kessing, L. V., Hansen, H. V., Hvenegaard, A., Christensen, E. M., Dam, H., Gluud, C., & Wetterslev, J. (2013). Treatment in a specialised out-patient mood disorder clinic v. standard out-patient treatment in the early course of bipolar disorder: Randomised clinical trial. The British Journal of Psychiatry, 202(3), 212–219. https://doi.org/10.1192/BJP.BP.112.113548
Köhne, A. C. J., & Van Os, J. (2021). Precision psychiatry: Promise for the future or rehash of a fossilised foundation? Psychological Medicine, 51(9), 1409–1411. https://doi.org/10.1017/S0033291721000271
Koutsouleris, N., Kahn, R. S., Chekroud, A. M., Leucht, S., Falkai, P., Wobrock, T., … Hasan, A. (2016). Multisite prediction of 4-week and 52-week treatment outcomes in patients with first-episode psychosis: A machine learning approach. The Lancet Psychiatry, 3(10), 935–946. https://doi.org/10.1016/S2215-0366(16)30171-7
Lacro, J. P., Dunn, L. B., Dolder, C. R., Leckband, S. G., & Jeste, D. V. (2002). Prevalence of and risk factors for medication nonadherence in patients with schizophrenia: A comprehensive review of recent literature. Journal of Clinical Psychiatry, 63(10), 892–909. https://doi.org/10.4088/JCP.V63N1007
Leucht, S., Cipriani, A., Spineli, L., Mavridis, D., Örey, D., Richter, F., … Davis, J. M. (2013). Comparative efficacy and tolerability of 15 antipsychotic drugs in schizophrenia: A multiple-treatments meta-analysis. The Lancet, 382(9896), 951–962. https://doi.org/10.1016/S0140-6736(13)60733-3
Leucht, S., Leucht, C., Huhn, M., Chaimani, A., Mavridis, D., Helfer, B., … Davis, J. M. (2017). Sixty years of placebo-controlled antipsychotic drug trials in acute schizophrenia: Systematic review, Bayesian meta-analysis, and meta-regression of efficacy predictors. American Journal of Psychiatry, 174(10), 927–942. https://doi.org/10.1176/APPI.AJP.2017.16121358
Luciano, M., Sampogna, G., Del Vecchio, V., Pingani, L., Palumbo, C., De Rosa, C., … Fiorillo, A. (2014). Use of coercive measures in mental health practice and its impact on outcome: A critical review. Expert Review of Neurotherapeutics, 14(2), 131–141. https://doi.org/10.1586/14737175.2014.874286
MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7(1), 19–40. https://doi.org/10.1037/1082-989X.7.1.19
Maj, M., van Os, J., De Hert, M., Gaebel, W., Galderisi, S., Green, M. F., … Ventura, J. (2021). The clinical characterization of the patient with primary psychosis aimed at personalization of management. World Psychiatry, 20(1), 4–33. https://doi.org/10.1002/WPS.20809
Mamun, A., Nsiah, N. Y., Srinivasan, M., Chaturvedula, A., Basha, R., Cross, D., … Vishwanatha, J. K. (2019). Diversity in the era of precision medicine – from bench to bedside implementation. Ethnicity & Disease, 29(3), 517. https://doi.org/10.18865/ED.29.3.517
McMahan, B., Moore, E., Ramage, D., Hampson, S., & Arcas, B. A. Y. (2017). Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA (pp. 1273–1282). PMLR. Retrieved from https://proceedings.mlr.press/v54/mcmahan17a.html
Meehan, A. J., Lewis, S. J., Fazel, S., Fusar-Poli, P., Steyerberg, E. W., Stahl, D., & Danese, A. (2022). Clinical prediction models in psychiatry: A systematic review of two decades of progress and challenges. Molecular Psychiatry, 27(6), 2700–2708. https://doi.org/10.1038/S41380-022-01528-4
Meehl, P. E. (1956). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence [Monograph]. Minneapolis: University of Minnesota Press. https://doi.org/10.1037/11281-000
Mitchell, S., Potash, E., Barocas, S., D'Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141–163. https://doi.org/10.1146/ANNUREV-STATISTICS-042720-125902
National Research Council (2011). Toward precision medicine: Building a knowledge network for biomedical research and a new taxonomy of disease. Washington, DC: The National Academies Press. https://doi.org/10.17226/13284Google Scholar
Patel, R., Wilson, R., Jackson, R., Ball, M., Shetty, H., Broadbent, M., … Bhattacharyya, S. (2016). Association of cannabis use with hospital admission and antipsychotic treatment failure in first episode psychosis: An observational study. BMJ Open, 6(3), e009888. https://doi.org/10.1136/BMJOPEN-2015-009888CrossRefGoogle ScholarPubMed
Plana-Ripoll, O., Pedersen, C. B., Holtz, Y., Benros, M. E., Dalsgaard, S., De Jonge, P., … McGrath, J. J. (2019). Exploring comorbidity within mental disorders among a danish national population. JAMA Psychiatry, 76(3), 259270. https://doi.org/10.1001/JAMAPSYCHIATRY.2018.3658CrossRefGoogle ScholarPubMed
Polese, D., Fornaro, M., Palermo, M., De Luca, V., & De Bartolomeis, A. (2019). Treatment-resistant to antipsychotics: A resistance to everything? Psychotherapy in treatment-resistant schizophrenia and nonaffective psychosis: A 25–year systematic review and exploratory meta-analysis. Frontiers in Psychiatry, 10(MAR), 210. https://doi.org/10.3389/FPSYT.2019.00210/BIBTEXCrossRefGoogle ScholarPubMed
Psaty, B. M., Dekkers, O. M., & Cooper, R. S. (2018). Comparison of 2 treatment models: Precision medicine and preventive medicine. JAMA, 320(8), 751752. https://doi.org/10.1001/JAMA.2018.8377CrossRefGoogle ScholarPubMed
Research Harmonisation Award Schizophrenia International Research Society. (n.d.). Retrieved 15 February 2023, from https://schizophreniaresearchsociety.org/research-harmonisation-award/Google Scholar
Rieke, N., Hancox, J., Li, W., Milletarì, F., Roth, H. R., Albarqouni, S., … Cardoso, M. J. (2020). The future of digital health with federated learning. Npj Digital Medicine, 3(1), 17. https://doi.org/10.1038/s41746-020-00323-1CrossRefGoogle ScholarPubMed
Ripke, S., Neale, B. M., Corvin, A., Walters, J. T. R., Farh, K. H., Holmans, P. A., … O'Donovan, M. C. (2014). Biological insights from 108 schizophrenia-associated genetic loci. Nature, 511(7510), 421–427. https://doi.org/10.1038/nature13595
Romero, C., Werme, J., Jansen, P. R., Gelernter, J., Stein, M. B., Levey, D., … van der Sluis, S. (2022). Exploring the genetic overlap between twelve psychiatric disorders. Nature Genetics, 54(12), 1795–1802. https://doi.org/10.1038/s41588-022-01245-2
Sahin, D., Kambeitz-Ilankovic, L., Wood, S., Dwyer, D., Upthegrove, R., Salokangas, R., … Kambeitz, J. (2024). Algorithmic fairness in precision psychiatry: Analysis of prediction models in individuals at clinical high risk for psychosis. The British Journal of Psychiatry, 224(2), 55–65. https://doi.org/10.1192/BJP.2023.141
Salazar De Pablo, G., Studerus, E., Vaquerizo-Serrano, J., Irving, J., Catalan, A., Oliver, D., … Fusar-Poli, P. (2021). Implementing precision psychiatry: A systematic review of individualized prediction models for clinical practice. Schizophrenia Bulletin, 47(2), 284–297. https://doi.org/10.1093/SCHBUL/SBAA120
Scangos, K. W., State, M. W., Miller, A. H., Baker, J. T., & Williams, L. M. (2023). New and emerging approaches to treat psychiatric disorders. Nature Medicine, 29(2), 317–333. https://doi.org/10.1038/s41591-022-02197-0
Stein, D. J., Shoptaw, S. J., Vigo, D. V., Lund, C., Cuijpers, P., Bantjes, J., … Maj, M. (2022). Psychiatric diagnosis and treatment in the 21st century: Paradigm shifts versus incremental integration. World Psychiatry, 21(3), 393–414. https://doi.org/10.1002/WPS.20998
Sun, Y., Zhang, Y., Lu, Z., Yan, H., Guo, L., Liao, Y., … Yue, W. (2023). Longitudinal network analysis reveals interactive change of schizophrenia symptoms during acute antipsychotic treatment. Schizophrenia Bulletin, 49(1), 208–217. https://doi.org/10.1093/schbul/sbac131
Taipale, H., Schneider-Thoma, J., Pinzón-Espinosa, J., Radua, J., Efthimiou, O., Vinkers, C. H., … Luykx, J. J. (2022). Representation and outcomes of individuals with schizophrenia seen in everyday practice who are ineligible for randomized clinical trials. JAMA Psychiatry, 79(3), 210–218. https://doi.org/10.1001/JAMAPSYCHIATRY.2021.3990
Topol, E. J. (2014). Individualized medicine from prewomb to tomb. Cell, 157(1), 241–253. https://doi.org/10.1016/J.CELL.2014.02.012
Van Dellen, E., Bohlken, M. M., Draaisma, L., Tewarie, P. K., Van Lutterveld, R., Mandl, R., … Sommer, I. E. (2016). Structural brain network disturbances in the psychosis spectrum. Schizophrenia Bulletin, 42(3), 782–789. https://doi.org/10.1093/schbul/sbv178
van Dellen, E., Hillebrand, A., Douw, L., Heimans, J. J., Reijneveld, J. C., & Stam, C. J. (2013). Local polymorphic delta activity in cortical lesions causes global decreases in functional connectivity. NeuroImage, 83, 524–532. https://doi.org/10.1016/j.neuroimage.2013.06.009
Van Os, J., Gilvarry, C., Bale, R., Van Horn, E., Tattan, T., White, I., & Murray, R. (2000). Diagnostic value of the DSM and ICD categories of psychosis: An evidence-based approach. Social Psychiatry and Psychiatric Epidemiology, 35(7), 305–311. https://doi.org/10.1007/S001270050243
van Os, J., Guloksuz, S., Vijn, T. W., Hafkenscheid, A., & Delespaul, P. (2019). The evidence-based group-level symptom-reduction model as the organizing principle for mental health care: Time for change? World Psychiatry: Official Journal of the World Psychiatric Association (WPA), 18(1), 88–96. https://doi.org/10.1002/WPS.20609
Van Os, J., & Kohne, A. C. J. (2021). It is not enough to sing its praises: The very foundations of precision psychiatry may be scientifically unsound and require examination. Psychological Medicine, 51(9), 1415–1417. https://doi.org/10.1017/S0033291721000167
Van Os, J., Linscott, R. J., Myin-Germeys, I., Delespaul, P., & Krabbendam, L. (2009). A systematic review and meta-analysis of the psychosis continuum: Evidence for a psychosis proneness–persistence–impairment model of psychotic disorder. Psychological Medicine, 39(2), 179–195. https://doi.org/10.1017/S0033291708003814
Vieta, E. (2015). Personalized medicine applied to mental health: Precision psychiatry. Revista de Psiquiatría y Salud Mental (English Edition), 8(3), 117–118. https://doi.org/10.1016/j.rpsmen.2015.03.007
Weimer, K., Colloca, L., & Enck, P. (2015). Placebo effects in psychiatry: Mediators and moderators. The Lancet Psychiatry, 2(3), 246–257. https://doi.org/10.1016/S2215-0366(14)00092-3
Williams, L. M. (2016). Precision psychiatry: A neural circuit taxonomy for depression and anxiety. The Lancet Psychiatry, 3(5), 472–480. https://doi.org/10.1016/S2215-0366(15)00579-9
Zhu, Y., Krause, M., Huhn, M., Rothe, P., Schneider-Thoma, J., Chaimani, A., … Leucht, S. (2017). Antipsychotic drugs for the acute treatment of patients with a first episode of schizophrenia: A systematic review with pairwise and network meta-analyses. The Lancet Psychiatry, 4(9), 694–705. https://doi.org/10.1016/S2215-0366(17)30270-5
Zurn, P., Bassett, D. S., & Rust, N. C. (2020). The citation diversity statement: A practice of transparency, a way of life. Trends in Cognitive Sciences, 24(9), 669–672. https://doi.org/10.1016/J.TICS.2020.06.009

Table 1. Precision psychiatry: challenges and possible solutions


Figure 1. Expected model performance for a ‘gold standard’ model tested on clinical data of patients with schizophrenia spectrum disorders. Clinical distribution (left panel) based on (Lacro et al., 2002; Leucht et al., 2017; Marsman et al., 2020). In a clinical dataset, for example one obtained in a randomized controlled trial of an intervention such as antipsychotic medication, patients are classified as responders or non-responders based on a clinical evaluation at follow-up. Baseline information may be used to predict such outcomes retrospectively and is then tested against this clinical classification. This is visualized for a theoretical ‘perfect predictor’ (right panel), which will nevertheless show low accuracy in practice. Patients may achieve remission due to factors unrelated to the active treatment (e.g. placebo effects); meta-analyses suggest this is the case for 30 of 51 responders. Similarly, non-response may result from factors unrelated to the treatment itself, such as treatment non-adherence or social factors (~25 of 49 non-responders). As a result, prediction models based on such study designs will produce false positive assignments to the response group and false negative assignments to the non-response group, and are therefore unlikely to reach the accuracy needed for implementation in clinical practice. Abbreviations: TP, true positive; TN, true negative; FP, false positive; FN, false negative.
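The accuracy ceiling implied by these proportions can be sketched with simple arithmetic. The 30/51 and ~25/49 figures come from the caption above; everything else (per-100 normalization, the assumption that all non-adherent non-responders would have responded if adherent) is an illustrative simplification, not a result from the cited meta-analyses:

```python
# Illustrative arithmetic, not an analysis from the cited studies.
# Per 100 trial patients: 51 labeled responders, 49 labeled non-responders.
responders, non_responders = 51, 49
placebo_responders = 30  # labeled responder, but improved for treatment-unrelated reasons
non_adherent = 25        # labeled non-responder for treatment-unrelated reasons

# A hypothetical 'perfect' predictor of true drug response would call the
# placebo responders non-responders (apparent false negatives) and, under
# the simplifying assumption above, call the non-adherent patients
# responders (apparent false positives).
tp = responders - placebo_responders  # true drug responders correctly predicted
tn = non_responders - non_adherent    # true non-responders correctly predicted
fn = placebo_responders
fp = non_adherent

apparent_accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"Apparent accuracy of a 'perfect' predictor: {apparent_accuracy:.2f}")  # 0.45
```

Even a predictor that perfectly captures true drug response would thus appear to perform below chance level against the noisy clinical labels, which is the ceiling effect the figure illustrates.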


Figure 2. Distribution of non-response and remission classification as a function of treatment dose and duration. In treatment response prediction studies, patients treated with medication (or other interventions such as psychotherapy) are often classified as responders/remitters or non-responders. Treatment dose and duration, however, vary across clinical trials, and the chosen regimen may lead to inaccurate classifications due to underdosing or treatment durations that are too short. In addition, patients may withdraw from treatment because of intolerable side effects before reaching a dose that is optimal for treatment effects. These factors limit the validity of clinical data as a ‘gold standard’ for treatment response prediction.


Figure 3. Visualization of the effect of an arbitrary cut-off in symptom reduction on the distribution of responders and non-responders in clinical data. Treatment outcome studies often define treatment response as a relative symptom reduction after treatment that exceeds an arbitrary cut-off point (e.g. a 20% or 50% reduction compared to the individual baseline symptom severity score). The implicit assumption of this approach is that patients can be dichotomized into responders and non-responders. Clinical data from treatment studies, however, often show a Gaussian distribution in both absolute and relative symptom reduction. As a result, the arbitrary cut-off limits the (pathophysiological) plausibility of such prediction models (Fried et al., 2022). The use of continuous outcomes would therefore be preferable.
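The sensitivity of dichotomized outcomes to the chosen threshold can be illustrated with a quick simulation. All numbers here (mean 35% symptom reduction, standard deviation 20) are assumptions chosen for illustration, not estimates from the cited studies:

```python
import random

random.seed(42)

# Assumed, roughly Gaussian distribution of percentage symptom reduction.
reductions = [random.gauss(35, 20) for _ in range(10_000)]

for cutoff in (20, 50):  # two commonly used arbitrary cut-offs
    response_rate = sum(r >= cutoff for r in reductions) / len(reductions)
    near_cutoff = sum(abs(r - cutoff) < 5 for r in reductions) / len(reductions)
    print(f"{cutoff}% cut-off: response rate {response_rate:.2f}, "
          f"patients within 5 points of the cut-off {near_cutoff:.2f}")
```

With these assumed parameters, the same simulated cohort yields a 'response rate' of roughly 77% under a 20% cut-off but only about 23% under a 50% cut-off, and a sizeable fraction of patients sit within a few points of either threshold, where measurement noise alone can flip their label. This is the information loss that continuous outcomes avoid.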