The most exciting development in modern psychiatry is arguably the field of digital phenotyping. Encompassing data related to sleep, speech, activity, social media and keypad interactions, digital phenotyping promises to measure and interpret human behaviour at unprecedented scale. In psychosis, researchers strive to use such data to predict relapse, while others aim to predict suicide risk using machine-learning techniques.[1,2] However, with the field in its infancy, the potential social effects of such technological advances are unclear. How successful will digital phenotyping be in clarifying psychiatry's uncertainties? Most importantly, where will the algorithm take us?
A vision of the future
Arriving at work, Dr Singh rests her coffee on the desk and logs into her electronic record system. Ahead of her first appointment she browses the neuroimaging, bloodwork and behavioural data for her patients that day. Reminiscing on how quickly this new world had been sold to the profession, she remembers a lecture, 30 years ago, marking her first encounter with digital phenotyping. ‘Where will the algorithm take us?’, the conference programme asked, leaving Dr Singh shocked at the science fiction surrounding her. The algorithms, already claiming more accurate suicide risk assessment than clinicians, had begun to quantify mood, anxiety, sleep, physical activity, interpersonal interactions and geolocation as markers of social functioning.[3] A former Director of the National Institute of Mental Health had left to join Silicon Valley as early as 2015, and social media companies boasted about their own suicide risk screening tools only 3 years later.[4,5] While psychiatrists of the past stole glimpses of psychopathology from snapshot mental state examinations and unreliable histories, the ‘big data’ revolution promised to chart patients’ entire behavioural phenotype for doctors to assess. Even the machines were learning, they claimed.
Steve's alarm woke him. The day greeted him with a motivating message, psychoeducation they called it, as he hunched over a bowl of cereal, inputting its nutritional information into his diet-tracking app. The phone reminded him of his exercise schedule for later that day, a pending mood assessment and a reflective diary entry, shortly followed by a reminder of his annual appointment with his psychiatrist.
Meanwhile, Dr Singh, like the rest of us, played catch-up. Medical students continued to read textbooks and revise mental state examinations. Mental health legislation remained reliant on risk, while algorithms predicted numbers but failed to rationalise their judgements in a way that humans could understand. Dr Singh watched the world reduce itself to binary, while her former colleagues – the radiologists and general surgeons she'd known since medical school – began to fear losing their jobs to automation and robotics. Governments, eager to improve ‘population well-being’ and keen to avoid culpability for suicide and violence, implemented their own machine-learning projects. Insurance companies demanded that their clients wear smartwatches, such that their every behaviour could be monitored.[6] The rush towards big data, ‘the new oil’ as one economist put it, spared no profession, field or domain of daily life.[7] As the future came to stay, the algorithms marked their next victim. Psychiatry?
Yes, Steve replied, as the receptionist beckoned him towards the sign marking the out-patient department. He took a seat in the waiting room, reflecting on what to do. How he might break the news. Remembering, tentatively, how he had stood on the bridge, looking across the city, contemplating ending it all. It was New Year's Eve 2049, the new year coaxing him intolerably, daring him on. The fireworks exploding in the distance. A note waiting at his flat. A future without a place for him. He hadn't seen his psychiatrist since then. Would she know what had happened?
Yet in this brave new world, technology wasn't just an adjunct to clinical decision-making. It strove to compete, claiming a therapeutic relationship of its own with patients. As early as the 2010s, self-help phone applications advertised themselves with taglines such as ‘rule your mind or it will rule you’, while others offered individualised therapy through artificial intelligence techniques.[8] The market flooded well beyond the traditional boundaries of academia; researchers were inundated with innovation but starved of time to regulate it. A 2018 analysis found that only 14 of approximately 100 studies using mental health apps had clinically validated evidence of their effectiveness.[9] However, fears about the field's lack of regulation could only chase the technology into the future.
His phone knew. The app which monitored his mood knew, as did the one which monitored his geolocation. Maybe the software which monitored his sleep had worked it out too. He was sure that some of his social media followers had guessed. His internet searches knew. His family didn't. His phone had recognised something his friends had missed. Perhaps it was his phone which had stopped him. Or had it merely helped him stop himself?
In anticipation of Steve's appointment, Dr Singh downloaded his data. She analysed the GPS data first, before turning to the sleep data, exercise records and daily mood assessments. She was sceptical of algorithms that claimed to close the loop between users and their care, seeking to displace the clinician. Even the most well-intentioned apps, the most effective, lacked something, she felt. But she didn't resist the technology entirely. With data of its own, psychiatry demanded parity with physical health. Digital biomarkers offered patients objective evidence for years of lived experience and routine dismissal. The algorithms informed clinical decisions, clarified cloudy diagnoses and personalised treatment choices. Yet alone they lacked something profoundly human, profoundly therapeutic.
Steve walks into the clinic and sees his doctor of 17 years. A tear drops from his cheek. The psychiatrist offers a tissue. More come. They forget the numbers for a moment. A moment's pause in a world bustling with answers. A moment of silence in a world full of data. Steve looks up. ‘It's good to see you,’ he says, wondering how to tell someone what they already know.
They talk, and they reflect. Dr Singh browses Steve's numbers in the way that doctors have glanced over blood tests and clinical observations for years. Steve adds meaning to the data, reflects on the read-outs, adds humanity to the algorithms. Together, they identify where the models hang off their subject like ill-fitting clothes. Algorithms worked in broad assumptions; clinical acumen dealt with individuals, Dr Singh explained, as she tailored the numbers to the man in front of her.
She was reasonable, Steve felt. He liked her. An ally with whom to navigate this increasingly impersonal world. As the appointment drew to a close, Dr Singh offered Steve an outreach service. We can have the data alert us if anything takes a turn for the worse, she explained: a run of poor sleep or abnormal text messaging could prompt your community team to check up on you, or even pay you a visit. It had a certain appeal, a safety net he wondered whether he'd benefit from.
Dr Singh was hesitant to offer the remote-monitoring outreach service to all her patients. The qualitative studies had identified that some found it too intrusive, whereas others worried they would become fixated on their own mental health, causing an anxiety of its own.[10] Historical concerns around the medicalisation of everyday experience persisted in new forms. Part of her was relieved when Steve declined.
New Year's Eve 2050. Another year had passed. As 2051 beckoned, Steve continued to struggle with his symptoms. He'd developed coping strategies; the data had helped him identify his triggers of relapse. Sometimes he wondered if the algorithms knew him better than he knew himself. And he wondered if that was OK. With their help, he'd learnt to predict and prepare for his relapses. At his most optimistic he wondered if he'd managed to prevent them.
After seeing her last patient for the day, Dr Singh turned back to her computer. Flicking through the screens, she glanced through a collection of records the remote-monitoring outreach programme had flagged for her attention. A man with schizophrenia exhibiting an unusual geolocation trail and a woman with bipolar disorder whose sleep had become increasingly erratic. She would call them in the morning, reassured that even technology could not evade the uncertainties of clinical practice. After shutting down her computer and returning her coffee mug to the kitchen, Dr Singh exited the clinic into the cold December evening.
Approaching midnight, Steve's phone notified him of another upcoming daily mood assessment. He glanced down, hesitated and turned it off. Placing his phone on the table next to him, he looked to the sky, stood up and walked towards the fireworks.
About the author
George Gillett, BA, BM BCh, is an Academic Foundation Doctor at the Oxford University Clinical Academic Graduate School, and at the Department of Psychiatry, University of Oxford, UK.