
Beyond clicks and downloads: a call for a more comprehensive approach to measuring mobile-health app engagement

Published online by Cambridge University Press: 11 August 2020

Heather L. O'Brien
Affiliation:
School of Information, University of British Columbia, Canada
Emma Morton
Affiliation:
Institute of Mental Health Marshall Fellow at the Department of Psychiatry, University of British Columbia, Canada
Andrea Kampen
Affiliation:
School of Information, University of British Columbia, Canada
Steven J. Barnes
Affiliation:
Department of Psychology, University of British Columbia, Canada
Erin E. Michalak*
Affiliation:
Department of Psychiatry, University of British Columbia, Canada
*
Correspondence: Erin E. Michalak. Email: erin.michalak@ubc.ca

Abstract

Downloading a mobile health (m-health) app on your smartphone does not mean you will ever use it. Telling another person about an app does not mean you like it. Using an online intervention does not mean it has had an impact on your well-being. Yet we consistently rely on downloads, clicks, ‘likes’ and other usage and popularity metrics to measure m-health app engagement. Doing so misses the complexity of how people perceive and use m-health apps in everyday life to manage mental health conditions. This article questions commonly used behavioural metrics of engagement in mental health research and care, and proposes a more comprehensive approach to measuring in-app engagement.

Type
Editorial
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is included and the original work is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use.
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

There is no shortage of mobile health (m-health) apps for people facing mental health challenges; one 2018 study estimated 10 000 available for download.[1] m-health apps track symptoms, deliver therapy and promote health behaviour interventions. Their success is often evaluated according to user engagement, typically operationalised as frequency and duration of app use, behavioural interaction with the app (for example downloads, clicks) and popularity (for example user reviews, ratings).[2] Usage data are assumed to capture different types and depths of app engagement, yet they miss cognitive and emotional responses to the app and are disconnected from behaviour change in real-world settings.[3,4]

User engagement has become an umbrella term used to describe a host of conceptually unique user-centred outcomes, including usability, acceptability, feasibility and satisfaction.[5] These different user experiences are all evaluated with the same kinds of usage statistics: dwell time, bounce rates and number of downloads, logins, visits and specific interactions, such as clicking on links, watching videos and completing modules.[2] This creates a schism between the concept of interest and the most salient metrics for its evaluation. It opens the door ‘for [user-engagement indicators] to be selected inappropriately, presented with bias or interpreted incorrectly’ and prevents meaningful comparisons across apps, studies and user groups.[5]

m-health engagement is narrowly defined as ‘user uptake … and/or ongoing use, adherence, retention, or completion data’,[6] and thus focuses on quantifying rather than qualifying user engagement. Although usage data are seemingly objective, easy and unobtrusive to record, studies have documented inconsistencies and called for standardised reporting practices.[2,5,7] In addition, the evidence for a positive impact of m-health apps on real-world or clinical trial outcomes is inconclusive,[2,7] and the ‘beneficial dose [of apps] … or amount of exposure’ at the population level is unknown.[2] This calls into question which metrics and what thresholds may be indicative of user engagement for different apps, users or mental health conditions.

Steady and sustained app use is typically viewed as positive, and disengagement and non-use as negative. This emphasis on user behaviour in user-engagement evaluation misses important information about users and their contexts. User engagement may be influenced by how content is organised and presented, symptom burden, environmental stressors and supports, and the desire for social connectivity.[8] These cognitive, emotional and social factors may be evaluated through surveys, interviews and app reviews,[5,8] though sometimes independently of usage data. Consequently, data sources may be disconnected and unable to inform each other.

It has been proposed that the field of m-health evaluation could be advanced if we understood the relationship between out-of-app and in-app engagement.[3] However, a barrier to this is the way in which in-app user engagement is currently conceptualised and measured.

(Re)Defining user engagement in the m-health space

Human–computer interaction (HCI) researchers define user engagement as capturing and maintaining the attention and interest of technology users,[9] and as the cognitive, emotional and temporal investment made by users.[10] Mental health interventions target not only behaviour but also emotional and cognitive processes,[11] and social connectivity may be fundamental for self-management.[8] A holistic definition of user engagement with m-health apps therefore goes beyond what people do to how these tools address needs for information/education, social support and personal agency.

HCI researchers have identified attributes of user engagement, including user attention, interest, motivation and control, and system usability and aesthetic appeal.[9,10] Focusing on these attributes offers a targeted approach to measurement. For example, we might use eye-tracking or heat maps to gauge in-app attention, or brief self-report instruments to capture users’ sense of control at defined points during an interaction. HCI researchers have also modelled user engagement as having natural ebbs and flows over the course of users’ interactions with digital tools. The process model of engagement suggests that users move through points of engagement, periods of sustained engagement, disengagement and re-engagement, and that some attributes are more salient at particular stages than others.[10] Process-based models reveal that engagement is not an ‘all or nothing’ phenomenon. Thus, it is essential to measure not only interaction outcomes (for example total session duration, total app downloads) but also users’ journeys through an app: the ups and downs in interactivity and the varying levels of emotional and cognitive involvement.
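To make this concrete, the sketch below shows one way journey-oriented measurement might be operationalised: segmenting a single user's stream of in-app events into episodes of engagement, with activity after a long silence marking a point of re-engagement. It is a minimal, hypothetical illustration; the event structure, feature names and the 30-minute inactivity threshold are our assumptions, not a validated operationalisation of the process model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical in-app event: a timestamp plus the feature the user touched.
@dataclass
class Event:
    timestamp: datetime
    feature: str  # e.g. 'mood_tracker', 'education', 'peer_forum'

def segment_journey(events, gap=timedelta(minutes=30)):
    """Split one user's event stream into engagement episodes.

    Any silence longer than `gap` is treated as disengagement, and the
    first event after a gap marks re-engagement. The threshold is an
    assumption that would need tuning per app and user group.
    """
    episodes, current = [], []
    for event in sorted(events, key=lambda e: e.timestamp):
        if current and event.timestamp - current[-1].timestamp > gap:
            episodes.append(current)
            current = []
        current.append(event)
    if current:
        episodes.append(current)
    return episodes

# Two episodes separated by a 12-hour silence (a re-engagement point).
events = [
    Event(datetime(2020, 8, 11, 9, 0), 'mood_tracker'),
    Event(datetime(2020, 8, 11, 9, 5), 'education'),
    Event(datetime(2020, 8, 11, 21, 30), 'peer_forum'),
]
for i, episode in enumerate(segment_journey(events), start=1):
    print(f"Episode {i}: {[e.feature for e in episode]}")
```

Even this simple segmentation surfaces information that a single ‘total session duration’ figure hides: how many episodes occurred, what the user did in each, and where they dropped off and returned.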

Qualifying the quantification of m-health app usage

Interpreting the ‘ups and downs’ of in-app user engagement involves looking at usage data differently, and connecting multiple data sources in meaningful ways. There are ‘unique cognitive, neurological or motor needs arising from mental illness’.[5] People with chronic health conditions have fluctuating needs, yet apps rarely take into account the diversity of individual lived experiences or of different user groups, for example young people or those newly diagnosed.[8] High usage is not necessarily indicative of positive clinical outcomes and may actually reflect worsening mental health. App usage may also exacerbate negative mood through poor-quality or excessive information and technical challenges,[8] or by increasing users’ awareness of distressing symptoms.[12]

Morton et al[13] found that people living with bipolar disorder experienced both negative and positive emotions toward self-monitoring; the practice made them feel that they were managing their condition effectively compared with others, but was also ‘an unpleasant reminder that they were living with bipolar disorder’. Such nuance is not revealed through usage data alone.

Condition-specific knowledge is critical, and participatory design approaches are essential for gathering insights from people with lived experience and healthcare providers. Participatory design draws on different methods (for example focus groups, ethnography) to co-create and co-evaluate prototypes with users throughout the design life cycle.[14] Such methods are needed to identify what goals an app should fulfil, how people want to use it and for what purposes, and how design features can reflect user preferences and goals. This knowledge can aid in the development of appropriate engagement indicators that will facilitate the interpretation of usage and other data sources.

Apps contain different types of content (for example educational articles, quizzes, symptom trackers, social connection tools). Looking at how frequently or for how long users interact with individual content pages or features may be less informative than grouping features according to function or the need they are intended to serve, such as education, symptom tracking or social support. It may be productive to distinguish content that people typically access once (for example a quiz) from content they access repeatedly (for example sleep or exercise logs), and to consider the sequence of content interactions in terms of scaffolding engagement. Rethinking the analysis of usage data in this way would allow for richer interpretations of why people use apps in naturalistic settings, such as for routine maintenance, affirmation or social support. It could also help tailor content according to recovery or illness stage to reduce cognitive load.[13]
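As a sketch of what function-level analysis could look like, the snippet below maps individual screens to the need they are intended to serve and aggregates raw hits accordingly. The feature names and groupings are invented for illustration; in practice the mapping should come from participatory design work with people with lived experience.

```python
from collections import Counter

# Hypothetical mapping from individual app screens to the need they serve.
FEATURE_GROUPS = {
    'article_sleep': 'education',
    'quiz_intro': 'education',
    'mood_log': 'symptom_tracking',
    'sleep_log': 'symptom_tracking',
    'peer_forum': 'social_support',
}

def usage_by_function(feature_hits):
    """Aggregate raw page/feature hits into function-level counts."""
    return Counter(FEATURE_GROUPS.get(hit, 'other') for hit in feature_hits)

print(usage_by_function(['mood_log', 'sleep_log', 'peer_forum', 'quiz_intro']))
# Counter({'symptom_tracking': 2, 'social_support': 1, 'education': 1})
```

Counting at the level of needs rather than screens also makes the once-versus-repeated distinction tractable: a single ‘education’ hit may be expected, whereas a single ‘symptom_tracking’ hit over a month may warrant follow-up.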

Algorithms facilitate the identification of engagement patterns based on interactions with apps over time, often characterised by duration (short versus long term) and frequency (low versus high). Such analyses reveal that different users have different use trajectories, but they do not explain why a pattern occurs or what it means for clinical outcomes. Diverse streams of data, including user self-reports, discussion forum transcripts, social network data and data from symptom severity, functioning or quality of life measures, can be used to make sense of usage data. Cluster analysis is another option, whereby usage patterns are examined in concert with clinical and sociodemographic variables or other data sources, such as in-app text messages between users and coaches, to identify reasons for app use or non-use.[15]
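A minimal sketch of such a cluster analysis is shown below, assuming scikit-learn is available. The per-user features (weekly logins, mean session minutes and a baseline symptom-severity score) and the values themselves are fabricated for illustration; a real analysis would use validated clinical measures, richer usage features and principled selection of the number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative (made-up) per-user features:
# [weekly logins, mean session minutes, baseline symptom-severity score]
X = np.array([
    [1, 2.0, 22],   # sporadic, brief use, high symptom burden
    [6, 10.5, 8],   # frequent, sustained use, low symptom burden
    [5, 3.0, 20],
    [0, 0.0, 15],
    [7, 12.0, 6],
    [2, 1.5, 25],
])

# Standardise so no single feature dominates the distance metric,
# then look for a small number of joint usage/clinical profiles.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # cluster membership per user
```

The clusters themselves explain nothing; their value lies in prompting qualitative follow-up, for example examining in-app messages from, or interviewing members of, a high-use, high-symptom cluster to understand what the pattern means.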

Users weigh the benefits of using m-health apps against factors such as usability, convenience, personal risk (for example to privacy) and cognitive or emotional effort,[13] which may not be accounted for in in-app assessments. Moreover, empirical studies utilising self-report data may not use validated self-report inventories, or may not use them systematically.[5] Incorporating different data sources must therefore balance the burden and risk of self-reporting with its benefits, emphasise replicability in the selection and use of self-report instruments, and ensure transparency in data collection and reporting.

Toward a more comprehensive view of in-app engagement

In-app engagement must ultimately support the cognitive, emotional and behavioural changes necessary for desired mental health outcomes, including symptom reduction, recovery and improvements in quality of life. For this to occur, use and non-use must be connected to broader out-of-app goals, and the value of negative emotions, behaviours and cognitions relative to overall self-management must be considered. For example, non-use may indicate improved mental health and a reduced need for digital interventions.

It is tempting to want a magic, uniform formula for measuring user engagement, and this has been the appeal of usage data.

We argue instead that user-engagement indices for m-health apps must be:

  • Corroborative, where different measures, including usage data, symptom severity assessment scales and subjective outcome assessments (for example quality of life), are used to determine what meaningful engagement with the app entails.

  • Outcome, rather than output, oriented. If m-health apps are meant to improve intervention effectiveness, then how they are used becomes more important than how often they are used.

  • Process based, where we expect to see ebbs and flows in usage. Rather than labelling app users as low or high engagers based on algorithmically classified (non-)use, we should adopt participatory design approaches (for example journey mapping) to appreciate how different users interact with different features of the app over time.

  • Expert driven, meaning that the expertise of people with lived experience and clinicians is included throughout the design process to identify salient needs (for example social support) and goals (for example establishing a routine, symptom management) and how these can be met with the app, as well as to inform aesthetic and content design choices.

User-engagement indices should be developed in parallel with the app itself, and draw upon condition-specific knowledge and multiple data sources. The COPE approach (Corroborative, Outcome oriented, Process based, Expert driven) necessitates collaboration among people with mental health conditions, healthcare providers and user-experience designers to develop m-health apps. Documenting each element of this framework would result in greater transparency about how design decisions were made, what is being measured and why, and how the resulting app fits into the broader mental health landscape.

Funding

This research is supported by a Canadian Institutes of Health Research (CIHR) Project Grant, ‘Bipolar Bridges: A Digital Health Innovation Targeting Quality of Life in Bipolar Disorder’.

Author contributions

Conceptualisation and original draft: H.L.O'B.; conceptualisation and review: E.E.M. and E.M.; critical revision of the manuscript for important intellectual content: all authors.

Declaration of interest

H.L.O'B., E.M., A.K. and S.J.B.: none. E.E.M. discloses grant funding from a private company in the 36 months prior to this publication.

ICMJE forms are in the supplementary material, available online at https://doi.org/10.1192/bjo.2020.72

References

1. Torous, J, Firth, J, Huckvale, K, Larsen, ME, Cosco, TD, Carney, R, et al. The emerging imperative for a consensus approach toward the rating and clinical recommendation of mental health apps. J Nerv Ment Dis 2018; 206: 662–6.
2. Fleming, T, Bavin, L, Lucassen, M, Stasiak, K, Hopkins, S, Merry, S. Beyond the trial: systematic review of real-world uptake and engagement with digital self-help interventions for depression, low mood, or anxiety. J Med Internet Res 2018; 20: e199.
3. Cole-Lewis, H, Ezeanochie, N, Turgiss, J. Understanding health behavior technology engagement: pathway to measuring digital behavior change interventions. JMIR Form Res 2019; 3: e14052.
4. Weisel, KK, Fuhrmann, LM, Berking, M, Baumeister, H, Cuijpers, P, Ebert, DD. Standalone smartphone apps for mental health: a systematic review and meta-analysis. NPJ Digit Med 2019; 2: 118.
5. Ng, MM, Firth, J, Minen, M, Torous, J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv 2019; 70: 538–44.
6. Fletcher, K, Foley, F, Murray, G. Web-based self-management programs for bipolar disorder: insights from the online, recovery-oriented bipolar individualised tool project. J Med Internet Res 2018; 20: e11160.
7. Torous, J, Lipschitz, J, Ng, M, Firth, J. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J Affect Disord 2019; 263: 413–9.
8. Nicholas, J, Fogarty, AS, Boydell, K, Christensen, H. The reviews are in: a qualitative content analysis of consumer perspectives on apps for bipolar disorder. J Med Internet Res 2017; 19: e105.
9. Jacques, RD. The Nature of Engagement and its Role in Hypermedia Evaluation and Design (Doctoral dissertation). South Bank University, 1996.
10. O'Brien, HL, Toms, EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Assoc Inf Sci Technol 2008; 59: 938–55.
11. Perski, O, Blandford, A, West, R, Michie, S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med 2016; 7: 254–67.
12. Allan, S, Mcleod, H, Bradstreet, S, Beedie, S, Moir, B, Gleeson, J, et al. Understanding implementation of a digital self-monitoring intervention for relapse prevention in psychosis: protocol for a mixed method process evaluation. JMIR Res Protoc 2019; 8: e15634.
13. Morton, E, Hole, R, Murray, G, Buzwell, S, Michalak, E. Experiences of a web-based quality of life self-monitoring tool for individuals with bipolar disorder: a qualitative exploration. JMIR Ment Health 2019; 6: e16121.
14. Spinuzzi, C. The methodology of participatory design. Tech Commun 2005; 52: 163–74.
15. Chen, AT, Wu, S, Tomasino, KN, Lattie, EG, Mohr, DC. A multi-faceted approach to characterizing user behavior and experience in a digital mental health intervention. J Biomed Inform 2019; 94: e103187.