
Reliability of electronic patient reported outcomes vs. clinical assessment

Published online by Cambridge University Press:  19 July 2023

S. Krasteva*
Affiliation:
Department of Psychiatry and Medical Psychology, Medical University of Varna, Varna, Bulgaria
Z. Apostolov
Affiliation:
Department of Psychiatry and Medical Psychology, Medical University of Varna, Varna, Bulgaria
H. Kozhuharov
Affiliation:
Department of Psychiatry and Medical Psychology, Medical University of Varna, Varna, Bulgaria
*
*Corresponding author.

Abstract

Introduction

Inter-scale and inter-rater reliability are well-studied factors in maintaining data consistency in clinical research. The use of patient-reported outcomes poses an additional risk to data integrity, as some studies show that patients report their symptoms differently in a direct clinician-led interview than in self-administered questionnaires. Additionally, as technology advances and digital endpoints in CNS clinical trials become a reality, we need to further evaluate whether the digital means of self-reporting (e.g., mobile app questionnaires) could itself contribute to data inconsistency.

Objectives

To assess the reliability between clinician-assisted evaluation and electronic patient-reported outcomes of depressive and anxiety symptoms.

Methods

Patients not previously diagnosed with depression or anxiety disorders were asked to complete the PHQ-9 and/or GAD-7, both verbally administered by a physician. Within 24 hours, they were asked to complete a digital form of the same questionnaires.

Results

Analysis of 40 completed double assessments showed no correlation between clinician-led evaluation and electronic patient-reported outcomes for the presence and severity of depressive symptoms (Spearman's rho = +0.191, p = 0.686), and poor correlation for anxiety symptoms (Spearman's rho = +0.466, p = 0.080).

Conclusions

Many factors interfere with data consistency in clinical research; thus the methods and means of evaluation need to be taken into consideration. The reliability of electronic patient-reported outcomes needs to be further assessed, preferably cross-checked against other validated methods of assessment.

Disclosure of Interest

None Declared

Type
Abstract
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the European Psychiatric Association