
50 Remote Assessment has Minimal Effect on Test-Retest Reliability Among Older Adults with Essential Tremor

Published online by Cambridge University Press: 21 December 2023

Sandra Rizer*
Affiliation:
Columbia University Irving Medical Center, New York, NY, USA
Silvia Chapman
Affiliation:
Columbia University Irving Medical Center, New York, NY, USA
Jillian Joyce
Affiliation:
Columbia University Irving Medical Center, New York, NY, USA
Nikki Delgado
Affiliation:
University of Texas Southwestern Medical Center, Dallas, TX, USA
Margaret McGurn
Affiliation:
University of Texas Southwestern Medical Center, Dallas, TX, USA
Allison Powell
Affiliation:
University of Texas Southwestern Medical Center, Dallas, TX, USA
Daniella Iglesias Hernandez
Affiliation:
University of Texas Southwestern Medical Center, Dallas, TX, USA
Yian Gu
Affiliation:
Columbia University Irving Medical Center, New York, NY, USA
Elan D. Louis
Affiliation:
University of Texas Southwestern Medical Center, Dallas, TX, USA
Stephanie Cosentino
Affiliation:
Columbia University Irving Medical Center, New York, NY, USA
*Correspondence: Sandra Rizer, Columbia University Irving Medical Center, sz2667@cumc.columbia.edu

Abstract

Objective:

The COVID-19 pandemic increased the use of remote assessment, allowing clinicians and researchers to continue valuable work while adhering to quarantine guidelines. As guidelines have relaxed, researchers have returned to in-person assessment, and information is needed regarding the effect of remote assessment on test-retest reliability. COGNET, a longitudinal study of cognition in participants with essential tremor, transitioned from in-person to remote assessments during the pandemic and has now returned to in-person assessment. The current study investigates the extent to which remote assessment affected test-retest reliability across the range of neuropsychological tests administered in COGNET.

Participants and Methods:

Participants included 27 older adults enrolled in COGNET (mean age = 75.0 years (SD = 9.1), mean education = 16.2 years (SD = 2.6), 67% female, 100% white). Memory tests included the California Verbal Learning Test-II, the Logical Memory subtest of the Wechsler Memory Scale-Revised, and Verbal Paired Associates. Executive function tests included Digit Span Backwards and the Delis-Kaplan Executive Function System subtests of Verbal Fluency, Sorting, and Color-Word. Attention tests included the Oral Symbol Digit Modalities Test and Digit Span Forward. Language was assessed with the Boston Naming Test. Intraclass correlation coefficients (ICCs) were calculated to examine test-retest reliability for In-Person to In-Person visit pairs (P-P) and for combination visit pairs (In-Person to Remote, P-R; Remote to In-Person, R-P). Following Koo & Li (2016), ICCs were interpreted as: >.90 excellent, .75-.90 good, .50-.74 moderate, and <.50 poor reliability. The Feldt approach was used to compare ICCs from P-P visits against ICCs calculated for combination visits (P-R or R-P), with the test statistic compared to an F distribution.
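
The following is a minimal sketch, not the authors' code, of the analysis described above: a two-way ICC for each test across a pair of visits, the Koo & Li (2016) benchmarks, and a Feldt-style F ratio comparing the P-P ICC against a combination-visit ICC. Column names, the choice of ICC model, and the degrees of freedom used in the Feldt comparison are assumptions for illustration only.

```python
# Sketch of the test-retest reliability analysis (assumed column names and ICC model).
import pandas as pd
import pingouin as pg
from scipy.stats import f


def test_retest_icc(long_df: pd.DataFrame) -> float:
    """Single-rating, absolute-agreement, two-way random-effects ICC.

    `long_df` holds one row per participant x visit with columns
    'subject', 'visit' (e.g., 'time1' / 'time2'), and 'score'.
    """
    icc_table = pg.intraclass_corr(
        data=long_df, targets="subject", raters="visit", ratings="score"
    )
    # 'ICC2' = single-rating, absolute-agreement, two-way random effects
    return float(icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].iloc[0])


def interpret_icc(icc: float) -> str:
    """Koo & Li (2016) benchmarks used in the abstract."""
    if icc > 0.90:
        return "excellent"
    if icc >= 0.75:
        return "good"
    if icc >= 0.50:
        return "moderate"
    return "poor"


def feldt_compare(icc_pp: float, icc_combo: float, n_pp: int, n_combo: int):
    """Feldt-style comparison of two reliability coefficients.

    W = (1 - ICC_combo) / (1 - ICC_pp) is referred to an F distribution;
    df = n - 1 in each group is assumed here, and W > 1 when the
    combination-visit ICC is lower than the P-P ICC.
    """
    w = (1.0 - icc_combo) / (1.0 - icc_pp)
    p_one_sided = f.sf(w, n_combo - 1, n_pp - 1)
    return w, p_one_sided
```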

Results:

ICCs for In-Person to In-Person (P-P) assessment ranged from .51 to .89. Memory test ICCs ranged from moderate to good (.51 to .80). Executive function test ICCs ranged from moderate to good (.55 to .89). The attention domain had moderate ICCs (.67 to .68). The language ICC was moderate (.70). ICCs for In-Person to Remote (P-R) assessment ranged from .42 to .89. Memory test ICCs ranged from moderate to good (.59 to .83). Executive function ICCs ranged from poor to good (.42 to .89). Attention ICCs were moderate to good (.55 to .79). The language ICC was moderate (.72). ICCs for Remote to In-Person (R-P) assessment ranged from .48 to .86. Memory ICCs ranged from moderate to good (.59 to .86). Executive function ICCs ranged from poor to good (.48 to .83). Attention ICCs were moderate to good (.56 to .79). The language ICC was good (.78). Digit Span Backwards was the only test for which an ICC from a combination visit pairing was significantly lower than the P-P ICC.

Conclusions:

Test-retest reliability was moderate or better for all P-P assessments, consistent with the known psychometrics of these tests. Only one test of executive function showed lower reliability when remote assessment was introduced. Broadly, the current results suggest that remote administration of neuropsychological tests can serve as a reliable substitute for in-person assessment for many measures, although caution is warranted when interpreting change in Digit Span Backwards across in-person and remote assessments.

Type
Poster Session 08: Assessment | Psychometrics | Noncredible Presentations | Forensic
Copyright
Copyright © INS. Published by Cambridge University Press, 2023