
Using gamification to enhance clinical trial start-up activities

Published online by Cambridge University Press:  19 May 2022

Karen Lane*
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Ryan Majkowski
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Joshua Gruber
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Daniel Amirault
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Shannon Hillery
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Cortney Wieber
Affiliation:
Tufts Medical Center, Boston, MA, USA
Dixie D Thompson
Affiliation:
University of Utah, Salt Lake City, UT, USA
Jacqueline Huvane
Affiliation:
Duke Clinical Research Institute, Durham, NC, USA
Jordan Bridges
Affiliation:
University of Utah, Salt Lake City, UT, USA
E. Paul Ryu
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Lindsay M. Eyzaguirre
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Marianne Gildea
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Richard E. Thompson
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Daniel E. Ford
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
Daniel Hanley
Affiliation:
Johns Hopkins University School of Medicine, Baltimore, MD, USA
*
Address for correspondence: K. Lane, MD, Department of Neurology, Deputy Director of Research, Division of Brain Injury Outcomes (BIOS), Johns Hopkins-Tufts Trial Innovation Center (JHU-Tufts TIC), Johns Hopkins School of Medicine, 750 E. Pratt St., 16th floor, Baltimore, MD 21202, USA. Email: klane@jhmi.edu

Abstract

Background:

The Trial Innovation Network (TIN) is a collaborative initiative within the National Center for Advancing Translational Science (NCATS) Clinical and Translational Science Awards (CTSA) Program. To improve and innovate the conduct of clinical trials, it is exploring the uses of gamification to better engage the trial workforce and improve the efficiencies of trial activities. The gamification structures described in this article are part of a TIN website gamification toolkit, available online to the clinical trial scientific community.

Methods:

The game designers used existing electronic trial platforms to gamify the tasks required to meet trial start-up timelines to create friendly competitions. Key indicators and familiar metrics were mapped to scoreboards. Webinars were organized to share and applaud trial and game performance.

Results:

Game scores were significantly associated with an increase in achieving start-up milestones in activation, institutional review board (IRB) submission, and IRB approval times, indicating the probability of completing site activation faster by using games. Overall game enjoyment and feelings that the game did not apply too much pressure appeared to be an important moderator of performance in one trial but had little effect on performance in a second.

Conclusion:

This retrospective examination of available data from gaming experiences may be a first-of-its-kind use in clinical trials. There are signals that gaming may accelerate performance and increase enjoyment during the start-up phase of a trial. Isolating the effect of gamification on trial outcomes will depend on a larger sampling from future trials, using well-defined, hypothesis-driven statistical analysis plans.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Background

Slow site start-up has been a major problem in National Institutes of Health (NIH) clinical trials [1,Reference Hudson, Lauer and Collins2]. Overcoming the barriers to site activation has become an improvement priority for investigators during the early trial planning stages. Recent benchmarking exercises and metrics are providing early information on specific factors that delay activation at clinical sites. However, it is still too early to use site-level start-up data to determine the most reasonable activation timetable and study start-up design.

Compared to performing trial interventions and testing how new approaches work on diseases, trial activation tasks are repetitive and can be uninteresting. Start-up teams must depend on institutional and departmental personnel for approvals and cope with institutional delays and barriers over which they have little control. Traditionally, site teams receive a bundle of trial materials to process and are left on their own to figure out what to do first, in what order, and with whom. This results in wasted time and effort, missed deadlines, a sense of isolation, and dissatisfaction. Given that most trial teams have more than one trial in motion at any given time, the trial in its start-up phase is often placed on the back burner in favor of more interesting or shifting priorities elsewhere: participant care comes before paperwork.

Start-up involves the usually thankless responsibilities of gathering regulatory approvals, documenting personnel paperwork, and managing others to meet deadlines that are even farther from their minds. Even though the trial science ahead will be innovative and complex, team commitment and engagement can be slow to take form, given the less enjoyable activities of rounding up and filing documents. And through it all, site teams are not engaged with other sites and just beginning to engage with coordinating center staff; they are unaware of how they compare to other sites – whether they are ahead or behind in readiness or if other sites are already enrolling.

There are many opportunities to reverse start-up inefficiencies and delays. Following lean management principles, ordering tasks using a specific flow pathway starting with the tasks that take the longest completion times can help sites do the work faster by working smarter to eliminate downtime and wasted time. And, continuous and regular site engagement can revive teams who might be losing momentum and missing deadlines. Gamifying start-up activities can help with both smart flow and momentum: games can help site teams identify the right pathway and games can engage and motivate site teams by connecting site personnel to other site teams and the coordinating center staff through competition and fun.

Gamification is the application of typical elements of game playing (e.g., point scoring, competition with others, and rules of play) to other areas of activity like the workplace, or in this case, clinical trials, to produce desired effects [Reference Deterding, Dixon, Khaled and Nacke3]. It is generally achieved in two ways: the modification of existing activities into a more game-like setting and the addition of structural elements (such as digital badges and leader boards) to encourage a player/user to obtain a goal [Reference Hamari4].

Gamification has gained popularity as a subject for academic inquiry [Reference Hamari, Koivisto and Sarsa5], and successful business models have utilized gamification within the workplace [Reference Hamari4]. Although the concept of gamification is becoming more widespread, gamification programs and the gamification of clinical trial performance remain limited. As of this writing, there are no published papers on gamifying clinical trial responsibilities. A few gamification programs to improve trial performance are known. University of Utah researchers are building a game in collaboration with the university's Entertainment Arts and Engineering Program to enhance accrual and retention rates. By transforming a site screening log into an e-card game format that recognizes recruitment achievements, this gaming-based approach uses embedded game accolades to incentivize research team members to report accrual metrics. Another game used a World Cup theme: the trial had more than 80 centers around the globe, and simulating an international sport was a timely match. The goal of this game was to improve overall trial performance, awarding points to boost screening and enrollment ratios and praise for timely follow-up visits and data query responses. The points and weighting system were mapped to the trial's risk management plan, focusing on fewer study participants lost to follow-up, improved data integrity, and protocol and outcome compliance.

Gaming platforms can be useful for medium-sized, multicenter Phase III clinical trials. Knowing that trial start-up cycles progress slower than expected in most cases and that most sites will be unproductive without engagement and motivation [Reference Goodlett, Hung, Feriozzi, Lu, Bekelman and Mullins6], one game was designed to track familiar activation metrics and more frequently engage study coordinators, investigators, and other study staff responsible for completing the trial start-up cycles in two different trials. Although the literature does not yet reflect the potential importance of this technique, there are some signals that engagement with gaming may accelerate start-up cycle time and increase enjoyment and satisfaction, even when the work is repetitive, tangibly unrewarding, and done in isolation.

Methods

The Johns Hopkins coordinating center developed a gamification platform to implement start-up activities for two trials: TRaditional versus Early Aggressive Therapy for Multiple Sclerosis (TREAT-MS) and VItamin C, Thiamine and Steroids in Sepsis (VICTAS). Available data on site team satisfaction and speed of site activation were evaluated.

Building Games into Existing Platforms

The coordinating center (CC) conducted monthly webinars (Go-To-Training®) and used an electronic data collection system (Electronic Data Capture [EDC]; VISION® Prelude LLC, Austin, TX) for trial data; a global electronic management system (GEMS©; VISION® Prelude LLC, Austin, TX) for start-up cycle tracking; and an electronic trial management filing system (eTMF; VISION® Prelude LLC, Austin, TX) for document storage. The CC employed centralized site navigators to guide sites through start-up activities that were standardized and staged into sequenced, compartmentalized, one-task-at-a-time objectives to reach firm deadlines. The program was designed to shorten the highest-risk delays: single IRB (sIRB) approval, contract execution, authority delegation, and training. With these platforms, the coordinating center put gamification to work by turning the collective efforts required of site teams during clinical trial start-up into a friendly competition. The intent was to introduce enjoyment and motivation into tasks with hard-to-meet start-up timelines and deadlines, placing each site team in a race against the trial clock.

Key start-up indicators (metrics) were collected by the coordinating center and mapped to scoreboards. Core metrics included completion of IRB approvals and contracts, regulatory document completion, and completion of training requirements. Game scoreboards were built into an electronic trial management platform functioning as a start-up tracker (e.g., REDCap) and exported to a dedicated trial website on a monthly basis. Both electronic platforms and the website were available only to the coordinating center and site teams. Scoreboards were accessible to site teams so they could check their site scores in real time and the scores of all sites on the website. Webinar broadcasts to participating sites were organized by the coordinating center to provide a forum where trial and game information was shared and applauded. These broadcasts were central to keeping sites engaged in the games, by showcasing monthly site rankings and providing recognition for accomplishments as a regular feature. Additional coordinating center effort was required to create graphics, present standings, and host awards at monthly webinars or through other means of communication.

Elements of game playing (scoring, competition, and rules) were applied by simulating a Mount Everest climb (Fig. 1, left panel). To gamify the full process, rather than simply scoring which team crossed the finish line first, individual start-up tasks were assigned point values. Once a task was completed, points were awarded and posted on a leaderboard in the eTMF; current standings in reaching higher Mt. Everest base camps (i.e., matching meaningful, pre-set monthly start-up milestones) and how close sites were to the summit (activation) were shared during monthly webinars. Points were allocated to correspond to specific milestones and mapped to the tracking system. To keep institutional-office delays from dragging down site performance, the Mt. Everest scoring system was weighted heavily toward PI and coordinator tasks; steps outside their direct control, such as IRB approval (the trials utilized a central single IRB model) and sponsor finalization of the contract, were left out of the scoring. The single component dependent on institutional office speed was the “Partial Execution” of the subaward (Table 1). Direct team responsibilities were weighted more heavily than institutional tasks and were worth more points to win or lose. Likewise, bonuses or penalties for early/late completion of any given step were capped at 14 days, to prevent any single metric from having an outsized effect on the overall score. Site teams received bonus points for time-sensitive tasks, and points could be subtracted for work submitted late. Activation itself was assigned a small point value as a final bonus to reward a generally well-performing site.
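As a rough illustration, the capped early/late bonus rule described above can be sketched as follows. The task names and point values here are hypothetical placeholders, not the trial's actual ruleset (the real metrics and points appear in Table 1):

```python
from datetime import date

# Hypothetical point values; the actual ruleset is in Table 1.
# Direct team tasks are worth more than the one institution-dependent step.
BASE_POINTS = {
    "irb_submission": 10,     # PI/coordinator task
    "training_complete": 8,   # PI/coordinator task
    "partial_execution": 4,   # only institution-dependent step scored
}
BONUS_PER_DAY = 1  # bonus/penalty per day early/late (assumed)
CAP_DAYS = 14      # early/late credit capped at 14 days, per the rule above

def score_task(task: str, due: date, done: date) -> int:
    """Base points plus a capped bonus (early) or penalty (late)."""
    days_early = (due - done).days            # negative when late
    days_early = max(-CAP_DAYS, min(CAP_DAYS, days_early))
    return BASE_POINTS[task] + BONUS_PER_DAY * days_early

# A task finished 20 days early earns only the 14-day cap: 10 + 14 = 24
print(score_task("irb_submission", date(2021, 6, 30), date(2021, 6, 10)))
# A task finished 5 days late loses 5 points: 8 - 5 = 3
print(score_task("training_complete", date(2021, 6, 30), date(2021, 7, 5)))
```

The cap is what prevents one very early (or very late) task from dominating a site's total score.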

Fig. 1. Clinical trial start-up tasks (left panel) were gamified into a Mt. Everest Climb (right panel) to enhance performance during a site activation phase. Monthly task completion was matched to reaching Mt. Everest base camps and the summit.

Table 1. Metrics and point values that made up the core ruleset for the Mt. Everest game

* The Mt. Everest game was conducted in the context of an Accelerated Start-up Program, which features weekly check-in meetings with the coordinating center and monthly educational webinars covering important start-up topics.

Study Aims and Statistical Methods

Aims

The primary aim was to determine whether overall game performance (e.g., Mt. Everest score) was related to improved trial start-up metrics, such as time to site activation, IRB submission and approval, training completion, and contract execution. A secondary analysis examined whether perceptions of the game were correlated with trial start-up performance.

Satisfaction survey

Once all sites were activated to enrollment-ready status, site teams were surveyed about their satisfaction with the Mt. Everest game and responses were collated. The survey questions were developed by the game designers, and survey data were collected within a confidential Qualtrics survey (Table 2). Survey respondents could be any member of the site start-up team with instructions that only one survey per site should be submitted. No identifiable private information was collected; respondents to the surveys were asked to provide only the name of the institution represented.

Table 2. Voluntary, confidential site survey distributed following the Mt. Everest start-up competition. Highlighted rows are featured in the analysis


Analyses required linking metrics to unique site responses. Responses in the survey were compared to institutional metrics, to look for cross-correlations. Responses to two survey questions, “I enjoyed participating in Mount Everest” (Yes or No) and “I felt too much pressure when the webinar displayed the rankings” (Yes or No), were compared to game scores and component metrics for the respective games.

Statistical methods

All analyses were conducted post hoc using data already collected from each trial. Start-up metrics were maintained in a custom clinical trial management system, the Global Electronic Management System (GEMS), built on the Prelude Dynamics VISION® EDC platform. Because TREAT-MS and VICTAS followed different start-up processes and timelines, the range of Everest scores differed greatly between the two trials. This study therefore compared the relative performance of sites by normalizing scores within each trial as Everest score z-values. To understand the relationship between z-value-adjusted Everest scores and time to achieving the start-up milestones across both the TREAT-MS and VICTAS trials, univariate Cox proportional hazards models were utilized, and hazard ratios with 95% confidence intervals were calculated from the regression models. Performance differences were also assessed by site survey responses for each game. The Wilcoxon rank sum test was used to compare mean performance (Everest score) based on the survey questions “I enjoyed participating in [game name]” and “I felt too much pressure when the webinar displayed the rankings.” For this analysis, responses were included only if a site was identified in the survey response. For sites with multiple responses, the average site response was rounded to an overall “yes” or “no” (e.g., 2 “yes” and 1 “no” was treated as a “yes” site response). There were three instances of a 2:1 difference of opinion in the TREAT-MS dataset (two for Q1 and one for Q2); all multi-response instances in the VICTAS dataset were unanimous, and no site produced an evenly split response (e.g., 1 yes, 1 no). Statistical significance was determined at the 0.05 alpha level. All data analyses and table and figure generation were conducted in R Statistical Software version 4.1.3 and RStudio version 2022.02.1 build 461.
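Two of the preprocessing steps above (within-trial z-normalization of Everest scores, and collapsing multiple survey responses per site to a single yes/no) can be sketched with toy numbers. The actual analyses were run in R; this Python sketch, its example scores, and its tie-break for an evenly split site (a case that did not occur in the data) are illustrative assumptions:

```python
from statistics import mean, stdev

def z_values(scores):
    """Within-trial z-scores, so TREAT-MS and VICTAS become comparable
    despite their very different raw Everest score ranges."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def site_response(answers):
    """Round the average of per-respondent yes/no answers to one site answer.
    The >= 0.5 tie-break is an assumption; no site actually split evenly."""
    yes_fraction = sum(a == "yes" for a in answers) / len(answers)
    return "yes" if yes_fraction >= 0.5 else "no"

# Toy Everest scores for one trial (not real data):
print([round(z, 2) for z in z_values([120, 95, 140, 88])])
# A 2:1 split rounds to the majority answer, as described above:
print(site_response(["yes", "yes", "no"]))  # "yes"
```

After normalization, a one-unit change in the z-value corresponds to one within-trial standard deviation of Everest score, which is the increment the hazard ratios in the Results are expressed in.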

Results

A higher z-value-adjusted Mt. Everest score was significantly associated with an increased probability of achieving start-up milestones more quickly in three of the five metrics analyzed (activation time, IRB submission time, and IRB approval time) (Fig. 2). Although activation time was the least-weighted component of the total Mt. Everest score, it was the most strongly associated metric: the probability of completing site activation faster increased by 61% for every 1-standard-deviation (32-point) increase in Mt. Everest score.
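Because a Cox model is log-linear in its covariate, the per-standard-deviation hazard ratio reported above can be rescaled to other score increments. The arithmetic below uses only the figures in this section (HR 1.61 per 32 points); it is standard Cox-model algebra for illustration, not an additional trial finding:

```python
# Reported association: HR 1.61 per 1 SD of Mt. Everest score (32 points).
hr_per_sd = 1.61
points_per_sd = 32

# Rescaling a log-linear hazard ratio to a different increment:
hr_per_point = hr_per_sd ** (1 / points_per_sd)        # ~1.5% per point
hr_per_10_points = hr_per_sd ** (10 / points_per_sd)   # ~16% per 10 points

print(round(hr_per_point, 3), round(hr_per_10_points, 3))
```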

Fig. 2. Hazard ratios (HR) for the association between z-value-adjusted Mt. Everest Scores and time to various gaming metrics across both TREAT-MS and VICTAS trials. A higher HR indicates a larger percent increase in the probability of achieving a shorter IRB submission time, IRB approval time, and time to activation.

Satisfaction Survey Results: Enjoyment, Pressure and Performance

Forty percent (18 of 45) of the TREAT-MS sites and 68% (26 of 38) of the VICTAS sites responded to the satisfaction survey with an identifiable site name. Game participation enjoyment was mixed. In the TREAT-MS trial, 9 of 17 (53%) sites enjoyed participating in Mt. Everest and, in the VICTAS trial, 13 of 25 (52%) sites enjoyed participating in Mt. Everest. In TREAT-MS, 10 of 14 (71%) sites reported not feeling too much pressure when rankings were displayed publicly, and 18 of 23 (78%) sites in VICTAS felt the same.

In the TREAT-MS trial, overall game enjoyment appeared to be an important moderator of performance across most gaming metrics analyzed (Figs. 3 and 4). In TREAT-MS, those who enjoyed playing the game, on average, had higher Mt. Everest scores (Fig. 3) and accomplished their start-up tasks more quickly (activation, partial contract execution, IRB submission, IRB approval, and training) (Fig. 4). The association between enjoyment of the game and mean activation time and training completion time was statistically significant; the other metrics trended in the same direction and, importantly, showed raw mean differences of real practical relevance to a trial. For example, those who enjoyed the game completed contract execution 18 days faster, IRB submission 64 days faster, and IRB approval 66 days faster than those who did not enjoy the game. In the VICTAS trial, enjoyment had little effect on performance.

Fig. 3. Mean differences in Mt. Everest Score between those who enjoyed or did not enjoy the Mt. Everest game. Enjoyment appeared to be an important moderator of performance in TREAT-MS, but not in VICTAS. Neither result was significant.

Fig. 4. Mean differences in various start-up timeline metrics between those who enjoyed or did not enjoy the Mt. Everest game by study. In TREAT-MS, those who enjoyed playing the game performed better, on average, in all metrics; shorter time to activation and training completion were significant. Differences in mean completion times in VICTAS were small.

Pressure also seemed to be an influential marker of performance in TREAT-MS but less so in VICTAS (Figs. 5 and 6). In TREAT-MS, those who felt the game did not apply too much pressure were much quicker to complete all their start-up objectives than those who felt too much pressure from the game (Fig. 6). However, this trend did not hold in the VICTAS trial. Mean Mt. Everest scores between pressured and not pressured were roughly similar, 38 and 31 respectively (see Fig. 5), and there was no difference in mean IRB approval time and training completion time between groups (see Fig. 6).

Fig. 5. Mean differences in Mt. Everest Score between those who did or did not feel too much pressure playing the Mt. Everest game. Lack of feeling pressure was significantly associated with better Mt. Everest performance in TREAT-MS but not in VICTAS.

Fig. 6. Mean differences in various start-up timeline metrics between those who did or did not feel too much pressure playing the Mt. Everest game by study. In TREAT-MS, those not feeling pressured performed better across all metrics; no result was significant. Differences in mean completion times in VICTAS were small.

Discussion

Games appear in many facets of our lives as a source of entertainment, relationship-building, and motivation. In today’s digital age, games can evolve and expand, bringing typical game elements and mechanics to ordinarily non-game contexts to motivate and engage people.

The Mt. Everest gamification experiences suggest that a game based on the ordinary metrics of trial start-up can be designed so that standings praise and encourage site teams while tracking well with actual performance. Across both trials, scores were reflective of overall site performance in each of the metrics analyzed. This, in combination with improvements in cycle times for those who enjoyed the game, suggests gamification can be a useful tool in future trials.

This is encouraging for trial sponsors and stakeholders, game developers, and those who are thinking about connecting clinical trial tasks to a points or scoreboard system. Continued development of games, using electronic trial management tools that provide clinical trial metrics and benchmark data, will support better management of clinical trials and provide sets of connections for gamification [Reference Lamberti, Brothers, Manak and Getz7]. Within a trial, there are many possible elements that can be tracked and gamified without undue focus on the finish line. Game leaderboards can be incorporated into EDC systems with multiple metrics to recognize and applaud well-performing site teams across a variety of metrics rather than one metric, such as activation time or the number of enrollments. Depending on the complexity of the trial protocol and the capabilities of trial tools, such as the EDC, gaining points across many small achievements in trial cycles can reflect the overall performance in critical performance areas yet allow site teams to compete for the lead in one area or another.

Gamification provides the visual and communal opportunity to follow productivity; it allows a site team to track progress towards its own goals while comparing itself with other site teams and the overall goals of the trial. It also provides the opportunity to publicly praise performance and share desired behaviors with those lagging behind. If gamification can improve cycle times, it could stand to reason that using games would be most effective if coupled with ways to convert those who do not enjoy gamifying the workplace into those who do. Although these games are passive and participation is voluntary, if a game standing has a reverse effect on enjoyment and motivation, site teams could be offered opt-in or opt-out of leaderboard postings that release site standings to public view. This would naturally separate site teams into groups that want to view the game standings from those who do not, and it could allow exploration of the question: “Does willingness to play a game in a clinical trial affect performance in that trial?”

Gamification has been shown to be highly effective in the workplace [Reference Hamari4,8]. When creating a game in a context where games are not usually played (such as among serious-minded research professionals), it may be crucial to tie game rewards to activities that are meaningful to research productivity and trial progress, so the game makes a difference in the science as well as in the standings. A benefit of gamifying clinical trial activities, particularly when site teams feel disengaged and unmotivated, is that gamifying trial metrics and goals, in and of itself, encourages engagement. Additionally, focusing on game standings, if a competitive spirit is sparked, can be motivating and lead to increases in quality and productivity and a sense of satisfaction with meeting the goals of a trial [Reference Hamari4]. While turning tasks into games does not alleviate burdens, it turns burdens into opportunities for recognition through friendly competition.

Limitations

Since this is the first study of the use of gamification for study start-up and execution, a larger number of trial experiences with prospectively gathered data will be needed to better define the benefits or unwanted effects of gamifying site activation. For future tests of games, it will be important to design measures that prospectively capture and correlate gaming in the clinical trial workplace with better satisfaction doing the work at hand, improved performance, and enjoyment of the game itself. For now, game utilization should be approached with equipoise.

The analysis regarding enjoyment was a retrospective inspection of limited available survey data. These surveys were not designed with the intent of being part of a quantitative analysis and were meant originally for internal quality improvement. Furthermore, the yes/no nature of the survey questions limited the predictive value of the analysis; future surveys will use a continuous scale from 0 to 10. Given the limited nature of the surveys, it is unclear in which direction the causative arrow points: did enjoyment cause good performance, or did good performance cause enjoyment? If the former, it might suggest that players who enjoy competition are more motivated to push tasks to completion for the sake of the game, in addition to, or even instead of, usual work-ethic motivations. In that case, future game designers should focus on making the experience of the game itself more enjoyable, improving engagement with the theme, and ensuring that rules are intuitive, challenging, and rewarding to the players.

It is unclear if site experience in executing trial activities plays a confounding role in overall game performance and/or enjoyment. The coordinating center built the Mt. Everest game within a protocolized start-up sequence that was standard across sites in the trial. The standardization limited the variability among site resources by assigning a site navigator to persistently engage and assist site teams with start-up tasks; yet, we recognize differences cannot be entirely eliminated. Skilled and established research teams with ample resources to devote to the trial may perform familiar tasks at a high level with or without a game (and vice versa for inexperienced sites). This relationship between site experience and level of further improvement after gamifying tasks is unknown and requires further exploration.

There has not been a cost analysis of transforming familiar trial metrics into scoring displays or creating graphics for online sharing in the form of a game. The Mt. Everest game was designed and launched by coordinating center staff with intermediate computer skills. An automated EDC makes game management easier, but external or professional vendors are not required. Once a game theme and scoring system are developed for a first trial, they can be turnkey for subsequent use. No effort is required of the site teams, except to visit the display sites and attend regularly scheduled trainings and meetings. Site teams play the game simply by performing their trial responsibilities.

Next Steps

This retrospective examination of available data from two gaming experiences may be a first-of-its-kind use in clinical trials; no other publications on engaging trial personnel in this way could be found. Its novelty has piqued the interest of a larger federation of coordinating centers within the Trial Innovation Network to explore whether gamification really works and, if so, how to use it to improve trial performance. The mission of the Trial Innovation Network gamification working group is to incorporate gaming innovations into clinical trial operations across networks to enhance site engagement, using competition as team building to make task completion more enjoyable and ultimately improve performance.

In this very new endeavor, more data and surveys on clinical trial gamification will be needed. As one next step, a Gamification Toolkit on how to gamify clinical trial activities is posted to a Trial Innovation Network website (https://trialinnovationnetwork.org/) Toolbox Page to encourage more trialists to consider the use of games, thereby stimulating more data generation and published results. Isolating the effect of gamification on trial outcomes and improved satisfaction among site teams will depend on a larger sampling of prospective data using well-defined, hypothesis-driven statistical analysis plans.

Acknowledgements

We would like to show our gratitude to the TREAT-MS and VICTAS leadership and trial teams for their participation and engagement in the gamification of their trial start-up cycles. We thank Megan Clark (Johns Hopkins School of Medicine) for assistance in references of this manuscript. We acknowledge the TIN and our colleagues on the TIC/RIC Network Executive Committee for their support.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work is supported by the National Institutes of Health Center for Advancing Translational Sciences (NCATS) and the National Institute on Aging (NIA), in support of the Trial Innovation Network under grant numbers U24TR001609 (Johns Hopkins/Tufts Universities), U24TR001608 (Duke/Vanderbilt Universities), and U24TR001597 (University of Utah). The TREAT-MS trial is supported by the Patient-Centered Outcomes Research Institute (PCORI), grant number MS-1610-37,115, and the VICTAS trial was supported by the Marcus Foundation. The survey and data analysis methods performed by the author team were determined to qualify as exempt research by the Johns Hopkins Medicine IRB (IRB00302606) under DHHS regulations 45 CFR 46.104(d)(4)(ii).

Disclosures

The authors have no conflicts of interest to declare.

References

National Center for Advancing Translational Sciences (NCATS). Council Working Group on the IOM report: the CTSA Program at NIH [Internet], 2017 [cited September 30, 2018]. (https://ncats.nih.gov/advisory/council/subcomm/ctsaiom).
Hudson, KL, Lauer, MS, Collins, FS. Toward a new era of trust and transparency in clinical trials. JAMA 2016; 316(13): 1353–1354. doi: 10.1001/jama.2016.14668.
Deterding, S, Dixon, D, Khaled, R, Nacke, L. From game design elements to gamefulness: defining “gamification”. In: Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. MindTrek ’11. Association for Computing Machinery, 2011, pp. 9–15. doi: 10.1145/2181037.2181040.
Hamari, J. Transforming homo economicus into homo ludens: a field experiment on gamification in a utilitarian peer-to-peer trading service. Electronic Commerce Research and Applications 2013; 12(4): 236–245. doi: 10.1016/j.elerap.2013.01.004.
Hamari, J, Koivisto, J, Sarsa, H. Does gamification work? — a literature review of empirical studies on gamification. In: 2014 47th Hawaii International Conference on System Sciences. IEEE, 2014, pp. 3025–3034. doi: 10.1109/HICSS.2014.377.
Goodlett, D, Hung, A, Feriozzi, A, Lu, H, Bekelman, JE, Mullins, CD. Site engagement for multi-site clinical trials. Contemporary Clinical Trials Communications 2020; 19: 100608. doi: 10.1016/j.conctc.2020.100608.
Lamberti, MJ, Brothers, C, Manak, D, Getz, K. Benchmarking the study initiation process. Therapeutic Innovation & Regulatory Science 2013; 47(1): 101–109. doi: 10.1177/2168479012469947.
Bunchball. Gamification 101: an introduction to the use of game dynamics to influence behavior. White Paper, October 2010.
Figures and Tables

Fig. 1. Clinical trial start-up tasks (left panel) were gamified into a Mt. Everest Climb (right panel) to enhance performance during a site activation phase. Monthly task completion was matched to reaching Mt. Everest base camps and the summit.

Table 1. Metrics and point values that made up the core ruleset for the Mt. Everest game.

Table 2. Voluntary, confidential site survey distributed following the Mt. Everest start-up competition. Highlighted rows are featured in the analysis.

Fig. 2. Hazard ratios (HR) for the association between z-value-adjusted Mt. Everest Scores and time to various gaming metrics across both the TREAT-MS and VICTAS trials. A higher HR indicates a larger increase in the probability of achieving shorter IRB submission, IRB approval, and activation times.

Fig. 3. Mean differences in Mt. Everest Score between those who enjoyed and those who did not enjoy the Mt. Everest game. Enjoyment appeared to be an important moderator of performance in TREAT-MS, but not in VICTAS. Neither result was significant.

Fig. 4. Mean differences in various start-up timeline metrics, by study, between those who enjoyed and those who did not enjoy the Mt. Everest game. In TREAT-MS, those who enjoyed playing the game performed better, on average, on all metrics; shorter time to activation and training completion were significant. Differences in mean completion times in VICTAS were small.

Fig. 5. Mean differences in Mt. Everest Score between those who did and did not feel too much pressure playing the Mt. Everest game. Not feeling pressured was significantly associated with better Mt. Everest performance in TREAT-MS but not in VICTAS.

Fig. 6. Mean differences in various start-up timeline metrics, by study, between those who did and did not feel too much pressure playing the Mt. Everest game. In TREAT-MS, those not feeling pressured performed better across all metrics; no result was significant. Differences in mean completion times in VICTAS were small.