Data from randomized controlled trials (RCTs) are the primary source of evidence for health technology assessment (HTA); however, trials are limited by strict patient inclusion criteria, leading to concerns about whether treatment benefit estimates generalize to all patients. Real-world data (RWD) have been proposed as a solution; however, because these are observational data, there is additional potential for bias when estimating treatment effectiveness. To maximize the utility of RWD, it is useful to consider the whole process of evidence generation and to robustly address issues of feasibility and validity.
A series of complementary studies investigated whether population-based, routinely collected health data from Scotland are suitable for estimating the effectiveness of chemotherapy for early breast cancer. First, a prognostic score was validated in this population. Second, RWD effectiveness estimates were compared with randomized trial estimates to investigate the feasibility and validity of several methods: propensity score matching (PSM), instrumental variables (IV), and regression discontinuity. Finally, effectiveness estimates were produced for groups underrepresented in trials.
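As an illustration of the first of these methods, the following is a minimal sketch of 1:1 propensity score matching on synthetic data. It is not the authors' actual analysis: the confounders, effect sizes, variable names and the simulated data-generating process are all assumptions made for the example.

```python
# A minimal sketch of 1:1 propensity score matching (PSM) on synthetic
# data. Not the study's analysis: the confounders, effect sizes and the
# data-generating process are assumptions made for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000

# Hypothetical confounders: age and a comorbidity score.
age = rng.normal(60, 10, n)
comorbidity = rng.poisson(1.5, n)

# Treatment assignment depends on the confounders (younger, fitter
# patients are more likely to receive chemotherapy).
p_treat = 1 / (1 + np.exp(-(4 - 0.07 * age - 0.4 * comorbidity)))
treated = rng.random(n) < p_treat

# Binary outcome (death within follow-up) with a protective treatment
# effect, confounded by age and comorbidity.
p_death = 1 / (1 + np.exp(-(-3 + 0.04 * age + 0.3 * comorbidity
                            - 0.5 * treated)))
died = rng.random(n) < p_death

# 1. Estimate each patient's propensity score from the confounders.
X = np.column_stack([age, comorbidity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated patient to the control with the nearest
#    propensity score (with replacement, for simplicity).
controls = np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = controls[idx.ravel()]

# 3. Compare outcome risks in the matched sample.
print(f"risk (treated): {died[treated].mean():.3f}")
print(f"risk (matched): {died[matched].mean():.3f}")
```

In the study itself the matching would use rich clinical covariates and survival outcomes, and balance between the matched groups would be checked before estimating effects.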
PSM and IV were feasible and produced results in relatively close agreement with the randomized data. Effectiveness estimates in trial-underrepresented groups (women over 70 years and women with high comorbidity) were consistent with an approximately one-third reduction in the risk of death from breast cancer, equivalent to roughly a 3–4 percentage point difference in all-cause mortality over 10 years in these groups.
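To see how the relative and absolute figures fit together, assume for illustration a baseline 10-year breast cancer mortality of roughly 10 to 12 percent in these groups (a figure not given in the abstract); a one-third relative reduction then corresponds to an absolute difference of

$$\tfrac{1}{3} \times 0.10 \approx 0.033 \quad\text{to}\quad \tfrac{1}{3} \times 0.12 \approx 0.040,$$

that is, the 3–4 percentage point difference reported.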
RWD are a feasible basis for generating estimates of the effectiveness of adjuvant chemotherapy in early-stage breast cancer. The process of using RWD for this purpose should include careful assessment of data quality and comparison of alternative strategies for causal identification in the context of available randomized data.
Recruitment of participants and their retention in randomized controlled trials (RCTs) are key to research efficiency. However, for many trials, recruiting and retaining participants who meet the eligibility criteria is extremely challenging. Digital tools are increasingly being used to identify, recruit and retain participants, yet there is a lack of high-quality evidence on their value for trial recruitment.
The aim of the main study was to identify the benefits and characteristics of innovative digital recruitment and retention tools for more efficient conduct of RCTs. Here we report on the qualitative data collected on the characteristics that trialists, research participants, primary care staff, research funders and Clinical Trials Units (CTUs) require of digital tools in order to judge them useful. A purposive sampling strategy was used to identify 16 participants from five stakeholder groups. A theoretical framework was informed by the results of a survey of UKCRC-registered CTUs. Semi-structured interviews were conducted and analysed using an inductive approach, with content and thematic analyses used to explore stakeholders' viewpoints on the value of digital tools.
The content analysis revealed that ‘barriers/challenges’ and ‘awareness of evidence’ were the most commonly discussed areas. Three key emergent themes were present across all groups: ‘security and legitimacy of information’, ‘inclusivity’, and ‘availability of human interaction’. Other themes focused on the engagement of stakeholders in their use and adoption of digital technology to enhance the recruitment and retention process. We also noted some interesting similarities and differences between the practitioner and participant groups.
The key emergent themes clearly demonstrate that digital technology is being used in the recruitment and retention of trial participants. The challenge, however, is that these tools are being used without sufficient evidence of their usefulness compared with traditional techniques. This raises important questions about their potential value that future research should address.
Recruitment and retention of participants in randomized controlled trials (RCTs) are challenging, and this is why many RCTs fail or are not completed on time. Digital approaches such as social media, data mining, email or text messaging could improve recruitment and/or retention, but how well they serve these purposes is unclear. We used systematic methods to map the digital approaches that have been investigated over the past 10 years.
We searched Medline, Embase, other databases and relevant websites in July 2018 to identify comparative studies of digital approaches for recruiting and/or retaining participants in clinical or health RCTs. Two reviewers screened references against protocol-specified eligibility criteria. Included studies were coded by one reviewer (with 20 percent checked by a second reviewer) using pre-defined keywords describing the characteristics of the studies, populations and digital approaches evaluated.
We identified 9,133 potentially relevant references, of which 100, reporting 101 unique studies, met the criteria for inclusion in the map. Among these, 95 percent of studies investigated recruitment but only 11 percent investigated retention. Study areas included health promotion and public health (36 percent), cancer (17 percent), circulatory system disorders (13 percent) and mental health (10 percent). Most study designs were observational (89 percent). The most frequent digital approaches for recruitment were internet sites (53 percent of recruitment studies), social media (42 percent), television or radio (31 percent) and/or email (31 percent). For retention these were email (63 percent of retention studies) or text messaging (38 percent). Time and costs of recruitment were reported in 17 percent and 30 percent of recruitment studies respectively, whilst costs were reported in 19 percent of retention studies.
A wide range of digital approaches has been studied, in many combinations. Evidence gaps include a lack of experimental studies, of studies on retention, and of studies covering specific populations (e.g. children or older people) and outcomes (e.g. user satisfaction). More robust experimental studies, perhaps conducted as studies-within-a-trial (SWAT), are needed to address these knowledge gaps and ensure that estimates of digital tool effectiveness and efficiency are reliable.
Recruitment of participants to, and their retention in, randomized controlled trials (RCTs) is a key determinant of research efficiency, but is challenging. Digital tools and media are increasingly used to reduce costs, waste and delays in the conduct and delivery of research. The aim of this UK Clinical Trials Unit (CTU) survey was to identify which digital recruitment and retention tools are being used to support RCTs, together with their benefits and success characteristics.
A survey was sent to all UK Clinical Research Collaboration (UKCRC)-registered CTUs, accompanied by a webinar to help increase completion. A logic model and a definition of a “digital tool” were developed through iterative refinement involving project team members, the Advisory Board (NIHR Research Design Service, NHS Trust, NIHR Clinical Research Networks and patient input) and CTUs.
A total of 24/52 (46%) CTUs responded, 6 (25%) of which reported no prior use of digital tools. Database screening tools (e.g. CPRD, EMIS) were the most widely used for recruitment (45%) and were considered very effective (67%); the most frequently mentioned success criteria were saving GP time and reaching more patients. Social media was second (27%), but estimated effectiveness varied considerably, with only 17% rating it very effective. Fewer retention tools were used, with SMS/email reminders reported most often (10/15; 67%), but certainty about effectiveness varied. A detailed definition of what constitutes a digital tool, with examples, and a logic model showing the relationships between resources, activities, outputs and outcomes for digital tools were developed.
Database screening tools are the most commonly used digital tool for recruitment, with clear success criteria and certainty about effectiveness. Our detailed definition of what constitutes a digital tool, with examples, will inform the NIHR research community about choices and help them identify potential tools to support recruitment and retention.
The National Institute for Health Research (NIHR) is a major funder of health research in the United Kingdom. Selecting the most promising studies to fund is crucial, and external expert peer review is used to inform the funding boards. Our aim was to evaluate the influence of different kinds and numbers of peer reviews and reviewer scores on Board funding decisions, and to explore how the process might be modified to reduce the workload for stakeholders.
Our mixed-methods study included (i) a retrospective cross-sectional analysis of funding board and external reviewer scores for second-stage applications for research funding, using receiver operating characteristic (ROC) curves to quantify the influence of reviewer scores on funding decisions, and (ii) qualitative interviews with thirty stakeholders (funding board members, applicants, external peer reviewers and NIHR staff).
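As a sketch of the quantitative component, the snippet below shows how the discriminative power of reviewer scores for funding decisions can be summarized as a ROC area. It uses simulated data; the variable names and scales are assumptions, not the study's actual dataset or analysis code.

```python
# A minimal sketch of the ROC analysis described above: how well do mean
# external reviewer scores discriminate funded from unfunded applications?
# The data are simulated; names and scales are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_apps = 300

# Hypothetical mean reviewer score per application (scale 1-10); the
# board's funding decision is positively associated with the score but
# not fully determined by it.
reviewer_score = rng.uniform(1, 10, n_apps)
p_funded = 1 / (1 + np.exp(-(reviewer_score - 6)))
funded = rng.random(n_apps) < p_funded

# ROC area: 0.5 means reviewer scores carry no information about the
# decision; 1.0 means they determine it completely.
print(f"ROC area: {roc_auc_score(funded, reviewer_score):.2f}")
```

The same calculation, repeated with the scores averaged over different numbers of reviewers, is the kind of comparison that underlies the finding reported below.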
Analysis of the ROC area for reviewers indicated that it changed very little as the number of reviewers increased from four to seven or more. External reviewers with clinical, methodological or patient expertise all appeared to influence Board funding decisions to a similar extent. The stakeholders interviewed valued peer review but felt it was important to develop a more proportionate process, to better balance its benefit against the workload of obtaining, preparing, reading and responding to reviews. Reviews are of most value when they fill gaps in expertise on the Board. Fewer than four reviews were felt to be insufficient, but more than six excessive. Workload could be reduced by making reviews more focused on the strengths and weaknesses of applications and on identifying flaws that are potentially “fixable”.
Stakeholders supported the need for peer review in evaluating funding applications. Our results suggest that four to six peer reviews per application is optimal, depending on the expertise needed to complement that of advisory boards.
It takes on average 17 years to translate a promising laboratory development into better patient treatments or services. About 10 years of this innovation process lies within the National Institute for Health Research (NIHR) research pathway. Innovations developed through research have both national and global impact, so selecting the most promising studies to fund is crucial. Peer review of applications is part of the NIHR research funding process, but requires considerable resources. The NIHR is committed to improving the efficiency and proportionality of this process. This study is part of a wider piece of work being undertaken by the NIHR (1) to reduce the complexity of the funding pathway and thus make a real difference to patients' lives.
This study elicited the views of various stakeholders concerning current and possible future methods for peer review of applications for research funding. Stakeholder groups included: members of boards with responsibility for making funding decisions; applicants (both successful and unsuccessful); peer reviewers and NIHR staff. Qualitative interviews were conducted with stakeholders selected from each group, and results were analyzed and integrated using a thematic template analytical method. The results were used to inform a larger online opinion survey which will be reported separately.
The views and insights of thirty stakeholders across the four groups on the peer review process for funding applications will be presented. Findings generalizable to funding programs outside the NIHR will be emphasized. The key themes that emerged included: strengths and weaknesses of applications, feedback, targeting, and acknowledgement of peer reviewers.
The results of our study of the peer review processes of one national research funder have relevance for other funding organizations, both within our country and internationally.
Peer review of grant applications is employed routinely by health research funding bodies to determine which research proposals should be funded. However, peer review faces a number of criticisms: it is time-consuming, financially expensive, and may not select the best proposals. Various modifications to peer review have been examined in research studies, but these have not been systematically reviewed to guide Health Technology Assessment (HTA) funding agencies.
We developed a systematic map based on a logic model to summarize the characteristics of empirical studies that have investigated peer review of health research grant applications. Consultation with stakeholders from a major health research funder (the United Kingdom National Institute for Health Research, NIHR) helped to identify topic areas within the map of particular interest. Innovations that could improve the efficiency and/or effectiveness of peer review were agreed as being a priority for more detailed analysis. Studies of these innovations were identified using pre-specified eligibility criteria and were subjected to a full systematic review.
The systematic map includes eighty-one studies, most published since 2005, indicating a growing area of investigation. Studies were mostly observational and retrospective in design, and a large proportion were conducted in the United States, many by the National Institutes of Health. An example of an innovation is video training to improve reviewer reliability. Although research councils in the United Kingdom have conducted several relevant studies, these have mainly examined existing practices rather than testing peer review innovations. Full results of the systematic review will be provided in the presentation, and we will assess which innovations could improve the efficiency and/or effectiveness of peer review for selecting health research proposals.
Despite considerable interest in, and criticism of, peer review for helping to select health research proposals, there have been few detailed systematic examinations of the primary research evidence in this area. Our evidence synthesis provides the most up-to-date overview of evidence in this important developing area, with recommendations for health research funders in their decision making.
Globally, health systems are struggling to reliably appraise the safety and efficacy of rapidly changing digital health interventions whilst allowing useful innovations to be rapidly adopted. Assessment and regulation of the large number of health apps should be proportional to their clinical risk, but there is considerable uncertainty about suitable criteria for assessing that risk (1). We aimed to identify criteria for assessing the clinical risks associated with different types of health apps.
Our work builds on previous studies that identified some of the risks health apps can pose and the contextual factors that can moderate these risks (2,3). It is grounded in a review of the existing literature, wide consultation with stakeholders, participation in multi-agency policy discussions, and sense-checking of successive versions of the framework as it evolved. We combined different risk domains for apps (technical safety, usability, intervention quality, and engagement) with their functions (learning, behaviour and cognition change, communication, record keeping, and clinical decision support).
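The crossing of domains and functions can be pictured as a matrix in which each cell holds the assessment questions relevant to that combination. The sketch below shows one possible representation: the domain and function labels are taken from the abstract, but the example questions and the helper questions_for are hypothetical placeholders, not the framework's actual content.

```python
# Illustrative structure for the risk framework: risk domains crossed
# with app functions. Domain and function labels are from the abstract;
# the example cell content is a hypothetical placeholder.
RISK_DOMAINS = ["technical safety", "usability",
                "intervention quality", "engagement"]
FUNCTIONS = ["learning", "behaviour and cognition change", "communication",
             "record keeping", "clinical decision support"]

# Each (domain, function) cell would hold the questions an assessor asks;
# only one cell is filled in here, with invented example questions.
risk_matrix = {
    ("technical safety", "clinical decision support"): [
        "Has the decision logic been validated against clinical guidance?",
        "What harm could follow from incorrect advice, and is it mitigated?",
    ],
}

def questions_for(domain, function):
    """Return the assessment questions for one cell of the framework."""
    return risk_matrix.get((domain, function), [])

print(questions_for("technical safety", "clinical decision support"))
```

Apps combining several functions would be assessed against the union of the relevant cells, which is one reason risk assessment becomes more complex as multi-function apps emerge.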
We developed a comprehensive generic risk framework that app users, developers, commissioners, regulators and other stakeholders worldwide can use to guide assessment of the likely risks posed by a specified health app in a specific context. We also propose questions that should help determine whether these risks have been addressed.
Apps are very promising in health care, but they are numerous, complex, rapidly evolving, and have overlapping functions. A rigorous risk framework should help stakeholders deal with the large quantity of health apps, classify and manage clinical risks, and improve patient safety by applying generic risk assessment criteria. Further work is needed to test and develop the criteria we propose, especially as apps that integrate different functions are emerging, which will make risk assessment more complex.
The past nine years have seen a steady increase in the number of publications concerning artificial neural networks (ANNs) in medicine (Figure 14.1). Many of these demonstrate that neural networks offer equivalent, if not superior, performance compared with other statistical methods, and in some cases with doctors, in several areas of clinical medicine. Table 14.1 gives a by no means exhaustive list of academically driven applications, notable for the breadth of their potential application areas. Despite the success demonstrated by this academic research portfolio, we know of very few examples of an ANN being used to inform patient care decisions, and few (if any) have been seamlessly incorporated into everyday practice. Furthermore, we know of no randomized clinical trial (RCT) examining the impact of ANN output on clinical actions or patient outcomes. This is in sharp contrast to the 68 RCTs, none of them involving ANNs, published since 1976 and included in Hunt et al.'s (1998) systematic review of the impact of reminders and other decision support systems on clinical actions and patient outcomes.
To check whether our personal experience is reflected in the literature, we searched the Medline bibliographic database in all languages for the period January 1993 to March 2000 using the Medical Subject Headings term ‘Neural-networks-(computer)’. Using the SilverPlatter software, this search yielded 3101 articles. When filtered using the Medline Publication Type = ‘clinical-trial’, this number plummeted 50-fold to 61 articles.