Rapid antigen detection tests (Ag-RDTs) for SARS-CoV-2 with emergency use authorization generally include a condition of authorization to evaluate the test’s performance in asymptomatic individuals when used serially. We describe a novel study design that was used to generate regulatory-quality data evaluating the serial use of Ag-RDTs for detecting SARS-CoV-2 among asymptomatic individuals.
This prospective cohort study used a siteless, digital approach to assess the longitudinal performance of Ag-RDTs. Individuals over 2 years old from across the USA with no reported COVID-19 symptoms in the 14 days prior to study enrollment were eligible to enroll. Participants throughout the mainland USA were enrolled through a digital platform between October 18, 2021, and February 15, 2022. Participants were asked to test using an Ag-RDT and a molecular comparator every 48 hours for 15 days. Enrollment demographics, geographic distribution, and SARS-CoV-2 infection rates are reported.
A total of 7361 participants enrolled in the study, and 492 tested positive for SARS-CoV-2, including 154 who were asymptomatic and tested negative at the start of the study. This exceeded the initial enrollment goal of 60 positive participants. Participants were enrolled from 44 US states, and their geographic distribution shifted in accordance with changing COVID-19 prevalence nationwide.
The siteless, digital approach employed in the “Test Us At Home” study enabled rapid, efficient, and rigorous evaluation of rapid diagnostics for COVID-19 and can be adapted across research disciplines to optimize study enrollment and accessibility.
The commercialization of medical devices and biotechnology products is characterized by high failure rates and long development lead times, particularly among start-up enterprises. To increase the success rate of these high-risk ventures, the University of Massachusetts Lowell (UML) and University of Massachusetts Medical School (UMMS) partnered to create key academic support centers with programs to accelerate entrepreneurship and innovation in this industry. In 2008, UML and UMMS founded the Massachusetts Medical Device Development Center (M2D2), a business and technology incubator that provides business planning, product prototyping, laboratory services, access to clinical testing, and ecosystem networking to medical device and biotech start-up firms. M2D2 has three physical locations that encompass approximately 40,000 square feet. Recently, M2D2 leveraged these resources to expand into new areas such as health security; point-of-care technologies for heart, lung, blood, and sleep disorders; and rapid diagnostics to detect SARS-CoV-2. Since its inception, M2D2 has vetted approximately 260 medical device and biotech start-up companies for inclusion in its programs and provided active support to more than 80 firms. This manuscript describes how two UMass campuses leveraged institutional, state, and federal resources to create a thriving entrepreneurial environment for medical device and biotech companies.
A key barrier to translation of biomedical research discoveries is a lack of understanding among scientists regarding the complexity and process of implementation. To address this challenge, the National Science Foundation’s Innovation Corps™ (I-Corps™) program trains researchers in entrepreneurship. We report results from the implementation of an I-Corps™ training program aimed at biomedical scientists from institutions funded by the National Center for Advancing Translational Sciences (NCATS).
National/regional instructors delivered 5-week I-Corps@NCATS short courses to 62 teams (150 individuals) across six institutions. Content included customer discovery, value propositions, and validating needs. Teams interviewed real-life customers and presented the value of their innovations for specific end-users weekly, culminating in a “Finale” featuring their refined business thesis and business model canvas. A methodology was developed to evaluate the newly adapted program. A national mixed-methods evaluation assessed program implementation, reach, and effectiveness using observations of training delivery, surveys at the Finale (n = 55 teams), and surveys 3–12 months post-training (n = 34 teams).
Innovations were related to medical devices (33%), drugs/biologics (20%), software applications (16%), and diagnostics (8%). An average of 24 interviews was conducted. Teams reported increased readiness for commercialization over time (83% at 9 months vs. 14% at 3 months). Thirty-nine percent met with their institutional technology transfer office to pursue licensing/patents, and 24% pursued venture capital/investor funding following the short courses.
I-Corps@NCATS training provided the NCATS teams with a rigorous and repeatable process to aid development of a business model based on customer needs. The outcomes of this pilot program support the expansion of I-Corps™ training to biomedical scientists to accelerate research translation.
Although the science of team science is no longer a new field, the measurement of team science and its standardization remain in relatively early stages of development. To describe the current state of team science assessment, we conducted an integrative review of measures of research collaboration quality and outcomes.
Collaboration measures were identified using both a literature review based on specific keywords and an environmental scan. Raters abstracted details about the measures using a standard tool. Measures related to collaborations with clinical care, education, and program delivery were excluded from this review.
We identified 44 measures of research collaboration quality, including 35 with reliability and some form of statistical validity reported. Most scales focused on group dynamics. We identified 89 measures of research collaboration outcomes; 16 had reliability reported and 15 had a validity statistic. Outcome measures often included only simple counts of products; publications rarely defined how counts were delimited, obtained, or assessed for reliability. Most measures were tested in only one venue.
Although models of collaboration have been developed, in general, strong, reliable, and valid measurements of such collaborations have not been conducted or accepted into practice. This limitation makes it difficult to compare the characteristics and impacts of research teams across studies or to identify the most important areas for intervention. To advance the science of team science, we provide recommendations regarding the development and psychometric testing of measures of collaboration quality and outcomes that can be replicated and broadly applied across studies.