
Toward a science of translational science

Published online by Cambridge University Press:  14 August 2017

Caleb Smith
Affiliation:
Office of Research, Medical School, University of Michigan, Ann Arbor, MI, USA
Roohi Baveja
Affiliation:
Office of Research, Medical School, University of Michigan, Ann Arbor, MI, USA
Teri Grieb
Affiliation:
Office of Research, Medical School, University of Michigan, Ann Arbor, MI, USA; Michigan Institute for Clinical & Health Research, University of Michigan, Ann Arbor, MI, USA
George A. Mashour*
Affiliation:
Office of Research, Medical School, University of Michigan, Ann Arbor, MI, USA; Michigan Institute for Clinical & Health Research, University of Michigan, Ann Arbor, MI, USA; Office of Research, University of Michigan, Ann Arbor, MI, USA
*Address for correspondence: G. A. Mashour, M.D., Ph.D., Michigan Institute for Clinical & Health Research, University of Michigan, 2800 Plymouth Road, Building 400, Ann Arbor, MI 48109-2800, USA. (Email: gmashour@umich.edu)

Abstract

Translational research as a discipline has experienced explosive growth over the last decade as evidenced by significant federal investment and the exponential increase in related publications. However, narrow project-focused or process-based measurement approaches have resulted in insufficient techniques to measure the translational progress of institutions or large-scale networks. A shift from traditional industrial engineering approaches to systematic investigation using the techniques of scientometrics and network science will be required to assess the impact of investments in translational research.

Type
Translational Research, Design and Analysis
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is included and the original work is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use.
Copyright
© The Association for Clinical and Translational Science 2017

Introduction

The National Center for Advancing Translational Sciences recently released a new strategic plan, and the Clinical and Translational Science Award (CTSA) program is on the cusp of building a national network focused on clinical trials. In addition to this renewed vision for advancing clinical investigation and the science of translation, in 2016 President Obama signed the 21st Century Cures Act into law. This initiative also focuses strongly on lowering the barriers to translation and enhancing therapeutic discovery, development, and delivery. Despite this renaissance of interest in accelerating translational breakthroughs and the substantial investment associated with it, the methods by which to quantify or characterize the conditions for translational success are still nascent. Although it is possible to identify progress along the translational continuum for an individual project, there are insufficient methods to assess whether institutions, networks, or nations are becoming more translational in their scientific activity. In this article, we review the ontogeny of translational research as a concept, describe definitions of translation, and propose approaches to measuring translational character in large-scale programs.

Historical Perspectives

Although the practice of translational medicine has existed since at least the time of Galen, it was historically a highly pragmatic exercise, rooted firmly in a given physician’s individual experience and informed by the immediate needs of his or her patients. Despite the increasing complexity and specialization of medical science in the following centuries, diagnoses and therapies generally continued to improve with only minimal state support for the development of new applications from basic discoveries. In the United States, Vannevar Bush summed up this general philosophy in the mid-20th century by writing that “scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown” [1]. However, as government support for basic science increased after World War II (initially including a patent clause favoring government monopoly), the ethics and assessment of influencing research direction through funding incentives became more important from an administrative perspective.

With massive federal investment, the scale of academic research and the pace of discovery accelerated dramatically. A number of authors warned of the effects of increasing specialization and academic output on the dissemination of knowledge. To quote Bush again, “…there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less remember, as they appear” [2]. Although Bush penned those words in 1945, by the turn of the 21st century the problem had only grown in importance, with global scientific output doubling every 9 years [3]. The abundance and increased complexity of publications have made timely translation from laboratory-based research to patient care increasingly difficult at the level of the individual physician, who struggles to keep up with new clinical and scientific findings. In 1998, researchers from the RAND Corporation found that only 50% of patients received recommended preventive care and only 60% of those with chronic conditions received recommended care. There were errors not only of omission but also of commission: 30% of patients received contraindicated acute care and 20% received contraindicated chronic care [4]. Clinical scientists are similarly overwhelmed by newly reported findings from the basic sciences. Further increasing this burden is the reproducibility crisis in science, which has proven a perennial concern. John Ioannidis’s 2005 determination that most published research findings are false [5] is by now well known. A 2016 poll reported in the journal Nature suggests that little is improving: over 70% of the 1500 respondents indicated that they had tried and failed to reproduce another scientist’s experiments, and more than 50% had tried and failed to reproduce one of their own [6]. Results such as these threaten translation and undermine trust in foundational and early clinical findings.

In 2000, the Institute of Medicine convened the Clinical Research Roundtable (CRR) to discuss emergent issues in clinical research in response to the Association of American Medical Colleges’ publication of “Clinical Research: A National Call to Action.” In June 2003, the CRR, noting that the clinical research enterprise was truly in crisis, published steps to improve the translation and dissemination of clinical research, identifying 2 translational obstacles—from basic discoveries to clinical practice, and from the clinical identification of “things that work” to broader application—as primary concerns [7]. Only 3 months later, the Committee on the Organizational Structure of the National Institutes of Health, responding to a Congressional request and working under the auspices of the National Research Council and the Institute of Medicine, issued a report recommending, among other items, that the NIH enhance its ability to plan and implement trans-NIH strategic initiatives by further funding and empowering the Office of the Director for this purpose. This recommendation established the groundwork necessary for the NIH to plan and execute the large-scale initiatives needed to address systemic problems. In October, only 4 months after the CRR’s report was published and 1 month after the Committee made its recommendations, the NIH—fresh from a 5-year doubling of its budget and with a new Director in Dr. Elias Zerhouni—published “The NIH Roadmap,” which described a number of trans-NIH clinical initiatives. Among these was the creation of a national network of regional research centers focused on translation. This program, which would later mature into the CTSA, would go on to fund more than 60 grantee institutions across the country, representing hundreds of millions of dollars of federal investment annually. It was during this period that publications targeting the concept of translational research began to proliferate at an exponential rate, increasing by 1800% between 2003 and 2014.

Current Challenges

Although translational research has received increasing interest from both sponsors and academics in the last decade, the literature reveals inconsistent, project-based, and nonquantitative definitions of translation [8–12]. This has resulted in a paucity of methods to grasp large-scale translational processes and, given current national initiatives, suggests the need for a shift in perspective away from the analysis of small subsystems within local institutions toward solutions focused on complex systems.

Lacking concrete success measures, or a theoretical framework by which to pursue them, the literature on translation has focused on relatively small and well-understood tasks: institutional process modeling, definitional activities, administrative process improvements, and regulatory streamlining. Much of this work is informed by the experience of individuals and of single institutions. Implicit in this approach is the assumption that improvements to these various subprocesses will result in positive outcomes consistent with the CTSA program’s strategic goals. However, the social networks and systems that support translational outcomes are complex, nonlinear, often multidisciplinary or interdisciplinary, and temporally diffuse. This complexity has complicated the work of many authors, leading Steven Woolf to state simply that “translational research means different things to different people” [8]. In addition to confounding attempts at definitional concision, these characteristics indicate that subprocess improvement alone will be, at best, accidentally sufficient for improving national translational outcomes and, at worst, will result in retrogression or other unintended negative systemic effects.

Future Solutions

Narrow performance measurements are important for fine-tuning small-scale interventions, but the scope of change that the CTSA program was created to effect requires global measures of sociological evolution and research impact. Although we have traditionally assessed whether projects are translational, we must now assess whether entire programs are translational. New and transparent scientometric indicators, rather than modified industrial engineering approaches, are critically important.

Researchers in the fields of bibliometrics, scientometrics, informetrics, and network science have already established much of the theory needed to pursue the study of translational science policy and have developed a number of tools and measures that are readily applicable. For example, Boyack et al. [13], building upon earlier work by Francis Narin, developed a method by which individual journal articles can be automatically classified according to research level: basic research, clinical investigation, clinical mix, and clinical observation. Surkis et al. recently developed a model to classify publications along the translational spectrum using similar techniques [14]. These automated approaches to classification can potentially eliminate the time-consuming task of manual review that previously hindered the systematic study of translational trends. Although both Boyack and Surkis focused on publications, the analysis of article texts tells us only of translational claims. Equally important in the study of translational behavior will be the examination of translational intent. Perhaps the most important source of information on translational intent is the corpus of proposal data held by the NIH and the tributary repositories held within research institutions across the country. Representing an enormous trove of almost completely unexplored data, this full-text corpus contains not only a source from which to deduce the research intentions of funded investigators but—perhaps more importantly—many thousands of unsuccessful proposals.
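To make the classification idea concrete, the following is a minimal, entirely illustrative sketch in Python of supervised text classification along a basic-to-clinical axis. It is not the pipeline of Boyack et al. or Surkis et al.; the training abstracts, labels, and class names are hypothetical, and a real system would use far larger labeled corpora and richer features.

```python
import math
from collections import Counter

def tokenize(text):
    """Crude whitespace tokenizer; real systems would normalize far more."""
    return text.lower().split()

class NaiveBayesClassifier:
    """Multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, docs, labels):
        self.priors = Counter(labels)                      # class frequencies
        self.counts = {c: Counter() for c in self.priors}  # per-class word counts
        for doc, label in zip(docs, labels):
            self.counts[label].update(tokenize(doc))
        self.vocab = {w for c in self.counts for w in self.counts[c]}
        return self

    def predict(self, doc):
        n_docs = sum(self.priors.values())
        scores = {}
        for c in self.priors:
            score = math.log(self.priors[c] / n_docs)
            total = sum(self.counts[c].values())
            for w in tokenize(doc):
                # Laplace-smoothed log-likelihood of each word under class c
                score += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            scores[c] = score
        return max(scores, key=scores.get)

# Hypothetical hand-labeled training abstracts
train_docs = [
    "receptor signaling in mouse cortical neurons in vitro",
    "kinase expression and protein binding in cell cultures",
    "randomized trial of drug treatment outcomes in patients",
    "patients enrolled in a clinical trial of surgical treatment",
]
train_labels = ["basic", "basic", "clinical", "clinical"]

model = NaiveBayesClassifier().fit(train_docs, train_labels)
print(model.predict("a randomized trial of treatment in patients"))  # clinical
```

Once trained on a labeled corpus, such a model could assign every publication in an institutional repository a position on the continuum without manual review.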

Coding all of an institution’s biomedical publications according to where they fall on the translational continuum will allow for the identification of trends in basic and translational productivity. However, these publication trends may be confounded by the effects of Congressional budget allocations, NIH funding priorities, the whims of study sections, an investigator’s writing quality, the varied tastes of journal editors, or simple skewing in the number of journals targeting one area of the continuum over others. The analysis of submitted proposals, particularly unfunded proposals, will allow for a more complete picture of the changing interests of biomedical researchers. Importantly, these trends will contain valuable information on the effects of large-scale investments such as the CTSA. By examining the texts of submitted proposals, as well as their references, we can derive surrogates of translational intent and determine where on the continuum the proposed work is situated. With this information we can map shifts in an institution’s translational interests—regardless of whether these interests are realized through eventual funding and regardless of when, where, or how any findings are published. The potential effects of the interventions of a single CTSA hub are necessarily limited. If these interventions can be correlated with increased translational intent, but not with funding success or publication output, then the hub can consider additional interventions specifically targeting those areas. A similar process could be explored at the national level, with federal response to opportunities for improvement.
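Once each submitted proposal carries a predicted position on the continuum, mapping shifts in institutional interest reduces to simple aggregation by submission year. A minimal sketch in Python, with entirely hypothetical data (the years, labels, and counts are invented for illustration):

```python
from collections import Counter

# Hypothetical (submission_year, predicted_level) pairs for one institution
records = [
    (2012, "basic"), (2012, "basic"), (2012, "clinical"),
    (2015, "basic"), (2015, "clinical"), (2015, "clinical"), (2015, "clinical"),
]

def clinical_share_by_year(records):
    """Fraction of proposals classified as clinical, per submission year."""
    totals, clinical = Counter(), Counter()
    for year, level in records:
        totals[year] += 1
        if level == "clinical":
            clinical[year] += 1
    return {year: clinical[year] / totals[year] for year in sorted(totals)}

print(clinical_share_by_year(records))  # 2012: ~0.33, 2015: 0.75
```

A rising clinical share across years, computed over all submissions rather than only funded ones, would be one surrogate for increasing translational intent independent of funding or publication outcomes.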

The analysis of grant proposals as a surrogate of scientific activity, and of publications as a surrogate of scientific productivity, still yields an incomplete picture that reflects either an institutional perspective or an undifferentiated aggregate of multiple institutions. What will be critically important for initiatives such as the CTSA is the analysis of the translational success it achieves by virtue of being a network. Highly efficient networks strike a balance between functional segregation (eg, a unique strength of an individual CTSA institute) and global integration (eg, the communication structures and knowledge transfer among the institutes). Although the CTSA has more recently adopted the language of network science, with hub-and-spoke models, the quantitative methods of network science have been applied rarely and only by individual CTSA institutes [15–19]. The current CTSA focus on developing common metrics should be complemented by consideration of the various properties of the nodes (ie, the individual CTSA institutes) and a quantitative definition of the links between them. This step would allow rich analysis of various network properties, including degree (the number of connections of an individual node), path length (the steps required to transmit knowledge or best practices), modularity (how isolated or siloed the network is), centrality (the extent to which nodes create shortcuts across complex institutional or national networks), and small-worldness (an organizational principle that facilitates integration through strong hubs). This network analysis could span from an individual CTSA institute within an institution to the CTSA network itself, allowing a quantitative approach to grasping the synergy of CTSA interactions that can advance translational progress on the national scale.
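Once the nodes and links are defined quantitatively, the network properties listed above become directly computable. A minimal sketch in Python, using an entirely hypothetical five-hub network (the labels are placeholders; real analyses would typically use a dedicated library such as NetworkX and weighted links derived from, eg, co-authorship or shared grants):

```python
from collections import deque

# Hypothetical links among five CTSA hubs (labels are illustrative only)
links = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

def build_adjacency(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def degree(adj, node):
    """Number of direct connections of a node."""
    return len(adj[node])

def path_length(adj, src, dst):
    """Fewest links needed to transmit knowledge from src to dst (BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None  # unreachable

def clustering(adj, node):
    """Local clustering coefficient, one ingredient of small-worldness."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    ties = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2 * ties / (k * (k - 1))

adj = build_adjacency(links)
print(degree(adj, "C"), path_length(adj, "A", "E"), clustering(adj, "C"))
```

Here hub C has the highest degree and functions as a connector; modularity and centrality measures over the full network would follow the same pattern, operating on the adjacency structure rather than on any single node.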

Conclusions

This is an exciting time for translational research. However, the investment of resources and effort in advancing the translational character of the biomedical research enterprise must be matched by the methodology to measure success. Leveraging emerging bibliometric approaches and network science techniques will be an important first step to advance beyond the consideration of individual translational research projects to large-scale translational programs at the national level.

Acknowledgment

This work was funded by the National Center for Advancing Translational Sciences of the National Institutes of Health, grants UL1TR000433 and UL1TR002240.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

1. Bush, V. Science, The Endless Frontier: A Report to the President. Washington, DC: Government Printing Office, 1945.
2. Bush, V. As we may think. Atlantic Monthly 1945; 176: 101–108.
3. Bornmann, L, Mutz, R. Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology 2015; 66: 2215–2222.
4. Schuster, MA, McGlynn, EA, Brook, RH. How good is the quality of health care in the United States? The Milbank Quarterly 1998; 76: 509.
5. Ioannidis, JP. Why most published research findings are false. PLoS Medicine 2005; 2: e124.
6. Baker, M. 1,500 scientists lift the lid on reproducibility. Nature 2016; 533: 452–454.
7. Aungst, J, et al. Exploring Challenges, Progress, and New Models for Engaging the Public in the Clinical Research Enterprise: Clinical Research Roundtable Workshop Summary. Washington, DC: National Academies Press, 2003.
8. Woolf, SH. The meaning of translational research and why it matters. JAMA 2008; 299: 211–213.
9. Trochim, W, et al. Evaluating translational research: a process marker model. Clinical and Translational Science 2011; 4: 153–162.
10. Rubio, DM, et al. Defining translational research: implications for training. Academic Medicine 2010; 85: 470–475.
11. Rajan, A, et al. Critical appraisal of translational research models for suitability in performance assessment of cancer centers. Oncologist 2012; 17: e48–e57.
12. Fudge, N, et al. Optimising translational research opportunities: a systematic review and narrative synthesis of basic and clinician scientists’ perspectives of factors which enable or hinder translational research. PLoS One 2016; 11: e0160475.
13. Boyack, KW, et al. Classification of individual articles from all of science by research level. Journal of Informetrics 2014; 8: 1–12.
14. Surkis, A, et al. Classifying publications from the Clinical and Translational Science Award program along the translational research spectrum: a machine learning approach. Journal of Translational Medicine 2016; 14: 235.
15. Vacca, R, et al. Designing a CTSA-based social network intervention to foster cross-disciplinary team science. Clinical and Translational Science 2015; 8: 281–289.
16. Luke, DA, et al. Breaking down silos: mapping growth of cross-disciplinary collaboration in a translational science initiative. Clinical and Translational Science 2015; 8: 143–149.
17. Nagarajan, R, et al. Social network analysis to assess the impact of the CTSA on biomedical research grant collaboration. Clinical and Translational Science 2015; 8: 150–154.
18. Bian, J, et al. CollaborationViz: interactive visual exploration of biomedical research collaboration networks. PLoS One 2014; 9: e111928.
19. Bian, J, et al. Social network analysis of biomedical research collaboration networks in a CTSA institution. Journal of Biomedical Informatics 2014; 52: 130–140.