By Hannah Knudsen, PhD, University of Kentucky
The recent report, Adoption of NIDA's Evidence-Based Treatments in Real World Settings, makes clear that the reach of interventions developed and tested with funding from NIDA has been limited. It is important to realize that this "implementation gap" is not unique to NIDA-funded research. The gap is ubiquitous across health conditions and delivery contexts, as evidenced by the standing program announcements for dissemination and implementation research issued by a range of NIH institutes.
In thinking about the translation of evidence-based practices into routine use, one important distinction may be useful to consider before examining the recommendations contained within the report. Because NIDA is not a funder of drug abuse services (roles largely housed within the Substance Abuse and Mental Health Services Administration and the Centers for Medicare & Medicaid Services), it would seem that its focus should be on the science of implementation, rather than implementation as an end unto itself. It strikes me that the critical issue is less whether NIDA has succeeded or failed at achieving implementation of evidence-based interventions developed with its funding, and more whether NIDA is supporting research on strategies for increasing implementation and studies of the barriers that block greater implementation. It is through this lens of implementation science that I approached the recommendations within the report. Of the five recommendations, I was most struck by three, on which I would like to focus:
Large, complex organizations such as NIDA must be subdivided into units with specialty areas, so it is logical to consider the value of a new entity within NIDA as the locus of its translational research (Recommendation 1). The establishment of such an entity would signal to the field that NIDA values translational research that seeks to advance our scientific understanding of implementation processes. There is also pragmatic value in signaling that such research is a funding priority. It may be useful to look to other NIH institutes and how they have approached developing entities to oversee this type of research. The National Cancer Institute (NCI) has clearly embraced translational and implementation science, both for its focal disease and by leading efforts to broaden the field of implementation science beyond any specific disease. NCI houses an abundance of online resources related to dissemination and implementation science: http://cancercontrol.cancer.gov/is/.
Establishing guidelines for funding that include implementation potential may be more challenging than it initially appears (Recommendation 2). Consider this question: If this approach had been taken in the late 1990s, what research would not have been funded? Some of the significant research on pharmacotherapies might not have been funded if implementation potential had been required. With the exception of methadone maintenance programs, treatment at that time was largely delivered in specialty facilities that often lacked access to physicians. The legislative changes in the Drug Addiction Treatment Act of 2000 (DATA 2000) had not yet occurred, so physicians in private practice were limited in their ability to prescribe medications to treat opioid dependence. By the implementation potential criterion, there is a real question whether, for example, clinical trials of buprenorphine would have met this threshold, given the legal context and service delivery system at the time. Sometimes treatment innovations are ahead of the delivery system, making it difficult to anticipate which interventions can be taken to scale.
It is also an open question whether implementation science itself is advanced enough that we can reasonably predict whether a novel intervention will be implemented. Implementation science is a relatively young discipline; a major edited volume on this emerging science was published only last year (Brownson, Colditz, & Proctor, 2012), and rigorous tests of the effectiveness of implementation strategies (i.e., specific approaches to promote the use of evidence-based practices) remain relatively rare in the NIH portfolio. The nascence of the field will make it difficult to predict with much certainty or specificity the implementation potential of interventions. It seems to me that guidelines, if too rigid, may be premature, but that program announcements could convey to investigators that they should be reasonably attentive to implementation and scalability when describing the significance of their research.
At the same time, I can appreciate that the development of interventions without any consideration of the realities of the treatment system may not represent the most prudent investment of limited funding resources. There may be benefits in bringing health economists and experts in human engineering to the table earlier in the intervention development process. The challenge is that those experts, and the resulting data collection activities needed to support their analytic work (e.g., cost-benefit analyses, etc.), come with their own costs and may mean making trade-offs for the overarching research design. A related question is whether current funding mechanisms, which have not been adjusted for inflation in many years, can support both the scientific resources needed to rigorously test the effectiveness of the intervention under development and this broader scope of work within the same project. My experiences as a grant reviewer suggest that it is possible but can be challenging.
Finally, there is intuitive appeal to the notion of establishing a NIDA-based peer review committee. NIDA has a lengthy history of such review panels, although they have been less common in recent years. Such a panel would give NIDA greater oversight of the review process and may address concerns that substance abuse-related implementation research has struggled within the standing Dissemination and Implementation Research in Health study section managed by the NIH's Center for Scientific Review (CSR).
At least two key questions should be further considered. First, does the drug abuse field have sufficient depth of expertise in the theories and models of the emerging field of implementation science to fill such a panel? This is a question to which I do not know the answer. Second, do we risk reinforcing the ongoing siloing of drug abuse as a condition that is not viewed as a disease like others? Will perspectives from the broader field of implementation science (i.e., theories and methods being tested for other conditions and other service contexts) be lost, resulting in our scientific contributions having less impact?
These are just a few of the questions that arose when I was considering these recommendations. From my perspective, it can be a useful thought exercise to consider how different the research landscape would look if these recommendations had been implemented in the past, as well as what may be gained or lost if these recommendations are implemented in the near future. I can appreciate the desire to have NIDA's research make the largest possible public health impact. Implementation science may be an important vehicle for maximizing these public health benefits, particularly if our research in drug abuse is engaged in the broader scientific discourse about strategies for improving the quality of care through greater use of evidence-based practices.