In our 5-Question series, we highlight the staff and faculty behind the compelling work at Ariadne Labs.
Health care facilities are constantly looking for ways to improve. How to streamline the admission process? How to promote teamwork in the operating room? But efforts to improve are often frustrating. One size does not fit all when implementing quality improvement projects; implementations that succeed in one site fall short in another.
What if, however, site implementers could gain insight before launching on whether their efforts would be successful? What if they could get a better handle on how a change would fit within their own unique context? Questions like these inspired the 2019 launch of the Atlas Initiative at Ariadne Labs, with support from the Peterson Center on Healthcare. The Atlas Initiative is an effort to more effectively and efficiently scale health care innovations by researching the factors that drive successful adoption — or failure — of quality improvement implementation projects.
We recently sat down with Natalie Henrich, a senior scientist with the Ariadne Labs Science and Technology team and the lead faculty of the Atlas Initiative, to talk about her team’s effort to develop a comprehensive context-assessment tool and a data repository that will aid future implementation.
Why is it important to assess a site’s context before diving into a quality improvement implementation?
An estimated 50 to 60 percent of implementations are not successful. A tremendous amount of time, effort, and resources go into trying to bring change into health facilities, and too often, the change just doesn’t work. Leadership commitment, staff motivation, infrastructure, skills, and culture can all affect implementation. So it is really important to customize and adapt an implementation to the specific context or readiness level, so that leaders can make informed decisions about when and how to implement.
What sparked the launch of the Atlas Initiative?
In Ariadne Labs’ own implementation experience, we found we needed a better way to understand the contexts in which we were implementing. Two years ago, Ariadne’s landmark BetterBirth trial was just wrapping up. We tested the impact of the WHO Safe Childbirth Checklist in 60 facilities in India. Everyone had the same checklist; everyone had the same coaching model. Yet we saw very different results in the different facilities. Likewise, when we rolled out the WHO Surgical Safety Checklist across hospitals in South Carolina, about half the sites were successful and half were not.
What we realized in both cases was that we needed a way to account for the different contexts of the settings — differences in staffing, in leadership, in resources. We needed a way to understand which indicators would make the initiative successful, or what would make it fail, before spending time and resources on an implementation that was not going to work.
There are already a lot of readiness and context assessment tools out there, but many of them are long and take a lot of time to complete. Sometimes they require observations, interviews or lengthy surveys. And some are designed only for specific interventions. We wanted something that would be a low burden for a site and would work for any type of health intervention.
How did you start developing a solution?
We applied the Ariadne Arc – design, test, spread – to developing a solution. We did an extensive literature search. We interviewed key players in the field. We originally thought we would come up with one tool — one survey or one kind of assessment. As our work progressed, we realized we needed more than a single tool. So we moved toward the toolkit concept, in which we have different assessments for different points during an implementation.
We also found there was not a lot of evidence on which factors are most important to implementation success. People rely more on their own expertise and experience rather than scientific evidence. So, we’re also developing a data repository to help build that evidence base and make sure our context assessment tools take into account the critical factors.
The toolkit includes three surveys: the foundation survey, the launch survey, and the progress survey. The foundation survey is completed before implementation and assesses key contextual factors that can inform decisions about whether it’s the right time to implement; if the decision is made to implement, the results can then inform the strategy. The launch survey is completed about six weeks after staff start using the new practice or process, to assess whether the implementation strategy needs modification. The third, the progress survey, has only a handful of items and is completed monthly by the site’s implementation team to ensure things are on track. If you can catch problems early, you can course correct. But if you don’t catch them, they can make your whole implementation go off the rails.
Each survey generates an automated report that is sent to the site. Everything in it is either actionable or important to be aware of in terms of adaptation or support.
We are continuing to research and refine these surveys, and are actively seeking sites to partner with on testing these tools. Participating sites get full access to the tools, a detailed report on survey results with actionable information, and guidance from the Ariadne Labs team on how to use the information to support their implementation effort.
The goal of the repository is to collect a massive data set on context and implementation outcomes from a large number of different facilities that have launched implementations. This information will allow us to determine the factors that most strongly influence success and how those factors vary by setting and type of intervention. For example, we’d be able to say that if you are a large academic hospital implementing a process change, these five aspects of context are crucial for success, or if you’re a primary care clinic, these three aspects are most important. This will allow people to make informed decisions about their implementations.
However, we can’t do these analyses without vast amounts of data. So we’re looking for sites that are currently implementing a change to share data with us. Sites that contribute data will receive a report from us that provides insights into the context of their own organization and identifies areas that could help them be more successful in making future changes. The more sites that contribute to the repository, the more accurate and reliable the findings. If you’re interested in becoming a contributor, email us at firstname.lastname@example.org.
Participating sites will be asked to complete additional surveys; this will take extra effort, but the long-term payoff will be huge for advancing the field and ensuring implementations run much more efficiently and successfully with fewer wasted resources.
With the right information, sites will be able to make informed choices about their implementations and focus their resources on the ones that can really have an impact.
For information on becoming a testing site for context assessment tools, or to learn more about contributing to the data repository, contact email@example.com.
– Interview conducted by Stephanie Schorow