Summary
Product Usage: Inferscience is used as a sidecar within athenahealth to suggest diagnosis codes and highlight missed areas during patient visits, predominantly for new providers.
Strengths: Inferscience seamlessly integrates with athenahealth, offers effective clinical specificity, supports risk adjustment efforts and is cost-effective for small organizations.
Weaknesses: The tool wasn’t optimized for pre-visit or chart prep usage, took too long to load outside of the patient visit context, and sometimes produced irrelevant or illogical suggestions.
Overall Judgment: Inferscience’s integration with athenahealth and its usefulness for new providers were significant advantages, but limitations in its utility for chart preparation and occasional inaccuracies in its suspecting algorithm led to the eventual use of an internally developed solution.
Review
Today, we’re talking about Inferscience and how it’s used at your company. Before we do that, could you give a brief overview of the company and your role there?
I’m responsible for building out part of our population health programs and our clinical documentation. This involves everything from developing workflows and tooling to meet specific goals to leveraging partnerships and working with the tech team to implement the necessary tools and solutions. I also troubleshoot any issues that arise and focus on scaling. Essentially, my role involves strategizing and aligning with the overall business strategy of population health in a value-based organization.
What was the core business need that drove you to look for a product like Inferscience?
We were looking for a solution to address the core business problem of accurate clinical documentation in the context of a fully risk-based model. The accuracy of our documentation is crucial for fitting into the risk adjustment model effectively. We considered building a solution internally, but we recognized that it would be time-consuming, so we were actively seeking an off-the-shelf solution.
We were specifically looking for a tool that could bring in past diagnoses that had been coded the previous year, which falls under the revalidation aspect of clinical documentation. Additionally, we needed a tool that had a suspecting algorithm attached to it, meaning that it could detect potential conditions based on existing diagnoses and patient information. For example, if a patient had congestive heart failure and was taking a diuretic, the tool should be able to suggest a secondary condition like hyperaldosteronism for provider workup and confirmation. It was crucial for the tool to provide a high level of clinical specificity and support risk adjustment efforts. These two aspects were the most important criteria for us.
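The kind of suspecting logic described here can be sketched as a simple rule-based check. This is a hypothetical illustration of the concept only, not Inferscience’s actual implementation; the rules, condition names, and medication classes are assumptions built from the example given above:

```python
# Hypothetical sketch of a rule-based suspecting algorithm: each rule pairs
# an existing diagnosis with supporting evidence (e.g., a medication class)
# and yields a suspected condition for provider workup and confirmation.
# This is illustrative only, not Inferscience's actual logic.

RULES = [
    # (documented diagnosis, medication class, suspected condition)
    ("congestive heart failure", "diuretic", "hyperaldosteronism"),
    ("diabetes mellitus", "ace inhibitor", "diabetic nephropathy"),
]

def suspect(diagnoses, medications):
    """Return suspected conditions implied by the patient's chart."""
    dx = {d.lower() for d in diagnoses}
    meds = {m.lower() for m in medications}
    return [
        suspected
        for required_dx, required_med, suspected in RULES
        if required_dx in dx
        and required_med in meds
        and suspected not in dx  # don't re-suggest confirmed conditions
    ]

print(suspect(["Congestive heart failure"], ["Diuretic"]))
# -> ['hyperaldosteronism']
```

In practice such rules would also need a refresh cadence and clinical review, which is exactly where the criteria below (frequent refresh, clinical specificity) come in.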
We also wanted a solution that could easily integrate with athenahealth, since our need was immediate and we needed a quick implementation. It also had to refresh frequently so the algorithm stayed current; we didn’t want a monthly batch processing system. Lastly, it was important for the tool to be user-friendly for clinicians.
Which other vendors did you consider, and how did they compare to Inferscience?
We did consider a couple of other vendors. We evaluated them based on their integration capabilities with athenahealth. The biggest advantage Inferscience had was its seamless integration with athenahealth. It appeared as a toaster notification or a sidecar in the athenahealth interface, which was crucial for our providers; we knew they wouldn’t want to navigate to a different webpage or use a different product.
Integration with other vendors was also projected to take around three months, which was not feasible for us. In terms of algorithm validity, we weren’t too concerned as they seemed to work well across the board.
Pricing-wise, some vendors priced their solutions at the organization level, which was expensive for us since we’re a smaller organization. Inferscience, on the other hand, offered a pricing model based on the number of providers, which was more cost-effective for us.
How would you describe your experience with the overall sales process?
The initial sales process with Inferscience went smoothly. They were easy to work with, although their small team size sometimes led to delays in responding and completing tasks. However, for the most part, it was straightforward to get the necessary documents signed.
One issue we encountered was that, at the time, they were not SOC 2 compliant, which was a significant concern for us. We faced the same challenge with other similar suspecting sidecar tools. We pushed them on this, and we implemented additional safeguards on our end, but it still took some time before we were comfortable moving forward. Despite this, the process of getting a contract and ensuring the business associate agreement was well thought out and executed was relatively easy.
What was the onboarding and implementation process with Inferscience like?
Onboarding and implementation with Inferscience were also quite smooth. All we needed to do was supply our providers with the link to add the Chrome extension and use it within athenahealth. It was a simple task, and overall, the onboarding process was easy to complete.
How does the product actually function within the context of a clinician’s workflow?
The way the product is designed for use is that it pops up during the visit within the athenahealth interface as a toaster tab or a sidecar. Through a Chrome extension, it suggests diagnosis codes and highlights things that the clinicians may have missed. This could be related to coding from the previous year or assessment diagnoses. Clinicians can easily interact with the suggestions by clicking “Yes” or “No.” If they click “Yes,” the plugin automatically adds the diagnostic code into athenahealth without the need for dragging or manual entry. After that, they can continue with the visit and close it out.
The tool is designed specifically for use during patient visits. The healthcare provider opens the patient’s chart right in the tool while they’re in the room with the patient. However, we would have preferred to use it during chart prep to get more value out of it. The tool is directly integrated with athenahealth and is available on the athenahealth Marketplace. This means that it can use all the information in the patient’s charts, and it includes OCR technology that allows it to process PDFs. It can also work with C-CDA files. One of the nice features of the tool is that it tells you where the information came from, whether that’s a specific cardiologist consultation note or a C-CDA file.
What was your overall experience with the Inferscience tool like?
For some of our newer providers, the tool was actually quite helpful in helping them become familiar with coding in a value-based organization. Overall, I wouldn’t say it was a bad experience. It was net positive; we just eventually developed something better. In our organization, we place strong emphasis on chart prep and doing preparatory work before the visit. This helps us identify what needs to be reviewed, what can be improved, and what preventive actions can be taken prior to the visit. Our goal is to have providers go into every visit with a strong understanding of their patients, rather than being presented with random diagnoses they haven’t considered before, which is what Inferscience sometimes did. Unfortunately, the tool was not optimized for use during chart prep, so it took a long time to load outside of that specific context. I’ve received emails indicating that Inferscience has made improvements, but during the time we were using it, it was challenging to use effectively in our workflow.
What made it difficult to leverage Inferscience during pre-visit or chart prep? Was it solely due to loading issues, or were there other restrictions that caused difficulties?
The issue had to do with the way their API was set up, I believe, so the loading time was untenable. It would take around 20 minutes for it to load if we attempted to use the functionality during chart prep, and sometimes it simply wouldn’t load at all. The only way to access it was in visit mode within athenahealth, which was really inconvenient.
The suspecting algorithm was also a bit strange at times. It seemed as though things weren’t always built very logically from a clinical perspective. This would often create more work because we had to spend time figuring out whether the suggestions were actually useful or whether they should be dismissed. All suspecting algorithms have some level of garbage in them, but it seemed like there was a bit too much unnecessary information sometimes.
Apart from the lack of fit with the chart prep use case, how well did the product perform? What were its relative strengths and weaknesses overall?
I’ll break this down into two perspectives. For providers who are already well versed in documentation and knowledgeable about the value-based care model, the product didn’t work that well. For providers who were new to our model and were used to a fee-for-service approach, Inferscience was helpful because it would surface things that the providers hadn’t thought about and pull information from PDFs they perhaps hadn’t reviewed.
Overall, if we combine both groups of providers, I would say that the product was around 60–70% helpful. However, there was still that remaining 30–40% of cases where it wasn’t that useful.
Do you have feedback on any of the other core features of the tool?
They brought on Carequality at one point to feed through diagnoses, but that wasn’t very useful for us. We work with Particle and have our own local healthcare information exchange access, so the feature was redundant. I think most value-based care organizations already have that capability, and it seemed like we might eventually have to pay more for something we’d never use. The functionality might help a very small health system, but we viewed it as an interesting choice. I believe those were pretty much all the features they offered.
What happens after the codes are displayed in the sidecar? Is it always up to the clinician to decide what to include or exclude, and is there a feedback loop to provide input and close the loop?
Yes, there was a feedback loop in place. The clinician would use the Chrome extension and click “Yes” to populate the selected code into athenahealth. They would then need to add the necessary clinical information and provide evidence for that ICD-10 code.
If a provider clicked “Yes,” it would be recorded in the usage metrics collected by Inferscience and on our end as well. This allowed us to track whether the loop was closed or not. I think our providers’ decisions were also fed back into their algorithm, although I’m not certain to what extent they did so.
How useful is the product over time, specifically when working with the same patient?
In theory, the product should become more useful over time. As providers start saying “Yes” to certain suspected diagnoses, more relevant diagnoses are prompted based on that information. On the other hand, if providers consistently say “No” to certain suspected diagnoses, Inferscience should stop prompting those codes, which is also useful.
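The feedback behavior described here could work something like the following sketch. This is an assumption about the mechanism, not documented Inferscience behavior; the decline threshold is a hypothetical parameter:

```python
# Hypothetical sketch of the yes/no feedback loop: suggestions a provider
# has repeatedly declined for a patient are suppressed on later visits.
# The threshold is an assumed value, not a documented Inferscience setting.
from collections import defaultdict

DECLINE_THRESHOLD = 3  # assumed number of "No" clicks before suppression

class SuggestionFilter:
    def __init__(self):
        # (patient_id, code) -> count of times the provider clicked "No"
        self.declines = defaultdict(int)

    def record(self, patient_id, code, accepted):
        """Log a provider's Yes/No decision on a suggested code."""
        if not accepted:
            self.declines[(patient_id, code)] += 1

    def filter(self, patient_id, suggested_codes):
        """Drop codes the provider has consistently dismissed."""
        return [
            code for code in suggested_codes
            if self.declines[(patient_id, code)] < DECLINE_THRESHOLD
        ]

f = SuggestionFilter()
for _ in range(3):
    f.record("pt1", "E11.9", accepted=False)  # declined three times
print(f.filter("pt1", ["E11.9", "I50.9"]))
# -> ['I50.9']
```

A real system would also need per-provider versus per-patient scoping and a way to resurface codes when risk adjustment resets each year, as noted below.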
Another consideration is that risk adjustment resets every year. Although we didn’t use Inferscience for more than a year, the idea is that it would prompt us with the same codes again the following year, simplifying the process.
Once you decided that Inferscience wasn’t meeting your needs, what solution did you decide to implement?
While we were using Inferscience, athenahealth started developing its own version of this product, which is now seamlessly integrated into the existing athenahealth workflow. That’s made it much easier for everyone involved; we’re now able to input our own data into the system. We hired someone to build a suspecting algorithm, which we’ve incorporated into the solution. We feed in diagnoses through our Particle integration; we can access all of our claims data directly through athenahealth and incorporate any claims data we want; and we can feed in information from other HIEs that aren’t part of Particle.
We can also utilize more advanced algorithms through the platform. Overall, it has become a lot more user-friendly and convenient given that it’s directly integrated into athenahealth without requiring any additional development on our end.
Did the switch to the new solution result in any differences in terms of overall cost and quality of implementation?
Because the new solution is integrated into athenahealth, we do not have to pay for it; it’s already included in our athenahealth subscription. We did incur some additional costs when hiring someone to build the algorithm for us. In terms of quality, we now have much more data being directly inputted into the system, including claims data and data from our own organization. If our CML identifies any missed factors, we can easily make changes to the suspecting algorithm. This level of flexibility and control is something we did not have with Inferscience.
How much effort does your team have to put in on an ongoing basis to maintain and update the algorithm?
Maintaining the algorithm doesn’t seem to require a significant amount of effort, but there are certain tasks that must be completed. For example, if there are changes to coding or regulations, we need to update the algorithm. Additionally, when incorporating claims data, some data cleaning and matching are required to ensure accuracy. Thankfully, since the infrastructure is hosted externally, we don’t have to worry about any potential crashes resulting from anything we do.
How would you compare the user interface of your current solution to Inferscience, specifically in the context of the chart prep use case you mentioned?
The current solution works much better. It’s significantly easier to use, especially within our current setup in athenahealth. Our providers can seamlessly incorporate it during chart prep without any delays, and the information is readily available within the patient’s chart. That’s especially beneficial for providers who see a high volume of patients. Although we are not fee-for-service, time is still valuable, and it’s a relief not to have to deal with delays.
As your technology stack evolves and your clinicians gain more experience with the value-based care model, how do you envision the role of this specific solution evolving over time?
Initially, this solution served as both an educational tool and a workflow aid for new clinicians. It helped them understand codes and clinical pathways, saving time and providing guidance. As clinicians gain more experience and become proficient in documentation and workup, this solution becomes a time-saver. They can quickly identify and populate relevant information based on the recommendations provided by the tool, allowing them to focus on the patient during the visit.
In future versions of the solution, it could even serve as a tool for preventative care. For example, based on a patient’s history and risk factors, the solution could suggest additional screenings or diagnoses to prevent potential conditions like diabetes. This potential for prevention and improvement in clinical workup not only benefits patient health but also has revenue implications for our organization.
As technology changes and new tools come to market that specifically affect clinical documentation, how do you envision these tools interacting with a solution like the one we’ve been discussing?
We’re exploring generative AI scribing, such as Ambience, which could potentially impact chart prep and note-taking in the future. However, for now, the solution we discussed still requires manual chart prep and diagnosis by the provider. As technology evolves, generative AI may become able to handle tasks like chart prep, suspecting diagnoses and suggesting items for providers to look at, but a provider’s involvement and expertise in diagnosing and documenting will still play a crucial role. Providers cannot simply rely on automated tools to generate diagnoses for the purpose of increasing risk scores—that would be illegal. So, while the solution might evolve and become more user-friendly, it will always require the active participation of clinicians.
What was the overall quality and effectiveness of the Inferscience integration with athenahealth like?
From an integration standpoint, Inferscience was well integrated with athenahealth. However, we faced limitations in terms of accessing their APIs for data usage and tracking, which made it challenging to extract usage metrics from their backend. That was a frustrating limitation. On the other hand, integration with the athenahealth side of things was smooth and didn’t require much time or effort.
What was your overall experience with Inferscience’s support and account management team like?
Overall, our experience was positive. The customer success team was fantastic; our account representative and salesperson were great to work with. However, when we started requesting technical and algorithm changes, that went beyond their capabilities. It’s understandable that they couldn’t accommodate all of our requests; we may have been asking for too much at times. It’s also worth noting that their customer success team was relatively small, so if someone was out, there wasn’t always immediate coverage. They may have scaled up since then, so this could have changed.
What specific changes were you exploring, and can you now seamlessly implement those changes since you have brought the tooling in-house?
One of the major changes we were exploring was the ability to access and track metrics through Inferscience’s APIs using our big data system, which is hosted outside athenahealth. However, that request was not feasible, and we would have had to bear the cost ourselves, which was not a viable option.
We also asked for some algorithmic changes to align with our coding practices, but those requests were not accommodated either. Now that we’ve brought the tooling in-house, we have full control and integration within our system. Our system effectively integrates with athenahealth, allowing us to easily extract search metrics and make any necessary changes.
In hindsight, do you believe that moving forward with Inferscience earlier this year was the right decision?
Yes; at the time, moving forward with Inferscience was the right decision for us. We didn’t have an internal solution in place, and Inferscience served as a valuable stopgap solution for our needs.
What areas for growth would you highlight for the team at Inferscience?
The algorithm could have been more fine-tuned, and it would have been beneficial to be able to use the tool throughout all phases of the note, even after the visit. We couldn’t leverage it that way because it wasn’t seamlessly adaptable to the various workflows that exist in both fee-for-service and value-based care organizations.
What advice do you have for buyers who are selecting and implementing similar tooling, specifically when it comes to onboarding physicians onto the platform?
My advice would be to closely examine the workflow and understand where the tool will be utilized. It’s crucial to conduct thorough user workflow mapping to ensure that the tool is seamlessly integrated into the appropriate stages of the workflow. We faced challenges with Inferscience because we hadn’t done enough of that mapping beforehand. Given the time constraints we were under, we may not have been able to make use of another tool anyway, but understanding the optimal placement of a tool within the workflow is crucial to its effectiveness and usability.