AI Strategy for Healthcare Leaders: Defining What to Build, What to Buy, and How to Leverage Both

A recent OptumIQ report found that the share of healthcare leaders implementing an artificial intelligence (AI) strategy in their organization had risen 88 percent in the last year, reaching 62 percent. What’s more, nine in 10 leaders are confident they will see a return on their AI investment sooner than previously expected. While AI will never replace a practitioner, these tools are gaining traction in hospitals for their ability to augment human intelligence. These “Narrow AI” techniques address very specific applications, such as language translation, image recognition, and data analysis, easing the human cognitive load and improving outcomes.

For an example of how AI can empower clinicians, consider the application of the Jvion CORE™ to hospital-acquired pressure injuries. The solution is not focused on diagnosing or treating pressure injuries; its aim is prevention. The CORE identifies patients at risk of developing pressure injuries and recommends patient-specific interventions to head off the predicted outcome, addressing clinical and non-clinical factors such as nutrition optimization, mobilization, or skin care management. These recommendations let practitioners adjust patient care before a serious condition arises, improving outcomes and freeing them to concentrate their time and attention on other patients who need focused care.
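To make the pattern concrete, here is a minimal, hypothetical sketch of how a prevention-focused risk tool of this kind can be structured: score each patient against a handful of risk factors, then map the factors that drive the score to candidate preventive interventions. The feature names, thresholds, and intervention mappings below are illustrative assumptions, not the Jvion CORE's actual logic.

```python
# Hypothetical sketch of a prevention-focused risk workflow (not the Jvion CORE).
# Feature names, thresholds, and intervention mappings are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Patient:
    mobility_score: int      # 1 (bedbound) to 4 (fully mobile); assumed scale
    albumin_g_dl: float      # serum albumin as a crude nutrition proxy
    moisture_exposure: bool  # incontinence or excessive skin moisture
    braden_score: int        # standard pressure-injury risk scale (6-23)


# Map each contributing risk factor to a candidate preventive intervention.
INTERVENTIONS = {
    "low_mobility": "two-hour repositioning and early mobilization",
    "poor_nutrition": "dietitian consult and protein supplementation",
    "moisture": "moisture management and barrier-cream protocol",
}


def assess(patient: Patient) -> dict:
    """Return a coarse risk flag plus the patient-specific drivers behind it."""
    drivers = []
    if patient.mobility_score <= 2:
        drivers.append("low_mobility")
    if patient.albumin_g_dl < 3.5:
        drivers.append("poor_nutrition")
    if patient.moisture_exposure:
        drivers.append("moisture")

    # Combine a standard bedside scale with the driver count into one flag.
    at_risk = patient.braden_score <= 14 or len(drivers) >= 2
    return {
        "at_risk": at_risk,
        "recommended": [INTERVENTIONS[d] for d in drivers] if at_risk else [],
    }


if __name__ == "__main__":
    example = Patient(mobility_score=1, albumin_g_dl=3.1,
                      moisture_exposure=True, braden_score=12)
    print(assess(example))
```

In practice the scoring step would be a trained model rather than hand-set thresholds, but the shape of the output, a flag plus the drivers behind it and the interventions they suggest, is what makes the tool actionable at the bedside.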

Every healthcare organization’s use of AI will be different, but given the range of options available, chances are every organization stands to benefit in some way from AI tools. But how do organizations identify which tools are right for them? Should they build a custom tool to meet their needs, or invest in existing tools on the market? What resources should be allocated to AI implementation efforts, and how do organizations get buy-in from all stakeholders?

Define your mission — and find the right solution to get it done

Defining what the organization needs to achieve is the first step toward identifying where and how an AI solution fits within the healthcare ecosystem. For some organizations, a top-down approach to defining the mission will be an ideal driver to focus attention and resources on making the AI deployment successful. Others may want to source input from throughout the organization. Ask around: what regulatory or compliance failures are we scrambling to redress? Were there any recent crises that could have been avoided with better foresight? Where do we have clinical workflows that flag 90 percent of patients as “at risk”? This is the kind of thinking that helps an organization define its mission in implementing clinical AI.
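That last question is often the easiest to answer with a quick audit of historical data: if an existing rule flags nearly every patient, it gives clinicians no way to prioritize. Below is a short, hypothetical sketch of such an audit; the column names and toy data are assumptions for illustration only.

```python
# Hypothetical audit of an existing "at risk" rule against historical outcomes.
# Column names and the toy data are assumptions for illustration only.
import pandas as pd


def audit_flag_rule(df: pd.DataFrame, flag_col: str, outcome_col: str) -> dict:
    """Summarize how often a rule fires and how often a flag is actually
    followed by the adverse outcome it is meant to predict."""
    flag_rate = df[flag_col].mean()                       # share of patients flagged
    precision = df.loc[df[flag_col], outcome_col].mean()  # outcome rate among the flagged
    base_rate = df[outcome_col].mean()                    # outcome rate overall
    return {"flag_rate": flag_rate, "precision": precision, "base_rate": base_rate}


if __name__ == "__main__":
    # Toy history: nine of ten patients flagged, two actual events.
    history = pd.DataFrame({
        "flagged_at_risk": [True] * 9 + [False],
        "developed_pressure_injury": [True, True] + [False] * 8,
    })
    print(audit_flag_rule(history, "flagged_at_risk", "developed_pressure_injury"))
```

A rule that flags 90 percent of patients while only a fraction of those flagged ever develop the outcome is barely better than flagging everyone; numbers like these make the case for a more discriminating model far more persuasively than an abstract pitch.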

In some cases, somebody else in your clinical business will already have launched a program that addresses your mission. But more often than not, no such program will exist yet. Locate the colleagues who are struggling and learn what they need to become effective at achieving the mission. If you can find a vendor that provides the model you need, great. If you have the resources to build your own solution, even better. It doesn’t have to be perfect, but it can show your colleagues how much better their lives could be with an AI solution supporting their decision-making.

Of course, the cost will be another important consideration. Developing a custom-built, in-house AI model can be expensive, requiring significant investment in data science resources to build from the ground up. Alternatively, administrators can choose from a range of solutions available on the market. But bear in mind that the AI model itself will only account for a third of the total deployment cost. The rest of the expenses will come from training and retraining care delivery staff to make effective use of the AI model.
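As a rough illustration of that rule of thumb, a back-of-the-envelope comparison can be sketched in a few lines; all of the dollar figures below are hypothetical placeholders, not benchmarks.

```python
# Hypothetical build-vs-buy comparison using the rule of thumb above: the model
# itself is roughly a third of total deployment cost, and the remaining two
# thirds go to training and change management. Dollar figures are placeholders.

def total_deployment_cost(model_cost: float, model_share: float = 1 / 3) -> float:
    """Scale the model cost up to an estimated total deployment cost,
    given the share of the budget the model itself represents."""
    return model_cost / model_share


if __name__ == "__main__":
    scenarios = {
        "build in-house": 900_000,   # assumed data science build cost
        "buy from vendor": 400_000,  # assumed license and integration cost
    }
    for label, model_cost in scenarios.items():
        total = total_deployment_cost(model_cost)
        print(f"{label}: model ${model_cost:,.0f} -> estimated total ${total:,.0f}")
```

Whichever path is chosen, the point is that the training line item dwarfs the model line item, which is why the next section focuses on people rather than algorithms.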

Get your staff on board and keep them engaged

Training your people will ultimately be harder than training your AI models; models don’t need change management, but people do. Do not make the mistake of skipping training, or of training staff once and crossing it off the list. Poor training practices will inevitably result in frustration and miscues that will torpedo your AI deployment. Don’t rely on your AI vendor to train your care delivery workflow users either; making sure your organization gets the most out of the AI tool is your responsibility, not the vendor’s.

Another common mistake is to overemphasize the “AI” or “machine learning” (ML) aspects of the program. Focusing on these “innovative” qualifiers can make people apprehensive that the technology is too new or unproven, or worse, intended to replace them. Focus on the outcomes, not the technical details. If you’re replacing a prior model, training should be presented as a “refresher” or “in-service”.

Finally, no matter how complex the math or insightful the output, AI projects without a standard-bearer to champion their deployment are doomed to fail. Don’t underestimate the value of a senior leader within the workflow who can passionately advocate for your AI tool. People detest changes they can’t control — you will need an “inside operator” to help clinicians, nurses, or admin staff understand how the AI will make their jobs easier and drive better outcomes for their patients.
