Artificial intelligence’s integration into healthcare is poised to redefine care delivery, management, and patient experiences. With the sector’s immense impact on individual lives and the global economy, the stakes for deploying AI are incredibly high, requiring careful navigation of healthcare’s unique complexities—such as stringent data privacy regulations, ethical considerations, and the need to build trust among providers and patients.
At Avasant’s recent Empowering Beyond Summit 2024, a panel of distinguished experts from across the healthcare industry convened: Luke Rockenbach, Regional CFO, Providence Health; Dhanusha Maniyambath, Senior Director of Information Technology, L.A. Care Health Plan; Anand Nair, Healthcare Lead, UST; and Raj Kadam, SVP of Health Plan Operations and CIO, Liberty Dental. They discussed these challenges and opportunities, sharing real-world examples and offering insights into the future of AI in healthcare, which this article explores.
AI in Healthcare
AI is already making significant inroads into healthcare, demonstrating its potential to enhance both diagnostic accuracy and operational efficiency. One example, shared by Maniyambath, is the use of AI in mammography as a “second pair of eyes” to assist radiologists in detecting breast cancer. By providing an additional review of medical images, AI augments human expertise, helping to catch abnormalities that a radiologist alone might miss, thereby improving diagnostic accuracy and reducing the likelihood of human error.
Beyond diagnostics, AI is also transforming operational processes within healthcare organizations. Kadam shared a successful implementation of AI in Liberty Dental’s call center, where conversational AI now handles routine inquiries, such as claim status and benefit information.
“We have conversational AI now in three languages, and so far, we have been able to contain calls at 15%,” Kadam noted, with the goal of reaching up to 40% as more use cases are added. This implementation not only streamlines operations but also improves patient satisfaction, particularly among older age groups who have shown a preference for interacting with AI over human agents.
Additionally, Kadam discussed the use of AI in care management, where generative AI has enabled a shift from generic cohort-based care to highly personalized care plans. By analyzing genetic data, family history, and detailed patient notes, AI can now help care managers create precise, individualized care plans in a fraction of the time it used to take.
“Our care managers used to take about an hour to 90 minutes to create a care plan; now, with the help of generative AI, it gets created right away,” Kadam shared. These examples illustrate how AI is not just a theoretical tool but a practical solution that is already making a difference in healthcare delivery.
Challenges in the Implementation Process
Implementing AI in healthcare comes with unique challenges, largely due to the sensitive nature of patient data and the ethical considerations involved in AI-driven decisions. Strict data privacy regulations, like HIPAA, require healthcare organizations to be meticulous about where and how AI technologies are deployed, ensuring that patient data is securely stored and protected from cybersecurity threats. Additionally, integrating AI ethically into healthcare is vital, as it must complement the training and practices of healthcare providers. Given the potential for severe consequences if AI recommendations are inaccurate, a cautious and deliberate approach is necessary. Building trust among providers, patients, and stakeholders is essential to ensure AI is used responsibly and effectively in healthcare.
The introduction of AI in healthcare can change the roles and responsibilities of healthcare providers, potentially leading to concerns about job displacement or changes in clinical workflows. There is also a need for ongoing training and education to ensure that healthcare professionals are equipped to work alongside AI technologies effectively. Ensuring that the workforce is prepared for these changes and addressing any concerns about job security are critical challenges. Patients may be skeptical or concerned about the use of AI in their healthcare, especially regarding the potential for errors, privacy breaches, or the impersonal nature of AI-driven care.
Overcoming These Challenges
To overcome the challenges of AI implementation in healthcare, organizations are adopting strategies that emphasize careful planning, stakeholder engagement, and ethical governance. One key approach is the incremental testing of AI technologies. Mayo Clinic, for example, has been at the forefront of integrating AI into healthcare while ensuring rigorous governance and ethical oversight. It has established the Mayo Clinic AI Council, which oversees AI initiatives across the organization to ensure that AI technologies align with its ethical standards and clinical goals. The council includes senior leaders from across the organization and focuses on transparency and patient safety. Mayo Clinic also emphasizes incremental testing, piloting AI tools in specific departments before scaling them across the entire organization. This method allows the organization to refine the technology and address any issues before broader deployment, while also building confidence among early users who can champion the technology and encourage wider acceptance.
Another effective strategy for overcoming the challenges of AI implementation in healthcare is fostering interdisciplinary collaboration and partnerships. This approach brings together experts from various fields, such as medicine, data science, ethics, and technology, to work collaboratively on AI initiatives. By leveraging this diverse expertise, healthcare organizations can ensure that AI tools are not only technologically sound but also clinically relevant and ethically responsible. For example, the Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science (CCDS) was established to promote the integration of artificial intelligence within the Partners Healthcare System. This initiative brings together clinicians, data scientists, and researchers who collaborate to develop and implement AI technologies designed to meet the unique needs of healthcare. By drawing on the combined expertise of these two leading hospitals, the CCDS aims to advance the use of AI in improving patient care, enhancing diagnostic accuracy, and optimizing clinical workflows.
The center’s work underscores the importance of bringing together diverse expertise to address the complex challenges of AI implementation in healthcare. By fostering innovation and ensuring that AI solutions align closely with the practical needs of clinicians and patients, the CCDS is helping to pave the way for more effective and responsible use of AI across the Partners Healthcare System.
Additionally, the development of robust governance structures is essential to ensure that AI technologies are deployed responsibly. Organizations like Cleveland Clinic have established AI advisory boards, including ethics councils, to oversee their AI initiatives. Cleveland Clinic also became a founding member of the AI Alliance, an international community of researchers, developers, and organizational leaders working to advance safe and responsible AI that benefits society.
These councils, often led by senior medical and ethical officers, ensure that AI applications are used transparently and with a strong focus on patient safety. Rigorous quality assurance processes, as seen at Cleveland Clinic, are also critical, as they minimize the risk of errors and build trust in AI systems. By dedicating significant resources to testing and validation, these healthcare organizations are ensuring that their AI solutions are reliable, effective, and aligned with ethical standards, ultimately paving the way for successful implementation in the healthcare sector.
Conclusion
Looking ahead, the potential for AI to revolutionize healthcare is immense: a future in which AI plays a central role in delivering personalized care, enhancing patient satisfaction, and significantly reducing healthcare costs. AI could enable highly tailored treatment plans by analyzing genetic data, medical history, and even social determinants of health, leading to more precise and effective interventions. As AI continues to evolve, it can streamline administrative tasks, allowing healthcare providers to focus more on patient care and improving the overall patient experience. Additionally, AI’s ability to predict and prevent costly medical events, such as emergency room visits, holds promise for reducing healthcare expenditures on a large scale.
However, realizing this transformative potential will require ongoing collaboration among healthcare providers, technologists, and policymakers. Continuous innovation and ethical oversight will be crucial to ensure that AI is used responsibly, maximizing its benefits while safeguarding patient trust and safety. The advancements made today will shape the healthcare landscape of tomorrow, with AI playing a crucial role in improving health outcomes and making care more accessible and efficient for all.
By Shirvana Bachu, Avasant