APC Tech Blog

This is the technical blog of AP Communications Co., Ltd. (APC).

MLOps and AI Governance in Healthcare: Providence's Use Case

Preface

This session introduced the progress that Providence HealthCare has made over the past year, particularly focusing on the development of their AI/ML model marketplace and how they have operated it within Azure’s secure cloud environment.

The session was presented by Vivek, director of the data science division and data science steward, and consisted of a 30-minute presentation followed by a Q&A session.

Providence Healthcare is a non-profit healthcare system operating in seven western states of the United States. Throughout the session, it was impressive to see how their efforts are generating tangible outcomes and how they operate within a secure environment.

The session detailed case studies on how Providence HealthCare is implementing MLOps and AI governance, and propelling the development of the AI/ML model marketplace. Their practical approach and the resulting outcomes were thoroughly fascinating.

AI Strategy and Partnerships

The healthcare system is known for its inefficiencies and significant cost problems, and AI is expected to play a crucial role in addressing and transforming them. As a result, not only should operations become more efficient and productive, but patient outcomes and experiences are also expected to improve.

Many people in the healthcare industry have expressed serious concerns about introducing AI into collaborative work. At Providence, however, we are fortunate to have explicit messaging and leadership at the executive level. Our President and CEO, Dr. Rod Hochman, publishes a list of medical predictions every year. In 2018, his words prompted us to focus not only on individual trends but on the transformation of healthcare as a whole. With the COVID-19 pandemic continuing to have a sustained impact on our operations, his predictions are more relevant than ever.

This shows how the AI strategy and partnerships that Providence has built over time have helped overcome the industry's apprehension about AI, not only by building a robust system but also by adopting AI as a promising tool for solving serious problems.

The third part of this session explores how Providence established their AI Council and how they decide project priorities.


Collaboration with the AI Council

Providence faced the challenge of rallying the right people to identify projects aligned with the organization's strategy and to prioritize them. The ongoing goal is to empower the organized AI Council to allocate funding and resources to new projects.

As a solution, Providence established four AI Councils, each targeting a different problem domain and classified into four sectors. While these categories overlap like a Venn diagram, each specializes in a specific field of healthcare information processing.

4 Sectors of the AI Councils

The four equally important sectors are defined, and it is essential to remember that each progresses at a different pace:

  1. Patient Care Delivery: The field that requires the most careful and patient effort. Many regulations and ethical issues are involved in developing and implementing AI/ML models for safe and appropriate patient care.
  2. Interaction with Consumers: This sector is already operational, with a chatbot named "Grace" that assists patients in searching for appointments.

Through these case studies, we saw how Providence uses AI to improve healthcare, the crucial role the AI Council plays, and how its operation ties into the strategic prioritization of projects. It is interesting to see how these functions interact and contribute to comprehensive healthcare, promotion of care, and progress monitoring.

Model Risk Management and MLOps: A Case Study from Providence

Over the past year, Providence has made significant strides in implementing a robust MLOps framework within their secure Azure cloud environment. They achieved this by incorporating the AI/ML model marketplace into their workflow and integrating two key concepts, model risk management and serverless technology, into MLOps development.

Model risk management plays a central role in the creation of AI and machine learning models. Model risk refers to the potential for a model to make incorrect predictions, and model risk management is the set of measures put in place to mitigate that risk. Traditionally, the financial services industry has led this practice, but it is beginning to draw attention in other sectors, particularly healthcare.

Providence is committed to raising awareness of the importance of model risk management and incorporates it into every model development project. This cautious approach helps ensure that their AI/ML models operate reliably, which in turn enhances their trustworthiness and accuracy.
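
The session did not show implementation details, but to make the idea of a model risk gate concrete, here is a minimal sketch assuming an MLflow-based workflow: a candidate model is only registered for deployment if it clears a pre-agreed validation threshold. The model, the "readmission_risk" registry name, and the AUC threshold are illustrative assumptions, not Providence's actual criteria.

```python
# Hypothetical sketch of a model-risk gate: a candidate model is only registered
# for deployment if it clears a pre-agreed validation threshold. The data, the
# "readmission_risk" registry name, and the threshold are illustrative only.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

MIN_AUC = 0.80  # threshold agreed with risk-management reviewers (illustrative)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])

with mlflow.start_run() as run:
    mlflow.log_metric("val_auc", auc)         # record evidence for reviewers
    mlflow.sklearn.log_model(model, "model")  # keep the artifact either way
    if auc >= MIN_AUC:
        # Only models that pass the gate enter the registry used downstream.
        mlflow.register_model(f"runs:/{run.info.run_id}/model", "readmission_risk")
    else:
        print(f"Model rejected by risk gate: AUC {auc:.3f} < {MIN_AUC}")
```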

Another key component of Providence's MLOps strategy is the adoption of serverless technology. This approach relieves developers of server maintenance and lets them focus on developing business logic. Thanks to a partnership with Databricks, the development of predictive models in Providence's serverless environment is progressing steadily, continuously improving, and scaling smoothly.
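
No code was shown in the session, but as an illustration of what a serverless serving flow can look like on Databricks, the sketch below sends a scoring request to a Model Serving endpoint over HTTPS. The workspace host, endpoint name, feature names, and token variable are placeholders, not Providence's actual configuration.

```python
# Hypothetical example of calling a serverless Databricks Model Serving endpoint.
# Workspace host, endpoint name, and feature names are placeholders.
import os
import requests

DATABRICKS_HOST = "https://<your-workspace>.azuredatabricks.net"
ENDPOINT_NAME = "readmission-risk"  # illustrative endpoint name
TOKEN = os.environ["DATABRICKS_TOKEN"]

payload = {
    # "dataframe_records" is one of the JSON input formats accepted by
    # Model Serving invocation endpoints.
    "dataframe_records": [
        {"age": 67, "prior_admissions": 2, "length_of_stay": 5}
    ]
}

response = requests.post(
    f"{DATABRICKS_HOST}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. {"predictions": [...]}
```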

Providence's efforts to integrate model risk management and serverless technology into MLOps have led to significant improvements in the development and operation of AI/ML models. This integration points toward further breakthroughs in AI/ML model deployment across the broader healthcare sector.

The value of adopting model risk management and serverless technology in MLOps becomes more evident as Providence paves the way for better development experiences and operational agility. By applying these practices to the management and deployment of AI/ML models, the industry as a whole stands to benefit.

Open-Source Model Deployments

For those accustomed to OpenAI's APIs, sending an HTTP request and receiving a response that integrates seamlessly into an application comes easily. With open-source models, however, adjustments to this approach may be necessary.

At Providence, we're proactively addressing this challenge and investigating a collaboration with Databricks. The aim is to give developers a flow equivalent to the OpenAI API: they should be able to make API requests against open-source models and integrate the responses into their applications just as seamlessly. This strategy was a key topic in our recent meeting with FACA.
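
As a rough sketch of what an "OpenAI-equivalent" flow for open-source models can look like, the snippet below points the standard openai Python client at a Databricks serving endpoint that exposes an OpenAI-compatible chat API. The workspace host, token variable, and model name are assumptions for illustration and are not the specific integration Providence described.

```python
# Hypothetical sketch: calling an open-source model behind an OpenAI-compatible
# endpoint with the standard openai client. Host, token, and model name are
# placeholders, not Providence's actual setup.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url="https://<your-workspace>.azuredatabricks.net/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-70b-instruct",  # example open-source model endpoint
    messages=[
        {"role": "system", "content": "You help patients find appointment information."},
        {"role": "user", "content": "What documents do I need for a first visit?"},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```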

Fairness in AI: Non-Negotiable

AI/ML models should be fair in their operation. To achieve this, transparency is the key. It is vital to have a clear, open process that can be thoroughly reviewed from the creation to the deployment of AI/ML models.

At Providence, our commitment to providing fair, user-friendly AI/ML models is unwavering. Our robust MLOps framework and AI governance model help maintain fairness throughout the process. This is an essential part of our strategy and something we continuously prioritize.
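
The session stayed at the level of principles, but a simple, transparent fairness check of the kind described could look like the sketch below: comparing a model's recall across patient subgroups and flagging large gaps for human review. The column names, data, and threshold are illustrative assumptions.

```python
# Minimal illustrative fairness check: compare recall (true-positive rate) across
# subgroups and flag disparities for human review. Column names, data, and the
# 0.1 threshold are assumptions for illustration only.
import pandas as pd
from sklearn.metrics import recall_score

MAX_GAP = 0.10  # maximum tolerated recall gap between subgroups (illustrative)

# df is expected to hold ground truth, model predictions, and a subgroup column.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1, 0, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

recall_by_group = df.groupby("group").apply(
    lambda g: recall_score(g["y_true"], g["y_pred"])
)
gap = recall_by_group.max() - recall_by_group.min()

print(recall_by_group)
if gap > MAX_GAP:
    print(f"Fairness review needed: recall gap {gap:.2f} exceeds {MAX_GAP}")
```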

In this section, we overviewed how Providence is deploying open-source models and ensuring fairness in AI, as part of a larger discussion on MLOps and AI governance.

Multimodal AI and Regulatory Compliance

Implementation of multimodal AI, which leverages unstructured data, has been receiving significant attention in the world of healthcare AI. However, this implementation can come with considerable challenges from a regulatory compliance standpoint, and Providence was no exception.

In the session, Suhee Lee, a member of Providence's data team, shared her belief that the future of healthcare is multimodal, precision-based, vision-driven, and personalized. This means that different formats of data will be harnessed to personalize patient care and enhance its accuracy.

For example, GigaPath, the foundation model for digital pathology launched by Microsoft and Providence, is a multimodal model that integrates patients' text data. Approaches like these will play an increasingly critical role in AI-driven patient care.

However, these progressive initiatives come with significant regulatory compliance challenges. Regulations not only limit the use of AI but also provide guidance on how it should be developed and operated. To address this properly, Providence has worked to deepen its understanding of MLOps and AI governance and to build frameworks that support these principles.

In short, multimodal AI that leverages unstructured data contributes to accurate, personalized care in healthcare, but while taking advantage of these technical capabilities, it is vital to prioritize regulatory compliance. Providence's efforts are an example of overcoming these compliance challenges while delivering advanced healthcare solutions, and this success story surely offers valuable insights to other stakeholders in the healthcare industry.

Conclusion

This session detailed how Providence built a framework for MLOps and AI governance while dealing with AI compliance. Multimodal AI contributes to personalized healthcare, but regulatory compliance is essential for success. Regulations provide guidance on AI development and operational methods. Compliance with these should go hand in hand with AI advancements. Providence's efforts demonstrate their success in providing cutting-edge healthcare solutions while overcoming compliance challenges. This success story undoubtedly offers valuable insights to other healthcare providers.

About the special site during DAIS

This year, we have prepared a special site to report on session content and on-site coverage from DAIS! We plan to update the blog every day during DAIS, so please take a look.

www.ap-com.co.jp