The explosion of AI technology, specifically generative AI, presents possibilities for organizations in every industry. But, like any new technology, it also exposes organizations to risk. In the healthcare industry, those risks can be especially daunting if not effectively managed and mitigated.
Creating a solid AI governance plan is a vital step for any healthcare organization implementing AI technology. To help organizations measure the effectiveness of these plans, HITRUST recently launched an AI Risk Management Assessment, which tests a company’s governance and controls related to AI implementation.
The AI Risk Management Assessment creates the foundation for clear communication of AI strategies to management, boards of directors, and external stakeholders. HITRUST’s comprehensive AI risk management controls are based on extensive research, collaboration with industry-leading working groups, and analysis of AI best practices, emerging technologies, and leading standards. HITRUST harmonized these controls to make them inclusive of the widely recognized International Organization for Standardization/International Electrotechnical Commission guidance on AI risk management (ISO/IEC 23894:2023) and the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF). The HITRUST controls provide clear, prescriptive definitions of AI policies, procedures, and implementations that can be measured and evaluated.
Building on the AI Risk Management Assessment launch, HITRUST also offers an AI Security Assessment and accompanying certification. This innovative solution helps address the unique security challenges of AI systems and is tailored specifically for organizations deploying AI technologies.
From an industry perspective, AI is being aggressively incorporated into healthcare through electronic health record vendors and other healthcare technology companies in a variety of ways. For example, AI algorithms have demonstrated proficiency in identifying breast cancer, often matching or surpassing human radiologists. A substantial study involving nearly 500,000 participants in Germany revealed that AI is as effective as clinicians in interpreting mammograms, and researchers at Northeastern University developed an AI model that achieved a 99.72% accuracy rate in breast cancer detection.
As AI technologies continue to enhance healthcare, organizations and their vendors and clients are asking questions and demanding certainty about how risk is managed and how data is secured from a HIPAA perspective, particularly in light of the Department of Health and Human Services’ proposed enhancements to the HIPAA regulation.
Right now, AI regulations are in their infancy, which means they are vague, inconsistent, and continually changing. Globally, collaborative efforts are underway to harmonize AI governance. But while the European Union (EU) has established a structured regulatory environment, the U.S. continues to take a decentralized approach with varying state-level regulations.
Following is a breakdown of some of the formal and proposed regulations in the EU and the U.S.
The EU Artificial Intelligence Act came into force on Aug. 1, 2024. It establishes a comprehensive framework and categorizes AI applications into four risk levels: unacceptable, high, limited, and minimal.
A state-by-state approach reflects the absence of a comprehensive federal AI regulatory framework in the U.S. Following are examples of three states that have enacted legislation.
Organizations should get a handle on AI governance now, as the developments and advances in technology will continue to grow and change. The HITRUST AI security control set can help manage and address current risks and regulations while setting a foundation to help prepare for emerging technology advances, risks, and regulatory uncertainty.
Whether an organization develops, deploys, and markets AI systems to other entities or integrates AI directly into its own products, undertaking the HITRUST AI Security Assessment and achieving certification verifies adherence to the highest standards of AI and cybersecurity risk management. HITRUST certification also signals an organization’s commitment to security and trustworthiness to stakeholders. Some of its key features include:
An AI risk management assessment or security certification can benefit a variety of stakeholders in several ways.
With AI at the forefront of innovation, it’s essential to determine whether the HITRUST AI certification aligns with an organization’s needs. Organizations can ask:
If the answer is “yes” to either question, exploring an AI assessment or certification could be a valuable next step, but organizations should consider the following.
The HITRUST certification offers a wide variety of benefits, but it might seem daunting to navigate the ins and outs of the process, especially when organizations are already stretched thin. Working with an organization that understands the nuances of HITRUST – like Crowe – can help organizations plan for the certification in a way that considers their specific needs and goals.