AI Security Assessments With HITRUST

FAQ

Erika L. Del Giudice, Jared Hamilton
3/14/2025

The HITRUST AI Security Assessment can help healthcare organizations manage new AI-related regulatory obligations. Our team answers your questions.

The explosion of AI technology, specifically generative AI, presents possibilities for organizations in every industry. But, like any new technology, it also exposes organizations to risk. In the healthcare industry, those risks can be especially daunting if not effectively managed and mitigated.


How can frameworks help manage AI risk?

Creating a solid AI governance plan is a vital step for any healthcare organization implementing AI technology. To help organizations measure the effectiveness of these plans, HITRUST recently launched an AI Risk Management Assessment, which tests a company’s governance and controls related to AI implementation.

The AI Risk Management Assessment creates the foundation for clear communication of AI strategies to management, boards of directors, and external stakeholders. HITRUST’s comprehensive AI risk management controls are based on extensive research, collaboration with industry-leading working groups, and analysis of AI best practices, emerging technologies, and leading standards. HITRUST harmonized these controls with widely recognized guidance, including the International Organization for Standardization/International Electrotechnical Commission guidance on AI risk management (ISO/IEC 23894:2023) and the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF). The HITRUST controls provide clear, prescriptive definitions of AI policies, procedures, and implementations that can be measured and evaluated.

Building on the AI Risk Management Assessment launch, HITRUST also offers an AI Security Assessment and accompanying certification. This innovative solution helps address the unique security challenges of AI systems and is tailored specifically for organizations deploying AI technologies.

How is AI being incorporated into healthcare?

From an industry perspective, AI is being aggressively incorporated into healthcare through electronic health record vendors and other healthcare technology companies in a variety of ways. For example, AI algorithms have demonstrated proficiency in identifying breast cancer, often matching or surpassing human radiologists. A substantial study involving nearly 500,000 participants in Germany found that AI is as effective as clinicians in interpreting mammograms, and researchers at Northeastern University developed an AI model that achieved a 99.72% accuracy rate in breast cancer detection.

As AI technologies continue to enhance healthcare, organizations and their vendors and clients are asking questions and seeking assurance about how AI-related risk is managed and how data is secured from a HIPAA perspective, including under the proposed enhancements to HIPAA regulations from the U.S. Department of Health and Human Services.

Why is AI security especially important?

Right now, AI regulations are in their infancy, which means they are vague, inconsistent, and continually changing. Globally, collaborative efforts are underway to harmonize AI governance. But while the European Union (EU) has established a structured regulatory environment, the U.S. continues to take a decentralized approach with varying state-level regulations.

Following is a breakdown of some of the formal and proposed regulations in the EU and the U.S.

EU Artificial Intelligence Act

The Artificial Intelligence Act came into force on Aug. 1, 2024. It establishes a comprehensive framework and categorizes AI applications into four risk levels.

  • Unacceptable risk. AI systems that pose a clear threat to safety or fundamental rights are prohibited.
  • High risk. Applications in critical sectors such as healthcare, education, and law enforcement must adhere to strict compliance requirements, including safety assessments and transparency obligations.
  • Limited risk. Systems with limited risk are subject to transparency obligations to inform users when they are interacting with AI.
  • Minimal risk. Certain applications, such as AI-powered video games, are largely exempt from additional regulations.

U.S. AI initiatives and acts

A state-by-state approach reflects the absence of a comprehensive federal AI regulatory framework in the U.S. Following are examples of three states that have introduced or enacted AI legislation.

  • California introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) in February 2024 to mandate safety tests for advanced AI models. Despite initial progress, Gov. Gavin Newsom vetoed the bill in September 2024, citing concerns over potential impacts on innovation.
  • Tennessee enacted the Ensuring Likeness Voice and Image Security Act in March 2024 to target unauthorized, AI-generated replicas of individuals’ voices and likenesses. Known as the ELVIS Act, it marks the first state-level legislation of its kind in the U.S.
  • Utah passed the Artificial Intelligence Policy Act (SB 149) in March 2024. This legislation established liability for companies that fail to disclose their use of generative AI, and it created the Office of Artificial Intelligence Policy.

Organizations should get a handle on AI governance now, as the developments and advances in technology will continue to grow and change. The HITRUST AI security control set can help manage and address current risks and regulations while setting a foundation to help prepare for emerging technology advances, risks, and regulatory uncertainty.

Why should healthcare or health technology companies become certified?

Whether an organization develops, deploys, or markets AI systems to other professional entities or integrates AI directly into its own products, undertaking the HITRUST AI Security Assessment and achieving certification verifies adherence to the highest standards of AI and cybersecurity risk management. HITRUST certification also signals an organization’s dedication to security and trustworthiness to stakeholders. Key features of the assessment and certification include:

  • Comprehensive security framework. The HITRUST framework includes up to 44 controls specifically tailored to meet the unique security demands of AI platforms and systems.
  • Customizable control selection. Customization offers the flexibility to choose controls aligned with various AI deployment scenarios to address inherent risks and strengthen the security and resilience of diverse AI models.
  • Robust validation process. The HITRUST process requires third-party testing and centralized reviews to rigorously verify the effectiveness of implemented security measures.
  • Dynamic threat management. Quarterly updates to HITRUST’s controls help organizations proactively address the constantly changing threat landscape and its emerging risks.
  • Streamlined, practical solutions. The HITRUST framework integrates harmonized controls aligned with the NIST AI RMF, ISO/IEC 23894:2023, Open Worldwide Application Security Project (OWASP) guidance, and other industry standards. The controls are analyzed through a proprietary, threat-adaptive engine and delivered in a unified framework with clear, actionable directives for implementation.

An AI risk management assessment or AI security certification can benefit a variety of stakeholders in several ways.

  • Security and risk management teams can use the HITRUST framework as a comprehensive guide to secure AI deployments, providing stakeholders with validated assurance of system security.
  • Sales, marketing, and product heads can demonstrate to customers and prospects that their AI-driven products and services meet stringent security standards, which can facilitate smoother adoption and market trust.
  • Third-party risk management programs can make sure that vendors incorporating AI technologies adhere to robust security practices to better manage and mitigate associated vendor risks.
  • Boards, owners, chief executive officers, and executives can be confident that the organization’s AI systems are secured in line with industry best practices, supported by third-party testing and a certification of their security posture.
  • The cyber insurance industry can use HITRUST certification as a consistent and reliable measure to assess AI-related risks, which could enable more accurate underwriting and the development of better-suited insurance products at competitive rates.
  • Regulators and government bodies can recognize the HITRUST AI Security Assessment and certification as a pioneering program that addresses the growing concerns regarding AI security, particularly within critical infrastructure sectors.

Is AI certification right for your organization?

With AI at the forefront of innovation, it’s essential to determine whether the HITRUST AI certification aligns with an organization’s needs. Organizations can ask:

  • Has our organization deployed AI systems?
  • Is our organization planning to implement AI in 2025?

If the answer is “yes” to either question, exploring an AI assessment or certification could be a valuable next step, but organizations should consider the following:

  • The organization must manage or control the AI technology, whether that technology is its own solution or one or more application programming interfaces from major providers to enhance its systems.
  • The HITRUST AI Security certification can be pursued as a stand-alone option or added to an existing or new e1, i1, or r2 assessment starting in 2025.

The HITRUST certification offers a wide variety of benefits, but it might seem daunting to navigate the ins and outs of the process, especially when organizations are already stretched thin. Working with an organization that understands the nuances of HITRUST – like Crowe – can help organizations plan for the certification in a way that considers their specific needs and goals.

Contact our HITRUST team

Our HITRUST specialists are here to help you map out an effective HITRUST certification process for your business.
Erika L. Del Giudice
Principal, HITRUST Consulting Leader
Jared Hamilton
Managing Director, HITRUST Consulting