Assess your AI readiness with our AI governance framework

Clayton J. Mitchell, David Moncure, Benjamin Nay
9/4/2024

Creating an AI governance program is the foundation of AI readiness, no matter where organizations are with AI adoption. Our proprietary framework can help.

Many organizations are moving quickly to implement AI technology and transform their operations. In that rush, however, organizations often overlook the risks and challenges – particularly when it comes to AI governance. While information or data governance is not a new concept, applying it to AI systems – governing the data these systems contain, testing their executions, and evaluating their outcomes and decisions – is a new frontier for most organizations.

AI governance is a key component of an overall information governance program. It is the foundation of AI readiness and a practical way to use AI responsibly, whether organizations are just starting out with AI or have already adopted some AI-powered technology.


AI governance framework

Given the rapidly evolving nature of AI technology, how can organizations craft a governance program that fits their needs while leaving room for governance practices to mature alongside AI applications and the organization’s risk appetite? While it is not a one-size-fits-all exercise, our AI consulting team created a proprietary AI governance framework to help organizations implement AI responsibly now and evolve in the future.

Crowe analysis, August 2024

Assessment

Organizations that want to implement new or improve their current AI technology can assess AI readiness in several areas, including:

  • Policies and standards. Policies and standards serve as the backbone of a responsible and defensible AI governance program. AI-specific policies should address how an organization designs, delivers, and uses AI. As part of this process, leaders should review current information governance, IT, privacy, and security policies to determine if any modifications or additions should be made based on AI use cases. AI policies and standards should be tailored to an organization’s governance and risk management structures, industry, and use cases, and they should be clearly documented and accessible to employees.
  • Training and awareness. Training and awareness of AI technology and of company-specific policies and procedures are essential components of a defensible AI governance program. Employees need to know how to use AI platforms and tools according to company policies. Robust training can help reduce the likelihood of introducing unknown risks to the organization, such as the misuse of personal data or inappropriate access to data.
  • Accountability and responsibility. It is also important to determine whether the AI solution an organization plans to use (or currently is using) is designed and trained responsibly, with measures to help prevent misuse and unexpected outcomes. AI solutions, like any new technology, introduce increased risk, including net new risks and incremental increases in existing risks, such as privacy and data security risks. An organization can help manage those risks by creating a multidisciplinary team to define risk ownership and control responsibilities throughout the AI life cycle. Ideally, an executive-level position should lead the team (or provide executive-level support), and the team should include representatives from IT, cybersecurity, data privacy, compliance and legal, human resources, and the business units most affected by the use of AI.
  • Transparency, notice, and consent. As part of a responsible and defensible AI governance program, organizations should assess how transparent they have been with customers, consumers, and businesses regarding the use of their data. Consumers and businesses have a right to understand how their data is used, and organizations should provide them with appropriate notice of that use. An AI governance program should align with applicable privacy regulations, which might include receiving individuals’ consent for the use of their data in AI systems. Organizations should update policies and standards to reflect current AI use and technology and then communicate data use policies to individual clients and customers, including through adequate privacy notices and contractual provisions.

Guiding principles

Organizations have their own values and guiding principles, and it’s important to take those into account when creating and implementing AI governance and solutions. This framework is meant to align with organizational values and integrate existing policies with our foundational elements:

  • Ethics
  • Data quality
  • Third-party risk
  • Legal and regulatory
  • Model risk
  • Privacy
  • Security 

Every organization will have varying levels of maturity for each core principle, but collectively, these principles represent the key lenses through which organizations should evaluate their AI governance programs. Each principle also aligns to unique risks that organizations should evaluate and mitigate through implementation of such a program.
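To make the idea of evaluating maturity against each core principle concrete, the following is a purely illustrative sketch. The seven principle names come from the list above; the 1–5 maturity scale, the target level, and the function name are assumptions for illustration, not part of the framework itself.

```python
# Illustrative only: a simple maturity-gap check against the seven
# foundational elements. The 1-5 scale and target level of 3 are
# assumed here, not prescribed by the framework.
PRINCIPLES = [
    "Ethics", "Data quality", "Third-party risk",
    "Legal and regulatory", "Model risk", "Privacy", "Security",
]

def readiness_gaps(scores: dict, target: int = 3) -> list:
    """Return the principles whose assessed maturity falls below target.

    Principles with no recorded score are treated as maturity 0.
    """
    return [p for p in PRINCIPLES if scores.get(p, 0) < target]

# Hypothetical self-assessment results for one organization
scores = {
    "Ethics": 4, "Data quality": 2, "Third-party risk": 1,
    "Legal and regulatory": 3, "Model risk": 2,
    "Privacy": 3, "Security": 3,
}
gaps = readiness_gaps(scores)
```

In this sketch, `gaps` would surface data quality, third-party risk, and model risk as the areas needing attention first, mirroring how an assessment against these principles can prioritize remediation.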

Design, implementation, and monitoring

Our proprietary AI governance framework includes five development steps, which are a continual cycle as opposed to a linear, one-and-done exercise. The key milestones of this ongoing development process are:

  1. Strategy. An effective AI governance program begins with defining a strategy regarding AI implementation and use and establishing how the organization intends to execute it responsibly. Organizations should evaluate the specific technology being used, the use case of that technology, and applicable regulations, among other factors. Current policies, procedures, and standards should inform the AI governance strategy; however, leaders should also be mindful of future AI technology and use possibilities, not just today’s technology and landscape.
  2. Design. An AI governance framework should enable consistent, scalable, and responsible AI implementation and use across the enterprise. A holistic, centralized framework design outlines standardized processes, metrics, tools, and accountabilities for evaluating AI technology, use, outputs, and effects across the organization. The framework should be adaptable and designed to mature over time as technology and use cases evolve. 
  3. Validation. Organizations should set standards to test and validate their AI models and AI-enabled business practices, and they should document those testing processes and results. Testing and validation can be applied across the organization and specifically to a department or business unit, depending on the unique needs of each, and should focus on explainability, alignment to expected outcomes, and evaluation for risks such as bias and hallucinations.
  4. Implementation. The implementation of an AI governance structure can be an iterative process; it should prioritize risks and focus on effective oversight of the technology applications. Policies, standards, and training programs should be implemented with the same urgency as transparency, notice, and consent mechanisms within the AI technology itself. As they implement AI governance structures, organizations should document both the policies themselves and the rationale behind the decisions, including establishing committees, charters, risk measurement, and reporting.
  5. Monitoring. Perhaps the most important component of a defensible AI governance program is ongoing monitoring. AI systems can produce unintended and unforeseen consequences, negative impacts, or biased outputs that evolve over time. Therefore, programs should be monitored and measured for current risks and on an iterative basis to identify and mitigate future risks. Monitoring the AI governance program’s foundational elements can help mitigate risk and continue to align AI technology and use with intended objectives.

Case study: The Crowe AI governance framework in action

Our technology audit, applied AI, and machine learning teams recently worked with a client to apply our proprietary AI governance framework to the client’s AI technology and use cases. We helped the client begin its AI governance journey by creating an AI governance policy and a tailored AI adoption road map that supports incremental maturation of the program, so the client would be ready for AI technology adoption and proactively prepared for an increase in related risks. Because the client was located in Colorado, the first U.S. state to issue AI regulations, it was vital to work with a team that understood both the regulatory environment and the client’s overall business model and risk posture when developing a governance program that could keep pace with the scale of AI.

Ultimately, AI governance is not a nice-to-have but a must-have program for any organization that wants to implement AI solutions. Considering the pace of AI adoption, the best time for you to protect your organization by developing an AI governance program is now. By establishing an AI governance program, your organization can keep up with the pace of innovation around AI in a responsible, ethical, and defensible way.

Contact our AI team

With years of experience offering AI services for a variety of organizations, and firsthand experience developing these solutions for our firm, our team is here to help you create and implement the right AI governance program for your organization.
Clayton J. Mitchell
Principal, AI Governance
David Moncure
Principal, Forensics Consulting
Benjamin Nay
Consulting