Assessment
Organizations that want to implement new AI technology or improve the AI technology they already use can assess AI readiness in several areas, including:
- Policies and standards. Policies and standards serve as the backbone of a responsible and defensible AI governance program. AI-specific policies should address how an organization designs, delivers, and uses AI. As part of this process, leaders should review current information governance, IT, privacy, and security policies to determine if any modifications or additions should be made based on AI use cases. AI policies and standards should be tailored to an organization’s governance and risk management structures, industry, and use cases, and they should be clearly documented and accessible to employees.
- Training and awareness. Training employees on AI technology, and building awareness of the company-specific policies and procedures that support a defensible AI governance program, are both essential. Employees need to know how to use AI platforms and tools according to company policies. Robust training can help reduce the likelihood of introducing unknown risks to the organization, such as the misuse of personal data or inappropriate access to data.
- Accountability and responsibility. It is also important to determine whether the AI solution an organization plans to use (or currently is using) is designed and trained responsibly, with measures to help prevent misuse and unexpected outcomes. AI solutions, like any new technology, introduce risk, both net new risks and incremental increases in existing risks such as privacy and data security risks. An organization can help manage those risks by creating a multidisciplinary team to define risk ownership and control responsibilities throughout the AI life cycle. Ideally, an executive-level position should lead the team (or provide executive-level support), and the team should include representatives from IT, cybersecurity, data privacy, compliance and legal, human resources, and the business units most affected by the use of AI.
- Transparency, notice, and consent. As part of a responsible and defensible AI governance program, organizations should assess how transparent they have been with customers, consumers, and businesses regarding the use of their data. Consumers and businesses have a right to understand how their data is used, and organizations should provide them with appropriate notice of that use. An AI governance program should align with applicable privacy regulations, which might include receiving individuals’ consent for the use of their data in AI systems. Organizations should update policies and standards to reflect current AI use and technology and then communicate data use policies to individual clients and customers, including through adequate privacy notices and contractual provisions. A simplified sketch of a purpose-based consent check follows this list.
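As a concrete illustration of what a purpose-based consent gate might look like, the following Python sketch filters a data batch down to records with affirmative consent before they enter an AI pipeline. The in-memory registry, identifiers, and `may_use_for` helper are hypothetical simplifications; a production implementation would rely on audited consent records and map each check to the specific regulations in scope.

```python
from dataclasses import dataclass

# Minimal sketch, assuming a hypothetical in-memory consent registry.
@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str      # e.g., "model_training"
    granted: bool

REGISTRY = {
    ("cust-001", "model_training"): ConsentRecord("cust-001", "model_training", True),
    ("cust-002", "model_training"): ConsentRecord("cust-002", "model_training", False),
}

def may_use_for(subject_id: str, purpose: str) -> bool:
    """Allow data into the AI pipeline only with recorded, affirmative
    consent for this specific purpose."""
    record = REGISTRY.get((subject_id, purpose))
    return record is not None and record.granted

# Filter a training batch: no record (cust-003) is treated as no consent.
training_batch = ["cust-001", "cust-002", "cust-003"]
usable = [s for s in training_batch if may_use_for(s, "model_training")]
print(usable)  # ['cust-001']
```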
Guiding principles
Organizations have their own values and guiding principles, and it’s important to take those into account when creating and implementing AI governance and solutions. This framework is meant to align with organizational values and integrate existing policies with our foundational elements:
- Ethics
- Data quality
- Third-party risk
- Legal and regulatory
- Model risk
- Privacy
- Security
Organizations will have varying levels of maturity for each core principle, but collectively, these principles represent the key lenses through which organizations should evaluate their AI governance programs. Each principle also aligns with unique risks that organizations should evaluate and mitigate through implementation of such a program.
Design, implementation, and monitoring
Our proprietary AI governance framework includes five development steps, which form a continual cycle rather than a linear, one-and-done exercise. The key milestones of this ongoing development process are:
- Strategy. An effective AI governance program begins with defining a strategy regarding AI implementation and use and establishing how the organization intends to execute it responsibly. Organizations should evaluate the specific technology being used, the use case of that technology, and applicable regulations, among other factors. Current policies, procedures, and standards should inform the AI governance strategy; however, leaders should also be mindful of future AI technology and use possibilities, not just today’s technology and landscape.
- Design. An AI governance framework should enable consistent, scalable, and responsible AI implementation and use across the enterprise. A holistic, centralized framework design outlines standardized processes, metrics, tools, and accountabilities for evaluating AI technology, use, outputs, and effects across the organization. The framework should be adaptable and designed to mature over time as technology and use cases evolve.
- Validation. Organizations should set standards to test and validate their AI models and AI-enabled business practices, and they should document those testing processes and results. Testing and validation can be applied enterprise-wide or to a specific department or business unit, depending on the unique needs of each, and should focus on explainability, alignment with expected outcomes, and evaluation for risks such as bias and hallucinations. A simplified example of one such bias check appears after this list.
- Implementation. The implementation of an AI governance structure can be an iterative process; it should prioritize risks and focus on effective oversight of the technology’s applications. Policies, standards, and training programs should be implemented with the same urgency as transparency, notice, and consent mechanisms within the AI technology itself. As they implement AI governance structures, organizations should document both the policies themselves and the rationale behind key decisions, including establishing committees, charters, risk measurement, and reporting.
- Monitoring. Perhaps the most important component of a defensible AI governance program is ongoing monitoring. AI systems can produce unintended and unforeseen consequences, negative impacts, or biased outputs that evolve over time. Therefore, programs should be monitored and measured for current risks and revisited on an iterative basis to identify and mitigate emerging risks. Monitoring the AI governance program’s foundational elements can help mitigate risk and keep AI technology and its use aligned with intended objectives. A simplified drift-monitoring example appears after this list.
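To make the validation milestone more concrete, the sketch below computes a single illustrative fairness metric, the demographic parity gap, over a model’s decisions. The group labels, sample predictions, and tolerance threshold are assumptions for illustration only; a documented validation suite would cover many metrics across many population segments.

```python
# Minimal sketch of one bias check: the demographic parity gap, i.e.,
# the spread in positive-outcome rates across groups.

def demographic_parity_gap(predictions, groups):
    """Return the difference between the highest and lowest
    positive-outcome rates observed across groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (1 if pred == 1 else 0), total + 1)
    rates = [p / t for p, t in counts.values()]
    return max(rates) - min(rates)

# Example: binary approval decisions for applicants in groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
TOLERANCE = 0.2  # illustrative threshold set by the governance team
print(f"Demographic parity gap: {gap:.2f} "
      f"({'within' if gap <= TOLERANCE else 'exceeds'} tolerance)")
```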
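Likewise, for the monitoring milestone, the following sketch flags distributional drift in model outputs using the population stability index (PSI). The bin counts are illustrative assumptions, and the 0.25 alert threshold is a common industry rule of thumb rather than a regulatory standard; a monitoring team would tune both to its own use case.

```python
import math

def population_stability_index(baseline_counts, current_counts):
    """PSI across matching bins; larger values indicate more drift
    between the baseline and current output distributions."""
    b_total = sum(baseline_counts)
    c_total = sum(current_counts)
    psi = 0.0
    for b, c in zip(baseline_counts, current_counts):
        b_pct = max(b / b_total, 1e-6)  # floor avoids log(0)
        c_pct = max(c / c_total, 1e-6)
        psi += (c_pct - b_pct) * math.log(c_pct / b_pct)
    return psi

# Illustrative binned model-score distributions: validation baseline
# versus the most recent production window.
baseline = [120, 340, 290, 180, 70]
current  = [90, 260, 300, 230, 120]

score = population_stability_index(baseline, current)
ALERT = 0.25  # rule of thumb: PSI above ~0.25 suggests material drift
print(f"PSI = {score:.3f} -> {'investigate' if score > ALERT else 'stable'}")
```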