
Navigating AI Governance and Assurance: Key insights

Mustafa Iqbal, Partner, Consulting
07/11/2024

The rapid advancement of Generative Artificial Intelligence (AI) technology has brought both significant opportunities and pressing challenges. As AI systems become increasingly integrated into various industries, the need for robust governance and assurance frameworks has become imperative.

The growing concern

The World Economic Forum's Global Risks Report 2024 identified adverse outcomes of AI technologies among the top 10 global risks. Concerns over AI's use in conflict decisions and the proliferation of disinformation and deepfakes are particularly acute.

At a recent industry summit, European CIOs expressed significant concern about effectively implementing AI governance frameworks and demonstrating the tangible benefits of AI investments.

This article explores the essentials of building an AI Governance and Assurance ecosystem. Governance ensures ethical, transparent, and accountable AI technology outcomes, while Assurance fosters public trust and acceptance. Together, they are vital for risk management in developing, procuring, and deploying AI systems.

Key challenges in AI Governance and Assurance

Several key challenges hinder the effective implementation of AI governance and assurance.

  • AI solutions: development of AI solutions remains fragmented, focused mainly on internal productivity gains and developer needs. As a result, industry AI solutions are falling short of goals for cost reduction, customer experience, and cash optimisation.
  • Trust and transparency: multiple AI frameworks exist, but the landscape is crowded and lacks mature measurement and management standards to support trust and transparency.
  • Capability: there is no clear direction on which professionalisation models are best suited to build trust in AI assurance services.
  • Collaboration: links between industry and independent researchers are underdeveloped, hindering effective collaboration. 

AI Governance and Assurance ecosystem

The UK Government estimates the AI assurance market could grow to more than £6.53bn by 2035 (Department for Science, Innovation & Technology: Assuring a Responsible Future for AI). This estimate covers independent third-party assurance providers and the technical tools used to assess AI systems.

Crowe’s approach to responsible and trustworthy AI takes an expansive view of the ecosystem needed for delivering safe and secure AI technology.

Here are some key insights for developing an AI governance and assurance ecosystem.

  • Outcome-driven approach: AI governance should prioritise outcomes over technology, recognising that AI risks vary by model and context. Principles-based regulations should be interpreted to deliver the right outcomes for customers and AI users within specific sectors.
  • Governance: establishing robust governance mechanisms, including clear policies, procedures, and ethical guidelines, is essential to ensure responsible AI development and deployment. Hans-Petter Dalen, IBM’s EMEA AI leader, explains: “IBM’s Ethics Board integrates privacy and ethics to manage regulatory, reputational, and operational risks, maintaining comprehensive oversight of their ‘Risk Atlas’ for AI solutions”.
  • Strong guardrails: implementing strong guardrails, such as standards and frameworks for security, data quality, privacy, and model risk management, is crucial to mitigate risks and ensure compliance.
  • Assurance spectrum: the level of assurance required should be determined by the organisation's risk appetite and the complexity of the AI model. Robbie McCorkell, Founding Engineer and AI Developer at Leap Labs, explains: “AI assurance will evolve through bespoke approaches based on appropriate risk assessments of AI models”.

We develop our thinking through collaboration with industry leaders. Visit Crowe Consulting to explore our broad range of regulatory, technology, data, and AI solutions. 

Explore our AI Sentinel Talks, which cover the key topics around AI Governance and Assurance.

For more information, contact Mustafa Iqbal or your usual Crowe contact.

CroweCast series: AI Sentinel Talks
Mustafa Iqbal is joined by Hans-Petter Dalen and Robbie McCorkell to discuss the fascinating world of Artificial Intelligence.

Contact us

Mustafa Iqbal
Partner, Technology Consulting
London
