This article was originally published in Dallas Business Journal and is shared here with permission.
AI is transforming financial services, driving efficiency and innovation, but it also introduces challenges such as ethical risk, regulatory compliance, and maintaining trust. In this dynamic, highly regulated industry, strong governance is essential.
Crowe leaders David, Crystal, and Gina joined us for a discussion on AI Governance in Financial Services.
This interview has been edited for length and clarity.
David: When you're looking at governance, it's both the governance of data and the governance of the AI tools themselves. A good starting place is examining your basic principles around information governance and building on that foundation an understanding of the following: what data you have in your environment; how and why it was collected; how and why it may be used; data flows within and outside of your organization; how data retention is applied; and privacy and security compliance. Together, these elements create a framework for an information governance program that can be built upon for the data you're using with your AI tools and systems. You should also look at the governance of the tools and systems themselves, creating an ethical, transparent, unbiased system that solves your business problem or creates efficiencies.
Crystal: From an internal audit perspective, financial services organizations face several significant challenges when implementing artificial intelligence, one of which is data quality management. Ensuring the quality, accuracy, and completeness of data used in AI models is critical; poor data quality can lead to incorrect predictions and decisions, which can have severe financial and reputational consequences. Another challenge is regulatory compliance. Financial services is a highly regulated industry, and AI implementations must comply with various laws and regulations, including data privacy laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), as well as industry-specific regulations. Ensuring compliance while leveraging AI can be complex and requires continuous monitoring. Yet another challenge lies in transparency and explainability. AI models, especially those based on machine learning, can be complex and difficult to interpret. Internal auditors need to ensure that AI systems are transparent and that their decision-making processes can be explained to stakeholders, including regulators. These are just a few of the challenges we're seeing within the industry, and addressing them requires a comprehensive approach that includes robust governance, continuous monitoring, and collaboration between internal audit, information technology (IT), and the business units.
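To make the data quality point concrete, here is a minimal sketch of the kind of automated completeness and validity checks a model risk or internal audit team might run before data reaches an AI model. The column names, thresholds, and checks are illustrative assumptions for this sketch, not Crowe's methodology or any regulator's requirements.

```python
import pandas as pd

# Illustrative thresholds and schema -- assumptions for this sketch only
MAX_MISSING_RATE = 0.02                                       # tolerate up to 2% missing values per column
REQUIRED_COLUMNS = ["account_id", "balance", "credit_score"]  # hypothetical model inputs

def check_model_inputs(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality findings for a model input dataset."""
    findings = []

    # Completeness: every required column must be present
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            findings.append(f"missing required column: {col}")

    # Missing values: flag columns that exceed the tolerated missing rate
    for col, rate in df.isna().mean().items():
        if rate > MAX_MISSING_RATE:
            findings.append(f"{col}: {rate:.1%} missing exceeds {MAX_MISSING_RATE:.0%}")

    # Basic validity: a credit score outside its defined range is a data error
    if "credit_score" in df.columns:
        out_of_range = (~df["credit_score"].between(300, 850) & df["credit_score"].notna()).sum()
        if out_of_range:
            findings.append(f"credit_score: {out_of_range} values outside 300-850")

    return findings
```

A call such as `check_model_inputs(loan_df)` would surface findings before bad data propagates into model decisions; a production program would also reconcile values against source systems to address accuracy, not just completeness.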
Gina: From an external audit perspective, when looking at the end result, it's important to make sure that the tool you're developing is doing what it's supposed to do. Are we getting the answers we expect? Is the tool performing the way it's supposed to?
David: What we're seeing right now from a regulatory perspective is a patchwork of regulations and laws at the state level. Colorado enacted the first comprehensive state AI act, and several states are beginning to follow. California is expected to become a pacesetter in this area as well, and we'll continue to see more activity at the state level. You might also face enforcement actions from the U.S. Securities and Exchange Commission (SEC) or the Department of Justice (DOJ). The DOJ has noted in its Evaluation of Corporate Compliance Programs that it expects an organization to be transparent and able to explain and understand, both internally and externally, its use of AI, including the inputs and outputs of its systems and tools. We'll start to see that from other governmental agencies as well, such as the SEC.
Crystal: Many of the regulatory bodies within financial services (FDIC, FRB, OCC, NCUA) have been slow to publish guidance on AI. Regulators are continuing to gather information about how banks are using or plan to extend their use of AI. An executive order released in October 2023 directed the Treasury to produce a report on best practices for financial institutions to manage cybersecurity and other risks posed by AI. Then, in June 2024, in response to that executive order, the Treasury released a request for information (RFI) on the uses, opportunities, and risks of AI in the financial services sector, soliciting a range of perspectives to increase its understanding of how AI is being used within the industry.
Gina: If I were to narrow it down even further to the entity level, I think the most important thing is ensuring all parties are communicating. Each group likely has a different perspective and perhaps differing knowledge of the various regulations. Ensuring that all groups are involved in the decision making and that everything is transparent and documented is key.
Crystal: From an internal audit perspective, we're always keen on having a well-thought-out, documented project plan to manage the overall process. This helps foster collaboration between AI developers, users, and other stakeholders, and there are several steps businesses can take to ensure accountability and transparency throughout that life cycle. It starts with establishing clear ownership: designating specific individuals or teams responsible for each artificial intelligence system. This ensures accountability and provides a clear point of contact for any issues or questions that may arise. It also promotes cross-functional collaboration, encouraging regular communication between developers, users, and stakeholders, which can be facilitated through workshops, joint projects, and regular meetings to discuss progress and challenges. Of course, the success of many projects depends on getting senior leadership buy-in and support. Leaders are instrumental in ensuring initiatives align with the organization's overall strategy and goals, allocating the necessary resources, prioritizing projects accordingly, and managing the cultural shift that often accompanies the change management process that will likely be needed. I would encourage engaging stakeholders early and often, including internal audit.
Gina: With the focus being on the end result, it's important to keep detailed records so that there's an audit trail for the decisions that were made. Of equal importance is documenting model validations so that you have the appropriate body of evidence to present at the back end and show that things are functioning correctly. Also, never stop monitoring your progress and validating whether the tool is doing what you expected it to do.
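Gina's point about an audit trail can be illustrated with a short sketch: recording every model decision with its inputs, output, and model version so auditors can later reconstruct what happened and why. The field names, the append-only JSON-lines file, and the example credit decision are assumptions for illustration, not a prescribed design.

```python
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "model_decisions.jsonl"  # hypothetical append-only audit file

def record_decision(model_version: str, inputs: dict, output, reviewer: str | None = None) -> str:
    """Append one model decision to the audit trail so it can be reconstructed later."""
    entry = {
        "decision_id": str(uuid.uuid4()),                     # unique handle for follow-up review
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the decision was made
        "model_version": model_version,                       # ties the decision to a validated model
        "inputs": inputs,                                     # what the model saw
        "output": output,                                     # what the model decided
        "human_reviewer": reviewer,                           # populated when a person approves or overrides
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["decision_id"]

# Example: log a hypothetical credit decision
record_decision("credit-model-2.3", {"credit_score": 712, "dti": 0.31}, "approve")
```

Pairing decision records like these with periodic model validation reports is one way to assemble the body of documentation described above.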
Crystal: I would also suggest having a project management office, which helps keep tabs on the overall progress and holds individuals accountable for their efforts.
David: Here at Crowe, we look at the balance between innovation and the need to comply with regulation as it emerges, while ensuring transparency and accountability to your customers and employees. We came up with a phrase: “Protection at the pace of AI.” That involves taking your AI strategy and building it on top of a solid AI governance foundation. We look at it through a five-part framework that begins with an AI readiness assessment, gauging and preparing for implementation of the tool, system, or program. Next, we look at accountability and responsibilities for AI owners, AI developers, and users. The third step involves transparency, notice, and consent: increasing transparency and building trust among all of your stakeholders. Fourth is looking at policies and standards, making sure there are ethical and safe privacy policies and guidance around the use of AI. Finally, developing training and awareness to inform stakeholder decisions makes both internal and external use of AI more transparent, ethical, and fair.
It’s clear that AI governance goes beyond regulations. It's about building trust, promoting accountability, and ensuring responsible innovation. As organizations navigate challenges and opportunities, collaboration and thoughtful governance are the keys to success.