Recognize internal complexities regarding AI adoption
AI tools offer several practical applications for financial crime programs. On a basic level, they can make routine tasks more efficient and allow personnel to shift their focus toward the more complex and critical aspects of their risk management work. Applied to AML programs specifically, AI tools can deliver more targeted benefits, such as reducing false-positive alert volumes and supporting customer risk scoring.
Yet the new and unique nature of AI raises reasonable questions among financial services organizations and their clients:
- Will AI fully replace human roles?
- How will the use of AI keep pace with growing and evolving regulatory scrutiny?
- Will executive pressure to rush AI implementation lead to additional shortcomings and risks?
None of these concerns is insurmountable when organizations take a risk-based, analytical approach. AI can serve well as an assistant that enhances human capabilities within financial crime programs, not as a replacement for them. AI implementation strategy teams should be open to feedback and respond to it throughout the process. Additionally, a faster AI implementation is more likely to succeed when planning accounts for both organizational and regulatory needs.
Evaluate the scope of and reasons for AI implementation
Financial services organizations considering AI tools for use in their AML programs can apply a methodical and defined approach that considers quick wins, long-term benefits, and potential challenges during implementation.
AML and BSA officers can ask themselves the following questions as they consider AI implementation to determine expectations and potential steps forward:
- Why are we considering AI adoption?
Reviewing current AML processes and understanding the specific advantages AI might introduce can help shape the business case. Perhaps an organization wants to solve a specific problem, such as high operational costs or high volumes of false positives. The case for adoption might also rest partly on a desire to keep pace with the broader push toward AI.
- What is our organization’s current tolerance for AI?
Organizations have varying levels of enthusiasm or hesitance concerning AI use. Surveying team members and leadership can help determine whether decision-makers need additional information to address concerns and risks.
- How will we manage stakeholder concerns?
Stakeholders might hold differing views on incorporating AI. Organizations should gauge how open stakeholders are to the technology and establish expectations for what an effective AI implementation would look like, including both qualitative and quantitative benefits and risks. Each relevant risk should have a corresponding mitigation plan to support the organization’s AI implementation decision-making.
- When is the right time to start AI implementation?
The timing of an AI implementation project can involve several factors, including whether AI technology is already part of or easily added to current financial crime software. Part of this determination might also rely on whether the organization wants to be seen as an early adopter or prefers to stand by for further evaluation, even if that comes at the risk of falling behind competitors.
- Can this AI-enabled AML technology stand the test of time?
In discussions with developers or providers, organizations need to determine how well the tools will integrate with their current AML program environment and technology infrastructure; redesigns and updates might be required. Organizations should also assess how the products might evolve over time and whether both the organization and its AI technology providers will be able to support each product in two, five, or seven years.
Take proactive steps before integrating AI
If proceeding with AI adoption for their AML programs, organizations should take proactive steps to prepare infrastructure, models, and processes to help establish AI accuracy, quality assurance, and trust from the start. New tools, especially AI tools, must earn user and stakeholder trust by being reliable, validated, and transparent.
Preparatory actions include:
- Collecting and verifying data. AI tools need clean, high-quality, and relevant data for training.
- Determining AI training methods. AI models can be trained with supervised learning, which learns from labeled examples, or unsupervised learning, which finds relationships among unlabeled examples. Supervised learning can be more effective for customer risk scoring and event scoring for sanctions lists, while unsupervised learning can be more effective for customer segmentation and suspicious activity detection (a brief illustration of both approaches appears after this list).
- Training teams. Teams involved with AI tools should undergo training to maintain regulatory compliance and build familiarity with the tools. Implementation teams should emphasize that the AI tool is there to support individuals, not to replace their jobs.
- Defining review and validation processes. Implementation teams should create robust procedures and schedules for checking AI accuracy and validating models (a simple validation sketch follows this list).
- Establishing extensive documentation. Providing full transparency into how all AI tools work, are used, and are validated is critical; that transparency includes documenting internal controls, training, and procedures for ongoing regulatory compliance.
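To make the distinction between training methods more concrete, the sketch below pairs supervised learning for customer risk scoring with unsupervised learning for customer segmentation. It is a minimal illustration only, assuming Python with scikit-learn and using synthetic placeholder data in place of the cleaned, verified data described above; the feature names, model choices, and parameters are assumptions, not a recommended configuration.

```python
# Minimal sketch: supervised risk scoring vs. unsupervised segmentation.
# All data, feature names, and model choices are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Hypothetical, already-cleaned customer features:
# [monthly_txn_count, avg_txn_amount, cross_border_ratio, account_age_years]
X = rng.random((1000, 4))
# Labels from past investigations (1 = confirmed high-risk customer).
y = (rng.random(1000) < 0.1).astype(int)

# Supervised learning: labeled outcomes train a customer risk-scoring model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
risk_model = RandomForestClassifier(n_estimators=100, random_state=0)
risk_model.fit(X_train, y_train)
risk_scores = risk_model.predict_proba(X_test)[:, 1]  # probability of high risk

# Unsupervised learning: no labels; group customers into behavioral segments.
segmenter = KMeans(n_clusters=5, n_init=10, random_state=0)
segments = segmenter.fit_predict(X)

print(f"Mean predicted risk on holdout: {risk_scores.mean():.3f}")
print(f"Customers per segment: {np.bincount(segments)}")
```

In practice, the labels would come from historical investigation outcomes, and the resulting segments could help inform how monitoring expectations are set for different customer groups.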
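The review and validation processes described above can similarly be expressed as a scheduled, repeatable check. The sketch below, again assuming Python with scikit-learn and a hypothetical labeled holdout set, reports precision and recall for a fitted model and measures the shift in average scores between a baseline period and recent data as a simple drift indicator; the threshold values are illustrative assumptions, not regulatory guidance.

```python
# Minimal sketch of a periodic validation check for a fitted AML model.
# Metric thresholds below are illustrative assumptions, not guidance.
import numpy as np
from sklearn.metrics import precision_score, recall_score


def validate_model(model, X_holdout, y_holdout, min_precision=0.5, min_recall=0.7):
    """Return metrics and a pass/fail flag for a scheduled validation run."""
    predictions = model.predict(X_holdout)
    precision = precision_score(y_holdout, predictions, zero_division=0)
    recall = recall_score(y_holdout, predictions, zero_division=0)
    return {
        "precision": precision,
        "recall": recall,
        "passed": precision >= min_precision and recall >= min_recall,
    }


def score_drift(baseline_scores, recent_scores):
    """Absolute shift in mean model score between a baseline period and
    recent data; a large shift should trigger a deeper review."""
    return abs(np.mean(recent_scores) - np.mean(baseline_scores))
```

Using the holdout data from the previous sketch, `validate_model(risk_model, X_test, y_test)` would return the figures to record in the validation file, and repeated runs over time support the documentation and transparency expectations noted above.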
Explore and analyze to mitigate risk
Implementing and using AI tools in an AML program might represent a new frontier for combating financial crime. With the right combination of due diligence and governance, AI can benefit multiple financial crime-related areas, including:
- Transactional analysis
- Risk assessment
- Fraud identification
- Suspicious activity report integration
- Case notes integration
While speed might be a competitive advantage in some situations, it is vital that key stakeholders across a financial services organization share the same performance expectations and that regulatory compliance is closely monitored.
Given the complexity of AI adoption, organizations might find that a third party with extensive AI and regulatory experience can play a supportive role by providing an external evaluation of an organization’s current position and how AI might best enhance its AML program.