
Reflecting on 2024: The Good, the Bad, and the Ugly in Artificial Intelligence

Buki Obayiuwana, Managing Director and Head of Transformation
19/03/2025
Reflecting on the past year, it’s clear that Artificial Intelligence (AI) has had a transformative impact on nearly every industry and aspect of society. From groundbreaking innovations to the growing pains of regulation and some genuinely concerning misuses, AI’s story in 2024 was one of both remarkable progress and urgent challenges.

As we move further into 2025, let’s take a closer look at the good, the bad, and the ugly of AI in 2024 and what lies ahead.

The Good: Transformative progress in 2024

The year 2024 saw AI cement itself as a driving force for positive change, offering solutions to complex problems and opening new possibilities across industries. I call it the year of the proliferation of use cases.

Governance steps in the right direction

Efforts to govern AI responsibly gathered momentum in 2024. Governments around the world introduced frameworks to ensure that AI is developed ethically and used transparently. International organisations also took major strides in addressing the challenges posed by AI, especially in elections and public decision-making. These steps have started to lay the groundwork for a more secure and trustworthy future for AI.

Finance and business find new AI frontiers

In the financial services sector, AI has proved itself a game-changer. Leading banks integrated AI into their systems for better decision-making, improved fraud detection, and streamlined customer service. Generative AI tools also showed tremendous promise in creating new financial products, providing tailored investment strategies, and helping institutions make better use of data.

Generative AI at the heart of innovation

For most people, AI means generative AI, and in fairness, Gen AI truly came into its own in 2024. Individuals and businesses embraced it to improve personal productivity and business operations, from automating mundane tasks to delivering more personalised services. Companies in various sectors began using these tools to streamline workflows and create customer experiences that feel both seamless and unique.

Cybersecurity gets smarter

While the challenges of offensive AI continued to grow, so did the development of defensive tools. Organisations began leveraging AI to predict, detect, and counter threats in real time, providing critical protection against increasingly sophisticated cyberattacks.

Collaborative global efforts 

Perhaps most encouraging is the growing spirit of global collaboration. From international summits to cross-industry partnerships, stakeholders have worked together to find common ground on AI governance and ethical development. This cooperation is a positive sign of what’s possible when humanity works together.

The Bad: Persistent challenges in 2024

Despite the progress, 2024 also highlighted significant challenges, revealing how far we still have to go to ensure AI’s benefits are realised without unintended consequences.

Misinformation on the rise 

AI-powered misinformation continued to plague digital platforms, elections, and public discourse. Deepfake technology and sophisticated AI-driven content generation have made it harder than ever to separate fact from fiction, eroding trust in institutions and the media.

Bias and fairness issues

AI systems struggled with fairness as biases embedded in training data continued to manifest in real-world applications. Whether in hiring systems, legal algorithms, or financial services, these biases led to unequal outcomes and reinforced systemic inequalities.

Cyber threats escalate 

Offensive AI tools have become more advanced, enabling cybercriminals to carry out attacks on an unprecedented scale. Automated phishing campaigns, identity theft using AI, and deepfake scams targeted individuals and organisations alike, creating significant financial and reputational harm.

Lagging regulation 

While some progress was made on AI governance, many countries struggled to implement meaningful regulations at scale. Without cohesive international standards, the industry risked fragmented development and uneven enforcement, leaving gaps for bad actors to exploit.

Economic disruption

AI automation, while driving efficiency, continued to displace workers in many industries. As businesses prioritised cost-saving measures, questions arose about how to prepare the workforce for an AI-driven economy.

The Ugly: Misuse and consequences in 2024

Perhaps the most sobering aspect of AI’s evolution in 2024 was how it was used maliciously or recklessly, exposing the darker side of this transformative technology.

Weaponised AI 

AI was increasingly used as a tool for malicious purposes. From automated cyberattacks to the spread of politically motivated disinformation, bad actors exploited the scalability and precision of AI to destabilise systems and undermine trust.

Deepfakes undermine trust

Deepfake technology reached alarming levels of sophistication, allowing for the creation of hyper-realistic but false videos and audio. This technology was weaponised to target individuals, influence public opinion, and disrupt democratic processes, leaving lasting damage to trust in institutions.

AI-driven fraud

Fraudsters used AI to execute scams with unprecedented effectiveness, targeting financial institutions and individuals. Identity theft became a significant concern, with AI enabling highly believable impersonations that were difficult to detect.

Environmental costs 

AI’s rapid growth placed increasing demands on data centres, consuming vast amounts of energy. As the industry scaled up without adequate focus on sustainability, the environmental impact of AI became an issue that could no longer be ignored.

Uncontrolled proliferation

The widespread availability of generative AI tools has raised serious concerns about misuse. From creating harmful content to automating criminal activities, these tools demonstrated the risks of unregulated access to powerful technologies.

The gap: true AI literacy

As we continue into 2025, AI is transforming industries, yet a lack of understanding about what AI truly is and how it works remains a significant barrier. Misconceptions about AI as a fully autonomous solution can lead to unrealistic expectations or misplaced fears.

Organisation-wide AI education

AI isn’t just for data scientists or IT teams. Every employee, whether in customer service, operations, or management, needs to understand its potential and limitations. Real-world examples help demystify AI and show its practical applications.

AI awareness for leaders 

Senior executives often make decisions about AI without a clear understanding of its strengths and limitations. This can lead to fear and reticence, underutilisation, or overdependence.

Customer-facing AI transparency 

Many customers remain sceptical of AI, particularly when it impacts decisions like insurance premiums or loan approvals. There is a growing need for understandable explanations about how AI decisions are made. Transparency builds trust and reassures customers.

Practical steps for 2025

Reflecting on the past year, the story of AI in 2024 was one of incredible promise tempered by significant challenges and sobering lessons. The good demonstrated AI’s potential to revolutionise industries and solve complex problems. The bad underscored the urgent need to address fairness, regulation, and security. And the ugly reminded us of the dangers that come with unchecked power.

The risks associated with AI misuse will not disappear overnight. Policymakers must double down on creating universal standards for governance, while businesses must address issues of bias, transparency, and inclusivity. Meanwhile, the arms race between offensive and defensive AI tools will continue to intensify.

2025 promises to be a pivotal year for AI. The focus will likely shift towards increased understanding and demystification, more robust ethical practices, improved regulation, and enhanced collaboration across borders. With proper guidance, AI has the potential to help address pressing global challenges, from climate change to healthcare and education.

To avoid the pitfalls of unregulated growth, 2025 must be a year of responsibility and accountability. Public awareness and education will be critical to ensuring that AI’s benefits are widely understood and its risks effectively mitigated. The path ahead also requires a balance of innovation, governance, and collaboration. Only by addressing these challenges head-on can we ensure AI’s role as a tool for positive change in the years to come.

Here are five practical steps for you as you begin 2025:

  1. Adopt explainable AI tools that provide clear, understandable reasoning for decisions.
  2. Maintain a strong balance between AI automation and human involvement, particularly in nuanced situations.
  3. Strengthen cybersecurity measures and train staff to recognise AI-enhanced threats.
  4. Focus on sustainable AI practices, partnering with green tech providers and monitoring the carbon footprint of AI projects.
  5. Upskill and reskill yourself, your board, executives, and employees to prepare them for new AI-driven roles, fostering a culture of awareness and adaptability.

How can Crowe help?

At Crowe, our tailored AI services span education, governance, strategy, execution, and scaling, helping businesses achieve operational excellence and unlock AI’s potential responsibly and effectively.

  • AI education: We simplify AI concepts for your team, empowering them with the knowledge to understand and leverage AI technologies in their roles. 
  • AI governance: We ensure your AI systems are transparent, ethical, and aligned with emerging regulations, building trust with stakeholders. 
  • AI strategy: Our experts work with you to design a clear roadmap that aligns AI adoption with your business goals, ensuring a strategic and sustainable approach. 
  • AI execution: From piloting to full-scale implementation, we provide hands-on support to integrate AI seamlessly into your operations. 
  • AI scaling: Once AI is embedded, we help you scale solutions efficiently across your organisation, driving measurable outcomes and long-term value.

Whether you’re just starting your AI journey, still exploring, or looking to optimise existing operations and systems, our services are designed to deliver results while addressing the risks and challenges of this rapidly evolving field. For more information, or for support in assessing your AI readiness or navigating the challenges of adoption, contact Buki Obayiuwana or your usual Crowe contact.

Contact us

Buki Obayiuwana
Managing Director and Head of Transformation
London
