Most Companies Face Financial Losses Amid Early AI Adoption, EY Survey Reveals
By Reuters Staff
In a sign of how much corporate artificial intelligence implementation is still maturing, a new global survey by consulting giant Ernst & Young (EY) has found that nearly every large company using AI today has experienced some form of risk-related financial loss. The findings, released on October 8, 2025, underscore the formidable operational and regulatory challenges companies must navigate to capture AI's full promise.
Survey Highlights: AI Implementation Fraught with Risk
The survey, which canvassed C-suite executives from over 1,000 companies worldwide with annual revenues exceeding $1 billion, revealed that initial AI deployments frequently led to losses. Causes ranged from compliance failures and flawed system outputs to the presence of algorithmic bias and disruptions to organizations’ sustainability objectives.
“There is a persistent expectation that AI drives immediate value. In reality, there are substantial growing pains—financial, technical, and ethical—before true returns are realized,” said Rebecca Harding, Global AI Lead at EY, during a press briefing.
Common Pitfalls: From Compliance to Bias
The study reported several recurrent challenges:
- Compliance Failures: Many organizations encountered difficulties in aligning AI applications with regulatory mandates, especially those concerning data privacy, security, and ethical use.
- Flawed Outputs: Early-stage models frequently delivered erroneous results, leading to financial errors and brand reputation risk.
- Algorithmic Bias: Incomplete or biased training data inflicted unintended social, legal, and financial costs, particularly in sectors like finance, healthcare, and recruitment.
- Sustainability Disruptions: Some firms reported that AI rollouts hindered progress on environmental or consumer trust goals, largely due to opaque decision-making and insufficient transparency.
According to a 2024 Stanford University report, over 60% of AI incidents resulting in financial harm last year could be attributed to insufficient testing, poor governance, or a lack of skilled oversight—trends confirmed by the latest EY findings.
Global Context: Surging Investment with Caution
Despite these initial setbacks, companies globally have ramped up investments in AI solutions. According to IDC, enterprise AI spending is projected to reach $225 billion in 2025, a 37% increase from the previous year. Nevertheless, as investment surges, so do expectations—and scrutiny from boards, regulators, and shareholders.
Several headline-making AI failures have raised public alarm in recent months. In April 2025, a major European bank faced a multi-million-dollar compliance fine after an AI-based anti-money-laundering tool misclassified hundreds of transactions. In the healthcare sector, a U.S. hospital network faced a lawsuit after an AI triage system led to delays in critical care.
Industry Response: Emphasis on Governance and Risk Management
Industry experts say these losses, while significant, are part of a typical innovation cycle, and they emphasize that efforts to mature processes and policies are already underway.
“AI is not a plug-and-play solution. Corporate leaders are now realizing the necessity of rigorous governance, robust risk frameworks, and ongoing employee training,” said Dr. Michael Cheng, Professor of AI Policy at London Business School.
Cross-functional AI ethics boards, regular audit routines, and independent reviews are among the recommended best practices gaining traction. Larger enterprises are also working with legal, compliance, and technology teams to map regulatory risks and establish clear escalation procedures.
Governments and Regulators Step Up Oversight
Recognizing the growing risks and pace of AI adoption, many governments have tightened regulatory oversight. The European Union’s Artificial Intelligence Act, which will take effect fully in 2026, mandates stringent compliance for high-risk AI systems and imposes hefty fines for violations. In the United States, the Securities and Exchange Commission and Federal Trade Commission recently issued guidance warning firms about algorithmic transparency and accountability.
These rules are designed in part to prevent rapid, unchecked AI rollouts that may jeopardize consumers or undermine market integrity. “The regulatory message is clear: speed without compliance is unacceptable,” remarked Karl Riedel, global regulatory policy analyst at Morgan Stanley.
Future Outlook: Long-term Gains Despite Early Losses
Despite initial setbacks, experts and companies remain optimistic about AI’s long-range value proposition. As tooling, data quality, and organizational expertise improve, companies are increasingly able to deploy AI in ways that enhance productivity, uncover new revenue streams, and deliver better customer experiences.
Leading firms—including multinationals in finance, retail, and logistics—are already reporting notable performance boosts from reengineered, compliant AI workflows. Accenture projects that, by 2028, companies deploying mature AI responsibly could enjoy productivity gains upwards of 40%, alongside improved compliance and customer trust indices.
“The journey with AI is becoming less about magic and more about management,” said EY’s Harding. “The key is learning quickly from missteps, focusing on risk as much as reward, and adopting a culture of continuous improvement.”
Key Takeaways for Corporate Leaders
- Initial financial losses are common, but manageable with the right frameworks and transparency.
- Risk management, compliance, and governance are paramount for sustainable AI adoption.
- Successful companies are integrating cross-disciplinary expertise—including legal, tech, and ethics—early in the AI lifecycle.
- The regulatory landscape is evolving quickly, making proactive compliance a non-negotiable foundation.
As global enterprises continue to harness AI, the lessons of early adopters will prove invaluable, shaping a ‘next normal’ grounded in both ambition and accountability.