DFSA Artificial Intelligence Survey 2025

Summary

The DFSA’s Artificial Intelligence Survey 2025 presents one of the clearest snapshots so far of how AI is shaping financial services inside the DIFC. It builds on the first edition released in 2024, allowing regulators and firms to track how adoption, governance, and risk practices have evolved in just one year. The picture that emerges is of a market moving quickly, but not blindly. Firms are expanding their use of AI across critical functions, yet remain conscious of the need for stronger oversight and clearer regulatory signposts.

 

The survey’s goal is straightforward. It measures how DIFC-authorised firms are adopting and governing AI, and how prepared they are for the risks and operational challenges that come with it. Because this is the second edition, it offers year-on-year comparisons that show how fast the ecosystem is maturing. The DFSA is using these insights to understand where firms need support, and where regulatory frameworks may need to evolve.

 

These findings sit within a wider national environment where AI has already become mainstream. The UAE continues to position itself as a global benchmark for AI readiness. Almost all residents – 97% – interact with AI in some form. A large majority of organisations, around 73%, already operate with formal governance frameworks. This national maturity creates both momentum and expectation. Firms inside the DIFC are under pressure to match the pace of the wider UAE economy.

 

A total of 661 DIFC firms took part in the survey, representing an 88% response rate, unusually high for a regulatory exercise. Their responses highlight a system that is shifting from experimentation to real deployment. Efficiency, performance gains, and data-driven decision-making remain the top reasons for using AI. What has changed is the scale. Many firms have moved beyond pilots and are now applying AI to core business areas, with adoption jumping from 33% in 2024 to 52% in 2025. Generative AI saw the most dramatic rise, growing by 166% in one year.

 

Progress, however, is uneven. Sixty percent of AI-using firms now have some form of governance structure. Yet 21% still operate without clear accountability, even in sensitive or high-risk functions. This gap is drawing attention. Firms repeatedly asked for clearer guidance, more practical examples, and greater harmonisation among UAE regulators. Their message is simple: adoption is accelerating, but governance frameworks must catch up.

Introduction

The DIFC is entering a new phase of AI maturity, and the shift is easy to see. Firms are using AI to sharpen judgment, strengthen controls, and upgrade how they work. It helps spot risks earlier, tighten compliance, and improve the way customers experience financial services. But the same tools that make operations smarter also introduce new risks. They need oversight that matches their impact. Not too heavy. Not too loose. Just proportional and clear.

 

This is where the DFSA’s approach stands out. It follows a simple idea. Regulate the risk, not the tool. The framework stays technology-neutral and risk-based, which gives firms freedom to innovate while still keeping the system safe. It also keeps the DFSA aligned with global regulators. 

 

The themes match what the FCA and the Bank of England highlighted in their 2024 review. The direction is reinforced again in the Dubai State of AI Report 2025, which sets the national tone for secure AI adoption across the UAE.

 

The survey itself has become an important source of insight. The 2024 edition gave everyone a baseline. It showed the early patterns of enterprise AI adoption and the first signs of an organised AI adoption framework emerging inside the DIFC.

 

The 2025 edition goes deeper. More firms participated. More use cases surfaced. The data is richer, and the story is clearer. AI is no longer a side project. It is becoming part of core strategy, and the survey now acts as a practical AI adoption report for the region.

AI Adoption and Types of Applications

AI is becoming part of business in the UAE in several ways. The key patterns are outlined below.

Adoption Growth

AI use inside the DIFC has jumped fast. Adoption moved from 33 percent in 2024 to 52 percent in 2025. The number of firms using AI almost doubled, rising from 177 to 345. This is real momentum, not hype. It shows that AI adoption is becoming part of the operating model, not an experiment on the side.

Type-Wise Growth

The growth is uneven but telling. Generative AI saw the biggest leap with a 166 percent increase. Narrow AI almost doubled with 99 percent growth. Machine learning and deep learning continued steady expansion at more than 60 percent. These numbers show a shift from theory to practice, especially in banking, where structured data and repeatable processes make deployment easier.
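The headline figures above are easy to sanity-check. A minimal sketch using the survey's published numbers (the `yoy_growth` helper is illustrative, not from the report):

```python
def yoy_growth(before: float, after: float) -> float:
    """Year-on-year growth, expressed as a percentage of the starting value."""
    return (after - before) / before * 100

# Figures reported in the DFSA survey
firm_growth = yoy_growth(177, 345)  # AI-using firms, 2024 -> 2025
adoption_pp = 52 - 33               # adoption rate shift, in percentage points

print(f"AI-using firms grew {firm_growth:.0f}% (177 -> 345)")
print(f"Adoption rate rose {adoption_pp} percentage points (33% -> 52%)")
```

A 95% rise in firm count squares with the report's "almost doubled" characterisation. Note the distinction the calculation makes explicit: the 33%-to-52% move is a 19 percentage-point shift in the adoption rate, not a 19% growth figure.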

Drivers of AI Adoption

AI is spreading across DIFC firms because it solves real problems. Each driver has its own weight, and together they explain why adoption keeps climbing.

Efficiency gains

This is the biggest driver. Firms want processes that move faster and break less. AI automates routine tasks, reduces manual effort, and cuts waiting time in workflows. Teams can focus on judgment instead of admin. When a firm feels the speed difference once, it rarely goes back.

Enhanced performance

AI helps teams make better decisions. Not louder decisions. Better ones. Models pick up patterns that humans miss in day-to-day work. That leads to sharper forecasting, cleaner prioritisation, and stronger execution. Performance becomes more consistent because it relies less on guesswork.

Better data analytics

Most firms already sit on mountains of data. Very few can use it well. AI changes that. It turns raw information into practical insight. It helps firms see risk, behaviour, and trends with a level of clarity they didn’t have before. For many executives, this is the moment AI stops being a buzzword and becomes a tool.

Improved risk management

Risk teams are using AI to spot issues earlier. Whether it’s transaction monitoring, stress testing, fraud detection, or anomaly checks, AI picks up signals long before traditional controls react. This early-warning ability is why AI adoption metrics keep rising across compliance-heavy firms. It’s prevention, not clean-up.

Compliance automation

Compliance used to scale only by adding people. AI changes the equation. It reads, compares, tracks, and flags. It makes monitoring continuous instead of periodic. It keeps teams updated on policy changes and helps them test controls faster. For regulated firms, this alone creates huge value.

Cost reduction

AI isn’t about replacing people. It’s about reducing waste. Less duplication. Fewer repeated tasks. Fewer manual checks. Over time, this lowers operational cost without weakening control. For many firms, the cost argument becomes the clean business case that pushes AI from “good idea” to “approved project.”

Barriers to AI Adoption

The push to adopt AI comes with friction. Regulatory uncertainty is still a top concern. Cybersecurity risks are another. Implementation costs slow down smaller firms. Data quality issues and legacy systems get in the way. Some teams worry about ethical risks. Others simply don’t have the skills yet. These challenges explain why secure and structured deployment frameworks, like a clear AI adoption framework, matter more than ever.

Stages of AI Deployment

AI inside the DIFC is maturing fast. Firms are no longer just experimenting. They are building structured paths that take an idea from a small test to a live system that runs every day. These stages help explain how enterprise AI adoption is spreading and why the DIFC is becoming a serious centre for AI in financial services. You can see a clear progression now, and the data shows how quickly firms are climbing the ladder.

Maturity Levels

Most firms follow the same journey. They begin with small proof-of-concepts to test if a model works. Then they shift into pilot phases with limited teams. If the results hold up, AI is deployed across bigger parts of the business. The final stage is when AI becomes critical to operations. At that point, the tool is too important to remove without slowing the business. This path is becoming the standard across the centre.

Maturity Shift from 2024 to 2025

The jump in one year is striking. Large-scale deployments nearly tripled, from 41 to 121. AI systems that used to sit in controlled tests are now running across major functions. Even more important, the number of firms that call AI critical to their daily operations climbed from 17 to 29. These are big leaps. They show that firms are no longer just exploring. They are committing. It also explains the growing interest in AI governance, because deeper deployment brings higher expectations.

Internal vs External Deployment

Internal use still dominates. Seventy-nine percent of firms deploy AI in functions like HR, finance, legal, and operations. These are controlled environments where risk is easier to manage. Audit, compliance, and risk management teams are heavy adopters too, with 162 firms using AI in these areas. That makes sense. The work is repetitive, data-heavy, and perfect for automation.

 

Customer-facing adoption is rising. One hundred forty-six firms now use AI in client-related functions. This shift reflects the broader push of AI adoption in finance, where internal gains eventually lead to better customer experiences.

Adoption Outlook

Most firms expect their AI footprint to grow. Sixty percent anticipate expansion in the next twelve months. Seventy-five percent expect even wider expansion over three years. These expectations align with UAE-wide, industry-level adoption trends for 2025 and the national push toward secure AI adoption. The message is simple. AI is no longer a trial phase. It is becoming a long-term capability shaping how financial institutions operate and compete.

Third-Party Providers and Cloud Adoption

AI inside the DIFC isn’t being built in isolation. Most firms rely on outside providers to develop, run, or maintain their systems. This dependence is shaping how AI operates across the centre, especially as more firms shift toward cloud-heavy setups driven by the wider push for scalable models.

Reliance on External Providers

A clear majority of firms use third-party AI developers. The pattern is simple. Instead of building everything internally, firms buy specialised tools and plug them into their systems. More than 60 percent now run 90 percent of their AI workloads on cloud platforms. That’s almost full reliance on external infrastructure. This level of dependence speeds up deployment but also raises new questions about control, resilience, and who carries responsibility when things break.

Cloud Concentration

The cloud choices are highly concentrated. AWS, Google Cloud, and Microsoft Azure dominate the market. These providers give firms scale, security features, and fast implementation. But the downside is also obvious. When most of the DIFC runs AI on the same few platforms, one disruption can hit many firms at once. This is becoming an important point in every AI adoption report across the region.
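Concentration of this kind can be quantified. A minimal sketch using the Herfindahl–Hirschman index (the market shares below are hypothetical; the survey reports concentration but does not publish exact workload shares):

```python
def herfindahl_index(shares: list[float]) -> float:
    """Sum of squared market shares on a 0..1 scale.
    By convention, values above roughly 0.25 indicate high concentration."""
    return sum(s * s for s in shares)

# Hypothetical split of AI workloads across three dominant cloud
# providers plus a long tail of smaller platforms (illustrative only).
shares = [0.40, 0.30, 0.25, 0.05]
hhi = herfindahl_index(shares)
print(f"HHI = {hhi:.3f}")
```

Under these illustrative shares the index comes out at 0.315, comfortably above the conventional high-concentration threshold, which mirrors the survey's point that a disruption at one provider could ripple across many firms at once.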

Emerging Risks

The survey highlights three risks that are starting to grow faster than adoption itself.

Structural concentration risk comes first. If one major provider goes down, a large part of the DIFC stalls with it.

Supply chain vulnerabilities come next. Firms depend on long chains of vendors, tools, and model components. One weak link affects them all.

The third is systemic operational risk. As reliance grows, the failure of a single provider or major update could spill across the market. These risks now sit at the centre of conversations about secure AI adoption.

DFSA Warning

The DFSA is clear. Firms must strengthen their third-party risk management. They need continuity plans that actually work, not just documents. The regulator expects firms to understand how their providers operate, what happens during outages, and how quickly they can recover. This guidance now forms part of the region’s expectation for responsible growth in AI adoption in finance.

Governance and Accountability

AI governance is becoming the deciding factor between safe growth and risky deployment. As adoption rises, so does the need for clear structures, real accountability, and people who understand the risks. The DIFC’s shift mirrors the global movement toward responsible AI governance and formal governance frameworks.

Governance Structures in Place

Seventy percent of AI-using firms now have formal governance frameworks. That’s progress. It means most firms agree that AI needs structure, rules, and oversight. Ninety percent have assigned responsibility for AI oversight to specific teams or leaders. This shows a maturing market. AI is no longer a side task handled by whoever has time. It is now part of corporate governance.

Who Governs AI?

Responsibility varies across firms. Some assign it to a Chief AI Officer or the Head of Compliance. Others rely on department heads or committees. Technology committees handle the technical side. Risk committees monitor model impact and reliability. Audit committees look at controls and testing.

A growing number now have AI ethics or AI governance committees. Even though titles differ, the message is the same. AI oversight is moving toward formal structures that mirror traditional governance models across the finance industry.

Governance Gaps

Even with progress, the gaps are serious. Twenty-one percent of firms still have no accountability for AI. Eleven percent run large-scale AI deployments with no governance structure at all. Twenty-six percent use AI in critical business areas without any formal oversight. These numbers show that adoption is moving faster than governance. Without stronger frameworks, risks will grow quietly under the surface.

Governance Challenges

The biggest challenge is the need for clearer regulatory guidance. Firms want to know how to build governance that aligns with expectations. Skill shortages add pressure. Many teams don’t have enough expertise to manage complex AI systems.


Board understanding is another issue. The number of boards struggling to understand AI jumped from 20 to 53 firms. Executives face similar gaps. Many don’t fully grasp the risk, value, and long-term impact of AI.


These challenges explain why AI governance best practices and AI data governance are becoming essential topics inside the DIFC. Without stronger knowledge and clearer guidance, firms won’t reach the maturity levels they aim for.

Regulatory Guidance and Future Initiatives

The conversation around AI in financial services is changing fast. Firms are no longer asking whether they should use AI. They are asking how to use it safely, intelligently, and in line with expectations from the DFSA. The latest data shows that businesses want clarity. They want consistency. And they want guidance that feels practical, not theoretical.

What Firms Expect from the DFSA

Firms are reaching a point where high-level advice is not enough. They want clear rules they can use in real decisions.

 

Most firms are asking for clarification of regulations. They want to know what is acceptable, what is risky, and where the DFSA draws the line.

 

They also want scenario-driven guidance. Not generic pointers. Actual examples that reflect real situations financial institutions face as they deploy AI across compliance, risk, and operations.

 

Another big request is UAE-wide harmonisation. Many firms operate across multiple regulators. When rules differ, even slightly, it slows down deployment and increases uncertainty. They want a system where standards align so decisions can move faster.

 

Some firms expect the DFSA to introduce AI-specific rules for the DIFC. They are not afraid of rules. They simply want clarity on expectations so they can plan long-term and scale without second-guessing compliance.

 

And finally, there is a strong push for governance best practices. Firms want benchmarks. They want examples of what “good” looks like so they can build internal models that match the DFSA’s view of responsible AI.

What These Expectations Mean

These expectations show that the market is maturing. Firms are smarter about AI. Their questions are more precise. They want predictability, not guesswork.

 

The data also shows an increasing need for unified standards across regulators. As AI becomes more central to business operations, fragmented rules can create friction. Firms want a clean, consistent regulatory environment that supports responsible growth.

Conclusion

AI adoption is accelerating. Firms are confident. They are building internally first, testing models inside their own walls before they scale outward. This internal-first pattern shows they are cautious but committed. They want control. They want safety. But they are moving ahead at full speed.

 

Governance is not keeping up. Many firms deploy AI in critical areas, yet their governance frameworks are still thin or incomplete. The gap between deployment risk and governance maturity is widening. And that gap is where real risks live.

 

The DFSA is preparing for this shift. Their approach is shaping into three priorities.

 

They plan to adopt a risk-based supervisory model, meaning higher-risk deployments will get more attention, more scrutiny, and more engagement.

 

They are also leaning toward collaborative regulatory development. The market is moving too fast for static rules. Firms and regulators need to build guidance together, sharing insights and shaping common standards.

 

And most importantly, they want to balance innovation with investor protection. The goal is not to slow firms down. It is to make sure AI grows with safeguards, fairness, and accountability built in from the start.
