AI, Trust & Transformation: What the 2025 Global Study Means for the Financial Sector

We all know AI is everywhere — writing emails, drafting contracts, even building full business plans. But here’s the real question: Can we trust it? And if so, how much?

A new global study led by the University of Melbourne puts this question front and center. The key message? Trust is no longer optional, especially in finance.

 

For business owners, especially in fast-moving markets like the UAE and Saudi Arabia, this matters. AI is transforming the way we manage money, assess risk, and make decisions. But without trust, that transformation hits a wall.

The World Is Using AI — But Not Everyone Trusts It

AI is already woven into daily life. According to the 2025 Global Study led by the University of Melbourne, 66% of people use AI every single day, often without even knowing it.

But here’s the catch: only 46% of them actually trust it.

That’s a problem, especially in industries like finance, where trust is everything.

 

So, what builds trust in AI?

 

The study points to three main things:

  • AI literacy – People need to understand what AI does and how it works.

  • Oversight – There should be clear human checks on AI decisions.

  • Regulation – Strong rules are needed to make sure AI is used fairly and safely.

Here’s where it gets more interesting:

Trust in AI isn’t the same everywhere.

 

In emerging economies, like some parts of Asia and Africa, AI adoption is fast, but regulation is often weak, and people don’t always get proper training. That makes trust harder to build.

 

In developed countries, adoption may be slower, but there’s usually more awareness, stronger oversight, and tighter rules. That builds confidence.

 

The UAE and Saudi Arabia sit somewhere in the middle — growing fast, investing big in AI, but still working on the trust-building part.

 

And that’s exactly why this matters for you as a business owner: AI can help you grow, but only if you know when to trust it and when to double-check.

Middle East Spotlight: UAE, KSA, and Egypt

In the Middle East, AI isn’t a future trend anymore; it’s already in motion.

 

According to the study, AI usage is sky-high, with a significant share of people using it daily:

  • UAE: 97%

  • Saudi Arabia: 94%

  • Egypt: 71%

That’s some of the highest adoption in the world.

 

People here are optimistic. Businesses are embracing AI for speed, scale, and smarter decision-making. But there’s a gap between this public excitement and what the regulations currently cover.

 

And that’s the risk.

 

In countries like the UAE and KSA, governments are taking the lead. They’re building frameworks, setting policies, and creating national AI strategies. But it takes more than rules to build trust in something like AI.

 

It needs awareness, transparency, and constant learning.

 

For startups and small businesses, that means one thing:
Staying ahead, not just by using AI, but by understanding what’s guiding it.

 

Because in this region, tech is moving fast. But trust and good governance will decide who really wins.

The Hidden Risks: What Happens When AI Isn’t Watched

AI can be powerful. But without clear rules, it can also create serious problems, especially in finance.

The study found that 57% of employees use AI tools at work without telling anyone. Even worse, 48% upload company data into public AI platforms like chatbots or document generators.

Sounds harmless? It’s not.

This kind of behavior can:

  • Break audit trails

  • Expose confidential financial data

  • Lead to compliance failures

  • Even cause financial misstatements that land you in legal trouble

In finance, every number matters. If AI is pulling, editing, or suggesting figures behind the scenes, and no one knows, you’re on a dangerous track.

For new business owners, this is a wake-up call:
Using AI is fine. Using it without guardrails is not.

If you’re not setting clear internal rules for AI use, or at least asking how your tools handle data, you’re already at risk.

AI in Accounting & Auditing: Smart, But Needs Supervision


AI is already making accounting faster and easier.

It can:

  • Handle bookkeeping in minutes

  • Automate forecasting and tax calculations

  • Assist with financial modelling and IFRS disclosures

Sounds like a dream for small business owners, right?

But here’s the catch: just because it’s fast doesn’t mean it’s always right.

AI can suggest numbers. It can fill in reports. It can even help prepare your financial statements. But it can’t understand your business context or spot subtle mistakes the way a trained human can.

That’s why the study stresses the need for:

  • Validation – Don’t trust outputs blindly

  • Ethics – Make sure decisions made by AI are fair and accountable

  • Review checkpoints – Always have a human double-check before submitting anything official

Think of AI as a smart assistant, not your financial manager.

In the UAE, where businesses are scaling fast and rules are tightening, this balance between efficiency and oversight is key.

Governance & Training Gaps in Finance

AI is being used. But not everyone is trained.

 

Only 47% of employees in finance have received any formal training in how to use AI tools. That’s less than half.

 

Most firms still don’t have clear policies. No rules for usage. No checkpoints for review. No controls for mistakes.

 

This isn’t just a small issue. Without training or structure, teams can’t use AI safely, especially in areas like finance where accuracy and accountability matter.

 

The gap is wider when you look at global standards. Many financial institutions haven’t aligned with:

  • IFRS for reporting

  • ISQC and ISQM for audit quality

  • ISO 42001, the new global standard for AI management

That’s a risk.

 

For small businesses, this is a warning sign. Before adding AI to your accounting or finance work, make sure your people know how to use it, and your policies say when and how it should be used.

Regulatory Expectations: What the UAE and Global Standards Demand

In the UAE, the Federal Tax Authority (FTA) is watching. And so are global regulators.

 

AI use in finance isn’t unregulated — it’s just catching up. And it’s clear what’s coming:

If AI touches your numbers, there must be a trail.

 

That means:

  • Audit logs

  • Clear documentation

  • Disclosure of AI involvement in financial reporting

Standards and bodies like ISA 315 (risk identification), IFRIC (interpretation of accounting rules), and the IAASB (international audit standard-setting) all point to the same idea: transparency. If AI makes a decision, someone needs to know how, when, and why.
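To make "a trail" concrete, here is a minimal sketch of what logging AI involvement in a figure could look like in practice. The field names, file name, and helper function are illustrative assumptions, not drawn from any standard:

```python
import json
from datetime import datetime, timezone

def log_ai_involvement(figure_name, value, tool, input_summary, reviewer):
    """Record one audit-trail entry for an AI-assisted figure.

    Captures what the AI produced, which input it saw, and who
    reviewed the output, so the decision can be reconstructed later.
    (Hypothetical schema, for illustration only.)
    """
    entry = {
        "figure": figure_name,            # e.g. "Q3 VAT payable"
        "value": value,
        "ai_tool": tool,                  # which system suggested the number
        "input_summary": input_summary,   # what data the tool worked from
        "human_reviewer": reviewer,       # disclosure of who signed off
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only log: one JSON object per line, easy for auditors to replay.
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_ai_involvement(
    "Q3 VAT payable", 18250.00,
    tool="forecasting-assistant",
    input_summary="Q3 sales ledger export",
    reviewer="finance.lead@example.com",
)
```

Even a simple append-only log like this answers the three questions above: how the number was produced, when, and who checked it.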

 

There have been early moves toward AI assurance reviews: formal checks to make sure AI tools in finance are reliable, ethical, and compliant.

 

That’s going to be big.

Leadership Roles: Who Should Own the AI Conversation?

CFOs, internal auditors, and board members can’t sit this one out.

 

Finance leaders are now expected to:

  • Oversee AI policy

  • Monitor risk and compliance

  • Ensure data integrity from AI systems

Internal audit teams must test not just numbers, but the algorithms and data flows behind them.

 

AI isn’t just a time-saver. It’s a risk tool too. It can spot fraud patterns, detect anomalies, and help with continuity planning.

 

But it only works if leadership takes control.

AI in the Middle East: Not Just Talk Anymore

People say AI is coming. The truth is, it’s already here, and businesses in the UAE, Saudi Arabia, and Egypt are using it right now.

 

Take taxes, for example. Some companies are using AI tools that link straight to the FTA. These tools can tell you, in advance, what you might owe. That’s not a bad thing when you’re trying to plan cash flow and avoid nasty surprises.

 

Then you’ve got the auditors. Instead of going through hundreds of receipts and invoices line by line, they’re feeding data into AI systems. The software flags anything weird, so humans only have to check the tricky parts. Saves hours, maybe days.
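The flagging idea above can be sketched with nothing more than basic statistics (a toy illustration, not any specific vendor’s method): flag invoice amounts that sit far outside the typical range so humans only review the outliers.

```python
from statistics import mean, stdev

def flag_unusual_invoices(amounts, threshold=3.0):
    """Return indices of invoice amounts more than `threshold`
    standard deviations from the mean, so a human reviews only those."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Six routine invoices and one suspicious 8,500 entry.
invoices = [120.0, 95.5, 110.0, 130.25, 99.0, 8500.0, 105.0]
flagged = flag_unusual_invoices(invoices, threshold=2.0)
# flags only index 5, the 8,500 outlier
```

Real audit platforms use far richer models, but the workflow is the same: the software narrows hundreds of lines down to the handful worth a human’s time.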

 

Even board reports, the kind that used to take forever, are being handled by AI. It pulls the numbers, builds the charts, and can even help forecast what’s coming next. Less time on reports, more time making decisions.

 

And here’s another thing: a lot of these tools are now affordable. Some are local. Many just plug into your existing software. If you’re using cloud accounting, chances are, you already have access.

So, What Should You Do Now?

  1. Figure out where AI is already in play
    Start small. Look at your processes. Are you using any tool that “suggests” numbers, pulls reports, or predicts anything? That’s AI. Just note it down. Know what’s being touched.
  2. Be upfront about it
    If AI is helping with reports or taxes, don’t hide it. Whether it’s for internal use or something going to auditors or tax people, just say it. It’s better to be clear now than to explain later.
  3. Make a few basic rules
    Doesn’t have to be fancy. Just write down what’s acceptable and what’s not. What tools are allowed? When should a human check the results? Who’s responsible for reviewing the output?
  4. Pick someone to keep an eye on things
    You don’t need a full-time AI manager. Just nominate a responsible person, be it your finance lead or your accountant, to keep track of how AI is being used. They don’t need to code. They just need to pay attention.
  5. Keep your ears open
    AI moves fast, so stay alert to upcoming changes. You don’t need to be a tech wizard, but it helps to read up once in a while. One article a month. One short video. It’s enough to stay in the loop.
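Step 3 above ("make a few basic rules") can even start life as something machine-checkable. A minimal sketch, with an entirely hypothetical policy and tool names:

```python
# A tiny, made-up AI-usage policy: which tools are approved, and
# which tasks always require a named human reviewer before sign-off.
POLICY = {
    "approved_tools": {"cloud-accounting-assistant", "vat-estimator"},
    "human_review_required": {"financial_statements", "tax_filing"},
}

def check_usage(tool, task, reviewed_by=None):
    """Return a list of policy violations for one AI-assisted task."""
    violations = []
    if tool not in POLICY["approved_tools"]:
        violations.append(f"tool '{tool}' is not on the approved list")
    if task in POLICY["human_review_required"] and not reviewed_by:
        violations.append(f"task '{task}' needs a named human reviewer")
    return violations

# An unapproved chatbot preparing a tax filing with no reviewer
# trips both rules.
issues = check_usage("public-chatbot", "tax_filing")
```

A written one-pager works just as well; the point is that the rules exist somewhere explicit, not only in people’s heads.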

Conclusion: Trust AI — But Lead It

AI is already part of how business gets done — in the UAE, in Saudi Arabia, and around the world. But just using AI isn’t enough. What matters now is how it’s being used, and whether the people behind it understand what’s at stake.

 

In finance, where trust and accuracy are everything, leadership matters. Whether you’re running a small business or managing a growing team, it’s up to you to set clear rules, ask questions, and make sure decisions made by AI are checked and understood.

 

The tools are ready. The tech is here. What’s needed now is human judgment — to guide, to review, and to lead with clarity. That’s how trust is built. And that’s how real transformation begins.

FAQs:

Can financial data produced by AI be trusted?
AI data isn’t automatically trustworthy. You need to check where it came from, how the system worked it out, and whether there’s a trail showing each step. Someone from the finance or audit side should review it before it’s treated as reliable.

Do AI accounting tools really comply with IFRS?
Some tools claim to work with IFRS, but it depends on how well they’re set up. You still need an expert to review the results and make sure they match IFRS logic. AI helps, but it doesn’t replace judgment.

What is an AI audit trail, and why does it matter?
An audit trail shows what the AI did, which data it used, what decisions it made, and when. This is important for external auditors to understand the process and confirm that everything was done by the book.

Do I have to disclose AI use in my VAT return?
It’s not a legal must yet, but it’s safer to mention it. If you used AI to calculate or prepare anything for your VAT return, write that down. It shows you’re being open and helps if anything is questioned later.

How does AI affect quality standards like ISQC 1 and ISQM 1?
AI can help speed things up and spot errors early, which is great. But it can also create new risks if no one’s checking how it works. If you’re using AI in audit or reporting, you still need to follow proper quality checks. That means setting clear policies, keeping documentation, and making sure someone is responsible for what the AI does. Otherwise, you’re not meeting ISQC 1 or ISQM 1 — even if the work looks efficient.

Can AI-drafted financial statements be used as they are?
Only after a qualified person checks them. AI can pull the data and create drafts, but until a human reviews and signs off, you can’t treat them as legally solid.

Can AI handle uncertain tax positions under IFRIC 23?
AI might miss the grey areas or misread uncertainty in tax positions. IFRIC 23 needs careful thinking, and if you let AI handle it without checking, you might end up filing wrong or getting flagged.

How should CFOs judge whether AI output is material?
CFOs should look at whether the AI output affects big decisions or financial statements. If it affects reported figures, investor perceptions, or internal strategy, it’s likely material. The key is to assess the size, context, and relevance of the output and to document the reasoning clearly. AI can support decisions, but final judgment must still rest with finance leadership.

What is AI assurance?
AI assurance means checking that the tools used in reporting are safe, accurate, and follow the rules. It’s like quality control for the AI, making sure it doesn’t mess up your reporting or miss something important.

Should auditors test the AI tools used in reporting?
Yes, if AI is part of your reporting process, auditors should test it. They don’t have to understand all the code, but they should know what the tool does, what goes in, and what comes out.

When does an AI-related error become reportable?
If the AI causes a big mistake, like wrong tax numbers, misstatements, or skipped checks, that’s reportable. Anything that affects financial accuracy or breaks the rules should be flagged.

Can an AI system count as an intangible asset under IAS 38?
If the AI system meets the IAS 38 rules, like bringing future value and being controlled by the business, then yes, it can count as an intangible asset. But you need to show proof and track costs clearly.

Are NLP tools reliable enough for audit work?
They can be. NLP tools help with large text data, spotting patterns or red flags. But the results must still be reviewed and documented properly so they hold up under audit standards.

Does AI reduce the need for professional skepticism?
AI makes things faster, but auditors still need to question results. If something looks off, they can’t just trust the tool; they need to investigate. Skepticism is still key, even when AI is involved.

Should we appoint someone to oversee AI use?
It’s a good idea. Someone needs to track how AI is used, make sure it follows the rules, and step in if anything goes wrong. This person doesn’t need to be a tech expert, just someone who understands finance and can manage risk.
