The Fragmented Future of Financial AI
From high-frequency trading to credit scoring and fraud detection, artificial intelligence (AI) is transforming the financial sector. Yet this transformation is unfolding faster than the regulatory frameworks meant to manage its risks. In the EU, the U.S., and China, policymakers have launched vastly different approaches to AI governance, each aligned with national priorities but none designed to function in a globally interconnected financial system.
As the Bank for International Settlements (BIS) warns, this fragmented landscape may not only fail to manage emerging AI risks — it could make them worse by increasing systemic vulnerability through uncoordinated oversight and regulatory gaps (BIS, 2024).
This article argues for a global AI risk governance framework tailored to the banking sector — one that transcends compliance checklists and targets the system-wide behaviors of financial AI.
The Current Landscape: Diverse Models, Shared Weaknesses
European Union: Structure and Stringency
The EU AI Act is a risk-tiered, regulation-first framework that defines high-risk AI systems and imposes requirements such as documentation, explainability, and human oversight. For banks, AI used in credit scoring or fraud detection falls into the high-risk category, demanding ongoing audits and conformity assessments (Sieniewicz & Szostak, 2024).
But this approach may be too slow-moving to keep pace with AI development, and its focus on individual models misses the system-wide risks created when many institutions deploy similar AI systems trained on shared datasets.
United States: Agile but Fragmented
The U.S. has no central AI law. Instead, governance occurs through sectoral regulation — e.g., the Federal Reserve for banks, NIST for standards — alongside voluntary frameworks like the NIST AI Risk Management Framework (RMF). This fosters innovation but creates inconsistent oversight and an expanding surface for “shadow AI,” or unauthorized internal deployments (Deshpande, 2024).
China: Fast and Centralized
China’s governance relies on central mandates and state enforcement, enabling rapid deployment of AI regulation. But this model lacks transparency and public oversight. As a result, institutions may comply quickly — but the risk of regulatory blind spots and systemic opacity remains high (Al-Maamari, 2025).
What the BIS Report Adds: A Shift in Risk Thinking
The 2024 BIS report reframes the problem: it is not enough to ensure individual AI systems are compliant. System-wide AI interactions can amplify risk, creating feedback loops and herd behavior among banks.
“AI systems can increase the correlation of errors across institutions, particularly where models are trained on similar data or market assumptions.”
(BIS, 2024, p. 14)
Key BIS insights include:
- Model assurance ≠ system assurance: Certifying each model doesn’t prevent collective failure.
- Risk aggregation is invisible: When many banks use similar AI tools, they may unknowingly create macro-level vulnerabilities (illustrated in the sketch after this list).
- Supervisory blind spots: Traditional compliance is backward-looking and slow; AI models evolve in real time, often escaping detection.
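To make the risk-aggregation point concrete, here is a minimal Python sketch, not taken from the BIS report and using entirely illustrative parameters. Ten banks' credit-model prediction errors each mix a common component (a proxy for training on similar data or market assumptions) with bank-specific noise; as the shared component grows, errors become correlated across institutions.

```python
# Minimal sketch (not from the BIS report): how shared training data can
# correlate prediction errors across banks. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)
N_BANKS, N_LOANS = 10, 5_000

def mean_error_correlation(shared_weight: float) -> float:
    """Average pairwise correlation of the banks' prediction errors."""
    common = rng.standard_normal(N_LOANS)  # error driven by shared data
    errors = np.stack([
        shared_weight * common
        + (1 - shared_weight) * rng.standard_normal(N_LOANS)  # bank-specific noise
        for _ in range(N_BANKS)
    ])
    corr = np.corrcoef(errors)  # N_BANKS x N_BANKS correlation matrix
    return corr[~np.eye(N_BANKS, dtype=bool)].mean()  # mean off-diagonal value

for w in (0.0, 0.3, 0.6, 0.9):
    print(f"shared-data weight {w:.1f} -> mean error correlation {mean_error_correlation(w):.2f}")
```

In this toy setup, each bank's model could pass individual validation; the vulnerability appears only at the system level, which is precisely why the BIS distinguishes model assurance from system assurance.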
The Coordination Gap: Regulatory Fragmentation as Risk Amplifier
As AI adoption increases, regulators around the world are racing to catch up. But they often define “high-risk AI” differently, set misaligned technical standards, and pursue conflicting compliance timelines.
This leads to:
- Regulatory arbitrage: Banks may shift AI workloads to jurisdictions with weaker oversight.
- Duplicative compliance: Multinational banks face overlapping audits and reporting burdens.
- Compliance gaps: Some AI systems may fall between regulatory regimes and operate unchecked.
The BIS notes that these problems are especially acute in banking, where institutions are systemically important, interconnected, and increasingly reliant on AI for risk assessment, loan decisions, and operational security (BIS, 2024).
Global Governance: What AI GRC Should Look Like
To address these issues, the BIS proposes moving beyond national frameworks toward interoperable, global risk governance. Key recommendations include:
- Shared Model Registries: Regulators and financial institutions should co-develop registries that track AI tools used in core banking functions. These can support both transparency and sector-wide risk monitoring (a minimal sketch follows this list).
- Cross-Border Regulatory Sandboxes: A global network of sandboxes would allow institutions to test AI systems collaboratively under live but monitored conditions. This approach could accelerate innovation without sacrificing oversight.
- Systemic Risk Auditing: Regulators should move beyond model-specific testing and develop tools to simulate how AI tools interact at the ecosystem level, particularly under market stress or volatility.
- Common Metrics for Explainability and Bias: Without common technical standards for fairness, robustness, and interpretability, banks face uncertainty and inefficiency in AI assurance. Alignment here would reduce compliance friction.
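As a concrete illustration of the first and third recommendations, the following Python sketch shows what a shared registry entry might contain and how a supervisor could query it for sector-wide concentration. The schema, field names, and example records are hypothetical assumptions, not a proposed standard.

```python
# Minimal sketch of a shared model registry plus a sector-wide concentration
# query. Schema, field names, and records are hypothetical assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRecord:
    institution: str    # registering bank
    function: str       # core banking function, e.g. "credit_scoring"
    training_data: str  # provenance label for the training corpus
    risk_tier: str      # e.g. the EU AI Act's "high-risk" tier

registry = [
    ModelRecord("Bank A", "credit_scoring", "bureau_data_v3", "high"),
    ModelRecord("Bank B", "credit_scoring", "bureau_data_v3", "high"),
    ModelRecord("Bank C", "fraud_detection", "txn_graph_2024", "high"),
]

# Concentration check: how many institutions rely on the same training
# corpus for the same function? High counts flag the invisible risk
# aggregation the BIS report describes.
concentration = Counter((r.function, r.training_data) for r in registry)
for (function, data), count in concentration.items():
    if count > 1:
        print(f"{count} institutions use '{data}' for {function}")
```

The design point is that the registry's value is not in any single record but in the cross-institution queries it enables, which no national regulator can run alone.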
Shadow AI: A Hidden Global Risk
Another major concern raised in parallel research is shadow AI — the use of unauthorized or unregistered AI tools within financial institutions.
A 2025 study found that in sectors like banking, shadow AI was often responsible for bias, privacy breaches, and cybersecurity vulnerabilities — yet few institutions had active controls to detect or govern these deployments (Balogun et al., 2025).
Because shadow AI can propagate undetected across jurisdictions, it further illustrates the need for coordinated oversight and shared assurance tools.
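A coordinated assurance toolkit could start with something as simple as reconciling observed deployments against the shared registry. The sketch below is hypothetical: the "observed" inventory is hard-coded, where in practice it might be assembled from API-gateway logs, network telemetry, or procurement records.

```python
# Hypothetical shadow-AI reconciliation check: flag deployments that do not
# appear in the approved registry. Names and the inventory source are
# illustrative assumptions; real inventories would come from gateway logs,
# network telemetry, or procurement records.
approved = {"credit_scorer_v2", "fraud_net_v5"}   # from the shared registry
observed = {"credit_scorer_v2", "fraud_net_v5", "llm_helpdesk_beta"}

for model in sorted(observed - approved):
    print(f"unregistered deployment detected: {model}")
```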
Conclusion: Aligning Innovation With Resilience
The next financial crisis won’t start with spreadsheets — it may start with algorithms.
As AI becomes foundational to risk management, compliance, and customer engagement in banking, its governance must match its reach. The current patchwork of national rules is insufficient to address the interconnected, fast-evolving nature of financial AI risk.
We need a new model, one that:
- Looks beyond borders
- Monitors systemic behavior
- Coordinates across regulators
- And ultimately, builds trust in an AI-powered financial system
The tools exist. The risks are real. The time for global alignment is now.
References
Al-Maamari, A. (2025). Between innovation and oversight: A cross-regional study of AI risk management frameworks in the EU, U.S., UK, and China. arXiv preprint arXiv:2503.05773. https://doi.org/10.48550/arXiv.2503.05773
Balogun, A., Metibemu, O., Olutimehin, A., Ajayi, A., Babarinde, D., & Olaniyi, O. (2025). The ethical and legal implications of shadow AI in sensitive industries: A focus on healthcare, finance and education. Journal of Engineering Research and Reports. https://doi.org/10.9734/jerr/2025/v27i31414
Bank for International Settlements (BIS). (2024). Regulating AI in the financial sector. Bank for International Settlements.
Deshpande, A. (2024). Regulatory compliance and AI: Navigating the legal and regulatory challenges of AI in finance. In 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS) (Vol. 1, pp. 1–5). https://doi.org/10.1109/ICKECS61492.2024.10616752
Sieniewicz, S., & Szostak, K. (2024). The impact of the EU AI Act on the financial services industry. Finance and Capital Markets (formerly Derivatives & Financial Instruments). https://doi.org/10.59403/2czzh3x