Citi, Morgan Stanley, Microsoft and others back FINOS initiative to standardize AI controls for banking
The race to integrate AI into finance just shifted into a collaborative gear, with some of the world’s largest banks and tech companies agreeing to work together on building shared guardrails before the next algorithmic blunder hits the headlines.
On Tuesday, the Fintech Open Source Foundation (FINOS) announced a new industry-wide initiative called Common Controls for AI Services, backed by banking powerhouses like Citi, Morgan Stanley, RBC and BMO, alongside cloud giants including AWS, Google Cloud, and Microsoft.
Their goal? To set the baseline rules for how artificial intelligence can — and should — be used across the financial sector.
No single playbook? Then write your own
With global regulators still scrambling to figure out how to police AI, banks aren’t waiting.
According to FINOS, the newly launched effort aims to create vendor-neutral controls that can be adopted across banks, tech providers, and developers alike. That means everything from compliance checks and bias monitoring to documentation standards and real-time risk flags. Basically, all the things you’d want in place before letting AI loose on a high-stakes trading desk.
And they’re not starting from scratch.
This move builds on a prior success: FINOS’ Common Cloud Controls, released in 2023 with support from Citi, which set uniform benchmarks for using public cloud infrastructure in finance. Think of this new AI initiative as Cloud 2.0 — only with more machine learning and more at stake.
One person involved in the project summed it up like this: “We already standardized the plumbing. Now we’re trying to stop the pipes from exploding.”
A who’s-who of Wall Street and Silicon Valley
This isn’t just a couple firms teaming up for a flashy PR boost. It’s the kind of lineup you don’t usually see playing on the same team.
Just look at the roster:
- Citi, Morgan Stanley, RBC, BMO
- Microsoft, Google Cloud, AWS
- Goldman Sachs and Red Hat, also lending support
And they’re not doing this out of kindness. The risks are real, the stakes are high, and frankly, no one wants to get caught off guard by another AI snafu that could wipe out billions or trigger yet another hearing on Capitol Hill.
“This is about being proactive, not reactive,” said one FINOS participant, speaking on background. “We’ve seen what happens when innovation outpaces controls. We’re not letting that happen again.”
What’s actually being built?
To keep things focused, FINOS split its AI efforts into two parts: readiness and controls.
Last year, members drafted an “AI readiness” framework — basically a prep checklist for institutions trying to integrate AI responsibly. That work will continue in parallel.
This new effort, however, is all about building practical tools, including:
- A real-time compliance validation toolkit for developers using AI in financial apps
- Shared control definitions for major AI service types, like predictive models and chatbots
- Common risk flags and audit documentation standards
These controls will eventually feed into CI/CD pipelines (the automated build, test, and deployment process developers use to ship software) to make sure AI-driven apps meet expectations before they go live.
There’s even talk of integration with GitHub and other dev platforms to keep things embedded in the workflow, not buried in a dusty policy PDF.
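None of this tooling exists publicly yet, but it’s not hard to picture what such a pipeline gate might look like. Here’s a minimal sketch, with the caveat that the control names, the config format, and the validate_controls.py script are all invented for illustration; FINOS hasn’t published a real schema:

```python
# Hypothetical CI gate: fail the build if an AI service's declared
# configuration violates any shared control definition.
# The control schema below is invented for illustration only.
import sys

import yaml  # third-party: pip install pyyaml

# Example control definitions a bank might pin in its pipeline.
CONTROLS = {
    "require_bias_monitoring": lambda cfg: cfg.get("bias_monitoring") is True,
    "require_audit_log": lambda cfg: cfg.get("audit_log", {}).get("enabled") is True,
    "forbid_unreviewed_models": lambda cfg: cfg.get("model_review_status") == "approved",
}

def validate(config_path: str) -> int:
    """Return 0 if every control passes, 1 otherwise."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f) or {}
    failures = [name for name, check in CONTROLS.items() if not check(cfg)]
    for name in failures:
        print(f"CONTROL FAILED: {name}")
    return 1 if failures else 0

if __name__ == "__main__":
    # Run as a CI step, e.g. `python validate_controls.py ai-service.yaml`;
    # a non-zero exit code blocks the deploy like a failed unit test would.
    sys.exit(validate(sys.argv[1]))
```

Wired into a pipeline that way, a failed control check stops a release before it ships, which is exactly the “meet expectations before they go live” behavior FINOS describes.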
No help from Washington, so Wall Street steps up
Regulatory clarity? Don’t count on it.
Since President Donald Trump’s return to office, momentum for strict AI regulation has cooled. And while the U.S. Treasury Department has warned financial firms about AI’s cyber risks, it hasn’t laid out any enforceable standards yet.
A March 2024 Treasury report recommended financial institutions “comply with existing laws” but stopped short of proposing any fresh rules specific to AI. That basically leaves firms translating decades-old compliance frameworks into the AI era on their own.
Hence the FINOS move.
“This isn’t about replacing regulation,” one bank executive said. “It’s about setting a baseline we can all agree on until the rules catch up.”
Where the AI money’s going — and why it matters now
Banks are pouring billions into AI — from customer service bots and fraud detection to algorithmic trading and personalized lending. But with that comes risk, particularly with third-party AI vendors plugging into sensitive financial systems.
Here’s a quick snapshot of where the financial sector’s AI interest is currently concentrated:
| AI Use Case | Adoption Level (2025) | Key Risk Factor |
|---|---|---|
| Fraud Detection | High | False positives |
| Loan Underwriting | Medium | Algorithmic bias |
| Chatbots & Virtual Assistants | High | Data privacy leaks |
| Predictive Trading Models | Low–Rising | Market volatility feedback |
| Personalized Marketing | Medium | GDPR and consent concerns |
The idea, FINOS says, is to build “preventative scaffolding” now — so banks aren’t cleaning up messes later.
The developers are watching, too
This isn’t just a boardroom thing. FINOS wants engineers — the folks actually building the stuff — to weigh in, contribute, and adopt the standards as part of their day-to-day work.
If it works, developers will be able to validate whether their AI apps meet bank-approved control definitions straight from their code editor.
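What that editor integration looks like is still an open question. One plausible shape, reusing the hypothetical validator sketched above, is a thin wrapper that runs on save or as a pre-commit step; again, every name here is an assumption, not anything FINOS has shipped:

```python
# Hypothetical on-save / pre-commit wrapper around the validator above.
# Most editors can run an external command and surface its output when
# the exit code is non-zero.
import subprocess
import sys

def check(config_path: str) -> None:
    result = subprocess.run(
        [sys.executable, "validate_controls.py", config_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout, file=sys.stderr)
        raise SystemExit("AI control check failed; see output above.")

if __name__ == "__main__":
    check(sys.argv[1])
```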
One developer from a major financial firm involved in the initiative put it bluntly: “We’re tired of flying blind. This gives us a checklist before regulators even knock.”