Know Your Agent: the Missing Compliance Layer

Author

Nicolas Devillard

KYC verifies the person and KYB verifies the business, but neither covers the explosion of AI agents acting on accounts (Inbound KYA) or running compliance operations (Internal KYA). This article introduces Know Your Agent (KYA) as the missing compliance layer for governing AI-driven decisions, tracing audit trails, and assigning liability, driven by immediate regulatory pressures like the EU AI Act and DORA.

KYC verifies the person. KYB verifies the business. Neither covers what's coming next.

On April 9, Revolut started rolling out AIR to 13 million UK customers: an AI assistant inside the app that can freeze cards, manage subscriptions, track spending, and handle investment queries. The user swipes down and talks to it. AIR replaces multi-step navigation with a single conversational interface, retains zero data with third-party AI providers, and only accesses what the customer already sees.

That's the front. Behind it, every action AIR takes has to go through Revolut's compliance, fraud, and operational stack. When AIR freezes a card or pauses a payment, something needs to govern that action, log it, and make it auditable. Now multiply by 13 million users and roll it out across Europe.

This is happening from two directions at once. From the outside: customers send AI agents (like AIR) to act on their accounts. From the inside: fintechs deploy AI agents in their own operations to automate fraud detection, KYC reviews, AML screening, transaction monitoring.

Both create the same problem. Nobody is verifying the agent.

The industry is starting to call this KYA. Know Your Agent. The term is getting used loosely, and most people confuse the two dimensions. They're different problems with different timelines, different regulatory drivers, and different solutions.

The two KYAs

Inbound KYA is about verifying an external AI agent that acts on behalf of a customer. When Revolut AIR freezes a card, when a customer's AI assistant initiates a transfer through their bank, or when an agent negotiates a price and completes a purchase, the receiving institution needs to answer: who built this agent? What is it authorized to do? Can it be traced back to a verified human? Can the customer revoke it?

Visa launched its Trusted Agent Protocol with Cloudflare, Microsoft, and Shopify. Mastercard launched Agent Pay with agentic tokens at the network level. Amazon's "Buy for Me" and Google's "Shop with AI" are live. In crypto, the x402 protocol (Coinbase-backed, Cloudflare-integrated) has settled over 75 million on-chain AI agent transactions on Base and Solana using USDC. MCP servers are adding x402 support so AI models can pay for tool calls autonomously.

All of this is infrastructure-level. It answers "is this agent who it claims to be?" It doesn't answer what happens next. The institution receiving the agent's action still has no operational workflow to check its authorization scope, apply risk scoring (Sardine for device/behavior risk, ComplyAdvantage for sanctions), and route to a human when something doesn't match. That middle layer between "agent identified" and "action executed" doesn't exist.

Internal KYA is about governing the AI agents a fintech deploys in its own operations. Behind Revolut AIR, behind every neobank's compliance engine, there's a stack of AI-driven suppliers making decisions: Sardine scoring device and behavioral risk. Persona running identity verification. ComplyAdvantage screening against sanctions lists and PEP databases. DotFile and Topograph pulling company registries for KYB. Kolar running autonomous banking operations. Sphinx automating KYC/AML decisions inside existing tools. Eloquent AI processing fraud and disputes end-to-end.

Each of these makes or influences a regulated decision. The institution deploying them needs to ensure: permissions are enforced in code (not in prompts), execution is deterministic and reproducible, every decision has an audit trail inspectable by regulators, a human can intervene at any step, and performance is monitored for accuracy and drift.
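The first two requirements above can be sketched in a few lines: a permission set checked in code before any action runs, with every attempt, allowed or denied, appended to an audit log. This is a minimal illustration; the class, field, and action names are all invented for the example, not an existing API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPolicy:
    """Hypothetical sketch: permissions enforced in code, not in prompts."""
    agent_id: str
    allowed_actions: frozenset              # checked before execution, never via prompt
    audit_log: list = field(default_factory=list)

    def execute(self, action: str, handler) -> bool:
        """Run `handler` only if `action` is allowed; log the decision either way."""
        allowed = action in self.allowed_actions
        self.audit_log.append({
            "agent": self.agent_id,
            "action": action,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        if allowed:
            handler()
        return allowed

policy = AgentPolicy("kyc-reviewer-01", frozenset({"flag_for_review", "request_document"}))
assert policy.execute("flag_for_review", lambda: None) is True
assert policy.execute("approve_account", lambda: None) is False  # denied in code, not by a prompt
assert len(policy.audit_log) == 2                                # both attempts are traceable
```

The point of the sketch: the denial and the audit entry happen regardless of what the model was told, which is what "enforced in code" means in practice.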

Internal KYA is the one that matters right now. External agents initiating payments are still early. Internal agents running compliance ops are live today.

To make this concrete: a fintech running KYC automation plugs in Persona for identity verification, ComplyAdvantage for sanctions screening, and Sardine for fraud risk scoring. Three suppliers, three AI-driven outputs, feeding into one compliance decision. Who governs the interaction between them? Who logs the combined decision path? Who escalates when Persona says approve but Sardine flags risk? That's Internal KYA. The governance of a multi-supplier, multi-agent compliance stack.
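That escalation logic can be made concrete in a few lines. The supplier names are real products, but the verdict shapes and thresholds below are invented for illustration and reflect nothing about their actual APIs:

```python
def combine(persona_verdict: str, sanctions_hit: bool, fraud_score: float) -> str:
    """One compliance decision from three hypothetical supplier signals."""
    if sanctions_hit:
        return "block"                      # a sanctions hit always wins
    if persona_verdict == "approve" and fraud_score < 0.3:
        return "approve"                    # suppliers agree: low risk
    return "human_review"                   # suppliers disagree: escalate, don't guess

assert combine("approve", False, 0.1) == "approve"
assert combine("approve", False, 0.8) == "human_review"  # Persona approves, Sardine flags
assert combine("approve", True, 0.1) == "block"
```

The design choice worth noting: disagreement between suppliers routes to a human rather than to whichever signal arrived last.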

In crypto, the same picture with different names: TRM Labs for blockchain intelligence, Chainalysis for transaction tracing, Sardine for device risk. All producing signals. Nobody orchestrating the decision workflow between them.

What the regulations actually say

Three regulatory frameworks are converging. None of them use the term "KYA." All of them require what KYA describes.

EU AI Act (Regulation 2024/1689)

The EU AI Act classifies AI systems used in credit scoring, fraud detection, AML risk profiling, KYC/KYB automation, and automated financial decision-making as high-risk under Annex III.

High-risk systems must have:

- A continuous risk management system throughout the lifecycle (Art. 9).
- Comprehensive technical documentation before market placement (Art. 11).
- Automatic event logging for traceability (Art. 12).
- Sufficient transparency for deployers to interpret outputs (Art. 13).
- Human oversight mechanisms allowing intervention and interruption (Art. 14).
- Tested accuracy, robustness against errors and attacks, and cybersecurity protections (Art. 15).
- A quality management system ensuring ongoing compliance (Art. 17).
- Registration in the EU database before deployment (Art. 49).

The compliance deadline is currently August 2, 2026. The Digital Omnibus proposed by the Commission in November 2025 includes a stop-the-clock mechanism that would push this to December 2, 2027 for standalone high-risk systems. The Digital Omnibus has not been adopted yet. Until it is, August 2 remains the legal deadline.

Either way, the requirements don't change. Only the enforcement date moves.

ACPR (France)

The ACPR published its 2026 work programme in January. AI supervision is one of five strategic axes.

They created a new standing directorate: the Direction de l'Innovation, des Données et des Risques Technologiques, combining innovation, AI supervision, and cyber. The Fintech-Innovation unit launched a programme specifically on algorithm auditing. They're building the methodology for how to evaluate AI systems in financial institutions: governance, performance, robustness, cybersecurity.

The ACPR is designated as market surveillance authority for the EU AI Act in banking and insurance in France. They will inspect. They will enforce.

And they're not waiting for the EU timeline. The directorate is operational now.

PSD3 / PSR

PSD3 and the Payment Services Regulation are expected in the Official Journal by end of Q2 2026. First enforcement wave (fraud, SCA, operational obligations) starts late 2026 or early 2027.

For KYA, the key change: PSD3 will formalize delegated payment initiation by AI agents and force the industry to solve Strong Customer Authentication for autonomous agents. PSD2 doesn't mention AI. PSD3 barely does. The gap between what agents can do and what regulation covers is where the risk sits.

DORA

Already in force. Third-party AI providers used in critical financial functions (fraud detection, AML screening, KYC automation) may be classified as critical ICT third-party providers. Financial institutions must demonstrate they can monitor, audit, and terminate these provider relationships.

Every external AI agent plugged into a fintech's operations is a DORA third-party risk question.

MiCA (for crypto)

MiCA entered full enforcement in December 2024. Over €540M in penalties issued. CASP authorization deadline is July 1, 2026. Every crypto service provider needs transaction monitoring, wallet screening, and travel rule compliance.

When AI agents move stablecoins autonomously through x402 or similar protocols, the compliance stack doesn't change. Wallet screening, transaction monitoring, sanctions checks all still apply. But the operational layer to execute those checks on agent-initiated transactions barely exists.

The crypto KYA problem

Crypto is further ahead on this problem than fiat. Not by choice. By necessity.

AI agents are already transacting on-chain. The x402 protocol settles payments in USDC. The agent signs the transaction, server verifies, done. No human. No login. No SCA equivalent. Base and Solana handle the settlement. Nobody handles the compliance.

The agents don't stay on one chain. They bridge assets, swap tokens, interact with DeFi protocols, sometimes within minutes. Cross-chain movement is where most illicit activity hides.

Blockchain intelligence providers like TRM Labs track cross-chain movement across 30+ blockchains. Their API returns wallet risk scores in under 400 milliseconds. They just launched AI agents for natural language on-chain investigation in March 2026.

But intelligence without an operational workflow is just data. An alert that shows up in a dashboard and gets copy-pasted into a spreadsheet doesn't satisfy MiCA, doesn't produce an audit trail, and doesn't create the documented decision process a regulator expects.

Three workflows emerge:

Agent wallet screening. When an AI agent initiates an on-chain transaction: extract the counterparty wallet, screen it against sanctions and illicit activity databases via TRM Labs, route the result through a decision tree (auto-approve low risk, human review medium risk, auto-block and alert on high risk), log everything.
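The decision tree in that workflow is simple enough to sketch. The risk bands are assumptions; a real integration would replace the score argument with a call to a provider such as TRM Labs and persist the log somewhere durable:

```python
def route_wallet(risk_score: float, log: list) -> str:
    """Route an agent-initiated transaction by counterparty wallet risk; log every screening."""
    if risk_score < 0.3:
        decision = "auto_approve"
    elif risk_score < 0.7:
        decision = "human_review"
    else:
        decision = "auto_block"             # and alert compliance
    log.append({"score": risk_score, "decision": decision})
    return decision

log = []
assert route_wallet(0.1, log) == "auto_approve"
assert route_wallet(0.5, log) == "human_review"
assert route_wallet(0.9, log) == "auto_block"
assert len(log) == 3                        # every screening produces an audit entry
```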

Continuous agent transaction monitoring. For CASPs running trading bots, arbitrage agents, or DeFi yield agents: feed TRM Labs transaction monitoring alerts into a governed workflow. Auto-classify known patterns. Escalate suspicious patterns with full forensics context. File SARs when thresholds are met.

Agent identity registry. For platforms allowing AI agents to operate: register the agent (creator, owner, capabilities, spending limits), screen its wallets, approve or reject based on risk profile, monitor continuously, suspend or terminate if the agent's wallet touches flagged entities.
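A registry entry like the one described might look like the sketch below. Every field and status name is an assumption, not an existing standard; the useful property is that lifecycle changes are recorded, so the registry doubles as an audit trail:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Hypothetical registry entry for a platform that admits third-party AI agents."""
    agent_id: str
    creator: str                    # who built the agent
    owner: str                      # the verified human or entity it acts for
    capabilities: tuple             # e.g. ("trade", "bridge")
    spending_limit_usd: float
    status: str = "pending"         # pending -> approved | rejected -> suspended
    history: list = field(default_factory=list)

    def set_status(self, new_status: str, reason: str) -> None:
        """Record every lifecycle change before applying it."""
        self.history.append((self.status, new_status, reason))
        self.status = new_status

agent = AgentRecord("bot-7", "acme-labs", "alice", ("trade",), 500.0)
agent.set_status("approved", "wallets screened clean")
agent.set_status("suspended", "wallet touched a flagged entity")
assert agent.status == "suspended"
assert len(agent.history) == 2      # the full lifecycle is reconstructable
```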

What's actually real today

The honest picture: AI agent adoption in European financial ops is still early. Even at scale (50 internal agents, 300 outsourced operators, 8 markets), the most advanced fintechs have automated only 2-3 workflows out of 30 with AI. The FCA in the UK explicitly said in December 2025 they won't introduce AI-specific rules, opting for principles-based oversight.

Inbound KYA (customer agents paying on your behalf) is mostly crypto-native or in pilot. Fiat-side, PSD2's SCA requirements still require human authorization. PSD3 will start addressing this, but enforcement is 2027 at the earliest.

Internal KYA (governing your own AI agents) is where the need is immediate. Agents are being deployed in fraud, KYC, AML, onboarding. The EU AI Act requirements for high-risk systems are concrete. The ACPR is building inspection methodology. DORA requires third-party agent risk management.

Where the gaps are

Three gaps, not one.

The governance gap. Visa and Mastercard are solving agent verification at the payment network level. Sumsub and Vouched are building agent identity credentials. Ballerine launched an "agentic commerce governance" platform in January 2026 for merchant risk at PSPs. None of them solve what happens inside the institution. When Persona returns a KYC verification, when Sardine returns a risk score, when TRM Labs flags a wallet, when ComplyAdvantage returns a sanctions hit, when Kolar executes an autonomous banking operation: someone or something has to take those signals, run them through a decision process, route to a human when needed, log every step, and produce the audit trail that Art. 12 of the EU AI Act requires and that the ACPR's new directorate will inspect.

The operational gap. Governance is necessary but not sufficient. When Sardine holds a payment and ComplyAdvantage flags the same customer for sanctions, that's not an audit trail problem. It's a routing problem. Which team handles it? What's the SLA? Who sends the RFI to the customer? Through which channel? What happens when the customer responds? These are operational workflows that sit between suppliers, between teams, between systems. Today most fintechs handle them through Slack messages and spreadsheets. That doesn't scale when agents multiply the volume of decisions per hour.

The responsibility gap. This is the one nobody wants to talk about. Take a chargeback. A customer's AI agent initiated a purchase. The merchant's fraud agent approved it. The payment processor's risk agent scored it low-risk. The transaction goes through. Now the customer disputes it. Who is liable? The customer who delegated to an agent? The merchant whose agent approved? The processor whose agent scored? The AI provider who built the model? PSD3 will start to formalize liability for AI-initiated transactions but it's not there yet. Today, when an agent-driven chargeback hits, it falls into a dispute workflow where no one has mapped the chain of agent decisions that led to the transaction. The same applies inside compliance: if an AI agent auto-approved a KYC check that turns out to be fraudulent, and three suppliers contributed signals to that decision, the institution needs to reconstruct exactly which agent said what, when, and why. That's not a governance feature. It's an operational requirement for liability attribution.
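The reconstruction step at the end of that example amounts to a query over an append-only decision log: filter the events for one transaction, order them in time, and the chain of agent decisions falls out. The event structure below is invented for illustration:

```python
def reconstruct_chain(events: list, transaction_id: str) -> list:
    """Return the ordered list of agent decisions behind one transaction."""
    chain = [e for e in events if e["txn"] == transaction_id]
    return sorted(chain, key=lambda e: e["at"])

events = [
    {"txn": "t1", "agent": "processor-risk", "decision": "low_risk", "at": 3},
    {"txn": "t1", "agent": "customer-agent", "decision": "initiate", "at": 1},
    {"txn": "t2", "agent": "customer-agent", "decision": "initiate", "at": 1},
    {"txn": "t1", "agent": "merchant-fraud", "decision": "approve",  "at": 2},
]
chain = reconstruct_chain(events, "t1")
assert [e["agent"] for e in chain] == ["customer-agent", "merchant-fraud", "processor-risk"]
```

The hard part is not the query; it is that today no one is writing these events down in the first place.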

These three gaps feed each other. You can't attribute responsibility without an audit trail (governance). You can't produce an audit trail without routing the decision through a structured workflow (operations). You can't route it properly if you don't know which agents were involved and what their permissions were (governance again). KYA is not just about tracking agents. It's about connecting the governance layer, the operational layer, and the responsibility layer into something that holds up when a regulator or a dispute hits.

Revolut is building AIR for 13 million users. Fintechs across Europe are plugging in Sphinx, Eloquent AI, Socratix, Variance, Diligent AI for their compliance ops. CASPs are running trading bots and arbitrage agents through TRM Labs and Chainalysis. The front is moving fast.

MiCA has already issued €540M+ in fines. The EU AI Act's requirements for high-risk systems are written. The ACPR has a standing directorate and an algorithm auditing methodology in progress.

The agents are deployed. The governance, the operational workflows, and the responsibility framework behind them are not. That's the KYA problem. It's not one gap. It's three, and they're interlocked.

Ready to build the perfect backoffice for your Operations?

Get a demo and discover why fast-scaling businesses like Qonto or Empathy build their internal tools with us.


The ops orchestration layer for fintechs.

Copyright © 2026 Forest Admin