By Danielle Barbour, Director of Product Marketing, Kiteworks
A London investment analyst uploads thousands of client portfolios to ChatGPT to generate market insights. Whilst this would violate UK data protection laws, Kiteworks’ recent research shows that 83% of organisations lack the technical controls to stop it from happening.
According to the findings, companies worldwide overwhelmingly rely on ineffective measures to prevent employees from uploading confidential data to AI tools. This matters particularly for UK financial organisations: Britain’s “pro-innovation” approach to AI regulation is facing its first major test just as organisations haemorrhage sensitive data into AI systems and multiple regulators prepare enforcement frameworks.
The permanence of the risk cannot be overstated. Once data enters an AI model, it becomes embedded for good, accessible to competitors, threat actors, or anyone who knows how to extract it. For financial sector organisations navigating a complex web of sectoral regulations while trying to maintain competitive advantage through AI adoption, this represents an existential compliance threat.
Complex compliance requirements
The AI Regulation White Paper authored by the UK Department for Science, Innovation and Technology established five key principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Interestingly, the UK has deliberately chosen a path distinct from the EU’s comprehensive AI Act, opting instead for a decentralised, principles-based approach designed to foster innovation while maintaining safeguards. Rather than creating new legislation, the UK distributes AI oversight across existing regulators such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA).
The UK’s sectoral regulatory approach means organisations face multiple, overlapping compliance requirements, each of which stands to be breached if those organisations follow global patterns of uncontrolled AI usage.
Increased regulatory attention
Financial services companies, however, face particularly severe challenges. The FCA’s AI Live Testing initiative and regulatory sandboxes assume controlled AI deployment with proper risk assessments. Yet global data shows that 26% of financial organisations report over 30% of AI-uploaded data is private information. That means customer accounts, transaction histories, and credit assessments are all flowing freely into uncontrolled AI systems. Something needs to change.
It is not an issue that is going to go away. Recent developments signal increasing regulatory attention. The government’s AI Opportunities Action Plan in January 2025 aimed to boost AI adoption, while February’s rebranding of the AI Safety Institute to the AI Security Institute reflects growing security concerns. The UK’s signing of the Council of Europe AI Convention in September 2024 added international legal obligations to the mix.
Then there is UK GDPR, where penalties can reach £17.5 million or 4% of global annual turnover – whichever is higher. Beyond financial penalties, executives face potential criminal liability for certain breaches.
The problem with speed over security
The UK’s voluntary, principles-based approach assumes organisations will self-regulate effectively. Global data proves this assumption wrong. As many as 70% of organisations rely on human-dependent controls like training sessions or warning emails, while 13% have no AI data policies whatsoever.
The voluntary AI Code of Practice for Cybersecurity establishes 13 principles for secure AI systems. Yet, without mandatory compliance or technical enforcement, it risks being ignored if UK organisations follow global patterns.
Technical debt compounds the problem. Varonis’ research found organisations average 15,000 ghost users – stale accounts retaining system access. When credentials leak through AI exposure, the median remediation time stretches to 94 days. The innovation-first philosophy has privileged speed over security, creating an environment where compliance is effectively impossible without fundamental changes.
Technical controls needed
Financial services organisations need automated controls that enforce compliance without relying on human discretion. AI data gateways provide this technical enforcement layer, which the UK’s voluntary framework lacks.
Such systems work by intercepting data flows to both sanctioned and unsanctioned AI services. They perform real-time content inspection against UK GDPR requirements, identifying special category data, personal information, and confidential business data. When violations are detected, gateways can block transfers, redact sensitive content, or require additional authorisation.
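As a rough sketch of how that inspection step might behave (the detection patterns, names, and decision rules below are illustrative assumptions only, not a description of any particular product or a complete UK GDPR control), a gateway’s core policy check could look something like this:

```python
# Illustrative sketch only: a minimal, hypothetical policy check of the kind an
# AI data gateway might apply to an outbound upload before it reaches an AI service.
import re
from dataclasses import dataclass

# Naive example patterns for personal data (real gateways use far richer detection).
PATTERNS = {
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

@dataclass
class Decision:
    action: str          # "allow", "redact", or "block"
    findings: list[str]  # which detectors fired
    forwarded_text: str  # payload actually passed on to the AI service

def inspect_upload(text: str, destination_sanctioned: bool) -> Decision:
    findings = [name for name, rx in PATTERNS.items() if rx.search(text)]
    if not findings:
        return Decision("allow", findings, text)
    if not destination_sanctioned:
        # Sensitive data bound for an unsanctioned service: block the transfer outright.
        return Decision("block", findings, "")
    # Sanctioned service: forward a redacted copy instead of the raw content.
    redacted = text
    for name in findings:
        redacted = PATTERNS[name].sub(f"[REDACTED:{name}]", redacted)
    return Decision("redact", findings, redacted)

if __name__ == "__main__":
    sample = "Client AB123456C asked about card 4111 1111 1111 1111."
    print(inspect_upload(sample, destination_sanctioned=True))
```

In practice the same decision logic can also trigger the third option described above, routing the request for additional authorisation rather than blocking or redacting it.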
Critically, AI data gateways create comprehensive audit trails satisfying ICO and FCA requirements for accountability. They document what data was shared, by whom, with which AI services, and what controls were applied. This evidence becomes essential for demonstrating compliance to sectoral regulators.
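A minimal sketch of what one such audit record might contain follows; the field names are illustrative assumptions rather than any regulator-mandated schema, but they capture the who, what, where, and which-control evidence described above:

```python
# Hypothetical shape of a gateway audit record, showing the kind of evidence
# (who shared data, what was detected, which AI service, which control applied)
# a reviewer might expect. Field names are illustrative assumptions.
import json
import hashlib
from datetime import datetime, timezone

def audit_record(user: str, ai_service: str, findings: list[str],
                 action: str, payload: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                  # who shared the data
        "ai_service": ai_service,      # which AI service it was sent to
        "data_categories": findings,   # what kinds of data were detected
        "control_applied": action,     # allow / redact / block
        # Store a hash rather than the content itself, so the audit log does not
        # become another copy of the sensitive data it is meant to govern.
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
    }
    return json.dumps(record)

# Example: append one line per intercepted transfer to an append-only log file.
with open("ai_gateway_audit.jsonl", "a") as log:
    log.write(audit_record("analyst@example.co.uk", "chatgpt",
                           ["payment_card"], "redact", "card 4111 ...") + "\n")
```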
An action plan
The window for voluntary compliance is closing. Now is the time for financial sector organisations to implement automated controls that add a further layer of security. The UK’s flexible approach to AI regulation should not mean flexible compliance. Without technical controls, organisations could face impossible compliance requirements and inevitable enforcement action. The choice is simple: implement protective controls now, or explain failures to regulators later.