Everyone in your organisation is already using AI; here’s what you can do to secure it

AI is moving faster than ever, but without clear ownership and baked-in security, even the best models can create more risk than value. Here’s how to avoid the pitfalls and why it’s about more than just rolling with it

Silvia Lehnis, Consulting Director, Data and AI at UDBS Group

Just like the recent Oasis reunion, the question around adopting AI isn’t if, it’s when. But, just like when the notoriously volatile Mancunian brothers are together, AI gets complicated when paired with other technology, especially security.

Meanwhile, the pressure on CTOs, CISOs and CEOs in public services and financial services to cut costs and deliver results is intense. The real question isn’t whether to use AI; it’s how to do it without risking sensitive data, reputations, or service outcomes.

For security and tech leaders, AI is a two-pronged risk: commercial off-the-shelf (COTS) tools like ChatGPT, which are more than likely already being used unofficially by half of your organisation, and custom-built AI apps and models that deliver more targeted business transformation. Both introduce significant challenges, and both require security to be baked in from the start, without hampering experimentation or innovation.

AI security is a multi-team task

AI isn’t plug-and-play. It touches data, infrastructure, architecture, compliance and user behaviour, which means responsibility for securing it shouldn’t sit in just one team. But in reality, this broad reach often means that no one owns it, or ownership is fragmented, raising the risk of security gaps. 

The most successful organisations start by establishing lines of responsibility. It often works best when product or service owners take the lead because they understand the departmental problems that AI can help to solve. They should then be supported by close relationships with security, risk, and compliance teams, which means faster AI adoption without losing sight of due diligence. 

Start the assurance and compliance conversations at the service design phase, before any technical components are discussed, and then work through the architecture. That way, whether it’s secure by design or well-architected frameworks for the cloud, you’ll be incorporating security elements right from the beginning.

Data integrity 

Before deploying a model, you need full visibility of the training data going in. There are two types of data to consider: the data used to train the foundation model itself, and the organisation’s own unique data that augments it. Augmenting a foundation model is how most organisations will bring their data to AI – few have the bandwidth or resources to train their own model from scratch.

That means understanding what your unique data is, how it’s been prepared, and whether it poses any privacy or compliance risks. Has it been cleaned? Are you confident in how it’s stored and accessed? Is any of it personally identifiable, and if so, have you considered GDPR obligations, user authentication, and access control?
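
As a rough illustration of what that first pass can look like, the sketch below scans documents for likely personal data before they go anywhere near a model. It is a minimal example only: the patterns are illustrative, and a real deployment would use a dedicated PII-detection tool rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; a production system would use a dedicated
# PII-detection tool rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # rough NI format
}

def scan_for_pii(records):
    """Return (record_index, pii_type) pairs for every suspected hit."""
    hits = []
    for i, text in enumerate(records):
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append((i, label))
    return hits

documents = [
    "Contact Jane at jane.doe@example.com about the renewal",
    "Quarterly figures attached, nothing personal included",
]
for index, pii_type in scan_for_pii(documents):
    print(f"Record {index}: possible {pii_type}, review before ingestion")
```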

Privacy and security aren’t just about where the data sits; they’re about understanding its boundaries and ensuring those aren’t crossed, especially in multi-tenant cloud environments. 

Treating AI like any other software system is a good starting point: apply versioning, scanning, change control, and monitoring and alerting, alongside AI-specific considerations such as data bias detection.
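
Bias detection can start simpler than it sounds. One common measure is the demographic parity gap: the difference in positive-outcome rates between groups. The sketch below uses made-up decisions and a hypothetical protected attribute; a production pipeline would use a dedicated fairness library and statistically meaningful sample sizes.

```python
def demographic_parity_gap(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rates across
    groups; a value near 0 suggests groups are treated similarly."""
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positive, total = counts.get(group, (0, 0))
        counts[group] = (positive + outcome, total + 1)
    rates = {g: positive / total for g, (positive, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approved) and a protected attribute.
gap, rates = demographic_parity_gap(
    outcomes=[1, 1, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "B", "B", "B", "B", "A"],
)
print(rates, f"gap={gap:.2f}")  # flag for human review if the gap is large
```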

The sheer speed at which AI evolves makes continuous monitoring a must. Without that visibility, even the best models can expose you to regulatory breaches or reputational damage.

Monitor and control 

One reason AI changes the game is that it’s raising the capabilities on both sides of the security fence. Whether it’s a nation-state attack or a bedroom hacker, nefarious characters now have access to tools that make phishing attempts more convincing, social engineering more targeted, and malware and ransomware development faster. 

Voice cloning and deepfakes aren’t theoretical risks anymore; they’re real-world issues we’ve already encountered. And, as a business, we’ve put systems and procedures in place to help counteract these risks.

Organisations, therefore, have to move faster and more intelligently than hackers with the same tools. Security operations centres (SOCs) increasingly rely on AI tools to analyse threats, automate triage, and handle the scale of today’s data and AI environments. We are increasingly operating in an AI vs AI world. 
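
At its simplest, automated triage means scoring alerts so analysts see the riskiest first. The toy sketch below is rule-based and uses hypothetical alert fields; real SOC tooling replaces the hand-written rules with models trained on historical incident data, but the shape of the pipeline (score, rank, route) is the same.

```python
# A toy triage scorer with hypothetical alert fields; real SOC tools use
# trained models, but the score-rank-route pipeline shape is the same.
def triage_score(alert):
    score = {"low": 1, "medium": 3, "high": 5}.get(alert["severity"], 0)
    if alert.get("asset_critical"):
        score += 3
    if alert.get("seen_before"):
        score -= 1  # known-benign patterns rank lower than novel ones
    return score

alerts = [
    {"id": 101, "severity": "high", "asset_critical": True, "seen_before": False},
    {"id": 102, "severity": "low", "asset_critical": False, "seen_before": True},
    {"id": 103, "severity": "medium", "asset_critical": True, "seen_before": False},
]
for alert in sorted(alerts, key=triage_score, reverse=True):
    route = "analyst queue" if triage_score(alert) >= 5 else "monitor/auto-close"
    print(alert["id"], triage_score(alert), "->", route)
```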

But this raises a new set of questions. How do you verify AI decisions? How do you know your models aren’t introducing bias, errors, or new vulnerabilities? When it’s a case of AI versus AI, it’s the quality of your model, monitoring, governance, security policies and human-in-the-loop procedures that will determine which side wins.

Hidden risks

There are other risks, too. Inference attacks, while not yet common, do occur: hackers trick the AI into revealing some of its training data, which may contain personally identifiable information.
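
One pragmatic mitigation is an output filter paired with “canary” strings: plant known markers in your training or augmentation data and withhold any response that reproduces one verbatim, alongside basic redaction of anything that looks like personal data. A minimal sketch, with hypothetical canary values:

```python
import re

# A last-line-of-defence filter on model output. The canary values are
# hypothetical markers planted in the augmentation data; seeing one
# verbatim in a response suggests the model is leaking its sources.
CANARIES = {"zx-canary-7731", "zx-canary-0925"}
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def filter_response(text):
    if any(canary in text for canary in CANARIES):
        return "[response withheld: possible training-data leakage]"
    return EMAIL.sub("[redacted email]", text)  # basic PII redaction

print(filter_response("You can reach her at jane.doe@example.com"))
print(filter_response("Internal memo zx-canary-7731 states that ..."))
```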

Also, in most cases, organisations aren’t building LLMs from scratch; they’re integrating third-party tools, as well as using publicly available GenAI such as GPT-4 or Amazon Nova. These bring their own risks. Even with business accounts, some GenAI tools operate in multi-tenant architectures. Can you trust that the isolation works? What happens when you stop using the service, or the model is updated? Can your data be fully deleted? It’s worth reading the small print to find out where you stand in these scenarios.

And these aren’t hypothetical concerns. We’ve seen organisations realise too late that employees had already uploaded sensitive information into GenAI tools using personal accounts. Once that data is out, there’s no getting it back. 

That’s why technical controls like Data Loss Prevention (DLP) and encryption remain essential. User education matters, but you can’t rely on it alone to prevent AI data breaches.
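
To make the DLP point concrete, here is a minimal sketch of an outbound gate that refuses to forward prompts containing a plausible payment-card number to a GenAI API. The Luhn checksum is the standard card-number check; everything else is illustrative, and a real DLP product covers far more data types and channels.

```python
import re

def luhn_valid(digits):
    """Standard Luhn checksum used to validate payment-card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-19 digits, optionally separated by spaces or hyphens.
CARD_CANDIDATE = re.compile(r"(?:\d[ -]?){13,19}")

def allow_prompt(prompt):
    for match in CARD_CANDIDATE.finditer(prompt):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return False  # block: looks like a live card number
    return True

print(allow_prompt("Summarise this supplier contract for me"))  # True
print(allow_prompt("Card 4111 1111 1111 1111, expiry 12/26"))   # False
```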

Assign ownership of AI 

While AI’s capabilities continue to generate new levels of hyperbole, the organisations making safe progress are those that embed AI ownership across the business. Assigning AI champions within each business unit makes a tangible difference and makes securing AI less overwhelming. These champions understand the strategy, the challenges, and where AI can make a difference or pose a risk.

For example, in our business transformation team, several members work within the internal AI community to explore practical applications, understand the art of the possible, and rethink how consultancy roles might evolve. They map AI’s potential impact across their services in a colour-coded heatmap, highlighting where disruption or risk is most likely, whether through entirely new services or shifts to the operating model.

Crucially, this is fed into quarterly governance sessions with enterprise architects and risk management teams, so these security and business risks can be closely monitored. This keeps architecture cohesive, because without these guardrails, the number of tools, ideas and third parties becomes hard to control and manage. 

Start slowly and use guardrails

The good news is that there are already frameworks and standards in place to help, which organisations can use to prove the credibility of their systems and procedures. The same standards that underpin good cybersecurity, such as ISO 27001, still apply. Newer frameworks, like the NIST AI Risk Management Framework or ISO 42001, can be layered on top to guide AI-specific governance.

As with early email and messenger, blocking access to GenAI or stalling on LLM development isn’t the answer. It’s about encouraging people to use AI safely and confidently – experimentation is important. But managing this is a delicate job, and guardrails should be in place to help control and monitor use.

End-user education and embedding security from the outset, as well as strong data governance, will help overcome many of the risks. But it’s also about nurturing a culture where AI isn’t feared (nor revered), but where people also understand what’s at stake if they get it wrong.

So are your systems, data, processes, and policies secure enough for AI? The answer, we suspect, for most businesses would be Definitely, Maybe.