AI compliance: turning anxiety into advantage

With AI-driven cyberattacks on the rise, organisations face renewed scrutiny over data protection and compliance as regulatory bodies sharpen their focus. Over half of UK businesses have admitted to experiencing data-related issues since the GDPR was introduced seven years ago. Now, AI’s rapid evolution is amplifying those existing concerns, pressuring businesses to innovate around data safety while keeping pace with evolving compliance standards.

These anxieties are understandable, but for companies already demonstrating strong compliance, adapting to AI’s challenges is less daunting than it appears. Rather than starting from scratch, these organisations can build on established practices and prioritise human involvement to stay within regulations. None of this can happen, however, without first understanding the unique risks AI presents to their business model.

Define your company’s concept of risk.

Risk is not one-size-fits-all. Businesses generally understand the broader hazards of using AI, such as bias and hallucination, but their specific concept of risk will fundamentally depend on their industry, business model, and the type and volume of data they process. Size is largely beside the point: any organisation that handles large quantities of personal or sensitive data is inherently at higher risk when using AI.

Defining a business’s specific concept of risk is the first step towards effective AI compliance. For example, a retailer deploying an AI chatbot for customer service faces the risk of hallucinations and inaccurate data being shared, whereas an organisation introducing AI into HR could see the technology exhibit bias or discrimination. No matter the scenario, the nature and severity of the risk are inherently contextual and require tailored consideration.

External regulations and standards can offer broad guidelines, but the responsibility to define risk ultimately sits with individual businesses. Doing so lays the groundwork for appropriate next steps, whether that’s implementing targeted new policies or refreshing existing ones.

Update existing frameworks before creating new ones.

Many companies are rushing to draw up entirely new frameworks for AI compliance; in fact, research has found that 70% of UK businesses have already established, or are developing, dedicated policies and guidelines. Whilst these can be effective, AI’s rapid advancement means a brand-new policy or process can become outdated in a matter of months, or even weeks.

Businesses can work smarter, not harder, by refreshing existing governance, data protection, and security policies and updating their existing tech stacks before investing heavily in entirely new solutions. They should assess where AI is already impacting operations and introduce specific audits and compliance checks within those existing workflows. The same approach applies to upskilling workforces: update existing training with modules that educate employees on AI’s risks and benefits, and regulations like the EU AI Act or GDPR.
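To make this concrete, here is a minimal sketch of what a compliance check embedded in an existing workflow step might look like: a naive pre-submission scrub that redacts likely personal data and writes an audit log entry before a prompt ever reaches an AI provider. Everything here is an assumption for illustration, the patterns, function names, and logging setup included; it is not a reference to any particular product or regulation.

```python
# Illustrative sketch only: a hypothetical compliance check bolted onto an
# existing workflow step, rather than built as a new standalone system.
import re
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_compliance_audit")

# Naive patterns for common personal data; a real deployment would use a
# vetted PII-detection library and rules agreed with legal and compliance.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
}

def redact_pii(text: str) -> str:
    """Replace likely personal data with placeholders and log each redaction."""
    for label, pattern in PII_PATTERNS.items():
        text, count = pattern.subn(f"[REDACTED_{label.upper()}]", text)
        if count:
            audit_log.info("Redacted %d %s value(s) before the AI call", count, label)
    return text

def send_to_model(prompt: str) -> str:
    """An existing workflow step, now wrapped with the compliance check."""
    safe_prompt = redact_pii(prompt)
    # ... the call to the AI provider would go here ...
    return safe_prompt

print(send_to_model("Contact jane@example.com or 07700900123 about the refund."))
```

The toy regular expressions are the least important part; the shape is the point, with the check and its audit trail living inside a workflow that already exists.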

These practices embed AI compliance into existing operations rather than overlaying it, ensuring adherence to regulations without hindering day-to-day business activities. Embedding compliance this way also fosters responsible, ethical AI use while maintaining essential human oversight.

Keeping humans in the loop is non-negotiable.

AI should never be left entirely to its own devices. Responsible companies will ensure human oversight permeates the critical operations and processes where AI is involved. With people cross-checking and reviewing automated outputs, businesses can reap the benefits of AI whilst staying accountable and mitigating potential pitfalls.

There are several critical factors to bear in mind here. Firstly, review teams must possess a diversity of perspectives and experiences to ensure comprehensive quality control and spot a wider range of issues. Secondly, reviewers need to know exactly what to look out for, when to raise an issue, and what the escalation path looks like. They also need a solid, evolving understanding of the company’s defined concept of risk, acceptable AI use cases, and limitations. As the technology develops, so should their training.
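As a hypothetical illustration of this kind of oversight, the sketch below holds back any AI output whose confidence falls under a threshold and places it on a human review queue with a recorded escalation reason; the threshold, names, and queue structure are all invented for the example.

```python
# Illustrative sketch only: low-confidence AI outputs are escalated to a
# human review queue instead of being released automatically.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def escalate(self, output: str, reason: str) -> None:
        """Record the held output and why it needs a human decision."""
        self.items.append({"output": output, "reason": reason})

def handle_ai_output(output: str, confidence: float, queue: ReviewQueue,
                     threshold: float = 0.85) -> str | None:
    """Release confident outputs; send everything else to a reviewer."""
    if confidence < threshold:
        queue.escalate(output, f"confidence {confidence:.2f} below {threshold}")
        return None  # held pending human sign-off
    return output

queue = ReviewQueue()
print(handle_ai_output("Refund approved.", 0.62, queue))  # None: escalated
print(queue.items)
```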

Human input is fundamentally there to measure an AI tool’s effectiveness, reliability, and ethical impact, ensuring AI works for the business, not the other way around. But companies shouldn’t just be thinking internally. AI’s growing prominence means external users are becoming increasingly educated about how their data is used, and they are demanding transparency and control. To maintain compliance, businesses must actively build their users’ trust by providing clear, jargon-free information about data use.

Maintain transparency and user control.

Public attitudes towards AI vary greatly. Research from the UK Government found that 43% of adults believe AI could have a positive impact on society and themselves, but 33% think the opposite. Regardless of individual opinion, every user should have the final say over whether and how AI tools can leverage their data.

Balancing AI’s rapid growth with transparency and control is not just essential, it’s non-negotiable. This doesn’t have to be burdensome, but it absolutely must be ongoing. Businesses can provide periodic opt-out pop-ups, signpost clear policy pages, notify users of updates, or send reminders about their data preferences through emails or in-app notifications. There’s no need to reinvent the wheel; simple, consistent steps can have a big impact, empowering external users to own the decisions around their data and communicating a business’s commitment to transparency, all of which strengthens trust.
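For illustration only, the sketch below shows one way of gating an AI-driven feature on a stored user preference, defaulting to opt-in rather than opt-out; the data model and field names are hypothetical, not a description of any real system.

```python
# Illustrative sketch only: an AI feature runs solely when the user has
# explicitly opted in, and falls back to non-AI behaviour otherwise.
from dataclasses import dataclass

@dataclass
class UserPreferences:
    user_id: str
    allow_ai_processing: bool = False   # opt-in by default, never assumed
    last_reminded: str | None = None    # when the choice was last surfaced

def personalise_with_ai(prefs: UserPreferences, data: dict) -> dict:
    """Run AI-driven personalisation only if the user has said yes."""
    if not prefs.allow_ai_processing:
        # Non-AI fallback; a reminder about data preferences could be queued here.
        return {"personalised": False, "reason": "user has not opted in"}
    return {"personalised": True, "inputs_used": sorted(data)}

prefs = UserPreferences(user_id="u-123")
print(personalise_with_ai(prefs, {"purchase_history": ["order-1", "order-2"]}))
```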

These practices highlight that AI compliance does not need to be an intimidating hurdle. With a tailored plan, simple but effective approaches, and constant human oversight, businesses can confidently adhere to regulations whilst strategically embracing the benefits AI has to offer—and ultimately stay ahead of the curve.

By Sally-Anne Hinfey, PhD, VP Legal at SurveyMonkey