The rapid advancement of Artificial Intelligence (AI) technology brings both significant benefits and complex challenges, prompting new regulatory measures in the EU and the UK. These regions have adopted distinct approaches to AI governance, reflecting their unique priorities and regulatory philosophies. AMR CyberSecurity’s Managing Consultant, Jordan Orlebar, outlines these differing strategies and their implications.
The EU Approach: Prescriptive and Structured
The EU’s AI Act represents a pioneering attempt to create a comprehensive legal framework for AI, categorising AI systems into four risk levels: Unacceptable, High, Limited, and Minimal. This tiered approach aims to safeguard fundamental human rights and societal values by imposing strict compliance requirements on high-risk applications. Overseen by the European AI Office, the Act entered into force on 1 August 2024, with most provisions applicable from August 2026.
For high-risk AI systems, the Act mandates thorough risk assessments, robust data-governance protocols, transparency in AI operations and adherence to technical and ethical standards. High-risk systems must also be registered in an EU database, with significant fines for non-compliance: for the most serious breaches, up to €35 million or 7% of a company’s annual global turnover, whichever is higher.
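To make the fine arithmetic concrete, the short Python sketch below shows how such a ceiling is determined: the applicable maximum is the higher of the fixed cap and the turnover percentage. This is purely illustrative; max_fine is a hypothetical helper for this article, not part of any official tooling.

```python
def max_fine(fixed_cap: float, turnover_share: float, annual_turnover: float) -> float:
    """Hypothetical helper: the applicable ceiling is the higher of a
    fixed cap and a share of annual global turnover."""
    return max(fixed_cap, turnover_share * annual_turnover)

# EU AI Act ceiling for the most serious breaches: EUR 35m or 7% of
# global annual turnover, whichever is higher. For a company turning
# over EUR 1bn, the turnover-based figure dominates:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 (EUR 70m)
```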
Alongside the Act, the European Commission’s AI Pact is a voluntary initiative encouraging AI developers to comply with key obligations ahead of full implementation. This proactive measure aims to facilitate a smoother transition to the new regulatory environment.
EU AI Act Risk Levels
- Unacceptable Risk: Technologies like real-time facial recognition in public spaces, social scoring systems, and subliminal manipulation are prohibited.
- High Risk: Includes critical infrastructure, employment management, law enforcement, and democratic processes, requiring stringent regulatory oversight.
- Limited Risk: Involves transparency obligations, such as informing users when they interact with AI, helping combat issues like deepfakes.
- Minimal Risk: Covers most AI use cases, such as video games and spam filters, with minimal regulatory requirements.
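As a rough illustration of this taxonomy, the sketch below maps example systems from the list above to the four tiers. It is not a legal classification tool; tier assignment under the Act depends on its annexes and case-by-case legal analysis.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict conformity requirements and EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative examples only; real classification requires legal analysis.
EXAMPLES = {
    "social scoring system": RiskLevel.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskLevel.HIGH,
    "customer-facing chatbot": RiskLevel.LIMITED,
    "email spam filter": RiskLevel.MINIMAL,
}

for system, level in EXAMPLES.items():
    print(f"{system}: {level.name} ({level.value})")
```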
The UK Approach: Flexible and Innovation-Friendly
By contrast, the UK has adopted a principles-based, flexible regulatory framework that emphasises innovation while upholding ethical standards and public trust. The UK’s strategy centres on five principles, covering safety, transparency, fairness, accountability and contestability, aiming to support growth while addressing the ethical impacts of AI technologies.
Developers in the UK are encouraged to build AI systems aligned with these principles, with guidance and frameworks provided to support compliance. Rather than establishing a single AI regulator, the UK relies on existing sector regulators, supported by central government functions, to apply the principles within their remits, ensuring AI systems are safe, transparent and accountable.
UK Regulatory Principles
- Safety, Security, and Robustness: AI systems should be reliable and safe throughout their lifecycle, with continuous risk assessment and management.
- Transparency and Explainability: AI systems must be transparent and explainable, with details on their purpose, usage, and decision-making processes accessible to relevant parties.
- Fairness: AI systems should avoid unfair discrimination and uphold legal rights, with guidelines and standards developed to ensure fairness.
- Accountability and Governance: Effective governance measures must be in place to oversee AI systems, ensuring clear accountability and adherence to regulatory standards.
- Contestability and Redress: Users and impacted parties should have mechanisms to contest harmful AI decisions, with regulators ensuring appropriate redress routes are available.
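For teams mapping their own governance processes onto these principles, a lightweight self-assessment might look like the sketch below. UK_PRINCIPLES and unmet_principles are hypothetical names chosen for illustration, not a regulator-mandated format.

```python
UK_PRINCIPLES = [
    "safety, security and robustness",
    "transparency and explainability",
    "fairness",
    "accountability and governance",
    "contestability and redress",
]

def unmet_principles(assessment: dict[str, bool]) -> list[str]:
    """Return the principles a self-assessment has not yet evidenced.
    Hypothetical internal-governance helper, for illustration only."""
    return [p for p in UK_PRINCIPLES if not assessment.get(p, False)]

review = {principle: True for principle in UK_PRINCIPLES}
review["contestability and redress"] = False  # e.g. no appeal route defined yet
print(unmet_principles(review))  # ['contestability and redress']
```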
Penalties for non-compliance in the UK can still be substantial: because there is no AI-specific statute, enforcement flows through existing law, and fines under the UK GDPR, for example, can reach £17.5 million or 4% of annual global turnover, whichever is higher, for serious breaches.
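The same ceiling arithmetic from the earlier illustrative max_fine sketch applies here: for a firm with £500 million annual turnover, max_fine(17_500_000, 0.04, 500_000_000) gives £20 million, since 4% of turnover exceeds the fixed cap.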
The EU and UK’s divergent regulatory approaches to AI highlight the balance between safeguarding public interest and fostering technological innovation. The EU’s structured and prescriptive framework provides clear guidelines and strict compliance requirements, particularly for high-risk applications. In contrast, the UK’s flexible, principles-based approach aims to promote innovation while ensuring ethical AI development and public trust.
For developers, this means adapting to a well-defined regulatory framework in the EU and a more flexible, principles-based regime in the UK. Both approaches, however, reflect a commitment to addressing the ethical, privacy and security challenges posed by AI, ensuring its benefits are realised responsibly and sustainably.