Guide to the EU Artificial Intelligence Act – Navigating AI Rules

Introduction

As technology keeps changing, so do the rules that shape the future of Artificial Intelligence (AI). The final version is out: the EU Artificial Intelligence Act is a big deal, laying down strict guidelines for AI practices. In this breakdown, we unpack the nitty-gritty of the final text, giving you the lowdown on the key no-gos, what to watch for in high-risk situations, and how to walk the fine line between innovation and playing by the rules, all sorted into easy-to-digest sections. Come along as we decode the path for AI in the EU. #AIRegulation #ComplianceInnovation #EUArtificialIntelligenceAct

1. Prohibited AI Systems

The AI Act explicitly prohibits certain AI practices, including:

  • Use of subliminal techniques or manipulative methods causing significant harm.
  • Exploitation of vulnerabilities due to age, disability, or socio-economic situations.
  • Biometric categorization systems that infer sensitive personal information.
  • AI systems that evaluate or classify individuals or groups in ways that lead to disproportionate treatment.
  • ‘Real-time’ remote biometric identification in public spaces for law enforcement, except for specific, strictly necessary objectives.
  • AI systems that assess the risk of a criminal offence based solely on profiling or personality traits.
  • Creation of facial recognition databases through untargeted scraping.
  • Inference of emotions in workplaces or educational institutions, except for medical or safety reasons.

2. High-Risk AI Systems

The final text introduces criteria to determine if an AI system is high-risk, including its intended purpose, autonomy level, and past instances of harm. Notably, AI systems performing profiling of natural persons are always considered high-risk.

Providers must document their assessment of whether a system is high-risk before placing it on the market. Registration in the EU database and compliance with the Act’s impact assessment requirements are mandatory.
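For teams that want to make this concrete, here is a minimal sketch of what such a documented assessment could look like in code. It is purely illustrative: the field names and the simplified heuristic are our own, not terms or tests from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HighRiskAssessment:
    """Illustrative record of a pre-market high-risk assessment.

    Field names are our own shorthand, not terms defined in the Act.
    """
    system_name: str
    intended_purpose: str
    autonomy_level: str        # e.g. "human-in-the-loop" or "fully automated"
    performs_profiling: bool   # profiling of natural persons => always high-risk
    prior_harm_incidents: int  # past instances of harm feed the assessment
    assessed_on: date

    def is_high_risk(self) -> bool:
        # Simplified heuristic for illustration: profiling alone is decisive
        # under the final text; the other criteria inform a broader judgement.
        return self.performs_profiling or self.prior_harm_incidents > 0


record = HighRiskAssessment(
    system_name="loan-scoring-v2",
    intended_purpose="creditworthiness evaluation",
    autonomy_level="human-in-the-loop",
    performs_profiling=True,
    prior_harm_incidents=0,
    assessed_on=date(2024, 6, 1),
)
assert record.is_high_risk()  # document this outcome before market placement
```

The point of keeping the record structured is that the same object can later feed the EU database registration, rather than living in a one-off document.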

3. General Purpose AI Models (GPAI Models)

A new chapter is dedicated to GPAI models, defined as models that display significant generality and can competently perform a wide range of distinct tasks. The AI Act does not apply to AI models used purely for research, development, and prototyping before they are placed on the market.

Providers must notify the European Commission if their GPAI model presents systemic risk. Specific obligations and compliance measures are outlined, emphasizing cooperation with authorities and adherence to copyright law.

4. Deep Fakes

Deployers of AI systems that generate or manipulate deep fakes must disclose the artificial nature of the content unless authorized by law. Transparency obligations are limited in certain situations, such as artistic or creative works.

Providers must mark the outputs of AI systems in a machine-readable way to indicate artificial generation, with exemptions for standard editing assistance and authorized law enforcement use.
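As a rough idea of what machine-readable marking could look like, here is a minimal sketch that attaches a disclosure record to generated content. The schema is an assumption on our part; the Act does not prescribe one, and real deployments would lean on an established content-provenance standard.

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: bytes, model_id: str) -> dict:
    """Wrap generated content with a machine-readable disclosure.

    Illustrative only: the Act requires machine-readable marking but does
    not prescribe this schema; production systems would use an established
    content-provenance standard instead.
    """
    return {
        "content": content.hex(),
        "disclosure": {
            "ai_generated": True,
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

payload = mark_as_ai_generated(b"<rendered image bytes>", model_id="image-gen-demo")
print(json.dumps(payload["disclosure"], indent=2))
```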

5. Human Oversight

Human oversight measures must align with the AI system’s risks, autonomy level, and context of use. Employers deploying high-risk AI systems must inform workers and their representatives before implementation.
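To illustrate what a proportionate oversight measure might look like in practice, here is a minimal human-in-the-loop sketch. The threshold and reviewer interface are assumptions for the example, not anything the Act specifies.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    score: float                     # model output, e.g. a risk score in [0, 1]
    approved: Optional[bool] = None  # None until reviewed

def oversee(decision: Decision,
            review: Callable[[Decision], bool],
            auto_threshold: float = 0.2) -> Decision:
    """Route consequential outputs to a human reviewer.

    Low-impact outputs pass automatically; everything else blocks on an
    explicit human judgement, so the system never acts unreviewed in the
    band where its risk is highest.
    """
    if decision.score < auto_threshold:
        decision.approved = True              # low impact: no human gate
    else:
        decision.approved = review(decision)  # waits for human judgement
    return decision

# Stub reviewer that rejects anything scoring above 0.8.
result = oversee(Decision("applicant-42", score=0.9), review=lambda d: d.score <= 0.8)
print(result.approved)  # False: the human reviewer overrode the system
```

The design choice here is that the gate sits in the decision path itself, which makes it easier to show regulators that oversight scales with the system’s autonomy and risk.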

6. Codes of Practice

The AI Office plays a crucial role in developing codes of practice to support the application of the AI Act. Codes should cover the relevant obligations, involve a range of stakeholders, and have clear objectives. Compliance is monitored, and approved codes may be granted general validity through implementing acts.

7. Testing AI Systems

Conditions and procedures for testing high-risk AI systems in real-world conditions outside regulatory sandboxes are outlined. Testing plans require approval, ethical and legal safeguards must be followed, and informed consent is crucial. SMEs and start-ups receive priority access to regulatory sandboxes.

8. Third-Party Agreements

Providers of high-risk AI systems must have written agreements with third parties whose tools, services, or components are integrated into those systems. The AI Office may suggest standard contractual terms, and compliance with industry standards is essential.

9. Technical Documentation

SMEs and start-ups may submit technical documentation in a simplified form. Providers of high-risk AI systems must retain specified documents for 10 years after the system is placed on the market.
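As a small illustration of tracking that retention window, here is a sketch of a documentation record that computes its own keep-until date. The 10-year figure comes from the Act; the schema is our own.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

RETENTION_YEARS = 10  # retention period named in the Act; schema is our own

@dataclass
class TechnicalDocumentation:
    system_name: str
    placed_on_market: date
    documents: List[str] = field(default_factory=list)

    @property
    def retain_until(self) -> date:
        # Same calendar day, ten years later (example dates avoid 29 Feb).
        return self.placed_on_market.replace(
            year=self.placed_on_market.year + RETENTION_YEARS
        )

docs = TechnicalDocumentation(
    system_name="loan-scoring-v2",
    placed_on_market=date(2026, 3, 1),
    documents=["risk_management_plan.pdf", "training_data_summary.pdf"],
)
print(docs.retain_until)  # 2036-03-01
```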

10. Fines

Non-compliance with AI Act obligations incurs fines scaled to the severity of the violation, with the highest penalties reserved for prohibited AI practices; the supply of incorrect information to authorities is penalized under its own tier.

11. Timelines

The AI Act applies to prohibited practices six months after entry into force, to GPAI models after 12 months, and to high-risk AI systems after 36 months. Codes of practice must be ready within nine months.
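If you want to pencil these milestones into a calendar, a few lines of date arithmetic do the job. The entry-into-force date below is a placeholder, since the real clock only starts after publication in the Official Journal.

```python
from datetime import date

# Placeholder date: the real clock starts when the Act enters into force
# after publication in the Official Journal.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(start: date, months: int) -> date:
    """Shift a first-of-month date forward by a whole number of months."""
    total = start.month - 1 + months
    return start.replace(year=start.year + total // 12, month=total % 12 + 1)

MILESTONES = {
    "prohibited practices apply": 6,
    "codes of practice ready": 9,
    "GPAI rules apply": 12,
    "high-risk rules apply": 36,
}

for milestone, months in MILESTONES.items():
    print(f"{milestone}: {add_months(ENTRY_INTO_FORCE, months)}")
```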

12. Next Steps for Organizations

Organizations are advised to proactively align their AI systems with the AI Act: evaluate and classify systems, prepare documentation and disclosures, establish human oversight and data governance, budget for compliance costs, and plan strategically against the phased timelines.

Conclusion

The AI Act is a big deal, shaking up the rules and pushing organizations to step up and meet ethical standards. Assessing high-risk AI systems, being open about what your systems do, and planning smartly around the phased timelines are the key moves for navigating these new legal twists and turns.