The European Union has taken a significant step in shaping the future of artificial intelligence (AI) regulation with the release of its First Draft General-Purpose AI Code of Practice. This landmark document, developed through a collaborative effort among industry leaders, academics, and civil society, is poised to set a global benchmark for responsible AI governance.
A Collaborative Approach to AI Governance
The draft, developed by four specialised Working Groups, addresses critical aspects of AI governance and systemic risk mitigation. Each group focuses on a specific area:
- Transparency and Copyright Rules: Ensuring clear guidelines for copyright compliance, particularly in AI model training.
- Risk Identification and Assessment: Highlighting systemic risks associated with AI and methods to recognise them.
- Technical Risk Mitigation: Establishing frameworks to minimise risks posed by AI technologies.
- Governance Risk Mitigation: Creating strategies to manage risks at an organisational and policy level.
By aligning with existing legislation, such as the Charter of Fundamental Rights of the European Union, and considering international approaches, the draft aims to provide a proportional, future-proof regulatory framework that can adapt to rapid technological advancements.
Key Objectives and Systemic Risks
The Code of Practice sets out clear goals to guide AI development and deployment, including:
- Defining compliance methods for general-purpose AI model providers.
- Enhancing understanding across the AI value chain to ensure seamless integration into downstream products.
- Enforcing copyright compliance, including rules governing the use of copyrighted materials in training datasets.
- Identifying, assessing, and mitigating systemic risks associated with AI technologies.
A central feature of the draft is its taxonomy of systemic risks, outlining potential threats such as cyberattacks, biological risks, loss of control over autonomous systems, and large-scale disinformation. The document emphasises the need for ongoing updates to remain relevant as AI technologies evolve.
To address these risks, the draft proposes safety and security frameworks (SSFs): a hierarchy of measures, supported by key performance indicators (KPIs), to guide risk identification, analysis, and mitigation throughout the lifecycle of an AI model.
Mandatory Reporting and Collaboration
The draft highlights the importance of accountability, urging providers to establish mechanisms for identifying and reporting serious incidents involving their AI models. Detailed assessments, corrective actions, and collaboration with independent experts are recommended, particularly for high-risk models.
This proactive stance reflects the EU’s commitment to fostering transparency and safety in AI, ensuring that providers remain accountable for the impacts of their technologies.
A Path to Global Standards
The draft Code forms part of the implementation of the EU's broader AI Act, which entered into force on 1 August 2024. The final version of the Code is due by 1 May 2025, underscoring the EU's commitment to advancing AI regulation that balances innovation with societal protections.
Stakeholders are encouraged to participate in refining the draft, ensuring it reflects diverse perspectives and remains effective in addressing emerging challenges.
While still in its formative stages, the EU’s General-Purpose AI Code of Practice represents a significant step towards creating a comprehensive framework for AI governance. By addressing critical issues such as transparency, risk management, and copyright compliance, the EU is establishing a precedent for ethical and responsible AI development, with implications that could shape global standards.
As the world grapples with the transformative potential of AI, the EU’s initiative signals a path forward—one that prioritises safety, accountability, and the safeguarding of fundamental rights.
Source: https://www.artificialintelligence-news.com/news/using-ai-technologies-for-future-asset-management/