Artificial Intelligence (AI) is revolutionising our world, ushering in unprecedented opportunities for innovation, growth, and societal progress. At the same time, it raises significant concerns about the protection of individual rights, particularly the right to privacy. How can we ensure that AI is developed and deployed in line with our core values and established rules, without compromising our dignity, autonomy, and security?
The European Union (EU) is at the forefront of addressing these fundamental questions through its groundbreaking legislation, the Artificial Intelligence Act (AIA). The AIA seeks to establish a legal framework that balances the advantages and risks associated with AI, fostering trust and accountability within the AI ecosystem. As the world’s first comprehensive and horizontal regulation of AI, the AIA has the potential to influence the global trajectory of AI development and governance significantly.
The Key Features of the AIA
The AIA is built upon four primary pillars:
A Risk-Based Approach
The AIA categorises AI systems into four risk tiers: unacceptable, high, limited, and minimal. AI systems posing unacceptable risk, such as real-time remote biometric identification in public spaces or biometric categorisation based on sensitive characteristics like gender, race, or religion, are prohibited outright. High-risk AI systems employed in critical sectors such as healthcare, education, law enforcement, or recruitment are subject to stringent requirements concerning data quality, transparency, human oversight, and accountability. Limited-risk AI systems, such as chatbots or virtual assistants, are subject to transparency obligations: they must inform users that they are interacting with an AI system and allow them to opt out. Minimal-risk AI systems, such as video games or spam filters, face no regulatory constraints under the AIA, although providers are encouraged to follow voluntary codes of conduct and best practices.
A Conformity Assessment
The AIA introduces a process for verifying the compliance of high-risk AI systems with legal requirements before they can be introduced to the market or put into service. The conformity assessment can be conducted by the AI system provider or by a third-party notified body, depending on the level of risk and complexity of the AI system. Additionally, the AIA establishes a European database for high-risk AI systems, requiring providers to register their AI systems and provide relevant information and documentation.
A Market Surveillance and Enforcement Mechanism
The AIA designates competent authorities at the national level responsible for monitoring and enforcing compliance with its requirements for AI systems. These authorities are empowered to conduct inspections, request information, issue warnings, impose corrective measures or sanctions, and order the withdrawal or recall of non-compliant AI systems. Furthermore, the AIA establishes the European Artificial Intelligence Board, composed of representatives of the national competent authorities and the European Data Protection Supervisor, to ensure consistent and harmonised application of the AIA throughout the EU. The AIA also provides mechanisms for cooperation and coordination among national competent authorities, the European Commission, and other relevant stakeholders.
A Support and Innovation Framework
The AIA seeks to promote the development and adoption of trustworthy, human-centric AI within the EU. This includes the creation of regulatory sandboxes: testing and experimentation facilities in which AI providers can assess and validate their systems in a controlled environment. The AIA also encourages the development and adoption of standards, codes of conduct, and certification schemes for AI systems, as well as initiatives promoting education, training, and awareness about AI.
Additionally, the AIA supports the development and use of AI systems that contribute to the public good, such as environmental protection, health, social welfare, and cultural diversity.
Implications of the AIA on the Global AI Landscape
The AIA is a pioneering initiative that could significantly influence the global AI landscape by setting standards and shaping policies. It is expected to create a level playing field for the European AI industry and to give it a competitive advantage through legal certainty, consumer trust, and market access. Moreover, the AIA is likely to inspire and influence other countries and regions, including the US, the UK, China, India, and Japan, as they develop or revise their own AI regulations. The AIA also facilitates international collaboration on AI governance by promoting shared values, principles, and norms for the ethical and responsible use of AI.
Nonetheless, the AIA faces challenges and criticism, both within and outside the EU. These include concerns about the regulation's complexity and feasibility, the balance and proportionality of its provisions, and its alignment and compatibility with existing EU laws and international obligations.
Next Steps for the AIA
The AIA is a draft proposal that must complete the EU legislative process before becoming legally binding. Initially presented by the European Commission in April 2021, it is currently under discussion and negotiation by the European Parliament and the Council of the EU, the two co-legislators.
If adopted by the end of 2023 or early 2024 as anticipated, the AIA will enter into force 20 days after its publication in the Official Journal of the EU and become applicable two years after that date, giving AI providers and users time to adapt and comply with the new rules.
The AIA stands as a visionary regulation that could shape the future of AI within Europe and beyond. It is a dynamic and evolving regulation that can be revised and updated to adapt to changes in AI technology and its societal impacts. Rather than a final answer, the AIA represents the beginning of a dialogue and a journey toward a more ethical and responsible AI landscape.
Compiled by: Jasleen Kaur
Check out: EU AI Policy and Regulation: What to look out for in 2023, by Marianna Drake, Marty Hansen & Lisa Peets
https://www.insideprivacy.com/artificial-intelligence/eu-ai-policy-and-regulation-what-to-look-out-for-in-2023/