
EU AI Policy and Regulation: What to look out for in 2023


2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to adoption. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, the Parliament and the European Commission (the “Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, the AI Act will be directly applicable across all EU Member States, and its obligations are likely to apply three years after the Act’s entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.

Alongside efforts to finalize the AILD, PLD and AI Act, the EU is working with the United States to develop international standards, tools and repositories for trustworthy AI. The Council of Europe is also advancing its work on developing the first legally binding international instrument on AI.

This blog post will outline the key elements of the Council’s position on the AI Act, noting differences between the Council’s position and the Commission’s initial proposal (for more information about the Commission’s proposal, check out our previous blog post), and consider broader European initiatives on international AI standards and legislation.

The Council’s Position on the AI Act

While the general objectives of the AI Act remain the same (ensuring that AI systems placed on and used in the EU market are safe and respect existing laws on fundamental rights), the Council has introduced amendments to the text of the AI Act as proposed by the Commission. The Council continues to follow a risk-based approach, with a focus on AI systems identified as “high-risk”. The key amendments proposed by the Council include:

  • Definition of an AI system. Narrowing the definition of an AI system to only include systems that are developed through machine learning approaches and logic- and knowledge-based approaches, in order to distinguish AI systems from simpler software. The Commission’s proposal had also included “statistical approaches, Bayesian estimation, search and optimization methods” within its definition of an AI system.
  • Prohibited AI practices. Extending the prohibition on using AI for social scoring to private actors, not just the public authorities listed in the Commission proposal. Additionally, the provision prohibiting the use of AI systems that exploit vulnerabilities of specific groups of persons will also cover persons who are vulnerable due to their social or economic situation.
  • Defining high-risk AI. Adding a horizontal limitation on the ‘high-risk’ classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured by the definition of high-risk AI.
  • Transparency. Increasing the level of transparency required regarding the use of high-risk AI systems. For example, users of an emotion recognition system are obligated to inform natural persons that they are being exposed to such a system.
  • Provisions relating to law enforcement authorities. Introducing wider exceptions for the use of AI systems by law enforcement authorities, for example:
    • the placing on the market, putting into service, or use of AI systems for “national security, defence, and military purposes” is now excluded from the scope of the AI Act;
    • deep fake detection by law enforcement authorities, crime analytics, and verification of the authenticity of travel documents are no longer included in the list of high-risk AI use cases; and
    • while both the Council and the Commission have prohibited the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, the Council has widened the exceptions to this prohibition, allowing law enforcement authorities to use such systems where strictly necessary for law enforcement purposes.
  • General purpose AI. Extending the scope of the Act to include “general purpose AI systems” and proposing that certain requirements for high-risk AI systems would also apply to general purpose AI. The Council tasks the Commission with carrying out an impact assessment and consultation to determine which of the obligations that apply to high-risk AI systems should also apply to general purpose AI. Under the proposal, the Commission will then introduce an implementing act, setting out the requirements applicable to general purpose AI, to come into force 18 months after the AI Act is adopted.
  • Application of the AI Act. Under the Council’s compromise text, the AI Act will apply 36 months following the entry into force of the Regulation (as compared to the shorter 24-month period in the Commission’s proposal).

Europe’s Continued Role in Setting International AI Standards

As one of the first jurisdictions to consider a comprehensive regulation on AI, the EU continues to play a role in shaping international AI standards. For example:

  • Council of Europe (“CoE”) AI Convention. The CoE is currently in the process of developing a draft convention on artificial intelligence, human rights, democracy and the rule of law (the “AI Convention”). In January 2023, national delegations to the CoE met to discuss a draft text of the AI Convention. Once finalized, the AI Convention will be the first legally binding international instrument on AI, and will be open to participation by non-member States.

In August 2022, the Commission published a recommendation for a Council Decision authorizing the Commission to represent the EU and take part in the CoE negotiations on the AI Convention (the “Recommendation”). Further to this, the European Data Protection Supervisor (“EDPS”) released its opinion on the Commission’s Recommendation (see our previous blog post), and the Council proposed a number of amendments to clarify the Commission’s role in the negotiations and the process the Commission should follow during the negotiation.

The main objectives of the Recommendation include ensuring that the AI Convention is (i) consistent with the EU’s values and interests, and (ii) compatible with the AI Act and the proposed AI Liability Directive. For example, the Commission seeks to guarantee that the AI Convention follows a risk-based approach (similar to that set out in the AI Act).

  • The U.S.-EU Joint AI Roadmap. On December 1, 2022, the U.S.-EU Trade and Technology Council (“TTC”) published its joint Roadmap for Trustworthy AI and Risk Management (“Roadmap”). The Roadmap aims to (i) advance shared terminologies and taxonomies by way of a common repository, (ii) share U.S. and EU approaches to AI risk management and trustworthy AI in order to advance collaborative work in international standards bodies related to AI, (iii) establish a shared hub of metrics and methodologies for measuring AI trustworthiness, risk management methods, and related tools, and (iv) develop knowledge-sharing mechanisms to monitor and measure existing and emerging AI risks.

In order to achieve these aims, the U.S.-EU TTC seeks to leverage the parties’ ongoing work on AI, e.g., the EU AI Act and the U.S. Blueprint for an AI Bill of Rights (the “Blueprint”) (for more information on the Blueprint, please take a look at our previous blog post). The Roadmap highlights the similarities between the EU AI Act and the U.S. Blueprint, including the risk-based approach taken in both. However, the Roadmap acknowledges that, in order to align the U.S. and EU risk-based approaches further, it will be necessary to create a shared understanding and consistent application of concepts and terminology related to trustworthy AI. This will then feed into developing international AI standards, tools for trustworthy AI and risk management, and knowledge-sharing of AI risks.

*****

The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act or other tech regulatory matters, we would be happy to assist.

