MEPs adopt two AI reports paving the way for Commission’s proposals

On 1 October, MEPs from the Legal Affairs Committee (JURI) of the European Parliament approved two draft reports on Artificial Intelligence:

- draft report with recommendations to the Commission on a Framework of ethical aspects of artificial intelligence, robotics and related technologies (rapporteur: MEP Iban GarcĂ­a del Blanco)

- draft report with recommendations to the Commission on a Civil liability regime for artificial intelligence (rapporteur: MEP Axel Voss)

Each report contains a proposal for a regulation which should largely inspire the European Commission for its future proposals on AI. The Commission is expected to release the legislative follow-up to its White Paper at the beginning of next year (Q1 2021).

Eurosmart had previously met the two rapporteurs of the draft reports to present its position on AI. Alignment of definitions was one of the points raised during the meetings. Initially, each report contained different definitions, including distinct definitions of what an AI system is. In the adopted versions of the two draft reports, the definition of AI is fully aligned as follows:

“‘artificial intelligence’ means a system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by, inter alia, collecting and processing data, analysing and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals.”

Iban García del Blanco’s proposed regulation includes ethical requirements for AI systems, including a requirement of cyber-resilience. It foresees a mandatory assessment of compliance with these requirements for high-risk systems, leading to the issuance of a European certificate.

Axel Voss’ proposed regulation establishes a strict liability for operators of high-risk AI systems.

Please find below a summary of these reports.

 

Mandatory requirements for high-risk systems (Iban GarcĂ­a del Blanco’s report)

The proposed regulation lists ethical requirements that would apply to AI, robotics and related technologies developed, deployed or used in the EU. These requirements are mandatory for high-risk technologies; they include:

- a requirement for high-risk AI to be developed, deployed and used in a resilient manner, ensuring an adequate level of security by adhering to a minimum cybersecurity baseline proportionate to the identified risk, and preventing any technical vulnerabilities from being exploited for malicious or unlawful purposes

- human oversight, transparency, accountability, etc.

In addition, the use of biometric data for identification purposes in public areas should be limited to public authorities, for purposes of public interest and in specific locations.

High-risk AI systems shall undergo an assessment of compliance by national authorities, followed by ongoing monitoring. If the outcome of the assessment is positive, a European certificate of ethical compliance is issued.

Companies may also seek certification of ethical compliance for AI systems that are not considered high-risk.

The adopted version contains a new provision allowing the European Commission to prepare binding guidelines on the assessment methodology to be used by national authorities. The initial version of the text did not contain this provision, which had led Eurosmart to warn, on several occasions during institutional meetings, that the regulation might result in fragmentation. The new provision appears to address this concern.

Interestingly, the introduction of the report proposes to “create a centre of expertise bringing together academia, research, industry and individual experts to foster exchange of knowledge and technical expertise and to facilitate collaboration throughout the Union and beyond”. This echoes Eurosmart’s project of advocating for an AI Competence Centre.

 

What is a high-risk AI system?

Iban García del Blanco’s proposed regulation states that “AI, robotics and related technologies […] shall be considered high-risk technologies when, given their specific use or purpose, the sector where they are developed, deployed or used and the severity of the possible injury or harm caused, their development, deployment or use entail a significant risk to cause injury or harm that can be expected to occur to individuals or society in breach of fundamental rights and safety rules as laid down in Union law”.

The European Commission shall draw up an exhaustive list of common high-risk technologies and keep it updated.

The following list was published in the Annex to the proposed text.

Exhaustive and cumulative list of high-risk sectors, and of high-risk uses or purposes:

High-risk sectors

· Employment

· Education

· Healthcare

· Transport

· Energy

· Public sector (asylum, migration, border controls, judiciary and social security services)

· Defence and security

· Finance, banking, insurance

High-risk uses or purposes

· Recruitment

· Grading and assessment of students

· Allocation of public funds

· Granting loans

· Trading, brokering, taxation, etc.

· Medical treatments and procedures

· Electoral processes and political campaigns

· Public sector decisions that have a significant and direct impact on the rights and obligations of natural or legal persons

· Automated driving

· Traffic management

· Autonomous military systems

· Energy production and distribution

· Waste management

· Emissions control

 

Strict liability for operators of high-risk AI systems (Axel Voss’ report)

The proposed regulation on civil liability establishes a strict liability regime for operators of high-risk AI systems in case such systems cause harm or damage. Strict liability means that a party can be held liable even in the absence of fault. This regime is suited to situations where it is difficult for the victim to prove fault.

Operators within the meaning of the proposed regulation are front-end and back-end operators, insofar as back-end operators are not already covered by the Product Liability Directive.

The proposed regulation covers harm to life, health, physical integrity and property. As a novelty, it also covers significant immaterial harm resulting in a verifiable economic loss.

The proposed text establishes maximum amounts to be paid to the affected person:

- two million euros in the event of death or of harm caused to health or physical integrity

- one million euros in the event of significant immaterial harm that results in a verifiable economic loss, or of damage to property.

Operators of high-risk AI systems shall take out liability insurance to cover their liability risks.

After indemnifying the affected person, the operator may take action for redress against the producer of the defective product, pursuant to the Product Liability Directive. In the introduction of the report, MEPs further note that the Product Liability Directive should be revised to take technological developments into account. They also take the stance that the Product Liability Directive should become a regulation.

For AI systems that are not considered high-risk, the operator remains subject to the fault-based liability regime.

- Compromise amendments: report by Iban GarcĂ­a del Blanco
- Compromise amendments: report by Axel Voss

Next steps:

Week of 19 October: final approval by the European Parliament during the plenary session

Q1 2021: legislative initiative(s) from the European Commission

 

If you have any questions on this issue, please do not hesitate to contact Camille Dornier - Policy Manager: camille.dornier@eurosmart.com

Eurosmart
Rue de la Science 14B - 1040 Brussels BELGIUM
Privacy Policy - EU transparency register #21856815315-64