Monday, December 11, 2023

"Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI": Text of European Parliament Press Release, 9 December 2023



Pix Credit 'Variety' here


In earnest since 2021, officials within the European Union, and interested stakeholders worldwide, have been debating the parameters, and ultimately the text, of a comprehensive regulatory framework for the exploitation of generative intelligence systems (so-called artificial intelligence). See Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS COM/2021/206 final (Text Explanatory Memorandum) (Annex).

In its 2021 Explanatory Memorandum, the Commission explained that the regulatory framework would be oriented toward the following generalized core objectives:
· ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

· ensure legal certainty to facilitate investment and innovation in AI;

· enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

· facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation. (Memo, supra, at p. 3)
After much discussion, key political stakeholders have now reached agreement on the final form of the framework.
European Union officials have reached a provisional deal on a legal framework for the development and use of artificial intelligence within Europe, calling for greater transparency as well as setting parameters for high-risk AI. The political agreement, which has yet to be detailed and came together following 37 hours of debates within the European Commission, highlights what is prohibited when it comes to AI, key requirements for using high-risk AI and penalties. (Variety; here)
In its Press Release, the European Parliament stressed the boundaries of AI exploitation:
--biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
--untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
--emotion recognition in the workplace and educational institutions;
--social scoring based on social behaviour or personal characteristics;
--AI systems that manipulate human behaviour to circumvent their free will;
--AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation). (Press Release)
Some of these normative taboos are meant to protect the current state of EU human rights and constitutional principles, especially, for example, the restrictions on biometric information that crosses ideological boundaries on politically sensitive classes of data. Others serve to reject contemporary Marxist-Leninist approaches to the nudging of behaviors through so-called social scoring mechanisms, including emotion recognition (e.g., Chinese "social credit") systems. These are going to be much harder to actually implement given the insatiable appetite in the liberal democratic camp to use a variety of semiotically powerful modalities to nudge behaviors. Indeed, one already sees in the narrowness of the restrictions on emotion recognition technologies (limited to workplaces and schools) a concession to its ubiquity in the marketplace and in the marketplace of ideas (including those of interest to the state). Another limitation appears to restrict the prohibition on European social credit systems ONLY to "scoring based on social behaviour or personal characteristics." That produces a tension, for example, between the openness to emotion recognition and the prohibition against "AI systems that manipulate human behaviour to circumvent their free will." On the one hand, it is clear (no one reads Nietzsche anymore on the difficulty of contemporary ideologies of free will) that environmental, social, economic, cultural, political and other realities, all contextually framed, already circumscribe (and sometimes make virtually impossible) the exercise of free will in significant ways. On the other hand, it raises the question of what in the field of social relations does NOT manipulate free will. There is nothing in the emerging regulation that suggests that AI may not be used to manage the circumstances against which free will can be curated for individuals or classes of humans identifiable by certain characteristics or predilections.
Managing circumstances and the conditions around which "will" is exercised can be a quite powerful field for AI application--from traffic patterns to just transitions. The battles over the meaning and application of "manipulation" will consume much human capital.

All of this provides much fuel for anticipation of the official text, and its journey towards adoption and then transposition into Member State legal orders. The full text of the EU Parliament Press Release follows.

 
  • Safeguards agreed on general purpose artificial intelligence  
  • Limitation for the use of biometric identification systems by law enforcement  
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities 
  • Right of consumers to launch complaints and receive meaningful explanations  
  • Fines ranging from 35 million euro or 7% of global turnover to 7.5 million or 1.5% of turnover  

MEPs reached a political deal with the Council on a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, while businesses can thrive and expand.

On Friday, Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.


Banned applications

Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Law enforcement exemptions

Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crime. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.

“Real-time” RBI would comply with strict conditions and its use would be limited in time and location, for the purposes of:

  • targeted searches of victims (abduction, trafficking, sexual exploitation),
  • prevention of a specific and present terrorist threat, or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

Obligations for high-risk systems

For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour, are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.

Guardrails for general artificial intelligence systems

To account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.


Measures to support innovation and SMEs

MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world-testing, established by national authorities to develop and train innovative AI before placement on the market.

Sanctions and entry into force

Non-compliance with the rules can lead to fines ranging from 35 million euro or 7% of global turnover to 7.5 million or 1.5% of turnover, depending on the infringement and size of the company.

Quotes

Following the deal, co-rapporteur Brando Benifei (S&D, Italy) said: “It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise - ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology. Correct implementation will be key - the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models”.


Co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future”.


Press conference


Lead MEPs Brando Benifei (S&D, Italy) and Dragos Tudorache (Renew, Romania), the Secretary of State for digitalisation and artificial intelligence Carme Artigas, and Commissioner Thierry Breton held a joint press conference after the negotiations. The statement of Mr Benifei is available here and Mr Tudorache's here. More extracts are available here.


Next steps


The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting.

