Thursday, March 14, 2024

The Control of the Self and the Autonomous Virtual Collective Self: EU Parliament Approves Artificial Intelligence Act (With Links to High Level Summary)

 

Pix Credit EU Parliament Press Release

The European Union Parliament issued its Press Release on the adoption of the Artificial Intelligence Act:

On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation. The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

The AI Act is a monument to efforts to control the collective self by imposing controls on its interactions with its virtual collective selves, while creating a "safe space" for the exploitation of autonomous virtual and generative intelligence. The AI Act defines AI systems as 

a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. (AI Act Art. 3(1)).
It is built around the establishment of obligations for a series of classes of users: providers, deployers, importers, distributors and product manufacturers.

Taken as a whole, the AI Act provides a useful experiment in the legalization of self control in the way in which physical and virtual intelligence interact, the premises of exploitation, and the protection of human producers and consumers of data, including themselves, within a more tightly managed ecology of generative, descriptive and predictive intelligence. It thus creates a space of incentivized behaviors around the construction and utilization of certain autonomous and big data coded programs (General-Purpose AI Models, Art. 52 et seq.), with a regulatory focus on so-called "High-Risk AI Systems" (Art. 6 et seq.).

The European Commission described its arc of regulation this way:

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).  The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. Together, these measures will guarantee the safety and fundamental rights of people and businesses when it comes to AI. They will also strengthen uptake, investment and innovation in AI across the EU.  The AI Act is the first-ever comprehensive legal framework on AI worldwide. The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models. (Shaping Europe’s digital future)

And it suggests that AI's forbidden territories will be suppressed--to the extent that is possible (EU AI Act Art. 5). Whether it is possible and whether the law is well targeted remains to be seen (discussed here). But the experiment is worth the effort.

A "High Level Summary" of the AI Act follows along with the text of the EU Parliament Press Release.

 

In this article we provide you with a high-level summary of the AI Act, selecting the parts which are most likely to be relevant to you regardless of who you are. We provide links to the original document where relevant so that you can always reference the Act text.

To explore the full text of the AI Act yourself, use our AI Act Explorer. Alternatively, if you want to know which parts of the text are most relevant to you, use our Compliance Checker.


Four-point summary

The AI Act classifies AI according to its risk:

  • Unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI).
  • Most of the text addresses high-risk AI systems, which are regulated.
  • A smaller section handles limited risk AI systems, subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI (chatbots and deepfakes).
  • Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI enabled video games and spam filters – at least in 2021; this is changing with generative AI).
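
The tiered scheme above can be restated, purely as an illustration, as a simple lookup. Everything in the minimal sketch below is hypothetical shorthand: the tier names and example systems paraphrase this summary, and the mapping is not drawn from the Act itself.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "regulated; bulk of the Act's obligations"
        LIMITED = "lighter transparency obligations"
        MINIMAL = "unregulated"

    # Hypothetical examples paraphrasing the summary above.
    EXAMPLE_TIERS = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "CV-screening recruitment tool": RiskTier.HIGH,
        "customer-facing chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLE_TIERS.items():
        print(f"{system}: {tier.name} ({tier.value})")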

The majority of obligations fall on providers (developers) of high-risk AI systems.

  • Those that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country.
  • And also third country providers where the high risk AI system’s output is used in the EU.

Users are natural or legal persons that deploy an AI system in a professional capacity, not affected end-users.

  • Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).
  • This applies to users located in the EU, and third country users where the AI system’s output is used in the EU.

General purpose AI (GPAI):

  • All GPAI model providers must draw up technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.
  • Free and open licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk.
  • All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, adversarial testing, track and report serious incidents and ensure cybersecurity protections.

Prohibited AI systems (Title II, Art. 5)

The following types of AI system are ‘Prohibited’ according to the AI Act.

AI systems:

  • deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  • biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
  • social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
  • assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
  • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
  • inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
  • ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
    • searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;
    • preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
    • identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime, etc.).

Notes on remote biometric identification:

Using AI-enabled real-time RBI is only allowed when not using the tool would cause considerable harm and must account for affected persons’ rights and freedoms.

Before deployment, police must complete a fundamental rights impact assessment and register the system in the EU database, though, in duly justified cases of urgency, deployment can commence without registration, provided that it is registered later without undue delay.

Before deployment, they also must obtain authorisation from a judicial authority or independent administrative authority[1], though, in duly justified cases of urgency, deployment can commence without authorisation, provided that authorisation is requested within 24 hours. If authorisation is rejected, deployment must cease immediately, deleting all data, results, and outputs.
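
As a reading aid only, the urgency carve-out described above can be expressed as a small decision rule. The sketch below is an illustrative paraphrase of this summary, with invented parameter names; it is not a statement of the legal test.

    from datetime import datetime, timedelta
    from typing import Optional

    def rbi_deployment_permitted(authorised: bool, urgent: bool,
                                 deployed_at: datetime,
                                 authorisation_requested_at: Optional[datetime]) -> bool:
        # Default rule: prior authorisation by a judicial or independent
        # administrative authority.
        if authorised:
            return True
        # Urgency carve-out: deployment may begin first, provided authorisation
        # is requested within 24 hours of deployment.
        if urgent and authorisation_requested_at is not None:
            return authorisation_requested_at - deployed_at <= timedelta(hours=24)
        return False

    start = datetime(2024, 3, 14, 9, 0)
    # Urgent use with authorisation requested 20 hours after deployment: permitted.
    print(rbi_deployment_permitted(False, True, start, start + timedelta(hours=20)))

    # If authorisation is subsequently rejected, deployment must cease immediately
    # and all data, results and outputs must be deleted (not modelled here).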

[1] Independent administrative authorities may be subject to greater political influence than judicial authorities (Hacker, 2024).

High risk AI systems (Title III)

Some AI systems are considered ‘High risk’ under the AI Act. Providers of those systems will be subject to additional requirements.

Classification rules for high-risk AI systems (Art. 6)

High risk AI systems are those:

  • used as a safety component or a product covered by EU laws in Annex II AND required to undergo a third-party conformity assessment under those Annex II laws; OR
  • those under Annex III use cases (below), except if the AI system:
    • performs a narrow procedural task;
    • improves the result of a previously completed human activity;
    • detects decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence the previously completed human assessment without proper human review; or
    • performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.
  • AI systems are always considered high-risk if they profile individuals, i.e. automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement.
  • Providers that believe their AI system, although falling under Annex III, is not high-risk must document such an assessment before placing it on the market or putting it into service.
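
Purely as a reading aid, the Art. 6 test above can be restated as a small decision function. A minimal sketch follows, assuming a Python rendering in which every field name is an invented label for one of the criteria listed above; the actual legal test is considerably more nuanced.

    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        # Hypothetical flags mirroring the criteria summarised above.
        annex_ii_safety_component: bool        # Annex II product/component needing third-party conformity assessment
        annex_iii_use_case: bool               # falls under an Annex III use case
        narrow_procedural_task: bool
        improves_prior_human_activity: bool
        detects_patterns_with_human_review: bool
        preparatory_task_only: bool
        profiles_individuals: bool

    def is_high_risk(p: SystemProfile) -> bool:
        # Profiling of individuals is always treated as high-risk.
        if p.profiles_individuals:
            return True
        if p.annex_ii_safety_component:
            return True
        if p.annex_iii_use_case:
            exempt = (p.narrow_procedural_task
                      or p.improves_prior_human_activity
                      or p.detects_patterns_with_human_review
                      or p.preparatory_task_only)
            return not exempt
        return False

    # An Annex III system performing only a narrow procedural task: exempt.
    print(is_high_risk(SystemProfile(False, True, True, False, False, False, False)))  # False
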
Requirements for providers of high-risk AI systems (Art. 8-25)

High risk AI providers must:

  • Establish a risk management system throughout the high risk AI system’s lifecycle.
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance.
  • Design their high risk AI system for record-keeping to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system’s lifecycle.
  • Provide instructions for use to downstream deployers to enable the latter’s compliance.
  • Design their high risk AI system to allow deployers to implement human oversight.
  • Design their high risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  • Establish a quality management system to ensure compliance.

Annex III use cases

  • Non-banned biometrics: remote biometric identification systems, excluding biometric verification that confirms a person is who they claim to be; biometric categorisation systems inferring sensitive or protected attributes or characteristics; emotion recognition systems.
  • Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
  • Education and vocational training: AI systems determining access, admission or assignment to educational and vocational training institutions at all levels; evaluating learning outcomes, including those used to steer the student’s learning process; assessing the appropriate level of education for an individual; monitoring and detecting prohibited student behaviour during tests.
  • Employment, workers management and access to self-employment: AI systems used for recruitment or selection, particularly targeted job ads, analysing and filtering applications, and evaluating candidates; promotion and termination of contracts; allocating tasks based on personality traits or characteristics and behaviour; monitoring and evaluating performance.
  • Access to and enjoyment of essential public and private services: AI systems used by public authorities for assessing eligibility to benefits and services, including their allocation, reduction, revocation, or recovery; evaluating creditworthiness, except when detecting financial fraud; evaluating and classifying emergency calls, including dispatch prioritising of police, firefighters, medical aid and urgent patient triage services; risk assessments and pricing in health and life insurance.
  • Law enforcement: AI systems used to assess an individual’s risk of becoming a crime victim; polygraphs; evaluating evidence reliability during criminal investigations or prosecutions; assessing an individual’s risk of offending or re-offending not solely based on profiling or on assessing personality traits or past criminal behaviour; profiling during criminal detections, investigations or prosecutions.
  • Migration, asylum and border control management: polygraphs; assessments of irregular migration or health risks; examination of applications for asylum, visa and residence permits, and associated complaints related to eligibility; detecting, recognising or identifying individuals, except verifying travel documents.
  • Administration of justice and democratic processes: AI systems used in researching and interpreting facts and applying the law to concrete facts, or used in alternative dispute resolution; influencing the outcomes of elections and referenda or voting behaviour, excluding outputs that do not directly interact with people, like tools used to organise, optimise and structure political campaigns.

General purpose AI (GPAI)

GPAI model means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities.

GPAI system means an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.

GPAI systems may be used as high risk AI systems or integrated into them. GPAI system providers should cooperate with such high risk AI system providers to enable the latter’s compliance.

All providers of GPAI models must:

  • Draw up technical documentation, including training and testing process and evaluation results.
  • Draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI system, so that the latter understand the model’s capabilities and limitations and are able to comply.
  • Establish a policy to respect the Copyright Directive.
  • Publish a sufficiently detailed summary about the content used for training the GPAI model.

Free and open licence GPAI models – whose parameters, including weights, model architecture and model usage are publicly available, allowing for access, usage, modification and distribution of the model – only have to comply with the latter two obligations above, unless the free and open licence GPAI model is systemic.

GPAI models are considered systemic when the cumulative amount of compute used for their training is greater than 10^25 floating point operations (FLOPs). Providers must notify the Commission within two weeks if their model meets this criterion. The provider may present arguments that, despite meeting the criterion, their model does not present systemic risks. The Commission may decide on its own, or via a qualified alert from the scientific panel of independent experts, that a model has high-impact capabilities, rendering it systemic.
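
To give the 10^25 threshold some concreteness, here is a back-of-the-envelope sketch. The threshold constant comes from the Act; the estimate of training compute as roughly 6 FLOPs per parameter per token is a common community heuristic for dense transformers, not anything the Act prescribes, and the obligation labels are shorthand for the duties summarised above.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute (per the Act)

    def estimated_training_flops(n_params: float, n_tokens: float) -> float:
        # Heuristic: ~6 FLOPs per parameter per training token (dense transformer).
        return 6 * n_params * n_tokens

    def gpai_obligations(training_flops: float, open_licence: bool) -> list:
        base = ["technical documentation", "documentation for downstream providers",
                "copyright policy", "training-content summary"]
        systemic = training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
        if systemic:
            # Systemic-risk models carry the additional duties, open or closed.
            return base + ["model evaluations and adversarial testing",
                           "systemic-risk assessment and mitigation",
                           "serious-incident reporting", "cybersecurity protection"]
        if open_licence:
            # Free and open-licence, non-systemic models: only the latter two duties.
            return base[2:]
        return base

    # Hypothetical: a 1-trillion-parameter model trained on 15 trillion tokens.
    flops = estimated_training_flops(1e12, 15e12)   # 9.0e+25, above the threshold
    print(f"{flops:.1e}", gpai_obligations(flops, open_licence=True))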

In addition to the four obligations above, providers of GPAI models with systemic risk must also:

  • Perform model evaluations, including conducting and documenting adversarial testing to identify and mitigate systemic risk.
  • Assess and mitigate possible systemic risks, including their sources.
  • Track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.
  • Ensure an adequate level of cybersecurity protection.

Until European harmonised standards are published, all GPAI model providers may demonstrate compliance with their obligations by voluntarily adhering to a code of practice; compliance with those harmonised standards, once available, will lead to a presumption of conformity. Providers that do not adhere to codes of practice must demonstrate alternative adequate means of compliance for Commission approval.

Codes of practice

  • Will account for international approaches.
  • Will cover, but are not necessarily limited to, the above obligations: in particular, the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialise throughout the value chain.
  • The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.

Governance

How will the AI Act be implemented?

  • The AI Office will be established within the Commission to monitor the effective implementation of the Act and compliance by GPAI model providers.
  • Downstream providers can lodge a complaint with the AI Office regarding an upstream provider’s infringement.
  • The AI Office may conduct evaluations of a GPAI model to:
    • assess compliance where the information gathered under its powers to request information is insufficient; or
    • investigate systemic risks, particularly following a qualified report from the scientific panel of independent experts.

Timelines

See this post for an overview of the full implementation timeline.

After entry into force, the AI Act will apply:

  • 6 months for prohibited AI systems.
  • 12 months for GPAI.
  • 24 months for high risk AI systems under Annex III.
  • 36 months for high risk AI systems under Annex II.

Codes of practice must be ready 9 months after entry into force.
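
For concreteness, these staggered deadlines can be computed mechanically from the entry-into-force date. A minimal sketch follows; the date used below is a placeholder assumption, since the Act had not yet been published in the Official Journal when this summary appeared.

    from datetime import date

    def add_months(d: date, months: int) -> date:
        # Minimal month arithmetic; safe here because the day of month is 1.
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)

    ENTRY_INTO_FORCE = date(2024, 8, 1)  # placeholder assumption, not from the Act

    MILESTONES = {
        "prohibited AI systems": 6,
        "codes of practice ready": 9,
        "GPAI rules": 12,
        "high risk AI systems (Annex III)": 24,
        "high risk AI systems (Annex II)": 36,
    }

    for label, months in MILESTONES.items():
        print(f"{label}: applies from {add_months(ENTRY_INTO_FORCE, months)}")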

This post was published on 27 Feb, 2024

*       *       *


  • Safeguards on general purpose artificial intelligence  
  • Limits on the use of biometric identification systems by law enforcement  
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities  
  • Right of consumers to launch complaints and receive meaningful explanations  

The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned (Pix Credit: © Alexander / Adobe Stock)

On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.


Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

Quotes

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

Next steps

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).


Background

The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE), most concretely to proposal 12(10) on enhancing the EU’s competitiveness in strategic sectors; proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control; proposal 35 on promoting digital innovation, (3) while ensuring human oversight and (8) trustworthy and responsible use of AI, setting safeguards and ensuring transparency; and proposal 37(3) on using AI and digital tools to improve citizens’ access to information, including persons with disabilities.
