Impact and Implications of the European Union’s Artificial Intelligence Regulation

The rapid evolution of Artificial Intelligence (AI) technology has generated growing concerns about its possible impacts on society, the economy and human rights. Against this backdrop, the European Union (EU) has taken the lead in developing the world’s first comprehensive legal framework on AI, the EU AI Act, aimed at guaranteeing people’s health, safety and fundamental rights, while providing legal certainty for companies in its 27 member states.


1. Introduction

Among the objectives of the Regulation, we highlight the establishment of harmonized rules for the placing on the market and use of artificial intelligence systems; the prohibition of certain practices; and the setting of specific requirements for high-risk AI systems, together with obligations for the operators of such systems.

For a better understanding, it is worth clarifying some of the specific terms used in the Regulation. The term ‘operator’, for example, covers providers, users, authorised representatives, importers and distributors, which the Regulation defines as follows:

“‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system, or has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;

‘user’ means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity;

‘authorised representative’ means a natural or legal person established in the Union who has received a written mandate from a provider of an AI system to perform and comply with, on its behalf, the obligations and procedures established by this Regulation;

‘importer’ means a natural or legal person established in the Union who places on the market or puts into service an AI system bearing the name or trademark of a natural or legal person established outside the Union;

‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, who makes an AI system available on the Union market without affecting its properties;”


2. Risk-based approach

One of the main features of the EU AI Act is its risk-based approach, which defines four levels of risk for AI systems: unacceptable risk, high risk, limited risk, and minimal (or no) risk.


a) Unacceptable-risk AI systems

The Regulation prohibits the placing on the market of AI systems classified as posing an unacceptable risk, which include:

– AI systems that employ subliminal techniques to bypass a person’s consciousness, substantially distorting their behavior in a way that causes or is likely to cause physical or psychological harm to them or others.

– AI systems that exploit vulnerabilities of a specific group of people associated with their age or physical or mental disability in order to substantially distort the behavior of a person belonging to that group in a way that causes or is likely to cause physical or psychological harm to them or others.

– AI systems used by public authorities to evaluate or classify the trustworthiness of people based on their social behavior, personality or personal characteristics, where this social scoring leads to detrimental or unfavorable treatment of certain people or entire groups of people in social contexts unrelated to those in which the data was originally generated or collected, or to treatment that is unjustified or disproportionate to their social behavior or its gravity.

The use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes is regulated and must comply with certain conditions, such as being strictly necessary to achieve specific objectives (for example, investigating serious crimes or preventing threats to life or physical safety), and requires prior authorization from an independent judicial or administrative authority.


b) High-risk AI systems

These systems are subject to strict obligations before they can be placed on the market. The list of high-risk AI systems covers a limited set of systems whose risks have already materialized or are likely to materialize in the near future, and it may be adjusted over time. The following are considered high risk:

– AI systems used as safety components of products or which are products in their own right, provided that the product for which the AI system is intended is subject to a third-party conformity assessment for placing on the market or putting into service, as stipulated in European Union harmonization legislation.

– Critical Infrastructure: For example, systems applied in transportation that can compromise people’s lives or physical integrity.

– Education or Vocational Training: Systems that may determine access to education or the course of someone’s professional life, such as the scoring of exams.

– Product Safety Components: This includes systems used in robot-assisted surgery.

– Employment, Worker Management and Access to Self-Employment: For example, analysis of CVs in selection processes.

– Essential Public and Private Services: Such as credit scores for granting loans.

– Law Enforcement: Systems that may interfere with people’s fundamental rights, such as evaluating the reliability of evidence.

– Migration Management and Border Control: For example, verifying the authenticity of travel documents.

– Administration of Justice and Democratic Processes: This includes the application of the law to concrete cases.

– Systems that pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights, whose severity and likelihood are equivalent to or greater than the risks posed by the high-risk AI systems already listed in Annex III of the Regulation, may also be added to this list.

c) Limited risk

This refers to the risks associated with a lack of transparency in the use of AI. The regulation imposes specific transparency obligations to ensure that humans are adequately informed when interacting with AI systems, promoting trust and understanding.

d) Minimal or no risk

It covers the use of AI with minimal risk, such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.


3. General requirements for AI systems

Transparency Obligations for Artificial Intelligence Systems

Providers must ensure that AI systems designed to interact with humans clearly indicate that the user is interacting with an AI system, unless this is obvious from the context. Exceptions apply to AI systems authorized for specific activities related to public safety.
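
In practice, this obligation can be as simple as surfacing a disclosure notice at the start of a conversation. The sketch below, in Python, shows one illustrative way a provider might do this; the function and notice text are assumptions for illustration, since the Regulation prescribes the obligation, not an implementation.

```python
# Minimal sketch of an AI-interaction disclosure, assuming a simple chat
# loop. The notice text and function names are illustrative, not mandated
# by the Regulation.

DISCLOSURE_NOTICE = (
    "Notice: you are interacting with an AI system, not a human operator."
)

def respond(model_reply: str, first_turn: bool) -> str:
    """Return the system's reply, prefixed with the disclosure on the
    first turn so the user is clearly informed up front."""
    if first_turn:
        return f"{DISCLOSURE_NOTICE}\n\n{model_reply}"
    return model_reply

if __name__ == "__main__":
    print(respond("Hi! How can I help you today?", first_turn=True))
```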

Users of emotion recognition or biometric categorization systems must inform the people exposed to them about how these systems operate. Exceptions apply to AI systems used for public safety purposes.

Users of AI systems that generate or manipulate media content to create deep fakes must disclose that the content has been artificially generated or manipulated, unless it is legally authorized or protected by the right to freedom of expression and artistic freedom, provided that the rights of third parties are duly respected.
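
For generated or manipulated media, the disclosure can travel with the content itself. Below is a minimal sketch of a machine-readable label attached to a generated file; the field names and JSON format are assumptions for illustration, as the Regulation requires the disclosure but does not mandate a specific format.

```python
# Minimal sketch of labelling artificially generated media. Field names
# are illustrative assumptions; the Regulation requires disclosure but
# does not mandate a metadata format.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ContentLabel:
    artificially_generated: bool  # True for generated or manipulated content
    generator: str                # identifier of the AI system used
    created_at: str               # ISO 8601 timestamp

def label_generated_content(generator: str) -> str:
    """Produce a JSON disclosure label to ship alongside the media file."""
    label = ContentLabel(
        artificially_generated=True,
        generator=generator,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(label), indent=2)

if __name__ == "__main__":
    print(label_generated_content("example-image-model-v1"))
```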

In addition, the Regulation establishes transparency obligations for all general-purpose AI models, with a view to fostering understanding of and trust in these systems, and imposes further obligations on models posing systemic risks, including self-assessment and mitigation of those risks, serious incident reporting, model testing and evaluation, and cybersecurity requirements.


4. Requirements for high-risk AI systems

The European AI Regulation establishes a series of specific provisions for risk management in relation to AI systems considered to be high risk. These provisions aim to ensure that these systems are developed, implemented and used in a safe and ethical manner, minimizing the risks to users and society in general.


a) Risk management in high-risk AI systems

AI systems considered high risk must have a risk management system that covers all stages of their life cycle, with regular updates. This involves identifying and analyzing risks, estimating and evaluating risks, continuously assessing risks based on post-marketing data, adopting appropriate risk management measures and carrying out tests to ensure consistent performance. Tests should be carried out during development and before marketing, with special considerations if the system affects children.
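
As an illustration of such a lifecycle process, the sketch below models a simple risk register in Python, with risk identification, a severity-times-likelihood estimate, and updates driven by post-market data. The scales, threshold and data model are assumptions for illustration; the Regulation requires the process, not a particular implementation.

```python
# Minimal sketch of a lifecycle risk register for a high-risk AI system.
# The structure and scoring scales are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int      # 1 (negligible) .. 5 (critical), assumed scale
    likelihood: int    # 1 (rare) .. 5 (frequent), assumed scale
    mitigation: str = ""

    @property
    def level(self) -> int:
        # Simple severity-times-likelihood estimate, an assumed convention.
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)

    def update_from_post_market_data(self, description: str, new_likelihood: int) -> None:
        # Post-market monitoring data feeds back into the estimates.
        for risk in self.risks:
            if risk.description == description:
                risk.likelihood = new_likelihood

    def requiring_action(self, threshold: int = 10) -> list[Risk]:
        # Risks above the (assumed) threshold need management measures.
        return [r for r in self.risks if r.level >= threshold]
```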


b) Data governance in high-risk AI systems

High-risk AI systems that use model training techniques based on data must be built on training, validation and test data sets that meet specific quality criteria. These data sets must be managed according to appropriate data governance practices, covering design choices, data collection, preparation and processing, bias assessment and the identification of data gaps or deficiencies.

The data sets must be relevant, representative, free of errors and complete, and have the statistical properties appropriate to the context in which the AI system will be used.

Providers of high-risk AI systems can handle special categories of personal data, as long as they ensure adequate safeguards of individual rights, such as pseudonymization or encryption.
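
As a concrete illustration of one such safeguard, the sketch below pseudonymizes a direct identifier with a keyed hash before the record enters a training set. The key handling is deliberately simplified and is an assumption; in practice the key would live in a secrets manager, separate from the data.

```python
# Minimal sketch of pseudonymising a direct identifier before model
# training, one of the safeguards the Regulation mentions for special
# categories of data. Key handling here is an illustrative assumption.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"  # assumption: stored apart from the data

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a keyed hash: stable enough to link
    records, but not reversible without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    record = {"patient_id": "12345", "diagnosis_code": "E11"}
    record["patient_id"] = pseudonymise(record["patient_id"])
    print(record)  # the raw identifier no longer appears in the training data
```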

Even high-risk AI systems that do not involve training models must follow data governance practices to ensure compliance with these requirements.


c) Transparency in high-risk AI systems

High-risk AI systems must be transparent enough to allow users to correctly interpret and use their results. They must be accompanied by instructions for use that are concise, complete, correct and clear, providing information on the provider, the system’s characteristics, performance, possible risks, human oversight measures, expected lifetime and necessary maintenance.

The information should include details about the purpose of the system, its accuracy, robustness, cybersecurity, performance on different user groups and data requirements. In addition, information should be provided on any planned changes to the system and technical measures to make it easier for users to interpret the results.
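
One way to keep this information complete and consistent is to maintain it as a structured record, in the spirit of a model card. The sketch below is illustrative: the field names paraphrase the items listed above and are assumptions, not an official schema from the Regulation.

```python
# Minimal sketch of "instructions for use" information as a structured
# record. Field names paraphrase the Regulation's items and are
# illustrative assumptions, not an official schema.
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    provider: str
    intended_purpose: str
    accuracy_metrics: dict[str, float]   # declared accuracy per metric
    known_risks: list[str]
    human_oversight_measures: list[str]
    expected_lifetime: str
    maintenance: str

example = InstructionsForUse(
    provider="Example Provider Ltd.",
    intended_purpose="Ranking of CVs in recruitment (illustrative)",
    accuracy_metrics={"f1_score": 0.91},
    known_risks=["Lower accuracy for under-represented groups"],
    human_oversight_measures=["A recruiter reviews every ranking before use"],
    expected_lifetime="3 years with quarterly re-evaluation",
    maintenance="Retraining and bias assessment every quarter",
)
```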


d) Human oversight in high-risk AI systems

High-risk AI systems must be designed so that they can be effectively overseen by humans while in use. This human oversight aims to prevent or minimize risks to health, safety or fundamental rights.

Oversight can be ensured by measures built into the system by the provider before it is placed on the market or by measures implemented by the user. These measures must allow the people charged with oversight to fully understand the system’s capabilities and limitations, remain aware of automation bias, correctly interpret the system’s output, decide whether to use it, and intervene in or halt the system as necessary. For the remote biometric identification systems listed in the Regulation, the measures must ensure that no action or decision is taken on the basis of an identification produced by the system unless it has been verified and confirmed by at least two people.
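
The two-person rule just described translates naturally into a simple gate in software. The sketch below is an illustrative assumption of how such a check might look; the Regulation states the rule, not the code.

```python
# Minimal sketch of a "verified and confirmed by at least two people" gate
# for identifications produced by a biometric system. The workflow and
# names are illustrative assumptions.

def may_act_on_identification(confirmations: set[str]) -> bool:
    """Allow action on a system identification only once at least two
    distinct human reviewers have confirmed it."""
    return len(confirmations) >= 2

if __name__ == "__main__":
    confirmations: set[str] = {"reviewer_alice"}
    print(may_act_on_identification(confirmations))  # False: only one reviewer
    confirmations.add("reviewer_bob")
    print(may_act_on_identification(confirmations))  # True: two distinct reviewers
```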


e) Accuracy, robustness and cybersecurity in high-risk AI systems

High-risk AI systems must be developed to achieve appropriate levels of accuracy, robustness and cybersecurity, and to perform consistently throughout their lifecycle. The instructions for use must declare the levels of accuracy achieved.

They must be resilient to errors and failures, with technical redundancy solutions, especially for systems that continue to learn after being placed on the market.

In addition, they must be protected against unauthorized attempts to alter them or to exploit their vulnerabilities. Cybersecurity solutions must be appropriate to the specific context, and additional measures must be taken to prevent and control attacks such as data poisoning and adversarial examples.
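
As one concrete example of such a measure, the sketch below verifies a training file against a known-good hash before it enters the pipeline, a basic control against data poisoning. It is an illustrative assumption, not a method prescribed by the Regulation, and a real control set would be broader (provenance tracking, anomaly detection, adversarial testing).

```python
# Minimal sketch of one control against data poisoning: refusing to train
# on a dataset whose contents have changed since they were vetted.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(path: Path, expected_sha256: str) -> None:
    """Raise if the dataset no longer matches its known-good hash."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"{path} failed integrity check: {actual}")
```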


5. Quality management system as an obligation for providers of high-risk artificial intelligence systems

Providers of artificial intelligence (AI) systems classified as high risk have a series of responsibilities to fulfill. These include ensuring that their systems comply with the requirements set out in the Regulation, implementing a quality management system, drawing up the technical documentation for the AI system, keeping the logs automatically generated by the system where possible, and subjecting the system to the conformity assessment procedure before placing it on the market or putting it into service.

They must also comply with registration obligations, adopt corrective measures where necessary, inform the competent authorities about the system’s availability, affix the CE marking to their AI systems to indicate conformity with the Regulation, and demonstrate the system’s conformity when requested by a competent authority.

It is therefore crucial for these providers to establish a quality management system that ensures compliance with the AI Regulation.

The Regulation stipulates that providers must set up this quality management system in an organized and systematic way, through written policies, procedures and instructions covering at least the following aspects:

  • Development of a strategy to ensure compliance with regulations, including conformity assessment procedures and management of high-risk AI system modifications;
  • Implementation of methods, procedures and systematic actions for the design, control and verification of the AI system;
  • Establishment of systematic procedures for the development, quality control and quality assurance of the AI system;
  • Carrying out examination, testing and validation procedures before, during and after system development, defining the frequency of these activities;
  • Definition of technical specifications, including standards to be followed and means to ensure compliance with regulatory requirements if harmonized standards are not fully applied;
  • Implementation of systems and procedures for data management, from collection to the marketing of AI systems;
  • Establishment of a risk management system as established in the regulation;
  • Establishment, implementation and maintenance of a post-market monitoring system;
  • Development of procedures for reporting serious incidents and anomalies in accordance with regulatory provisions (a minimal sketch of such an incident record follows this list);
  • Communication management with competent authorities, notified bodies, clients and other interested parties;
  • Establishment of systems and procedures to keep records of all relevant documentation and information;
  • Resource management, including measures related to security of supply;
  • Definition of the responsibilities of management staff and other employees in relation to all the aspects mentioned.
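
To illustrate the serious-incident reporting item referenced in the list above, here is a minimal sketch of an incident report as a structured record. The field names and example values are assumptions, not a schema defined by the Regulation.

```python
# Minimal sketch of a serious-incident report record. Field names are
# illustrative assumptions; the Regulation requires the reporting
# procedure, not this particular schema.
from dataclasses import dataclass

@dataclass
class SeriousIncidentReport:
    system_name: str
    occurred_at: str          # ISO 8601 timestamp of the incident
    description: str          # what happened and who was affected
    suspected_cause: str
    corrective_action: str    # immediate measures taken
    notified_authority: str   # competent authority that was informed

report = SeriousIncidentReport(
    system_name="example-credit-scoring-v2",
    occurred_at="2024-03-01T10:15:00Z",
    description="Systematic rejection of applications from one region",
    suspected_cause="Data drift after a source schema change",
    corrective_action="System suspended; affected decisions under review",
    notified_authority="National market surveillance authority",
)
```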

6. Conclusion

Applying the recommendations outlined in the European Union’s Artificial Intelligence Regulation is not only a necessary measure for Brazil, but also an opportunity to promote the ethical and responsible adoption of AI in the country. Looking at the rapid evolution of AI technology and its possible impacts on society, the economy and human rights, it becomes clear that regulatory measures are essential to guarantee the safety and fundamental rights of Brazilian citizens.

Taking inspiration from the EU AI Act would provide entrepreneurs operating in Brazil with a comprehensive and up-to-date legal framework to deal with emerging AI-related challenges. The risk-based approach, the emphasis on transparency, data governance and human oversight, as well as the strict requirements for high-risk systems, offer a solid model for AI regulation in the country.

In addition, by adopting international safety and ethics standards, Brazil could strengthen its position on the global stage, promoting investor confidence and collaborating more effectively with other countries and organizations.

Implementing the EU’s recommendations in Brazil would not only ensure compliance with the highest international standards, but would also stimulate responsible and sustainable innovation in the field of AI. By establishing a clear and predictable regulatory environment, Brazil could attract investment and encourage the development of innovative technological solutions that benefit society as a whole.
