Decoding the European Union’s recently approved AI Act – A quick primer

The European Union’s AI Act, the world’s first regulatory framework for Artificial Intelligence (AI), was recently approved, marking a significant milestone in AI governance. The legislation provides a comprehensive legal framework for regulating AI systems, defined as software developed using machine learning, statistical, or logic- and knowledge-based approaches that can generate outputs which influence the environments they interact with.

As technology rapidly evolves and AI systems become more widespread and accessible, their possible applications seem virtually unending, reaching into most aspects of life.

While AI offers exciting, innovative solutions to many current problems, it also brings a new set of issues and concerns that must be addressed. These risks range from lack of transparency and privacy violations to the possible reinforcement of existing social biases. A comprehensive regulatory framework is, therefore, crucial to ensure the safe use of AI technology and safeguard human rights, while still allowing innovation and knowledge production to continue.

The EU’s AI Act tackles this challenge by classifying AI systems according to their risk level and regulating them accordingly. The Act defines four risk levels:

Unacceptable Risk

Systems in this category are considered to pose a threat to individuals or groups of people and are banned. These are:

  • systems that use manipulative techniques to influence behavior and interfere with people’s ability to make informed decisions;
  • systems that exploit personal or group vulnerabilities;
  • systems that use biometric data to infer race, religious or political beliefs, or sexual orientation;
  • social scoring systems and others that categorize people based on social or economic status, behavior, or personality;
  • real-time remote biometric identification systems in public spaces (save for exceptional cases outlined in the Act).

High Risk

This category includes systems that can negatively affect safety or fundamental rights. It covers:

  1. AI systems used in products covered by Union harmonisation legislation (e.g., toys, cars, medical devices);
  2. AI systems deployed in the specific areas listed in Annex III (biometrics; critical infrastructure; education; employment; access to essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes).

These systems must be registered in a public database and comply with requirements covering risk management, data governance, monitoring and record-keeping, transparency, and human oversight.

Limited Risk

Limited risk systems include general-purpose AI and systems designed to interact directly with people, such as chatbots, image-manipulation tools, and emotion recognition or biometric categorization systems. These will have to comply with transparency obligations and copyright law. Providers and deployers will have to ensure that content is labelled as AI-generated, and that people are aware they are interacting with, or are exposed to, AI.

Minimal or No Risk

Minimal or no risk systems include all other systems that do not fit into any of the previous categories. These systems have no restrictions or specific regulations.

The four risk levels defined by the EU AI Act (© European Commission)
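
To make the classification concrete, the sketch below shows one way a compliance tool might encode these tiers in Python. It is purely illustrative: the RiskLevel enum, the classify_system helper, and its boolean flags are hypothetical simplifications, not part of the Act or any official tooling; the Act’s actual legal criteria require case-by-case assessment rather than simple flags.

    from enum import Enum

    class RiskLevel(Enum):
        """The four risk tiers defined by the EU AI Act."""
        UNACCEPTABLE = "unacceptable"   # banned outright
        HIGH = "high"                   # registration and strict obligations
        LIMITED = "limited"             # transparency obligations
        MINIMAL = "minimal"             # no specific obligations

    # Hypothetical helper: these boolean flags are simplified stand-ins
    # for the Act's legal criteria.
    def classify_system(uses_banned_practice: bool,
                        is_high_risk_use_case: bool,
                        interacts_with_people: bool) -> RiskLevel:
        if uses_banned_practice:       # e.g., social scoring
            return RiskLevel.UNACCEPTABLE
        if is_high_risk_use_case:      # Annex III area or harmonised product
            return RiskLevel.HIGH
        if interacts_with_people:      # e.g., a chatbot
            return RiskLevel.LIMITED
        return RiskLevel.MINIMAL

    # Example: a customer-service chatbot
    print(classify_system(False, False, True))  # RiskLevel.LIMITED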

What about research and healthcare?

To ensure that development and innovation are not significantly hindered by strict regulations, the EU AI Act excludes all systems developed solely for research, development, or prototyping. The impact in these areas is therefore expected to be very limited. However, researchers should always consider the ethical implications of their work and adhere to transparency, security, and privacy best practices.

Systems developed for healthcare are covered by the EU AI Act and categorized according to the four risk levels. Systems that qualify as medical devices, along with tools for emergency dispatch, triage, therapeutic decisions, diagnostics, and patient monitoring, would be considered high-risk and subject to all the restrictions and regulations the framework imposes. Others would be classified as limited risk and expected to comply with transparency obligations.

The future

The EU AI Act is the first of its kind and it certainly sets a precedent for other global players, including the US and China, which are currently developing their own AI regulations.

Expected to come into effect in May or June, with obligations for general-purpose and high-risk AI applying one and three years later, respectively, the EU AI Act has garnered both praise and criticism. While it represents a crucial step forward in addressing the risks associated with AI, concerns have been raised about certain exemptions for law enforcement and migration authorities, as well as potential oversights in addressing societal risks that go beyond individual harms.

Nevertheless, the EU AI Act marks a milestone in AI governance, attempting to balance fostering innovation with safeguarding fundamental human rights. While the criticisms are valid and should be addressed, the Act serves as a crucial foundation upon which future improvements and refinements can be built. Moving forward, AI regulation must continue to evolve to address emerging challenges and uphold ethical principles, ensuring that the benefits of AI are maximized while its risks are mitigated.

Resources:

EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act – a website providing current developments and analyses of the EU AI Act, along with a compliance checker and an AI Act explorer.