The EU AI Regulation
A guide for effective implementation
The rapid development of artificial intelligence (AI) opens up a wide range of applications for companies, but it also entails considerable risks. To ensure the responsible use of AI systems, the European Union has introduced the AI Regulation (AI Act) – the world's first comprehensive set of rules for artificial intelligence.
This article provides a structured overview of the AI Regulation, explains its key content and shows how companies can effectively implement the regulation.
The AI Regulation was developed to regulate the development and use of AI within the EU internal market while ensuring a high level of protection for health, safety, and fundamental rights. The legislative process began with the adoption of the text by the European Parliament on March 13, 2024, followed by editorial corrections on April 19, 2024. The Council of the EU approved the regulation on May 21, 2024. The regulation was published in the Official Journal of the EU on July 12, 2024, and entered into force on August 1, 2024. It applies in a phased procedure: the prohibitions apply from February 2, 2025, the rules for general-purpose AI models from August 2, 2025, and most remaining provisions from August 2, 2026.
The AI Regulation pursues a risk-based approach, which classifies AI systems according to their potential risk to society. The higher the risk, the stricter the rules. The regulation distinguishes a total of five categories:

- Unacceptable risk: practices that are prohibited outright, such as social scoring by public authorities
- High risk: systems subject to strict requirements, for example in recruitment, credit scoring, or critical infrastructure
- Limited risk: systems subject to transparency obligations, such as chatbots
- Minimal risk: systems without specific obligations, such as spam filters
- General-purpose AI models: subject to separate obligations at the model level
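To make the risk-based approach concrete, the tiers can be modeled in code. The following sketch is purely illustrative: the enum values and the mapping of example use cases to tiers are our own simplified assumptions, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the AI Regulation (illustrative model, not legal advice)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"
    GPAI = "separate model-level rules"

# Hypothetical mapping of example use cases to tiers, for illustration only;
# a real classification requires a legal assessment against the regulation.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case; default to MINIMAL if unknown."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
```

In practice, this kind of mapping would be maintained per system in an AI inventory and reviewed by legal counsel, since classification drives which obligations apply.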
The AI Regulation applies to all providers and operators of AI systems placed on the market or operated in the EU, regardless of whether they are based inside or outside the EU. Exceptions exist for national security applications, military purposes, and AI systems developed exclusively for scientific research.
The main addressees of the AI Regulation are:

- Providers who develop AI systems or have them developed and place them on the market
- Operators (deployers) who use AI systems under their own authority
- Importers and distributors who make AI systems available on the EU market
- Authorized representatives of providers established outside the EU
- Product manufacturers who place AI systems on the market as part of their products
Depending on how their AI systems are used, companies can assume one or more of these roles, which entails different obligations.
Providers and operators of high-risk AI systems must:

- Establish a risk management system covering the entire life cycle
- Ensure the quality and governance of training, validation, and test data
- Prepare technical documentation and enable automatic logging
- Provide transparency and instructions for use
- Ensure effective human oversight
- Guarantee accuracy, robustness, and cybersecurity
- Carry out a conformity assessment before placing the system on the market
For AI systems with limited risk, transparency measures are particularly necessary. These include, for example, labeling content generated or manipulated by AI to prevent deception.
AI systems with minimal risk do not require specific measures under the AI Regulation, but providers may voluntarily follow codes of conduct to create additional trust among users.
Violations of the AI Regulation can result in significant fines of up to EUR 35 million or 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher. In addition, claims for damages may be asserted under other areas of law. Enforcement is carried out by national supervisory authorities and specialized EU institutions such as the AI Office.
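The "whichever is higher" rule for the upper fine limit can be expressed as a simple formula. This is a sketch of the arithmetic only; the function name and the assumption that turnover is given in euros are ours.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)
```

For a group with EUR 1 billion in annual turnover, the turnover-based limit (EUR 70 million) exceeds the fixed amount; for smaller companies, the EUR 35 million floor applies.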
To successfully implement the AI Regulation, companies should take the following steps:

1. Create an inventory of all AI systems in use or under development
2. Classify each system according to its risk category and determine the company's role (provider, operator, importer, or distributor)
3. Perform a gap analysis against the applicable obligations
4. Establish AI governance with clear responsibilities, processes, and documentation
5. Train employees and build AI literacy across the organization
6. Continuously monitor compliance deadlines and regulatory developments
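A compliance effort of this kind typically starts from an internal AI inventory. The following is a minimal sketch of what one record in such an inventory might look like; the schema, field names, and example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a company-internal AI inventory (hypothetical schema)."""
    name: str
    purpose: str
    role: str                      # e.g. "provider" or "operator"
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    obligations_done: dict = field(default_factory=dict)

    def open_obligations(self) -> list:
        """Return the obligations not yet fulfilled for this system."""
        return [name for name, done in self.obligations_done.items() if not done]

# Example entry: a deployed high-risk recruitment tool with one open obligation.
inventory = [
    AISystemRecord(
        name="resume-screening-tool",
        purpose="pre-selection of applicants",
        role="operator",
        risk_tier="high",
        obligations_done={"human oversight": True, "logging": False},
    ),
]
```

Keeping the obligation status per system makes the gap analysis a simple query over the inventory rather than a one-off exercise.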
The EU AI Regulation represents an important step toward creating a safe and trustworthy environment for the use of artificial intelligence. Companies are required to address the new regulations early on and implement appropriate compliance measures. Raising employee awareness and establishing robust AI management are particularly crucial for successful compliance with the regulation. Through proactive measures, companies can not only minimize legal risks but also strengthen the trust of their customers and partners.