Artificial intelligence (AI) increasingly shapes the daily work of many companies, authorities, associations and NGOs. The EU's AI Regulation, which entered into force on August 2, 2024, brings numerous new rules that companies, authorities, associations and NGOs need to be aware of immediately. In addition to data protection consulting on the use of AI systems, Scheja & Partners also provides legal advice on all issues relating to the AI Regulation. Our lawyers have many years of experience and can help you, also with the help of specially developed tools and software solutions such as the PrivacyPilot, to use your AI systems in a legally compliant manner: quickly, competently and on attractive terms. The focus is on pragmatic and risk-oriented solutions.
Give us a call: 0228-227 226-0
With our consulting approach, we support you in implementing all relevant requirements of the AI Regulation in a resource-efficient manner. Our range of services includes:
- AI implementation project: solution-oriented, tool-supported implementation of the AI Regulation
- Developing an AI strategy
- Support in the systematic selection of suitable and legally compliant AI systems
- Advice on the data protection-compliant use of AI systems
- AI competence / eLearning to impart legally required knowledge to employees
- Introduction of modern AI management systems in accordance with ISO/IEC 42001:2023, in particular for controlling internal processes
- Drafting contracts with suppliers and service providers
- Fulfillment of transparency obligations, e.g. operating instructions and information for data subjects
- Documentation of AI systems used and measures taken in our AI register (PrivacyPilot)
We help with the development of "AI competence"
The AI Regulation obliges companies, authorities, associations and NGOs to ensure an "adequate level of AI competence" among their employees. "AI competence" includes knowledge of the opportunities and risks of AI, the rights and obligations arising from the AI Regulation and the ability to competently use AI systems. As this applies to all AI systems, regardless of their risk classification, all entities that use AI systems are affected.
Due to the short transition period of six months, the obligation to ensure AI competence applies from February 2, 2025. Entities using AI systems should therefore act as quickly as possible.
Companies and public authorities should therefore take action now and ensure that their employees are AI-competent. Scheja & Partners offers training and eLearning courses on the topic of "AI competence". These not only provide theoretical background, but also practical knowledge for "daily business", with real-life examples from consulting practice.
Analysis of your AI systems and support with the implementation of your obligations
A core element of the AI Regulation is determining the specific obligations that apply to you. The first step is to check whether an "AI system" within the meaning of the AI Regulation exists at all. If this is confirmed, the "role" of the company, authority, association or NGO and the risk posed by the respective AI system must be determined. Specific obligations that must be observed when using AI are linked to this classification.
Due to the uncertainties, exceptions and delimitation difficulties in the legal assessment of AI systems, you should not conduct the assessment yourself, but leave it to experts. Scheja & Partners provides you with comprehensive advice on all legal issues relating to the AI Regulation, including the complex determination of role and risk, and thus shows you your individual obligations. This allows you to concentrate fully on your core business while ensuring that AI systems are used in compliance with the law and that no official sanctions (such as fines) loom.
The AI Act is the world's first comprehensive regulation of AI and aims to create a uniform legal framework for AI within the EU. It brings with it a large number of new regulatory requirements for companies, authorities, associations and NGOs. The AI Regulation takes a risk-based approach, attempting to strike a sensible balance between the benefits and risks of AI: AI systems are classified according to the risks they pose, and different obligations for the stakeholders are then linked to this risk classification. An overarching goal of the AI Regulation is to ensure "AI competence" ("AI literacy").
Not all AI systems pose the same risk, so the AI Act provides for a tiered system of obligations. As part of the classification, the risk posed by the respective AI system must first be determined. The AI Act uses the following classification:
- Prohibited AI practices: AI systems that are completely prohibited
- High-risk AI systems: AI systems that may be used but are subject to strict rules
- Certain AI systems (e.g. chatbots, deepfakes): AI systems to which special transparency obligations apply
Otherwise, the general rules apply (e.g. the obligation to ensure AI competence).
The AI Act prohibits the placing on the market and use of AI systems that pose an unacceptably high risk ("prohibited practices", Art. 5 of the AI Act). These include, among others:
- the use of manipulative techniques to influence individuals
- the exploitation of people's vulnerability
- social evaluation systems ("social scoring")
- inferring emotions in the workplace and in educational institutions
High-risk AI systems are a defined group of AI systems that are not prohibited but are subject to strict rules. These AI systems are defined in Art. 6 of the AI Regulation in conjunction with Annex I and Annex III.
For an AI system to be classified as high-risk in accordance with Annex I, two conditions must be met:
- the AI system must be used as a safety component of a product covered by the EU rules (regulations and directives) listed in Annex I, or must itself be such a product; such products include, for example, toys, medical devices and cars, and
- the product must be subject to a third-party conformity assessment in accordance with the EU rules listed in Annex I.
In addition, AI systems are considered high-risk if they fall under one of the areas listed in Annex III. These include, among others:
- AI systems in the area of critical infrastructure
- evaluative AI systems in the areas of education or employment
- evaluative AI systems in access to essential services (e.g. when taking out life and health insurance)
By way of exception, an AI system listed in Annex III is not considered high-risk if it does not entail a significant risk to the fundamental rights of the persons concerned. There is, however, an exception to this exception: if the AI system performs profiling of natural persons, it is considered high-risk despite the possible exemption. A case-by-case assessment must therefore always be carried out; the sketch below summarizes the decision logic.
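Purely as an illustration, the classification logic described above for Annex I and Annex III can be condensed into a short sketch. This is a simplified reading of Art. 6 of the AI Regulation, not legal advice; all field and function names are our own, and a real assessment must always be made case by case by qualified experts:

```python
# Simplified, illustrative sketch of the high-risk test under Art. 6 AI Act.
# All names are our own; the actual legal assessment is far more nuanced.
from dataclasses import dataclass

@dataclass
class AISystem:
    safety_component_of_annex_i_product: bool  # e.g. toys, medical devices, cars
    third_party_conformity_required: bool      # under the EU rules listed in Annex I
    falls_under_annex_iii_area: bool           # e.g. critical infrastructure, education
    significant_risk_to_fundamental_rights: bool
    performs_profiling: bool                   # profiling of natural persons

def is_high_risk(s: AISystem) -> bool:
    # Path 1 (Annex I): both conditions must be met.
    if s.safety_component_of_annex_i_product and s.third_party_conformity_required:
        return True
    # Path 2 (Annex III): listed area, subject to the exception below.
    if s.falls_under_annex_iii_area:
        if s.performs_profiling:
            # Exception to the exception: profiling is always high-risk.
            return True
        # Exception: no significant risk to fundamental rights.
        return s.significant_risk_to_fundamental_rights
    return False
```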
The AI Act affects various stakeholders who come into contact with AI systems. These primarily include "providers" (entities that develop AI systems or have them developed) and "deployers" (entities that use AI systems under their own responsibility), but also other actors such as importers and distributors. The distinction can be difficult in individual cases and requires detailed examination.
The AI Act follows a risk-based approach. The obligations vary depending on the risk classification of the AI system; they also vary depending on the role of the actor. A specific list of obligations can thus be derived from the combination of risk (e.g. high-risk AI) and role (e.g. deployer), as illustrated below. On top of this, special transparency obligations apply to certain AI systems (e.g. chatbots, deepfakes).
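As a rough mental model (and expressly not a statement of the law), the way a concrete list of obligations follows from the combination of risk class and role can be pictured as a lookup table. The entries below are illustrative examples only, not a complete or authoritative mapping:

```python
# Illustrative mental model: obligations keyed by (risk class, role).
# The entries are examples only, not an exhaustive or authoritative list.
obligations = {
    ("high-risk", "provider"): ["risk management system", "technical documentation",
                                "conformity assessment"],
    ("high-risk", "deployer"): ["use according to instructions", "human oversight",
                                "monitoring of operation"],
    ("transparency", "provider"): ["disclose that users interact with AI (chatbots)",
                                   "label deepfakes"],
}
# The general rules apply in every case.
baseline = ["ensure AI competence of staff"]

def duties(risk: str, role: str) -> list[str]:
    """Return the illustrative obligations for a (risk, role) combination."""
    return obligations.get((risk, role), []) + baseline

print(duties("high-risk", "deployer"))
```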
The AI Act provides for a tiered system of penalties. The risk classification of the AI system is also taken into account. In detail, the AI Act provides for the following levels of fines:
- Violations of the ban on certain AI practices are punishable by fines of up to EUR 35 million or 7 percent of the previous year's worldwide annual turnover, whichever is higher
- Other violations (e.g. of provider or deployer obligations regarding high-risk AI, or of transparency obligations) are punishable by fines of up to EUR 15 million or 3 percent of the previous year's worldwide annual turnover, whichever is higher
- Supplying incorrect or incomplete information to authorities is punishable by fines of up to EUR 7.5 million or 1 percent of the previous year's worldwide annual turnover, whichever is higher
For SMEs, by contrast, the lower of the two amounts (fixed sum or percentage) applies in each case; the sketch below illustrates the calculation.
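The "whichever is higher" rule (and, for SMEs, its "whichever is lower" counterpart) is simple arithmetic. A minimal sketch, using the figures stated above:

```python
# Sketch of the fine ceilings described above; amounts in EUR.
# "turnover" means the worldwide annual turnover of the preceding year.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations":     (15_000_000, 0.03),
    "incorrect_disclosure": (7_500_000,  0.01),
}

def max_fine(tier: str, turnover: float, sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    # Standard rule: the higher of the two amounts applies;
    # for SMEs, the lower of the two applies instead.
    pick = min if sme else max
    return pick(fixed, pct * turnover)

# Example: a company with EUR 1 billion turnover violating the ban on
# prohibited practices faces up to max(35m, 7% of 1bn) = EUR 70 million.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```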
The AI Act entered into force on August 2, 2024 and becomes fully applicable two years later, on August 2, 2026. However, there are some important deviations from this schedule:
- From February 2, 2025: the ban on AI practices posing unacceptable risk and the obligation to ensure AI competence
- From August 2, 2025: rules in certain areas, e.g. on general-purpose AI models, notifying authorities and sanctions
- From August 2, 2027 (three years after entry into force): certain rules on high-risk AI, namely high-risk AI systems falling under Annex I of the AI Regulation
Deployers of high-risk AI systems that have already been placed on the market or put into operation, or will be by August 2, 2026, enjoy grandfathering for the time being. However, this protection ends if the design of the AI system concerned is subsequently "significantly" changed. "Prohibited practices" (AI practices that are prohibited outright) are not covered by this rule: for these, the comprehensive ban applies from February 2, 2025.