
Contact form & contact person

A direct line to our experts - via a convenient form or targeted contact by specialist area.

We look forward to your message!

Info@consersol.de
+49 202 45 92 97 35
Location & Directions
You will find us in the heart of Germany - centrally located and easily accessible. On-site appointments possible by arrangement.
Opphofer Straße 20
42107 Wuppertal

Frequently Asked Questions

1. What is the EU AI Act?

The EU AI Act is a regulation of the European Union that has been in force since August 2024 and establishes uniform rules for the use of artificial intelligence in all member states. Its goal is to enable innovation while ensuring the safety, protection of fundamental rights, and transparency of AI systems. The regulation follows a risk-based approach, classifying AI systems into different risk levels. The higher the risk to health, safety, or fundamental rights, the stricter the requirements for development, documentation, monitoring, and human oversight.

2. What is the main purpose of the EU AI Act?

The main purpose of the EU AI Act is to create a harmonized legal framework for the use of artificial intelligence within the European Union. Its aim is to promote innovation while ensuring that AI systems are trustworthy, safe, and compatible with people’s fundamental rights. The regulation seeks to prevent AI technologies from causing harm by endangering health, safety, or individual rights. To achieve this, the EU AI Act applies a risk-based approach: the greater the potential risk of an AI system, the stricter the requirements regarding its development, documentation, monitoring, and transparency. In this way, it aims to balance the protection of citizens with the opportunity to advance AI innovation in Europe.

3. Who is affected by the EU AI Act: only providers or also users of AI?

The EU AI Act affects not only providers of AI systems but also their users. Providers are responsible for developing the systems, carrying out conformity assessments, and placing them on the market. Users, in turn, are obligated to operate these systems correctly, monitor them, train employees accordingly, and report serious incidents. They must also comply with transparency requirements, for example where the use of AI must be disclosed. Importers, distributors, and resellers fall under the EU AI Act as well, since they must ensure that only compliant systems are made available on the European market. Finally, the regulation applies not only within the EU but also to companies outside Europe if their AI systems are used in the EU or if their outputs are used there.

4. What documentation requirements apply to high-risk systems?

For high-risk AI systems, the EU AI Act requires comprehensive and traceable documentation. Providers must prepare technical documentation that describes all of the system's essential characteristics. It must clearly state how the system was developed, which training, validation, and test data were used, and what measures were taken to identify and minimize risks. The algorithms used, the system architecture, and the intended use cases must also be documented. In addition, an operational log of the system must be maintained: this logging makes it possible to trace how the system reached its decisions in practice. Users are given access to the relevant parts of this documentation so they can operate and monitor the system properly, and the documentation must be structured so that authorities can inspect it at any time as part of conformity checks or market surveillance.
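The regulation does not prescribe a specific log format, so the following Python snippet is only a minimal sketch of what traceable decision logging could look like in practice: structured, timestamped records appended to an operational log. All names in it (the log file, the field names, and the example credit-scoring system) are illustrative assumptions, not requirements of the EU AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: field names and log format are assumptions,
# not a format mandated by the EU AI Act.
logger = logging.getLogger("ai_decision_log")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("decisions.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_decision(system_id: str, model_version: str, input_summary: str,
                 decision: str, confidence: float, operator: str) -> None:
    """Append one traceable decision record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,  # summary or hash, not raw personal data
        "decision": decision,
        "confidence": confidence,
        "human_overseer": operator,
    }
    logger.info(json.dumps(record))

# Hypothetical usage: one record per automated decision
log_decision("credit-scoring-demo", "1.4.0", "feature hash ab12",
             "declined", 0.87, "j.smith")
```

Keeping each record self-contained (timestamp, model version, decision, and the responsible human overseer) is one way to support the traceability and human-oversight obligations described above, but the concrete retention periods and content should always be defined together with legal counsel.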

5. Who monitors compliance with the regulations?

Compliance with the EU AI Act is monitored through cooperation between national authorities and a central European body. Each EU member state designates market surveillance and enforcement authorities responsible for checking compliance locally. These authorities can audit companies, request technical documentation, and, if necessary, prohibit the use of a non-compliant or high-risk AI system. At the European level, there is also a European AI Office, established in 2024. This office coordinates the uniform implementation of the regulation, oversees particularly powerful general-purpose models, and supports cooperation between member states. It can issue guidelines and standards to ensure consistent interpretation and application of the rules across the EU.

6. What opportunities does the EU AI Act offer for businesses and innovation in Europe?

The EU AI Act creates a harmonized legal framework across all member states, thereby reducing legal uncertainty. Clear requirements regarding safety, data quality, and transparency strengthen the trust of customers, partners, and regulators. This can give companies a competitive edge, as compliant and explainable AI solutions are more likely to be preferred. At the same time, the regulation opens new opportunities for innovation by providing a reliable framework for developing and deploying AI in Europe. Companies that adapt early can position themselves as leaders in responsible AI and open up new business models.