Leupold Legal offers tailored legal advice on your compliance with the EU AI Act. This includes advice on the correct classification of your AI systems and, for high-risk AI systems in particular, legal cover for
Leupold Legal reduces product liability risks for your AI systems by
Leupold Legal advises you on IP compliance when
Leupold Legal creates legally watertight end customer contracts and licence conditions (EULA) for the use of your AI as a Service (AIaaS) or on-premises solutions and ensures that your sales model is GDPR compliant.
Whether AI as a Service (AIaaS) or AI on premises: introducing AI systems into a company to support its own business processes is always also an IT project and should be planned and executed as such. At the same time, it places special demands on the procurement and legally compliant use of the AI systems.
Leupold Legal supports you right from the planning phase of your AI implementation project and
If you have individual requirements for the AI model you want to use in your company but cannot find an AI system whose model meets them, you can have a suitable AI model developed or have an existing AI model adapted for your purposes.
In either case, you need a project contract that ensures that you will receive an AI model that meets your requirements within the available time and budget. Leupold Legal will provide you with the necessary support for this as well.
Yes, the EU AI Act (Regulation (EU) 2024/1689) establishes a risk-based framework for AI systems and prohibits certain practices in the field of AI. High-risk AI systems, such as remote biometric identification systems or those used for education, vocational training or employment, are subject to particularly strict safety requirements under this Regulation.
In the European Union, Directive (EU) 2024/2853 on product liability provides that manufacturers, and in certain cases importers and other economic operators, are liable for personal injury or damage caused by defective software, including software used in AI systems. In addition to this strict liability (i.e. liability without fault), the national tort laws of EU Member States may also entail fault-based liability for personal injury or damage caused by a defective product. Although the proposal for a European AI Liability Directive has been withdrawn, the EU AI Act notably establishes duty-of-care standards for high-risk AI systems that can inform fault-based liability under the national laws of EU Member States.
According to German and EU copyright law, only human authors can produce works that are eligible for copyright protection. Consequently, works generated by AI without human creative input are not protected. However, if a human makes sufficient creative choices and instructs the AI accordingly, copyright protection may apply to the resulting work. Similarly, in a landmark decision in 2024, the German Federal Supreme Court clarified that only a natural person can be an inventor under the German Patent Act. Therefore, a machine system consisting of hardware or software cannot be designated as the inventor, even if it has artificial intelligence functions (BGH GRUR Int. 2024, 1167 – ‘DABUS’). The U.S. Patent and Trademark Office had previously rejected a patent application for the DABUS system for the same reason, and the European Patent Office later ruled similarly.
Anyone processing personal data must comply with the General Data Protection Regulation (GDPR), and this includes users of AI systems. Recital 69 of the EU AI Act states that the right to privacy and protection of personal data must be guaranteed throughout the AI system’s entire lifecycle. Consequently, the GDPR’s principles of data minimization and data protection by design and by default apply when personal data is processed. Data privacy can notably be affected by AI if personal data is processed for training AI systems before or after their release, or for generating their output. In both cases, a legal basis for processing the data is required. The Conference of Independent Data Protection Authorities of the Federal Government and the Länder (DSK) has published useful guidance on artificial intelligence and data protection in Germany which provides further information on this topic.
The EU AI Act addresses bias, particularly in relation to high-risk AI systems, and stipulates that training, validation and testing data sets must be subject to data governance and data management procedures that include appropriate measures to detect, prevent and mitigate potential bias.
The Charter of Fundamental Rights of the European Union states that ‘any discrimination based on grounds such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited’.
In Germany, the General Equal Treatment Act aims to prevent or eliminate discrimination on the grounds of race or ethnic origin, gender, religion or belief, disability, age or sexual identity.
Under German law, contracts require matching declarations of intent (offer and acceptance) from the parties involved. If an AI system is used by a human being as a tool, the resulting contract is valid. Whether fully autonomous AI systems without direct human involvement can enter into legally binding contracts remains controversial, as AI systems do not have legal capacity. However, if AI systems are used as tools to achieve results within defined parameters, the prevailing opinion is that users must bear responsibility for the declarations of intent made with such systems.
Employers in Germany must ensure that AI systems used in employment decisions comply with the EU AI Act, employment law and data protection laws. This includes transparency, non-discrimination, and involving works councils when introducing such technologies.
Yes. According to Article 4 of the EU AI Act, “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
In any event, all C-level managers and employees of organisations intending to deploy AI systems to support their business processes should receive basic training enabling them to understand how these systems function, their potential use cases, and their limitations and risks, so they can operate them safely. Depending on the staff categories concerned and the nature of the AI system used, this initial training should be supplemented with specialized training in technical, legal, and ethical topics.
The EU AI Act stipulates that AI systems intended for direct interaction with natural persons must be designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious to an informed, attentive and reasonable user given the circumstances and context of use. Providers of AI systems, including general-purpose AI systems that generate synthetic audio, visual, video or text content, must ensure that the outputs of these systems are labelled in a machine-readable format and are identifiable as being artificially generated or manipulated. Further transparency obligations apply to emotion recognition and biometric categorization systems.
Additionally, the GDPR mandates transparency about automated decision-making processes, including providing meaningful information about the logic involved.
In Germany, trade secrets relating to AI, including algorithms and training data, are protected by the Trade Secrets Act (GeschGehG), which transposes Directive (EU) 2016/943 into national law. To qualify for protection, reasonable steps must be taken to keep the information confidential. In addition to this statutory protection, trade secrets can be safeguarded through appropriate non-disclosure agreements (NDAs), which oblige the receiving party to maintain the secrecy of the information provided by the owner of the trade secret.