
European AI Act: how does it affect marketing?

02 Apr, 2025
Ellen Göppl

You may embrace the hype around AI, may want to beam it up to Mars, or may use it where it makes sense. Whatever the case may be, using AI comes with a certain risk – and this risk does not just affect quality. This is where EU Regulation 2024/1689, laying down harmonised rules on artificial intelligence, comes in. You may wonder what the risk of AI translation could possibly be – an anecdote further below provides an answer.

The intention of the AI Act is to facilitate “the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment”. To those of you who worry that the AI Act will lead to Germany and the EU being left behind in the race to the top: the law is not about sweeping prohibitions, but about the responsible application and development of AI tools.

The European legal framework sets out four risk levels for AI systems

Risk class        | Description                      | Regulation                | Example
Unacceptable risk | Violation of fundamental rights  | Prohibited                | Social scoring systems
High risk         | High potential for harm          | Far-reaching requirements | Credit checks, CV screening
Limited risk      | Interaction with humans          | Transparency obligation   | Chatbots
Low risk          | All other systems                | No requirements           | Predictive maintenance

Source (adapted): https://www.ihk.de/darmstadt/produktmarken/digitalisierung/ai-act-die-eu-reguliert-ki-6261116

The EU Regulation subjects AI systems that pose a high risk to the general public to strict rules – or even prohibits them. The majority of AI systems do not fall into this category, so they face no restrictions, or merely a transparency obligation. While the Regulation will not come into full force until 2 August 2026, some of its rules apply sooner:

  • AI systems with unacceptable risk will be prohibited after six months, i.e. from February 2025.
  • The requirements for general purpose AI models will apply after 12 months, i.e. from August 2025.

So what does this mean for marketing?

  1. Transparency when it comes to AI-generated content 
    • Marking obligation: From August 2026 onwards, “deep fakes” must be marked. These are AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.
  • AI-generated content that is not a “deep fake” does not necessarily have to be marked. Nevertheless, good practice dictates marking such content in order to earn your customers’ trust.
  2. Quality assurance
    • All AI-generated content should be checked by a competent person prior to publication. This also applies to translations.
    • You should always ensure that the AI has sufficiently considered your company’s context and your target group.
  3. Adaptation to brand voice or corporate identity
    • Edit any AI-generated content to ensure that the look and feel is that of your brand identity.
    • We recommend using a style guide to navigate rules and requirements for your company’s content. Please contact us if you would like help writing a style guide.
  4. Data protection and GDPR conformity
    • When using AI to analyse personal data or to personalise an advertising campaign, you must always adhere to the General Data Protection Regulation (GDPR). This means that users have to give their consent, and that personal data may not be transferred to third countries with insufficient data protection rules.
    • When using AI to make automated decisions about consumers (e.g. dynamic pricing), consumers have the right to be informed and may object.
  5. Training and information
    • Create clear guidelines for using AI in content creation.
    • Provide training on the relevant rules and regulations for your team. This applies to marketing tools such as chatbots, predictive analytics and content management tools, as well as to machine translation.

So what does that mean for AI-generated translations?

The AI Act does not class machine translation tools as high-risk systems. Nevertheless, certain use cases can be considered high-risk if they affect important aspects of life. Consider the following case: In May 2024, a train travelling through Bavaria was stopped and evacuated, and a man was arrested, following an alleged bomb threat. Federal police investigations later revealed that there had never been a threat – a passenger had simply wanted to ask a harmless question and had used an app on his mobile phone to translate it from Arabic into German. AI translation errors – which are very common – raise the question of liability. A professional, human translation is covered by the translator’s liability, which is why professional translators always carry professional liability insurance.

In summary: before using AI tools, it is important to weigh the risks. If you are in doubt about a machine-generated translation, get in touch – we provide professional guidance. Simply give us a call or send an e-mail.

Sources:

https://digital-strategy.ec.europa.eu/de/policies/regulatory-framework-ai (AI translation)

https://www.bundesregierung.de/breg-de/aktuelles/ai-act-2285944

https://www.dihk.de/de/themen-und-positionen/wirtschaft-digital/dihk-durchblick-digital/europaeisches-gesetz-ueber-kuenstliche-intelligenz-63750

https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX:32024R1689


PESCHEL COMMUNICATIONS GmbH
Wallstraße 9
79098 Freiburg
Germany
