AI, Accountability, and Safe Data Practices: What’s Taking Shape Across EMEA

Published on Nov 17, 2025

The WLG Privacy & Data Protection Group convened in October for an EMEA-focused discussion centered on three fast-moving areas: non-material damage claims under the GDPR, Microsoft's contractual and technical data protection regime, and how AI models can be trained lawfully under European privacy law. The session offered a clear view of how expectations around accountability and operational rigor are shifting across the region.

Non-material damage claims: small numbers, big exposure

Reemt Matthiesen (CMS Germany) outlined where Article 82 stands today. Courts across Europe have confirmed that non-material harm such as anxiety, anger, or loss of control over data can qualify as compensable damage, and that no de minimis threshold of seriousness applies, although claimants must still prove actual harm rather than point to a mere infringement. While individual payouts may be modest, scale risk is significant in breach scenarios. Because controllers bear the burden of proving they are not responsible for the event giving rise to the damage, documentation has become central to both regulatory scrutiny and litigation strategy. Breach notification decisions now require careful judgment, as notices can satisfy legal requirements while also prompting claims.

Microsoft’s DPA in practice: one framework, local wrinkles

Felix Glocker (CMS Germany) walked through Microsoft’s global Data Protection Addendum and Product Terms. The structure is intentionally uniform across online services, professional services, and certain cloud-connected software features, with product-specific rules set out in public terms. Two practical issues stood out: preview and beta features frequently sit outside full DPA coverage, and some jurisdictions impose standard clauses that can be difficult to append to global agreements. For technical assurances, the Service Trust Portal and Microsoft Learn remain essential resources.

AI model training under the GDPR: design for explainability and anonymity

Lukas Stelten (CMS Germany) focused on training and fine-tuning AI models with enterprise data. Legitimate interests will often be the most workable lawful basis, with consent mainly reserved for special-category data. That choice requires demonstrating that training aligns with data subjects' reasonable expectations, conducting a defensible balancing test, and addressing purpose limitation when repurposing historic datasets. Regulators expect a high level of explainability and will scrutinize legitimate interest assessments (LIAs) and data protection impact assessments (DPIAs), including mitigation against data leakage. Designing toward a model that qualifies as anonymous for GDPR purposes significantly reduces the operational impact of post-hoc objections.

Regional perspectives and what clients will ask next

Members compared developments across the United States, Latin America, and APAC. U.S. programs continue to be shaped by state privacy laws, with many teams using EU standards as their governance benchmark. Brazil’s requirements for transfers and standard clauses are influencing negotiations with global vendors, while APAC jurisdictions such as Vietnam are adopting damages frameworks that mirror European trends. Clients are increasingly focused on three areas: the lawful basis for AI training, vendor alignment with local requirements, and incident response plans that reflect Article 82 risk.

What firms can do now

Five practical steps stand out:

- Treat documentation as a core output, not an administrative step.
- Align AI training practices with a repeatable LIA and DPIA flow.
- Confirm whether cloud preview features are in scope before enabling them.
- Refresh breach notification criteria to reflect the current posture on non-material damages.
- For cross-border matters, maintain a short list of jurisdiction-specific clause requirements that pair effectively with global DPAs.