EC Clarifies AI Literacy Requirements Under the AI Act

Published on May 22, 2025

The European Commission (EC) has published a FAQ on AI literacy to clarify what artificial intelligence literacy means and what best efforts organisations subject to the AI Act must undertake. The FAQ responds to Article 4 of the AI Act, which since February 2025 has required all providers and deployers of AI systems operating in the EU to ensure a “sufficient level of AI literacy” among employees and relevant third parties, such as contractors.

What the AI literacy requirements mean for your organisation

Under the AI Act, organisations that develop AI systems or put them into use (i.e., providers and deployers) must ensure that their employees, contractors, and any other persons acting under the organisation’s authority possess a “sufficient level of AI literacy”. This is not merely a regulatory formality; it is a safeguard to ensure that individuals working with or affected by AI systems understand how those systems function, how they are used, and what risks they pose.

While the Act does not impose mandatory exams or certifications, it requires organisations to implement tailored training and guidance. This training must reflect each individual’s technical background, professional experience, and the specific AI systems that the individual uses.

A one-size-fits-all approach won’t suffice. The level and depth of training should be proportionate to the risk associated with the AI application. For example, where high-risk systems are used, as is often the case in healthcare, recruitment, and financial services, organisations must provide more comprehensive and context-specific training. In such cases, effective human oversight is fundamental, as preventing harmful outcomes can depend on informed human judgment. Relying on user manuals or automated prompts alone is unlikely to meet the standard, especially where safety, fundamental rights, or legal obligations are involved.

Organisations must assess whether everyone under their authority, especially technical and operational staff, fully understands the following:

  • How AI systems function;
  • What potential harms could arise from misuse or misunderstanding;
  • What actions can be taken to prevent or mitigate such risks;
  • How legal, ethical, or compliance concerns intersect with their daily AI use.

In short, AI literacy goes far beyond knowing how to operate an AI system. It includes the ability to question outputs, recognise limitations, and act responsibly.  

Regulatory flexibility

The AI Act allows a degree of flexibility in how the AI literacy obligations are met. The newly established AI Office, which is responsible for ensuring consistent implementation of the Act across the EU, states that there is no single best practice for meeting these obligations. Organisations are expected to tailor their training strategies to the specific AI systems in use and the roles of their employees.

This means that different training approaches for, say, IT specialists and administrative personnel are entirely appropriate. It can also be assumed that experienced AI developers or data scientists already possess baseline AI literacy. Given the rapid pace of AI development, however, even highly skilled technical staff may require regular updates on new functions, regulatory developments, and emerging risks.

Compliance and enforcement

Organisations are urged to clearly document their training and awareness programmes to demonstrate compliance. Companies should set training policies, procedures, and requirements for every function connected to an AI system, so that the training provided is relevant to the role of each employee and third party.

How organisations document compliance with AI literacy obligations may differ, but the obligation itself is clear: businesses that develop, deploy, or use AI systems should keep records of the training sessions, workshops, and other initiatives designed to strengthen AI literacy within the organisation.

While formal certification is not required, failure to meet AI literacy obligations could lead to regulatory enforcement or even private litigation by affected parties. National market surveillance authorities, which are expected to begin enforcing the AI Act in August 2026, will oversee compliance in coordination with the AI Board, ensuring that penalties and remedial measures remain proportionate to the infringement.

Global impact

The AI literacy obligations reach beyond the EU’s borders: they apply to all AI systems placed on the EU market or used within its jurisdiction, regardless of where the provider or deployer is established. This extraterritorial scope underscores the EU’s ambition to shape globally aligned standards and promote responsible innovation while safeguarding against harm. For international businesses, AI literacy is therefore a strategic necessity, not merely a compliance exercise. As AI becomes embedded in critical sectors such as finance, healthcare, and mobility, the EU is sending a clear signal: technological advancement must go hand in hand with ethical human oversight.

Visit CMS’s AI Insight pages for more on responsible AI use, or review the CMS AI Act Q&A.

For more information on the AI Act and how its specific obligations may affect your organisation, contact your CMS client partner or these CMS experts: Tom Jozak, Pieter Jordans.