
What the EU AI Act requires for AI agent logging

Apr 19, 2026  Twila Rosenbaum

The EU AI Act, a comprehensive document spanning 144 pages, lays out essential logging requirements crucial for developers of AI agents. These requirements are detailed across four interconnected articles, which highlight the importance of compliance in the evolving landscape of AI regulation.

High-Risk Classification of AI Agents

While the EU AI Act does not explicitly mention "AI agents," it focuses on the functions performed by these systems. If your AI agent is involved in high-stakes decision-making—such as assessing credit applications, filtering job applications, determining healthcare benefits, pricing insurance, or triaging emergency calls—it falls under Annex III and is categorized as high-risk.

According to Article 6(3), AI agents may have a pathway to avoid high-risk classification if they do not significantly influence decision outcomes. However, this can be challenging to demonstrate, especially for agents that autonomously interact with various tools and act based on the results they obtain.

It’s important to note that general-purpose AI models have distinct obligations outlined in Chapter V of the Act. While the model itself is not labeled high-risk, the system built on it will be once it is deployed in a high-risk context. The provider of the model retains obligations from Chapter V, while the integrator assumes the high-risk obligations as per Article 25.

Key Articles for Logging Requirements

Article 12 mandates that high-risk AI systems must be capable of automatically recording events (logs) over the system's lifetime. The key terms here are "automatically," meaning logs must be generated by the system itself without manual intervention, and "lifetime," meaning logging is required from deployment until decommissioning, not just for the current version.

Article 12(2) outlines three log categories that must be documented: events that may present risks or significant modifications, data for post-market monitoring, and operational monitoring data for deployers. The regulation does not prescribe a specific format or enforce particular fields, focusing instead on the three stated purposes.
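As a concrete illustration, the three purposes can be mapped onto a structured event record. This is a minimal sketch, not a prescribed schema: the Act names no format, so the field names, categories, and JSON-lines layout below are all assumptions.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Hypothetical event record covering the three Article 12(2) purposes:
# risk/modification events, post-market monitoring data, and operational
# monitoring data for deployers. Field names are illustrative, not mandated.
@dataclass
class AgentLogEvent:
    event_id: str
    timestamp: float   # when the event occurred (epoch seconds)
    category: str      # "risk", "post_market", or "operational"
    action: str        # e.g. "tool_call", "decision", "model_update"
    detail: dict       # free-form payload describing the event

def record_event(category: str, action: str, detail: dict) -> str:
    """Serialize one event as a JSON line, generated by the system itself."""
    event = AgentLogEvent(
        event_id=str(uuid.uuid4()),
        timestamp=time.time(),
        category=category,
        action=action,
        detail=detail,
    )
    return json.dumps(asdict(event), sort_keys=True)

line = record_event("risk", "decision",
                    {"input": "credit_application_123", "outcome": "declined"})
```

Writing one self-describing JSON line per event keeps the log append-only and easy for a deployer to collect and interpret, which also serves the Article 13 documentation duty.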

Furthermore, Article 13 emphasizes the need for clear documentation on how deployers can collect and interpret these logs. This serves as a technical integration guide for the logging system, rather than a compliance manual.

Articles 19 and 26 establish a minimum retention period of six months for logs. Financial services firms may incorporate AI logs into their existing regulatory documentation, while other sectors must retain logs for at least six months, with possible extensions based on specific sector regulations.
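A retention policy can be enforced with a simple window check. The sketch below assumes the six-month minimum from Articles 19 and 26; the 183-day figure is an approximation of "six months" chosen here for illustration, and sector rules may require a longer window.

```python
from datetime import datetime, timedelta, timezone

# Assumed minimum window: six months, approximated as 183 days.
# Sector-specific regulation may extend this; it never shortens it.
RETENTION = timedelta(days=183)

def must_retain(log_created: datetime, now: datetime) -> bool:
    """True while a log entry is still inside the minimum retention window."""
    return now - log_created < RETENTION

created = datetime(2026, 8, 2, tzinfo=timezone.utc)
inside = must_retain(created, created + timedelta(days=100))   # True
outside = must_retain(created, created + timedelta(days=200))  # False
```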

The Limitations of Standard Logging

AI agents interact with various tools, delegate tasks to sub-agents, and use large language model (LLM) responses to generate final outputs. Standard application logging can capture these interactions; capturing them is not the hard part.

The issue arises when regulators review logs six months later and want proof that the logs have not been altered. Traditional application logs sit on infrastructure the provider itself controls, so they can be modified or replaced without leaving any trace.

Although Article 12 does not explicitly state that logs must be "tamper-proof," if logs can be altered without detection, their evidentiary value diminishes significantly, which is particularly problematic for high-risk systems.

This challenge prompted the exploration of cryptographic signing methods for agent logs. This approach involves signing each action taken by the agent with a key that the agent itself does not possess, linking each signature to the previous one, and securely storing the receipt in an inaccessible location. Any alterations to the logs would disrupt this chain, providing a clear indication of tampering.
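The chaining idea described above can be sketched in a few lines. This is a minimal illustration assuming an HMAC key held by a separate signing service that the agent cannot read; a production design would use asymmetric signatures and external receipt storage, neither of which is shown here.

```python
import hashlib
import hmac
import json

# Illustrative key: in practice this lives in a signing service the agent
# has no access to, so the agent cannot forge or re-sign entries.
SIGNING_KEY = b"held-by-external-signer-not-the-agent"

def sign_entry(entry: dict, prev_sig: str) -> str:
    """Sign one log entry, binding it to the previous signature."""
    payload = prev_sig + json.dumps(entry, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_chain(entries: list[dict], sigs: list[str]) -> bool:
    """Recompute every signature; any edit breaks the chain from that point on."""
    prev = ""
    for entry, sig in zip(entries, sigs):
        if not hmac.compare_digest(sign_entry(entry, prev), sig):
            return False
        prev = sig
    return True

entries = [{"action": "tool_call", "tool": "search"},
           {"action": "final_answer", "text": "approved"}]
sigs, prev = [], ""
for e in entries:
    prev = sign_entry(e, prev)
    sigs.append(prev)

assert verify_chain(entries, sigs)       # untampered chain verifies
entries[0]["tool"] = "other"             # alter the first entry...
assert not verify_chain(entries, sigs)   # ...and verification fails
```

Because each signature covers the previous one, an attacker who edits an old entry would have to re-sign every later entry, which is impossible without the key.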

The Current State of Standards

As of now, there is no finalized technical standard for the logging requirements outlined in Article 12. Two drafts worth monitoring are prEN 18229-1, which addresses AI logging and human oversight, and ISO/IEC DIS 24970, which focuses on logging for AI systems. Neither has been completed.

Organizations are facing a regulation that defines desired outcomes without detailing the necessary implementation methods. Teams that proactively establish effective logging systems will be better positioned when formal standards are introduced, while those that delay may face challenges in adapting under pressure.

Compliance Deadlines and Potential Penalties

Obligations outlined in Annex III are set to take effect on August 2, 2026. Though the European Commission proposed a delay through the Digital Omnibus package last November, which could push the deadline to December 2027, neither the Council nor Parliament has finalized this change, leaving August 2026 as the current enforceable date.

Failure to comply by this deadline could result in penalties of up to 15 million euros or 3% of worldwide annual turnover, whichever is higher. While the statute applies uniformly to all entities, Article 99 specifies that penalties must be proportionate and dissuasive, allowing national authorities to consider a company’s size and economic viability. Consequently, startups and SMEs might face lower fines than the maximum allowable amount.

Critical Questions for Developers

  • Can your system automatically generate logs for every decision made?
  • Are your logs secured against tampering?
  • Can you maintain them for six months in a format that regulators can access?

If the answer to any of these questions is no, the deadline is approaching faster than anticipated.


Source: Help Net Security News

