
AWS Rex Is a Big Step for Agentic AI Security, But Not the Final Layer

May 14, 2026  Twila Rosenbaum

On May 4, 2026, AWS open-sourced a critical piece of infrastructure that should fundamentally change how security teams architect agentic AI deployments. The project, Trusted Remote Execution (Rex), gates every system operation an AI-generated script attempts against a Cedar policy defined by the host owner rather than the agent. The runtime achievement is significant, but the data security problem it leaves untouched is the most critical gap that security leaders must now address.

Agentic AI systems are capable of autonomously performing complex tasks by generating and executing code. This capability introduces serious security risks: models can produce hallucinated code, be subject to prompt injection attacks, or misinterpret instructions in overly broad ways. Developers have acknowledged these risks, with OpenAI stating that prompt injection is unlikely to ever be fully solved, and Anthropic agreeing that it remains a fundamental challenge. The problem is not hypothetical, and the industry has been searching for a practical security architecture.

What AWS solved with Rex

The mechanics of Rex are elegantly designed. Scripts run in Rhai, a lightweight embedded language with no built-in access to the operating system. Every read, write, or open operation is intercepted by a Rex SDK call, which evaluates a Cedar policy before permitting the underlying system call. If the policy denies the action, the script receives an ACCESS_DENIED_EXCEPTION and the operation never reaches the kernel. This ensures that even if an agent generates malicious or erroneous code, the host remains protected.
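The gating pattern described above can be sketched in a few lines of Python. This is an illustration of the general pattern only, not the Rex SDK: the `Policy`, `AccessDeniedException`, and `guarded_read` names are assumptions for this sketch, and the real implementation runs inside a Rhai interpreter with Cedar evaluation.

```python
# Illustrative sketch of the Rex gating pattern: every host operation is
# checked against a host-owner-defined policy before the underlying call runs.
# All names here are assumptions, not the actual Rex SDK API.

class AccessDeniedException(Exception):
    """Raised when the policy denies an operation; the syscall never runs."""

class Policy:
    def __init__(self, allowed):
        # allowed: set of (action, resource-prefix) pairs defined by the
        # host owner, versioned separately from any script.
        self.allowed = allowed

    def permits(self, action, resource):
        return any(action == a and resource.startswith(prefix)
                   for a, prefix in self.allowed)

def guarded_read(policy, path):
    # The gate sits between the script and the kernel: a deny means the
    # operation never reaches the operating system at all.
    if not policy.permits("read", path):
        raise AccessDeniedException(f"read {path} denied by host policy")
    with open(path) as f:  # only reached when the policy allows it
        return f.read()
```

The key property the sketch preserves is the inversion of trust: the script never sees the policy, and a generated script that attempts an out-of-scope read fails before any kernel interaction.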

A critical architectural decision is that the script and the policy are versioned separately. The host owner – not the developer of the script, and not the agent that generated it – defines what is allowed. This inversion of trust is significant. Instead of trying to bound what the agent generates, Rex bounds what any host operation the agent invokes can actually accomplish. The pattern is the right one: it treats prompts as instructions rather than access controls, and it treats the agent's claimed identity as something to be verified rather than trusted. Vendor security questionnaires, internal architecture reviews, and audit evidence packages can now reference a working open-source implementation.

Rex specifically targets three failure modes: hallucinated code, prompt injection, and overly eager task interpretation. These are well-documented attack classes that the industry has struggled to contain. The runtime layer is now effectively solved for system-level operations. Security teams should adopt this pattern immediately.

What AWS did not solve – the data security gap

Now for the part that should change how every security and compliance leader reads this announcement. Rex governs system calls. It does not govern data security. The distinction is critical. Protecting the host from the agent is different from protecting the data from misuse. Passing a runtime audit is different from passing a regulatory one.

A Cedar policy can permit a file read operation on a customer records file. That is the correct policy at the kernel layer. But at the data layer, a much more nuanced set of questions must be answered:

  • Is this read happening on behalf of a specific human user with the right authorization, or is the agent acting on its own claimed identity?
  • Is the requester operating within the scope of the engagement that authorized access to this data in the first place?
  • Are the records returned minimum-necessary for the task, or is the agent pulling more context than the prompt requires?
  • Are any records subject to a deletion request, a legal hold, or a jurisdictional restriction that has not yet propagated to the file system?
  • Is the access being logged in a tamper-evident form with sufficient detail to reconstruct who authorized what, potentially years later when the model has been retired?
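The questions above are data-semantics checks, and each one can be expressed as a predicate evaluated per record rather than per system call. The sketch below shows one hedged way such a data-layer authorizer might look; the request and record fields (`human_principal`, `engagement_id`, `legal_hold`, and so on) are assumptions invented for illustration, not any product's schema.

```python
# Illustrative data-layer checks mirroring the questions above.
# The request/record field names are assumptions for this sketch.

def authorize_record(request, record):
    """Return True only when every data-layer condition holds for one record."""
    if request.get("human_principal") is None:
        return False  # agent acting on its own claimed identity
    if record.get("engagement_id") != request.get("engagement_id"):
        return False  # outside the engagement that authorized access
    if record.get("legal_hold") or record.get("deletion_requested"):
        return False  # record under legal hold or pending erasure
    if record.get("jurisdiction") not in request.get("allowed_jurisdictions", []):
        return False  # jurisdictional restriction
    return True

def minimum_necessary(records, needed_fields):
    # Data minimization: return only the fields the task actually requires.
    return [{k: r[k] for k in needed_fields if k in r} for r in records]
```

Note that none of these predicates can be phrased as a rule about file reads or system calls, which is exactly why a kernel-level gate cannot enforce them.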

Rex does not answer these questions. Cedar policies on system calls cannot answer them. They live one layer below the runtime, where the data lives, and that layer is where data security must be enforced. Without it, an organization can run every agentic workload through Rex, prove that no script ever exceeded its host permissions, and still be unable to demonstrate to a regulator that the right person authorized the right access to the right data for the right purpose.

The operational and legal implications are profound. GDPR Article 5 demands purpose limitation, data minimization, storage limitation, and accountability. HIPAA's minimum-necessary standard requires controls on which data the agent is permitted to access, not just which system calls the script is allowed to make. CMMC Level 2 access control families assume enforced authorization for AI access to controlled unclassified information. None of these frameworks is satisfied by runtime gating alone, and none is addressed by Rex.

Real-world data highlights the gap

The Kiteworks Data Security and Compliance Risk 2026 Forecast Report provides concrete numbers that illustrate the challenge. The report found that 63% of organizations cannot enforce purpose limitations on AI agents. 60% cannot quickly terminate a misbehaving agent. 55% cannot isolate AI systems from broader network access. 54% cannot validate AI inputs. Some of these gaps are exactly what Rex closes at the runtime layer – termination, isolation, input validation. However, purpose limitation is a data-semantics control that cannot be enforced on a system call; it must be enforced on the data itself.

Only 43% of organizations have a centralized AI data gateway. The remaining 57% run agentic AI through fragmented or partial data-layer controls. Adding Rex to that 57% closes the runtime gap but leaves the data gap untouched. The audit-defensible layer is not the kernel; it is the data.

The Five Eyes joint advisory on agentic AI released in late April and early May 2026 names five risk categories: privilege, design and configuration, behavior, structural, and accountability. Rex addresses parts of two categories. It does not address structural risks across multi-agent systems, and crucially, it does not address the accountability category – the one that auditors and regulators will care about most. Accountability requires evidence about who accessed what data on whose behalf for what purpose. A system call audit log does not produce that evidence. A data-layer audit log does.

Implementing a robust data-layer security architecture requires more than just policy definition. Organizations must deploy a centralized AI data gateway that sits between the agentic workload and the data sources. This gateway performs attribute-based access control (ABAC) by evaluating the user context, data classification, and purpose of access in real time. It also enforces data minimization by masking or filtering sensitive fields that are not needed for the specific task. All access decisions are logged in a tamper-evident audit trail, often using append-only storage or blockchain-backed records to ensure long-term integrity. Such a gateway can integrate with Rex at the runtime layer, where Rex handles host-level permissions and the gateway handles data-level permissions.
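A minimal sketch of such a gateway, under stated assumptions, follows. The class, method, and parameter names are invented for illustration; the tamper-evident trail is modeled here as a simple SHA-256 hash chain, one common way to make an append-only log edit-detectable.

```python
import hashlib
import json

# Sketch of a centralized AI data gateway: an access decision, field masking
# for data minimization, and a hash-chained (tamper-evident) audit trail.
# All names and the policy shape are assumptions for illustration.

class DataGateway:
    def __init__(self):
        self.audit_log = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def _append_audit(self, entry):
        # Each entry commits to the previous one; altering any past entry
        # breaks every hash after it.
        entry["prev_hash"] = self._prev_hash
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.audit_log.append(entry)

    def fetch(self, user, purpose, record, allowed_purposes, needed_fields):
        allowed = purpose in allowed_purposes.get(user, set())
        # Every decision is logged, including denials.
        self._append_audit({"user": user, "purpose": purpose,
                            "record_id": record["id"], "allowed": allowed})
        if not allowed:
            raise PermissionError(f"{purpose!r} not authorized for {user!r}")
        # Data minimization: mask everything the task does not need.
        return {k: v for k, v in record.items() if k in needed_fields}
```

In a deployment, Rex would govern whether the host process may touch the file or socket at all, while a gateway like this decides which records and fields the agent may actually receive.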

Accountability requires not only logging who accessed what, but also the ability to reconstruct the chain of authorization from human to agent to data. Without data-layer controls, that chain is broken. If a human user delegates a task to an AI agent and the agent accesses a patient record, the organization must be able to prove that the human had the right to access that record and that the agent's access was within the scope of the delegation. Rex does not capture the human-agent relationship; a data-layer gateway does, by requiring the agent to present a token that identifies the human principal and the purpose of access.
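One way such a delegation token might work is sketched below, assuming an HMAC-signed claim set; the token format, field names, and shared key are illustrative only, and a production system would use a managed signing key and an established token standard.

```python
import hashlib
import hmac
import json

# Sketch of a delegation token binding a human principal, an agent, and a
# purpose of access, so the authorization chain can be reconstructed later.
# The claim shape and shared key are assumptions for illustration.

SECRET = b"demo-only-shared-key"  # in practice: a managed signing key

def issue_token(human, agent, purpose, scope):
    claims = {"human": human, "agent": agent,
              "purpose": purpose, "scope": scope}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_token(token):
    # Recompute the signature over the claims; any tampering (for example,
    # an agent widening its own purpose) invalidates the token.
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```

A data gateway that accepts only verified tokens can then log the human principal and purpose alongside every record access, producing exactly the evidence the accountability category demands.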

Architecting data security for agentic AI

The architecture that holds up under regulatory enforcement must be layered, and the layers are not interchangeable. Runtime controls like Rex enforce what the host will permit. Identity controls enforce who the agent is acting on behalf of. Data-layer controls – attribute-based access control evaluated against classification, jurisdiction, consent, and purpose – enforce what data the agent is allowed to touch. Each layer addresses a different failure mode. None substitutes for the others.

The data layer is where data security lives. In a properly designed data layer, every access is authenticated against the human user the agent is acting for, every authorization decision is evaluated against attribute-based policies that respect classification and consent, and every operation produces a tamper-evident audit record that outlives the model that initiated it. AWS does not provide that layer in the Rex release. It is the architect's responsibility and must be built explicitly.

Security and compliance leaders should adopt the runtime pattern immediately. Rex is open-source under Apache 2.0, hosted on GitHub, and runs on Linux and macOS. There is no procurement obstacle. However, leaders must not treat runtime gating as the whole answer. They should map current controls against the Five Eyes advisory's five risk categories and identify where the architecture stops at the kernel and where the data layer is still ungoverned. Finally, they should build the audit trail at the layer that survives model lifecycle changes. The model can be retired; the runtime can be replaced. The data layer is the only place where the evidence outlasts the agent that produced it.

AWS solved part of the problem. Data security – the part that actually shows up in audits, regulatory inquiries, breach notifications, and litigation discovery – requires governance at the data layer. The runtime layer just got easier. The data layer is still the architect's responsibility, and it is the layer that decides whether the next agentic AI audit succeeds or fails.


Source: TechRepublic News

