AEPD Publishes Comprehensive Guidance on Agentic AI and Data Protection: What DPOs Need to Know
Agentic AI is no longer a theoretical concept. The AEPD released an 81-page guidance document addressing data protection challenges posed by agentic AI — one of the first comprehensive regulatory publications in the EU on this topic.
AEPD Publishes Comprehensive Guidance on Agentic AI and Data Protection: What DPOs Need to Know
Published: March 2026 | DPO Pilot Blog
Introduction
Agentic AI is no longer a theoretical concept confined to research papers. Organizations are already deploying AI systems that autonomously plan tasks, access corporate databases, interact with external services, and execute decisions with minimal human intervention. The privacy implications are significant — and until recently, regulatory guidance on how to govern these systems under the GDPR was virtually nonexistent.
That changed on February 18, 2026, when the Spanish Data Protection Authority (Agencia Española de Protección de Datos, or AEPD) released an 81-page guidance document specifically addressing the data protection challenges posed by agentic AI. This is one of the first comprehensive regulatory publications in the EU that directly tackles the intersection of autonomous AI agents and privacy law.
For Data Protection Officers, this guidance is essential reading. Even if your organization operates outside Spain, the AEPD's analysis is grounded in the GDPR — making it directly relevant across the European Economic Area and beyond. This post breaks down the key points, highlights the most actionable recommendations, and outlines concrete steps DPOs should take in response.
What Is Agentic AI According to the AEPD?
The AEPD defines agentic AI as systems that use large language models (LLMs) to achieve specific objectives by adapting their behavior based on evolving goals and environmental circumstances. This is a deliberate and precise framing — it distinguishes agentic AI from simpler chatbot interfaces or static rule-based automation.
The guidance identifies six defining characteristics of agentic AI:
- Autonomy — The system operates independently, making decisions without requiring step-by-step human instruction.
- Environmental perception — It ingests and interprets data from its operating environment (emails, calendars, databases, APIs, web content).
- Action-taking capabilities — It doesn't just recommend; it executes. It can send emails, book travel, modify records, or trigger workflows.
- Proactivity — It anticipates needs and initiates actions rather than waiting for explicit commands.
- Planning and reasoning — It decomposes complex goals into subtasks and sequences them logically.
- Memory and adaptability — It retains context across sessions and adjusts behavior based on prior interactions.
To make this concrete, the AEPD uses a practical example throughout the guidance: an AI agent tasked with managing an employee's business trip. This single agent autonomously accesses the employee's calendar, contacts hotels, purchases flight tickets, monitors weather conditions, and adjusts plans accordingly. The example is deliberately chosen — it's mundane enough to feel realistic, yet it immediately surfaces the data protection complexities. In the course of a single task, the agent processes personal data from multiple sources, interacts with third-party services, makes decisions that affect the data subject, and retains information across time.
What the Guidance Covers
The AEPD document is thorough. It maps agentic AI against core GDPR obligations and operational requirements, covering:
- Controller and processor roles — Who is the controller when an AI agent autonomously engages a third-party service? The guidance examines how the traditional controller-processor framework strains under autonomous multi-agent architectures where an AI system, not a human, selects sub-processors.
- Transparency obligations — How do you provide meaningful information to data subjects when the AI agent's decision-making process is opaque or emergent? The guidance emphasizes that transparency must extend to the logic of the agent, not just the existence of automated processing.
- Data subject rights — Exercising rights like access, erasure, or rectification becomes significantly more complex when personal data is distributed across an agent's memory, external tool calls, and third-party systems.
- Records of Processing Activities (ROPAs) — Agentic AI complicates ROPA maintenance because the agent may dynamically create new processing activities that weren't anticipated at design time.
- Automated decision-making — Article 22 GDPR implications when agents make or materially influence decisions affecting individuals.
- Data Protection Impact Assessments (DPIAs) — The guidance makes clear that deploying agentic AI will almost certainly trigger the DPIA requirement under Article 35, given the systematic monitoring, large-scale processing, and novel technology involved.
- Breach management — How to detect, contain, and report breaches in systems where the AI agent itself may be the vector or the vulnerability.
Privacy Vulnerabilities: The Attack Surface Is Different
One of the most valuable sections of the AEPD guidance is its detailed analysis of privacy vulnerabilities specific to agentic AI. The document categorizes risks into three distinct groups:
Authorized Risks (Risks from Legitimate Use)
Even when working as designed, agentic AI introduces privacy risks that traditional systems do not. The AEPD highlights:
- Lack of accountability — When an AI agent autonomously chains together multiple actions, it becomes difficult to attribute specific processing decisions to a responsible human.
- Poor data access management — Agents often require broad data access to function effectively, creating tension with the principle of least privilege.
- "Shadow-leak exfiltration" — A particularly concerning concept: the agent may unintentionally leak personal data to external services or contexts during normal operation, without any malicious intent. For instance, an agent querying a hotel API might transmit employee dietary restrictions, health information, or travel patterns to a third party without any explicit instruction to do so.
Unauthorized Risks (Attack Vectors)
The guidance catalogs specific attack vectors that are unique to or amplified by agentic architectures:
- Prompt injection — Manipulating the agent's instructions through crafted input data.
- Memory poisoning — Corrupting the agent's persistent memory to alter future behavior.
- Session hijacking — Taking control of an active agent session to redirect its actions.
- Privilege escalation — Exploiting the agent's access permissions to reach data or systems beyond the intended scope.
Resilience Risks
- Dependency on external services — Agentic AI often relies on third-party APIs, cloud services, and model providers. Disruption or compromise of any link in the chain can cascade.
- Denial of Service attacks — Targeting the agent's infrastructure to disrupt operations or force fallback behaviors that may be less privacy-protective.
What This Means for DPOs
The AEPD guidance carries a clear message: the DPO must be involved early and substantively in any agentic AI deployment. This is not a compliance checkbox exercise.
Governance Must Be Tailored
The guidance calls for a tailored information-governance framework — not a generic AI policy bolted onto existing documentation. The DPO should be a key architect of this framework, ensuring it addresses the specific characteristics of agentic systems: their autonomy, their memory, their ability to engage external services, and their capacity to create new processing activities dynamically.
Continuous Evaluation, Not One-Off Assessment
Traditional compliance approaches — conduct a DPIA, document it, review annually — are insufficient for agentic AI. The AEPD recommends continuous evidence-based evaluation, including automated monitoring of agent behavior, regular benchmarking against expected outcomes, and meaningful human-in-the-loop oversight. This shifts the DPO's role from periodic reviewer to ongoing monitor.
Data Minimization Requires Active Enforcement
Agentic AI systems are, by nature, data-hungry. They perform better with more context, more memory, more access. The AEPD is explicit: organizations must implement strict data retention policies, disable unnecessary persistent storage, and deploy Data Loss Prevention (DLP) tools to prevent shadow-leak exfiltration. The DPO must be the voice insisting on these constraints even when they reduce agent performance.
Human Oversight Must Be Meaningful
The guidance repeatedly emphasizes meaningful human oversight at every pipeline stage — not rubber-stamp approval, but genuine review by someone who understands both the processing and its implications. For sensitive or high-risk actions, the AEPD recommends requiring explicit human approval before the agent executes.
Practical Action Items for DPOs
Based on the AEPD guidance and the analysis published by Alston & Bird, here are concrete steps DPOs should take now:
1. Inventory agentic AI deployments (including shadow deployments). Map every instance where AI agents operate with any degree of autonomy in your organization. This includes officially sanctioned tools and — critically — any unofficial or experimental deployments by individual teams. You cannot govern what you don't know exists.
2. Conduct or update DPIAs specifically for agentic AI use cases. Generic AI DPIAs are not sufficient. Each agentic deployment needs an assessment that specifically addresses the six characteristics identified by the AEPD: autonomy, environmental perception, action-taking, proactivity, planning/reasoning, and memory. Pay particular attention to shadow-leak exfiltration paths and privilege scope.
3. Establish a continuous monitoring regime. Work with your IT and security teams to implement automated monitoring of agent behavior — what data is accessed, what external services are contacted, what actions are executed, and whether these align with the stated purpose. Define anomaly thresholds and incident response procedures.
4. Define and enforce human-in-the-loop requirements. For each agentic AI deployment, explicitly define which actions require human approval before execution. Document these thresholds, communicate them to the teams operating the agents, and audit compliance regularly. Err on the side of more oversight during initial deployment phases.
5. Review and tighten data access and retention controls. Audit the permissions granted to AI agents. Apply the principle of least privilege aggressively. Disable persistent memory where it is not strictly necessary. Implement DLP controls to detect and prevent unauthorized data flows to third parties during agent operation.
Conclusion
The AEPD's guidance on agentic AI is a landmark document — not because it invents new rules, but because it methodically applies existing GDPR principles to a technology category that many organizations are adopting faster than their governance frameworks can keep up. The guidance makes clear that agentic AI is not just another AI application; its autonomy, memory, and action-taking capabilities create fundamentally different privacy risks that require fundamentally different oversight approaches.
For DPOs, the message is straightforward: get ahead of this now. Agentic AI deployments are expanding rapidly, and the gap between what these systems can do and what your governance framework covers is a liability. The AEPD has given you a detailed roadmap. Use it.
Sources:
- AEPD Guidance: https://www.aepd.es/en/guides-and-tools/guides
- Alston & Bird Analysis: https://www.alstonprivacy.com/spanish-dpa-releases-agentic-ai-guidance/