When “Low”, “Limited”, or “Minimal” Risk AI Still Needs Explaining

This article accompanies Hour 5: DPIAs in Practice in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

The most difficult AI assessments are not always the obviously high-risk ones. They are often the ordinary tools that look assistive, low impact and operationally convenient until their outputs begin shaping how people understand, prioritise or respond to information. That is where the DPIA becomes more useful than it first appears. This article looks at how to use the DPIA to test transparency, reliance and explainability before small AI tools become governance blind spots.

For many organisations, the instinct is to reserve deeper AI governance work for systems that clearly make decisions about people, process sensitive information at scale, or fall into a higher regulatory risk category. That makes sense as a starting point. Risk classification matters. Proportionality matters. No organisation should treat every small AI feature as though it were a high-risk decision engine.

However, a system can sit outside the highest risk category and still require careful explanation. It may not make final decisions. It may not reject applications, determine eligibility or produce a legally significant outcome. It may only summarise, rank, draft, route or recommend. In practice, however, those activities can still affect how people are treated, particularly where the output changes what a human user sees, how quickly an issue is escalated, or how confidently a conclusion is reached.

“A system does not need to make the final decision to shape the conditions in which that decision is made.”

That is the practical gap this article is concerned with. The issue is not whether every limited or apparently low-risk AI system requires a heavy assessment. It is whether the organisation has used the DPIA well enough to understand what needs to be explained, to whom, and why.

The ICO and Alan Turing Institute guidance on explaining AI-assisted decisions is useful here because it does not limit explanation to fully automated decisions. It is concerned with decisions and services delivered or assisted by AI, and with helping affected individuals understand how those systems are used in practice. That is the right starting point for DPIA work in this area because many of the most common AI deployments sit somewhere between back-office productivity and meaningful influence over a person’s experience. (ico.org.uk)

Risk classification is not the same thing as explanation need

One of the most useful distinctions for a DPO, privacy lead or AI governance function is the difference between risk classification and explanation need.

Risk classification asks what type of AI system this is, what regulatory category it may fall into, and what level of governance attention it attracts. Explanation need asks a slightly different question. It asks whether people need to know that AI is being used, what role it plays, how much weight is placed on its outputs, and what they can do if the output affects them.

Those two questions are related, but they are not the same.

A system may not be classified as high risk, but still be used in a context where explanation matters. A chatbot that answers general opening hours questions is very different from a chatbot that helps people navigate welfare, healthcare, housing, complaints or employment processes. A summarisation tool used to tidy internal meeting notes is different from one used to summarise safeguarding records, grievance material, medical correspondence or customer complaints. A drafting assistant used for generic marketing copy is different from one used to prepare responses to individuals about access, eligibility, delay, debt, disciplinary matters or service refusal. The technology may look similar. The context changes the governance question.

“Low risk is not a conclusion about explanation. It is only one part of the wider assessment.”

This is where the DPIA can add real value. It gives the organisation a structured way to examine not only what the system is, but how it is used. It allows the privacy team to separate the question of formal classification from the more practical question of whether affected people, staff, management or regulators could understand the role the system plays.

The OECD AI Principles make this point in a broader way. Their transparency and explainability principle asks AI actors to provide meaningful information appropriate to context, including information about capabilities and limitations, awareness of interaction with AI systems, and, where feasible and useful, information that helps affected people understand and challenge outputs. That is a helpful framing because it is not confined to the most serious AI deployments. It is about meaningful information in context. (OECD.AI)

For a DPIA, that means the question should not stop at “is this high risk?” It should continue to “what would a person reasonably need to understand about this system, given how it is being used?”

Why assistive systems still deserve proper explanation

The most common AI systems in organisations are often described as assistive. They support staff. They draft first versions. They summarise long records. They flag likely categories. They suggest responses. They help route work. They do not, formally speaking, decide anything. That language is usually accurate, but it can also be incomplete.

It is important to understand, and not underestimate, that:

  • Assistance can still be influential.
  • A summary can change what the reader notices.
  • A triage suggestion can change what is dealt with first.
  • A draft response can shape tone and content.
  • A risk flag can influence how much attention a case receives.
  • A ranking can affect what is seen, what is missed and what is treated as urgent.

The problem is not that assistive systems are inherently inappropriate. Often they are entirely sensible. The problem is that the word “assistive” can make the use sound less consequential than it is.

“Assistive does not mean neutral. It means the system acts through the person who uses it.”

That is the point a DPIA needs to capture. If the system assists a human user, the assessment should explain how that assistance operates. Does it simply reduce administrative effort, or does it frame the substance of the decision? Does the user treat the output as a prompt, or as the starting point for their view? Does the workflow encourage challenge, or does it encourage acceptance?

This is not an academic distinction. It affects transparency, fairness, accountability and evidence. If an individual challenges an outcome, the organisation may need to explain not only the final human decision, but whether AI played a role in shaping the information that decision-maker saw.

That is also why explainability cannot be confined to the technical layer. For many lower-risk systems, the more useful explanation is not a mathematical account of the model. It is a plain account of what the tool does in the process, what it does not do, what weight is placed on the output and what human review remains.

The DPIA as an explainability test

The DPIA is often treated as a risk assessment document. In AI contexts, it should also be treated as an explainability test.

That does not mean turning every DPIA into a public-facing explanation. It means using the DPIA process to test whether the organisation can explain the system at the right level to the right audience. 

Those audiences are different. For example, an affected individual may need to know that AI was used, what role it played, what information was relevant, whether a human reviewed the matter and how they can query or challenge the outcome. A staff member using the system needs to understand what the tool is doing, where its limitations sit and when not to rely on it. A DPO or compliance reviewer needs to understand the lawful basis, data flows, output use, safeguards and review triggers. Senior management needs to understand residual risk, reliance and assurance. A regulator or auditor may need to see how the organisation reached its conclusions and what evidence supports them.

The explanation does not need to be identical for all of those audiences. It does need to be consistent.

“A good DPIA should allow the organisation to explain the same system clearly from several angles.”

This is a practical way to use the ICO and Turing approach. The guidance refers to explaining AI-assisted decisions and services to affected individuals, and includes consideration of what goes into an explanation, contextual factors and how to select priority explanations by considering domain, use case and impact. Those ideas translate naturally into DPIA work because they encourage the organisation to ask what explanation is needed in this particular setting, rather than assuming one generic transparency statement will do. (ico.org.uk)

For lower-risk systems, this may result in a relatively light explanation. That is fine. The point is not to overcomplicate. The point is to make sure the organisation has consciously decided what explanation is proportionate, and can justify that decision.

Use-level explainability matters where model-level transparency is limited

One reason organisations struggle with explainability is that they equate it with full technical transparency. That can make the task feel unrealistic, especially where the system depends on a third-party model and the vendor is unable or unwilling to disclose detailed information about training data, model logic or tuning. Vendor opacity can affect the quality of risk assessment, particularly where the system is used in sensitive contexts or where outputs carry material influence.

But it does not follow that explanation is impossible.

In many lower or limited-risk AI uses, the organisation may not need to explain the full model internals to provide a meaningful account of the system. What it often needs is use-level explainability. That means explaining the purpose of the tool, the information it uses, the type of output it produces, the limits of that output, the role of human review and the way a person can query the outcome.

“Where model-level transparency is limited, the organisation still needs use-level explainability.”

This is a useful distinction because it keeps the DPIA practical. It avoids the unhelpful binary of either having full technical explainability or having none. It recognises that organisations can still provide meaningful information about how the system is being used, even where some technical detail remains unavailable.

NIST’s AI risk work is helpful on this point because it treats explainability and interpretability as part of a wider trustworthy AI picture, alongside accountability, transparency, privacy, fairness, safety, reliability, security and resilience. NIST also distinguishes between explainability, which concerns the mechanisms underlying an AI system’s operation, and interpretability, which concerns the meaning of outputs in the context of the system’s purpose. That distinction is particularly useful for DPIAs because the organisation may not be able to explain everything about the model, but it should still be able to explain what the output means in the workflow in which it is used. (NIST AI Resource Center)

This matters for the DPO as much as for the technical team. If the vendor cannot provide full model documentation, the DPIA should not pretend otherwise. It should identify the limit, consider whether it is acceptable in context and adjust the control position accordingly. That may mean limiting the use case, reducing the weight placed on outputs, adding human review, improving staff guidance, increasing monitoring, or deciding that the tool is not suitable for a particular context.

What the DPIA should be able to explain in a lower-risk AI use case

For systems that appear limited, assistive or low risk, the DPIA should still be able to answer a set of practical questions. Not as a theoretical exercise, but because these questions determine whether the organisation understands the role the AI is playing.

It should be able to explain what the tool is doing in plain terms. Not in vendor language, and not in a way that simply repeats the product description. The organisation should be able to say whether the tool summarises, classifies, ranks, generates, recommends, routes, detects or predicts, and where that function sits in the workflow.

It should also explain what the tool is not doing. This is often just as important. If the tool does not make final decisions, does not replace professional judgement, does not determine eligibility and does not remove human review, that should be clear. However, those statements should be tied to how the process actually works. A statement that the system is “assistive only” is not enough unless the organisation can explain what assistance means in practice.

The DPIA should also explain what the output means. A risk score, priority flag, summary or recommendation should not be treated as self-explanatory. The organisation should know what the output is intended to indicate, what it does not indicate, and what the user is expected to do with it.

“An AI output is not explained by naming it. It is explained by saying what role it plays.”

This is particularly important where staff may be tempted to treat outputs as more authoritative than they are. A summary may omit nuance. A recommendation may reflect incomplete data. A ranking may be useful for workflow management but unsuitable as a proxy for importance or merit. If the DPIA does not capture those limitations, the organisation may be relying on staff to infer them.

Finally, the DPIA should be able to explain what happens when the output is wrong. This is often the simplest test of whether the system has been properly understood. Questions you can ask:

  1. If the output is inaccurate, who notices.
  2. If it is misleading, who corrects it.
  3. If it causes a person to be treated differently, how can that be identified.
  4. If someone challenges the outcome, can the organisation reconstruct the role the system played.

These are ordinary but necessary governance questions that become more important where AI is being used.

Context changes the explanation requirement

A useful DPIA should avoid treating the same technology as the same risk in every setting. The issue is not the label attached to the tool. It is the effect of the tool in context. The context of use matters. For example:

  • A chatbot used to answer general website questions may require a different level of explanation from a chatbot used by service users trying to understand entitlements, complaint routes or healthcare options.
  • A summarisation tool used by internal staff to condense non-sensitive material may require a different level of assessment from the same tool used to summarise HR grievances, safeguarding concerns, legal correspondence or medical history.
  • A drafting assistant used to polish language may be different from one used to generate responses explaining why a person has not received a service.

“The same AI function can require a different explanation when the setting changes.”

This is where the DPIA should bring together legal, operational and technical judgement. The legal team may understand transparency and rights implications. The operational team may understand how staff use the output. The technical team may understand system limits. The DPO or privacy lead needs to make sure those perspectives are brought into the same assessment.

This is also where the OECD and NIST language around context is valuable. OECD refers to meaningful information appropriate to context, while NIST frames trustworthy AI as socio-technical, involving human, organisational and technical factors. Those points are helpful because they keep the assessment grounded in use rather than treating AI explainability as a purely technical property. (OECD.AI)

This is often the most useful practical lesson. Do not start by asking whether the tool is generally explainable. Ask whether the organisation can explain this tool, in this context, to the people who need to understand it.

Transparency obligations are not only a high-risk issue

The EU AI Act reinforces the need to separate risk category from transparency requirement. Although the full high-risk regime receives much of the attention, Article 50 includes transparency obligations for certain AI systems, including informing people when they are interacting directly with an AI system, marking certain AI-generated or manipulated content, and informing exposed persons about emotion recognition or biometric categorisation systems. The information must be provided clearly and accessibly. (ai-act-service-desk.ec.europa.eu)

That does not mean every AI DPIA should become an AI Act compliance assessment. It does mean organisations should be careful not to assume that lower risk classification removes the need for transparency thinking.

The GDPR position also remains relevant. Where personal data is processed, transparency is not simply about identifying a lawful basis. It is about enabling people to understand how their information is used in a way that is meaningful enough for them to exercise rights, raise concerns and challenge outcomes where appropriate.

The DPIA is a useful place to bring those points together. It can ask whether the organisation has identified the relevant audience for explanation, whether the explanation is proportionate to the context, and whether the transparency measure is meaningful rather than cosmetic.

“A transparency notice is not the same thing as an explanation.”

That line matters because organisations can sometimes satisfy themselves too quickly by pointing to privacy notices or AI usage disclosures. Those may be necessary, but they may not be sufficient. If a person is affected by a process in which AI plays a meaningful role, the organisation may need to explain not only that AI is involved, but what role it played and how the person can question the result.

Human oversight needs its own explanation

Human oversight is often used as the reason why a system is treated as lower risk. The organisation explains that the AI does not make a final decision, that a person remains responsible, and that outputs are reviewed before action is taken. That may be correct, but it is not yet an explanation.

The DPIA should test what the human reviewer is actually doing. Are they checking accuracy. Are they reviewing fairness. Are they confirming that context has not been lost. Are they comparing the output with source material. Are they empowered to reject or depart from the output. Are they trained on the tool’s limitations. Is there evidence that challenge happens in practice.

“Human oversight is not a transparency answer unless the organisation can explain what the human is actually reviewing.”

This is particularly important where the explanation to an individual depends on the presence of human review. If the organisation says that AI is only used to assist staff and that staff remain responsible, that explanation is only meaningful if the organisation can describe the human role with some precision.

Otherwise, there is a risk that human oversight becomes a reassurance rather than a safeguard. The DPIA should therefore link human oversight to real operational design. It should identify who reviews outputs, what they are expected to review, what information they have available, what happens if they disagree, and how oversight is recorded where the risk profile requires it.

For lower-risk systems, this does not need to be burdensome. A simple, well-understood review process may be enough. But it should still be real.

Explainability is also about challenge and correction

One reason explainability matters is that people may need to query, correct or challenge an outcome. This is true even where the AI system is not the final decision-maker. If AI has shaped the information seen by a decision-maker, influenced the prioritisation of a case, generated a draft explanation or classified a person’s issue, then the individual may reasonably need to understand enough about that role to challenge the outcome. This does not mean exposing trade secrets or providing technical detail that would be meaningless to the person. It means providing enough information to make the process intelligible.

The OECD transparency and explainability principle expressly links meaningful information to enabling affected people to understand outputs and, where adversely affected, to challenge them. That connection is useful for DPIA practice because it frames explanation as something functional, not decorative. (OECD.AI)

“An explanation is only useful if it helps someone understand what happened and what they can do about it.”

For a DPO or legal team, this is a strong way to test whether transparency is working. If a person asked why they received a particular response, delay, escalation, recommendation or classification, could the organisation explain the role the AI system played in ordinary language. If the answer is no, the issue is not only communication. It may be that the organisation itself has not sufficiently understood the system’s role.

That is why the DPIA should examine challenge routes. Not only rights under data protection law in the abstract, but practical routes for raising concerns, correcting errors, requesting human review or obtaining a meaningful explanation.

What senior stakeholders should receive

Senior stakeholders do not need the technical mechanics of every AI tool. They do need a clear account of systems that may affect individuals, even where those systems are classified as limited, assistive or low risk.

For board or senior management reporting, the useful information is usually quite focused.

  • What is the system being used for.
  • Who may be affected.
  • What explanation is provided to staff or individuals.
  • How much influence do outputs have.
  • What human review exists.
  • What are the main limits of understanding.
  • What evidence shows that the tool is being used as described.
  • What would trigger review.

“Management does not need a model explanation. It needs a clear explanation of reliance.”

That is a useful governance distinction. The question for management is not whether the model can be explained in technical depth. The question is whether the organisation can explain what it is relying on, why that reliance is acceptable, and how it would know if the position changed.

This is also where DPIAs can improve reporting quality. A good DPIA should produce a short summary that can be understood outside the privacy team. If the summary cannot explain the role of the AI system without resorting to technical jargon or compliance labels, the underlying assessment may need more work.

Using the DPIA proportionately

There is always a risk that AI governance becomes heavier than it needs to be. That is not the aim here. The point is not that every AI feature should be put through a full, high-intensity DPIA. The point is that where a DPIA is being used, or where screening suggests one may be needed, explainability should be part of the assessment.

For some systems, the conclusion may be simple. The tool is used internally, does not involve personal data beyond ordinary staff use, does not influence decisions about individuals, and requires only a basic internal explanation and acceptable use guidance. For others, the same kind of technology may sit closer to individual impact and require a more detailed explanation, stronger oversight and better evidence.

“Proportionate does not mean light-touch by default. It means matched to the role the system plays.”

That is the judgement senior professionals are expected to exercise. Not every system is high risk. Not every system is harmless. The DPIA helps locate the system between those positions by asking what the tool actually does, where it sits, who is affected and what needs to be explained.

Takeaway

For a DPO, privacy lead, legal adviser or senior compliance professional, the most useful way to apply this learning is to run a low-risk AI explainability test against one live or proposed AI use case. The objective is not to turn a modest tool into a major governance exercise. It is to check whether the organisation can explain the system at a level that matches the role it plays. The checklist below is intended to support that review.

1. Role of the tool
Can the organisation explain in plain language what the tool does. Does it summarise, classify, rank, generate, recommend, route, detect or predict. Has the organisation avoided relying only on vendor terminology.

2. Context of use
Is the tool used in an internal administrative context, or does it sit in a process that affects individuals. Does it touch complaints, employment, healthcare, education, vulnerability, eligibility, safeguarding, debt, access to services or other sensitive contexts.

3. Data and input sources
What information is submitted to the system. Does this include personal data, special category data, confidential material, children’s data, employee data or information about vulnerable individuals. Has actual use drifted from the original assumed data set.

4. Output meaning
Can the organisation explain what the output means and what it does not mean. Is a summary treated as complete. Is a score treated as a risk indicator. Is a recommendation treated as a preferred answer. Are users clear on the limits.

5. Decision influence
Does the output affect how work is prioritised, how a person is described, how a file is framed, how quickly a case is escalated, or how a response is drafted. Even if the system does not decide anything formally, does it shape the decision-making environment.

6. Human oversight
Who reviews the output. What are they reviewing for. Do they have authority to depart from it. Are they trained to recognise limitations. Is challenge expected in practice. Is there evidence that human review changes outcomes where necessary.

7. Explanation to affected individuals
Where individuals are affected, what are they told about the role of AI. Is the explanation meaningful, or does it merely state that AI may be used. Can the organisation explain the role of the tool in ordinary language if someone asks.

8. Explanation to staff
Do staff understand what the system is for, how much weight to place on outputs, when to check source material, when not to use the tool and how to escalate concerns. Is the guidance operational rather than generic.

9. Vendor explanation limits
Where the vendor cannot provide detailed model-level transparency, has the organisation recorded what is known, what is not known and why the remaining uncertainty is acceptable in this use case. Has the organisation considered whether use-level explainability is sufficient.

10. Challenge and correction
If an output is wrong or misleading, how is that identified and corrected. Can an affected person query the outcome. Can staff override the system. Is there a route for reporting repeated problems or unexpected behaviour.

11. Evidence and logs
Can the organisation show how the system was used, who reviewed outputs, what guidance was provided, what changes were made and whether controls operated in practice. Is the evidence proportionate to the risk and context.

12. Review triggers
Has the organisation defined what would require reassessment. This may include new data sources, expanded use, increased reliance, changed outputs, vendor updates, complaints, incidents, evidence of bias, or use in a more sensitive context.
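
For teams that want to run this review consistently across several tools, it can help to hold the answers in a simple structured record, for example alongside the DPIA screening log. The sketch below is a minimal illustration of that idea in Python: the record type, its field names and the example tool are assumptions made for this sketch, not anything prescribed by the guidance referenced above.

```python
# A sketch only: field names and example content are illustrative assumptions.
from dataclasses import dataclass, fields
from typing import List, Optional

@dataclass
class ExplainabilityScreeningRecord:
    """One record per AI tool, mirroring the twelve checklist areas above."""
    tool_name: str
    role_of_tool: Optional[str] = None                  # 1. Role of the tool
    context_of_use: Optional[str] = None                 # 2. Context of use
    data_and_inputs: Optional[str] = None                # 3. Data and input sources
    output_meaning: Optional[str] = None                 # 4. Output meaning
    decision_influence: Optional[str] = None             # 5. Decision influence
    human_oversight: Optional[str] = None                # 6. Human oversight
    explanation_to_individuals: Optional[str] = None     # 7. Explanation to affected individuals
    explanation_to_staff: Optional[str] = None           # 8. Explanation to staff
    vendor_explanation_limits: Optional[str] = None      # 9. Vendor explanation limits
    challenge_and_correction: Optional[str] = None       # 10. Challenge and correction
    evidence_and_logs: Optional[str] = None              # 11. Evidence and logs
    review_triggers: Optional[str] = None                # 12. Review triggers

    def unanswered(self) -> List[str]:
        """Checklist areas not yet assessed (None means unanswered)."""
        return [f.name for f in fields(self)
                if f.name != "tool_name" and getattr(self, f.name) is None]

# Example: a partially completed record for a tool described internally as assistive.
record = ExplainabilityScreeningRecord(
    tool_name="Complaint summarisation assistant",
    role_of_tool="Summarises incoming complaints before a case officer reads the file.",
    human_oversight="Case officer reviews the full file only for escalated complaints.",
)
print(record.unanswered())  # the explanation gaps still to be assessed
```

The point of keeping the record this simple is only to make gaps visible across tools; the substance still sits in the judgement behind each answer.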

The most valuable part of this exercise is often the comparison between what the organisation believes the tool does and how it is actually being used. Where those two positions are aligned, the DPIA is likely to be stronger. Where they are not, the issue is usually not the classification label. It is the explanation gap.

So, pick one AI tool that has been described internally as low risk, assistive or administrative, and trace one real workflow. Look at the input, the output, the human review and the final action. Then ask whether the current DPIA or screening record explains that workflow in a way that would make sense to a staff member, a senior manager, an affected person and a regulator.

This article is intended to support the learning covered in Hour 5 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

EDPB Annual Report for 2025

This article accompanies Hour 1: Global Privacy Law Updates in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What the EDPB’s 2025 Annual Report Means for Organisations

The European Data Protection Board’s 2025 Annual Report is one of the clearest indications available of where European data protection regulation is moving in practice. Read alongside the Helsinki Statement on enhanced clarity, support and engagement, it shows an EDPB focused not only on consistency and enforcement, but also on making GDPR compliance more workable in an increasingly complex digital regulatory environment.

That matters because 2025 was not simply another year of GDPR guidance. It was a year in which the EDPB responded to a substantially more crowded regulatory landscape, with data protection increasingly intersecting with the Digital Services Act, the Digital Markets Act, the AI Act, competition law, adequacy decisions, procedural reform and proposals for simplification of regulatory obligations.

For organisations, the practical significance is straightforward. GDPR compliance can no longer be approached as a standalone legal exercise. Nor is it sufficient to rely on policies, privacy notices and internal guidance alone. The EDPB’s 2025 work points towards a model of compliance that is more integrated, more operational and more explicitly concerned with clarity, dialogue, consistency and practical implementation.

That is particularly relevant for in-house DPOs, compliance leads, legal teams and senior management. The deeper value of the report lies not only in what the EDPB did, but in what those activities suggest about how organisations are increasingly expected to govern privacy in practice.

The Helsinki Statement is more important than it first appears

The single most important framing point in the 2025 report is the Helsinki Statement, adopted on 2 July 2025. The Statement commits the EDPB to new initiatives to facilitate easier GDPR compliance, strengthen consistency, deepen stakeholder dialogue and develop stronger cross-regulatory cooperation in the evolving digital landscape. It also makes clear that these initiatives are intended in particular to support micro, small and medium organisations, enable responsible innovation and reinforce competitiveness in Europe.

This is not a retreat from strong privacy standards. The Statement expressly frames its approach as “a fundamental rights approach to innovation and competitiveness”. That formulation matters. The EDPB is not saying that privacy needs to give way to innovation. It is saying that innovation and competitiveness should be supported through a clearer and more usable regulatory environment, while fundamental rights remain central.

The Annual Report shows that this was not just aspirational language. The Board sought practical feedback from stakeholders on which templates organisations would find most useful, committed to publicly reporting the outcomes of consultations, and pushed forward work on more practical and more accessible guidance formats. The report is also explicit that the EDPB wants its guidance to be clearer, more practical and easier to understand, and that it has updated internal working methods accordingly.

The most useful illustration is the “Six months of progress since the Helsinki Statement” section on page 13. That timeline shows progress in four broad areas: making GDPR easier, improving consistency and enforcement, strengthening stakeholder dialogue and enhancing cross-regulatory cooperation. By December 2025 the EDPB had already produced internal guidance to improve the clarity and usability of its own outputs, held a stakeholder event on anonymisation and pseudonymisation, and endorsed joint DMA/GDPR guidance with the European Commission. It also set out a pipeline for 2026 that included a DPIA template, a common data breach notification template, a form to signal inconsistencies between national and EDPB guidance, and further joint AI/GDPR guidance.

This is worth taking seriously. Annual reports often describe activity after the event. Here, the EDPB is also signalling how it intends to work differently going forward.

In practice, many organisations do not primarily struggle because GDPR obligations are unclear in theory. They struggle because those obligations must be applied across real systems, suppliers, digital products, service environments, business timelines and governance structures. That is particularly evident where:

  • the privacy function is small
  • operational teams are moving quickly
  • procurement or product choices are made before privacy analysis is complete
  • several legal frameworks now apply to the same activity
  • the organisation needs something more practical than a long legal memo

One of the more useful aspects of the Helsinki Statement is that it reflects a growing regulatory recognition of this operational reality. Easier compliance, in this context, does not mean lower standards. It means guidance that can be translated into actual organisational behaviour more effectively.

One practical point worth adding is that simplification is only valuable if it improves judgement. Templates, summaries and checklists can be extremely useful, but only if they help organisations ask the right questions earlier and more consistently. Used badly, they can become a substitute for thinking. Used well, they are often what allows smaller or overstretched teams to make better decisions in time.

Regulatory direction: The EDPB is actively shifting towards more practical, usable and implementation-focused compliance support, while maintaining a fundamental rights-based approach. This suggests that privacy governance should increasingly be built around operational clarity and usable controls rather than documentation alone.

GDPR now needs to be understood within a broader digital regulatory landscape

A major theme in the report is the EDPB’s growing role in clarifying how GDPR interacts with other EU digital laws. The foreword states that the rapid expansion of the EU’s digital regulatory framework has added complexity to the data protection ecosystem and that regulators now have a responsibility to clarify the interplay between data protection rules and other digital laws, and to ensure legal certainty and consistency.

This is an important shift in emphasis. GDPR is not being displaced, but it is increasingly being interpreted as part of a wider digital rulebook. The report gives several concrete examples. In 2025, the EDPB:

  • adopted Guidelines 3/2025 on the interplay between the DSA and the GDPR
  • endorsed its first joint guidelines with the European Commission on the interplay between the DMA and the GDPR
  • continued work with the Commission and the AI Office on guidance addressing the interplay between the AI Act and EU data protection laws
  • adopted a position paper on the interplay between data protection and competition law

These are not merely institutional exercises. They indicate the kinds of legal and governance issues that organisations increasingly need to handle in joined-up ways.

The DSA/GDPR guidelines, for example, are said to address how GDPR principles and safeguards apply to notice-and-action mechanisms, recommender systems, transparency of advertising, deceptive design patterns, and privacy and safety protections for minors, including prohibitions on certain forms of profiling-based advertising. The DMA/GDPR guidance addresses specific choice, consent, data combination, portability and other obligations affecting gatekeepers, business users and individuals.

That is highly relevant even for organisations that are not gatekeepers or major platforms. The broader point is that privacy can no longer be assumed to sit on a separate compliance track. Product design, interface choices, ad-tech, platform functionality, AI deployments and user account models increasingly need to be understood across multiple legal frameworks.

In practice, the challenge is often not doctrinal but organisational. Different teams tend to own different parts of the problem:

  • privacy or legal may own GDPR
  • product or engineering may own user journeys and platform functionality
  • digital teams may own DSA or consumer-facing processes
  • AI or innovation teams may own model adoption
  • commercial teams may shape onboarding, consent journeys or personalisation features

Where these functions do not meet early enough, organisations can find themselves technically progressing in one area while creating avoidable exposure in another.

This can happen in very ordinary ways. An interface change designed to improve conversion may create a consent issue. A safety or moderation feature may affect rights or profiling analysis. A DMA-style data portability design may have implications for lawful basis, minimisation or transparency. A recommender system or advertising tool may need to be assessed through both DSA and GDPR lenses.

A distinctive point from practice is that many governance problems are no longer “privacy-only” problems. They are governance coordination problems. The relevant question is often less “what does GDPR say?” and more “who in the organisation is joining up privacy with the rest of the digital legal environment?”

That is particularly important in organisations dealing with higher-risk user groups, digital service delivery, education, health-related environments, children’s data, or AI-enabled decision support.

Cross-regulatory risk: GDPR compliance increasingly overlaps with other digital regulation, including the DSA, DMA and AI Act. Organisations should expect privacy, product and regulatory governance to become more integrated rather than more separate.

The guidance priorities are practical and implementation-focused

The EDPB’s 2025 guidance agenda is strikingly practical. In addition to the interplay guidance, the Board adopted:

  • guidelines on pseudonymisation
  • guidance on blockchain technologies
  • recommendations on account creation for e-commerce websites

The choice of topics is revealing. These are not primarily abstract questions about doctrine. They are questions about how organisations build systems, choose safeguards, structure services and minimise unnecessary friction or over-collection.

The pseudonymisation guidelines explain the role of pseudonymisation as a safeguard that may be appropriate and effective for meeting obligations under the GDPR, particularly in relation to data protection principles, privacy by design and default, and security. They also analyse the technical and organisational safeguards needed to preserve confidentiality and avoid unauthorised identification.

The blockchain guidance is similarly practical. It addresses architecture choices, role allocation, data minimisation, storage approaches and the handling of transparency, rectification and erasure in blockchain environments. The report states clearly that, as a general rule, storing personal data on a blockchain should be avoided where it conflicts with GDPR principles.

The recommendations on account creation for e-commerce websites are perhaps the most visibly user-oriented. The EDPB states that, as a general rule, users should be able to make purchases without being required to create an account, and that guest checkout or voluntary account creation should be offered wherever possible, with mandatory account creation only justifiable in limited cases such as subscription-based services or access to exclusive offers.

This matters because it illustrates a wider regulatory tendency. The EDPB is increasingly engaging with the practical design choices that shape data processing, not only the downstream legal justifications for them.

In practice, these are exactly the kinds of issues that tend to surface late:

  • is this safeguard actually effective?
  • is this architecture compatible with rights?
  • do we genuinely need persistent accounts?
  • are we collecting data because it is necessary, or because it makes the business model simpler?

We see privacy teams brought in after these choices have substantially hardened. At that point, the conversation becomes one of damage limitation rather than design improvement.

A useful perspective from practice is that privacy risk often becomes materially easier to manage where the organisation treats privacy analysis as part of design and procurement, rather than as a review stage after implementation decisions are already largely fixed. This is particularly relevant in outsourced digital services, AI-enabled workflows, health and care settings, education environments, public service delivery and products aimed at or accessible by children.

Operational design lesson: Recent EDPB guidance priorities suggest that privacy risk is increasingly being assessed through design choices, architecture, account models, safeguards and minimisation decisions. Early-stage design governance is therefore becoming more important.

AI moved from policy discussion to supervision and methodology

The EDPB’s 2025 report confirms that AI is no longer a peripheral policy topic. It is now part of mainstream supervisory and methodological work.

The report places AI at the centre of several activities:

  • ongoing joint work with the Commission and the AI Office on GDPR/AI Act interplay guidance
  • Support Pool of Experts projects on AI supervision and LLM privacy risks and mitigations
  • training curricula on AI security, AI compliance and secure AI systems handling personal data
  • an EDPB bootcamp on AI and AI auditing involving 50 participants from 24 countries
  • extension of the ChatGPT taskforce into a broader Taskforce on Generative AI Enforcement

The LLM risk project is especially significant. The report describes it as offering a comprehensive risk management methodology and practical mitigation measures for common privacy risks in LLM systems, illustrated through use cases such as customer service chatbots, student progress support tools and AI assistants for travel and schedule management.

This indicates a maturing supervisory posture. The EDPB is moving beyond general debate about AI and toward more structured evaluation of how AI systems are built, trained, deployed and audited.

In practice, AI adoption continues to outpace governance in many organisations. AI tools are already in use across customer support, analytics, drafting, workflow automation, education, compliance, HR, healthcare-adjacent settings and digital service delivery. But the visibility of those uses, and the consistency of governance around them, are often uneven.

Common issues include:

  • incomplete mapping of AI use
  • weak understanding of what personal data is involved
  • insufficient lawful basis analysis
  • underdeveloped vendor due diligence
  • unclear treatment of downstream or model-training implications
  • low visibility of AI risks at board or executive level
  • inconsistent decisions about when a DPIA, LIA or broader governance review is required

A distinctive point from practice is that AI risk often becomes most acute not where AI is technically most advanced, but where it is adopted most casually. Embedded AI features, low-friction productivity tools, trial deployments and vendor-enabled features can all create governance blind spots precisely because they do not always look like major AI projects.

From a practical DPO perspective, that means ordinary governance disciplines matter a great deal:

  • identifying where AI is already in use
  • understanding what personal data is involved
  • checking what vendors are doing with that data
  • escalating material use cases to proper review
  • updating records, notices and risk assessments where appropriate

AI governance: The EDPB’s 2025 work confirms that AI is now part of mainstream supervisory activity. Organisations should assume that AI-enabled processing requires structured privacy governance, clear accountability and proportionate escalation to senior management where risk or impact is significant.

Enforcement is becoming more thematic, better supported, and more methodical

Although the 2025 report gives more prominence to clarity and stakeholder dialogue, enforcement remains central. The “Supporting Enforcement” chapter shows that the EDPB continues to invest in the practical infrastructure of consistency and enforcement.

The Coordinated Enforcement Framework remains one of the clearest examples. In January 2025, the EDPB adopted a report on implementation of the right of access, based on coordinated national actions carried out in 2024. For 2025, the Board selected the implementation of the right to erasure as the focus of its coordinated action, with 32 DPAs participating and 764 controllers responding across Europe.

The Support Pool of Experts also remains significant. In 2025, the EDPB published the deliverables of seven projects launched in 2024 and launched nine new projects, including work on AI supervision, LLM privacy risk, training curricula, the digital euro, website auditing tools and AI auditing bootcamps.

The report also records that in 2025:

  • 414 cross-border cases were created in the EDPB case register
  • 1,299 One-Stop-Shop procedures were triggered
  • 572 of those led to final decisions

At national level, DPAs issued a total of €1.145 billion in fines, with France and Ireland accounting for the largest totals at €486.854 million and €530.773 million respectively.

In practice, organisations often pay close attention to large fines but less attention to how supervisory capability is evolving. That can be a mistake. Coordinated actions, expert tools, audit methodologies and cross-border procedures often signal where regulators are becoming more consistent and more prepared. If the EDPB is investing in areas such as access rights, erasure, AI supervision, website auditing and methodological support, that is often a better indicator of where scrutiny is deepening than any single headline decision.

A useful perspective from practice is that mature organisations tend to respond better to thematic regulatory signals than to isolated enforcement headlines. If a regulator is building tools and methodologies around a topic, it usually means expectations are becoming more structured. That is a strong reason to review those areas proactively rather than waiting for a specific complaint or incident.

Enforcement maturity: Enforcement is becoming more thematic and methodical, supported by coordinated actions, expert methodologies and cross-border processes. This suggests that organisations should pay attention not only to major fines, but also to the areas where supervisory capability is clearly deepening.

The EDPB is trying to make guidance easier to consume and use

A major theme running through both the Secretariat section and the core activities section is accessibility of guidance. The EDPB explains that in 2025 it intensified efforts to make GDPR information more accessible to a wider, non-technical audience, using clearer and more straightforward language, in line with the Helsinki Statement and the 2024 – 2027 Strategy.

The Board also published additional summaries of guidelines in 2025, covering pseudonymisation, personal data breaches, blockchain technologies, right of access and the DSA/GDPR interplay. It separately consulted on which ready-to-use templates organisations would find most useful, including privacy notices and RoPA templates.

This should not be dismissed as mere communications work. It reflects a deeper point: if guidance is to improve compliance in practice, it needs to be understandable, adaptable and capable of being used by people who are not specialist privacy lawyers.

One of the recurring barriers to operational privacy maturity is not resistance. It is translation. Many organisations have committed and capable teams, but they need guidance that can be turned into workflow, process design, internal controls and practical instructions. Where guidance remains too abstract, organisations tend either to over-engineer or under-implement. More usable formats can materially improve the ability of internal DPOs and compliance teams to engage productively with operations, procurement, HR, IT, education, care or service teams that do not work in privacy full-time.

A useful lesson here is that internal privacy support should often mirror the direction the EDPB itself is taking: shorter supporting materials, targeted guidance, templates, checklists and summaries can strengthen compliance when used to support sound judgement rather than replace it.

Guidance usability: The EDPB is increasingly prioritising guidance that is concise, practical and usable by non-experts. Internally, this suggests value in translating privacy requirements into clearer operational tools rather than relying solely on long-form legal documentation.

Stakeholder dialogue is becoming a more formal part of the regulatory model

The EDPB’s 2025 report gives unusual prominence to consultation and stakeholder dialogue. The Board launched five public consultations in 2025, including on pseudonymisation, blockchain, DSA/GDPR interplay, DMA/GDPR interplay and e-commerce account creation, and separately sought views on which templates organisations would find most useful.

The report also describes a stakeholder event on anonymisation and pseudonymisation involving over 100 participants from sector associations, organisations, companies, law firms and academia. In line with the Helsinki Statement, the EDPB says it will systematically publish reports on input received during such stakeholder events.

This matters because it suggests that stakeholder engagement is becoming part of how the EDPB builds legitimacy, improves practicality and strengthens consistency.

Organisations often assume that guidance is something regulators issue to them, rather than something they can shape through consultation, response and engagement. The EDPB’s 2025 approach suggests a more participative model, particularly where implementation questions matter. That may be especially relevant for sectors where data protection issues arise in distinctive operational settings, such as healthcare, education, charities, financial services, public bodies, AI-enabled services, and work involving children or vulnerable individuals.

Organisations with recurring compliance challenges should pay more attention to consultation opportunities. There is often value in engaging early where draft guidance touches lived operational issues. This is one of the clearest ways to help ensure that final guidance reflects practical realities.

Regulatory engagement: The EDPB is increasingly building consultation and stakeholder dialogue into how guidance is developed. Organisations in more regulated or higher-risk sectors may benefit from monitoring these processes more actively as part of horizon scanning and policy input.

International adequacy and global engagement remain active governance issues

The report also shows continuing EDPB attention to adequacy, international engagement and cross-border consistency. In 2025, the Board provided five adequacy-related opinions concerning the UK, Brazil and the European Patent Organisation. It also continued international engagement through fora such as the G7 DPA Roundtable and the Global Privacy Assembly, and held a second meeting with DPAs from countries and organisations with EU adequacy decisions.

This is useful context because it reinforces that international data governance remains a live topic, not a settled one. The opinion on Brazil, for example, positively noted substantial alignment in many respects, while still inviting the Commission to assess further issues such as onward transfers, secrecy-related limits and the treatment of public authority access in criminal law contexts.

Many organisations still treat international transfer compliance as a one-time legal implementation task. The EDPB’s ongoing adequacy work suggests a more dynamic reality. International data governance continues to evolve, particularly where cloud services, AI providers, outsourced support, global vendor chains or disclosure scenarios are involved. We also see that international governance issues often surface indirectly through procurement, system configuration, support models, managed services, AI tooling or incident response rather than through a discrete “international transfer” project.

International data governance problems often do not emerge as abstract transfer-law questions. They emerge as operational questions: who can access the data, from where, for what purpose, under what contractual structure, and with what onward-use implications.

International data governance: Adequacy, cross-border data use and onward transfer conditions remain active supervisory issues. Organisations should periodically review whether international data governance is still aligned with their actual service, supplier and technology footprint.

What this means for organisations now

Taken together, the EDPB’s 2025 report shows a regulator trying to do several things at once:

  • preserve strong protection of fundamental rights
  • make compliance easier in practice
  • reduce fragmentation across Europe
  • help organisations navigate overlapping digital laws
  • build better tools for supervision and enforcement
  • respond more concretely to AI and emerging technologies

That is a significant development. It means the European privacy framework is not simply becoming more demanding. It is also becoming more operational. For organisations, the practical implications are clear.

First, GDPR governance can no longer be isolated from broader digital compliance. The interplay with the DSA, DMA, AI Act and competition law is now part of the practical compliance picture.

Secondly, privacy maturity increasingly depends on usable implementation, not just formal documentation. The EDPB’s emphasis on templates, summaries, consultations and practical guidance reflects that reality.

Thirdly, AI governance should now be treated as part of mainstream privacy governance and not as a specialist or experimental side-stream.

Fourthly, organisations should pay closer attention to the EDPB’s enforcement-support work. Coordinated actions, expert methodologies and practical supervisory tools often indicate where scrutiny is becoming more structured.

Finally, organisations should expect privacy governance to be judged increasingly through its operational reality: how systems are designed, how rights are handled, how controls are evidenced, how risks are escalated and how legal obligations are translated into day-to-day practice.

What to take back to your organisation

A focused internal review after reading the 2025 report could reasonably include the following questions:

  • Do we understand where GDPR overlaps with other digital regulation that affects us?
  • Are our privacy controls being applied early enough in design, procurement and AI adoption?
  • Is our internal privacy guidance practical enough for non-specialist teams to use?
  • Are there areas where we are relying on policy statements without enough operational evidence?
  • Is board and executive visibility strong enough for digital and privacy risk areas that are changing quickly?

Conclusion

The EDPB’s 2025 Annual Report is not simply a record of activity. It is a statement of regulatory direction. Its core message is that data protection now operates in a more complex and interconnected regulatory environment, and that this complexity should be met with more practical guidance, stronger consistency, deeper stakeholder dialogue and more usable compliance tools, not weaker standards. For organisations, the challenge is no longer simply understanding GDPR in principle. It is integrating privacy governance into the wider reality of digital regulation, AI deployment, product design, supplier management, operational accountability and senior decision-making.

For many organisations, that requires a shift:

  • from privacy as documentation to privacy as governance
  • from siloed compliance to cross-functional coordination
  • from abstract policy to practical implementation
  • from reactive advice to earlier design-stage involvement

That is the deeper significance of the EDPB’s 2025 report. It is not only about what the Board did in 2025. It is about the kind of privacy governance environment European organisations are now being expected to build.

This article is intended to support the learning covered in Hour 1 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Why AI DPIAs Become Harder Than They First Appear

This article accompanies Hour 5: DPIAs in Practice in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What Makes an AI DPIA Difficult

Most organisations do not need to be persuaded that AI use should be assessed. The harder question, and the more useful one in practice, is whether the DPIA is still telling the truth once the system is live. In AI work, that is usually where the real complexity sits. The document may be complete, the workflow may be followed, and the lawful basis may be clear on paper, but the system itself can still move underneath it through changing outputs, shifting reliance, vendor opacity, uncertain ownership of prompts and outputs, and a level of decision influence that is often more significant in practice than it first appears. That is where the assessment stops being a formality and starts becoming a test of whether the organisation really understands the processing it is trying to stand over.

Why this becomes difficult even in mature organisations

In most mature organisations, the problem is not that there is no governance process. There is usually a screening route, some form of privacy review, a procurement track, contractual diligence and a familiar DPIA workflow. In many cases, those structures work well for conventional systems because the relationship between purpose, data, system and output is reasonably stable. The organisation can say with some confidence what the system is for, what data it uses, what the output is, and where the main risks sit.

AI systems are harder to assess because they loosen that relationship. The model may be procured for one purpose but used in several. It may sit inside an existing platform rather than arrive as a standalone project. Its outputs may be advisory, but still influential. The vendor relationship may look processor-like at first glance, but on closer inspection the organisation may have only partial visibility into how the model is trained, updated or bounded. Even where the organisation’s own use case is fairly narrow, the system it is relying on may be the product of a much larger lifecycle that sits outside its line of sight.

That is why experienced teams often feel a degree of friction when carrying out AI DPIAs. The process itself is not necessarily wrong. It is that the assessment is being asked to do more than record a fixed set of facts. It is being asked to capture a moving relationship between use, control, vendor assurances, human behaviour and downstream impact.

The practical difficulty is not that the organisation lacks a DPIA. It is that the system is often more fluid than the assessment assumes.

The question is not whether AI needs a separate framework in every case. The more useful question is whether the current DPIA method is deep enough to capture the operational and legal complexity that AI introduces into otherwise familiar governance environments.

Ownership is often less clear than the DPIA assumes

One of the first places this becomes visible is in the question of ownership. In ordinary vendor arrangements, the organisation can usually draw a fairly clear line. It controls the purpose. It determines the means to a sufficient degree to act as controller. The vendor provides a service and acts, in broad terms, as processor. That model may still be correct at a high level, but in AI systems it often hides a more complicated reality.

The organisation may control the workflow into which the AI tool is introduced, but it may not own the model, the training conditions, the update cycle, or the terms under which prompts, outputs and telemetry are handled. It may know that a processor agreement is in place and that the platform sits within a defined environment, but still be relying heavily on a set of representations from the vendor about how the model behaves, what happens to prompts, whether outputs are reused for training, and what forms of logging or retention take place. 

That is already enough to show why ownership is harder here than a standard controller and processor diagram suggests. The organisation may be fully accountable for the use of the tool in its own processing, while still lacking complete control over the conditions that shape the tool’s behaviour. That does not make deployment impossible. It does mean the DPIA needs to be more candid about the limits of control than many conventional assessments are used to being.

In AI systems, accountability often sits with the organisation more neatly than control does.

If the organisation is being asked to stand over the system, what exactly is it standing over? Is it standing over the workflow in which the system is used? The prompts submitted by staff? The outputs generated? The vendor assurances? The deployment environment? Or all of those together? A DPIA that collapses those layers into one bland statement of processor support is often telling the reader less than they need to know.

Why vendor opacity matters more than IP ownership in practice

In AI DPIAs, the more practical issue is often not who owns the intellectual property in a strict legal sense, but how intellectual property is used to limit visibility into how the system actually works. Vendors will frequently rely on IP protections to avoid disclosing detail about training data, model behaviour, tuning, or how outputs are generated. That position may be legitimate from a commercial perspective, but it creates a gap for the organisation carrying out the DPIA.

The organisation is still expected to assess risk, explain processing, and stand over the system’s use. That becomes more difficult where key elements of the system are effectively treated as a black box.

The organisation is being asked to assess risk in a system it cannot fully see, and that is where the difficulty tends to sit.

This is where the DPIA can become thinner than it appears. The document may describe the use case, the data inputs, and the intended outputs, but still rely heavily on vendor assurances rather than understanding. Where explanations are limited, the assessment often shifts from analysis to trust.

That is not necessarily a failing, but it does need to be recognised for what it is. In practice, this affects several aspects of the risk assessment. It becomes harder to explain how outputs are generated, whether they may reflect underlying training data, how the system may behave in edge cases, or how it might change over time. It also affects the organisation’s ability to explain its own safeguards. Controls such as human review, output checking or restricted use are often relied upon more heavily where the system itself cannot be interrogated.

This is also where the question of prompts and outputs becomes relevant, not primarily as an ownership issue, but as a proxy for understanding how the system is being used and what happens to the data once it enters it. Organisations will often seek confirmation that prompts, outputs and customer data are not reused for model training or service improvement without permission. That is a sensible step, but it is also an indication that the organisation is working within a defined trust boundary rather than full visibility.

At that point, the DPIA is partly an assessment of the system, and partly an assessment of the organisation’s reliance on the vendor.

For a DPO or privacy lead, the more useful question is not whether the organisation has perfect insight into the model. That is rarely achievable. The question is whether the limits of visibility are clearly understood, documented and reflected in how the system is used. If the organisation cannot explain where its understanding ends and where it is relying on vendor assurance, the DPIA is likely to overstate certainty. If it can, the assessment becomes more realistic and more defensible, even where some elements of the system remain opaque.

The assessment often focuses on inputs while the real complexity sits in outputs

Most AI DPIAs start where privacy teams are used to starting, namely with input data. What personal data is being submitted to the system? Is special category data involved? Is children’s data involved? What is the lawful basis? Are there transfer issues? Are there processor terms? All of this remains necessary.

The difficulty is that, in AI use cases, the output is often where the processing becomes genuinely consequential. The output may summarise, classify, prioritise, predict, recommend or draft. Once that output enters a workflow, it may change how staff behave, how issues are framed, how quickly matters are escalated and how confidently decisions are taken. It may also itself become a new record.

That means a DPIA that does not give sufficient attention to outputs is often assessing only half the story. A system that takes in personal data and returns an answer is not complete when the answer is generated. The real processing chain continues when that answer is used. In some contexts, the output becomes part of the permanent file. In others, it is copied into correspondence or becomes the basis of a recommendation to another decision-maker. In still others, it is used transiently, but still shapes what someone chooses to do next.

In AI systems, outputs are often not the end of processing. They are the beginning of the next stage.

That is one reason why generic phrases such as “AI is used only to assist staff” can be so weak in practice. Assistance can still be consequential. An assistive tool that changes how staff interpret an enquiry, assess a risk, or allocate attention may not be determinative in a legal sense, but it is still part of the machinery that produces outcomes. 

Decision influence is often more important than formal automation

Many organisations become overly focused on whether Article 22 is engaged. That is understandable, but it can lead to the wrong emphasis. In practice, the more difficult question is often not whether the decision is fully automated, but how significantly the AI tool is influencing what humans do.

One of the areas where AI DPIAs tend to require more careful judgement is in distinguishing between formal automated decision-making and practical decision influence. Many systems will be positioned, correctly, as assistive rather than determinative. There is a human in the loop. The system does not make final decisions independently. On that basis, the strict threshold for automated decision-making may not be met. That is often where the analysis stops.

In practice, the more relevant question is not whether the system is formally making decisions, but how much it is shaping them. AI tools can influence how information is presented, how cases are prioritised, how risks are framed, and how responses are drafted. Even where a human retains responsibility, the system may still play a significant role in directing attention and shaping judgement.

The absence of fully automated decision-making does not mean the absence of meaningful influence.

This is where DPIAs tend to benefit from going beyond classification. The presence of a human reviewer, by itself, does not say much about how decisions are actually made. The more useful analysis looks at how that review operates in practice: whether outputs are routinely challenged, whether alternative interpretations are considered, and whether the user has both the context and the authority to depart from the system’s output where necessary.

A similar issue arises in other types of system design, where the question is not whether a process is technically automated, but whether it materially shapes outcomes. In those contexts, the focus tends to move away from labels and towards effect. Who sees what, in what order, with what framing, and with what constraints? Those are often the factors that determine how decisions are ultimately reached. The same approach is useful in AI assessments.

The more practical question is not whether the system makes the decision, but how much it shapes the conditions under which the decision is made.

For a DPIA, that shift in perspective allows the assessment to better capture where risk arises in practice. It also provides a clearer basis for identifying safeguards, not only at the point of decision, but in how the system is embedded within the workflow that leads to that decision. The presence of a human does not tell you very much by itself. The more useful question is what room remains for real human judgement once the system’s output is in front of them.

Human oversight only matters if it changes the outcome

“Human oversight” is one of the most overused phrases in AI documentation, partly because it often sounds reassuring while remaining undefined. A strong DPIA should resist that temptation. The fact that a human looks at an output does not necessarily mean the output is meaningfully challenged. Nor does it mean the risk created by the system has been mitigated to a defensible level.

The better question is whether the human user has the authority, time, context and habit of mind to depart from the AI output where needed. If the workflow, productivity expectation or cultural framing of the tool encourages the user to accept outputs at pace, the control may exist on paper while doing much less in practice. If the system is used repeatedly and begins to feel reliable, challenge may reduce over time even where the initial design assumed active review. If staff are not trained to recognise uncertainty, hallucination, model limitation or output ambiguity, the review may become little more than a sense-check.

The human review itself needs to be designed as a control. It needs to be evidenced. It needs to sit in a workflow where disagreement with the output is legitimate, expected and not quietly penalised by time pressure.

Human oversight only has governance value if it is capable of changing the outcome.

If the reviewer disagreed with the system, what would happen next? If the answer is unclear, then the organisation may be relying on a control it has not really operationalised.

Legal basis and necessity can drift even where nobody thinks the use case has changed

A further pressure point in AI DPIAs is the tendency to treat legal basis and necessity as fixed once the initial assessment has been completed. In many cases, the original reasoning is sound. The organisation identifies a legitimate purpose, determines that the processing is compatible with that purpose, and documents the lawful basis accordingly.

The difficulty is that AI systems can change how that purpose is operationalised without anyone consciously deciding that the use case has changed. The same tool begins to support more teams. Outputs start to shape decision-making more directly. A summarisation tool begins to act as a risk triage aid. A drafting assistant becomes part of formal communication. A classification tool quietly changes how issues are prioritised. Nothing about the headline purpose may appear to have moved. In practice, the processing may be doing more, or doing something more intrusive, than the original justification assumed.

The lawfulness problem is often not that the organisation chose the wrong lawful basis at the start. It is that the organisation no longer tests whether the current behaviour of the system is still consistent with the reasoning it originally documented.

The purpose statement may remain stable while the operational reality beneath it becomes more ambitious.

In your organisation, revisit one live use case. Ask what the system is now doing in practice. Then ask whether the original necessity analysis still describes that reality.

Vendor opacity creates borrowed risk

Vendor dependence is not itself unusual. What makes AI different is the extent to which the organisation may be asked to rely on a system it cannot fully inspect, control or explain, while still carrying the burden of accountability for how it is used. That creates what can fairly be described as borrowed risk.

The organisation may receive assurances about data residency, processor role, model training separation, retention and isolation. Those assurances matter and should be documented. For example, you might look for a DPA, EU-based deployment, and confirmation that prompts and outputs are not used for model training, and treat positive answers as key assurances that materially improve the data protection position. But those assurances do not eliminate the underlying asymmetry. The organisation is still being asked to stand over a tool it did not build and cannot fully interrogate.

That does not mean the DPIA should become speculative. It does mean it should be candid about the trust boundary. Where the organisation is relying on vendor representations, that reliance should be explicit. Where the vendor can update the system in ways that may affect output behaviour, the review trigger should not depend on the organisation discovering change after the fact. Where the organisation cannot fully assess the training lifecycle, that limitation should inform how narrowly or cautiously the tool is deployed.

A vendor assurance can improve the risk position. It does not remove the fact that some of the risk remains borrowed.

The real test is whether the organisation can evidence how the system is actually used

At some point, every strong DPIA comes back to evidence. Not because documentation is the goal in itself, but because accountability is difficult to sustain where the organisation cannot show how the stated controls work in practice.

Do not simply settle on the least-worst interim option; look instead to support implementations through formal access approval, SOPs, audit, time-bound permissions, a DPIA addendum as development moves on, and updated transparency measures. Comfort might be linked to documented confirmation of the processor position, data usage, retention and continued human review. The common theme is that defensibility comes from the combination of reasoning and proof.

That matters in AI work because systems can drift quietly. Outputs may be reused. Users may rely more heavily on them. Vendor changes may affect behaviour. If the organisation has no logs, no review trail, no evidence of challenge, no record of who approved expanded use, and no mechanism for revisiting the DPIA when the system changes, then the assessment may still exist but its value under scrutiny is much lower.

A DPIA is strongest where the organisation can still explain the live system six months later and show how its controls actually operate.

Senior management and executive boards do not need another generic assurance that an AI assessment was completed. They need to know whether the organisation can evidence the assumptions on which that assessment rests.

What next for your organisation?

The most useful response to this complexity is usually not to rewrite the template first. It is to test one live use case against present reality. Take a current AI deployment and trace it from input to output to downstream use. Start with the prompts or source data being submitted. Then look at what comes back from the system. Then follow what happens next. Does the output sit transiently on screen, or does it become part of a case file, client communication, recommendation, or record of decision? Who is expected to review it? What would count as disagreement with it? What does the contract or vendor documentation say about prompts, outputs, retention and training? What happens when the vendor changes the service? Which part of that story is clearly owned by the organisation, and which part depends on trust in the vendor environment?

Then read the DPIA against that workflow, not the other way around. If the document still accurately describes the processing, identifies the main points of influence, reflects the real control position and is supported by evidence that the controls operate, the organisation is in a stronger place than many. If it does not, the issue is usually not that the DPIA was pointless. It is that the assessment needs to be reconnected to the live system.

The DPO is the function that tests whether the organisation is still telling itself the truth about the system it is using.

Practical Takeaway

For a DPO, privacy lead or senior compliance professional, the most practical next step is to pick one live AI use case and run a focused reality check against the current DPIA. Do not start with the template. Start with the workflow.

Trace the processing from the moment data is submitted to the system through to the point where an output influences action. Confirm what categories of personal data are actually being used in practice, including any drift from the original use case. Check whether special category data, children’s data, confidential commercial material or legally sensitive content are now entering the tool more often than originally assumed.

Then test the ownership and control picture. Can the organisation explain what it owns, what it merely uses, and what it is relying on vendor assurances to support? Is there a clear position on prompts, outputs, telemetry, retention and model training? Can the organisation explain whether outputs are stored, reused, incorporated into files, or copied into communications? If those points are not clear, the DPIA is already working on incomplete assumptions.

Move next to output handling. Identify whether outputs remain transient or whether they become records, recommendations, drafts, triage inputs, notes or evidence relied upon later. Ask whether the DPIA currently treats outputs as part of the processing chain or still focuses mainly on inputs. In many organisations, this is where the assessment is thinnest.

Then look at decision influence. Forget for a moment whether Article 22 is engaged. Ask instead how much practical weight the output carries. Does it change how quickly a matter is escalated, how attention is allocated, how a person is described, or how a professional frames their judgement? If the answer is yes, then the DPIA should say so in plain terms.

After that, test human oversight as a real control. Who reviews the output? What are they expected to do with it? Can they depart from it without friction? Is there evidence that challenge happens in practice? Have users been trained on limitations, or only on functionality? If human review is being relied upon as the key safeguard, it should be possible to explain how it changes outcomes and where that is evidenced.

Then revisit legal basis and necessity. Ask whether the purpose statement and lawful basis still match how the system is actually being used now. This is particularly important where the same tool has spread to new teams, new contexts or more influential stages of decision-making. If the use has become more ambitious, the original justification may need to be refreshed even where the headline purpose appears unchanged.

Finally, test evidencing and governance connection. Can the organisation show vendor due diligence, approval records, review triggers, logs, audit trails, SOPs, training records, change control, and any escalation or reapproval where the use case expanded? If the answer is largely no, then the DPIA is doing too much work alone.

A practical checklist for that review is below.

  • Does the DPIA still describe the live system rather than the original proposal?
  • Are prompts, outputs and any derived content clearly accounted for in the assessment?
  • Is there a documented position on whether prompts or outputs are reused for model training, service improvement or other vendor purposes?
  • Can the organisation explain the ownership and permitted use position for outputs?
  • Are outputs stored, reused, copied into files or relied on in later processing?
  • Does the assessment explain how outputs influence decisions, prioritisation or professional judgement in practice?
  • Is human oversight defined as a real control, with authority, training, time and evidence behind it?
  • Has reliance on the system increased over time in ways the original DPIA did not anticipate?
  • Do the lawful basis and necessity analysis still reflect the current use of the tool?
  • Are vendor updates, new data sources, expanded use cases and incidents clearly defined as review triggers?
  • Can the organisation evidence the operation of its controls through logs, audit records, approvals, training and governance documentation?
  • If challenged by a regulator, auditor or court six months from now, could the organisation still explain the system as it is actually being used?

That exercise is rarely wasted. Even where the answer is broadly positive, it usually sharpens the organisation’s understanding of where the true points of risk and reliance sit. Where the answer is less comfortable, it gives the DPO or privacy lead something much more useful than another abstract debate about AI governance. It gives them a concrete basis for updating the assessment so that it once again reflects the system the organisation is genuinely trying to stand over.

This article is intended to support the learning covered in Hour 5 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

DPC and EDPB Annual Reports for 2024

This article accompanies Hour 1: Global Privacy Law Updates in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What Organisations Should Take Away

The 2024 annual reports published by the Data Protection Commission (DPC) and the European Data Protection Board (EDPB) are useful for far more than tracking enforcement statistics. Read together, they provide a practical picture of where supervisory attention is being directed, which failures continue to recur, and what organisations should now be reviewing in their own privacy governance.

That is particularly important in Ireland. The DPC sits at the intersection of domestic complaints, breach reporting, supervision and cross-border regulation. The EDPB, in turn, continues to shape a more consistent European approach to enforcement, guidance and technological change. When those two layers are read together, the result is not simply a summary of what happened in 2024. It is a fairly clear indication of what organisations should expect to matter in 2025 and beyond.

For DPOs, compliance leads, legal teams and senior management, the real value of these reports lies in the signals they send. Those signals are not confined to fines. They touch the recurring weaknesses that show up in access rights, breach handling, operational discipline, role clarity, AI governance, board visibility and the overall ability of an organisation to evidence accountability.

In this article, we look at the most significant themes emerging from the DPC and EDPB annual reports for 2024 and, for each one, we set out what we commonly see in practice when organisations try to operationalise these obligations.

Enforcement remains active, but the more useful lesson is what sits behind it

The DPC’s 2024 Annual Report records 11 finalised inquiry decisions and over €652 million in administrative fines. It also shows a regulator dealing with substantial operational volume: over 32,000 contacts, 11,091 new cases processed, and 10,510 cases resolved in 2024 alone. By year end, 89 statutory inquiries remained on hand, while four large-scale inquiries were concluded during the year.

Those figures matter, but for most organisations the more useful lesson is not the amount of fines in isolation. It is what the DPC’s workload tells us. Data protection compliance remains a live governance issue. The DPC’s work in 2024 ranged across cross-border inquiries, domestic breach cases, CCTV, children’s data, AI model development, biometrics, sensitive health data, supervision activity, and legislative engagement. That is a broad and demanding field of regulatory attention.

The DPC’s foreword is also worth noting for tone as much as substance. It emphasises fairness, consistency and transparency, but it also states clearly that while organisations have flexibility in how they structure compliance, they remain accountable for the choices they make to both individuals and regulators. In other words, flexibility in approach is not a substitute for evidence of control.

The EDPB annual report reinforces that broader direction of travel. Its 2024 – 2027 strategy is built around four pillars: advancing harmonisation and promoting compliance, reinforcing a common enforcement culture, addressing technological challenges, and enhancing the EDPB’s global role. That is a useful frame for organisations because it shows that data protection authorities are not only reacting to specific infringements. They are also building a more structured, more consistent, more cross-border regulatory environment.

In practice, one of the recurring difficulties is that organisations still tend to treat regulatory developments as something that happens externally, rather than as indicators of how their own internal arrangements are likely to be assessed. A fine against a large platform is often viewed as distant from the experience of an ordinary Irish organisation. The more useful reading is usually the opposite. The issue is not whether an organisation will face the same factual pattern as a large-scale inquiry. The issue is whether the same underlying governance weaknesses are present in a smaller, less visible form.

We also often see organisations underestimate how much the DPC’s broad activity matters outside formal enforcement. Complaints handling, supervision, public guidance, case studies and legislative input all tell organisations something about regulatory expectations. If the privacy function only looks at headline fines, it will miss the more practical indications of what the regulator continues to see going wrong.

In terms of internal considerations for your organisation, a sensible first step is to move away from reading annual reports as backward-looking summaries. They should be read as practical indicators of:

  • which issues continue to generate friction
  • which operational weaknesses are still common
  • which areas are likely to receive more structured attention in the near future

For boards and senior management, the key point is that privacy remains a current governance issue and not simply a legal housekeeping topic.

Regulatory activity remains substantial. The DPC’s 2024 Annual Report records 11 finalised inquiry decisions and over €652 million in fines, alongside 11,091 new cases processed and 10,510 cases resolved. The broader implication is that privacy compliance continues to be an active governance and accountability issue rather than a background legal function.

Access rights remain one of the clearest indicators of programme weakness

Access rights remain a central theme in the DPC’s report. Subject access requests account for the largest share of national complaints, representing 34% of all complaints received, followed by fair processing at 17% and the right to erasure at 14%. By the end of 2024, the DPC had received 914 new complaints solely related to the right of access and concluded 904.

The DPC also explains why so many of these complaints arise. In many cases, organisations either fail to respond within the required timeframe or apply redactions and exemptions without giving sufficiently clear explanations. The report is explicit that it is not enough merely to list an exemption or cite legislation. The reason the exemption is being applied should be clearly explained, and those decisions should be documented.

The DPC case studies make this even more concrete. A hospital provided the requested access data only after DPC intervention, despite the urgency of the matter. A financial services provider withdrew its reliance on restrictions and released previously withheld personal data only after the DPC challenged the legal basis for withholding it. Another organisation initially over-redacted records and later re-released them in partially redacted format after engagement with the DPC on the balancing exercise required by Article 15(4) GDPR.

The EDPB report places this in a wider European context. One of its 2024 highlights was the launch of a coordinated enforcement action on the right of access in February 2024. That is a useful signal. Access rights are not just a recurring domestic irritation; they remain a live topic of coordinated supervisory interest across Europe.

In practice, access rights often expose broader weaknesses in privacy governance. The legal right itself is not usually the main difficulty. The recurring problem is the organisation’s operational ability to comply consistently. Common issues include:

  • uncertainty over who owns the request internally
  • incomplete searches across mailboxes, shared drives or business systems
  • over-reliance on individual employees who “know where the data is”
  • poorly documented decisions on exemptions and redactions
  • difficulty explaining why certain information has been withheld
  • delays caused by weak coordination between legal, HR, operations and IT

A common issue is that organisations assume DSAR handling is mainly about retrieval. In practice, it is equally about judgement, explanation and evidencing of reasoning. Where that reasoning is weak or overly informal, the response often becomes difficult to defend.

Another recurring issue is that access rights are treated as episodic rather than systemic. If there is no repeatable internal workflow, organisations end up re-solving the same problem each time a request comes in. That is one reason DSAR performance often becomes such a useful indicator of the maturity of a privacy programme more generally.

Access rights are often one of the first places where weak accountability becomes visible, so what can you do? Review whether your access request process is genuinely operationalised:

  • Is ownership clear?
  • Can you demonstrate adequate searches?
  • Are exemptions and redactions documented properly?
  • Can the reasoning be explained clearly to the individual and, if necessary, to the DPC?

Access rights remain a key area of exposure. Subject access requests account for 34% of complaints received by the DPC. This makes DSAR handling a practical indicator of whether privacy governance is functioning effectively, particularly in relation to timelines, searches, exemptions, redactions and decision-making quality.

Breach trends still point to ordinary operational weakness

The DPC received 7,781 valid breach notifications in 2024, an 11% increase on the previous year. Of those notifications, 81% were concluded by year end. The most important practical point, however, lies in the cause of those breaches. Fifty per cent of notified cases arose because correspondence was sent to the wrong recipient.

The DPC’s breach chapter develops that point in more detail. The highest category of breaches continues to involve unauthorised disclosures affecting single individuals or small groups, with poor operational practices and human error remaining prominent. The detailed breakdown shows:

  • 32% postal material to incorrect recipient
  • 14% email to incorrect recipient
  • 10% accidental or unauthorised alteration
  • 8% accidental loss or destruction
  • 5% hacking

This is one of the most useful themes in the annual report because it corrects a common misconception. Many organisations continue to associate privacy incidents primarily with cyber compromise. The DPC’s figures show that ordinary administrative failures remain an equally important source of regulatory exposure.

The DPC’s case studies reinforce this. In the broadcasting-sector phishing case, an employee was deceived into disclosing credentials, leading to unauthorised access to personal and special category data, with the DPC pointing to improved filters, training and revised procedures as part of the response. In another case, a third-level institution published non-anonymised survey data, leading to review of internal reporting processes and stronger controls. In another general case study, the DPC addressed the forwarding of work emails and special category data to a personal email account, again highlighting the importance of both technical and organisational controls.

In practice, many organisations have incident notification procedures but less mature arrangements for reducing repeat incidents over time. The response process exists, but the learning loop is weaker. Commonly observed issues include:

  • good incident logging, but weak analysis of recurring themes
  • legal and privacy teams involved late rather than early
  • corrective actions agreed in principle, but not tracked to closure
  • overly narrow focus on whether a breach is reportable, rather than why it happened
  • underinvestment in very ordinary controls, especially around correspondence, manual handling, exports and publication

A recurring issue is that operational teams may not see privacy incidents as part of governance. They may be treated as isolated mistakes rather than symptoms of a process or control issue. That makes repeat incidents much more likely.

Another common difficulty is that breach reporting to senior management can be descriptive rather than managerial. An incident is noted, but the question of whether similar incidents are reducing over time is not always asked clearly enough.

Breach management is not just about notification; it is also about demonstrable reduction in repeated failure. A useful next step is to review your incident landscape in practical terms:

  • What are your most common breach types?
  • Are the same errors recurring?
  • Which ones are administrative and therefore more controllable?
  • Can you show that lessons learned have led to specific changes?

Breach trends remain heavily operational. The DPC received 7,781 valid breach notifications in 2024, with 50% arising from correspondence being sent to the wrong recipient. This supports continued focus on administrative controls, checking procedures, staff discipline and repeat-incident reduction rather than relying solely on breach notification workflows.

The case studies show that basic accountability failures remain common

The value of the DPC’s case studies is that they move the discussion away from abstract trends and show what organisations are still getting wrong in concrete terms. They reveal repeated issues in timing, documentation, explanation, role clarity and ordinary operational judgement.

Access request case studies show delayed responses, weak searches, poor handling of exemptions, and over-redaction. The controller/processor case study shows the continuing importance of properly identifying roles and understanding who must actually respond to a rights request. The personal-email case study illustrates how ordinary staff behaviour can create significant exposure, especially where special category data is involved. The rectification case study shows how customer service issues can quickly become privacy complaints where inaccurate personal data causes practical harm.

These are not exotic scenarios. That is precisely why they are useful. They remind organisations that a significant part of data protection compliance still comes down to the quality of routine governance and service delivery.

In practice, one of the recurring mistakes is to assume that serious data protection risk only arises in large-scale or technologically complex contexts. Very often, the issue is much more ordinary:

  • a request is not picked up on time
  • the search is incomplete
  • the explanation is weak
  • the agreement is unclear
  • the wrong document is shared
  • the control exists on paper but not in day-to-day behaviour

We also see organisations separate service problems from privacy problems too sharply. In reality, the boundary is often thin. Inaccurate data, weak complaint-handling, poor customer correspondence or unclear internal ownership can all become data protection issues very quickly when they affect rights or outcomes.

You can use the DPC’s case studies as a practical audit tool:

  • Which of these failures could happen here?
  • Which already have?
  • Are our controls strong enough to prevent them?
  • Are our staff sufficiently clear on what to do when something goes wrong?

EU coordination is becoming more important, not less

The DPC’s annual report reflects an Irish regulator operating in an increasingly coordinated European environment. The EDPB’s report makes that development more explicit.

The EDPB’s 2024 – 2027 strategy focuses on harmonisation, common enforcement culture, technological challenges and cross-regulatory cooperation. It also highlights the expanding responsibilities of data protection authorities in the context of the AI Act, DMA, DSA, Data Act and other digital frameworks. The Board notes that it issued 28 consistency opinions in 2024, including eight under Article 64(2), designed to address matters of general application or major cross-border relevance.

The report also underlines the role of coordinated enforcement. The 2024 highlights include a coordinated enforcement report on the role of DPOs, the launch of a coordinated enforcement action on the right of access, and the adoption of Opinion 28/2024 on AI models in December 2024.

This is important because it means organisations should increasingly assume that core GDPR issues are being understood within a shared European framework. That affects not only multinational businesses. It also affects domestic organisations whose practices touch issues that are receiving coordinated regulatory attention, such as access rights, AI, profiling, children’s data or cross-border services.

In practice, organisations often track DPC developments more closely than EDPB developments. That is understandable, but it can leave a gap in strategic awareness. We often see privacy governance shaped around domestic complaint themes, immediate contractual issues, specific incidents, and/or sector expectations. What can get missed is the extent to which EU-level consistency work shapes the direction of travel. That means organisations sometimes respond to a theme too late, after it has already become part of a broader supervisory agenda.

Another practical issue is that some organisations still assume that “European” developments only matter where large-scale cross-border processing is involved. Increasingly, that is too narrow. If a theme is receiving EDPB-level attention, it often signals a broader expectation of consistency that will eventually affect ordinary organisational practice as well.

Do not read the DPC report in isolation. The more useful question is:

  • what themes are being reinforced at EU level?
  • where is consistency increasing?
  • what does that suggest about where scrutiny is likely to deepen next?

The regulatory environment is becoming more coordinated across the EU. The EDPB’s 2024 – 2027 strategy focuses on harmonisation, enforcement culture, technological challenges and cross-regulatory cooperation. Organisations should assume that core privacy risks are increasingly being assessed in a more consistent European framework.

AI has moved firmly into mainstream privacy governance

One of the clearest themes in both the DPC and EDPB material is the centrality of AI to current regulatory thinking.

The DPC’s foreword states that regulation of AI model training attracted a great deal of public interest in 2024 and notes that new inquiries were commenced into AI models, biometrics and the security of sensitive health data. Its 2024 timeline records DPC engagement with Meta’s LLM plans, High Court proceedings concerning X’s Grok processing, the launch of an inquiry into Google’s AI model, and the DPC’s request to the EDPB for an Article 64(2) opinion on the use of personal data for development and deployment of AI models.

The EDPB annual report adds the broader European layer. Its foreword explains that the Board adopted an opinion on the use of personal data to train AI models in order to support responsible AI innovation while ensuring protection of personal data and compliance with the GDPR. The same section notes that AI developers can use legitimate interests as a legal basis for model training under certain conditions and that the EDPB set out a structured three-step test to help developers determine lawful use.

This is an important regulatory message. AI is not treated as outside GDPR. Nor is it treated as unlawful by default. It is treated as something that must be governed within ordinary accountability structures, using disciplined analysis rather than assumption.

The EDPB also makes clear that the AI Act and other digital legislation are expanding the responsibilities of DPAs and intensifying cross-regulatory cooperation. For organisations, that means AI governance is likely to become more, not less, integrated with privacy, product, risk and regulatory oversight.

In practice, AI adoption often moves faster than governance. Organisations begin using AI tools, copilots, model-based services or AI-enabled vendors before internal accountability arrangements have caught up. Recurring issues include:

  • uncertainty over lawful basis
  • weak transparency analysis
  • unclear supplier roles and sub-processing chains
  • underdeveloped DPIAs or no DPIA refresh at all
  • insufficient clarity on what personal data is being used, where, and for what purpose
  • treating AI as an innovation topic first and a governance topic second

A common issue is that organisations may have sensible general privacy controls but have not yet adapted them to AI-related realities. For example, vendor diligence may not yet ask the right questions about model training, retention, downstream use or human oversight. Similarly, internal teams may not yet distinguish clearly between use of an AI-enabled tool and development or deployment of an AI system with a materially different risk profile.

What to do? Map where AI is already present:

  • internal tools
  • external vendors
  • product features
  • workflow automation
  • model-assisted decisions

Then ask:

  • is this captured in our privacy governance?
  • has lawful basis been assessed properly?
  • do our notices and internal records reflect reality?
  • do we know what our vendors are doing with personal data?

AI is now part of mainstream privacy governance. Both the DPC and EDPB treated AI model development and deployment as core regulatory topics in 2024. AI-related processing should therefore be governed through existing privacy, risk and accountability structures rather than treated as a separate informal innovation stream.

DPIAs, role clarity and processor accountability remain highly practical issues

Even where the annual reports do not dwell on DPIAs as a standalone theme, they reinforce the wider accountability expectations that make DPIAs and role clarity so important.

The DPC’s access-related case studies show that role confusion still arises, particularly around controller and processor responsibilities. In one case, the DPC accepted that an organisation was acting as a processor and had complied with its obligations by referring the request to the controller, supported by a detailed written agreement setting out roles and instructions. This is a useful reminder that outsourcing does not remove responsibility. Rather, it increases the need for clear role definition and operational coordination.

The DPC’s annual report also shows how much emphasis continues to fall on practical explanations, evidence, and ability to justify decisions. That same logic applies to DPIAs and other risk assessments. It is no longer sufficient to have a template completed somewhere in the project file. The question is whether risk has been assessed at the right time, whether alternatives have been considered, whether decisions can be followed, and whether safeguards are reflected in actual controls.

In practice, role clarity and risk assessment still cause difficulty. DPOs see:

  • processor agreements that exist, but do not really support day-to-day rights handling or breach response
  • unclear internal understanding of who is controller, processor or joint controller in more complex service chains
  • DPIAs drafted too late in the project lifecycle
  • risk assessments that identify issues but do not clearly drive design or operational change
  • mitigations that are described, but not obviously tied to real controls

These issues often become more acute in AI-related, vendor-heavy or fast-moving projects. Where several parties are involved, or where technology adoption is proceeding quickly, the temptation is often to finalise role allocation and risk analysis after the main decisions have already been made.

Useful review points are:

  • whether processor/controller roles are clearly documented and understood
  • whether key agreements support rights handling, incident management and accountability
  • whether DPIAs are being carried out early enough and updated where processing changes
  • whether risk assessments are functioning as decision tools rather than paperwork

The common message is visible, measurable and auditable accountability

Taken together, the DPC and EDPB material points in a common direction. Privacy programmes are increasingly expected to be visible to decision-makers, measurable in practice and capable of withstanding scrutiny.

The DPC’s values include accountability, fairness, consistency and transparency. The EDPB strategy places emphasis on harmonisation, enforcement, practical guidance and technological readiness. Both sets of material suggest that regulators are looking beyond the existence of policies. The more important question is whether organisations can show how privacy governance actually works.

That means showing:

  • how rights are handled
  • how incidents are learned from
  • how high-risk processing is assessed
  • how senior management is informed
  • how improvements are tracked
  • how accountability is evidenced over time

In practice, board and executive visibility remains uneven. Many organisations do report privacy issues upwards, but the reporting is not always sufficiently management-focused. Common weaknesses include:

  • narrative-heavy reporting with limited metrics
  • updates that describe activity but do not clearly show trend or risk
  • breach reporting without repeat-incident analysis
  • rights reporting without process-health indicators
  • DPO reporting lines that technically exist but do not create real organisational visibility

A recurring issue is that privacy becomes visible to leadership after an incident, but less visible in advance of one. That makes it harder for the organisation to demonstrate proactive accountability.

In your organisation, ask the following:

  • What does the board actually see on privacy?
  • Are privacy metrics meaningful and decision-useful?
  • Can the organisation show trends, not just isolated events?
  • Is accountability visible before regulatory or reputational issues arise?

Current expectations increasingly favour privacy programmes that are visible, measurable and auditable. Regulators are looking beyond policies to whether organisations can show functioning governance, practical control, clear ownership and evidence of remediation.

Summary

The DPC and EDPB annual reports are useful not only because they describe the last year of regulatory activity. They are useful because they show where pressure continues to build and what kinds of organisational weakness remain most likely to matter.

Many of the issues that continue to generate complaints, breaches and supervisory attention are not new. They are recurring weaknesses in access handling, explanation, operational discipline, accountability, role clarity and governance visibility. What is changing is the environment in which those weaknesses are being judged. It is becoming more coordinated, more structured and more technologically alert.

For many organisations, the real challenge is not understanding GDPR in principle. It is embedding that understanding into ordinary governance, processes, decision-making and reporting. That is what the DPC and EDPB annual reports help to illuminate, and that is why they remain worth reading carefully.

This article is intended to support the learning covered in Hour 1 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Vendor Oversight and Legal Characterisation

This article accompanies Hour 4: Vendor Management Oversight in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Vendor oversight often weakens before monitoring even begins

Vendor management is often framed too narrowly. The organisation onboards a supplier, legal reviews the paperwork, a data processing agreement is requested, and privacy is then asked whether the vendor is “covered”. That sequence is familiar. It is also where a great deal of weak practice begins.

The first privacy question in a vendor relationship is not whether a DPA is in place. It is whether the organisation has correctly understood what role the other party is actually performing in relation to the personal data. If that point is missed, the rest of the oversight model is built on the wrong foundation. The wrong agreement may be used, the wrong risks may be reported, or the wrong controls may be prioritised. In some cases, the organisation may believe it has documented accountability when it has only described part of the relationship.

Vendor oversight often weakens at the point where the organisation chooses the wrong legal model for the relationship and then builds its controls around that error.

This is one of the reasons vendor management repeatedly becomes a source of exposure. Organisations often move too quickly from “supplier involved” to “processor appointed”. That may be right in some cases. In others, it is incomplete or simply wrong.

A supplier may be a processor for some elements of the service and an independent controller for others. In some relationships, the other party is not a processor at all. In others again, the arrangement may involve elements of joint controllership that are not being recognised because the parties prefer the apparent neatness of processor language.

That is why this topic matters for the DPO or privacy manager. Vendor oversight is not just about monitoring suppliers after onboarding. It starts with legal and factual accuracy. If the relationship is characterised too loosely at the outset, the organisation’s privacy position is likely to remain weaker than it appears throughout the life of the arrangement.

The classification question comes before the agreement question

Privacy review is often reduced to paperwork. The contract arrives, a DPA is attached, and the organisation asks whether the necessary clauses are present. That is a normal operational step, but it is not the point at which the analysis should begin.

The more important question is what the facts of the arrangement actually show. Who is deciding why the personal data is being used? Who is deciding the essential means of the processing? Is the supplier acting only on the organisation’s instructions, or is it using the data for its own purposes as well? Does the answer differ depending on the data flow, the service element or the stage of the relationship?

These are not drafting niceties. They determine the legal model that should apply. A processor is not created by labelling the supplier a processor. A controller-to-controller relationship is not avoided by putting Article 28 wording into a contract. Joint controllership is not displaced simply because the parties would prefer one side to take a more passive label. The factual reality of the processing remains the starting point.

That matters because privacy obligations attach differently depending on the role actually being performed. If the organisation applies a processor framework where the other party is determining its own purposes, the contract may give a false impression of control. If the relationship is treated as controller-to-controller where the organisation is in fact instructing the other party in a more constrained way, the oversight model may be looser than it should be. If the relationship contains mixed elements, but only one side of the picture is documented, the organisation may be carrying accountability gaps that do not become obvious until the relationship is tested.
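
For organisations that want to make this triage repeatable, the factual questions above can be captured in a simple internal aide. The sketch below is a minimal, illustrative example only: the field names, labels and mapping logic are assumptions for the purposes of illustration, and the output is a prompt for legal analysis rather than a legal determination.

```python
from dataclasses import dataclass

@dataclass
class VendorFacts:
    """Hypothetical factual inputs gathered during vendor review (illustrative only)."""
    processes_on_documented_instructions: bool  # core service delivered on the organisation's instructions
    determines_own_purposes: bool               # e.g. product improvement or fraud control on its own account
    determines_essential_means: bool            # decides essential means beyond the organisation's instructions
    purposes_jointly_determined: bool           # purposes and means shaped together with the organisation

def suggest_classification(f: VendorFacts) -> str:
    """Return a provisional label to prompt legal analysis; it is not a conclusion."""
    labels = []
    if f.processes_on_documented_instructions:
        labels.append("processor for the instructed processing (Article 28 framework likely)")
    if f.determines_own_purposes or f.determines_essential_means:
        labels.append("independent controller for its own purposes (controller-to-controller terms likely)")
    if f.purposes_jointly_determined:
        labels.append("joint controller for the shared purposes (Article 26 arrangement likely)")
    if not labels:
        return "unclear: escalate for legal analysis"
    if len(labels) > 1:
        return "hybrid relationship: " + "; ".join(labels)
    return labels[0]

# Example: a SaaS provider that follows instructions for the hosted service
# but also uses telemetry for its own product improvement.
facts = VendorFacts(
    processes_on_documented_instructions=True,
    determines_own_purposes=True,
    determines_essential_means=False,
    purposes_jointly_determined=False,
)
print(suggest_classification(facts))  # -> hybrid relationship: processor ...; independent controller ...
```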

The privacy risk in many vendor arrangements begins before any monitoring issue arises. It begins when the relationship is documented on a legal basis that does not match the facts.

For the DPO or privacy manager, this is one of the most useful ways to reframe vendor review. The immediate task is not to decide whether the paperwork is present. It is to decide whether the legal characterisation is sound enough to support everything that follows.

Where the supplier really is a processor

There will, of course, be many cases where the supplier is acting as a processor. If the organisation determines the purposes of the processing, defines the essential features of what is to be done with the data, and the supplier is carrying out that processing on documented instructions, then the processor framework is likely to be the correct one.

Where that is the case, the familiar GDPR consequences follow. Article 28 becomes central. Appropriate contractual provisions are required. Due diligence matters. Oversight matters. The controller remains accountable for the processing carried out on its behalf. None of that is in doubt. What is often misunderstood is what the processor model does and does not achieve in practice.

A processor agreement is not a substitute for understanding the service. It does not eliminate the need to assess what the processor is actually doing, how sub-processing is structured, how changes to the service are managed, whether monitoring is meaningful, or whether the organisation’s theoretical rights are usable in practice. An Article 28 framework only supports accountability if the organisation is able to retain some practical grip on the relationship over time.

Weak processor oversight is not usually just a drafting failure. It is often a failure to revisit whether the organisation still understands the relationship as it has developed. A processor approved for a bounded purpose may, over time, handle larger volumes of data, support more business functions, rely on a wider sub-processing chain or become much harder to challenge commercially than at the point of onboarding.

A processor arrangement is only as strong as the organisation’s ability to understand the service, monitor material change and respond where the practical level of control weakens over time.

This is where the DPO or privacy manager needs to be more exact than a basic contract review allows. The issue is not simply whether the processor clauses are there. It is whether the processor model still fits the service as used, and whether the organisation’s oversight is operating at the level the risk now requires.

Not every vendor relationship is a processor relationship

One of the most persistent oversimplifications in vendor management is the assumption that any supplier handling personal data is acting as a processor. That is often operationally convenient, but it can be legally and practically misleading.

Some third parties use personal data for their own purposes and within their own regulatory or commercial frameworks. In those relationships, they are not simply carrying out another organisation’s instructions. They may be receiving data from the organisation, but that does not make them its processor.

That point is easy to lose in practice because controller-to-controller relationships are often less tidy from a governance point of view. They force the organisation to confront the fact that the other party has its own purposes, its own lawful basis position, its own transparency obligations and, in many cases, its own onward disclosure model. The organisation does not retain the same type of instruction-based control that exists in a processor relationship.

Examples vary by context, but the pattern is familiar. A professional adviser may receive personal data and use it within its own regulated professional role. An insurer, benefits provider, fraud prevention service, credit agency, external platform provider or specialist analytics provider may determine certain purposes of use for itself. Some software providers may act as a processor for the hosted client environment while separately using certain data for product security, abuse detection, fraud control or service improvement on their own account.

In those cases, a DPA alone is not enough. In some cases, it is not the right primary instrument at all.

A more appropriate arrangement may require controller-to-controller provisions or a data sharing agreement that properly addresses the legal and governance consequences of disclosure to another controller. That means looking more carefully at the purpose of the sharing, the lawful basis relied upon, transparency to data subjects, onward transfers, retention boundaries, rights handling, security expectations and any restrictions or conditions the organisation wants to attach to the sharing.

Where the other party is determining its own purposes, the issue is no longer processor oversight alone. It becomes a question of whether the organisation has a defensible controller-to-controller sharing model.

That is a materially different type of privacy analysis. The DPO or privacy manager should not be satisfied simply because “privacy terms” exist somewhere in the contract. The key question is whether the organisation has recognised that the relationship is not one of delegated processing only, and whether the right agreement structure has been used as a result.

Why data sharing agreements matter in controller-to-controller scenarios

Data sharing agreements are sometimes treated as secondary documents compared with DPAs. In practice, where personal data is disclosed to another controller, a well-structured data sharing agreement can be at least as important from a governance perspective.

That is because the legal and accountability issues are different. The organisation is no longer only asking how to bind a supplier to instructions. It is asking on what basis the data is being shared, what each party is entitled to do with it, what constraints or expectations apply to onward use, how transparency is addressed, how retention is framed, how rights requests are handled where responsibilities overlap, and what happens if the legal position around the sharing changes.

This is especially important where the relationship is significant but not neatly reducible to a pure service provider model. Without a good controller-to-controller framework, organisations can end up assuming that the existence of a commercial contract means the privacy position is adequately covered. It often is not.

A well-structured data sharing agreement does not make accountability disappear, but it does force the parties to confront the right questions. What is being shared, why, under what legal basis, with what expectations around use, and with what allocation of responsibility? Those are exactly the questions that tend to go underexplored when the relationship is hastily labelled as a processor arrangement.

For the DPO or privacy manager, this is often where advisory value is clearest. The privacy function should be able to explain not just that a DPA is the wrong tool in a particular case, but why the relationship requires a more accurate controller-to-controller analysis and what practical consequences follow from that reclassification.

Hybrid relationships are often the real trap

Some of the hardest vendor arrangements are not purely processor or purely controller relationships. They are mixed. That is often where privacy oversight becomes both more difficult and more important.

A SaaS provider may act as a processor in relation to customer data hosted in the service, while separately acting as a controller for certain telemetry, fraud prevention, service security or product improvement functions. A benefits or HR platform may process employee data on the organisation’s instructions for certain service elements while separately determining aspects of use for its own legal, product or operational purposes. An external specialist may process personal data under instruction for one strand of a project while independently determining how related information is used for another.

These mixed models create a practical temptation. The organisation may want to pick one label for the relationship as a whole and move on. That is understandable. It is also often the source of poor documentation and weak oversight.

A hybrid relationship needs to be analysed by reference to the relevant data flows and purposes. One part of the arrangement may require an Article 28 processor framework. Another may require controller-to-controller data sharing provisions. In some cases, there may need to be a combination of both within the same broader contractual package, supported by clear internal mapping of which activities fall into which category.
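
One practical way to make that internal mapping visible is to record, for each data flow, the role being performed, the instrument that governs it and the oversight that follows. The sketch below is illustrative only; the flow names, labels and instruments are assumptions, not a description of any particular vendor.

```python
# Hypothetical internal mapping for a hybrid SaaS relationship (illustrative only).
hybrid_vendor_map = {
    "hosted customer records": {
        "role": "processor",
        "instrument": "Article 28 DPA (processor terms)",
        "oversight": ["sub-processor list", "change control", "audit rights"],
    },
    "service telemetry and abuse detection": {
        "role": "independent controller",
        "instrument": "controller-to-controller / data sharing provisions",
        "oversight": ["purpose limits", "transparency position", "retention boundaries"],
    },
    "co-designed analytics programme": {
        "role": "joint controller",
        "instrument": "Article 26 arrangement",
        "oversight": ["allocation of responsibilities", "data subject contact point"],
    },
}

for flow, detail in hybrid_vendor_map.items():
    print(f"{flow}: {detail['role']} -> {detail['instrument']}")
```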

Some of the weakest vendor arrangements are not wrongly documented because nobody cared. They are wrongly documented because the relationship was more complex than the organisation was willing to analyse properly.

This is a particularly important point for the DPO or privacy manager because hybrid arrangements often create false confidence. The organisation may believe the relationship is “covered” because a DPA is in place, while significant parts of the vendor’s role sit outside that processor model altogether. Unless the mixed nature of the arrangement is explicitly recognised, the oversight model is likely to remain partial and reporting to management may understate the real legal and governance complexity.

Joint controllership is different again

Joint controllership creates a different type of issue, and it is often under-identified for the same reason hybrid arrangements are under-analysed: the processor model is usually seen as more straightforward and more comfortable from a governance perspective.

But joint controllership cannot be wished away through preference. If two parties are jointly determining the purposes and means of the processing, then the organisation needs to recognise that reality and deal with it properly.

This is not the same as saying that both parties are merely involved in the same broad environment. Joint controllership is not established simply because two organisations are both present or both interested. The question is whether they are together shaping the relevant processing in a sufficiently real way that responsibilities need to be allocated accordingly.

That can arise in partnerships, co-branded initiatives, consortium arrangements, shared programmes, data-enabled campaigns, platform relationships or service designs where both sides materially shape why and how the processing occurs. In those cases, the organisation should not be looking only for processor language. It should be asking whether an Article 26-style joint controller arrangement is needed and whether the essentials of that arrangement are reflected in practice.

That matters because joint controllership affects how responsibilities are allocated, how transparency is addressed and how the organisation explains the relationship to data subjects and to regulators if challenged.

Joint controllership often becomes visible only when the organisation stops asking who is receiving the data and starts asking who is shaping the purpose of the processing.

For a DPO or privacy manager, this is another area where advisory value lies in resisting oversimplification. The relationship may be commercially described as a vendor arrangement, but that does not answer the privacy question. If the other party is participating in the determination of purpose and means, processor language may obscure more than it clarifies.

Why misclassification matters so much in practice

It is easy to make this sound theoretical. It is not. The reason classification matters is that the legal model chosen at the outset shapes everything that follows: what agreement is used, what due diligence is prioritised, what monitoring takes place, what transparency position is taken, what rights-handling assumptions are made, how incidents are escalated, what is reported to management, and what the organisation thinks it can defend if the relationship is later scrutinised.

If a third-party controller is treated as if it were a processor, the organisation may believe it has more control than it does. If a hybrid relationship is reduced to a single processor model, important elements of onward use or independent purpose-setting may go undocumented. If a joint controller scenario is treated as a routine vendor arrangement, accountability may be allocated in a way that bears little resemblance to how the processing is actually designed and operated.

This is one of the reasons vendor oversight often looks stronger than it is. The organisation may genuinely have a contract, a review process and some monitoring in place. But if the underlying legal model is wrong, those controls are operating against the wrong understanding of the relationship.

A vendor arrangement cannot be treated as well governed simply because it is documented. The first question is whether it has been documented on the right legal basis at all.

That is a point worth taking seriously in management and board reporting, because it shifts the conversation from process completion to legal and governance accuracy.

The DPO’s role is to explain what the relationship means, not just what document is missing

A common failure mode for privacy teams is that they identify an issue but report it in a way that is too generic to support good decisions. Saying that a vendor arrangement needs a DPA, or that the supplier should be reviewed, may be technically correct but not especially useful if the deeper issue is misclassification, increasing dependency or structural limits in oversight.

The privacy function adds most value where it can explain what type of relationship exists and why that changes the organisation’s accountability position. That means being able to distinguish clearly between a processor oversight issue, a controller-to-controller sharing issue, a joint controller issue or a mixed model that needs to be mapped and governed in parts.

That distinction matters because the governance consequences are different. A processor relationship calls for strong instruction-based oversight, monitoring, sub-processor scrutiny and change control. A controller-to-controller arrangement raises different questions around lawful basis, transparency, onward use, retention, rights handling and defensibility of sharing. A joint controllership issue raises questions about allocation of responsibility and how the organisation will explain the arrangement if challenged. A hybrid model requires the organisation to recognise that different legal and operational treatments may apply within the same broader commercial arrangement.

The privacy function adds most value where it does not report “vendor risk” as a single category, but explains what type of data relationship exists and why that changes the organisation’s accountability position.

That is where privacy advice starts to influence decisions rather than simply annotate contracts.

Feeding the issue into the organisation requires different reporting depending on the model

Once the relationship has been characterised properly, the DPO or privacy manager then needs to decide how to feed the issue into the organisation.

This is where a lot of privacy reporting becomes too compressed. Vendor arrangements are grouped together as “third-party risk” or “processor risk” even where the underlying problems are materially different. That reduces the usefulness of the reporting and can make it harder for senior management to understand what decisions or mitigations are actually needed.

A processor issue may need to be reported as a matter of oversight strength, weak auditability, poor change control, insufficient monitoring or material reliance on sub-processors. A controller-to-controller relationship may need to be reported as a question of sharing rationale, transparency exposure, legal defensibility or unclear accountability boundaries. A hybrid relationship may need to be escalated because the organisation has only partially documented the legal structure of the arrangement. A joint controllership issue may need to be framed around allocation gaps, data subject handling responsibilities or strategic governance consequences.

That is where the DPO’s analytical role becomes a governance role. The issue is no longer simply whether a privacy concern exists. The issue is whether the organisation is receiving an accurate description of the type of exposure it has taken on.

A processor issue, a data sharing issue and a joint controllership issue should not appear in governance reporting as if they were the same type of problem. They expose the organisation in different ways and require different responses.

This is especially important where the organisation wants concise reporting. Concision is useful, but not if it collapses the legal and governance distinctions that make the issue intelligible.

DORA and AI make misclassification and weak oversight more serious

The DORA and AI crossovers become much easier to assess once the classification problem is understood.

DORA sharpens the consequences of weak third-party oversight by asking not only whether the relationship is contractually manageable, but whether the organisation has become operationally dependent on a provider in a way that changes the seriousness of any control weakness. A relationship that looks routine under a narrow privacy lens may be much more significant once criticality, substitutability, concentration and resilience are considered.

AI sharpens a different aspect of the problem. Many AI-enabled services are more difficult to characterise neatly because the provider’s role may not fit comfortably within a pure processor model. There may be complex service architectures, layered sub-processing, separate safety or abuse-monitoring functions, telemetry, service-improvement claims, model-related uses or contractual positions that are superficially reassuring but operationally difficult to verify. In that environment, misclassification becomes more likely and the limits of oversight become more important.

In AI-enabled services, the privacy question is often not simply whether the vendor presents risk, but whether the organisation has correctly understood what role the vendor is actually playing in relation to the data.

That point matters because it changes what should be escalated. The issue may not be obvious non-compliance. It may be that the organisation is relying on a legal description of the vendor relationship that is too simplistic for the service it is actually using, or that it is accepting a level of opacity that should be understood and approved at a higher level.

What a mature privacy review of vendor relationships should actually involve

A mature review of vendor relationships should therefore go further than checking whether contractual templates have been completed. The privacy function should be asking what personal data is actually involved, what the service is genuinely doing with it, who is determining the purposes and essential means, whether different parts of the relationship need to be analysed differently, whether the agreements in place reflect that structure, and whether the oversight model matches the legal model that has been chosen.

That may sound obvious, but it is often not what happens in practice. In many organisations, the legal characterisation is inherited from procurement assumptions, vendor paper or previous practice. The privacy team is then asked only to validate the documentation rather than the underlying analysis.

That is not enough where the relationship is material. A stronger approach requires the DPO or privacy manager to test whether the documentation corresponds to the actual flows and the actual balance of decision-making. It also requires the privacy function to identify where the organisation’s practical ability to monitor, challenge or revisit the relationship is weaker than the paperwork suggests.

The useful privacy review is not the one that asks whether there is a DPA on file. It is the one that can explain why the arrangement has been classified as it has, what legal consequences follow from that classification, and where the organisation’s practical ability to oversee the position is limited.

That is the level at which vendor oversight starts to become defensible rather than merely documented.

Why this matters for the DPO or privacy manager

For a DPO or privacy manager, vendor oversight is one of the clearest tests of whether the privacy function is operating at governance level. A narrow review of contractual wording may satisfy a process requirement, but it does not tell the organisation much about whether it has understood the relationship correctly or whether its accountability position is robust.

The real value lies in being able to identify where the organisation has defaulted too quickly to a processor model, where a data sharing arrangement is more appropriate, where joint controllership needs to be recognised, where hybrid structures need more precise documentation, and where the practical ability to oversee the relationship is weaker than the formal paperwork implies.

That is not simply a matter of technical legal precision. It affects how the organisation explains the relationship internally, how it allocates controls, how it handles incidents, what it tells data subjects, what it reports to management, and what it is realistically able to defend if asked to justify the arrangement later.

The DPO’s role is not to make vendor arrangements look tidy. It is to make the organisation’s accountability position intelligible and defensible.

That is why this topic deserves more than a standard processor oversight discussion. The harder and more useful question is not whether the supplier has signed the right clauses, but whether the organisation has correctly understood what kind of relationship it has created and what that means for governance in practice.

Takeaway

A useful next step for any DPO or privacy manager is to look again at vendor relationships that are currently treated as settled. Which of them are genuinely processor arrangements? Which are better understood as controller-to-controller disclosures? Which contain mixed elements that are being collapsed into one legal label for convenience? Which may in fact involve joint determination of purpose and means? And where is the organisation relying on a contractual form that is easier to administer than the relationship is to defend?

The practical challenge is not simply to ensure that vendors are documented. It is to ensure that they are documented on the right legal basis, governed using the right accountability model, and reported internally in a way that reflects the actual nature of the exposure the organisation has chosen to accept.

This article is intended to support the learning covered in Hour 4 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Transfer Impact Assessments in Practice

This article accompanies Hour 2: Cross-Border Transfers in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What DPOs Should Actually Be Looking For

A Transfer Impact Assessment (or Transfer Risk Assessment – TRA) is the point at which transfer law stops being abstract and becomes a real organisational decision. In theory, the legal position may look simple enough: identify the transfer, identify the transfer tool, and then consider whether additional safeguards are needed. In practice, that is not where most TIAs fail. They usually fail earlier and more quietly.

They fail because the underlying transfer scenario has not been analysed properly. They fail because the organisation does not know enough about the recipient’s real operating model. They fail because the assessment of the foreign jurisdiction is generic rather than specific. And they fail because supplementary measures are described in broad compliance language without asking whether they materially change the exposure.

That is why a TIA matters. A TIA is not just a document to satisfy Schrems II. It is where the organisation has to demonstrate that it understands what is happening, what legal and practical risks arise in the recipient jurisdiction, and why the transfer remains supportable. For DPOs, this is one of the most revealing areas of privacy practice. A strong TIA usually points to stronger governance, better supplier oversight and more mature internal coordination. A weak TIA often points to the opposite.

What a TIA is actually trying to determine

A TIA is often reduced to a single question: “Can we still transfer the data using SCCs?” That question is too narrow. A useful TIA is trying to determine, in sequence:

  • what the transfer scenario actually is;
  • who the importer is, and in what role;
  • what data is involved, and in what form;
  • what the legal and practical position is in the destination jurisdiction;
  • whether public authority access, surveillance powers, redress and oversight could undermine the level of protection expected under GDPR;
  • whether supplementary measures materially reduce that risk;
  • and whether the organisation can genuinely stand over the conclusion it has reached.

That is why the TIA process has to be disciplined. It starts by identifying the country or countries involved and gathering the relevant documentation, including the applicable legislation, research materials such as DataGuidance notes, agreements and internal checklists, before moving section by section through the template and analysing each part against the evidence provided. The template itself should break the work into the right components: transfer overview, receiving jurisdiction, transfer details, existing safeguards, alternatives, proportionality, law and practice in the recipient country, supplementary measures, probability assessment and approval. That structure is not just administratively tidy. It reflects the underlying legal logic.

This is also consistent with the EDPB’s post-Schrems II approach. The EDPB Recommendations 01/2020 on supplementary measures remain the central official guidance for organisations trying to assess whether an Article 46 transfer tool remains effective in light of the law and practice of the destination country. The EDPB Guidelines 05/2021 on the interplay between Article 3 and Chapter V are equally important because they help determine whether the arrangement is even a restricted transfer under Chapter V in the first place. In practice, that initial classification matters more than many organisations realise. A TIA that begins with the wrong transfer analysis is already weakened before it gets to the foreign-law questions.

Start with the facts: the transfer analysis must be right before the TIA can be right

One of the most common problems in transfer work is that very different scenarios are collapsed into a single generic category called “international transfer.” That may be administratively convenient, but it is analytically weak.

An employee temporarily working from a third country does not necessarily raise the same issues as a third-country contractor engaged to access internal systems. A cloud platform hosted in the EEA is not the same as a connected service that extracts data from that platform and processes it in its own US environment. Occasional remote support access is not the same as routine privileged administrative access. Pseudonymised data used for analytics is not the same as a readable HR or health dataset accessible in clear text. These distinctions matter because they shape:

  • whether Chapter V is engaged;
  • which SCC module is relevant, if SCCs are used;
  • the sensitivity and exposure of the data;
  • the significance of public authority access risk in the destination jurisdiction;
  • and the practical effect of any technical or organisational safeguards.

For DPOs, the practical lesson is straightforward: a TIA should not begin with the transfer tool. It should begin with the transfer scenario. That means identifying:

  • who the exporter is;
  • who the importer is;
  • whether the importer is acting as controller, processor, sub-processor or contractor;
  • whether the data is stored, accessed remotely, downloaded, or transferred onward;
  • whether the data is ordinary personal data, special category data, criminal data, children’s data, or otherwise particularly sensitive;
  • whether the data is intelligible in the destination jurisdiction;
  • and whether the provider’s sub-processing, support or AI functionality changes the picture.

A good template should capture exporter/importer roles, the transfer mechanism, the nature of the transfer, onward transfers, categories of personal data, special-category and criminal data, data subjects, format of the data, method of transfer and existing security measures. This is a strong foundation, because it makes the foreign-jurisdiction analysis service-specific rather than generic.
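
Where teams want to standardise that scoping step, it can help to capture the template fields as a structured record so nothing is silently skipped. The sketch below is a minimal illustration; the field names and example values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TransferScopingRecord:
    """Illustrative scoping record mirroring the template fields described above.
    Field names are assumptions for this sketch, not a prescribed schema."""
    exporter: str
    importer: str
    importer_role: str                 # controller / processor / sub-processor / contractor
    transfer_mechanism: str            # e.g. SCCs (module), adequacy, BCRs
    nature_of_transfer: str            # storage, remote access, onward transfer...
    data_categories: list[str] = field(default_factory=list)
    special_category_or_criminal: bool = False
    data_subjects: list[str] = field(default_factory=list)
    data_format: str = "readable"      # readable / pseudonymised / encrypted
    onward_transfers: list[str] = field(default_factory=list)
    existing_security_measures: list[str] = field(default_factory=list)

# Hypothetical example of a scoped transfer.
record = TransferScopingRecord(
    exporter="EU HR function",
    importer="US support team of a SaaS provider",
    importer_role="sub-processor",
    transfer_mechanism="SCCs (Module 3)",
    nature_of_transfer="routine remote support access",
    data_categories=["employee contact data", "absence records"],
    special_category_or_criminal=True,
    data_subjects=["employees"],
    data_format="readable",
    onward_transfers=["cloud hosting provider"],
    existing_security_measures=["encryption in transit and at rest", "role-based access"],
)
```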

In practice, the weakest TIAs often reflect poor factual scoping rather than poor legal knowledge. Hosting is mapped but support access is not. The primary vendor is known but the sub-processor chain is not. An AI-enabled tool is treated as though it sits safely inside the main platform’s environment, even though it extracts and processes data through separate infrastructure. The result is that the TIA looks complete but is addressing the wrong transfer.

When assessing your practices internally, review whether your scoping process distinguishes between:

  • storage and remote access;
  • employees, contractors and service providers;
  • primary vendors and sub-processors;
  • occasional and routine access;
  • EEA-hosted platforms and connected third-country tools;
  • readable, pseudonymised and encrypted data.

The quality of a TIA depends on the quality of the underlying transfer analysis. If the organisation has not correctly identified the parties, access model, data exposure and onward transfer chain, the assessment will be weaker than it appears.

Who should be involved in a TIA?

A TIA should never be treated as a privacy-only paperwork exercise. It is a cross-functional assessment, and that matters because no single function usually holds all the facts. A defensible TIA should be:

  • legally informed;
  • technically grounded;
  • operationally accurate;
  • and owned by the business as well as privacy.

The DPO or privacy lead should normally coordinate the assessment. That means framing the questions, testing assumptions, identifying gaps, and ensuring the final reasoning is coherent and evidence-based. But the DPO should not be left trying to infer system architecture, key management, support access patterns or sub-processor chains without support. Legal should be involved to assess:

  • whether the transfer tool is appropriate;
  • whether the importer role is correctly understood;
  • how SCCs or other Article 46 mechanisms are being used;
  • and whether the foreign-jurisdiction analysis raises legal issues that need escalation.

IT, architecture or security teams are often essential because the foreign-law risk only becomes meaningful when matched to technical facts. If the provider cannot access intelligible data, the analysis may look different than if provider personnel can access clear-text customer content in the course of support or service delivery. That means technical teams need to clarify:

  • where the data is hosted;
  • where it is processed;
  • who can access it;
  • how encryption works;
  • who controls the keys;
  • whether pseudonymisation is meaningful;
  • and how support or privileged access operates in practice.

The relevant business or system owner also matters. A TIA is not just about whether a transfer is possible; it is also about whether the transfer is necessary, whether alternatives exist, and whether the organisation has become dependent on the arrangement in a way that raises wider governance concerns.

Procurement or vendor management is often essential because:

  • they hold the contractual documentation;
  • they can obtain sub-processor lists, trust-centre materials and service descriptions;
  • and they know when renewals, change events and leverage points arise.

Risk, compliance or resilience functions may also need to be involved where the provider is strategically important or where the transfer intersects with broader third-party oversight. In regulated settings, particularly financial services, the same provider relationship may matter at once for privacy, outsourcing, operational resilience and dependency management.

AI governance or product/data governance teams should also be involved where AI-enabled tools are in scope, because the data-flow and control issues are often more opaque and more dynamic than in ordinary SaaS arrangements.

Weak TIAs often reflect fragmented ownership. Privacy has the template, legal has the contract, IT has a partial understanding of hosting, and procurement holds vendor papers, but no one assembles the picture properly. The result is that the final document is smoother than the underlying analysis.

In assessing your practice, make sure your TIA process identifies:

  • who owns scoping;
  • who confirms the technical facts;
  • who assesses the legal mechanism and foreign-law issues;
  • who validates the necessity of the transfer;
  • who reviews sub-processor and contract materials;
  • and who can approve or escalate if the assessment reveals unresolved risk.

A credible TIA is cross-functional. It should combine privacy, legal, technical, supplier and business inputs rather than being treated as a privacy-only exercise.

The foreign jurisdiction assessment: where the real analysis happens

This is the part of the TIA most likely to draw criticism if it is weak, and the part most likely to make the assessment genuinely meaningful if it is done properly. A poor jurisdiction assessment often asks one shallow question:

“Does this country have a data protection law?”

A stronger jurisdiction assessment asks the right question:

“In light of this particular transfer scenario, can the legal environment of the destination country undermine the level of protection expected under GDPR?”

That distinction matters.

A country may have a modern privacy statute and an active regulator, but still allow forms of state access, surveillance or national-security processing that are relevant to the transfer in question. Equally, the existence of public-authority access powers does not automatically make the transfer unsupportable. The issue is whether those powers, in context, materially affect the ability of the transfer tool to provide an essentially equivalent level of protection.

That is why a strong TIA needs to assess both the general legal environment, and the practical relevance of that environment to the transfer at hand.

A good template addresses public authority access, legal basis, necessity and proportionality, safeguards against excessive access, and case studies or precedents. It should further address the wider legal environment of the recipient country, including dedicated data protection law, rights, regulator independence, judicial remedies, public authority access, surveillance programmes, and limitations and oversight. One part looks at state access and proportionality directly; the other assesses the wider data protection framework of the country.

What sources should inform the jurisdiction assessment?

This is one of the clearest areas where internal AI tooling can improve quality if designed properly. A TIA companion should not allow users to “wing” the foreign-law analysis based on memory or a single source. The sources should usually be layered.

At the top should be the official guidance:

  • EDPB Recommendations 01/2020 on supplementary measures;
  • EDPB Guidelines 05/2021 on whether Chapter V applies in the first place;
  • relevant European Commission materials on adequacy and SCCs;
  • and relevant Irish DPC or other regulator guidance or conference materials on international transfers and SCCs.

Supporting those should be:

  • country-law research tools such as DataGuidance;
  • vendor-supplied materials, including DPAs, SCCs, trust-centre information, government-request statements and sub-processor lists;
  • and any internal legal or compliance commentary developed for the organisation.

The role of a tool like DataGuidance is important here. It is a research aid, not a final legal conclusion. It is useful for assembling a jurisdiction profile, identifying relevant legal themes and orienting the assessor to the local framework. But it should not replace a real analysis of how public authority access, redress, oversight and practical enforcement interact with the service in question.

What should the jurisdiction assessment actually test?

A strong assessment should address, at minimum:

  • whether the country has a dedicated data protection law;
  • whether individuals have enforceable data protection rights;
  • whether there is an independent supervisory authority or regulator;
  • whether meaningful judicial redress is available;
  • what public authority access powers exist;
  • whether surveillance or intelligence powers are broad, targeted, supervised, challengeable or secretive;
  • whether access is subject to necessity, proportionality and oversight;
  • and whether there is relevant history or case law indicating how those powers are used in practice.

The key is to avoid genericity. The question is not merely whether a surveillance framework exists in the abstract. The question is whether, in light of the actual transfer scenario, the combination of the country’s legal environment and the recipient’s access to the data undermines the level of protection expected. That is why the facts gathered earlier matter so much. A destination country analysis looks very different depending on whether:

  • the importer can access full readable HR records;
  • the importer only receives encrypted backups;
  • the service provider never holds the decryption keys;
  • or the tool is an AI-enabled platform that processes readable content and may involve several third-country sub-processors.

The weakest foreign-jurisdiction sections are usually generic and over-compressed. A paragraph states that the country has a data protection law and some regulator activity, briefly notes surveillance laws, and then concludes that the transfer is supportable. That may look balanced, but it often tells the reader very little about whether the actual risks of the transfer have been understood.

So, review whether your jurisdiction assessments:

  • identify the sources used;
  • distinguish between data protection law and state access powers;
  • analyse oversight and redress rather than just listing legal instruments;
  • connect the foreign-law position to the actual service and access model;
  • and make their assumptions visible rather than implicit.

The foreign-jurisdiction assessment is the part of the TIA most likely to reveal real residual risk. It should test not only whether the country has a privacy framework, but whether state access powers, oversight and redress materially affect the transfer in context.

Assessing the probability of unlawful access without creating false precision

The value of a structured probability assessment is that it forces the assessor to identify and weigh the drivers of risk rather than writing in broad, qualitative terms alone. Your template or methodology should reflect this by breaking the analysis into factors such as the legal framework, enforcement practices, surveillance capability and historical precedents, and then asking the user to explain the score reached. This can be very useful, provided the organisation is clear about what the score means and what it does not. A probability score is not an objective truth. It is a structured representation of a judgement based on:

  • the legal environment;
  • the practical features of the service;
  • the type and volume of data involved;
  • whether the data is intelligible to the importer;
  • the strength of safeguards;
  • and the evidence available at the time of the assessment.

That means the score should never stand alone. If a TIA produces a “low likelihood of unlawful access” score but cannot explain, with sources, why that conclusion was reached, the number adds very little. A more defensible approach is to treat probability scoring as an aid to disciplined reasoning. The assessor should be able to show:

  • which factors were considered;
  • what evidence informed each factor;
  • what assumptions were made;
  • and what would cause the score to change.

This is also an area where an internal AI companion can be genuinely helpful if designed carefully. It can prompt the user to upload country-law materials, identify the factors, ask the user to justify each factor with evidence, and then draft the rationale. But it should not be allowed to produce a score with no supporting narrative or no acknowledgement of limits.

Weak scoring exercises look numerical but shallow. They average a handful of factors without showing how those factors relate to the service, the accessibility of the data, or the relevance of the legal environment in context. That gives the impression of rigour without delivering much of it.
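
A more disciplined alternative is to record, for each factor, the score, the evidence relied on and the assumptions made, so that the overall figure can be traced back to its reasoning. The sketch below is purely illustrative; the factors, scale and example scores are assumptions rather than a recommended methodology.

```python
# Illustrative probability assessment that records reasoning, not just a number.
# Factor names, weights and scores are assumptions for this sketch.
factors = {
    "legal framework": {
        "score": 2,
        "evidence": "DataGuidance country note; statute review",
        "assumption": "no sector-specific access regime applies",
    },
    "enforcement practice": {
        "score": 3,
        "evidence": "regulator annual report; published decisions",
        "assumption": "practice unchanged since last review",
    },
    "surveillance capability": {
        "score": 3,
        "evidence": "EDPB Recommendations 01/2020 analysis; counsel memo",
        "assumption": "importer is not a telecoms provider",
    },
    "historical precedents": {
        "score": 2,
        "evidence": "vendor government-request transparency statement",
        "assumption": "statement accurately reflects requests received",
    },
}

# 1 = low likelihood of problematic access, 5 = high (illustrative scale only).
overall = sum(f["score"] for f in factors.values()) / len(factors)

print(f"Overall likelihood score: {overall:.1f} (illustrative scale 1-5)")
for name, f in factors.items():
    print(f"- {name}: {f['score']} | evidence: {f['evidence']} | assumption: {f['assumption']}")
print("Review trigger: any change in law, service architecture or vendor access model.")
```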

If you use a probability methodology, make sure it:

  • identifies the factors clearly;
  • ties them to the actual transfer scenario;
  • documents the evidence and assumptions;
  • and shows what would change the overall assessment.

Probability scoring can support consistency, but it does not replace judgement. The organisation should be able to explain the factors, assumptions and evidence behind any conclusion that the likelihood of unlawful access is low.

Supplementary measures: what actually changes the position

One of the strongest parts of the EDPB’s Recommendations 01/2020 is that they do not treat supplementary measures as abstract compliance decorations. The whole point is whether the measures make the transfer tool effective in context. That is the mindset DPOs need to preserve. The right question is not “Have we listed supplementary measures?” It is “Which measures materially reduce the exposure created by this transfer?” This is where many TIAs become weaker than they appear. Technical, contractual and organisational measures are all listed, but there is little analysis of whether they actually change the importer’s ability to access the data or the practical significance of the destination country’s legal environment.

Technical measures

Technical measures often matter most, but only where they genuinely reduce exposure. Encryption is a classic example. Encryption in transit and at rest is good baseline practice, but if the provider decrypts the data in its own environment and can access it in readable form, the legal relevance of that encryption may be limited. Key management matters. So does whether the importer holds the keys. So does whether the relevant risk is authority access via the importer or access prevented by design.

Pseudonymisation can also be meaningful, but only where the importer cannot realistically re-identify the data subject. If the importer can combine the data with other identifiers or is itself given the key to re-identification, then the measure may add less than the TIA suggests.

Minimisation, segmentation, tokenisation and local pre-processing can all be useful where they materially reduce what is exposed.
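
The practical test for encryption as a supplementary measure can be reduced to a small number of factual questions: is the data encrypted end to end, who holds the keys, and is it ever decrypted in the importer’s environment. The sketch below illustrates that reasoning only; it is not a legal test and the inputs are assumptions.

```python
# Illustrative check of whether encryption changes the exposure rather than
# merely improving security hygiene. The inputs and logic are assumptions.
def encryption_reduces_exposure(encrypted_in_transit: bool,
                                encrypted_at_rest: bool,
                                importer_holds_keys: bool,
                                decrypted_in_importer_environment: bool) -> str:
    if not (encrypted_in_transit and encrypted_at_rest):
        return "baseline security gap: encryption incomplete"
    if importer_holds_keys or decrypted_in_importer_environment:
        return "limited supplementary value: importer can access readable data"
    return "potentially effective supplementary measure: data unintelligible to importer"

# Example: a provider that decrypts customer content in its own environment.
print(encryption_reduces_exposure(True, True,
                                  importer_holds_keys=False,
                                  decrypted_in_importer_environment=True))
```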

Contractual measures

Contractual clauses can support the position, particularly where they:

  • require challenge to overbroad requests;
  • increase transparency around authority access;
  • restrict onward transfers;
  • limit use of the data;
  • and support audit or notice rights.

But contractual promises do not usually neutralise a foreign-law issue on their own, particularly where the provider can still access the data in clear text.

Organisational measures

Organisational controls, such as internal access approvals, support restrictions, logging, escalation routes, and governance around sensitive data inputs, can be important, especially where they reduce frequency and scope of transfer or restrict who can trigger high-risk processing. They matter most when tied to actual process rather than simply listed as good governance principles.

The key to all of this is service-specific analysis. A measure is valuable only if it changes the actual position.

The most common weakness here is that “supplementary measures” are treated as a checklist. Encryption is mentioned, policies are mentioned, contractual clauses are mentioned, and the TIA moves on. But if the provider can still view the data, if the AI service still retains readable content, or if support staff still have access in practice, the analysis is not yet complete.

Review whether your TIA explains:

  • whether the importer can access the data in readable form;
  • who controls decryption or re-identification;
  • whether the measure changes the risk from public-authority access or only improves general security hygiene;
  • and whether the supplementary measures are genuinely linked to the risks identified in the jurisdiction assessment.

Supplementary measures are effective only if they materially reduce the real exposure. The organisation should be able to explain how technical, contractual and organisational controls change the transfer risk in practice rather than merely documenting that they exist.

AI and complex tooling: why the TIA needs stronger evidence, not softer assumptions

AI-enabled services often need stronger TIAs than ordinary SaaS tools, not weaker ones. The reason is straightforward. The processing chain is usually less transparent, the sub-processor landscape may be broader, the distinction between core functionality and underlying model/infrastructure is harder to see, and the organisation may have less visibility over retention, support access and onward processing than it assumes.

For example, consider a scenario where a meeting-assistant tool is introduced into your Microsoft stack. The service might sit outside Microsoft’s compliance perimeter and process recordings through US-based infrastructure, raising not only transfer issues but wider concerns around special-category exposure, transparency, cybersecurity, retention and sub-processing through providers such as AWS, GCP, OpenAI and Anthropic. This can happen even where the core M365 environment is configured within an EU boundary: a connected tool can extract meeting content and process it through its own infrastructure, bypassing that perimeter. That is precisely the kind of fact pattern a TIA must surface.

In an AI context, the transfer analysis needs to ask:

  • does the AI tool extract or replicate personal data from another environment?
  • where are the inference, storage, support and analytics functions actually located?
  • what sub-processors or underlying providers are involved?
  • can the provider’s personnel access readable content?
  • is data retained for troubleshooting, analytics or model improvement?
  • do the provider’s public assurances actually align with the way the service works?

A good TIA for an AI-enabled service is therefore not just about where the data goes. It is also about whether the organisation retains meaningful visibility and control once the data enters that environment.

A recurring weakness is governance lag. The organisation approves an AI-enabled feature because it is commercially useful, then tries to retrofit a privacy assessment around whatever documents the vendor is willing to provide. That often produces high-level assurances rather than a grounded understanding of the service.

Make sure AI-related TIAs:

  • are specific to the AI functionality, not just the core platform;
  • identify the actual processing chain and sub-processors;
  • address retention, reuse and support access explicitly;
  • and are revisited when the service model changes.

AI-enabled services often require a more rigorous TIA, not a lighter one. Their value may be clear, but the transfer assessment should reflect opaque processing chains, broader sub-processing and reduced visibility over data handling.

Using AI to support TIAs: what good looks like in Copilot or a custom GPT

A TIA companion can be genuinely helpful, but only if it is designed to improve the assessment rather than flatten it into polished prose. The value of a TIA AI assistant is not that it drafts faster. It is that it can structure the process, force evidence gathering, separate issues properly, and surface where the analysis is weak.

A well-designed tool instructs the user to begin with the country or countries involved, to upload relevant documentation such as DataGuidance notes, agreements and checklists, and then to step through the TIA section by section rather than attempting to draft the whole thing in one pass. It also anticipates the need for a DPO review checklist at the end of the process.

What the tool should do

Whatever the format, whether built in Copilot or as a custom GPT, the assistant should:

  • begin with jurisdiction identification;
  • require the user to upload source materials;
  • distinguish transfer scoping from country-law analysis;
  • force the user to identify missing evidence;
  • and produce both draft wording and a reviewer issues list.

A good AI companion should also slow the user down in the right places. In particular, it should not allow the assessor to draft a conclusion before the foreign-jurisdiction module is complete.
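
One way to build that discipline into the tool is to define the workflow as ordered stages and to gate conclusion drafting on completion of everything that comes before it. The sketch below is a minimal illustration of that idea; the stage names and gating rule are assumptions about how such a companion might be configured, not a description of any Copilot or GPT feature.

```python
# Illustrative staged workflow for a TIA companion. Stage names are assumptions.
WORKFLOW = [
    {"stage": "jurisdiction identification",  "requires": ["country list"]},
    {"stage": "source upload",                "requires": ["legislation", "DataGuidance notes", "agreements"]},
    {"stage": "transfer scoping",             "requires": ["exporter/importer roles", "access model", "data format"]},
    {"stage": "foreign-jurisdiction analysis","requires": ["state access powers", "oversight", "redress"]},
    {"stage": "supplementary measures",       "requires": ["technical", "contractual", "organisational"]},
    {"stage": "conclusion drafting",          "requires": []},
]

def may_draft_conclusion(completed: set[str]) -> bool:
    """Only allow drafting once every earlier stage has been completed."""
    earlier = [s["stage"] for s in WORKFLOW[:-1]]
    return all(stage in completed for stage in earlier)

print(may_draft_conclusion({"jurisdiction identification", "source upload"}))  # False
```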

What the foreign-jurisdiction module should do

This is the most important part of the tool. A good module should:

  • ask which official and secondary sources are being used;
  • require the user to identify data protection law, authority access powers, oversight, redress and proportionality;
  • compare vendor claims with country-law realities;
  • ask whether the data is readable to the importer;
  • and require explicit rationale before suggesting any probability score.

In other words, the tool should not just summarise the uploaded materials. It should test them against each other and identify where the evidence is thin or conflicting.

What the tool should not do

A poor TIA assistant will:

  • jump quickly to narrative drafting;
  • assume a country is “low risk” based on one source;
  • treat documentation from legal research and compilation sources as a final answer rather than a research tool;
  • rely on vendor statements without challenge;
  • or generate approval language where the evidence is incomplete.

That is not a TIA companion. It is a drafting shortcut. The greatest risk with internal AI assistance is that it can make weak analysis look more professional. That is particularly dangerous in TIAs because the document may then appear complete and well reasoned when, in substance, the jurisdiction assessment is underdeveloped.

If you are building an AI assistant for TIAs, design it, at the very least, to:

  • start with the jurisdiction;
  • require source material;
  • force service-specific questions;
  • separate narrative drafting from unresolved issues;
  • and produce a DPO review checklist alongside the draft text.

AI assistance can improve consistency in TIAs, but only if the tool is designed to force evidence, challenge assumptions and surface unresolved issues rather than simply producing polished narrative.

What should trigger escalation or refusal?

A defensible TIA process should not assume that every issue can be solved by better drafting. Some issues should trigger escalation, delay or refusal. Examples include:

  • no clear answer on where the data is processed;
  • no visibility over sub-processors;
  • provider access to intelligible special-category or otherwise highly sensitive data;
  • unsupported or weak jurisdiction analysis;
  • inability to explain encryption, key control or re-identification risk;
  • AI-enabled services with unclear retention, reuse or support models;
  • or a strategically important vendor relationship where the organisation has become dependent without understanding the real transfer exposure.

This is especially important for DPOs. The point of a TIA is not simply to complete the document. It is to identify when the organisation is being asked to accept a risk position it cannot yet justify. Some of the weakest outcomes arise where the commercial decision is already fixed and the TIA is treated as a formality to be completed after the fact. That is where unresolved issues tend to be reframed as drafting issues rather than governance issues.

In your process, create escalation criteria for:

  • unclear jurisdiction risk;
  • poor provider transparency;
  • intelligible access to sensitive data;
  • unresolved AI processing questions;
  • and situations where the transfer is operationally important but poorly understood.

Certain TIA findings should be treated as escalation points rather than drafting problems. These include weak visibility over provider architecture, unsupported jurisdiction analysis, intelligible access to sensitive data and safeguards that do not materially reduce risk.
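
Escalation criteria are easier to apply consistently when they are written down as explicit triggers rather than left to individual judgement in the moment. The sketch below shows one way such triggers might be expressed; the finding keys and messages are illustrative assumptions for a hypothetical internal process.

```python
# Illustrative escalation triggers for a TIA process. Keys and messages are assumptions.
ESCALATION_TRIGGERS = {
    "processing_location_unknown":        "escalate: transfer scope cannot be established",
    "sub_processors_not_visible":         "escalate: onward transfer chain unclear",
    "intelligible_sensitive_data_access": "escalate: provider can read special-category data",
    "jurisdiction_analysis_unsupported":  "escalate: foreign-law assessment lacks sources",
    "ai_retention_or_reuse_unclear":      "escalate: AI service retention or reuse unresolved",
}

def escalation_points(findings: dict[str, bool]) -> list[str]:
    """Return the escalation messages triggered by the findings supplied."""
    return [msg for key, msg in ESCALATION_TRIGGERS.items() if findings.get(key)]

findings = {"sub_processors_not_visible": True, "ai_retention_or_reuse_unclear": True}
for message in escalation_points(findings):
    print(message)
```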

Finally

A strong TIA is not valuable because it produces a completed template. It is valuable because it shows whether the organisation can support a transfer with evidence, judgement and visible governance. That is what makes the foreign-jurisdiction assessment so important. It is the point at which the organisation must move from generic comfort to real analysis. It must show that it understands not only the transfer mechanism, but the legal and practical environment into which the data is moving and whether the safeguards in place actually change the position.

For DPOs, this is one of the clearest indicators of programme maturity. If the organisation can identify the transfer correctly, involve the right parties, assess the foreign jurisdiction properly, test the practical value of supplementary measures, and document the conclusion in a disciplined way, it is much more likely to be operating a privacy programme that can withstand criticism.

That is the real value of a TIA. It does not just measure legal awareness. It measures whether governance is actually working.

This article is intended to support the learning covered in Hour 2 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

From Privacy Metrics to Audit Resilience

This article accompanies Hour 3: Privacy Program Metrics in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

How Reporting Creates Evidence, Tracks Remediation, and Supports Regulator Readiness

Most organisations already have some form of privacy reporting. There is usually a monthly update, an issue tracker, a committee paper, a dashboard, a board section, a risk register item, or some combination of the lot. The existence of reporting is rarely the real problem. The problem is that much of it is built as an update function rather than an accountability mechanism. It may describe work in progress, but it often says far less than it should about whether controls are operating, whether the business has actually done what it said it would do, whether action has been evidenced as complete, and whether the organisation could credibly explain its position if challenged.

That is where weak privacy reporting usually gives itself away. It looks organised. It does not always stand up.

A privacy dashboard can be well presented and still be weak. The test is not whether it looks organised. The test is whether the organisation can stand over it under challenge.

What privacy reporting is actually for

Privacy reporting is often treated as a management courtesy. The privacy team keeps stakeholders updated, circulates a summary, flags a few issues, and tries to maintain visibility. That is not wrong, but it is too narrow. A serious reporting model should do more than circulate information. It should help the organisation:

  • show that governance is functioning;
  • identify where weaknesses remain unresolved;
  • track whether remediation is real rather than nominal;
  • preserve evidence behind the position being reported;
  • support challenge, escalation and decision-making.

That is the difference between an update and a governance tool. A report that says “the RoPA is under review” may be fine as a status note. It is not enough as assurance. A report that says “training has been completed” may be accurate, but still tell management very little about whether the relevant control weakness has improved. A report that marks an action as closed may mean no more than that someone stopped talking about it.

The question is not whether a report says something happened. The question is whether the report helps the organisation show:

  • what was done;
  • what evidence supports it;
  • what remains open;
  • who owns the gap;
  • and whether the risk position has actually changed.

That is what privacy reporting is actually for.

Why weak reporting often hides a weak programme

One of the most consistent patterns in privacy management is that reporting tends to be smoother than the underlying programme. This is not always because anyone is trying to mislead. More often, the reporting has simply been built backwards. A management team wants a summary. A committee wants a regular update. A board wants a concise privacy section. The privacy team then builds a template to satisfy that expectation. Statuses are added. A few counts are inserted. A RAG column appears. The result looks coherent enough to circulate.

The weakness only becomes obvious when the next question is asked. If internal audit wants to test the control that the report implies is functioning, can the organisation show the underlying evidence? If the board wants to know whether a known issue was actually fixed rather than just reclassified, does the report make that clear? If a client or regulator asks what sits behind a positive status, can the organisation produce a decision trail, not just a spreadsheet line?

That is why poor reporting so often gives false comfort. It is capable of describing momentum without demonstrating control. It can imply that the programme is functioning while the underlying ownership, evidence base or remediation discipline remains weak. This is also why privacy reporting should never be treated as a cosmetic exercise. Reporting does not create control. It exposes whether control exists.

Start with the reporting chain, not the dashboard

Good reporting does not begin with a dashboard. It begins with what the organisation must be able to demonstrate. That matters because reporting becomes weak the moment it is driven by format rather than accountability. If the starting point is “what should we put in the monthly pack?” the output will usually reflect what is easy to say. If the starting point is “what do we need to be able to stand over?” the reporting becomes much more disciplined.

A stronger reporting chain works like this. The organisation first needs to understand its obligations: legal obligations, governance expectations, policy commitments, contractual requirements, sectoral expectations, and, where relevant, AI governance and resilience-related demands. It then needs controls and processes designed to meet those obligations. Those controls should generate artefacts as they operate. Only then can the organisation derive indicators and reporting lines that mean anything.

That sequence is important because it prevents reporting from floating free of the programme itself. If, for example, the organisation needs to show that it understands what personal data it is processing, then the reporting should sit on top of a functioning RoPA process. If it needs to show that risks are being assessed before high-risk processing goes live, then the reporting should sit on top of an assessment process that produces real records and real challenge. If it needs to show that incidents are managed properly, then the reporting should sit on top of an incident process that produces logs, decisions, actions and closure evidence.

The report is therefore the end product of a management chain. It is not the substitute for one.
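
One way to keep that sequence visible is to record it explicitly, so that every indicator traces back through a control to an obligation and to the artefacts the control generates. The following is a minimal sketch, not a product design; the names are illustrative.

```python
# Minimal sketch of the reporting chain expressed as data: obligation -> control
# -> artefacts -> indicator. All names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    obligation: str                                      # what the control exists to satisfy
    artefacts: list[str] = field(default_factory=list)   # evidence it generates as it operates

@dataclass
class Indicator:
    question: str        # the management question it answers
    control: Control     # the control it sits on top of

ropa = Control(
    name="RoPA maintenance",
    obligation="Know what personal data is processed",
    artefacts=["reviewed records", "refresh dates", "named owners"],
)

indicator = Indicator(
    question="Do we currently understand what personal data we process?",
    control=ropa,
)

# An indicator with no control, or a control with no artefacts, is a sign the
# report is floating free of the programme it claims to describe.
assert indicator.control.artefacts, "indicator has nothing to stand on"
```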

The quality of the report depends on the quality of the underlying programme

This is where organisations often get the order wrong. They try to improve the report before improving the programme that feeds it. That almost never works. If the organisation has incomplete processing records, the reporting on RoPA progress will be weaker than it looks. If assessments are rushed, inconsistently scoped or carried out too late to influence decisions, the reporting on assessment activity may create confidence where it has not been earned. If action owners are unclear, then remediation reporting will become little more than a record of drift. If governance routes are not working, then a board note may say that a risk is under review without showing whether anyone with authority has actually made a decision about it.

This is the real test. The quality of the report depends on the quality of the underlying programme. That principle is visible across good governance work. Weak outputs often reflect weak ownership, weak scoping, weak evidence or weak escalation. Strong outputs usually reflect the opposite. The report may be the thing stakeholders see, but it is really a proxy for the state of the system beneath it.

The quality of the report depends on the quality of the underlying programme. Reporting does not create control. It exposes whether control exists.

This is also why privacy reporting often becomes more revealing as organisations mature. Early reporting tends to focus on activity and effort. Better reporting starts to show whether the activity is actually connected to working controls, reduced exposure and visible governance decisions.

What evidence should sit behind the report

A defensible privacy report should allow the organisation to move from a headline statement to the underlying artefacts that support it. That is what turns reporting into evidence rather than narrative. If a report says that records of processing are up to date, there should be reviewed records, clear ownership and visible refresh dates behind that statement. If it says that risk assessments have been completed, the organisation should be able to show the assessments, the scope, the assumptions, the reasoning and the approval path. If it says that actions are closed, there should be closure evidence rather than a bare status change. That evidence base will usually include things such as:

  • records of processing activities;
  • DPIAs, TIAs, LIAs and screening records;
  • incident and breach logs;
  • rights-handling records;
  • policy review histories;
  • training records;
  • vendor review materials;
  • action trackers;
  • issue logs;
  • sign-off records;
  • committee or governance papers where escalation has occurred.

The point is not to generate paperwork for its own sake. It is to make sure that material statements in the report have somewhere reliable to stand.

This is where stronger governance tends to reveal itself in practice. A reporting model built on recurring updates, review cycles, trackers, logs, sign-off points and shared evidence folders is far more likely to withstand scrutiny than one built on manual summary writing alone. That is because the report is not being asked to do all the work. It is sitting on top of a documentary and operational record. Reports do not create accountability on their own. Artefacts do.
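
In practice, that can be as simple as requiring every material statement in the report to carry references to its supporting artefacts. A small illustrative sketch, with hypothetical statements and file references:

```python
# Illustrative only: each material statement carries references to the artefacts
# that support it, so a positive status can be traced to evidence on demand.

report_lines = [
    {"statement": "Records of processing are up to date",
     "evidence": ["ropa/2024-Q4-review.xlsx", "ropa/owner-signoff.msg"]},
    {"statement": "All audit actions closed",
     "evidence": []},   # a status with nothing behind it
]

unsupported = [line["statement"] for line in report_lines if not line["evidence"]]
if unsupported:
    print("Statements with no supporting artefacts:", unsupported)
```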

Where board reporting usually goes wrong

Board reporting on privacy often fails for one of two reasons. It is either too thin to be useful, or too detailed to be intelligible. Where it is too thin, it tends to reassure rather than inform. The board is told that privacy activity is ongoing, that compliance matters are being monitored, that incidents are under control, and that assessments are in train. That kind of reporting rarely helps the board understand whether governance is functioning. It gives them a privacy presence, not privacy assurance.

Where it is too detailed, the opposite problem arises. The board receives operational noise rather than governance insight. Too much process detail obscures the real question, which is whether significant weaknesses are visible, whether material risks remain open, whether repeated failures are emerging, and whether management is genuinely following through on remediation.

The board does not need a privacy activity log. It needs to know whether governance is working. In practice, that means board reporting should be capable of showing things such as:

  • recurring weaknesses rather than isolated incidents;
  • high-risk items that remain unresolved;
  • repeated slippage in remediation;
  • operational ambiguity that affects the organisation’s risk position;
  • evidence that material matters have been escalated, not absorbed silently into BAU.

It should also be able to distinguish between a problem being monitored and a problem being meaningfully controlled.

A board does not need more privacy numbers. It needs to know whether governance is functioning, where risk remains open, and whether remediation is real.

That is one of the most important discipline points in privacy reporting. Board reporting should not smooth over unresolved uncertainty in order to appear neat. If the business has not confirmed the underlying processing position, if key evidence is still missing, if a control has not yet changed in practice, or if the privacy function is dependent on another team to close the issue, the board should not be told a stronger story than the evidence can support.
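
One way to make that discipline concrete is to build the board view from the issue record itself, surfacing open high-risk items, repeated slippage and recurring themes rather than activity counts. The sketch below is illustrative only; the fields and thresholds are assumptions, not a prescribed model.

```python
# Illustrative sketch: a board view derived from the issue register rather than
# written as free narrative. Fields and thresholds are hypothetical.

issues = [
    {"id": "P-14", "risk": "high",   "status": "open",      "deadline_moves": 3, "theme": "vendor oversight"},
    {"id": "P-22", "risk": "high",   "status": "open",      "deadline_moves": 0, "theme": "RoPA coverage"},
    {"id": "P-31", "risk": "medium", "status": "monitored", "deadline_moves": 2, "theme": "vendor oversight"},
]

board_view = {
    "open_high_risk": [i["id"] for i in issues if i["risk"] == "high" and i["status"] != "closed"],
    "repeated_slippage": [i["id"] for i in issues if i["deadline_moves"] >= 2],
    "recurring_themes": {t for t in (i["theme"] for i in issues)
                         if sum(i["theme"] == t for i in issues) > 1},
    "monitored_not_controlled": [i["id"] for i in issues if i["status"] == "monitored"],
}
print(board_view)
```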

Why DPOs should care about dependency, not just completion

One of the most useful ways to tell whether privacy reporting is mature is to see whether it shows dependency honestly. Privacy work is often collaborative by nature. A RoPA cannot be finalised without business owners confirming actual practice. A risk assessment cannot be completed properly without operational and technical inputs. An audit action cannot be closed just because the privacy team has drafted the right wording if the actual control owner has not changed the underlying process. A vendor issue may remain unresolved because procurement, IT, legal and the business have not aligned.

Weak reporting tends to hide that. It gives the impression that everything sits within one neat delivery stream. Strong reporting makes dependency visible. This matters especially for the DPO or privacy lead. If dependency is hidden, the DPO is left with reporting that appears positive while material blockers remain outside privacy control. That is dangerous, because it makes it harder to tell the difference between genuine progress and unresolved organisational drag. A stronger report should show:

  • what the privacy team has completed;
  • where business confirmation is still outstanding;
  • where sign-off has not happened;
  • where technical clarification is missing;
  • where management decision is required before the issue can move.

That is not a weakness in the report. It is a strength. It makes the organisation’s real position visible.

Considerations for better reporting

  • Do not accept reporting that is smoother than the underlying evidence.
  • Push for reporting that shows dependency, not just status.
  • Treat repeated slippage as a governance issue, not an administrative irritation.
  • Be careful of “closed” actions where the closure evidence is weak or indirect.
  • Make sure the reporting distinguishes between privacy effort and business completion.

Metrics should answer management questions, not just count work

A great deal of privacy reporting suffers from a numbers problem. Not because there are too few numbers, but because the wrong numbers are being asked to carry too much meaning. It is easy to count activities. The organisation can usually say how many assessments were completed, how many rights requests were received, how many incidents were logged, how many policies were reviewed, how many training sessions were delivered. Those figures are not useless. The problem is that they often tell management very little unless they are tied to a real question.

If ten assessments were completed, does that tell you whether risk was assessed early enough to influence decisions? If policy reviews are on time, does that tell you whether the underlying operational issue changed? If rights requests are answered within deadline, does that tell you whether the same upstream weaknesses continue to generate them? If incident numbers are low, does that tell you anything about visibility, under-reporting or quality of control?

Useful metrics should help the organisation understand:

  • whether controls are functioning;
  • whether risk is increasing or reducing;
  • whether issues are recurring;
  • whether remediation is credible;
  • whether the programme is becoming more controlled or simply more active.

This is also where not every useful measure needs to be treated as a KPI in the narrow sense. Some lines are activity measures. Some are indicators of control performance. Some show unresolved exposure. Others show slippage, repeated failure or escalation pressure. The usefulness of the report lies in whether the measure helps someone decide, challenge or intervene. The point of a privacy metric is not to count work. It is to help the organisation understand whether its controls, risks and remediation efforts are moving in the right direction.
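
A simple way to test whether a measure carries meaning is to reframe it around the management question. For example, rather than reporting that ten assessments were completed, measure how often assessment started early enough to influence the go-live decision. A minimal sketch, with hypothetical records:

```python
# Sketch only: a metric reframed to answer a management question rather than
# count work. Dates and field names are hypothetical ISO-format strings.

assessments = [
    {"initiative": "HR analytics",    "dpia_started": "2024-02-01", "go_live_decision": "2024-03-15"},
    {"initiative": "Chatbot rollout", "dpia_started": "2024-06-20", "go_live_decision": "2024-06-01"},
]

# Count how many assessments began before the decision they were meant to inform.
assessed_before_decision = sum(
    a["dpia_started"] <= a["go_live_decision"] for a in assessments
)
print(f"{assessed_before_decision}/{len(assessments)} DPIAs started before the go-live decision")
```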

A strong report should track remediation, not just issues

One of the clearest differences between weak reporting and strong reporting is what happens after the issue is identified. Weak reporting often stops at visibility. The issue appears in the pack, gets discussed, remains in the tracker, and returns in some slightly altered form next month. Over time, that becomes a familiar pattern. The organisation gets better at reporting the issue than resolving it.

Strong reporting does something different. It helps turn the issue into managed remediation. That means the report should make visible:

  • what the issue is;
  • why it matters;
  • who owns the next step;
  • what evidence will support closure;
  • when follow-up is required;
  • and whether escalation is now justified.

That is what makes reporting operationally useful. It also helps avoid one of the most common governance failures in privacy work: the quiet conversion of unresolved issues into administratively “closed” items. A privacy issue is not closed because it disappeared from the tracker. It is closed when the organisation can evidence that the weakness has actually been addressed.

A privacy issue is not “closed” because it disappeared from the tracker. It is closed when the organisation can evidence that the weakness has actually been addressed.

This is why remediation reporting matters so much. It shows whether the organisation is genuinely following through, or simply learning to describe the same problems more efficiently.
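
That closure discipline can be enforced mechanically: an action should not be capable of moving to "closed" unless an owner and closure evidence are recorded. The following is an illustrative sketch only, with hypothetical field names.

```python
# Illustrative only: an action cannot be marked closed unless an owner and
# closure evidence exist. Field names are hypothetical.

from datetime import date

def can_close(action: dict) -> bool:
    """Closed means the weakness is evidenced as addressed,
    not that it stopped being discussed."""
    return bool(action.get("owner")) and bool(action.get("closure_evidence"))

action = {
    "issue": "Retention not applied in CRM",
    "why_it_matters": "Data kept beyond the stated retention period",
    "owner": "CRM product owner",
    "closure_evidence": [],            # nothing to show yet
    "follow_up_due": date(2025, 3, 31),
    "escalated": False,
}

status = "closed" if can_close(action) else "open"
print(status)   # stays open until evidence is attached
```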

Audit resilience is built in ordinary governance

Audit resilience is often misunderstood as something that happens when audit arrives. In reality, it is built in ordinary governance. An organisation is resilient under scrutiny when it can reconstruct the decision trail without relying on memory. It should be able to show what the issue was, where it was logged, who reviewed it, what action followed, what evidence supported the conclusion, and whether any residual risk remains open. That is not a special audit exercise. That is what good governance should already be producing.

The same is true of regulator readiness. Regulator readiness does not begin when an information request lands. It begins when the organisation builds reporting in a way that preserves evidence, ownership, review history and escalation logic as part of routine operations.

This is one of the reasons recurring governance structures matter so much. Monthly status updates, action trackers, review cycles, sign-off points, incident logs, governance meetings and shared evidence environments may look administrative from the outside. In reality, they are often the things that determine whether the organisation can answer difficult questions later with confidence rather than reconstruction.

Ordinary reporting should reduce the need for extraordinary panic later.

Where AI and resilience make weak reporting more dangerous

Weak reporting becomes more dangerous where AI systems and resilience obligations are in scope. AI-related processing often involves more opacity, broader third-party dependency, more complex service chains and weaker visibility over how data is actually being handled in practice. That means broad assurances are especially risky. If reporting on AI use is built on vague inventories, incomplete review records or soft assumptions about how the service operates, the organisation may be reporting confidence that it has not yet earned.

Resilience issues create similar problems. Incidents, third-party dependencies, recovery arrangements, operational workarounds and critical service impacts often sit across privacy, security, operational resilience and vendor governance. If reporting is too siloed, those overlaps remain hidden. If reporting is too soft, the organisation may describe control where it really has unresolved dependency.

The lesson is not that all governance must be merged into one report. It is that weak reporting becomes less defensible where the facts are more complex, the dependencies are broader, and the evidence is harder to assemble after the fact. Where AI or resilience issues intersect with privacy, the answer is not lighter reporting. It is stronger evidence. So:

  • Where AI is involved, insist on clearer inventories, clearer scoping and stronger evidence trails.
  • Where resilience issues overlap with privacy, make sure unresolved dependency is visible.
  • Be wary of reporting that relies heavily on supplier narrative without internal validation.
  • Treat opacity as a reporting risk, not just a technical risk.

In summary

Privacy reporting is not valuable because it produces a polished management paper. It is valuable because it helps the organisation show that governance is working. The strongest reporting models do more than summarise activity. They preserve evidence, expose dependency, support remediation, clarify ownership and strengthen the organisation’s ability to withstand challenge. They help show not just what has been done, but what can be proved, what remains unresolved, and what has been escalated because it still matters.

That is the real move from privacy metrics to audit resilience. If reporting does not help the organisation demonstrate control, evidence remediation and withstand scrutiny, it is not yet doing the job it needs to do.

This article is intended to support the learning covered in Hour 3 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Cross-Border Transfers for DPOs

This article accompanies Hour 2: Cross-Border Transfers in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Practical DPO Perspective

Cross-border transfers are often presented as a narrow legal issue: identify the transfer, select a mechanism, insert the clauses, and move on. That is not how this works in practice. For most organisations, the real weakness is not a complete absence of legal awareness. It is that the underlying transfer analysis is often shallow. The organisation may know that international transfers are regulated, but still fail to answer the questions that actually matter:

  • what is the transfer scenario?
  • who is receiving the data in practice?
  • where can it be accessed from?
  • is the data intelligible in the destination jurisdiction?
  • what, if anything, do the safeguards materially change?
  • and can the organisation stand over the position it has taken?

From a DPO perspective, this is where the issue becomes real. Cross-border transfers are not just about Chapter V. They are a practical test of whether the organisation understands its systems, its vendors, its dependencies and its own governance.

The first mistake is often getting the transfer analysis wrong

A surprising amount of poor transfer analysis starts too late. The organisation moves quickly to SCCs, adequacy or template wording before it has properly identified what the transfer actually is. That matters because not all overseas access scenarios are the same.

A temporary employee working remotely while travelling is not necessarily the same as engaging a contractor established in a third country to access internal systems. A cloud platform hosted in the EEA is not the same as a connected service extracting data from that platform and processing it through its own US-based infrastructure. A support arrangement allowing occasional limited troubleshooting access is not the same as routine privileged administrative access from outside the EEA.

Those distinctions are not technical trivia. They shape the legal analysis. For DPOs, the first step is therefore not “Which clauses do we need?” It is “What exactly is happening here?” That means understanding:

  • who the recipient is
  • whether they are acting as processor, controller or contractor
  • whether the data is merely transiting, being stored, or being accessed remotely
  • whether the access is occasional or routine
  • whether the recipient can view the data in clear text
  • whether sub-processors are involved
  • and whether the organisation is dealing with one transfer or a chain of transfers

If those facts are unclear, the rest of the analysis is likely to be weak. Organisations often map where data is hosted but not where it is accessed from. They identify the main vendor but not the sub-processor chain. They treat a tool as part of an existing compliant environment, even though the add-on service is operating outside that perimeter altogether. They also tend to collapse very different overseas access scenarios into one generic “international transfer” label, which obscures the real legal and operational distinctions.

Consider reviewing whether your transfer mapping distinguishes between:

  • storage and remote access
  • employees and third-country contractors
  • primary vendors and sub-processors
  • core platforms and connected tools
  • occasional support access and ongoing operational access
  • pseudonymised or encrypted data versus data readable in clear text

International transfer exposure often turns on facts that are not visible at policy level. The organisation should distinguish between different access and hosting scenarios rather than treating all overseas processing as a single generic issue. Weak factual analysis leads to weak transfer decisions.
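
A transfer-mapping record that captures those distinctions explicitly is far more useful than a single "international transfer" flag. The sketch below is illustrative; the fields and values are assumptions rather than a prescribed schema.

```python
# Illustrative sketch of a transfer-mapping record that keeps the factual
# distinctions visible. Field names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class TransferScenario:
    recipient: str
    recipient_role: str                # "processor", "controller" or "contractor"
    access_type: str                   # "storage", "remote access" or "transit only"
    access_frequency: str              # "occasional" or "routine"
    intelligible_to_recipient: bool    # readable in clear text at the point of access
    subprocessors_involved: bool
    connected_tool: bool               # add-on service outside the core platform

support_access = TransferScenario(
    recipient="Platform vendor support team (US)",
    recipient_role="processor",
    access_type="remote access",
    access_frequency="routine",
    intelligible_to_recipient=True,
    subprocessors_involved=True,
    connected_tool=False,
)
# Two scenarios involving the same vendor can carry very different risk;
# a record like this is what makes that difference visible.
```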

SCCs are often used as a substitute for thinking

Standard Contractual Clauses remain important and, in many cases, necessary. But they are often treated as though they answer more than they actually do.

  1. They do not tell you whether the recipient can access intelligible data.
  2. They do not tell you whether local law may undermine the level of protection expected under EU law.
  3. They do not tell you whether the provider’s support model materially changes the risk.
  4. They do not tell you whether the organisation has understood the actual architecture of the service.

That is why Schrems II mattered so much in practice. It did not make SCCs irrelevant. It made it harder to pretend that contractual wording alone resolves the issue. For DPOs, this is one of the most important mindset shifts. SCCs are not the conclusion. They are the legal vehicle through which the transfer may be supported, provided the surrounding facts and safeguards make that supportable. The real assessment still has to ask:

  • what legal environment is the recipient subject to?
  • what categories of data are involved?
  • can the provider or authorities access the data in intelligible form?
  • what technical and organisational measures exist?
  • what changes if those measures fail or are bypassed?

A signed set of SCCs without that analysis is not a strong position. It is often just a neat-looking file. A recurring problem is the belief that if the vendor is well known, the DPA is polished, and SCCs are attached, the organisation has done enough. In reality, that often means the organisation has documented the mechanism without properly assessing the transfer. Some TIAs then repeat generic language about safeguards while saying very little about how the service actually operates, what the provider can see, or what risk remains if the provider handles the data in clear text. Check whether your transfer analysis goes beyond “SCCs are in place”, generic vendor assurances, high-level statements about security, and broad claims of compliance unsupported by service-specific facts.

For example, ask instead:

  • can the recipient access the data in clear text?
  • what practical difference do the safeguards make?
  • what do we know about the provider’s support and access model?
  • if challenged, could we explain why this transfer remains supportable?

Standard Contractual Clauses should not be treated as a substitute for substantive assessment. The presence of SCCs does not remove the need to understand provider access, destination-country risk, intelligibility of the data and the practical effect of safeguards.

A TIA is only useful if it forces the right factual questions

A Transfer Impact Assessment is often described as a compliance requirement. That is true, but it is not the most useful way to think about it. A good TIA is a disciplined way of forcing the organisation to confront the underlying facts of the transfer and to document the judgement it has made. It should ask, at a minimum:

  • what data is involved?
  • how sensitive is it?
  • who receives it?
  • where do they operate?
  • what access do they have in practice?
  • is the data intelligible at the point of access?
  • what laws in the destination jurisdiction matter?
  • what measures reduce the exposure?
  • and what residual risk remains?

That is what makes a TIA valuable. It is not simply an internal paper trail. It is a mechanism for converting abstract legal obligations into a decision the organisation can actually defend. This is particularly important for DPOs because weak TIAs tend to fail in the same way: they contain the right headings, but the wrong depth. They reproduce a compliance vocabulary without showing the reasoning that matters. If a TIA never meaningfully addresses whether the provider can view the data in readable form, whether the provider’s support staff are outside the EEA, or whether the sub-processor chain alters the risk, then it is not doing the real job.

The most common weaknesses are boilerplate analysis, late-stage completion, and poor connection to procurement or design decisions. TIAs are often produced after the commercial decision is already made, using generic wording that could apply to almost any vendor. That gives the appearance of control while leaving the actual decision-making unexamined. Review a small sample of your TIAs and ask:

  • do they describe the actual service or just the generic transfer issue?
  • do they identify who can access the data and in what form?
  • do they distinguish between technical safeguards that genuinely reduce risk and those that do not?
  • do they record any limits, conditions or follow-up actions?
  • would the document still make sense to a regulator reading it cold?

A TIA is useful only if it captures the factual and legal reasoning behind the transfer. Boilerplate assessments create the appearance of assurance without showing that the organisation has meaningfully understood the provider, the data exposure or the residual risk.
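
That sample review is easier to run consistently if each reviewer answers the same questions and the results are recorded rather than left as impressions. A minimal illustrative sketch, with hypothetical field names:

```python
# Sketch only: a reviewer answers the same questions for each sampled TIA,
# producing a simple score and a list of gaps. Field names are hypothetical.

REVIEW_QUESTIONS = [
    "describes_actual_service",
    "identifies_who_can_access_and_in_what_form",
    "distinguishes_effective_safeguards",
    "records_limits_and_follow_up",
    "readable_cold_by_a_regulator",
]

sample = [
    {"tia": "Vendor A transcription",
     "describes_actual_service": True,
     "identifies_who_can_access_and_in_what_form": False,
     "distinguishes_effective_safeguards": False,
     "records_limits_and_follow_up": True,
     "readable_cold_by_a_regulator": False},
]

for record in sample:
    passed = [q for q in REVIEW_QUESTIONS if record.get(q)]
    gaps = [q for q in REVIEW_QUESTIONS if not record.get(q)]
    print(record["tia"], f"{len(passed)}/{len(REVIEW_QUESTIONS)}", "gaps:", gaps)
```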

AI and connected tooling are where organisations most easily lose control

If traditional transfer analysis was already difficult, AI-enabled services have made it harder. The challenge is not simply that AI tools may process data outside the EEA. It is that the processing chain is often less transparent, the sub-processor landscape is broader, and the customer may have less visibility over retention, reuse, support access and model-related processing than they assume. This is where a transfer analysis that looks acceptable on paper can become weak very quickly.

An organisation may believe it is operating inside a controlled environment, for example through an EU-hosted collaboration or productivity suite. But if a connected AI-enabled service extracts transcripts, recordings or other content from that environment and processes it through its own infrastructure, then the original boundary is no longer the key point. The real question becomes what happens once the data leaves that environment, who can access it, and under what conditions.

That is where DPOs need to be particularly careful. In an AI context, the transfer issue is not just where the data goes. It is whether the organisation retains meaningful visibility and control once the data enters that processing environment. That means asking harder questions:

  • is the tool using third-country infrastructure?
  • is prompt, transcript or content data retained?
  • is it available for model improvement, troubleshooting or analytics?
  • who are the relevant sub-processors?
  • can humans at the provider access the data?
  • is the data encrypted only in transit and at rest, or is it still intelligible during processing?
  • does the organisation understand the real boundaries of the service?

These are not optional refinements. They go to the heart of whether the transfer analysis is credible.

What we repeatedly see is governance lag. AI-enabled tools are deployed because they are useful, fast and embedded into everyday work. The privacy analysis then follows behind, often relying on assumptions that do not survive closer scrutiny. Organisations also tend to overestimate what “EU-based” marketing language means, particularly where the service depends on broader support, model or sub-processing arrangements.

Schedule a review covering:

  • which AI-enabled tools or integrations are already active
  • whether they extract or replicate personal data from existing systems
  • whether they introduce non-EEA processing or access
  • whether the service terms permit retention, analytics or reuse
  • whether your TIAs and vendor reviews are specific to the AI functionality rather than the core platform alone

AI-enabled services can materially weaken transfer visibility and increase accountability burden. Their use may involve non-obvious processing chains, third-country infrastructure, multiple sub-processors and reduced customer control. These tools should be assessed as transfer and governance issues, not just as productivity features.
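
A simple inventory line per active AI-enabled tool can make that review repeatable, separating the exposure facts from the assurance work that is still outstanding. The sketch below is illustrative only; the field names are assumptions.

```python
# Illustrative only: one inventory line per active AI-enabled tool or integration,
# separating exposure facts from outstanding assurance. Field names are hypothetical.

ai_tools = [
    {
        "tool": "Meeting transcription add-on",
        "extracts_data_from_core_platform": True,
        "non_eea_processing_or_access": True,
        "terms_reviewed_for_retention_and_reuse": False,
        "tia_specific_to_ai_functionality": False,
    },
]

EXPOSURE = ["extracts_data_from_core_platform", "non_eea_processing_or_access"]
ASSURANCE = ["terms_reviewed_for_retention_and_reuse", "tia_specific_to_ai_functionality"]

for tool in ai_tools:
    exposures = [f for f in EXPOSURE if tool.get(f)]
    missing = [f for f in ASSURANCE if not tool.get(f)]
    if exposures and missing:
        print(tool["tool"], "- exposure:", exposures, "- assurance missing:", missing)
```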

This is also a third-party oversight issue, and in some sectors a resilience issue

Cross-border transfers are often kept within the privacy silo. In practice, they overlap heavily with vendor governance, outsourcing oversight and, in regulated sectors, broader operational resilience concerns. If a critical or hard-to-replace provider stores or accesses personal data outside the EEA, the issue is not simply whether there is a lawful mechanism in place. It is also whether the organisation has enough visibility, assurance and control over that provider relationship. That is why transfer governance should not sit apart from wider third-party review. A provider may at the same time be:

  • a material processor of personal data
  • an important operational dependency
  • a source of concentration or substitution risk
  • and a point of exposure because of non-EEA access or sub-processing

Where those issues are reviewed in separate silos, the organisation can end up with a legally tidy but operationally weak position. For DPOs, this matters because the transfer analysis is often only as good as the information the organisation has about the vendor. If that visibility is poor, the privacy conclusion will usually be weaker than it appears. This is particularly relevant in financial services and other regulated environments, where transfer governance may support broader expectations around supplier oversight, dependency management and evidence of control. The point does not require a full DORA analysis; it simply requires recognising that the same provider relationship may matter for several governance reasons at once.

A common failure point is fragmentation. Procurement reviews the contract. IT reviews the implementation. Risk reviews continuity. Privacy reviews the DPA. But no one joins that into a coherent view of how the provider actually operates, how dependent the organisation has become, and whether the privacy analysis still holds if service conditions change.

Questions to ask:

  • which providers are operationally significant as well as privacy-relevant
  • whether transfer review is linked to vendor governance and oversight
  • whether changes in hosting, support model or sub-processors are captured and escalated
  • whether board reporting on critical third parties includes material transfer exposure where relevant

International transfers may also expose wider third-party and resilience weaknesses. Where a provider is operationally important and processes personal data outside the EEA, the organisation needs not only a lawful mechanism but sufficient visibility and control over that dependency.

For DPOs, the real issue is whether the organisation can defend the position it has taken

The mature question in this area is not “Do we know that cross-border transfers are regulated?” Most organisations do. The more important question is whether the organisation can explain, with evidence, why it believes a given transfer is supportable. That requires more than awareness of the law. It requires enough understanding of systems, vendors and governance to connect the legal mechanism to the real operational facts. It requires TIAs that reflect the actual arrangement rather than generic precedent. It requires challenge where the business assumes that a contract or a familiar vendor name resolves the issue. And it requires senior reporting that turns transfer risk into something visible rather than theoretical.

That is why cross-border transfers are such a useful measure of programme maturity. Where the organisation gets this right, it usually indicates something broader: joined-up governance, stronger vendor control, clearer ownership and a privacy programme that can translate legal standards into defensible decisions. Where it gets this wrong, the same pattern usually appears elsewhere too.

The real difficulty is rarely total ignorance. It is fragmented ownership, weak operational visibility and analysis that is neater than it is deep. Privacy teams may know the law, but not have enough visibility into real access patterns, vendor architecture or AI-enabled data flows to challenge the business properly.

To combat this, ask:

  • who owns transfer mapping in practice?
  • who signs off TIAs and on what basis?
  • how are changes in tools, vendors or support arrangements identified?
  • can the organisation distinguish between compliant documentation and defensible analysis?
  • could it explain the position clearly to a regulator or auditor if required?

Cross-border transfer compliance is a practical test of governance maturity. It shows whether the organisation can convert legal requirements into evidence-based decisions, meaningful supplier oversight and a position that can be defended if challenged.

Final thoughts

Cross-border transfers are not difficult because the law is obscure. They are difficult because they expose whether the organisation has really understood its own operating model. For DPOs, that is the key point. This is not ultimately about inserting clauses into contracts or reciting Schrems II. It is about identifying where the data goes, who can access it, what the technical and organisational reality looks like, and whether the organisation can justify the conclusion it has reached. That is what makes transfer compliance useful. It does not just test legal knowledge. It tests whether privacy governance is actually working.

This article is intended to support the learning covered in Hour 2 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Defensible Vendor Privacy Lifecycles

This article accompanies Hour 4: Vendor Management Oversight in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Vendor privacy governance is usually weakest where the organisation treats it as a gateway rather than a lifecycle

A lot of vendor privacy governance still operates as though the important work happens at the point of onboarding. The business identifies a supplier. Procurement starts the engagement. Legal reviews the contract. Privacy is asked whether a DPA is needed, whether a DPIA is required, or whether the arrangement raises any obvious concerns. The service then goes live and, unless something goes wrong, the relationship is treated as settled. That model is common. It is also one of the reasons vendor oversight often proves weaker than organisations expect.

A vendor relationship does not remain low-risk because it was reviewed once. It changes as the service changes, as use expands, as more data moves through it, as the vendor updates its terms, as additional business teams start relying on it, and as the relationship becomes operationally harder to challenge or replace. That is true of processor arrangements, controller-to-controller disclosures, hybrid relationships and joint controllership scenarios alike, even though the legal and governance implications differ across each.

The practical issue is not that organisations have no process. It is that the process is too heavily concentrated at the front end and too thin thereafter.

Vendor privacy governance is strongest where the relationship is treated as a lifecycle rather than a contracting event, and where the organisation is clear about what the DPO advises, what privacy operations runs, and what other functions must own.

Once the legal characterisation is understood, the next question is not simply who owns the vendor. It is how the organisation runs the privacy dimension of the relationship over time in a way that is workable, collaborative and capable of standing up to scrutiny.

The first practical mistake is assuming that one vendor process can govern every data relationship

A lot of internal vendor models are built for efficiency. One onboarding route, one review form, one contract flow, one renewal rhythm. That can make sense from a procurement perspective. It is much less reliable from a privacy governance perspective.

A processor arrangement does not create the same accountability issues as a controller-to-controller sharing arrangement. A joint controllership scenario does not raise the same operational questions as a straightforward cloud hosting relationship. A hybrid arrangement cannot sensibly be governed as if one legal label answers everything. If the organisation tries to run all of those through a single privacy model without differentiation, the result is usually one of two things: either the process becomes so generic that it stops being meaningful, or the complexity is pushed into side conversations and never reflected properly in the live governance model.

The DPO and privacy operations team need to resist that flattening instinct. The point is not to make the process cumbersome. It is to make it accurate enough that the organisation’s control model follows the actual data relationship.

Different relationships require different forms of privacy governance. Different stages of the lifecycle require different inputs. Different functions need to come in and out of the picture at different times. A defensible model is one that makes those movements clear.

A processor arrangement, a controller-to-controller disclosure, a hybrid relationship and a joint controller arrangement may all sit under “vendor management” internally, but they do not create the same accountability problem and should not be governed as if they do.

That point is often more useful than a broad statement that vendor management should be “cross-functional”. Cross-functional is not the point. The point is that the organisation should be deliberate about how privacy governance moves through the relationship and about what kind of issue it is actually dealing with at each stage.

The DPO and privacy operations team should not be doing the same job

One of the reasons vendor governance becomes muddled is that the privacy function itself is not always clear on role separation. The DPO is drawn into workflow administration because they are seen as the privacy authority. Privacy operations ends up holding legal nuance because it is the team running the process. Neither outcome is especially stable.

A better model is more deliberate. The DPO’s role is to advise on legal characterisation, accountability structure, material risk, points of uncertainty and escalation. The DPO should be capable of explaining when the organisation is using the wrong relationship model, when a processor arrangement needs stronger oversight, when a controller-to-controller sharing arrangement is insufficiently defined, when a hybrid model needs to be split more clearly, or when a joint controllership issue is being masked by convenience drafting. The DPO also has an important role in helping the organisation decide which vendor issues should remain operational and which need to move into risk, management or board reporting.

Privacy operations does something different. It gives the model repeatability. It helps intake happen. It gathers the right questions early. It routes relationships to the right reviewers. It tracks what documentation is needed. It follows up on outstanding actions. It maintains records of review, reassessment, approvals and renewals. It keeps the organisation from repeatedly rediscovering the same problem because the lesson was never built into the process. In mature environments, privacy operations is often what turns privacy requirements from ad hoc commentary into a working governance system.

These roles need each other, but they are not interchangeable.

The DPO should not become the workflow, and privacy operations should not be left to carry the legal and governance judgment alone.

That is a useful distinction because it prevents the two most common failure modes. In one, the DPO is overloaded with operational handling and loses the capacity to focus on accountability and governance. In the other, privacy operations is left running a process that appears efficient but cannot distinguish between routine privacy administration and a relationship that is becoming harder to defend.

A better working model is collaborative. The DPO provides judgment, interpretation and escalation. Privacy operations provides structure, rhythm and evidence. Procurement, legal, IT and the business contribute their own essential inputs, but they do so within a privacy model that is designed rather than improvised.

The lifecycle should begin before contracting, not at the point the contract arrives

One of the easiest ways to weaken vendor governance is to bring privacy in only when the paperwork is already moving. By that stage, the business may already be committed, procurement may be working to a deadline, and the vendor may already be positioned internally as the chosen solution. Privacy review then becomes reactive. The function is asked to check documents, raise objections quickly, or “just confirm what is needed”.

That is not the strongest place to do privacy analysis. A better lifecycle starts earlier, at intake and scoping. The organisation should understand what service is being procured, what personal data will be involved, which data subjects are affected, what the vendor is actually doing with the data, which systems the service will connect to, whether the arrangement appears to involve a processor, a third-party controller, joint controllership or a hybrid mix, and what other functions need to be involved from the start.

That is not about slowing procurement down. It is about ensuring that the privacy position is shaped before the relationship hardens commercially. The collaborative nature of this stage matters. The business should be able to explain the use case and intended reliance. Procurement should be able to frame the route to engagement. IT or security should start identifying technical fit and integration implications. Privacy operations should gather and route the relevant information so the right analysis can happen. Legal and the DPO should be able to assess the emerging structure rather than trying to retrofit it later.

When that early step is skipped, organisations often end up treating classification and contract structure as secondary clean-up tasks. That is precisely the wrong way round.

Early privacy involvement is most useful where it shapes the relationship before the organisation becomes committed to a legal and operational model that is harder to unwind.

That is one of the clearest reasons to think in lifecycle terms. The privacy function is not there only to clear the paperwork. It is there to help make sure the relationship starts on a basis that can still be defended once the service is live.

Contracting should reflect the legal model, but the contract is not the control environment

Once the arrangement has been characterised properly, the contract can be built around that analysis. If the vendor is acting as a processor, an Article 28-compliant DPA is the right foundation. If the relationship is controller-to-controller, a more suitable data sharing framework or controller-to-controller set of terms will often be needed. If the arrangement involves joint controllership, the allocation of responsibilities needs to be dealt with in a way that reflects that structure. If the model is hybrid, then the documentation should follow the split rather than pretending one label is enough.

That sounds obvious, but it is still common to see organisations attach a processor addendum to everything because it is operationally familiar. That may solve an immediate workflow problem. It does not solve the privacy problem if the arrangement includes controller elements that remain unaddressed.

Even where the contract is legally structured well, the privacy team should be careful not to treat the document set as if it were the privacy governance model in itself. A DPA is part of the control environment, not the whole of it. The same is true of a data sharing agreement or joint controller arrangement. The contract captures the intended legal position. It does not ensure the relationship is understood, monitored, revisited or escalated appropriately once live.

Contracts should reflect the relationship. They should not be mistaken for the relationship itself.

That point becomes especially important where the organisation wants assurance that a relationship is “covered”. The more useful question is covered for what. Covered in a drafting sense is not the same as governed in an operational sense. Privacy operations can help a great deal here by making sure the record of review does not stop at “agreement signed”, but links the contractual model to the live controls, reassessment triggers and ownership expectations that need to follow.

Implementation is where privacy assumptions often start to drift

Once the service is approved and the contract is in place, the organisation often relaxes. That is understandable. The hard part feels complete. The real practical difficulty, however, often begins at implementation.

This is where services become real. The actual configuration may differ from the one assumed at review. The categories of users may widen. Integrations may connect the tool to additional datasets. Business teams may start using the platform in ways that were not fully envisaged at intake. Optional features may be turned on. The amount of data may increase. A platform may move from an isolated use case to something embedded in a broader process.

This is how privacy drift happens. The original review may not have been poor. It may have been entirely reasonable on the facts available at the time. The weakness emerges because the implementation moves the relationship away from those original facts without any corresponding privacy check.

This is one of the places where privacy operations has real practical value. It can build in implementation-stage prompts that force the organisation to confirm whether the deployment still matches the assumptions on which the original legal model and risk view were based. Records of processing may need to be updated. The business may need to confirm scope. Security or IT may need to flag a change in integration or architecture. Legal or the DPO may need to revisit the relationship where the change is substantive.

Many vendor relationships do not become problematic because the onboarding analysis was obviously wrong. They become problematic because the live use of the service outgrows the assumptions on which that analysis was based.

That is one of the clearest points a DPO or privacy lead can take into governance reporting. It explains why a vendor can appear compliant at appointment and still become materially weaker from an accountability perspective later. The issue is often not absence of process. It is unrecognised drift.

Monitoring needs to be proportionate, but it cannot be absent

A defensible lifecycle needs monitoring, but not every vendor should be monitored in the same way. One of the easiest ways to make vendor governance unworkable is to impose the same intensity on every relationship regardless of the data, the role classification, the service criticality or the organisation’s dependency. The better approach is to be proportionate and specific.

For processor relationships, monitoring may involve checking for changes in sub-processors, reviewing changes to service terms, looking at certifications or assurance material, revisiting contractual rights at renewal points, and making sure that the practical level of oversight remains matched to the significance of the processing. For controller-to-controller relationships, the privacy monitoring may need to focus more on whether the original basis and scope of the sharing still make sense, whether onward uses have changed, whether transparency positions remain accurate, and whether the organisation would still be comfortable defending the sharing rationale if asked.

Hybrid and AI-enabled relationships often need closer attention because their complexity or opacity makes drift harder to spot. If a service introduces new AI features, changes how data is handled for security or service improvement, or modifies the vendor’s own-use position, the privacy significance may be much greater than a routine contract change process would suggest.

The objective is not to keep every relationship under permanent scrutiny. It is to make sure the organisation has some meaningful way of detecting when the relationship it is relying on is no longer the relationship it originally assessed.

Monitoring is not strongest where it is constant. It is strongest where it is proportionate, role-sensitive and capable of detecting material change.

That is a practical point worth making because many organisations oscillate between two weak positions: either no live monitoring at all, or a burdensome review model that is too generic to be useful.
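
One way to make proportionality explicit is to define, per relationship type, what is checked and how often, tightening the cadence where the relationship is operationally critical. The following is an illustrative sketch, not a prescribed monitoring standard.

```python
# Illustrative sketch: monitoring checks and cadence keyed to relationship type,
# tightened where the vendor is operationally critical. Values are assumptions.

MONITORING_MODEL = {
    "processor": {
        "cadence_months": 12,
        "checks": ["sub-processor changes", "service term changes",
                   "assurance material", "contractual rights at renewal"],
    },
    "controller_to_controller": {
        "cadence_months": 12,
        "checks": ["basis and scope still valid", "onward uses unchanged",
                   "transparency position accurate"],
    },
    "hybrid_or_ai_enabled": {
        "cadence_months": 6,
        "checks": ["new AI features", "changes to provider own-use position",
                   "data handling for service improvement or security"],
    },
}

def monitoring_plan(relationship_type: str, operationally_critical: bool) -> dict:
    """Return a cadence and check list proportionate to the relationship."""
    plan = dict(MONITORING_MODEL[relationship_type])
    if operationally_critical:
        plan["cadence_months"] = max(3, plan["cadence_months"] // 2)
    return plan

print(monitoring_plan("hybrid_or_ai_enabled", operationally_critical=True))
```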

Change control is where mature vendor governance becomes visible

The strongest sign of maturity in vendor privacy governance is not a well-run onboarding process. It is a clear approach to change.

Vendor relationships change in ways that matter legally and operationally. A new module is procured. The business wants to expand use. The vendor changes its terms. A new sub-processor is introduced. A once-optional service becomes business critical. AI functionality is layered into an existing platform. The product architecture changes in a way that affects data flows. A service that looked processor-like starts to include more independent purpose-setting by the provider.

If the organisation has no way of recognising and assessing those shifts, then its privacy governance is weaker than it looks. This is where the back-and-forth across functions becomes most important. The business needs to know when a planned change is privacy-relevant and should not simply be treated as an operational enhancement. Procurement and legal need to know when contract or role assumptions need to be revisited. IT and security need to flag architectural changes rather than assuming the privacy position remains static. Privacy operations needs to capture the trigger and route it. The DPO needs to assess whether the change alters the legal characterisation, the accountability position or the level at which the issue should now be escalated.

A mature vendor model is not shown by how neatly a relationship is onboarded, but by whether the organisation knows when that relationship has changed enough to require fresh privacy analysis.

This is one of the most useful sections to get right because it turns vendor governance from a static compliance activity into a real operating model. It also fits naturally with the CPD hour’s emphasis on continuous oversight rather than one-off onboarding.
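
Those triggers are most useful when they are written down, so that a routine-looking product update is routed to privacy rather than waved through as an operational enhancement. A minimal illustrative sketch; the event names are hypothetical.

```python
# Illustrative only: reassessment triggers recorded explicitly so vendor changes
# are routed to privacy review. Event names are hypothetical labels.

REASSESSMENT_TRIGGERS = {
    "new_module_or_expanded_use",
    "vendor_terms_changed",
    "new_sub_processor",
    "service_becomes_business_critical",
    "ai_functionality_added",
    "architecture_or_data_flow_change",
    "provider_sets_own_purposes",
}

def requires_privacy_reassessment(change_events: set[str]) -> bool:
    """True if any observed change matches a recorded trigger."""
    return bool(change_events & REASSESSMENT_TRIGGERS)

# A routine-looking product update can still be a trigger.
print(requires_privacy_reassessment({"ui_refresh", "ai_functionality_added"}))  # True
```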

Incident, concern and escalation routes need to be built into the model

Some vendor issues do not surface through a formal review cycle at all. They emerge through discomfort, incident handling, complaint patterns, transparency questions, vendor communications, internal confusion about how a service is using data, or a practical inability to answer basic questions when they suddenly matter.

That means a defensible lifecycle needs routes for concerns to be captured and assessed even where no formal trigger was expected. Privacy operations can help by making sure there is a place for those issues to go. It can capture the concern, identify whether the issue is likely to be contractual, technical, legal, operational or mixed, and make sure it reaches the right people. The DPO can then assess whether the issue reflects a contained operational problem, a misalignment between the documented model and the live reality, a broader accountability weakness, or an issue that needs to be fed into governance structures more explicitly.

That collaborative model matters because not every concern needs the same response. Some issues can be remediated within the ordinary vendor process. Others may show that the relationship has become harder to defend, harder to monitor or harder to explain than the organisation had appreciated.

Vendor concerns should not need to wait for a breach or renewal before they can enter the governance process.

That sounds simple, but it is often a real weakness. Organisations are often better at documenting what they intended at the point of contracting than at recognising signals later that the relationship may no longer be operating on that basis.

The reporting that matters is not simply that a vendor is high-risk

One of the most useful contributions a DPO or privacy lead can make is to improve the quality of vendor reporting. Too much reporting on vendor privacy risk still collapses distinct issues into broad labels such as “high risk vendor”, “third-party risk” or “privacy concern raised”. That may be concise, but it is often too vague to support good decisions.

A more useful report does not merely say that a relationship is high risk. It explains why. Is the issue that the legal model is unclear or partially documented? Is the relationship processor-based in form but weak in practical oversight? Is the organisation relying on controller-to-controller sharing that is not well bounded? Is the vendor now critical to operations in a way that changes the significance of privacy weakness? Is the service AI-enabled or otherwise opaque such that the organisation’s visibility into the actual handling of data is structurally limited?

These are different problems. They require different decisions. They should not all appear in reporting as though they were just variations of the same vendor issue.

The most useful vendor reporting does not say only that a third party is “high risk”. It explains whether the organisation’s ability to understand, challenge or govern the relationship is weaker than the level of reliance placed on it.

That is a much more effective reporting frame for management or board-level audiences. It tells them why the issue matters in operational and governance terms. It also helps avoid the common problem where the privacy function reports issues accurately enough to document concern, but not clearly enough to drive action.

A board or senior management audience does not need to see every vendor file. It does need to understand where the organisation is relying on relationships that are more legally complex, more operationally important or less transparent than the control environment around them suggests.
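As a purely illustrative sketch of that reporting frame, a structured report entry might capture the distinct dimensions of exposure rather than a single label. The field names and scoring scales below are hypothetical assumptions, not a prescribed model.

from dataclasses import dataclass

# Hypothetical field names and 1-5 scales; the point is that the report
# explains the nature of the exposure, not just a single risk label.
@dataclass
class VendorReportEntry:
    vendor: str
    reliance: int              # 1 (peripheral) to 5 (operationally critical)
    legal_model_clarity: int   # 1 (unclear or partially documented) to 5 (clear)
    practical_oversight: int   # 1 (vendor assurance only) to 5 (verifiable controls)
    exposure_summary: str      # why the relationship matters, in plain terms
    decision_sought: str       # what management is being asked to decide

    def governance_gap(self) -> bool:
        # Flag relationships where reliance on the vendor exceeds the
        # organisation's ability to understand, challenge or govern it.
        return self.reliance > min(self.legal_model_clarity, self.practical_oversight)

entry = VendorReportEntry(
    vendor="ExampleAnalytics",
    reliance=5,
    legal_model_clarity=2,
    practical_oversight=3,
    exposure_summary=("Processor in form, but the vendor's own-use position is "
                      "ambiguous and sub-processor visibility is limited."),
    decision_sought="Accept the residual exposure, renegotiate terms, or plan exit.",
)
print(entry.governance_gap())  # True: reliance outstrips governance strength

The check is deliberately simple. It flags relationships where reliance on the vendor exceeds the organisation's ability to understand or govern it, which is precisely the comparison the reporting frame above asks management to see.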

DORA and AI make generic vendor governance less defensible

DORA and AI do not require an entirely separate vendor process for every relationship, but they do make generic governance models harder to defend.

DORA matters because it forces organisations to take dependency more seriously. A relationship may be legally well-papered and still represent a weak resilience position if the organisation cannot readily substitute the provider, if concentration risk is building, or if the service has become operationally critical in a way that makes privacy weakness more consequential. That does not turn the DPO into the DORA lead, but it does mean that privacy concerns about a vendor may need to be framed in terms of dependency and governance, not just compliance.

AI sharpens a different problem. Some AI-enabled services are difficult to classify neatly and difficult to oversee in a traditionally reassuring way. Product terms may evolve quickly. The vendor’s own-use position may be more nuanced than the sales or onboarding narrative suggests. The service may be layered, opaque or dependent on downstream providers. The organisation may be relying on contractual description and vendor assurance more heavily than it can comfortably verify. That makes lifecycle governance even more important.

A model that works only if the vendor relationship stays static, transparent and easy to describe is not enough.

In some AI-enabled and operationally critical relationships, the most important governance question is not whether the vendor was approved correctly, but whether the organisation still understands what it is relying on and what it can realistically control.

That is the point at which privacy, resilience and governance begin to converge. It is also why a lifecycle model is more useful than an ownership-only model. The issue is not just who owns the vendor. It is how the organisation keeps the relationship intelligible as it evolves.

Renewal, remediation and exit should be treated as part of privacy governance

A lot of vendor models become thin again towards the end of the relationship. Renewal is treated as a commercial issue. Exit is treated as an operational or procurement matter. Privacy comes back in only if the organisation remembers to ask about return or deletion of data. That is another missed opportunity.

A defensible lifecycle should treat renewal, remediation and exit as part of the privacy governance model, not as afterthoughts.

Renewal is an obvious point to reassess whether the legal model still fits the relationship, whether the current documentation still matches the service, whether the level of oversight remains proportionate to the real risk and whether new dependencies or concerns have emerged since onboarding. Remediation matters because some relationships will reveal weaknesses over time that need to be corrected through revised documentation, tighter controls, clearer internal restrictions or a more explicit governance treatment. Exit matters because a relationship that is no longer supportable may need to be unwound in a way that addresses data return, deletion, transition risk, retained copies and learning for future onboarding.

This is also where privacy operations can add a great deal of value. It can make sure that renewal is not just a date in a contract system, but a trigger for targeted reassessment. It can ensure that exit-related privacy actions are not lost between procurement, IT and the business. It can feed lessons from failed or awkward relationships back into the intake model so the same issues are less likely to be repeated.
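A minimal sketch of that idea, assuming renewal dates and review history are already tracked in some structured form, might look like the following. The thresholds and field names are hypothetical; the point is only that renewal surfaces relationships that are due a targeted reassessment.

from datetime import date, timedelta

# Illustrative data only; renewal becomes a trigger for reassessment rather
# than just a date in a contract system.
vendors = [
    {"name": "ExampleCloud", "renewal": date(2025, 3, 1),
     "last_privacy_review": date(2022, 6, 1), "material_change_since_review": True},
    {"name": "ExamplePayroll", "renewal": date(2026, 1, 15),
     "last_privacy_review": date(2024, 11, 1), "material_change_since_review": False},
]

def renewal_reassessments(vendors, today, horizon_days=120, review_stale_after_days=730):
    # Return vendors whose upcoming renewal should trigger a fresh privacy review,
    # either because the last review is stale or the service has materially changed.
    due = []
    for v in vendors:
        renewal_near = (v["renewal"] - today) <= timedelta(days=horizon_days)
        review_stale = (today - v["last_privacy_review"]) > timedelta(days=review_stale_after_days)
        if renewal_near and (review_stale or v["material_change_since_review"]):
            due.append(v["name"])
    return due

print(renewal_reassessments(vendors, today=date(2025, 1, 10)))
# ['ExampleCloud']: renewal is close, the review is stale and the service has changed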

A vendor lifecycle is incomplete if it can onboard and monitor a relationship, but cannot reassess, remediate or exit it in a privacy-governed way.

That is a point many organisations would do well to absorb. A relationship is not well governed simply because it can be approved. It also needs to be capable of being revisited and, where necessary, changed or brought to an end.

What a defensible working model looks like in practice

A defensible working model is therefore not a single form, a DPA repository or a broad statement that vendor management is cross-functional. It is a practical arrangement in which the organisation knows how privacy enters the relationship, how the legal model is analysed, how documentation is aligned to that model, how implementation drift is detected, how monitoring is made proportionate, how change is reassessed, how concerns are captured, how material issues are fed upward, and how renewal or exit is handled with the same level of care as onboarding.

That model should feel collaborative rather than territorial. The business should know when it needs to re-engage privacy because its use of a vendor has changed. Procurement should know that privacy review is not simply a contract annex request. Legal should know when the agreement structure no longer reflects the reality of the data relationship. IT and security should know that architecture and integration changes can carry privacy significance beyond technical risk. Privacy operations should know how to keep the workflow moving without carrying judgment it should not be asked to hold alone. The DPO should know where the organisation’s accountability position is becoming difficult to defend and how to express that clearly.

Vendor privacy governance becomes more defensible when the organisation stops asking whether the vendor has been approved and starts asking whether the relationship is still being governed on the basis on which it was approved.

That is probably the cleanest way to summarise the lifecycle approach. The point is not to make vendor governance endlessly process-heavy. The point is to stop pretending that a relationship reviewed once remains static forever.

Takeaway

A useful next step for a DPO or privacy operations lead is to look at the current vendor model and ask whether it behaves like a gateway or a lifecycle. Does privacy come in early enough to shape the relationship, or only late enough to annotate it? Is there a clear distinction between what the DPO should judge and what privacy operations should run? Are implementation changes, service expansion, new AI features, contractual updates and shifts in business reliance capable of triggering fresh review? Do concerns have somewhere to go before they become incidents? Does reporting explain the real nature of the exposure, or merely record that a vendor is “high risk”? And are renewal, remediation and exit treated as part of governance rather than loose ends?

The practical challenge is not simply to make vendor onboarding more orderly. It is to build a collaborative model in which the organisation can keep understanding, governing and, where necessary, reclassifying the relationships it relies on over time.

This article is intended to support the learning covered in Hour 4 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Who Owns Privacy Accountability?

This article accompanies Hour 3: Privacy Program Metrics in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Why a Defensible Privacy Programme Depends on More Than the DPO

One of the most persistent weaknesses in privacy governance is also one of the least candidly addressed. Organisations often speak as though privacy has an owner in the singular. Sometimes that owner is called the DPO. Sometimes it is the privacy team, compliance function or legal lead. Sometimes the language is softer: privacy “sits with” a particular function, or one person is described as “responsible for privacy”. In practice, however, the position is rarely that neat. A privacy programme may be coordinated by one function, but the things that determine whether it is accurate, current, evidenced, operationally real and legally defensible are spread across the organisation. That matters far more than it first appears.

A privacy programme becomes weak very quickly when one function is expected to stand over controls, decisions, evidence and remediation that it does not actually own. The weakness may not show immediately. Governance can still look busy. Reporting can still be produced. Meetings can still happen. Actions can still appear in trackers. But over time the pressure points become obvious. The privacy function is asked to defend a RoPA that depends on business units confirming what actually happens. The DPO is asked to “close” an issue that depends on IT, procurement or management decision. A board paper describes remediation as though it is progressing cleanly, while the real blockers remain unresolved elsewhere. At that point, the organisation does not really have a privacy ownership model. It has a privacy concentration problem.

This is where experienced readers will recognise a familiar dynamic. The more an organisation talks about “the privacy owner” without distinguishing between oversight, coordination, operational control and business accountability, the more likely it is that those functions are being blurred together. That blur is not merely inconvenient. It weakens governance, distorts reporting and makes assurance less credible. It is also one of the reasons privacy programmes can appear more mature on paper than they are in reality.

The answer is not to dilute accountability until everyone is vaguely responsible and no one is clearly answerable. Nor is it to treat the privacy function as the organisational backstop for every gap. The answer is to be much more precise about who owns what, who validates what, who reports what, and who is entitled, and expected, to challenge when the programme is not functioning as it should.

A privacy programme is not made defensible by naming one owner. It becomes defensible when the right people are accountable for the right parts of the system.

Privacy ownership is often described far more neatly than it actually exists

Organisations often use the language of ownership too casually. They say privacy is “owned” by the DPO or that the privacy team “has responsibility for” the programme. On one level, that is understandable. There is usually a need to identify who coordinates the agenda, answers the questions, keeps the programme moving and acts as a visible point of contact. The difficulty is that this shorthand quickly becomes misleading.

Ownership in privacy governance can mean several different things. It may refer to legal interpretation, operational coordination, drafting and documentation, oversight and challenge, risk management, control implementation, committee reporting or simply the function expected to respond when an issue arises. Those are not interchangeable. Yet in many organisations they are treated as though they are.

If the privacy function is described as the owner of the programme without further nuance, people elsewhere in the organisation often begin to behave as though privacy has somehow absorbed their accountability. Business owners may assume that if the privacy team has documented the process, it now “owns” the process from a governance perspective. Technical teams may treat privacy as the reporting face of a control environment they themselves must actually operate. Senior management may behave as though privacy reporting is something received from the privacy team rather than something that reflects management’s own decisions about risk, remediation and resourcing.

This is one of the reasons privacy programmes become subtly but materially distorted. The privacy team becomes responsible for describing a system whose accuracy depends on others. It becomes the point of reassurance for positions it cannot fully verify alone. It may even become the de facto owner of unresolved issues simply because no one else is prepared to pick them up. That is not an ownership model. It is a transfer of discomfort.

A more defensible approach begins by resisting the temptation to use “ownership” as a blanket term. More mature organisations are usually more disciplined about this. They distinguish between governance accountability, process ownership, control ownership, escalation responsibility, reporting responsibility and legal oversight. They understand that a privacy programme does need a visible centre, but that centre cannot credibly absorb every responsibility without weakening the rest of the system.

This is not just a drafting point. It affects how issues are handled in practice.

  1. If ownership is vague, accountability is weak.
  2. If accountability is weak, reporting becomes over-reliant on narrative.
  3. If reporting becomes narrative-heavy, the organisation often discovers too late that it cannot show what sits behind the confidence it has been expressing.

The DPO is not the operational owner of every privacy issue

This is one of the most important governance distinctions in the entire privacy programme, and it is one that organisations still get wrong with surprising frequency.

The DPO has a specific role. Under the GDPR framework, that role includes informing and advising, monitoring compliance, advising where appropriate on impact assessments, cooperating with the supervisory authority and acting as a contact point. That is already a substantial mandate. It is not, however, the same thing as owning every operational weakness, every unresolved process issue, every control failure, every vendor deficiency or every incomplete remediation action across the organisation.

That distinction matters because organisations frequently behave as though appointing a DPO solves the ownership question. In reality, it often only sharpens it. Once a DPO is in place, there is a risk that operational accountability begins to flow towards the role by default. The DPO becomes the person who is expected to “sort” the RoPA, “fix” the DPIA, “close” the issue, “deal with” the vendor concern, “sign off” the governance position, or “take” the matter to the board. Some of this may look like respect for the role. A good deal of it is often a quiet delegation of ownership from elsewhere, driven in part by a lack of confidence or knowledge. That becomes problematic very quickly.

A DPO should be able to oversee, advise, challenge and escalate. That is different from being expected to carry the operational burden of every unresolved matter. If the DPO becomes the substitute owner of gaps that belong to business units, operational teams or management, the oversight function starts to weaken. It becomes harder to preserve independence of judgment when the DPO is also expected to keep the entire programme operationally afloat.

This is not a purely theoretical problem. It has practical consequences for reporting, decision-making and legal defensibility. If the DPO is the person who is always expected to supply the answer, the organisation can begin to treat the DPO’s presence as a proxy for control. That is dangerous. A DPO may be fully sighted on the issue and still not own the operational levers needed to resolve it. A mature governance model does not confuse visibility with ownership.

Experienced readers will recognise that this distinction is well understood in other governance settings. Internal audit does not become the owner of the control weaknesses it identifies. Risk does not become the operator of the business processes it monitors. Compliance does not become the owner of every policy breach it escalates. Privacy should not be treated differently simply because the DPO is often more visible than those other functions.

The DPO should be able to see weaknesses clearly, challenge them and escalate them. That becomes much harder when the DPO is expected to carry the programme operationally on behalf of everyone else.

This does not diminish the DPO’s importance. It protects it. The role is strongest where it is empowered to identify, challenge and escalate without silently absorbing organisational dependency that ought to remain visible.

Delivery and oversight are not the same thing

Another reason privacy governance becomes confused is that many organisations fail to distinguish between running the machinery of the programme and standing back from it critically. Privacy work often includes a large amount of operational coordination. Documents need to be updated. Inputs need to be gathered. Actions need to be tracked. Assessments need to be scheduled. Training needs to be rolled out. Governance packs need to be prepared. Issues need to be logged and followed up. Meetings need to happen. Evidence needs to be collected and stored. All of that work matters. A privacy programme without this operational discipline will drift quickly.

But operational discipline is not the same thing as oversight. Oversight involves something different. It involves asking whether the underlying position is actually sound, whether the evidence is sufficient, whether unresolved dependency has been hidden behind status language, whether the organisation is too willing to treat progress as closure, and whether an issue has reached the point where escalation is warranted because the open exposure is no longer tolerable as an administrative delay.

When those two strands are collapsed together, the organisation can become administratively active while remaining governance-weak. The privacy team may be doing an enormous amount of work, yet the programme still lacks a clear line between coordinating activity and testing whether that activity has produced real control. That is where motion begins to masquerade as assurance.

This distinction matters especially for experienced professionals because they will know how often organisations measure the wrong thing. A privacy function may be praised for “driving” the programme, but if driving the programme means endlessly compensating for missing ownership elsewhere, the governance model is not really improving. It is becoming dependent on one team’s persistence.

This is not to say that privacy operations and oversight must always sit in separate organisational silos. That would be unrealistic in many settings. It is to say that the distinction needs to remain visible in governance. Someone needs to be able to ask whether the apparent progress is real, whether the evidence supports the status being reported, and whether the issue has genuinely moved or merely been described more neatly.

That is one of the clearest markers of maturity. A programme becomes stronger when it can tell the difference between operational movement and genuine assurance.

The business owns the processing reality

No privacy programme can be more accurate than the organisation’s understanding of what it is actually doing. That sounds obvious, but it has profound governance consequences. Business units and process owners remain central to whether privacy governance is real. The privacy function cannot invent the purpose of processing, the actual workflow, the data being used informally, the operational workaround people have adopted, the practical retention behaviour, the external sharing that has become routine, or the fact that the documented process differs from what happens in practice.

This is where privacy documentation often drifts. A RoPA may be carefully assembled and then slowly lose accuracy because no one updates the privacy team when the process changes. A notice may remain broadly aligned with the original service design while no longer capturing all the actual uses now embedded in practice. A DPIA may be completed against a process map that is already partly outdated. A retention schedule may say one thing while operational teams continue doing another. The privacy team can ask the right questions, challenge incomplete inputs and try to refresh the record, but it cannot fabricate operational truth where business ownership is weak.

That is why the business cannot be treated as a passive recipient of privacy governance. It is not merely there to be “consulted.” It is an essential part of the evidence chain. If process owners do not engage properly, the privacy programme becomes less accurate, less current and less defensible.

This is also one of the reasons ownership models fail so often in practice. The privacy function may become extremely good at drafting, coordinating and reporting while the business becomes increasingly passive. Over time, the organisation begins to speak as though privacy documents “belong” to the privacy team even where the truth beneath them still belongs operationally to the business. That is an inversion of responsibility, and it is one of the quickest ways to weaken a programme without noticing it.

Seasoned professionals will recognise the governance consequence. Once the business starts to think of privacy as a specialist department’s concern rather than a set of obligations embedded in how the business actually runs, the programme becomes more performative and less reliable. It may still look active, but it becomes harder to trust the evidence base.

Privacy documentation is only as reliable as the operational truth underneath it. Where the business disengages, the programme almost always becomes weaker than the paperwork suggests.

Technical and operational controls do not sit in the privacy function

A similar point applies to technical and operational controls. Many of the measures most relevant to privacy are not controlled by the privacy function at all. They sit in IT, security, engineering, procurement, operations or resilience teams.

Access control, system architecture, backup and recovery capability, logging, deletion execution, vendor integrations, permissions management, monitoring practices, configuration decisions, identity management, incident handling and resilience arrangements are all likely to fall substantially outside the privacy team’s direct operational control. The privacy function can ask questions, review positions, seek evidence and report concerns. It cannot, on its own, create technical control where technical ownership is weak.

This matters because privacy reporting often sounds stronger than the technical evidence beneath it. It is easy enough for an organisation to say that technical and organisational measures are in place, that vendors are being managed, or that systems are subject to adequate controls. Those statements may be broadly true. But unless the relevant technical and operational teams are substantively inside the governance model, contributing evidence, confirming practice, validating assumptions and owning remediation, the privacy function may be reporting confidence it cannot fully verify.

That becomes particularly risky where the organisation allows privacy to become the reporting face of technical matters it does not actually control. If the privacy report implies a stronger technical position than the technical owners themselves could comfortably evidence, the organisation has created a governance gap disguised as assurance.

This is also where the overlap with operational resilience and, where relevant, DORA-style governance becomes real. Incidents, third-party dependencies, recovery capability, operational fallback and critical service exposure often sit across multiple governance streams. If privacy reporting is not fed by the teams that actually own those controls, it can become detached from the operational realities that matter most when scrutiny intensifies.

The answer is not to expect privacy teams to become technical specialists in everything. The answer is to ensure that technical and operational owners are genuinely accountable within the model, and that the privacy function is not expected to convert incomplete technical confidence into governance assurance.

Legal, compliance and risk are not interchangeable

A further weakness in many privacy programmes is the tendency to treat legal, compliance and risk as a single, vaguely supportive block. That is a mistake. Each of those functions contributes something different, and the privacy programme becomes more defensible when those differences are respected rather than flattened.

Legal contributes interpretation and defensibility. It helps the organisation understand what the law requires, where the real legal exposure sits, what contractual and jurisdictional factors matter, where the rules are uncertain or contested, how a lawful basis analysis should be approached, and whether a proposed position is one the organisation can stand over. This matters especially where the organisation is operating in grey areas or high-risk environments.

Compliance contributes discipline. It helps turn privacy governance from aspiration into process. It brings follow-up, governance rhythm, challenge on whether required steps have actually happened, and a stronger expectation that obligations will be tracked, not merely noted. A privacy programme without enough compliance discipline often has plenty of policy language and too little operational consequence.

Risk contributes framing and escalation. It helps the organisation articulate residual exposure, locate privacy issues in the wider risk landscape, and decide when a matter has moved beyond an operational inconvenience into something material enough to justify senior attention or formal risk acceptance. Without that framing, privacy issues can remain trapped at the level of repeated discussion without structural consequence.

Those distinctions are not technical for their own sake. They matter because a privacy programme needs interpretation, discipline and escalation. If one of those is weak, the programme tends to drift. If they are all blurred together, the organisation often ends up with broad awareness but weak accountability.

In practice, privacy meetings are held and issues are discussed. Legal questions are noted. Risks are acknowledged. Yet the matter does not move because the organisation has not been clear about which function is meant to do what next. That is precisely where clearer ownership adds value. It does not just identify who attends the meeting. It identifies who is responsible for moving the issue from recognition to action.

Senior management is not just the audience

One of the most unhelpful ways of thinking about privacy governance is to treat senior management as though it simply receives privacy reporting. Senior management is not just the audience for the programme. It is one of the forces that determines whether the programme is real.

This should be obvious when stated plainly, but in practice it is often obscured by the mechanics of reporting. Senior leaders decide where resources go, which delays are acceptable, whether repeated slippage is challenged, whether control weaknesses are genuinely addressed, and whether open risk is tolerated because the commercial or operational inconvenience of fixing it is judged too great. Those are not external observations on the programme. They are part of the programme.

This is why senior management cannot be treated as a passive recipient of privacy assurance. A weak privacy culture usually reveals itself not because reports are absent, but because management behaviour shows that the organisation is comfortable living with unresolved ambiguity. Actions remain open for too long. Ownership remains vague. High-risk items are repeatedly discussed but not properly resolved. The privacy function is expected to keep explaining the issue without management making the harder organisational decisions that would change the position.

When that happens, the reporting may remain active while the governance weakens. The board may continue to receive a privacy section. Committees may still discuss key items. But if management does not treat unresolved privacy exposure as something requiring real attention, the programme’s maturity will be overstated by its reporting.

This is particularly important where the board is concerned. Boards do not need privacy activity counts without context. They do not need long lists of tasks completed by the privacy function. What they need is a clear view of whether governance is functioning, where material weaknesses remain open, whether repeated slippage is occurring, whether remediation is credible, and whether management is genuinely addressing the issues being raised.

A board does not need more privacy numbers. It needs to know whether governance is functioning, where risk remains open, and whether remediation is real.

That is one of the most important discipline points for senior professionals. Board reporting should never disguise operational ambiguity as assurance. If the issue depends on business confirmation, technical validation or a management decision that has not yet been made, the board should not be told a stronger story than the evidence supports.

AI governance makes blurred ownership harder to defend

AI-related use cases make existing ownership weaknesses much more visible. This is because AI adoption often outpaces governance. A business team may begin using a tool. Procurement may engage the supplier. IT may manage access or implementation. Security may review the configuration. Legal may review the terms. Privacy may assess the data position. Compliance may raise process questions. Risk may become interested only once the issue has become more visible or more sensitive. By that stage, the ownership model is often already blurred.

That is exactly where problems begin. AI governance does not tolerate vague ownership well. The organisation needs to know who identified the use case, who classified it, who assessed the legal and privacy implications, who considered the technical and operational exposure, who approved its use, who is monitoring it, and who can stop or escalate it if the position changes. If those questions do not have credible answers, the organisation may still produce reporting on AI use, but that reporting will rest on unstable ground.
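One way to keep those questions visible is a simple use-case record in which each ownership question must have a named answer. The sketch below is illustrative only; the role names and fields are assumptions, not a prescribed register design.

from dataclasses import dataclass, fields
from typing import Optional

# Hypothetical role names; what matters is that each ownership question has a
# credible, named answer before reporting on the use case is relied upon.
@dataclass
class AIUseCaseRecord:
    use_case: str
    identified_by: Optional[str] = None        # who surfaced the use case
    classified_by: Optional[str] = None        # who assessed its risk category
    legal_privacy_assessor: Optional[str] = None
    technical_assessor: Optional[str] = None
    approved_by: Optional[str] = None
    monitored_by: Optional[str] = None
    can_stop_or_escalate: Optional[str] = None

    def ownership_gaps(self) -> list:
        # List the ownership questions that still have no credible answer.
        return [f.name for f in fields(self)
                if f.name != "use_case" and getattr(self, f.name) is None]

record = AIUseCaseRecord(
    use_case="AI summarisation of customer complaints",
    identified_by="Customer Operations",
    legal_privacy_assessor="Privacy / Legal",
    technical_assessor="IT Security",
)
print(record.ownership_gaps())
# ['classified_by', 'approved_by', 'monitored_by', 'can_stop_or_escalate']

A record with gaps is not necessarily a problem in itself. The problem is reporting on the use case as though those gaps did not exist.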

This is particularly important because AI-related use can intersect with privacy, transfers, security, transparency, third-party dependency, employment issues and service resilience all at once. In such settings, it becomes even more dangerous to assume that one function can hold the whole governance picture by itself.

AI does not create the ownership problem. It simply exposes it faster and more harshly. That is why organisations should be stricter, not looser, about ownership where AI is in scope. A privacy team may be central to assessing aspects of the use case. It should not be left holding the entire operational accountability model together through force of effort alone.

What a defensible operating model looks like

A defensible privacy programme is not one where the privacy function has the broadest remit. It is one where accountability is distributed clearly enough that the reporting is credible, the evidence is real and the remediation is not dependent on one function doing everyone else’s work.

In practical terms, that usually means the DPO or privacy lead can oversee, challenge and escalate without being expected to absorb every operational gap. It means the mechanics of privacy operations (workflows, trackers, documentation, evidence collection, follow-up and reporting preparation) are managed deliberately rather than left to happen informally. It means business owners remain accountable for the processing they run and the operational truth beneath the documentation. It means IT, security and resilience teams own the controls they are responsible for and contribute evidence rather than informal reassurance. It means legal contributes defensible interpretation, compliance contributes discipline, risk contributes escalation logic, and senior management makes decisions where the organisation’s exposure remains open.

The value of that model is not aesthetic. It is evidential. It makes the organisation’s reporting more honest, its remediation more realistic, its oversight more credible and its legal position easier to defend. Where the model is weaker, the privacy function ends up writing a story it cannot fully verify. Where the model is stronger, the report reflects a real operating environment with visible ownership and meaningful accountability.

A defensible privacy programme is not one where the privacy team does everything. It is one where the right people cannot avoid doing their part.
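For teams that want to make that distribution explicit, even a very simple ownership map can be a useful diagnostic. The element and function names below are illustrative assumptions; the value is in seeing how much of the model has quietly collapsed onto the privacy function.

# Illustrative ownership map; element and function names are hypothetical.
ownership_map = {
    "processing_accuracy_and_RoPA_inputs": "business_process_owners",
    "technical_and_operational_controls": "it_security",
    "contract_and_legal_interpretation": "legal",
    "governance_rhythm_and_follow_up": "compliance",
    "risk_framing_and_escalation": "risk",
    "resourcing_and_risk_acceptance_decisions": "senior_management",
    "workflow_trackers_and_evidence_collection": "privacy_operations",
    "oversight_challenge_and_escalation": "dpo",
}

# A quick self-check: how much of the model currently sits with privacy alone?
privacy_held = [element for element, owner in ownership_map.items()
                if owner in {"privacy_operations", "dpo"}]
print(f"{len(privacy_held)} of {len(ownership_map)} elements sit with privacy: {privacy_held}")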

DPO / Privacy Pro takeaway

For privacy professionals, the most useful next step is not simply to ask whether the privacy function is busy or whether reporting exists. It is to step back and look at the operating model beneath the reporting. A few questions are worth revisiting.

  1. Are you being asked to stand over positions that depend on evidence held elsewhere?
  2. Are operational owners still clearly visible in the programme, or has the privacy function become the default holder of every unresolved issue?
  3. Does reporting distinguish between what privacy has coordinated and what the business has actually completed?
  4. Are technical and operational teams genuinely contributing evidence, or are they supplying broad reassurance that privacy then has to translate into governance language?
  5. Is management treating open issues as decisions requiring action, or as recurring matters for privacy to continue carrying?

Those are not minor design points. They usually tell you whether the programme is becoming more defensible or merely more elaborate. If the DPO or privacy lead has become the substitute owner of everything that is difficult to close, the answer is not to work harder. It is to correct the ownership model. A privacy programme becomes more credible not when more tasks sit with the privacy function, but when accountability across the business becomes clearer, more visible and harder to avoid.

Finally

A privacy programme is only as credible as the ownership model behind it. If one function is expected to carry the whole system, reporting may continue, but assurance weakens. The organisation becomes better at describing governance than demonstrating it.

Real accountability begins when privacy stops being treated as someone else’s department and starts being governed as a distributed organisational responsibility. That does not diminish the DPO or privacy function. It makes the programme more defensible, the reporting more credible and the organisation better able to stand over its position when it matters.

This article is intended to support the learning covered in Hour 3 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.
