EDPB Annual Report for 2025

This article accompanies Hour 1: Global Privacy Law Updates in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What the EDPB’s 2025 Annual Report Means for Organisations

The European Data Protection Board’s 2025 Annual Report is one of the clearest indications available of where European data protection regulation is moving in practice. Read alongside the Helsinki Statement on enhanced clarity, support and engagement, it shows an EDPB focused not only on consistency and enforcement, but also on making GDPR compliance more workable in an increasingly complex digital regulatory environment.

That matters because 2025 was not simply another year of GDPR guidance. It was a year in which the EDPB responded to a substantially more crowded regulatory landscape, with data protection increasingly intersecting with the Digital Services Act, the Digital Markets Act, the AI Act, competition law, adequacy decisions, procedural reform and proposals for simplification of regulatory obligations.

For organisations, the practical significance is straightforward. GDPR compliance can no longer be approached as a standalone legal exercise. Nor is it sufficient to rely on policies, privacy notices and internal guidance alone. The EDPB’s 2025 work points towards a model of compliance that is more integrated, more operational and more explicitly concerned with clarity, dialogue, consistency and practical implementation.

That is particularly relevant for in-house DPOs, compliance leads, legal teams and senior management. The deeper value of the report lies not only in what the EDPB did, but in what those activities suggest about how organisations are increasingly expected to govern privacy in practice.

The Helsinki Statement is more important than it first appears

The single most important framing point in the 2025 report is the Helsinki Statement, adopted on 2 July 2025. The Statement commits the EDPB to new initiatives to facilitate easier GDPR compliance, strengthen consistency, deepen stakeholder dialogue and develop stronger cross-regulatory cooperation in the evolving digital landscape. It also makes clear that these initiatives are intended in particular to support micro, small and medium organisations, enable responsible innovation and reinforce competitiveness in Europe.

This is not a retreat from strong privacy standards. The Statement expressly frames its approach as “a fundamental rights approach to innovation and competitiveness”. That formulation matters. The EDPB is not saying that privacy needs to give way to innovation. It is saying that innovation and competitiveness should be supported through a clearer and more usable regulatory environment, while fundamental rights remain central.

The Annual Report shows that this was not just aspirational language. The Board sought practical feedback from stakeholders on which templates organisations would find most useful, committed to publicly reporting the outcomes of consultations, and pushed forward work on more practical and more accessible guidance formats. The report is also explicit that the EDPB wants its guidance to be clearer, more practical and easier to understand, and that it has updated internal working methods accordingly.

The most useful illustration is the “Six months of progress since the Helsinki Statement” section on page 13. That timeline shows progress in four broad areas: making GDPR easier, improving consistency and enforcement, strengthening stakeholder dialogue and enhancing cross-regulatory cooperation. By December 2025 the EDPB had already produced internal guidance to improve the clarity and usability of its own outputs, held a stakeholder event on anonymisation and pseudonymisation, and endorsed joint DMA/GDPR guidance with the European Commission. It also set out a pipeline for 2026 that included a DPIA template, a common data breach notification template, a form to signal inconsistencies between national and EDPB guidance, and further joint AI/GDPR guidance.

This is worth taking seriously. Annual reports often describe activity after the event. Here, the EDPB is also signalling how it intends to work differently going forward.

In practice, many organisations do not primarily struggle because GDPR obligations are unclear in theory. They struggle because those obligations must be applied across real systems, suppliers, digital products, service environments, business timelines and governance structures. That is particularly evident where:

  • the privacy function is small
  • operational teams are moving quickly
  • procurement or product choices are made before privacy analysis is complete
  • several legal frameworks now apply to the same activity
  • the organisation needs something more practical than a long legal memo

One of the more useful aspects of the Helsinki Statement is that it reflects a growing regulatory recognition of this operational reality. Easier compliance, in this context, does not mean lower standards. It means guidance that can be translated into actual organisational behaviour more effectively.

One practical point worth adding is that simplification is only valuable if it improves judgement. Templates, summaries and checklists can be extremely useful, but only if they help organisations ask the right questions earlier and more consistently. Used badly, they can become a substitute for thinking. Used well, they are often what allows smaller or overstretched teams to make better decisions in time.

Regulatory direction: The EDPB is actively shifting towards more practical, usable and implementation-focused compliance support, while maintaining a fundamental rights-based approach. This suggests that privacy governance should increasingly be built around operational clarity and usable controls rather than documentation alone.

GDPR now needs to be understood within a broader digital regulatory landscape

A major theme in the report is the EDPB’s growing role in clarifying how GDPR interacts with other EU digital laws. The foreword states that the rapid expansion of the EU’s digital regulatory framework has added complexity to the data protection ecosystem and that regulators now have a responsibility to clarify the interplay between data protection rules and other digital laws, and to ensure legal certainty and consistency.

This is an important shift in emphasis. GDPR is not being displaced, but it is increasingly being interpreted as part of a wider digital rulebook. The report gives several concrete examples. In 2025, the EDPB:

  • adopted Guidelines 3/2025 on the interplay between the DSA and the GDPR
  • endorsed its first joint guidelines with the European Commission on the interplay between the DMA and the GDPR
  • continued work with the Commission and the AI Office on guidance addressing the interplay between the AI Act and EU data protection laws
  • adopted a position paper on the interplay between data protection and competition law

These are not merely institutional exercises. They indicate the kinds of legal and governance issues that organisations increasingly need to handle in joined-up ways.

The DSA/GDPR guidelines, for example, are said to address how GDPR principles and safeguards apply to notice-and-action mechanisms, recommender systems, transparency of advertising, deceptive design patterns, and privacy and safety protections for minors, including prohibitions on certain forms of profiling-based advertising. The DMA/GDPR guidance addresses specific choice, consent, data combination, portability and other obligations affecting gatekeepers, business users and individuals.

That is highly relevant even for organisations that are not gatekeepers or major platforms. The broader point is that privacy can no longer be assumed to sit on a separate compliance track. Product design, interface choices, ad-tech, platform functionality, AI deployments and user account models increasingly need to be understood across multiple legal frameworks.

In practice, the challenge is often not doctrinal but organisational. Different teams tend to own different parts of the problem:

  • privacy or legal may own GDPR
  • product or engineering may own user journeys and platform functionality
  • digital teams may own DSA or consumer-facing processes
  • AI or innovation teams may own model adoption
  • commercial teams may shape onboarding, consent journeys or personalisation features

Where these functions do not meet early enough, organisations can find themselves making progress in one area while creating avoidable exposure in another.

This can happen in very ordinary ways. An interface change designed to improve conversion may create a consent issue. A safety or moderation feature may affect rights or profiling analysis. A DMA-style data portability design may have implications for lawful basis, minimisation or transparency. A recommender system or advertising tool may need to be assessed through both DSA and GDPR lenses.

A distinctive point from practice is that many governance problems are no longer “privacy-only” problems. They are governance coordination problems. The relevant question is often less “what does GDPR say?” and more “who in the organisation is joining up privacy with the rest of the digital legal environment?”

That is particularly important in organisations dealing with higher-risk user groups, digital service delivery, education, health-related environments, children’s data, or AI-enabled decision support.

Cross-regulatory risk: GDPR compliance increasingly overlaps with other digital regulation, including the DSA, DMA and AI Act. Organisations should expect privacy, product and regulatory governance to become more integrated rather than more separate.

The guidance priorities are practical and implementation-focused

The EDPB’s 2025 guidance agenda is strikingly practical. In addition to the interplay guidance, the Board adopted:

  • guidelines on pseudonymisation
  • guidance on blockchain technologies
  • recommendations on account creation for e-commerce websites

The choice of topics is revealing. These are not primarily abstract questions about doctrine. They are questions about how organisations build systems, choose safeguards, structure services and minimise unnecessary friction or over-collection.

The pseudonymisation guidelines explain the role of pseudonymisation as a safeguard that may be appropriate and effective for meeting obligations under the GDPR, particularly in relation to data protection principles, privacy by design and default, and security. They also analyse the technical and organisational safeguards needed to preserve confidentiality and avoid unauthorised identification.

The blockchain guidance is similarly practical. It addresses architecture choices, role allocation, data minimisation, storage approaches and the handling of transparency, rectification and erasure in blockchain environments. The report states clearly that, as a general rule, storing personal data on a blockchain should be avoided where it conflicts with GDPR principles.

The recommendations on account creation for e-commerce websites are perhaps the most visibly user-oriented. The EDPB states that, as a general rule, users should be able to make purchases without being required to create an account, and that guest checkout or voluntary account creation should be offered wherever possible, with mandatory account creation only justifiable in limited cases such as subscription-based services or access to exclusive offers.

This matters because it illustrates a wider regulatory tendency. The EDPB is increasingly engaging with the practical design choices that shape data processing, not only the downstream legal justifications for them.

In practice, these are exactly the kinds of issues that tend to surface late:

  • is this safeguard actually effective?
  • is this architecture compatible with rights?
  • do we genuinely need persistent accounts?
  • are we collecting data because it is necessary, or because it makes the business model simpler?

We often see privacy teams brought in only after these choices have substantially hardened. At that point, the conversation becomes one of damage limitation rather than design improvement.

A useful perspective from practice is that privacy risk often becomes materially easier to manage where the organisation treats privacy analysis as part of design and procurement, rather than as a review stage after implementation decisions are already largely fixed. This is particularly relevant in outsourced digital services, AI-enabled workflows, health and care settings, education environments, public service delivery and products aimed at or accessible by children.

Operational design lesson: Recent EDPB guidance priorities suggest that privacy risk is increasingly being assessed through design choices, architecture, account models, safeguards and minimisation decisions. Early-stage design governance is therefore becoming more important.

AI moved from policy discussion to supervision and methodology

The EDPB’s 2025 report confirms that AI is no longer a peripheral policy topic. It is now part of mainstream supervisory and methodological work.

The report places AI at the centre of several activities:

  • ongoing joint work with the Commission and the AI Office on GDPR/AI Act interplay guidance
  • Support Pool of Experts projects on AI supervision and LLM privacy risks and mitigations
  • training curricula on AI security, AI compliance and secure AI systems handling personal data
  • an EDPB bootcamp on AI and AI auditing involving 50 participants from 24 countries
  • extension of the ChatGPT taskforce into a broader Taskforce on Generative AI Enforcement

The LLM risk project is especially significant. The report describes it as offering a comprehensive risk management methodology and practical mitigation measures for common privacy risks in LLM systems, illustrated through use cases such as customer service chatbots, student progress support tools and AI assistants for travel and schedule management.

This indicates a maturing supervisory posture. The EDPB is moving beyond general debate about AI and toward more structured evaluation of how AI systems are built, trained, deployed and audited.

In practice, AI adoption continues to outpace governance in many organisations. AI tools are already in use across customer support, analytics, drafting, workflow automation, education, compliance, HR, healthcare-adjacent settings and digital service delivery. But the visibility of those uses, and the consistency of governance around them, is often uneven.

Common issues include:

  • incomplete mapping of AI use
  • weak understanding of what personal data is involved
  • insufficient lawful basis analysis
  • underdeveloped vendor due diligence
  • unclear treatment of downstream or model-training implications
  • low visibility of AI risks at board or executive level
  • inconsistent decisions about when a DPIA, LIA or broader governance review is required

A distinctive point from practice is that AI risk often becomes most acute not where AI is technically most advanced, but where it is adopted most casually. Embedded AI features, low-friction productivity tools, trial deployments and vendor-enabled features can all create governance blind spots precisely because they do not always look like major AI projects.

From a practical DPO perspective, that means ordinary governance disciplines matter a great deal:

  • identifying where AI is already in use
  • understanding what personal data is involved
  • checking what vendors are doing with that data
  • escalating material use cases to proper review
  • updating records, notices and risk assessments where appropriate

AI governance: The EDPB’s 2025 work confirms that AI is now part of mainstream supervisory activity. Organisations should assume that AI-enabled processing requires structured privacy governance, clear accountability and proportionate escalation to senior management where risk or impact is significant.

Enforcement is becoming more thematic, better supported, and more methodical

Although the 2025 report gives more prominence to clarity and stakeholder dialogue, enforcement remains central. The “Supporting Enforcement” chapter shows that the EDPB continues to invest in the practical infrastructure of consistency and enforcement.

The Coordinated Enforcement Framework remains one of the clearest examples. In January 2025, the EDPB adopted a report on implementation of the right of access, based on coordinated national actions carried out in 2024. For 2025, the Board selected the implementation of the right to erasure as the focus of its coordinated action, with 32 DPAs participating and 764 controllers responding across Europe.

The Support Pool of Experts also remains significant. In 2025, the EDPB published the deliverables of seven projects launched in 2024 and launched nine new projects, including work on AI supervision, LLM privacy risk, training curricula, the digital euro, website auditing tools and AI auditing bootcamps.

The report also records that in 2025:

  • 414 cross-border cases were created in the EDPB case register
  • 1,299 One-Stop-Shop procedures were triggered
  • 572 of those led to final decisions

At national level, DPAs issued a total of €1.145 billion in fines, with France and Ireland accounting for the largest totals at €486.854 million and €530.773 million respectively.

In practice, organisations often pay close attention to large fines but less attention to how supervisory capability is evolving. That can be a mistake. Coordinated actions, expert tools, audit methodologies and cross-border procedures often signal where regulators are becoming more consistent and more prepared. If the EDPB is investing in areas such as access rights, erasure, AI supervision, website auditing and methodological support, that is often a better indicator of where scrutiny is deepening than any single headline decision.

A useful perspective from practice is that mature organisations tend to respond better to thematic regulatory signals than to isolated enforcement headlines. If a regulator is building tools and methodologies around a topic, it usually means expectations are becoming more structured. That is a strong reason to review those areas proactively rather than waiting for a specific complaint or incident.

Enforcement maturity: Enforcement is becoming more thematic and methodical, supported by coordinated actions, expert methodologies and cross-border processes. This suggests that organisations should pay attention not only to major fines, but also to the areas where supervisory capability is clearly deepening.

The EDPB is trying to make guidance easier to consume and use

A major theme running through both the Secretariat section and the core activities section is accessibility of guidance. The EDPB explains that in 2025 it intensified efforts to make GDPR information more accessible to a wider, non-technical audience, using clearer and more straightforward language, in line with the Helsinki Statement and the 2024–2027 Strategy.

The Board also published additional summaries of guidelines in 2025, covering pseudonymisation, personal data breaches, blockchain technologies, right of access and the DSA/GDPR interplay. It separately consulted on which ready-to-use templates organisations would find most useful, including privacy notices and RoPA templates.

This should not be dismissed as mere communications work. It reflects a deeper point: if guidance is to improve compliance in practice, it needs to be understandable, adaptable and capable of being used by people who are not specialist privacy lawyers.

One of the recurring barriers to operational privacy maturity is not resistance. It is translation. Many organisations have committed and capable teams, but they need guidance that can be turned into workflow, process design, internal controls and practical instructions. Where guidance remains too abstract, organisations tend either to over-engineer or under-implement. More usable formats can materially improve the ability of internal DPOs and compliance teams to engage productively with operations, procurement, HR, IT, education, care or service teams that do not work in privacy full-time.

A useful lesson here is that internal privacy support should often mirror the direction the EDPB itself is taking: shorter supporting materials, targeted guidance, templates, checklists and summaries can strengthen compliance when used to support sound judgement rather than replace it.

Guidance usability: The EDPB is increasingly prioritising guidance that is concise, practical and usable by non-experts. Internally, this suggests value in translating privacy requirements into clearer operational tools rather than relying solely on long-form legal documentation.

Stakeholder dialogue is becoming a more formal part of the regulatory model

The EDPB’s 2025 report gives unusual prominence to consultation and stakeholder dialogue. The Board launched five public consultations in 2025, including on pseudonymisation, blockchain, DSA/GDPR interplay, DMA/GDPR interplay and e-commerce account creation, and separately sought views on which templates organisations would find most useful.

The report also describes a stakeholder event on anonymisation and pseudonymisation involving over 100 participants from sector associations, organisations, companies, law firms and academia. In line with the Helsinki Statement, the EDPB says it will systematically publish reports on input received during such stakeholder events.

This matters because it suggests that stakeholder engagement is becoming part of how the EDPB builds legitimacy, improves practicality and strengthens consistency.

Organisations often assume that guidance is something regulators issue to them, rather than something they can shape through consultation, response and engagement. The EDPB’s 2025 approach suggests a more participative model, particularly where implementation questions matter. That may be especially relevant for sectors where data protection issues arise in distinctive operational settings, such as healthcare, education, charities, financial services, public bodies, AI-enabled services, and work involving children or vulnerable individuals.

Organisations with recurring compliance challenges should pay more attention to consultation opportunities. There is often value in engaging early where draft guidance touches lived operational issues. This is one of the clearest ways to help ensure that final guidance reflects practical realities.

Regulatory engagement: The EDPB is increasingly building consultation and stakeholder dialogue into how guidance is developed. Organisations in more regulated or higher-risk sectors may benefit from monitoring these processes more actively as part of horizon scanning and policy input.

International adequacy and global engagement remain active governance issues

The report also shows continuing EDPB attention to adequacy, international engagement and cross-border consistency. In 2025, the Board provided five adequacy-related opinions concerning the UK, Brazil and the European Patent Organisation. It also continued international engagement through fora such as the G7 DPA Roundtable and the Global Privacy Assembly, and held a second meeting with DPAs from countries and organisations with EU adequacy decisions.

This is useful context because it reinforces that international data governance remains a live topic, not a settled one. The opinion on Brazil, for example, positively noted substantial alignment in many respects, while still inviting the Commission to assess further issues such as onward transfers, secrecy-related limits and the treatment of public authority access in criminal law contexts.

Many organisations still treat international transfer compliance as a one-time legal implementation task. The EDPB’s ongoing adequacy work suggests a more dynamic reality. International data governance continues to evolve, particularly where cloud services, AI providers, outsourced support, global vendor chains or disclosure scenarios are involved. We also see that international governance issues often surface indirectly through procurement, system configuration, support models, managed services, AI tooling or incident response rather than through a discrete “international transfer” project.

International data governance problems often do not emerge as abstract transfer-law questions. They emerge as operational questions: who can access the data, from where, for what purpose, under what contractual structure, and with what onward-use implications.

International data governance: Adequacy, cross-border data use and onward transfer conditions remain active supervisory issues. Organisations should periodically review whether international data governance is still aligned with their actual service, supplier and technology footprint.

What this means for organisations now

Taken together, the EDPB’s 2025 report shows a regulator trying to do several things at once:

  • preserve strong protection of fundamental rights
  • make compliance easier in practice
  • reduce fragmentation across Europe
  • help organisations navigate overlapping digital laws
  • build better tools for supervision and enforcement
  • respond more concretely to AI and emerging technologies

That is a significant development. It means the European privacy framework is not simply becoming more demanding. It is also becoming more operational. For organisations, the practical implications are clear.

First, GDPR governance can no longer be isolated from broader digital compliance. The interplay with the DSA, DMA, AI Act and competition law is now part of the practical compliance picture.

Secondly, privacy maturity increasingly depends on usable implementation, not just formal documentation. The EDPB’s emphasis on templates, summaries, consultations and practical guidance reflects that reality.

Thirdly, AI governance should now be treated as part of mainstream privacy governance and not as a specialist or experimental side-stream.

Fourthly, organisations should pay closer attention to the EDPB’s enforcement-support work. Coordinated actions, expert methodologies and practical supervisory tools often indicate where scrutiny is becoming more structured.

Finally, organisations should expect privacy governance to be judged increasingly through its operational reality: how systems are designed, how rights are handled, how controls are evidenced, how risks are escalated and how legal obligations are translated into day-to-day practice.

What to take back to your organisation

A focused internal review after reading the 2025 report could reasonably include the following questions:

  • Do we understand where GDPR overlaps with other digital regulation that affects us?
  • Are our privacy controls being applied early enough in design, procurement and AI adoption?
  • Is our internal privacy guidance practical enough for non-specialist teams to use?
  • Are there areas where we are relying on policy statements without enough operational evidence?
  • Is board and executive visibility strong enough in digital and privacy risk areas that are changing quickly?

Conclusion

The EDPB’s 2025 Annual Report is not simply a record of activity. It is a statement of regulatory direction. Its core message is that data protection now operates in a more complex and interconnected regulatory environment, and that this complexity should be met with more practical guidance, stronger consistency, deeper stakeholder dialogue and more usable compliance tools, not weaker standards. For organisations, the challenge is no longer simply understanding GDPR in principle. It is integrating privacy governance into the wider reality of digital regulation, AI deployment, product design, supplier management, operational accountability and senior decision-making.

For many organisations, that requires a shift:

  • from privacy as documentation to privacy as governance
  • from siloed compliance to cross-functional coordination
  • from abstract policy to practical implementation
  • from reactive advice to earlier design-stage involvement

That is the deeper significance of the EDPB’s 2025 report. It is not only about what the Board did in 2025. It is about the kind of privacy governance environment European organisations are now being expected to build.

This article is intended to support the learning covered in Hour 1 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

DPC and EDPB Annual Reports for 2024

This article accompanies Hour 1: Global Privacy Law Updates in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What Organisations Should Take Away

The 2024 annual reports published by the Data Protection Commission (DPC) and the European Data Protection Board (EDPB) are useful for far more than tracking enforcement statistics. Read together, they provide a practical picture of where supervisory attention is being directed, which failures continue to recur, and what organisations should now be reviewing in their own privacy governance.

That is particularly important in Ireland. The DPC sits at the intersection of domestic complaints, breach reporting, supervision and cross-border regulation. The EDPB, in turn, continues to shape a more consistent European approach to enforcement, guidance and technological change. When those two layers are read together, the result is not simply a summary of what happened in 2024. It is a fairly clear indication of what organisations should expect to matter in 2025 and beyond.

For DPOs, compliance leads, legal teams and senior management, the real value of these reports lies in the signals they send. Those signals are not confined to fines. They touch the recurring weaknesses that show up in access rights, breach handling, operational discipline, role clarity, AI governance, board visibility and the overall ability of an organisation to evidence accountability.

In this article, we look at the most significant themes emerging from the DPC and EDPB annual reports for 2024 and, for each one, we set out what we commonly see in practice when organisations try to operationalise these obligations.

Enforcement remains active, but the more useful lesson is what sits behind it

The DPC’s 2024 Annual Report records 11 finalised inquiry decisions and over €652 million in administrative fines. It also shows a regulator dealing with substantial operational volume: over 32,000 contacts, 11,091 new cases processed, and 10,510 cases resolved in 2024 alone. By year end, 89 statutory inquiries remained on hand, while four large-scale inquiries were concluded during the year.

Those figures matter, but for most organisations the more useful lesson is not the amount of fines in isolation. It is what the DPC’s workload tells us. Data protection compliance remains a live governance issue. The DPC’s work in 2024 ranged across cross-border inquiries, domestic breach cases, CCTV, children’s data, AI model development, biometrics, sensitive health data, supervision activity, and legislative engagement. That is a broad and demanding field of regulatory attention.

The DPC’s foreword is also worth noting for tone as much as substance. It emphasises fairness, consistency and transparency, but it also states clearly that while organisations have flexibility in how they structure compliance, they remain accountable for the choices they make to both individuals and regulators. In other words, flexibility in approach is not a substitute for evidence of control.

The EDPB annual report reinforces that broader direction of travel. Its 2024 – 2027 strategy is built around four pillars: advancing harmonisation and promoting compliance, reinforcing a common enforcement culture, addressing technological challenges, and enhancing the EDPB’s global role. That is a useful frame for organisations because it shows that data protection authorities are not only reacting to specific infringements. They are also building a more structured, more consistent, more cross-border regulatory environment.

In practice, one of the recurring difficulties is that organisations still tend to treat regulatory developments as something that happens externally, rather than as indicators of how their own internal arrangements are likely to be assessed. A fine against a large platform is often viewed as distant from the experience of an ordinary Irish organisation. The more useful reading is usually the opposite. The issue is not whether an organisation will face the same factual pattern as a large-scale inquiry. The issue is whether the same underlying governance weaknesses are present in a smaller, less visible form.

We also often see organisations underestimate how much the DPC’s broad activity matters outside formal enforcement. Complaints handling, supervision, public guidance, case studies and legislative input all tell organisations something about regulatory expectations. If the privacy function only looks at headline fines, it will miss the more practical indications of what the regulator continues to see going wrong.

For your own organisation, a sensible first step is to stop reading annual reports as backward-looking summaries and to treat them instead as practical indicators of:

  • which issues continue to generate friction
  • which operational weaknesses are still common
  • which areas are likely to receive more structured attention in the near future

For boards and senior management, the key point is that privacy remains a current governance issue and not simply a legal housekeeping topic.

Regulatory activity remains substantial. The DPC’s 2024 Annual Report records 11 finalised inquiry decisions and over €652 million in fines, alongside 11,091 new cases processed and 10,510 cases resolved. The broader implication is that privacy compliance continues to be an active governance and accountability issue rather than a background legal function.

Access rights remain one of the clearest indicators of programme weakness

Access rights remain a central theme in the DPC’s report. Subject access requests account for the largest share of national complaints, representing 34% of all complaints received, followed by fair processing at 17% and the right to erasure at 14%. By the end of 2024, the DPC had received 914 new complaints solely related to the right of access and concluded 904.

The DPC also explains why so many of these complaints arise. In many cases, organisations either fail to respond within the required timeframe or apply redactions and exemptions without giving sufficiently clear explanations. The report is explicit that it is not enough merely to list an exemption or cite legislation. The reason the exemption is being applied should be clearly explained, and those decisions should be documented.

The DPC case studies make this even more concrete. A hospital provided the requested data only after DPC intervention, despite the urgency of the matter. A financial services provider withdrew its reliance on restrictions and released previously withheld personal data only after the DPC challenged the legal basis for withholding it. Another organisation initially over-redacted records and later re-released them in partially redacted format after engagement with the DPC on the balancing exercise required by Article 15(4) GDPR.

The EDPB report places this in a wider European context. One of its 2024 highlights was the launch of a coordinated enforcement action on the right of access in February 2024. That is a useful signal. Access rights are not just a recurring domestic irritation; they remain a live topic of coordinated supervisory interest across Europe.

In practice, access rights often expose broader weaknesses in privacy governance. The legal right itself is not usually the main difficulty. The recurring problem is the organisation’s operational ability to comply consistently. Common issues include:

  • uncertainty over who owns the request internally
  • incomplete searches across mailboxes, shared drives or business systems
  • over-reliance on individual employees who “know where the data is”
  • poorly documented decisions on exemptions and redactions
  • difficulty explaining why certain information has been withheld
  • delays caused by weak coordination between legal, HR, operations and IT

A common issue is that organisations assume DSAR handling is mainly about retrieval. In practice, it is equally about judgement, explanation and evidenced reasoning. Where that reasoning is weak or overly informal, the response often becomes difficult to defend.

Another recurring issue is that access rights are treated as episodic rather than systemic. If there is no repeatable internal workflow, organisations end up re-solving the same problem each time a request comes in. That is one reason DSAR performance often becomes such a useful indicator of the maturity of a privacy programme more generally.

Access rights are often one of the first places where weak accountability becomes visible. So what can you do? Review whether your access request process is genuinely operationalised:

  • Is ownership clear?
  • Can you demonstrate adequate searches?
  • Are exemptions and redactions documented properly?
  • Can the reasoning be explained clearly to the individual and, if necessary, to the DPC?

Access rights remain a key area of exposure. Subject access requests account for 34% of complaints received by the DPC. This makes DSAR handling a practical indicator of whether privacy governance is functioning effectively, particularly in relation to timelines, searches, exemptions, redactions and decision-making quality.

Breach trends still point to ordinary operational weakness

The DPC received 7,781 valid breach notifications in 2024, an 11% increase on the previous year. Of those notifications, 81% were concluded by year end. The most important practical point, however, lies in the cause of those breaches. Fifty per cent of notified cases arose because correspondence was sent to the wrong recipient.

The DPC’s breach chapter develops that point in more detail. The highest category of breaches continues to involve unauthorised disclosures affecting single individuals or small groups, with poor operational practices and human error remaining prominent. The detailed breakdown shows:

  • 32% postal material to incorrect recipient
  • 14% email to incorrect recipient
  • 10% accidental or unauthorised alteration
  • 8% accidental loss or destruction
  • 5% hacking

This is one of the most useful themes in the annual report because it corrects a common misconception. Many organisations continue to associate privacy incidents primarily with cyber compromise. The DPC’s figures show that ordinary administrative failures remain an equally important source of regulatory exposure.

The DPC’s case studies reinforce this. In the broadcasting-sector phishing case, an employee was deceived into disclosing credentials, leading to unauthorised access to personal and special category data, with the DPC pointing to improved filters, training and revised procedures as part of the response. In another case, a third-level institution published non-anonymised survey data, leading to review of internal reporting processes and stronger controls. In another general case study, the DPC addressed the forwarding of work emails and special category data to a personal email account, again highlighting the importance of both technical and organisational controls.

In practice, many organisations have incident notification procedures but less mature arrangements for reducing repeat incidents over time. The response process exists, but the learning loop is weaker. Commonly observed patterns include:

  • good incident logging, but weak analysis of recurring themes
  • legal and privacy teams involved late rather than early
  • corrective actions agreed in principle, but not tracked to closure
  • overly narrow focus on whether a breach is reportable, rather than why it happened
  • underinvestment in very ordinary controls, especially around correspondence, manual handling, exports and publication

A recurring issue is that operational teams may not see privacy incidents as part of governance. They may be treated as isolated mistakes rather than symptoms of a process or control issue. That makes repeat incidents much more likely.

Another common difficulty is that breach reporting to senior management can be descriptive rather than managerial. An incident is noted, but the question of whether similar incidents are reducing over time is not always asked clearly enough.

Breach management is not just about notification; it is also about demonstrable reduction in repeated failure. A practical takeaway is to review your incident landscape in concrete terms:

  • What are your most common breach types?
  • Are the same errors recurring?
  • Which ones are administrative and therefore more controllable?
  • Can you show that lessons learned have led to specific changes?

Breach trends remain heavily operational. The DPC received 7,781 valid breach notifications in 2024, with 50% arising from correspondence being sent to the wrong recipient. This supports continued focus on administrative controls, checking procedures, staff discipline and repeat-incident reduction rather than relying solely on breach notification workflows.

The case studies show that basic accountability failures remain common

The value of the DPC’s case studies is that they move the discussion away from abstract trends and show what organisations are still getting wrong in concrete terms. They reveal repeated issues in timing, documentation, explanation, role clarity and ordinary operational judgement.

Access request case studies show delayed responses, weak searches, poor handling of exemptions, and over-redaction. The controller/processor case study shows the continuing importance of properly identifying roles and understanding who must actually respond to a rights request. The personal-email case study illustrates how ordinary staff behaviour can create significant exposure, especially where special category data is involved. The rectification case study shows how customer service issues can quickly become privacy complaints where inaccurate personal data causes practical harm.

These are not exotic scenarios. That is precisely why they are useful. They remind organisations that a significant part of data protection compliance still comes down to the quality of routine governance and service delivery.

In practice, one of the recurring mistakes is to assume that serious data protection risk only arises in large-scale or technologically complex contexts. Very often, the issue is much more ordinary:

  • a request is not picked up on time
  • the search is incomplete
  • the explanation is weak
  • the agreement is unclear
  • the wrong document is shared
  • the control exists on paper but not in day-to-day behaviour

We also see organisations separate service problems from privacy problems too sharply. In reality, the boundary is often thin. Inaccurate data, weak complaint-handling, poor customer correspondence or unclear internal ownership can all become data protection issues very quickly when they affect rights or outcomes.

You can use the DPC’s case studies as a practical audit tool:

  • Which of these failures could happen here?
  • Which already have?
  • Are our controls strong enough to prevent them?
  • Are our staff sufficiently clear on what to do when something goes wrong?

EU coordination is becoming more important, not less

The DPC’s annual report reflects an Irish regulator operating in an increasingly coordinated European environment. The EDPB’s report makes that development more explicit.

The EDPB’s 2024 – 2027 strategy focuses on harmonisation, common enforcement culture, technological challenges and cross-regulatory cooperation. It also highlights the expanding responsibilities of data protection authorities in the context of the AI Act, DMA, DSA, Data Act and other digital frameworks. The Board notes that it issued 28 consistency opinions in 2024, including eight under Article 64(2), designed to address matters of general application or major cross-border relevance.

The report also underlines the role of coordinated enforcement. The 2024 highlights include a coordinated enforcement report on the role of DPOs, the launch of a coordinated enforcement action on the right of access, and the adoption of Opinion 28/2024 on AI models in December 2024.

This is important because it means organisations should increasingly assume that core GDPR issues are being understood within a shared European framework. That affects not only multinational businesses. It also affects domestic organisations whose practices touch issues that are receiving coordinated regulatory attention, such as access rights, AI, profiling, children’s data or cross-border services.

In practice, organisations often track DPC developments more closely than EDPB developments. That is understandable, but it can leave a gap in strategic awareness. We often see privacy governance shaped around domestic complaint themes, immediate contractual issues, specific incidents, and/or sector expectations. What can get missed is the extent to which EU-level consistency work shapes the direction of travel. That means organisations sometimes respond to a theme too late, after it has already become part of a broader supervisory agenda.

Another practical issue is that some organisations still assume that “European” developments only matter where large-scale cross-border processing is involved. Increasingly, that is too narrow. If a theme is receiving EDPB-level attention, it often signals a broader expectation of consistency that will eventually affect ordinary organisational practice as well.

Do not read the DPC report in isolation. The more useful questions are:

  • what themes are being reinforced at EU level?
  • where is consistency increasing?
  • what does that suggest about where scrutiny is likely to deepen next?

The regulatory environment is becoming more coordinated across the EU. The EDPB’s 2024 – 2027 strategy focuses on harmonisation, enforcement culture, technological challenges and cross-regulatory cooperation. Organisations should assume that core privacy risks are increasingly being assessed in a more consistent European framework.

AI has moved firmly into mainstream privacy governance

One of the clearest themes in both the DPC and EDPB material is the centrality of AI to current regulatory thinking.

The DPC’s foreword states that regulation of AI model training attracted a great deal of public interest in 2024 and notes that new inquiries were commenced into AI models, biometrics and the security of sensitive health data. Its 2024 timeline records DPC engagement with Meta’s LLM plans, High Court proceedings concerning X’s Grok processing, the launch of an inquiry into Google’s AI model, and the DPC’s request to the EDPB for an Article 64(2) opinion on the use of personal data for development and deployment of AI models.

The EDPB annual report adds the broader European layer. Its foreword explains that the Board adopted an opinion on the use of personal data to train AI models in order to support responsible AI innovation while ensuring protection of personal data and compliance with the GDPR. The same section notes that AI developers can use legitimate interests as a legal basis for model training under certain conditions and that the EDPB set out a structured three-step test to help developers determine lawful use.

This is an important regulatory message. AI is not treated as outside GDPR. Nor is it treated as unlawful by default. It is treated as something that must be governed within ordinary accountability structures, using disciplined analysis rather than assumption.

The EDPB also makes clear that the AI Act and other digital legislation are expanding the responsibilities of DPAs and intensifying cross-regulatory cooperation. For organisations, that means AI governance is likely to become more, not less, integrated with privacy, product, risk and regulatory oversight.

In practice, AI adoption often moves faster than governance. Organisations begin using AI tools, copilots, model-based services or AI-enabled vendors before internal accountability arrangements have caught up. Recurring issues include:

  • uncertainty over lawful basis
  • weak transparency analysis
  • unclear supplier roles and sub-processing chains
  • underdeveloped DPIAs or no DPIA refresh at all
  • insufficient clarity on what personal data is being used, where, and for what purpose
  • treating AI as an innovation topic first and a governance topic second

A common issue is that organisations may have sensible general privacy controls but have not yet adapted them to AI-related realities. For example, vendor diligence may not yet ask the right questions about model training, retention, downstream use or human oversight. Similarly, internal teams may not yet distinguish clearly between use of an AI-enabled tool and development or deployment of an AI system with a materially different risk profile.

What to do? Map where AI is already present:

  • internal tools
  • external vendors
  • product features
  • workflow automation
  • model-assisted decisions

Then ask:

  • is this captured in our privacy governance?
  • has lawful basis been assessed properly?
  • do our notices and internal records reflect reality?
  • do we know what our vendors are doing with personal data?

AI is now part of mainstream privacy governance. Both the DPC and EDPB treated AI model development and deployment as core regulatory topics in 2024. AI-related processing should therefore be governed through existing privacy, risk and accountability structures rather than treated as a separate informal innovation stream.

DPIAs, role clarity and processor accountability remain highly practical issues

Even where the annual reports do not dwell on DPIAs as a standalone theme, they reinforce the wider accountability expectations that make DPIAs and role clarity so important.

The DPC’s access-related case studies show that role confusion still arises, particularly around controller and processor responsibilities. In one case, the DPC accepted that an organisation was acting as a processor and had complied with its obligations by referring the request to the controller, supported by a detailed written agreement setting out roles and instructions. This is a useful reminder that outsourcing does not remove responsibility. Rather, it increases the need for clear role definition and operational coordination.

The DPC’s annual report also shows how much emphasis continues to fall on practical explanations, evidence, and ability to justify decisions. That same logic applies to DPIAs and other risk assessments. It is no longer sufficient to have a template completed somewhere in the project file. The question is whether risk has been assessed at the right time, whether alternatives have been considered, whether decisions can be followed, and whether safeguards are reflected in actual controls.

In practice, role clarity and risk assessment still cause difficulty. DPOs commonly see:

  • processor agreements that exist, but do not really support day-to-day rights handling or breach response
  • unclear internal understanding of who is controller, processor or joint controller in more complex service chains
  • DPIAs drafted too late in the project lifecycle
  • risk assessments that identify issues but do not clearly drive design or operational change
  • mitigations that are described, but not obviously tied to real controls

These issues often become more acute in AI-related, vendor-heavy or fast-moving projects. Where several parties are involved, or where technology adoption is proceeding quickly, the temptation is often to finalise role allocation and risk analysis after the main decisions have already been made.

Useful review points are:

  • whether processor/controller roles are clearly documented and understood
  • whether key agreements support rights handling, incident management and accountability
  • whether DPIAs are being carried out early enough and updated where processing changes
  • whether risk assessments are functioning as decision tools rather than paperwork

The common message is visible, measurable and auditable accountability

Taken together, the DPC and EDPB material points in a common direction. Privacy programmes are increasingly expected to be visible to decision-makers, measurable in practice and capable of withstanding scrutiny.

The DPC’s values include accountability, fairness, consistency and transparency. The EDPB strategy places emphasis on harmonisation, enforcement, practical guidance and technological readiness. Both sets of material suggest that regulators are looking beyond the existence of policies. The more important question is whether organisations can show how privacy governance actually works.

That means showing:

  • how rights are handled
  • how incidents are learned from
  • how high-risk processing is assessed
  • how senior management is informed
  • how improvements are tracked
  • how accountability is evidenced over time

In practice, board and executive visibility remains uneven. Many organisations do report privacy issues upwards, but the reporting is not always sufficiently management-focused. Common weaknesses include:

  • narrative-heavy reporting with limited metrics
  • updates that describe activity but do not clearly show trend or risk
  • breach reporting without repeat-incident analysis
  • rights reporting without process-health indicators
  • DPO reporting lines that technically exist but do not create real organisational visibility

A recurring issue is that privacy becomes visible to leadership after an incident, but less visible in advance of one. That makes it harder for the organisation to demonstrate proactive accountability.

In your organisation, ask the following:

  • What does the board actually see on privacy?
  • Are privacy metrics meaningful and decision-useful?
  • Can the organisation show trends, not just isolated events?
  • Is accountability visible before regulatory or reputational issues arise?

Current expectations increasingly favour privacy programmes that are visible, measurable and auditable. Regulators are looking beyond policies to whether organisations can show functioning governance, practical control, clear ownership and evidence of remediation.

Summary

The DPC and EDPB annual reports are useful not only because they describe the last year of regulatory activity. They are useful because they show where pressure continues to build and what kinds of organisational weakness remain most likely to matter.

Many of the issues that continue to generate complaints, breaches and supervisory attention are not new. They are recurring weaknesses in access handling, explanation, operational discipline, accountability, role clarity and governance visibility. What is changing is the environment in which those weaknesses are being judged. It is becoming more coordinated, more structured and more technologically alert.

For many organisations, the real challenge is not understanding GDPR in principle. It is embedding that understanding into ordinary governance, processes, decision-making and reporting. That is what the DPC and EDPB annual reports help to illuminate, and that is why they remain worth reading carefully.

This article is intended to support the learning covered in Hour 1 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Transfer Impact Assessments in Practice

This article accompanies Hour 2: Cross-Border Transfers in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What DPOs Should Actually Be Looking For

A Transfer Impact Assessment (or Transfer Risk Assessment – TRA) is the point at which transfer law stops being abstract and becomes a real organisational decision. In theory, the legal position may look simple enough: identify the transfer, identify the transfer tool, and then consider whether additional safeguards are needed. In practice, that is not where most TIAs fail. They usually fail earlier and more quietly.

They fail because the underlying transfer scenario has not been analysed properly. They fail because the organisation does not know enough about the recipient’s real operating model. They fail because the assessment of the foreign jurisdiction is generic rather than specific. And they fail because supplementary measures are described in broad compliance language without asking whether they materially change the exposure.

That is why a TIA matters. A TIA is not just a document to satisfy Schrems II. It is where the organisation has to demonstrate that it understands what is happening, what legal and practical risks arise in the recipient jurisdiction, and why the transfer remains supportable. For DPOs, this is one of the most revealing areas of privacy practice. A strong TIA usually points to stronger governance, better supplier oversight and more mature internal coordination. A weak TIA often points to the opposite.

What a TIA is actually trying to determine

A TIA is often reduced to a single question: “Can we still transfer the data using SCCs?” That question is too narrow. A useful TIA is trying to determine, in sequence:

  • what the transfer scenario actually is;
  • who the importer is, and in what role;
  • what data is involved, and in what form;
  • what the legal and practical position is in the destination jurisdiction;
  • whether public authority access, surveillance powers, redress and oversight could undermine the level of protection expected under GDPR;
  • whether supplementary measures materially reduce that risk;
  • and whether the organisation can genuinely stand over the conclusion it has reached.

That is why the TIA process has to be disciplined. It starts by identifying the country or countries involved and gathering the relevant documentation, including applicable legislation, reference materials such as DataGuidance analyses, agreements and internal checklists, before moving section by section through the template and analysing each part against the evidence provided. The template itself should break the work into the right components: transfer overview, receiving jurisdiction, transfer details, existing safeguards, alternatives, proportionality, law and practice in the recipient country, supplementary measures, probability assessment and approval. That structure is not just administratively tidy. It reflects the underlying legal logic.

This is also consistent with the EDPB’s post-Schrems II approach. The EDPB Recommendations 01/2020 on supplementary measures remain the central official guidance for organisations trying to assess whether an Article 46 transfer tool remains effective in light of the law and practice of the destination country. The EDPB Guidelines 05/2021 on the interplay between Article 3 and Chapter V are equally important because they help determine whether the arrangement is even a restricted transfer under Chapter V in the first place. In practice, that initial classification matters more than many organisations realise. A TIA that begins with the wrong transfer analysis is already weakened before it gets to the foreign-law questions.

Start with the facts: the transfer analysis must be right before the TIA can be right

One of the most common problems in transfer work is that very different scenarios are collapsed into a single generic category called “international transfer.” That may be administratively convenient, but it is analytically weak.

An employee temporarily working from a third country does not necessarily raise the same issues as a third-country contractor engaged to access internal systems. A cloud platform hosted in the EEA is not the same as a connected service that extracts data from that platform and processes it in its own US environment. Occasional remote support access is not the same as routine privileged administrative access. Pseudonymised data used for analytics is not the same as a readable HR or health dataset accessible in clear text. These distinctions matter because they shape:

  • whether Chapter V is engaged;
  • which SCC module is relevant, if SCCs are used;
  • the sensitivity and exposure of the data;
  • the significance of public authority access risk in the destination jurisdiction;
  • and the practical effect of any technical or organisational safeguards.

For DPOs, the practical lesson is straightforward: a TIA should not begin with the transfer tool. It should begin with the transfer scenario. That means identifying:

  • who the exporter is;
  • who the importer is;
  • whether the importer is acting as controller, processor, sub-processor or contractor;
  • whether the data is stored, accessed remotely, downloaded, or transferred onward;
  • whether the data is ordinary personal data, special category data, criminal data, children’s data, or otherwise particularly sensitive;
  • whether the data is intelligible in the destination jurisdiction;
  • and whether the provider’s sub-processing, support or AI functionality changes the picture.

A good template should capture exporter/importer roles, the transfer mechanism, the nature of the transfer, onward transfers, categories of personal data, special-category and criminal data, data subjects, format of the data, method of transfer and existing security measures. This is a strong foundation, because it makes the foreign-jurisdiction analysis service-specific rather than generic.

In practice, the weakest TIAs often reflect poor factual scoping rather than poor legal knowledge. Hosting is mapped but support access is not. The primary vendor is known but the sub-processor chain is not. An AI-enabled tool is treated as though it sits safely inside the main platform’s environment, even though it extracts and processes data through separate infrastructure. The result is that the TIA looks complete but is addressing the wrong transfer.

When assessing your practices internally, review whether your scoping process distinguishes between:

  • storage and remote access;
  • employees, contractors and service providers;
  • primary vendors and sub-processors;
  • occasional and routine access;
  • EEA-hosted platforms and connected third-country tools;
  • readable, pseudonymised and encrypted data.

The quality of a TIA depends on the quality of the underlying transfer analysis. If the organisation has not correctly identified the parties, access model, data exposure and onward transfer chain, the assessment will be weaker than it appears.

Who should be involved in a TIA?

A TIA should never be treated as a privacy-only paperwork exercise. It is a cross-functional assessment, and that matters because no single function usually holds all the facts. A defensible TIA should be:

  • legally informed;
  • technically grounded;
  • operationally accurate;
  • and owned by the business as well as privacy.

The DPO or privacy lead should normally coordinate the assessment. That means framing the questions, testing assumptions, identifying gaps, and ensuring the final reasoning is coherent and evidence-based. But the DPO should not be left trying to infer system architecture, key management, support access patterns or sub-processor chains without support. Legal should be involved to assess:

  • whether the transfer tool is appropriate;
  • whether the importer role is correctly understood;
  • how SCCs or other Article 46 mechanisms are being used;
  • and whether the foreign-jurisdiction analysis raises legal issues that need escalation.

IT, architecture or security teams are often essential because the foreign-law risk only becomes meaningful when matched to technical facts. If the provider cannot access intelligible data, the analysis may look different than if provider personnel can access clear-text customer content in the course of support or service delivery. That means technical teams need to clarify:

  • where the data is hosted;
  • where it is processed;
  • who can access it;
  • how encryption works;
  • who controls the keys;
  • whether pseudonymisation is meaningful;
  • and how support or privileged access operates in practice.

The relevant business or system owner also matters. A TIA is not just about whether a transfer is possible; it is also about whether the transfer is necessary, whether alternatives exist, and whether the organisation has become dependent on the arrangement in a way that raises wider governance concerns.

Procurement or vendor management is often essential because:

  • they hold the contractual documentation;
  • they can obtain sub-processor lists, trust-centre materials and service descriptions;
  • and they know when renewals, change events and leverage points arise.

Risk, compliance or resilience functions may also need to be involved where the provider is strategically important or where the transfer intersects with broader third-party oversight. In regulated settings, particularly financial services, the same provider relationship may matter at once for privacy, outsourcing, operational resilience and dependency management.

AI governance or product/data governance teams should also be involved where AI-enabled tools are in scope, because the data-flow and control issues are often more opaque and more dynamic than in ordinary SaaS arrangements.

Weak TIAs often reflect fragmented ownership. Privacy has the template, legal has the contract, IT has a partial understanding of hosting, and procurement holds vendor papers, but no one assembles the picture properly. The result is that the final document is smoother than the underlying analysis.

In assessing your practice, make sure your TIA process identifies:

  • who owns scoping;
  • who confirms the technical facts;
  • who assesses the legal mechanism and foreign-law issues;
  • who validates the necessity of the transfer;
  • who reviews sub-processor and contract materials;
  • and who can approve or escalate if the assessment reveals unresolved risk.

A credible TIA is cross-functional. It should combine privacy, legal, technical, supplier and business inputs rather than being treated as a privacy-only exercise.

The foreign jurisdiction assessment: where the real analysis happens

This is the part of the TIA most likely to draw criticism if it is weak, and the part most likely to make the assessment genuinely meaningful if it is done properly. A poor jurisdiction assessment often asks one shallow question:

“Does this country have a data protection law?”

A stronger jurisdiction assessment asks the right question:

“In light of this particular transfer scenario, can the legal environment of the destination country undermine the level of protection expected under GDPR?”

That distinction matters.

A country may have a modern privacy statute and an active regulator, but still allow forms of state access, surveillance or national-security processing that are relevant to the transfer in question. Equally, the existence of public-authority access powers does not automatically make the transfer unsupportable. The issue is whether those powers, in context, materially affect the ability of the transfer tool to provide an essentially equivalent level of protection.

That is why a strong TIA needs to assess both the general legal environment, and the practical relevance of that environment to the transfer at hand.

A good template addresses public authority access, legal basis, necessity and proportionality, safeguards against excessive access, and case studies or precedents. It should further address the wider legal environment of the recipient country, including dedicated data protection law, rights, regulator independence, judicial remedies, public authority access, surveillance programmes, and limitations and oversight. One part looks at state access and proportionality directly; the other assesses the wider data protection framework of the country.

What sources should inform the jurisdiction assessment?

This is one of the clearest areas where internal AI tooling can improve quality if designed properly. A TIA companion should not allow users to “wing” the foreign-law analysis based on memory or a single source. The sources should usually be layered.

At the top should be the official guidance:

  • EDPB Recommendations 01/2020 on supplementary measures;
  • EDPB Guidelines 05/2021 on whether Chapter V applies in the first place;
  • relevant European Commission materials on adequacy and SCCs;
  • and relevant Irish DPC or other regulator guidance or conference materials on international transfers and SCCs.

Supporting those should be:

  • country-law research tools such as DataGuidance;
  • vendor-supplied materials, including DPAs, SCCs, trust-centre information, government-request statements and sub-processor lists;
  • and any internal legal or compliance commentary developed for the organisation.

The role of a tool like DataGuidance is important here. It is a research aid, not a final legal conclusion. It is useful for assembling a jurisdiction profile, identifying relevant legal themes and orienting the assessor to the local framework. But it should not replace a real analysis of how public authority access, redress, oversight and practical enforcement interact with the service in question.

What should the jurisdiction assessment actually test?

A strong assessment should address, at minimum:

  • whether the country has a dedicated data protection law;
  • whether individuals have enforceable data protection rights;
  • whether there is an independent supervisory authority or regulator;
  • whether meaningful judicial redress is available;
  • what public authority access powers exist;
  • whether surveillance or intelligence powers are broad, targeted, supervised, challengeable or secretive;
  • whether access is subject to necessity, proportionality and oversight;
  • and whether there is relevant history or case law indicating how those powers are used in practice.

The key is to avoid generic analysis. The question is not merely whether a surveillance framework exists in the abstract. The question is whether, in light of the actual transfer scenario, the combination of the country’s legal environment and the recipient’s access to the data undermines the level of protection expected. That is why the facts gathered earlier matter so much. A destination country analysis looks very different depending on whether:

  • the importer can access full readable HR records;
  • the importer only receives encrypted backups;
  • the service provider never holds the decryption keys;
  • or the tool is an AI-enabled platform that processes readable content and may involve several third-country sub-processors.

The weakest foreign-jurisdiction sections are usually generic and over-compressed. A paragraph states that the country has a data protection law and some regulator activity, briefly notes surveillance laws, and then concludes that the transfer is supportable. That may look balanced, but it often tells the reader very little about whether the actual risks of the transfer have been understood.

So, review whether your jurisdiction assessments:

  • identify the sources used;
  • distinguish between data protection law and state access powers;
  • analyse oversight and redress rather than just listing legal instruments;
  • connect the foreign-law position to the actual service and access model;
  • and make their assumptions visible rather than implicit.

The foreign-jurisdiction assessment is the part of the TIA most likely to reveal real residual risk. It should test not only whether the country has a privacy framework, but whether state access powers, oversight and redress materially affect the transfer in context.

Assessing the probability of unlawful access without creating false precision

The value of a structured probability assessment is that it forces the assessor to identify and weigh the drivers of risk rather than writing in broad, qualitative terms alone. Your template or methodology should reflect this by breaking the analysis into factors such as the legal framework, enforcement practices, surveillance capability and historical precedents, and then asking the user to explain the score reached. This can be very useful, provided the organisation is clear about what the score means and what it does not. A probability score is not an objective truth. It is a structured representation of a judgement based on:

  • the legal environment;
  • the practical features of the service;
  • the type and volume of data involved;
  • whether the data is intelligible to the importer;
  • the strength of safeguards;
  • and the evidence available at the time of the assessment.

That means the score should never stand alone. If a TIA produces a “low likelihood of unlawful access” score but cannot explain, with sources, why that conclusion was reached, the number adds very little. A more defensible approach is to treat probability scoring as an aid to disciplined reasoning. The assessor should be able to show:

  • which factors were considered;
  • what evidence informed each factor;
  • what assumptions were made;
  • and what would cause the score to change.

This is also an area where an internal AI companion can be genuinely helpful if designed carefully. It can prompt the user to upload country-law materials, identify the factors, ask the user to justify each factor with evidence, and then draft the rationale. But it should not be allowed to produce a score with no supporting narrative or no acknowledgement of limits.

Weak scoring exercises look numerical but shallow. They average a handful of factors without showing how those factors relate to the service, the accessibility of the data, or the relevance of the legal environment in context. That gives the impression of rigour without delivering much of it.

If you use a probability methodology, make sure it:

  • identifies the factors clearly;
  • ties them to the actual transfer scenario;
  • documents the evidence and assumptions;
  • and shows what would change the overall assessment.

Probability scoring can support consistency, but it does not replace judgement. The organisation should be able to explain the factors, assumptions and evidence behind any conclusion that the likelihood of unlawful access is low.
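The factor-based approach described above can be sketched in code. This is a minimal illustration of the discipline, not a scoring standard: the factor names, the 1–5 scale and the unweighted average are all assumptions, and the point is that the function refuses to produce a number for any factor that lacks recorded evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    # One scored driver of unlawful-access risk (1 = low, 5 = high).
    name: str
    score: int                 # assessor's judgement for this factor
    evidence: list[str]        # sources relied on (guidance, country-law notes)
    assumptions: list[str] = field(default_factory=list)

def overall_score(factors: list[Factor]) -> float:
    """Unweighted average of factor scores; refuses unevidenced factors."""
    for f in factors:
        if not f.evidence:
            raise ValueError(f"Factor '{f.name}' has no supporting evidence")
        if not 1 <= f.score <= 5:
            raise ValueError(f"Factor '{f.name}' score out of range")
    return sum(f.score for f in factors) / len(factors)

# Illustrative inputs only; real assessments need real sources.
factors = [
    Factor("legal framework", 2, ["country-law research profile"]),
    Factor("public-authority access powers", 3, ["EDPB Recommendations 01/2020"],
           ["assumes importer holds the decryption keys"]),
    Factor("oversight and redress", 2, ["local counsel memo"]),
]
print(round(overall_score(factors), 2))  # 2.33
```

The structure, not the arithmetic, is the value: each score carries its evidence and assumptions, so the "what would change the score" question always has an answer.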

Supplementary measures: what actually changes the position

One of the strongest parts of the EDPB’s Recommendations 01/2020 is that they do not treat supplementary measures as abstract compliance decorations. The whole point is whether the measures make the transfer tool effective in context. That is the mindset DPOs need to preserve. The right question is not “Have we listed supplementary measures?” It is “Which measures materially reduce the exposure created by this transfer?” This is where many TIAs become weaker than they appear. Technical, contractual and organisational measures are all listed, but there is little analysis of whether they actually change the importer’s ability to access the data or the practical significance of the destination country’s legal environment.

Technical measures

Technical measures often matter most, but only where they genuinely reduce exposure. Encryption is a classic example. Encryption in transit and at rest is good baseline practice, but if the provider decrypts the data in its own environment and can access it in readable form, the legal relevance of that encryption may be limited. Key management matters, as does whether the importer holds the keys and whether the relevant risk is authority access via the importer or access that is prevented by design.

Pseudonymisation can also be meaningful, but only where the importer cannot realistically re-identify the data subject. If the importer can combine the data with other identifiers or is itself given the key to re-identification, then the measure may add less than the TIA suggests.

Minimisation, segmentation, tokenisation and local pre-processing can all be useful where they materially reduce what is exposed.

Contractual measures

Contractual clauses can support the position, particularly where they:

  • require challenge to overbroad requests;
  • increase transparency around authority access;
  • restrict onward transfers;
  • limit use of the data;
  • and support audit or notice rights.

But contractual promises do not usually neutralise a foreign-law issue on their own, particularly where the provider can still access the data in clear text.

Organisational measures

Organisational controls, such as internal access approvals, support restrictions, logging, escalation routes, and governance around sensitive data inputs, can be important, especially where they reduce frequency and scope of transfer or restrict who can trigger high-risk processing. They matter most when tied to actual process rather than simply listed as good governance principles.

The key to all of this is service-specific analysis. A measure is valuable only if it changes the actual position.

The most common weakness here is that “supplementary measures” are treated as a checklist. Encryption is mentioned, policies are mentioned, contractual clauses are mentioned, and the TIA moves on. But if the provider can still view the data, if the AI service still retains readable content, or if support staff still have access in practice, the analysis is not yet complete.

Review whether your TIA explains:

  • whether the importer can access the data in readable form;
  • who controls decryption or re-identification;
  • whether the measure changes the risk from public-authority access or only improves general security hygiene;
  • and whether the supplementary measures are genuinely linked to the risks identified in the jurisdiction assessment.

Supplementary measures are effective only if they materially reduce the real exposure. The organisation should be able to explain how technical, contractual and organisational controls change the transfer risk in practice rather than merely documenting that they exist.

AI and complex tooling: why the TIA needs stronger evidence, not softer assumptions

AI-enabled services often need stronger TIAs than ordinary SaaS tools, not weaker ones. The reason is straightforward. The processing chain is usually less transparent, the sub-processor landscape may be broader, the distinction between core functionality and underlying model/infrastructure is harder to see, and the organisation may have less visibility over retention, support access and onward processing than it assumes.

For example, consider a scenario where a meeting-assistant tool is introduced into your Microsoft stack. The service might sit outside Microsoft’s compliance perimeter and process recordings through US-based infrastructure, raising not only transfer issues but wider concerns around special-category exposure, transparency, cybersecurity, retention and sub-processing through providers such as AWS, GCP, OpenAI and Anthropic. This can happen even where the core M365 environment is configured within an EU boundary; a connected tool could extract meeting content and process it through its own infrastructure, bypassing that perimeter. That is precisely the kind of fact pattern a TIA must surface.

In an AI context, the transfer analysis needs to ask:

  • does the AI tool extract or replicate personal data from another environment?
  • where are the inference, storage, support and analytics functions actually located?
  • what sub-processors or underlying providers are involved?
  • can the provider’s personnel access readable content?
  • is data retained for troubleshooting, analytics or model improvement?
  • do the provider’s public assurances actually align with the way the service works?

A good TIA for an AI-enabled service is therefore not just about where the data goes. It is also about whether the organisation retains meaningful visibility and control once the data enters that environment.

A recurring weakness is governance lag. The organisation approves an AI-enabled feature because it is commercially useful, then tries to retrofit a privacy assessment around whatever documents the vendor is willing to provide. That often produces high-level assurances rather than a grounded understanding of the service.

Make sure AI-related TIAs:

  • are specific to the AI functionality, not just the core platform;
  • identify the actual processing chain and sub-processors;
  • address retention, reuse and support access explicitly;
  • and are revisited when the service model changes.

AI-enabled services often require a more rigorous TIA, not a lighter one. Their value may be clear, but the transfer assessment should reflect opaque processing chains, broader sub-processing and reduced visibility over data handling.

Using AI to support TIAs: what good looks like in Copilot or a custom GPT

A TIA companion can be genuinely helpful, but only if it is designed to improve the assessment rather than flatten it into polished prose. The value of a TIA AI assistant is not that it drafts faster. It is that it can structure the process, force evidence gathering, separate issues properly, and surface where the analysis is weak.

A well-designed tool instructs the user to begin with the country or countries involved, to upload relevant documentation such as DataGuidance notes, agreements and checklists, and then to step through the TIA section by section rather than attempting to draft the whole thing in one pass. It should also anticipate the need for a DPO review checklist at the end of the process.

What the tool should do

Whatever the format, whether built in Copilot or as a custom GPT, the assistant should:

  • begin with jurisdiction identification;
  • require the user to upload source materials;
  • distinguish transfer scoping from country-law analysis;
  • force the user to identify missing evidence;
  • and produce both draft wording and a reviewer issues list.

A good AI companion should also slow the user down in the right places. In particular, it should not allow the assessor to draft a conclusion before the foreign-jurisdiction module is complete.

What the foreign-jurisdiction module should do

This is the most important part of the tool. A good module should:

  • ask which official and secondary sources are being used;
  • require the user to identify data protection law, authority access powers, oversight, redress and proportionality;
  • compare vendor claims with country-law realities;
  • ask whether the data is readable to the importer;
  • and require explicit rationale before suggesting any probability score.

In other words, the tool should not just summarise the uploaded materials. It should test them against each other and identify where the evidence is thin or conflicting.

What the tool should not do

A poor TIA assistant will:

  • jump quickly to narrative drafting;
  • assume a country is “low risk” based on one source;
  • treat documentation from legal research and compilation sources as a final answer rather than a research tool;
  • rely on vendor statements without challenge;
  • or generate approval language where the evidence is incomplete.

That is not a TIA companion. It is a drafting shortcut. The greatest risk with internal AI assistance is that it can make weak analysis look more professional. That is particularly dangerous in TIAs because the document may then appear complete and well reasoned when, in substance, the jurisdiction assessment is underdeveloped.

If you are building an AI assistant for TIAs, design it, at the very least, to:

  • start with the jurisdiction;
  • require source material;
  • force service-specific questions;
  • separate narrative drafting from unresolved issues;
  • and produce a DPO review checklist alongside the draft text.

AI assistance can improve consistency in TIAs, but only if the tool is designed to force evidence, challenge assumptions and surface unresolved issues rather than simply producing polished narrative.
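The stage-gating described above, in particular the rule that the tool must not allow a conclusion to be drafted before the foreign-jurisdiction module is complete, can be sketched as a simple state machine. The stage names and class design here are illustrative assumptions, not a reference to any particular Copilot or GPT configuration.

```python
# Illustrative state machine for a TIA companion: each stage must be
# completed, with evidence, before narrative drafting is permitted.
STAGES = [
    "jurisdiction_identification",
    "source_upload",
    "transfer_scoping",
    "foreign_law_analysis",
    "draft_and_review_checklist",
]

class TIASession:
    def __init__(self):
        self.completed: set[str] = set()
        self.open_issues: list[str] = []

    def complete(self, stage: str, evidence: list[str]) -> None:
        idx = STAGES.index(stage)
        # Enforce order: every earlier stage must already be done.
        if any(s not in self.completed for s in STAGES[:idx]):
            raise RuntimeError(f"Cannot complete '{stage}' out of order")
        if not evidence:
            # No evidence becomes a reviewer issue, never a silent pass.
            self.open_issues.append(f"{stage}: no evidence supplied")
            raise RuntimeError(f"'{stage}' requires uploaded source material")
        self.completed.add(stage)

    def may_draft(self) -> bool:
        # Drafting is blocked until the foreign-law analysis is done.
        return "foreign_law_analysis" in self.completed
```

The design choice is that the tool records missing evidence as an open issue for the DPO review checklist rather than letting the user skip ahead, which is what separates a TIA companion from a drafting shortcut.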

What should trigger escalation or refusal?

A defensible TIA process should not assume that every issue can be solved by better drafting. Some issues should trigger escalation, delay or refusal. Examples include:

  • no clear answer on where the data is processed;
  • no visibility over sub-processors;
  • provider access to intelligible special-category or otherwise highly sensitive data;
  • unsupported or weak jurisdiction analysis;
  • inability to explain encryption, key control or re-identification risk;
  • AI-enabled services with unclear retention, reuse or support models;
  • or a strategically important vendor relationship where the organisation has become dependent without understanding the real transfer exposure.

This is especially important for DPOs. The point of a TIA is not simply to complete the document. It is to identify when the organisation is being asked to accept a risk position it cannot yet justify. Some of the weakest outcomes arise where the commercial decision is already fixed and the TIA is treated as a formality to be completed after the fact. That is where unresolved issues tend to be reframed as drafting issues rather than governance issues.

In your process, create escalation criteria for:

  • unclear jurisdiction risk;
  • poor provider transparency;
  • intelligible access to sensitive data;
  • unresolved AI processing questions;
  • and situations where the transfer is operationally important but poorly understood.

Certain TIA findings should be treated as escalation points rather than drafting problems. These include weak visibility over provider architecture, unsupported jurisdiction analysis, intelligible access to sensitive data and safeguards that do not materially reduce risk.
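The escalation criteria above lend themselves to a mechanical check, so that unresolved findings are surfaced rather than absorbed into drafting. This is a hedged sketch: the field names are hypothetical, and a real process would sit alongside human judgement, not replace it.

```python
# Illustrative escalation check derived from the criteria above.
# The dictionary keys are assumed field names, not a standard schema.
def needs_escalation(tia: dict) -> list[str]:
    reasons = []
    if not tia.get("processing_locations_known"):
        reasons.append("no clear answer on where the data is processed")
    if not tia.get("subprocessors_known"):
        reasons.append("no visibility over sub-processors")
    if tia.get("intelligible_sensitive_access"):
        reasons.append("provider access to intelligible sensitive data")
    if tia.get("jurisdiction_analysis") in (None, "weak"):
        reasons.append("unsupported or weak jurisdiction analysis")
    if tia.get("ai_enabled") and not tia.get("retention_model_clear"):
        reasons.append("AI service with unclear retention or reuse model")
    return reasons

# A single gap is enough to require escalation rather than sign-off.
findings = needs_escalation({
    "processing_locations_known": True,
    "subprocessors_known": False,
    "intelligible_sensitive_access": False,
    "jurisdiction_analysis": "strong",
    "ai_enabled": False,
})
```

An empty list means the assessment may proceed to approval; a non-empty list means the document cannot yet be signed off, however polished the drafting.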

Finally

A strong TIA is not valuable because it produces a completed template. It is valuable because it shows whether the organisation can support a transfer with evidence, judgement and visible governance. That is what makes the foreign-jurisdiction assessment so important. It is the point at which the organisation must move from generic comfort to real analysis. It must show that it understands not only the transfer mechanism, but the legal and practical environment into which the data is moving and whether the safeguards in place actually change the position.

For DPOs, this is one of the clearest indicators of programme maturity. If the organisation can identify the transfer correctly, involve the right parties, assess the foreign jurisdiction properly, test the practical value of supplementary measures, and document the conclusion in a disciplined way, it is much more likely to be operating a privacy programme that can withstand criticism.

That is the real value of a TIA. It does not just measure legal awareness. It measures whether governance is actually working.

This article is intended to support the learning covered in Hour 2 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

From Privacy Metrics to Audit Resilience

This article accompanies Hour 3: Privacy Program Metrics in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

How Reporting Creates Evidence, Tracks Remediation, and Supports Regulator Readiness

Most organisations already have some form of privacy reporting. There is usually a monthly update, an issue tracker, a committee paper, a dashboard, a board section, a risk register item, or some combination of the lot. The existence of reporting is rarely the real problem. The problem is that much of it is built as an update function rather than an accountability mechanism. It may describe work in progress, but it often says far less than it should about whether controls are operating, whether the business has actually done what it said it would do, whether action has been evidenced as complete, and whether the organisation could credibly explain its position if challenged.

That is where weak privacy reporting usually gives itself away. It looks organised. It does not always stand up.

A privacy dashboard can be well presented and still be weak. The test is not whether it looks organised. The test is whether the organisation can stand over it under challenge.

What privacy reporting is actually for

Privacy reporting is often treated as a management courtesy. The privacy team keeps stakeholders updated, circulates a summary, flags a few issues, and tries to maintain visibility. That is not wrong, but it is too narrow. A serious reporting model should do more than circulate information. It should help the organisation:

  • show that governance is functioning;
  • identify where weaknesses remain unresolved;
  • track whether remediation is real rather than nominal;
  • preserve evidence behind the position being reported;
  • support challenge, escalation and decision-making.

That is the difference between an update and a governance tool. A report that says “the RoPA is under review” may be fine as a status note. It is not enough as assurance. A report that says “training has been completed” may be accurate, but still tell management very little about whether the relevant control weakness has improved. A report that marks an action as closed may mean no more than that someone stopped talking about it.

The question is not whether a report says something happened. The question is whether the report helps the organisation show:

  • what was done;
  • what evidence supports it;
  • what remains open;
  • who owns the gap;
  • and whether the risk position has actually changed.

That is what privacy reporting is actually for.

Why weak reporting often hides a weak programme

One of the most consistent patterns in privacy management is that reporting tends to be smoother than the underlying programme. This is not always because anyone is trying to mislead. More often, the reporting has simply been built backwards. A management team wants a summary. A committee wants a regular update. A board wants a concise privacy section. The privacy team then builds a template to satisfy that expectation. Statuses are added. A few counts are inserted. A RAG column appears. The result looks coherent enough to circulate.

The weakness only becomes obvious when the next question is asked. If internal audit wants to test the control that the report implies is functioning, can the organisation show the underlying evidence? If the board wants to know whether a known issue was actually fixed rather than just reclassified, does the report make that clear? If a client or regulator asks what sits behind a positive status, can the organisation produce a decision trail, not just a spreadsheet line?

That is why poor reporting so often gives false comfort. It is capable of describing momentum without demonstrating control. It can imply that the programme is functioning while the underlying ownership, evidence base or remediation discipline remains weak. This is also why privacy reporting should never be treated as a cosmetic exercise. Reporting does not create control. It exposes whether control exists.

Start with the reporting chain, not the dashboard

Good reporting does not begin with a dashboard. It begins with what the organisation must be able to demonstrate. That matters because reporting becomes weak the moment it is driven by format rather than accountability. If the starting point is “what should we put in the monthly pack?” the output will usually reflect what is easy to say. If the starting point is “what do we need to be able to stand over?” the reporting becomes much more disciplined.

A stronger reporting chain works like this. The organisation first needs to understand its obligations: legal obligations, governance expectations, policy commitments, contractual requirements, sectoral expectations, and, where relevant, AI governance and resilience-related demands. It then needs controls and processes designed to meet those obligations. Those controls should generate artefacts as they operate. Only then can the organisation derive indicators and reporting lines that mean anything.

That sequence is important because it prevents reporting from floating free of the programme itself. If, for example, the organisation needs to show that it understands what personal data it is processing, then the reporting should sit on top of a functioning RoPA process. If it needs to show that risks are being assessed before high-risk processing goes live, then the reporting should sit on top of an assessment process that produces real records and real challenge. If it needs to show that incidents are managed properly, then the reporting should sit on top of an incident process that produces logs, decisions, actions and closure evidence.

The report is therefore the end product of a management chain. It is not the substitute for one.

The quality of the report depends on the quality of the underlying programme

This is where organisations often get the order wrong. They try to improve the report before improving the programme that feeds it. That almost never works. If the organisation has incomplete processing records, the reporting on RoPA progress will be weaker than it looks. If assessments are rushed, inconsistently scoped or carried out too late to influence decisions, the reporting on assessment activity may create confidence where it has not been earned. If action owners are unclear, then remediation reporting will become little more than a record of drift. If governance routes are not working, then a board note may say that a risk is under review without showing whether anyone with authority has actually made a decision about it.

This is the real test. The quality of the report depends on the quality of the underlying programme. That principle is visible across good governance work. Weak outputs often reflect weak ownership, weak scoping, weak evidence or weak escalation. Strong outputs usually reflect the opposite. The report may be the thing stakeholders see, but it is really a proxy for the state of the system beneath it.

The quality of the report depends on the quality of the underlying programme. Reporting does not create control. It exposes whether control exists.

This is also why privacy reporting often becomes more revealing as organisations mature. Early reporting tends to focus on activity and effort. Better reporting starts to show whether the activity is actually connected to working controls, reduced exposure and visible governance decisions.

What evidence should sit behind the report

A defensible privacy report should allow the organisation to move from a headline statement to the underlying artefacts that support it. That is what turns reporting into evidence rather than narrative. If a report says that records of processing are up to date, there should be reviewed records, clear ownership and visible refresh dates behind that statement. If it says that risk assessments have been completed, the organisation should be able to show the assessments, the scope, the assumptions, the reasoning and the approval path. If it says that actions are closed, there should be closure evidence rather than a bare status change. That evidence base will usually include things such as:

  • records of processing activities;
  • DPIAs, TIAs, LIAs and screening records;
  • incident and breach logs;
  • rights-handling records;
  • policy review histories;
  • training records;
  • vendor review materials;
  • action trackers;
  • issue logs;
  • sign-off records;
  • committee or governance papers where escalation has occurred.

The point is not to generate paperwork for its own sake. It is to make sure that material statements in the report have somewhere reliable to stand.

This is where stronger governance tends to reveal itself in practice. A reporting model built on recurring updates, review cycles, trackers, logs, sign-off points and shared evidence folders is far more likely to withstand scrutiny than one built on manual summary writing alone. That is because the report is not being asked to do all the work. It is sitting on top of a documentary and operational record. Reports do not create accountability on their own. Artefacts do.
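One way to make the "statements need somewhere to stand" idea concrete is a simple mapping from report statements to the artefacts that support them. This is an illustrative sketch only; the names (`EVIDENCE_MAP`, `unsupported_statements`) and the example artefact filenames are hypothetical, not a standard schema.

```python
# Illustrative sketch: tie each material report statement to the evidence
# artefacts behind it, so unsupported claims are flagged before the report
# goes out. All names and filenames here are hypothetical examples.

EVIDENCE_MAP = {
    "RoPA up to date": ["ropa_review_2025Q2.xlsx", "ropa_ownership_matrix.docx"],
    "DPIAs completed for new processing": ["dpia_register.xlsx"],
    "All P1 actions closed": [],  # no closure evidence attached yet
}

def unsupported_statements(evidence_map):
    """Return the report statements that have no artefact behind them."""
    return [stmt for stmt, artefacts in evidence_map.items() if not artefacts]

print(unsupported_statements(EVIDENCE_MAP))  # → ['All P1 actions closed']
```

The point of the check is the discipline, not the tooling: any statement that cannot be traced to at least one reviewed artefact is narrative, not evidence.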

Where board reporting usually goes wrong

Board reporting on privacy often fails for one of two reasons. It is either too thin to be useful, or too detailed to be intelligible. Where it is too thin, it tends to reassure rather than inform. The board is told that privacy activity is ongoing, that compliance matters are being monitored, that incidents are under control, and that assessments are in train. That kind of reporting rarely helps the board understand whether governance is functioning. It gives them a privacy presence, not privacy assurance.

Where it is too detailed, the opposite problem arises. The board receives operational noise rather than governance insight. Too much process detail obscures the real question, which is whether significant weaknesses are visible, whether material risks remain open, whether repeated failures are emerging, and whether management is genuinely following through on remediation.

The board does not need a privacy activity log. It needs to know whether governance is working. In practice, that means board reporting should be capable of showing things such as:

  • recurring weaknesses rather than isolated incidents;
  • high-risk items that remain unresolved;
  • repeated slippage in remediation;
  • operational ambiguity that affects the organisation’s risk position;
  • evidence that material matters have been escalated, not absorbed silently into BAU.

It should also be able to distinguish between a problem being monitored and a problem being meaningfully controlled.

A board does not need more privacy numbers. It needs to know whether governance is functioning, where risk remains open, and whether remediation is real.

That is one of the most important discipline points in privacy reporting. Board reporting should not smooth over unresolved uncertainty in order to appear neat. If the business has not confirmed the underlying processing position, if key evidence is still missing, if a control has not yet changed in practice, or if the privacy function is dependent on another team to close the issue, the board should not be told a stronger story than the evidence can support.

Why DPOs should care about dependency, not just completion

One of the most useful ways to tell whether privacy reporting is mature is to see whether it shows dependency honestly. Privacy work is often collaborative by nature. A RoPA cannot be finalised without business owners confirming actual practice. A risk assessment cannot be completed properly without operational and technical inputs. An audit action cannot be closed just because the privacy team has drafted the right wording if the actual control owner has not changed the underlying process. A vendor issue may remain unresolved because procurement, IT, legal and the business have not aligned.

Weak reporting tends to hide that. It gives the impression that everything sits within one neat delivery stream. Strong reporting makes dependency visible. This matters especially for the DPO or privacy lead. If dependency is hidden, the DPO is left with reporting that appears positive while material blockers remain outside privacy control. That is dangerous, because it makes it harder to tell the difference between genuine progress and unresolved organisational drag. A stronger report should show:

  • what the privacy team has completed;
  • where business confirmation is still outstanding;
  • where sign-off has not happened;
  • where technical clarification is missing;
  • where management decision is required before the issue can move.

That is not a weakness in the report. It is a strength. It makes the organisation’s real position visible.

Considerations for better reporting

  • Do not accept reporting that is smoother than the underlying evidence.
  • Push for reporting that shows dependency, not just status.
  • Treat repeated slippage as a governance issue, not an administrative irritation.
  • Be careful of “closed” actions where the closure evidence is weak or indirect.
  • Make sure the reporting distinguishes between privacy effort and business completion.

Metrics should answer management questions, not just count work

A great deal of privacy reporting suffers from a numbers problem. Not because there are too few numbers, but because the wrong numbers are being asked to carry too much meaning. It is easy to count activities. The organisation can usually say how many assessments were completed, how many rights requests were received, how many incidents were logged, how many policies were reviewed, how many training sessions were delivered. Those figures are not useless. The problem is that they often tell management very little unless they are tied to a real question.

If ten assessments were completed, does that tell you whether risk was assessed early enough to influence decisions? If policy reviews are on time, does that tell you whether the underlying operational issue changed? If rights requests are answered within deadline, does that tell you whether the same upstream weaknesses continue to generate them? If incident numbers are low, does that tell you anything about visibility, under-reporting or quality of control?

Useful metrics should help the organisation understand:

  • whether controls are functioning;
  • whether risk is increasing or reducing;
  • whether issues are recurring;
  • whether remediation is credible;
  • whether the programme is becoming more controlled or simply more active.

This is also why not every useful measure needs to be treated as a KPI in the narrow sense. Some lines are activity measures. Some are indicators of control performance. Some show unresolved exposure. Others show slippage, repeated failure or escalation pressure. The usefulness of the report lies in whether each measure helps someone decide, challenge or intervene. The point of a privacy metric is not to count work. It is to help the organisation understand whether its controls, risks and remediation efforts are moving in the right direction.
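The distinction between counting work and answering a management question can be sketched by attaching each metric to the question it answers, and flagging the ones that answer nothing. The field names and example metrics below are hypothetical, chosen to mirror the discussion above.

```python
# Illustrative sketch: a metric is only management information if it is
# attached to a question someone could act on. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    value: int
    question: Optional[str]  # the management question this answers, if any
    kind: str                # "activity", "control", "exposure" (illustrative)

metrics = [
    Metric("DPIAs completed", 10, None, "activity"),
    Metric("DPIAs completed before go-live", 6,
           "Is risk assessed early enough to influence decisions?", "control"),
    Metric("High-risk actions open > 90 days", 3,
           "Is remediation credible?", "exposure"),
]

# Metrics with no attached question are activity counts, not answers.
orphaned = [m.name for m in metrics if m.question is None]
print(orphaned)  # → ['DPIAs completed']
```

A reporting pack built this way forces the author to either state the question a number answers or admit it is only an activity count.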

A strong report should track remediation, not just issues

One of the clearest differences between weak reporting and strong reporting is what happens after the issue is identified. Weak reporting often stops at visibility. The issue appears in the pack, gets discussed, remains in the tracker, and returns in some slightly altered form next month. Over time, that becomes a familiar pattern. The organisation gets better at reporting the issue than resolving it.

Strong reporting does something different. It helps turn the issue into managed remediation. That means the report should make visible:

  • what the issue is;
  • why it matters;
  • who owns the next step;
  • what evidence will support closure;
  • when follow-up is required;
  • and whether escalation is now justified.

That is what makes reporting operationally useful. It also helps avoid one of the most common governance failures in privacy work: the quiet conversion of unresolved issues into administratively “closed” items.

A privacy issue is not “closed” because it disappeared from the tracker. It is closed when the organisation can evidence that the weakness has actually been addressed.

This is why remediation reporting matters so much. It shows whether the organisation is genuinely following through, or simply learning to describe the same problems more efficiently.
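The remediation fields listed above can be sketched as a simple record whose closure test depends on evidence rather than a status flag. This is a minimal sketch under assumed field names (`RemediationItem`, `can_close`), not a prescribed data model.

```python
# Illustrative sketch: an issue record that cannot be "closed" by a bare
# status change — closure requires attached evidence. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RemediationItem:
    issue: str
    why_it_matters: str
    owner: str
    closure_evidence: list = field(default_factory=list)
    follow_up_due: Optional[str] = None
    escalated: bool = False

    def can_close(self) -> bool:
        """Closable only when closure evidence actually exists."""
        return bool(self.closure_evidence)

item = RemediationItem("Stale RoPA entries", "Accountability gap", "Data Ops")
print(item.can_close())  # → False  (no evidence yet)
item.closure_evidence.append("ropa_refresh_signoff_2025-06.pdf")
print(item.can_close())  # → True
```

Making closure conditional on evidence is the structural equivalent of the discipline described above: the tracker cannot quietly absorb an issue that has not actually been fixed.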

Audit resilience is built in ordinary governance

Audit resilience is often misunderstood as something that happens when audit arrives. In reality, it is built in ordinary governance. An organisation is resilient under scrutiny when it can reconstruct the decision trail without relying on memory. It should be able to show what the issue was, where it was logged, who reviewed it, what action followed, what evidence supported the conclusion, and whether any residual risk remains open. That is not a special audit exercise. That is what good governance should already be producing.

The same is true of regulator readiness. Regulator readiness does not begin when an information request lands. It begins when the organisation builds reporting in a way that preserves evidence, ownership, review history and escalation logic as part of routine operations.

This is one of the reasons recurring governance structures matter so much. Monthly status updates, action trackers, review cycles, sign-off points, incident logs, governance meetings and shared evidence environments may look administrative from the outside. In reality, they are often the things that determine whether the organisation can answer difficult questions later with confidence rather than reconstruction.

Ordinary reporting should reduce the need for extraordinary panic later.

Where AI and resilience make weak reporting more dangerous

Weak reporting becomes more dangerous where AI systems and resilience obligations are in scope. AI-related processing often involves more opacity, broader third-party dependency, more complex service chains and weaker visibility over how data is actually being handled in practice. That means broad assurances are especially risky. If reporting on AI use is built on vague inventories, incomplete review records or soft assumptions about how the service operates, the organisation may be reporting confidence that it has not yet earned.

Resilience issues create similar problems. Incidents, third-party dependencies, recovery arrangements, operational workarounds and critical service impacts often sit across privacy, security, operational resilience and vendor governance. If reporting is too siloed, those overlaps remain hidden. If reporting is too soft, the organisation may describe control where it really has unresolved dependency.

The lesson is not that all governance must be merged into one report. It is that weak reporting becomes less defensible where the facts are more complex, the dependencies are broader, and the evidence is harder to assemble after the fact. Where AI or resilience issues intersect with privacy, the answer is not lighter reporting. It is stronger evidence. So:

  • Where AI is involved, insist on clearer inventories, clearer scoping and stronger evidence trails.
  • Where resilience issues overlap with privacy, make sure unresolved dependency is visible.
  • Be wary of reporting that relies heavily on supplier narrative without internal validation.
  • Treat opacity as a reporting risk, not just a technical risk.

In summary

Privacy reporting is not valuable because it produces a polished management paper. It is valuable because it helps the organisation show that governance is working. The strongest reporting models do more than summarise activity. They preserve evidence, expose dependency, support remediation, clarify ownership and strengthen the organisation’s ability to withstand challenge. They help show not just what has been done, but what can be proved, what remains unresolved, and what has been escalated because it still matters.

That is the real move from privacy metrics to audit resilience. If reporting does not help the organisation demonstrate control, evidence remediation and withstand scrutiny, it is not yet doing the job it needs to do.

This article is intended to support the learning covered in Hour 3 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Cross-Border Transfers for DPOs

This article accompanies Hour 2: Cross-Border Transfers in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Practical DPO Perspective

Cross-border transfers are often presented as a narrow legal issue: identify the transfer, select a mechanism, insert the clauses, and move on. That is not how it works in practice. For most organisations, the real weakness is not a complete absence of legal awareness. It is that the underlying transfer analysis is often shallow. The organisation may know that international transfers are regulated, but still fail to answer the questions that actually matter:

  • what is the transfer scenario?
  • who is receiving the data in practice?
  • where can it be accessed from?
  • is the data intelligible in the destination jurisdiction?
  • what, if anything, do the safeguards materially change?
  • and can the organisation stand over the position it has taken?

From a DPO perspective, this is where the issue becomes real. Cross-border transfers are not just about Chapter V. They are a practical test of whether the organisation understands its systems, its vendors, its dependencies and its own governance.

The first mistake is often getting the transfer analysis wrong

A surprising amount of poor transfer analysis starts too late. The organisation moves quickly to SCCs, adequacy or template wording before it has properly identified what the transfer actually is. That matters because not all overseas access scenarios are the same.

A temporary employee working remotely while travelling is not necessarily the same as engaging a contractor established in a third country to access internal systems. A cloud platform hosted in the EEA is not the same as a connected service extracting data from that platform and processing it through its own US-based infrastructure. A support arrangement allowing occasional limited troubleshooting access is not the same as routine privileged administrative access from outside the EEA.

Those distinctions are not technical trivia. They shape the legal analysis. For DPOs, the first step is therefore not “Which clauses do we need?” It is “What exactly is happening here?” That means understanding:

  • who the recipient is
  • whether they are acting as processor, controller or contractor
  • whether the data is merely transiting, being stored, or being accessed remotely
  • whether the access is occasional or routine
  • whether the recipient can view the data in clear text
  • whether sub-processors are involved
  • and whether the organisation is dealing with one transfer or a chain of transfers

If those facts are unclear, the rest of the analysis is likely to be weak. Organisations often map where data is hosted but not where it is accessed from. They identify the main vendor but not the sub-processor chain. They treat a tool as part of an existing compliant environment, even though the add-on service is operating outside that perimeter altogether. They also tend to collapse very different overseas access scenarios into one generic “international transfer” label, which obscures the real legal and operational distinctions.

Consider reviewing whether your transfer mapping distinguishes between:

  • storage and remote access
  • employees and third-country contractors
  • primary vendors and sub-processors
  • core platforms and connected tools
  • occasional support access and ongoing operational access
  • pseudonymised or encrypted data versus data readable in clear text

International transfer exposure often turns on facts that are not visible at policy level. The organisation should distinguish between different access and hosting scenarios rather than treating all overseas processing as a single generic issue. Weak factual analysis leads to weak transfer decisions.
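The mapping distinctions above can be captured as a per-scenario record instead of a single "international transfer" label. The field names and the review rule below are hypothetical, intended only to show how the distinctions might be recorded and acted on.

```python
# Illustrative sketch: record the facts that distinguish transfer scenarios,
# rather than collapsing them into one generic label. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class TransferScenario:
    recipient: str
    role: str             # "processor", "controller", "contractor"
    mode: str             # "storage", "remote_access", "transit"
    routine: bool         # routine vs occasional access
    clear_text: bool      # can the recipient view the data readably?
    sub_processors: list

def needs_deeper_review(s: TransferScenario) -> bool:
    """A crude flag: clear-text data plus routine access or a sub-processor
    chain is exactly what a generic label would hide."""
    return s.clear_text and (s.routine or bool(s.sub_processors))

support = TransferScenario("US support vendor", "processor", "remote_access",
                           routine=True, clear_text=True, sub_processors=[])
print(needs_deeper_review(support))  # → True
```

The value is not in the flag itself but in forcing the facts (mode, routineness, intelligibility, sub-processors) to be recorded per scenario before any mechanism is chosen.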

SCCs are often used as a substitute for thinking

Standard Contractual Clauses remain important and, in many cases, necessary. But they are often treated as though they answer more than they actually do.

  1. They do not tell you whether the recipient can access intelligible data.
  2. They do not tell you whether local law may undermine the level of protection expected under EU law.
  3. They do not tell you whether the provider’s support model materially changes the risk.
  4. They do not tell you whether the organisation has understood the actual architecture of the service.

That is why Schrems II mattered so much in practice. It did not make SCCs irrelevant. It made it harder to pretend that contractual wording alone resolves the issue. For DPOs, this is one of the most important mindset shifts. SCCs are not the conclusion. They are the legal vehicle through which the transfer may be supported, provided the surrounding facts and safeguards make that supportable. The real assessment still has to ask:

  • what legal environment is the recipient subject to?
  • what categories of data are involved?
  • can the provider or authorities access the data in intelligible form?
  • what technical and organisational measures exist?
  • what changes if those measures fail or are bypassed?

A signed set of SCCs without that analysis is not a strong position. It is often just a neat-looking file. A recurring problem is the belief that if the vendor is well known, the DPA is polished, and SCCs are attached, the organisation has done enough. In reality, that often means the organisation has documented the mechanism without properly assessing the transfer. Some TIAs then repeat generic language about safeguards while saying very little about how the service actually operates, what the provider can see, or what risk remains if the provider handles the data in clear text. Check whether your transfer analysis goes beyond “SCCs are in place”, generic vendor assurances, high-level statements about security, and broad claims of compliance unsupported by service-specific facts.

For example, ask instead:

  • can the recipient access the data in clear text?
  • what practical difference do the safeguards make?
  • what do we know about the provider’s support and access model?
  • if challenged, could we explain why this transfer remains supportable?

Standard Contractual Clauses should not be treated as a substitute for substantive assessment. The presence of SCCs does not remove the need to understand provider access, destination-country risk, intelligibility of the data and the practical effect of safeguards.

A TIA is only useful if it forces the right factual questions

A Transfer Impact Assessment is often described as a compliance requirement. That is true, but it is not the most useful way to think about it. A good TIA is a disciplined way of forcing the organisation to confront the underlying facts of the transfer and to document the judgement it has made. It should ask, at a minimum:

  • what data is involved?
  • how sensitive is it?
  • who receives it?
  • where do they operate?
  • what access do they have in practice?
  • is the data intelligible at the point of access?
  • what laws in the destination jurisdiction matter?
  • what measures reduce the exposure?
  • and what residual risk remains?

That is what makes a TIA valuable. It is not simply an internal paper trail. It is a mechanism for converting abstract legal obligations into a decision the organisation can actually defend. This is particularly important for DPOs because weak TIAs tend to fail in the same way: they contain the right headings, but the wrong depth. They reproduce a compliance vocabulary without showing the reasoning that matters. If a TIA never meaningfully addresses whether the provider can view the data in readable form, whether the provider’s support staff are outside the EEA, or whether the sub-processor chain alters the risk, then it is not doing the real job.

The most common weaknesses are boilerplate analysis, late-stage completion, and poor connection to procurement or design decisions. TIAs are often produced after the commercial decision is already made, using generic wording that could apply to almost any vendor. That gives the appearance of control while leaving the actual decision-making unexamined. Review a small sample of your TIAs and ask:

  • do they describe the actual service or just the generic transfer issue?
  • do they identify who can access the data and in what form?
  • do they distinguish between technical safeguards that genuinely reduce risk and those that do not?
  • do they record any limits, conditions or follow-up actions?
  • would the document still make sense to a regulator reading it cold?

A TIA is useful only if it captures the factual and legal reasoning behind the transfer. Boilerplate assessments create the appearance of assurance without showing that the organisation has meaningfully understood the provider, the data exposure or the residual risk.
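A crude way to operationalise the "right headings, wrong depth" test is a completeness check: does a draft TIA record an answer to each of the factual questions listed above? The field names below are illustrative, not a standard TIA schema.

```python
# Illustrative sketch: flag the factual questions a draft TIA has not
# meaningfully answered. Field names are hypothetical, not a standard.

REQUIRED_FIELDS = [
    "data_categories", "sensitivity", "recipient", "jurisdictions",
    "practical_access", "intelligible_at_access", "relevant_local_laws",
    "mitigating_measures", "residual_risk",
]

def missing_fields(tia: dict) -> list:
    """Return the questions the TIA leaves unanswered (absent or empty)."""
    return [f for f in REQUIRED_FIELDS if not tia.get(f)]

draft_tia = {
    "data_categories": "HR records",
    "recipient": "Payroll provider",
    "mitigating_measures": "Encryption in transit and at rest",
}
print(missing_fields(draft_tia))
```

A check like this cannot judge depth, only presence: it will not catch boilerplate, but it does stop a TIA being signed off with whole questions silently skipped.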

AI and connected tooling are where organisations most easily lose control

If traditional transfer analysis was already difficult, AI-enabled services have made it harder. The challenge is not simply that AI tools may process data outside the EEA. It is that the processing chain is often less transparent, the sub-processor landscape is broader, and the customer may have less visibility over retention, reuse, support access and model-related processing than they assume. This is where a transfer analysis that looks acceptable on paper can become weak very quickly.

An organisation may believe it is operating inside a controlled environment, for example through an EU-hosted collaboration or productivity suite. But if a connected AI-enabled service extracts transcripts, recordings or other content from that environment and processes it through its own infrastructure, then the original boundary is no longer the key point. The real question becomes what happens once the data leaves that environment, who can access it, and under what conditions.

That is where DPOs need to be particularly careful. In an AI context, the transfer issue is not just where the data goes. It is whether the organisation retains meaningful visibility and control once the data enters that processing environment. That means asking harder questions:

  • is the tool using third-country infrastructure?
  • is prompt, transcript or content data retained?
  • is it available for model improvement, troubleshooting or analytics?
  • who are the relevant sub-processors?
  • can humans at the provider access the data?
  • is the data encrypted only in transit and at rest, or is it still intelligible during processing?
  • does the organisation understand the real boundaries of the service?

These are not optional refinements. They go to the heart of whether the transfer analysis is credible.

What we repeatedly see is governance lag. AI-enabled tools are deployed because they are useful, fast and embedded into everyday work. The privacy analysis then follows behind, often relying on assumptions that do not survive closer scrutiny. Organisations also tend to overestimate what “EU-based” marketing language means, particularly where the service depends on broader support, model or sub-processing arrangements.

Schedule a review covering:

  • which AI-enabled tools or integrations are already active
  • whether they extract or replicate personal data from existing systems
  • whether they introduce non-EEA processing or access
  • whether the service terms permit retention, analytics or reuse
  • whether your TIAs and vendor reviews are specific to the AI functionality rather than the core platform alone

AI-enabled services can materially weaken transfer visibility and increase accountability burden. Their use may involve non-obvious processing chains, third-country infrastructure, multiple sub-processors and reduced customer control. These tools should be assessed as transfer and governance issues, not just as productivity features.
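The review items above can be sketched as an inventory record per AI-enabled tool, with a simple rule surfacing governance gaps. All field names and the gap rules are hypothetical illustrations of the questions in the list, not a recognised assessment framework.

```python
# Illustrative sketch: an inventory record for a connected AI-enabled tool,
# flagging the gaps discussed above. Names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class AiToolRecord:
    name: str
    extracts_from_core_platform: bool   # pulls data out of the EU-hosted suite?
    non_eea_processing: bool            # third-country infrastructure or access?
    retention_or_reuse_permitted: bool  # terms allow retention, analytics, reuse?
    tia_covers_ai_functionality: bool   # TIA specific to the AI service itself?

def governance_gaps(tool: AiToolRecord) -> list:
    gaps = []
    if tool.non_eea_processing and not tool.tia_covers_ai_functionality:
        gaps.append("TIA does not cover the AI functionality")
    if tool.extracts_from_core_platform:
        gaps.append("data leaves the controlled environment")
    if tool.retention_or_reuse_permitted:
        gaps.append("terms permit retention or reuse")
    return gaps

tool = AiToolRecord("Meeting transcriber", extracts_from_core_platform=True,
                    non_eea_processing=True, retention_or_reuse_permitted=True,
                    tia_covers_ai_functionality=False)
print(governance_gaps(tool))
```

Even a sketch this simple makes the governance-lag problem visible: the inventory has to exist before the gaps can be counted.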

This is also a third-party oversight issue, and in some sectors a resilience issue

Cross-border transfers are often kept within the privacy silo. In practice, they overlap heavily with vendor governance, outsourcing oversight and, in regulated sectors, broader operational resilience concerns. If a critical or hard-to-replace provider stores or accesses personal data outside the EEA, the issue is not simply whether there is a lawful mechanism in place. It is also whether the organisation has enough visibility, assurance and control over that provider relationship. That is why transfer governance should not sit apart from wider third-party review. A provider may at the same time be:

  • a material processor of personal data
  • an important operational dependency
  • a source of concentration or substitution risk
  • and a point of exposure because of non-EEA access or sub-processing

Where those issues are reviewed in separate silos, the organisation can end up with a legally tidy but operationally weak position. For DPOs, this matters because the transfer analysis is often only as good as the information the organisation has about the vendor. If that visibility is poor, the privacy conclusion will usually be weaker than it appears. This is particularly relevant in financial services and other regulated environments, where transfer governance may support broader expectations around supplier oversight, dependency management and evidence of control. The point does not require a full DORA analysis; it simply requires recognising that the same provider relationship may matter for several governance reasons at once.

A common failure point is fragmentation. Procurement reviews the contract. IT reviews the implementation. Risk reviews continuity. Privacy reviews the DPA. But no one joins that into a coherent view of how the provider actually operates, how dependent the organisation has become, and whether the privacy analysis still holds if service conditions change.

Questions to ask:

  • which providers are operationally significant as well as privacy-relevant
  • whether transfer review is linked to vendor governance and oversight
  • whether changes in hosting, support model or sub-processors are captured and escalated
  • whether board reporting on critical third parties includes material transfer exposure where relevant

International transfers may also expose wider third-party and resilience weaknesses. Where a provider is operationally important and processes personal data outside the EEA, the organisation needs not only a lawful mechanism but sufficient visibility and control over that dependency.

For DPOs, the real issue is whether the organisation can defend the position it has taken

The mature question in this area is not “Do we know that cross-border transfers are regulated?” Most organisations do. The more important question is whether the organisation can explain, with evidence, why it believes a given transfer is supportable. That requires more than awareness of the law. It requires enough understanding of systems, vendors and governance to connect the legal mechanism to the real operational facts. It requires TIAs that reflect the actual arrangement rather than generic precedent. It requires challenge where the business assumes that a contract or a familiar vendor name resolves the issue. And it requires senior reporting that turns transfer risk into something visible rather than theoretical.

That is why cross-border transfers are such a useful measure of programme maturity. Where the organisation gets this right, it usually indicates something broader: joined-up governance, stronger vendor control, clearer ownership and a privacy programme that can translate legal standards into defensible decisions. Where it gets this wrong, the same pattern usually appears elsewhere too.

The real difficulty is rarely total ignorance. It is fragmented ownership, weak operational visibility and analysis that is neater than it is deep. Privacy teams may know the law, but not have enough visibility into real access patterns, vendor architecture or AI-enabled data flows to challenge the business properly.

To address this, ask:

  • who owns transfer mapping in practice?
  • who signs off TIAs and on what basis?
  • how are changes in tools, vendors or support arrangements identified?
  • can the organisation distinguish between compliant documentation and defensible analysis?
  • could it explain the position clearly to a regulator or auditor if required?

Cross-border transfer compliance is a practical test of governance maturity. It shows whether the organisation can convert legal requirements into evidence-based decisions, meaningful supplier oversight and a position that can be defended if challenged.

Final thoughts

Cross-border transfers are not difficult because the law is obscure. They are difficult because they expose whether the organisation has really understood its own operating model. For DPOs, that is the key point. This is not ultimately about inserting clauses into contracts or reciting Schrems II. It is about identifying where the data goes, who can access it, what the technical and organisational reality looks like, and whether the organisation can justify the conclusion it has reached. That is what makes transfer compliance useful. It does not just test legal knowledge. It tests whether privacy governance is actually working.

This article is intended to support the learning covered in Hour 2 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Data Protection Officer Services