Vendor Oversight and Legal Characterisation

This article accompanies Hour 4: Vendor Management Oversight in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Vendor oversight often weakens before monitoring even begins

Vendor management is often framed too narrowly. The organisation onboards a supplier, legal reviews the paperwork, a data processing agreement is requested, and privacy is then asked whether the vendor is “covered”. That sequence is familiar. It is also where a great deal of weak practice begins.

The first privacy question in a vendor relationship is not whether a DPA is in place. It is whether the organisation has correctly understood what role the other party is actually performing in relation to the personal data. If that point is missed, the rest of the oversight model is built on the wrong foundation. The wrong agreement may be used, the wrong risks may be reported, or the wrong controls may be prioritised. In some cases, the organisation may believe it has documented accountability when it has only described part of the relationship.

Vendor oversight often weakens at the point where the organisation chooses the wrong legal model for the relationship and then builds its controls around that error.

This is one of the reasons vendor management repeatedly becomes a source of exposure. Organisations often move too quickly from “supplier involved” to “processor appointed”. That may be right in some cases. In others, it is incomplete or simply wrong.

A supplier may be a processor for some elements of the service and an independent controller for others. In some relationships, the other party is not a processor at all. In others again, the arrangement may involve elements of joint controllership that are not being recognised because the parties prefer the apparent neatness of processor language.

That is why this topic matters for the DPO or privacy manager. Vendor oversight is not just about monitoring suppliers after onboarding. It starts with legal and factual accuracy. If the relationship is characterised too loosely at the outset, the organisation’s privacy position is likely to remain weaker than it appears throughout the life of the arrangement.

The classification question comes before the agreement question

Privacy review is often reduced to paperwork. The contract arrives, a DPA is attached, and the organisation asks whether the necessary clauses are present. That is a normal operational step, but it is not the point at which the analysis should begin.

The more important question is what the facts of the arrangement actually show. Who is deciding why the personal data is being used? Who is deciding the essential means of the processing? Is the supplier acting only on the organisation’s instructions, or is it using the data for its own purposes as well? Does the answer differ depending on the data flow, the service element or the stage of the relationship?

These are not drafting niceties. They determine the legal model that should apply. A processor is not created by labelling the supplier a processor. A controller-to-controller relationship is not avoided by putting Article 28 wording into a contract. Joint controllership is not displaced simply because the parties would prefer one side to take a more passive label. The factual reality of the processing remains the starting point.
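For privacy teams that maintain a structured vendor register, the factual questions above can be recorded per data flow rather than per contract. The Python sketch below is purely illustrative (the field names, role labels and classification logic are assumptions for this sketch, not legal terms of art); its only point is that the label is an output of the factual analysis, never an input:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROCESSOR = "processor"
    INDEPENDENT_CONTROLLER = "independent controller"
    JOINT_CONTROLLER = "joint controller"

@dataclass
class DataFlow:
    """One data flow within a vendor relationship (illustrative fields)."""
    name: str
    vendor_determines_purposes: bool   # does the vendor decide why the data is used?
    jointly_determined: bool           # do the parties shape purpose and means together?

def classify(flow: DataFlow) -> Role:
    """Hedged sketch: the facts drive the label, not the other way round."""
    if flow.jointly_determined:
        return Role.JOINT_CONTROLLER
    if flow.vendor_determines_purposes:
        return Role.INDEPENDENT_CONTROLLER
    return Role.PROCESSOR
```

The value is not in the code but in its shape: the same vendor can yield different roles for different flows, which is exactly the point the contract label tends to obscure.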

That matters because privacy obligations attach differently depending on the role actually being performed. If the organisation applies a processor framework where the other party is determining its own purposes, the contract may give a false impression of control. If the relationship is treated as controller-to-controller where the organisation is in fact instructing the other party in a more constrained way, the oversight model may be looser than it should be. If the relationship contains mixed elements, but only one side of the picture is documented, the organisation may be carrying accountability gaps that do not become obvious until the relationship is tested.

The privacy risk in many vendor arrangements begins before any monitoring issue arises. It begins when the relationship is documented on a legal basis that does not match the facts.

For the DPO or privacy manager, this is one of the most useful ways to reframe vendor review. The immediate task is not to decide whether the paperwork is present. It is to decide whether the legal characterisation is sound enough to support everything that follows.

Where the supplier really is a processor

There will, of course, be many cases where the supplier is acting as a processor. If the organisation determines the purposes of the processing, defines the essential features of what is to be done with the data, and the supplier is carrying out that processing on documented instructions, then the processor framework is likely to be the correct one.

Where that is the case, the familiar GDPR consequences follow. Article 28 becomes central. Appropriate contractual provisions are required. Due diligence matters. Oversight matters. The controller remains accountable for the processing carried out on its behalf. None of that is in doubt. What is often misunderstood is what the processor model does and does not achieve in practice.

A processor agreement is not a substitute for understanding the service. It does not eliminate the need to assess what the processor is actually doing, how sub-processing is structured, how changes to the service are managed, whether monitoring is meaningful, or whether the organisation’s theoretical rights are usable in practice. An Article 28 framework only supports accountability if the organisation is able to retain some practical grip on the relationship over time.

Weak processor oversight is not usually just a drafting failure. It is often a failure to revisit whether the organisation still understands the relationship as it has developed. A processor approved for a bounded purpose may, over time, handle larger volumes of data, support more business functions, rely on a wider sub-processing chain or become much harder to challenge commercially than at the point of onboarding.

A processor arrangement is only as strong as the organisation’s ability to understand the service, monitor material change and respond where the practical level of control weakens over time.

This is where the DPO or privacy manager needs to be more exact than a basic contract review allows. The issue is not simply whether the processor clauses are there. It is whether the processor model still fits the service as used, and whether the organisation’s oversight is operating at the level the risk now requires.

Not every vendor relationship is a processor relationship

One of the most persistent oversimplifications in vendor management is the assumption that any supplier handling personal data is acting as a processor. That is often operationally convenient, but it can be legally and practically misleading.

Some third parties use personal data for their own purposes and within their own regulatory or commercial frameworks. In those relationships, they are not simply carrying out another organisation’s instructions. They may be receiving data from the organisation, but that does not make them its processor.

That point is easy to lose in practice because controller-to-controller relationships are often less tidy from a governance point of view. They force the organisation to confront the fact that the other party has its own purposes, its own lawful basis position, its own transparency obligations and, in many cases, its own onward disclosure model. The organisation does not retain the same type of instruction-based control that exists in a processor relationship.

Examples vary by context, but the pattern is familiar. A professional adviser may receive personal data and use it within its own regulated professional role. An insurer, benefits provider, fraud prevention service, credit agency, external platform provider or specialist analytics provider may determine certain purposes of use for itself. Some software providers may act as a processor for the hosted client environment while separately using certain data for product security, abuse detection, fraud control or service improvement on their own account.

In those cases, a DPA alone is not enough. In some cases, it is not the right primary instrument at all.

A more appropriate arrangement may require controller-to-controller provisions or a data sharing agreement that properly addresses the legal and governance consequences of disclosure to another controller. That means looking more carefully at the purpose of the sharing, the lawful basis relied upon, transparency to data subjects, onward transfers, retention boundaries, rights handling, security expectations and any restrictions or conditions the organisation wants to attach to the sharing.

Where the other party is determining its own purposes, the issue is no longer processor oversight alone. It becomes a question of whether the organisation has a defensible controller-to-controller sharing model.

That is a materially different type of privacy analysis. The DPO or privacy manager should not be satisfied simply because “privacy terms” exist somewhere in the contract. The key question is whether the organisation has recognised that the relationship is not one of delegated processing only, and whether the right agreement structure has been used as a result.
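Where the organisation tracks whether the sharing questions above have actually been addressed in the agreement, a minimal coverage check can make gaps visible. The topic names below are illustrative assumptions drawn from the discussion above, not a prescribed or exhaustive list:

```python
# Hedged sketch: checking a data sharing agreement's coverage against
# the controller-to-controller topics discussed above (topic names
# are assumptions for this sketch, not a mandated checklist).
C2C_TOPICS = {
    "purpose of the sharing",
    "lawful basis relied upon",
    "transparency to data subjects",
    "onward transfers",
    "retention boundaries",
    "rights handling",
    "security expectations",
    "restrictions or conditions on use",
}

def coverage_gaps(addressed: set[str]) -> set[str]:
    """Return the topics the agreement has not yet addressed."""
    return C2C_TOPICS - addressed
```

A non-empty result is not itself a compliance finding; it is a prompt to ask whether the missing topic was analysed and deliberately excluded, or simply never considered.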

Why data sharing agreements matter in controller-to-controller scenarios

Data sharing agreements are sometimes treated as secondary documents compared with DPAs. In practice, where personal data is disclosed to another controller, a well-structured data sharing agreement can be at least as important from a governance perspective.

That is because the legal and accountability issues are different. The organisation is no longer only asking how to bind a supplier to instructions. It is asking on what basis the data is being shared, what each party is entitled to do with it, what constraints or expectations apply to onward use, how transparency is addressed, how retention is framed, how rights requests are handled where responsibilities overlap, and what happens if the legal position around the sharing changes.

This is especially important where the relationship is significant but not neatly reducible to a pure service provider model. Without a good controller-to-controller framework, organisations can end up assuming that the existence of a commercial contract means the privacy position is adequately covered. It often is not.

A well-structured data sharing agreement does not make accountability disappear, but it does force the parties to confront the right questions. What is being shared, why, under what legal basis, with what expectations around use, and with what allocation of responsibility? Those are exactly the questions that tend to go underexplored when the relationship is hastily labelled as a processor arrangement.

For the DPO or privacy manager, this is often where advisory value is clearest. The privacy function should be able to explain not just that a DPA is the wrong tool in a particular case, but why the relationship requires a more accurate controller-to-controller analysis and what practical consequences follow from that reclassification.

Hybrid relationships are often the real trap

Some of the hardest vendor arrangements are not purely processor or purely controller relationships. They are mixed. That is often where privacy oversight becomes both more difficult and more important.

A SaaS provider may act as a processor in relation to customer data hosted in the service, while separately acting as a controller for certain telemetry, fraud prevention, service security or product improvement functions. A benefits or HR platform may process employee data on the organisation’s instructions for certain service elements while separately determining aspects of use for its own legal, product or operational purposes. An external specialist may process personal data under instruction for one strand of a project while independently determining how related information is used for another.

These mixed models create a practical temptation. The organisation may want to pick one label for the relationship as a whole and move on. That is understandable. It is also often the source of poor documentation and weak oversight.

A hybrid relationship needs to be analysed by reference to the relevant data flows and purposes. One part of the arrangement may require an Article 28 processor framework. Another may require controller-to-controller data sharing provisions. In some cases, there may need to be a combination of both within the same broader contractual package, supported by clear internal mapping of which activities fall into which category.
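That internal mapping can be kept deliberately simple. The sketch below assumes a hypothetical SaaS relationship with illustrative flow names and role assignments; the only point it demonstrates is that one commercial relationship can resolve to more than one contractual instrument:

```python
from collections import defaultdict

# Illustrative flow-to-role mapping for a hypothetical SaaS relationship;
# the flow names and role assignments are assumptions for this sketch.
FLOWS = [
    ("hosted customer records", "processor"),
    ("support ticket content", "processor"),
    ("product telemetry", "independent controller"),
    ("abuse detection signals", "independent controller"),
]

INSTRUMENT = {
    "processor": "Article 28 processor terms",
    "independent controller": "controller-to-controller sharing terms",
}

def contract_map(flows: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group each data flow under the contractual instrument its role
    requires; a hybrid relationship returns more than one instrument."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for name, role in flows:
        grouped[INSTRUMENT[role]].append(name)
    return dict(grouped)
```

If the mapping produces a single instrument for a relationship the business describes as "the platform plus its analytics", that mismatch is itself a signal the analysis may be incomplete.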

Some of the weakest vendor arrangements are not wrongly documented because nobody cared. They are wrongly documented because the relationship was more complex than the organisation was willing to analyse properly.

This is a particularly important point for the DPO or privacy manager because hybrid arrangements often create false confidence. The organisation may believe the relationship is “covered” because a DPA is in place, while significant parts of the vendor’s role sit outside that processor model altogether. Unless the mixed nature of the arrangement is explicitly recognised, the oversight model is likely to remain partial and reporting to management may understate the real legal and governance complexity.

Joint controllership is different again

Joint controllership creates a different type of issue, and it is often under-identified for the same reason hybrid arrangements are under-analysed: the processor model is usually seen as more straightforward and more comfortable from a governance perspective.

But joint controllership cannot be wished away through preference. If two parties are jointly determining the purposes and means of the processing, then the organisation needs to recognise that reality and deal with it properly.

This is not the same as saying that both parties are merely involved in the same broad environment. Joint controllership is not established simply because two organisations are both present or both interested. The question is whether they are together shaping the relevant processing in a sufficiently real way that responsibilities need to be allocated accordingly.

That can arise in partnerships, co-branded initiatives, consortium arrangements, shared programmes, data-enabled campaigns, platform relationships or service designs where both sides materially shape why and how the processing occurs. In those cases, the organisation should not be looking only for processor language. It should be asking whether an Article 26-style joint controller arrangement is needed and whether the essentials of that arrangement are reflected in practice.

That matters because joint controllership affects how responsibilities are allocated, how transparency is addressed and how the organisation explains the relationship to data subjects and to regulators if challenged.

Joint controllership often becomes visible only when the organisation stops asking who is receiving the data and starts asking who is shaping the purpose of the processing.

For a DPO or privacy manager, this is another area where advisory value lies in resisting oversimplification. The relationship may be commercially described as a vendor arrangement, but that does not answer the privacy question. If the other party is participating in the determination of purpose and means, processor language may obscure more than it clarifies.

Why misclassification matters so much in practice

It is easy to make this sound theoretical. It is not. The reason classification matters is that the legal model chosen at the outset shapes everything that follows: what agreement is used, what due diligence is prioritised, what monitoring takes place, what transparency position is taken, what rights-handling assumptions are made, how incidents are escalated, what is reported to management, and what the organisation thinks it can defend if the relationship is later scrutinised.

If a third-party controller is treated as if it were a processor, the organisation may believe it has more control than it does. If a hybrid relationship is reduced to a single processor model, important elements of onward use or independent purpose-setting may go undocumented. If a joint controller scenario is treated as a routine vendor arrangement, accountability may be allocated in a way that bears little resemblance to how the processing is actually designed and operated.

This is one of the reasons vendor oversight often looks stronger than it is. The organisation may genuinely have a contract, a review process and some monitoring in place. But if the underlying legal model is wrong, those controls are operating against the wrong understanding of the relationship.

A vendor arrangement cannot be treated as well governed simply because it is documented. The first question is whether it has been documented on the right legal basis at all.

That is a point worth taking seriously in management and board reporting, because it shifts the conversation from process completion to legal and governance accuracy.

The DPO’s role is to explain what the relationship means, not just what document is missing

A common failure mode for privacy teams is that they identify an issue but report it in a way that is too generic to support good decisions. Saying that a vendor arrangement needs a DPA, or that the supplier should be reviewed, may be technically correct but not especially useful if the deeper issue is misclassification, increasing dependency or structural limits in oversight.

The privacy function is at its most useful when it can explain what type of relationship exists and why that changes the organisation's accountability position. That means being able to distinguish clearly between a processor oversight issue, a controller-to-controller sharing issue, a joint controller issue and a mixed model that needs to be mapped and governed in parts.

That distinction matters because the governance consequences are different. A processor relationship calls for strong instruction-based oversight, monitoring, sub-processor scrutiny and change control. A controller-to-controller arrangement raises different questions around lawful basis, transparency, onward use, retention, rights handling and defensibility of sharing. A joint controllership issue raises questions about allocation of responsibility and how the organisation will explain the arrangement if challenged. A hybrid model requires the organisation to recognise that different legal and operational treatments may apply within the same broader commercial arrangement.

The privacy function adds most value where it does not report “vendor risk” as a single category, but explains what type of data relationship exists and why that changes the organisation’s accountability position.

That is where privacy advice starts to influence decisions rather than simply annotate contracts.

Feeding the issue into the organisation requires different reporting depending on the model

Once the relationship has been characterised properly, the DPO or privacy manager then needs to decide how to feed the issue into the organisation.

This is where a lot of privacy reporting becomes too compressed. Vendor arrangements are grouped together as “third-party risk” or “processor risk” even where the underlying problems are materially different. That reduces the usefulness of the reporting and can make it harder for senior management to understand what decisions or mitigations are actually needed.

A processor issue may need to be reported as a matter of oversight strength, weak auditability, poor change control, insufficient monitoring or material reliance on sub-processors. A controller-to-controller relationship may need to be reported as a question of sharing rationale, transparency exposure, legal defensibility or unclear accountability boundaries. A hybrid relationship may need to be escalated because the organisation has only partially documented the legal structure of the arrangement. A joint controllership issue may need to be framed around allocation gaps, data subject handling responsibilities or strategic governance consequences.

That is where the DPO’s analytical role becomes a governance role. The issue is no longer simply whether a privacy concern exists. The issue is whether the organisation is receiving an accurate description of the type of exposure it has taken on.

A processor issue, a data sharing issue and a joint controllership issue should not appear in governance reporting as if they were the same type of problem. They expose the organisation in different ways and require different responses.

This is especially important where the organisation wants concise reporting. Concision is useful, but not if it collapses the legal and governance distinctions that make the issue intelligible.

DORA and AI make misclassification and weak oversight more serious

The crossovers with DORA and AI become much more tractable once the classification problem is understood.

DORA sharpens the consequences of weak third-party oversight by asking not only whether the relationship is contractually manageable, but whether the organisation has become operationally dependent on a provider in a way that changes the seriousness of any control weakness. A relationship that looks routine under a narrow privacy lens may be much more significant once criticality, substitutability, concentration and resilience are considered.

AI sharpens a different aspect of the problem. Many AI-enabled services are more difficult to characterise neatly because the provider’s role may not fit comfortably within a pure processor model. There may be complex service architectures, layered sub-processing, separate safety or abuse-monitoring functions, telemetry, service-improvement claims, model-related uses or contractual positions that are superficially reassuring but operationally difficult to verify. In that environment, misclassification becomes more likely and the limits of oversight become more important.

In AI-enabled services, the privacy question is often not simply whether the vendor presents risk, but whether the organisation has correctly understood what role the vendor is actually playing in relation to the data.

That point matters because it changes what should be escalated. The issue may not be obvious non-compliance. It may be that the organisation is relying on a legal description of the vendor relationship that is too simplistic for the service it is actually using, or that it is accepting a level of opacity that should be understood and approved at a higher level.

What a mature privacy review of vendor relationships should actually involve

A mature review of vendor relationships should therefore go further than checking whether contractual templates have been completed. The privacy function should be asking what personal data is actually involved, what the service is genuinely doing with it, who is determining the purposes and essential means, whether different parts of the relationship need to be analysed differently, whether the agreements in place reflect that structure, and whether the oversight model matches the legal model that has been chosen.

That may sound obvious, but it is often not what happens in practice. In many organisations, the legal characterisation is inherited from procurement assumptions, vendor paper or previous practice. The privacy team is then asked only to validate the documentation rather than the underlying analysis.

That is not enough where the relationship is material. A stronger approach requires the DPO or privacy manager to test whether the documentation corresponds to the actual flows and the actual balance of decision-making. It also requires the privacy function to identify where the organisation’s practical ability to monitor, challenge or revisit the relationship is weaker than the paperwork suggests.

The useful privacy review is not the one that asks whether there is a DPA on file. It is the one that can explain why the arrangement has been classified as it has, what legal consequences follow from that classification, and where the organisation’s practical ability to oversee the position is limited.

That is the level at which vendor oversight starts to become defensible rather than merely documented.

Why this matters for the DPO or privacy manager

For a DPO or privacy manager, vendor oversight is one of the clearest tests of whether the privacy function is operating at governance level. A narrow review of contractual wording may satisfy a process requirement, but it does not tell the organisation much about whether it has understood the relationship correctly or whether its accountability position is robust.

The real value lies in being able to identify where the organisation has defaulted too quickly to a processor model, where a data sharing arrangement is more appropriate, where joint controllership needs to be recognised, where hybrid structures need more precise documentation, and where the practical ability to oversee the relationship is weaker than the formal paperwork implies.

That is not simply a matter of technical legal precision. It affects how the organisation explains the relationship internally, how it allocates controls, how it handles incidents, what it tells data subjects, what it reports to management, and what it is realistically able to defend if asked to justify the arrangement later.

The DPO’s role is not to make vendor arrangements look tidy. It is to make the organisation’s accountability position intelligible and defensible.

That is why this topic deserves more than a standard processor oversight discussion. The harder and more useful question is not whether the supplier has signed the right clauses, but whether the organisation has correctly understood what kind of relationship it has created and what that means for governance in practice.

Takeaway

A useful next step for any DPO or privacy manager is to look again at vendor relationships that are currently treated as settled. Which of them are genuinely processor arrangements? Which are better understood as controller-to-controller disclosures? Which contain mixed elements that are being collapsed into one legal label for convenience? Which may in fact involve joint determination of purpose and means? And where is the organisation relying on a contractual form that is easier to administer than the relationship is to defend?

The practical challenge is not simply to ensure that vendors are documented. It is to ensure that they are documented on the right legal basis, governed using the right accountability model, and reported internally in a way that reflects the actual nature of the exposure the organisation has chosen to accept.

This article is intended to support the learning covered in Hour 4 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Transfer Impact Assessments in Practice

This article accompanies Hour 2: Cross-Border Transfers in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

What DPOs Should Actually Be Looking For

A Transfer Impact Assessment (or Transfer Risk Assessment – TRA) is the point at which transfer law stops being abstract and becomes a real organisational decision. In theory, the legal position may look simple enough: identify the transfer, identify the transfer tool, and then consider whether additional safeguards are needed. In practice, that is not where most TIAs fail. They usually fail earlier and more quietly.

They fail because the underlying transfer scenario has not been analysed properly. They fail because the organisation does not know enough about the recipient’s real operating model. They fail because the assessment of the foreign jurisdiction is generic rather than specific. And they fail because supplementary measures are described in broad compliance language without asking whether they materially change the exposure.

That is why a TIA matters. A TIA is not just a document to satisfy Schrems II. It is where the organisation has to demonstrate that it understands what is happening, what legal and practical risks arise in the recipient jurisdiction, and why the transfer remains supportable. For DPOs, this is one of the most revealing areas of privacy practice. A strong TIA usually points to stronger governance, better supplier oversight and more mature internal coordination. A weak TIA often points to the opposite.

What a TIA is actually trying to determine

A TIA is often reduced to a single question: “Can we still transfer the data using SCCs?” That question is too narrow. A useful TIA is trying to determine, in sequence:

  • what the transfer scenario actually is;
  • who the importer is, and in what role;
  • what data is involved, and in what form;
  • what the legal and practical position is in the destination jurisdiction;
  • whether public authority access, surveillance powers, redress and oversight could undermine the level of protection expected under GDPR;
  • whether supplementary measures materially reduce that risk;
  • and whether the organisation can genuinely stand over the conclusion it has reached.

That is why the TIA process has to be disciplined. It starts by identifying the country or countries involved and gathering the relevant documentation, including applicable legislation, materials such as DataGuidance resources, the agreements in place and internal checklists, before moving section by section through the template and analysing each part against the evidence provided. The template itself should break the work into the right components: transfer overview, receiving jurisdiction, transfer details, existing safeguards, alternatives, proportionality, law and practice in the recipient country, supplementary measures, probability assessment and approval. That structure is not just administratively tidy. It reflects the underlying legal logic.

This is also consistent with the EDPB’s post-Schrems II approach. The EDPB Recommendations 01/2020 on supplementary measures remain the central official guidance for organisations trying to assess whether an Article 46 transfer tool remains effective in light of the law and practice of the destination country. The EDPB Guidelines 05/2021 on the interplay between Article 3 and Chapter V are equally important because they help determine whether the arrangement is even a restricted transfer under Chapter V in the first place. In practice, that initial classification matters more than many organisations realise. A TIA that begins with the wrong transfer analysis is already weakened before it gets to the foreign-law questions.

Start with the facts: the transfer analysis must be right before the TIA can be right

One of the most common problems in transfer work is that very different scenarios are collapsed into a single generic category called “international transfer.” That may be administratively convenient, but it is analytically weak.

An employee temporarily working from a third country does not necessarily raise the same issues as a third-country contractor engaged to access internal systems. A cloud platform hosted in the EEA is not the same as a connected service that extracts data from that platform and processes it in its own US environment. Occasional remote support access is not the same as routine privileged administrative access. Pseudonymised data used for analytics is not the same as a readable HR or health dataset accessible in clear text. These distinctions matter because they shape:

  • whether Chapter V is engaged;
  • which SCC module is relevant, if SCCs are used;
  • the sensitivity and exposure of the data;
  • the significance of public authority access risk in the destination jurisdiction;
  • and the practical effect of any technical or organisational safeguards.

For DPOs, the practical lesson is straightforward: a TIA should not begin with the transfer tool. It should begin with the transfer scenario. That means identifying:

  • who the exporter is;
  • who the importer is;
  • whether the importer is acting as controller, processor, sub-processor or contractor;
  • whether the data is stored, accessed remotely, downloaded, or transferred onward;
  • whether the data is ordinary personal data, special category data, criminal data, children’s data, or otherwise particularly sensitive;
  • whether the data is intelligible in the destination jurisdiction;
  • and whether the provider’s sub-processing, support or AI functionality changes the picture.

A good template should capture exporter/importer roles, the transfer mechanism, the nature of the transfer, onward transfers, categories of personal data, special-category and criminal data, data subjects, format of the data, method of transfer and existing security measures. This is a strong foundation, because it makes the foreign-jurisdiction analysis service-specific rather than generic.
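
As an illustrative sketch only (the field names are assumptions, not a prescribed schema), the scoping section of such a template could be captured as a structured record, which makes missing facts visible before the jurisdiction analysis begins:

```python
from dataclasses import dataclass, field

@dataclass
class TransferScoping:
    """Illustrative TIA scoping record; field names are assumptions."""
    exporter: str
    importer: str
    importer_role: str            # "controller", "processor", "sub-processor", "contractor"
    transfer_mechanism: str       # e.g. "SCCs Module 2", "adequacy decision"
    nature_of_transfer: str       # "storage", "remote access", "download", "onward transfer"
    data_categories: list[str] = field(default_factory=list)
    special_category: bool = False
    data_intelligible_to_importer: bool = True
    onward_transfers: list[str] = field(default_factory=list)

    def missing_facts(self) -> list[str]:
        """Flag empty fields so the assessment cannot silently skip scoping."""
        gaps = []
        if not self.data_categories:
            gaps.append("categories of personal data not identified")
        if not self.transfer_mechanism:
            gaps.append("transfer mechanism not identified")
        return gaps
```

The value of recording the scoping this way is that a gap (an unidentified sub-processor chain, an unknown access model) surfaces as a named omission rather than disappearing into narrative prose.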

In practice, the weakest TIAs often reflect poor factual scoping rather than poor legal knowledge. Hosting is mapped but support access is not. The primary vendor is known but the sub-processor chain is not. An AI-enabled tool is treated as though it sits safely inside the main platform’s environment, even though it extracts and processes data through separate infrastructure. The result is that the TIA looks complete but is addressing the wrong transfer.

When assessing your practices internally, review whether your scoping process distinguishes between:

  • storage and remote access;
  • employees, contractors and service providers;
  • primary vendors and sub-processors;
  • occasional and routine access;
  • EEA-hosted platforms and connected third-country tools;
  • readable, pseudonymised and encrypted data.

The quality of a TIA depends on the quality of the underlying transfer analysis. If the organisation has not correctly identified the parties, access model, data exposure and onward transfer chain, the assessment will be weaker than it appears.

Who should be involved in a TIA?

A TIA should never be treated as a privacy-only paperwork exercise. It is a cross-functional assessment, and that matters because no single function usually holds all the facts. A defensible TIA should be:

  • legally informed;
  • technically grounded;
  • operationally accurate;
  • and owned by the business as well as privacy.

The DPO or privacy lead should normally coordinate the assessment. That means framing the questions, testing assumptions, identifying gaps, and ensuring the final reasoning is coherent and evidence-based. But the DPO should not be left trying to infer system architecture, key management, support access patterns or sub-processor chains without support. Legal should be involved to assess:

  • whether the transfer tool is appropriate;
  • whether the importer role is correctly understood;
  • how SCCs or other Article 46 mechanisms are being used;
  • and whether the foreign-jurisdiction analysis raises legal issues that need escalation.

IT, architecture or security teams are often essential because the foreign-law risk only becomes meaningful when matched to technical facts. If the provider cannot access intelligible data, the analysis may look different than if provider personnel can access clear-text customer content in the course of support or service delivery. That means technical teams need to clarify:

  • where the data is hosted;
  • where it is processed;
  • who can access it;
  • how encryption works;
  • who controls the keys;
  • whether pseudonymisation is meaningful;
  • and how support or privileged access operates in practice.

The relevant business or system owner also matters. A TIA is not just about whether a transfer is possible; it is also about whether the transfer is necessary, whether alternatives exist, and whether the organisation has become dependent on the arrangement in a way that raises wider governance concerns.

Procurement or vendor management is often essential because:

  • they hold the contractual documentation;
  • they can obtain sub-processor lists, trust-centre materials and service descriptions;
  • and they know when renewals, change events and leverage points arise.

Risk, compliance or resilience functions may also need to be involved where the provider is strategically important or where the transfer intersects with broader third-party oversight. In regulated settings, particularly financial services, the same provider relationship may matter at once for privacy, outsourcing, operational resilience and dependency management.

AI governance or product/data governance teams should also be involved where AI-enabled tools are in scope, because the data-flow and control issues are often more opaque and more dynamic than in ordinary SaaS arrangements.

Weak TIAs often reflect fragmented ownership. Privacy has the template, legal has the contract, IT has a partial understanding of hosting, and procurement holds vendor papers, but no one assembles the picture properly. The result is that the final document is smoother than the underlying analysis.

In assessing your practice, make sure your TIA process identifies:

  • who owns scoping;
  • who confirms the technical facts;
  • who assesses the legal mechanism and foreign-law issues;
  • who validates the necessity of the transfer;
  • who reviews sub-processor and contract materials;
  • and who can approve or escalate if the assessment reveals unresolved risk.

A credible TIA is cross-functional. It should combine privacy, legal, technical, supplier and business inputs rather than being treated as a privacy-only exercise.

The foreign jurisdiction assessment: where the real analysis happens

This is the part of the TIA most likely to draw criticism if it is weak, and the part most likely to make the assessment genuinely meaningful if it is done properly. A poor jurisdiction assessment often asks one shallow question:

“Does this country have a data protection law?”

A stronger jurisdiction assessment asks the right question:

“In light of this particular transfer scenario, can the legal environment of the destination country undermine the level of protection expected under GDPR?”

That distinction matters.

A country may have a modern privacy statute and an active regulator, but still allow forms of state access, surveillance or national-security processing that are relevant to the transfer in question. Equally, the existence of public-authority access powers does not automatically make the transfer unsupportable. The issue is whether those powers, in context, materially affect the ability of the transfer tool to provide an essentially equivalent level of protection.

That is why a strong TIA needs to assess both the general legal environment, and the practical relevance of that environment to the transfer at hand.

A good template addresses public authority access, legal basis, necessity and proportionality, safeguards against excessive access, and case studies or precedents. It should further address the wider legal environment of the recipient country, including dedicated data protection law, rights, regulator independence, judicial remedies, public authority access, surveillance programmes, and limitations and oversight. One part looks at state access and proportionality directly; the other assesses the wider data protection framework of the country.

What sources should inform the jurisdiction assessment?

This is one of the clearest areas where internal AI tooling can improve quality if designed properly. A TIA companion should not allow users to “wing” the foreign-law analysis based on memory or a single source. The sources should usually be layered.

At the top should be the official guidance:

  • EDPB Recommendations 01/2020 on supplementary measures;
  • EDPB Guidelines 05/2021 on whether Chapter V applies in the first place;
  • relevant European Commission materials on adequacy and SCCs;
  • and relevant Irish DPC or other regulator guidance or conference materials on international transfers and SCCs.

Supporting those should be:

  • country-law research tools such as DataGuidance;
  • vendor-supplied materials, including DPAs, SCCs, trust-centre information, government-request statements and sub-processor lists;
  • and any internal legal or compliance commentary developed for the organisation.

The role of a tool like DataGuidance is important here. It is a research aid, not a final legal conclusion. It is useful for assembling a jurisdiction profile, identifying relevant legal themes and orienting the assessor to the local framework. But it should not replace a real analysis of how public authority access, redress, oversight and practical enforcement interact with the service in question.

What should the jurisdiction assessment actually test?

A strong assessment should address, at minimum:

  • whether the country has a dedicated data protection law;
  • whether individuals have enforceable data protection rights;
  • whether there is an independent supervisory authority or regulator;
  • whether meaningful judicial redress is available;
  • what public authority access powers exist;
  • whether surveillance or intelligence powers are broad, targeted, supervised, challengeable or secretive;
  • whether access is subject to necessity, proportionality and oversight;
  • and whether there is relevant history or case law indicating how those powers are used in practice.

The key is to avoid generic analysis. The question is not merely whether a surveillance framework exists in the abstract. The question is whether, in light of the actual transfer scenario, the combination of the country’s legal environment and the recipient’s access to the data undermines the level of protection expected. That is why the facts gathered earlier matter so much. A destination country analysis looks very different depending on whether:

  • the importer can access full readable HR records;
  • the importer only receives encrypted backups;
  • the service provider never holds the decryption keys;
  • or the tool is an AI-enabled platform that processes readable content and may involve several third-country sub-processors.

The weakest foreign-jurisdiction sections are usually generic and over-compressed. A paragraph states that the country has a data protection law and some regulator activity, briefly notes surveillance laws, and then concludes that the transfer is supportable. That may look balanced, but it often tells the reader very little about whether the actual risks of the transfer have been understood.

So, review whether your jurisdiction assessments:

  • identify the sources used;
  • distinguish between data protection law and state access powers;
  • analyse oversight and redress rather than just listing legal instruments;
  • connect the foreign-law position to the actual service and access model;
  • and make their assumptions visible rather than implicit.

The foreign-jurisdiction assessment is the part of the TIA most likely to reveal real residual risk. It should test not only whether the country has a privacy framework, but whether state access powers, oversight and redress materially affect the transfer in context.

Assessing the probability of unlawful access without creating false precision

The value of a structured probability assessment is that it forces the assessor to identify and weigh the drivers of risk rather than writing in broad, qualitative terms alone. Your template or methodology should reflect this by breaking the analysis into factors such as the legal framework, enforcement practices, surveillance capability and historical precedents, and then asking the user to explain the score reached. This can be very useful, provided the organisation is clear about what the score means and what it does not. A probability score is not an objective truth. It is a structured representation of a judgement based on:

  • the legal environment;
  • the practical features of the service;
  • the type and volume of data involved;
  • whether the data is intelligible to the importer;
  • the strength of safeguards;
  • and the evidence available at the time of the assessment.

That means the score should never stand alone. If a TIA produces a “low likelihood of unlawful access” score but cannot explain, with sources, why that conclusion was reached, the number adds very little. A more defensible approach is to treat probability scoring as an aid to disciplined reasoning. The assessor should be able to show:

  • which factors were considered;
  • what evidence informed each factor;
  • what assumptions were made;
  • and what would cause the score to change.
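
That discipline can be sketched in code: every factor must carry evidence and a rationale before any score is produced. The factor names and the scoring scale below are illustrative assumptions, not a recommended methodology:

```python
from dataclasses import dataclass

@dataclass
class RiskFactor:
    name: str        # e.g. "legal framework", "surveillance capability"
    score: int       # illustrative scale: 1 (low risk) to 5 (high risk)
    evidence: str    # the source relied on, e.g. a country-law note
    rationale: str   # why the evidence supports the score

def probability_assessment(factors: list[RiskFactor]) -> float:
    """Average the factor scores, but refuse to score unevidenced factors."""
    for f in factors:
        if not f.evidence or not f.rationale:
            raise ValueError(
                f"factor '{f.name}' has no supporting evidence or rationale"
            )
    return sum(f.score for f in factors) / len(factors)
```

The design point is the guard clause, not the arithmetic: a score that cannot cite its evidence should fail loudly rather than appear in the final document.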

This is also an area where an internal AI companion can be genuinely helpful if designed carefully. It can prompt the user to upload country-law materials, identify the factors, ask the user to justify each factor with evidence, and then draft the rationale. But it should not be allowed to produce a score with no supporting narrative or no acknowledgement of limits.

Weak scoring exercises look numerical but shallow. They average a handful of factors without showing how those factors relate to the service, the accessibility of the data, or the relevance of the legal environment in context. That gives the impression of rigour without delivering much of it.

If you use a probability methodology, make sure it:

  • identifies the factors clearly;
  • ties them to the actual transfer scenario;
  • documents the evidence and assumptions;
  • and shows what would change the overall assessment.

Probability scoring can support consistency, but it does not replace judgement. The organisation should be able to explain the factors, assumptions and evidence behind any conclusion that the likelihood of unlawful access is low.

Supplementary measures: what actually changes the position

One of the strongest parts of the EDPB’s Recommendations 01/2020 is that they do not treat supplementary measures as abstract compliance decorations. The whole point is whether the measures make the transfer tool effective in context. That is the mindset DPOs need to preserve. The right question is not “Have we listed supplementary measures?” It is “Which measures materially reduce the exposure created by this transfer?”

This is where many TIAs become weaker than they appear. Technical, contractual and organisational measures are all listed, but there is little analysis of whether they actually change the importer’s ability to access the data or the practical significance of the destination country’s legal environment.

Technical measures

Technical measures often matter most, but only where they genuinely reduce exposure. Encryption is a classic example. Encryption in transit and at rest is good baseline practice, but if the provider decrypts the data in its own environment and can access it in readable form, the legal relevance of that encryption may be limited. Key management matters. So does whether the importer holds the keys, and whether public-authority access via the importer remains possible or is genuinely prevented by design.

Pseudonymisation can also be meaningful, but only where the importer cannot realistically re-identify the data subject. If the importer can combine the data with other identifiers or is itself given the key to re-identification, then the measure may add less than the TIA suggests.

Minimisation, segmentation, tokenisation and local pre-processing can all be useful where they materially reduce what is exposed.

Contractual measures

Contractual clauses can support the position, particularly where they:

  • require challenge to overbroad requests;
  • increase transparency around authority access;
  • restrict onward transfers;
  • limit use of the data;
  • and support audit or notice rights.

But contractual promises do not usually neutralise a foreign-law issue on their own, particularly where the provider can still access the data in clear text.

Organisational measures

Organisational controls, such as internal access approvals, support restrictions, logging, escalation routes, and governance around sensitive data inputs, can be important, especially where they reduce frequency and scope of transfer or restrict who can trigger high-risk processing. They matter most when tied to actual process rather than simply listed as good governance principles.

The key to all of this is service-specific analysis. A measure is valuable only if it changes the actual position.

The most common weakness here is that “supplementary measures” are treated as a checklist. Encryption is mentioned, policies are mentioned, contractual clauses are mentioned, and the TIA moves on. But if the provider can still view the data, if the AI service still retains readable content, or if support staff still have access in practice, the analysis is not yet complete.

Review whether your TIA explains:

  • whether the importer can access the data in readable form;
  • who controls decryption or re-identification;
  • whether the measure changes the risk from public-authority access or only improves general security hygiene;
  • and whether the supplementary measures are genuinely linked to the risks identified in the jurisdiction assessment.

Supplementary measures are effective only if they materially reduce the real exposure. The organisation should be able to explain how technical, contractual and organisational controls change the transfer risk in practice rather than merely documenting that they exist.
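
The point that a measure only counts if it changes the actual position can be expressed as a simple check. This is a sketch under stated assumptions (the four inputs are illustrative, not an exhaustive test): encryption listed in a TIA reduces transfer risk only if the importer can neither decrypt nor view the data.

```python
def encryption_reduces_transfer_risk(
    encrypted_at_rest: bool,
    encrypted_in_transit: bool,
    importer_holds_keys: bool,
    importer_sees_cleartext: bool,
) -> bool:
    """Illustrative check: encryption only changes the public-authority
    access position when the importer can neither decrypt the data nor
    access it in readable form."""
    baseline = encrypted_at_rest and encrypted_in_transit
    return baseline and not importer_holds_keys and not importer_sees_cleartext
```

Under this sketch, a service that encrypts everything but decrypts it in its own environment for support purposes fails the check, which mirrors the analysis above: the measure improves security hygiene without changing the transfer exposure.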

AI and complex tooling: why the TIA needs stronger evidence, not softer assumptions

AI-enabled services often need stronger TIAs than ordinary SaaS tools, not weaker ones. The reason is straightforward. The processing chain is usually less transparent, the sub-processor landscape may be broader, the distinction between core functionality and underlying model/infrastructure is harder to see, and the organisation may have less visibility over retention, support access and onward processing than it assumes.

For example, consider a scenario where a meeting-assistant tool is introduced into your Microsoft stack. The service might sit outside Microsoft’s compliance perimeter and process recordings through US-based infrastructure, raising not only transfer issues but wider concerns around special-category exposure, transparency, cybersecurity, retention and sub-processing through providers such as AWS, GCP, OpenAI and Anthropic. This can happen even where the core M365 environment is configured within an EU boundary: a connected tool can extract meeting content and process it through its own infrastructure, bypassing that perimeter. That is precisely the kind of fact pattern a TIA must surface.

In an AI context, the transfer analysis needs to ask:

  • does the AI tool extract or replicate personal data from another environment?
  • where are the inference, storage, support and analytics functions actually located?
  • what sub-processors or underlying providers are involved?
  • can the provider’s personnel access readable content?
  • is data retained for troubleshooting, analytics or model improvement?
  • do the provider’s public assurances actually align with the way the service works?

A good TIA for an AI-enabled service is therefore not just about where the data goes. It is also about whether the organisation retains meaningful visibility and control once the data enters that environment.

A recurring weakness is governance lag. The organisation approves an AI-enabled feature because it is commercially useful, then tries to retrofit a privacy assessment around whatever documents the vendor is willing to provide. That often produces high-level assurances rather than a grounded understanding of the service.

Make sure AI-related TIAs:

  • are specific to the AI functionality, not just the core platform;
  • identify the actual processing chain and sub-processors;
  • address retention, reuse and support access explicitly;
  • and are revisited when the service model changes.

AI-enabled services often require a more rigorous TIA, not a lighter one. Their value may be clear, but the transfer assessment should reflect opaque processing chains, broader sub-processing and reduced visibility over data handling.

Using AI to support TIAs: what good looks like in Copilot or a custom GPT

A TIA companion can be genuinely helpful, but only if it is designed to improve the assessment rather than flatten it into polished prose. The value of a TIA AI assistant is not that it drafts faster. It is that it can structure the process, force evidence gathering, separate issues properly, and surface where the analysis is weak.

A well-designed tool instructs the user to begin with the country or countries involved, to upload relevant documentation such as DataGuidance notes, agreements and checklists, and then to step through the TIA section by section rather than attempting to draft the whole assessment in one pass. It should also anticipate the need for a DPO review checklist at the end of the process.

What the tool should do

Whatever the format, whether built in Copilot or as a custom GPT, the assistant should:

  • begin with jurisdiction identification;
  • require the user to upload source materials;
  • distinguish transfer scoping from country-law analysis;
  • force the user to identify missing evidence;
  • and produce both draft wording and a reviewer issues list.

A good AI companion should also slow the user down in the right places. In particular, it should not allow the assessor to draft a conclusion before the foreign-jurisdiction module is complete.
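
That “slow the user down” principle can be sketched as an explicit gate in the assistant’s workflow (the module names here are assumptions for illustration): the conclusion step simply refuses to run until the foreign-jurisdiction module is marked complete.

```python
class TIAWorkflow:
    """Illustrative workflow gate: conclusion drafting is blocked until
    the foreign-jurisdiction module has been completed."""

    def __init__(self) -> None:
        self.completed: set[str] = set()

    def complete(self, module: str) -> None:
        """Record that a module (e.g. 'scoping', 'foreign_jurisdiction') is done."""
        self.completed.add(module)

    def draft_conclusion(self) -> str:
        if "foreign_jurisdiction" not in self.completed:
            raise RuntimeError(
                "foreign-jurisdiction module incomplete: conclusion not yet available"
            )
        return "draft conclusion unlocked"
```

The design choice is that the block is structural rather than advisory: a prompt can be ignored, but a hard gate forces the assessor back to the evidence.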

What the foreign-jurisdiction module should do

This is the most important part of the tool. A good module should:

  • ask which official and secondary sources are being used;
  • require the user to identify data protection law, authority access powers, oversight, redress and proportionality;
  • compare vendor claims with country-law realities;
  • ask whether the data is readable to the importer;
  • and require explicit rationale before suggesting any probability score.

In other words, the tool should not just summarise the uploaded materials. It should test them against each other and identify where the evidence is thin or conflicting.

What the tool should not do

A poor TIA assistant will:

  • jump quickly to narrative drafting;
  • assume a country is “low risk” based on one source;
  • treat documentation from legal research and compilation sources as a final answer rather than a research tool;
  • rely on vendor statements without challenge;
  • or generate approval language where the evidence is incomplete.

That is not a TIA companion. It is a drafting shortcut. The greatest risk with internal AI assistance is that it can make weak analysis look more professional. That is particularly dangerous in TIAs because the document may then appear complete and well reasoned when, in substance, the jurisdiction assessment is underdeveloped.

If you are building an AI assistant for TIAs, design it, at a minimum, to:

  • start with the jurisdiction;
  • require source material;
  • force service-specific questions;
  • separate narrative drafting from unresolved issues;
  • and produce a DPO review checklist alongside the draft text.

AI assistance can improve consistency in TIAs, but only if the tool is designed to force evidence, challenge assumptions and surface unresolved issues rather than simply producing polished narrative.

What should trigger escalation or refusal?

A defensible TIA process should not assume that every issue can be solved by better drafting. Some issues should trigger escalation, delay or refusal. Examples include:

  • no clear answer on where the data is processed;
  • no visibility over sub-processors;
  • provider access to intelligible special-category or otherwise highly sensitive data;
  • unsupported or weak jurisdiction analysis;
  • inability to explain encryption, key control or re-identification risk;
  • AI-enabled services with unclear retention, reuse or support models;
  • or a strategically important vendor relationship where the organisation has become dependent without understanding the real transfer exposure.
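
Escalation criteria like these can be made operational as explicit rules rather than judgement left to the drafter. The sketch below assumes a simple dictionary of findings (the keys are illustrative, not a prescribed schema):

```python
def escalation_triggers(tia: dict) -> list[str]:
    """Return escalation reasons raised by an illustrative TIA record.
    Keys are assumptions for this sketch, not a prescribed schema."""
    triggers = []
    if not tia.get("processing_locations_known"):
        triggers.append("no clear answer on where the data is processed")
    if not tia.get("sub_processors_visible"):
        triggers.append("no visibility over sub-processors")
    if tia.get("intelligible_special_category_access"):
        triggers.append("provider access to intelligible sensitive data")
    if not tia.get("jurisdiction_analysis_evidenced"):
        triggers.append("unsupported or weak jurisdiction analysis")
    return triggers
```

A non-empty result would route the assessment to whoever owns escalation, rather than letting the draft proceed to approval.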

This is especially important for DPOs. The point of a TIA is not simply to complete the document. It is to identify when the organisation is being asked to accept a risk position it cannot yet justify. Some of the weakest outcomes arise where the commercial decision is already fixed and the TIA is treated as a formality to be completed after the fact. That is where unresolved issues tend to be reframed as drafting issues rather than governance issues.

In your process, create escalation criteria for:

  • unclear jurisdiction risk;
  • poor provider transparency;
  • intelligible access to sensitive data;
  • unresolved AI processing questions;
  • and situations where the transfer is operationally important but poorly understood.

Certain TIA findings should be treated as escalation points rather than drafting problems. These include weak visibility over provider architecture, unsupported jurisdiction analysis, intelligible access to sensitive data and safeguards that do not materially reduce risk.

Finally

A strong TIA is not valuable because it produces a completed template. It is valuable because it shows whether the organisation can support a transfer with evidence, judgement and visible governance. That is what makes the foreign-jurisdiction assessment so important. It is the point at which the organisation must move from generic comfort to real analysis. It must show that it understands not only the transfer mechanism, but the legal and practical environment into which the data is moving and whether the safeguards in place actually change the position.

For DPOs, this is one of the clearest indicators of programme maturity. If the organisation can identify the transfer correctly, involve the right parties, assess the foreign jurisdiction properly, test the practical value of supplementary measures, and document the conclusion in a disciplined way, it is much more likely to be operating a privacy programme that can withstand criticism.

That is the real value of a TIA. It does not just measure legal awareness. It measures whether governance is actually working.

This article is intended to support the learning covered in Hour 2 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.
