This article accompanies Hour 2: Cross-Border Transfers in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.
A Transfer Impact Assessment (TIA), also called a Transfer Risk Assessment (TRA), is the point at which transfer law stops being abstract and becomes a real organisational decision. In theory, the legal position may look simple enough: identify the transfer, identify the transfer tool, and then consider whether additional safeguards are needed. In practice, that is not where most TIAs fail. They usually fail earlier and more quietly.
They fail because the underlying transfer scenario has not been analysed properly. They fail because the organisation does not know enough about the recipient’s real operating model. They fail because the assessment of the foreign jurisdiction is generic rather than specific. And they fail because supplementary measures are described in broad compliance language without asking whether they materially change the exposure.
That is why a TIA matters. A TIA is not just a document to satisfy Schrems II. It is where the organisation has to demonstrate that it understands what is happening, what legal and practical risks arise in the recipient jurisdiction, and why the transfer remains supportable. For DPOs, this is one of the most revealing areas of privacy practice. A strong TIA usually points to stronger governance, better supplier oversight and more mature internal coordination. A weak TIA often points to the opposite.
A TIA is often reduced to a single question: “Can we still transfer the data using SCCs?” That question is too narrow. A useful TIA is trying to determine, in sequence: whether the arrangement is a restricted transfer under Chapter V at all; which transfer tool applies; whether the law and practice of the destination country undermine the effectiveness of that tool; and whether supplementary measures can restore an essentially equivalent level of protection.
That is why the TIA process has to be disciplined. It starts by identifying the country or countries involved and gathering the relevant documentation, including applicable legislation, research materials such as DataGuidance profiles, agreements and internal checklists, before moving section by section through the template and analysing each part against the evidence provided. The template itself should break the work into the right components: transfer overview, receiving jurisdiction, transfer details, existing safeguards, alternatives, proportionality, law and practice in the recipient country, supplementary measures, probability assessment and approval. That structure is not just administratively tidy. It reflects the underlying legal logic.
This is also consistent with the EDPB’s post-Schrems II approach. The EDPB Recommendations 01/2020 on supplementary measures remain the central official guidance for organisations trying to assess whether an Article 46 transfer tool remains effective in light of the law and practice of the destination country. The EDPB Guidelines 05/2021 on the interplay between Article 3 and Chapter V are equally important because they help determine whether the arrangement is even a restricted transfer under Chapter V in the first place. In practice, that initial classification matters more than many organisations realise. A TIA that begins with the wrong transfer analysis is already weakened before it gets to the foreign-law questions.
One of the most common problems in transfer work is that very different scenarios are collapsed into a single generic category called “international transfer.” That may be administratively convenient, but it is analytically weak.
An employee temporarily working from a third country does not necessarily raise the same issues as a third-country contractor engaged to access internal systems. A cloud platform hosted in the EEA is not the same as a connected service that extracts data from that platform and processes it in its own US environment. Occasional remote support access is not the same as routine privileged administrative access. Pseudonymised data used for analytics is not the same as a readable HR or health dataset accessible in clear text. These distinctions matter because they shape:
For DPOs, the practical lesson is straightforward: a TIA should not begin with the transfer tool. It should begin with the transfer scenario. That means identifying:
A good template should capture exporter/importer roles, the transfer mechanism, the nature of the transfer, onward transfers, categories of personal data, special-category and criminal data, data subjects, format of the data, method of transfer and existing security measures. This is a strong foundation, because it makes the foreign-jurisdiction analysis service-specific rather than generic.
In practice, the weakest TIAs often reflect poor factual scoping rather than poor legal knowledge. Hosting is mapped but support access is not. The primary vendor is known but the sub-processor chain is not. An AI-enabled tool is treated as though it sits safely inside the main platform’s environment, even though it extracts and processes data through separate infrastructure. The result is that the TIA looks complete but is addressing the wrong transfer.
When assessing your practices internally, review whether your scoping process distinguishes between:
The quality of a TIA depends on the quality of the underlying transfer analysis. If the organisation has not correctly identified the parties, access model, data exposure and onward transfer chain, the assessment will be weaker than it appears.
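The scoping elements described above can be made concrete as a structured record, so that factual gaps surface before any legal analysis begins. The following is a minimal sketch in Python; the field names are illustrative, not a prescribed schema, and should be adapted to your own template.

```python
from dataclasses import dataclass, field

@dataclass
class TransferScenario:
    """Factual scoping record for one transfer under assessment.

    Field names are illustrative placeholders, not a mandated schema.
    """
    exporter: str
    importer: str
    importer_role: str              # e.g. "processor", "sub-processor"
    transfer_tool: str              # e.g. "SCCs", "BCRs", "adequacy"
    destination_countries: list[str] = field(default_factory=list)
    data_categories: list[str] = field(default_factory=list)
    special_category_data: bool = False
    access_model: str = ""          # e.g. "routine privileged admin access"
    onward_transfers: list[str] = field(default_factory=list)
    data_intelligible_to_importer: bool = True  # can importer read clear text?

    def scoping_gaps(self) -> list[str]:
        """Return the factual gaps that would weaken the TIA."""
        gaps = []
        if not self.destination_countries:
            gaps.append("destination country not identified")
        if not self.access_model:
            gaps.append("importer access model not described")
        if not self.data_categories:
            gaps.append("data categories not mapped")
        return gaps
```

A record like this makes it easy to refuse to start the foreign-jurisdiction analysis while `scoping_gaps()` still returns anything, which is exactly the discipline the weakest TIAs lack.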
A TIA should never be treated as a privacy-only paperwork exercise. It is a cross-functional assessment, and that matters because no single function usually holds all the facts. A defensible TIA should be:
The DPO or privacy lead should normally coordinate the assessment. That means framing the questions, testing assumptions, identifying gaps, and ensuring the final reasoning is coherent and evidence-based. But the DPO should not be left trying to infer system architecture, key management, support access patterns or sub-processor chains without support. Legal should be involved to assess:
IT, architecture or security teams are often essential because the foreign-law risk only becomes meaningful when matched to technical facts. If the provider cannot access intelligible data, the analysis may look very different from one in which provider personnel can access clear-text customer content in the course of support or service delivery. That means technical teams need to clarify:
The relevant business or system owner also matters. A TIA is not just about whether a transfer is possible; it is also about whether the transfer is necessary, whether alternatives exist, and whether the organisation has become dependent on the arrangement in a way that raises wider governance concerns.
Procurement or vendor management is often essential because:
Risk, compliance or resilience functions may also need to be involved where the provider is strategically important or where the transfer intersects with broader third-party oversight. In regulated settings, particularly financial services, the same provider relationship may matter at once for privacy, outsourcing, operational resilience and dependency management.
AI governance or product/data governance teams should also be involved where AI-enabled tools are in scope, because the data-flow and control issues are often more opaque and more dynamic than in ordinary SaaS arrangements.
Weak TIAs often reflect fragmented ownership. Privacy has the template, legal has the contract, IT has a partial understanding of hosting, and procurement holds vendor papers, but no one assembles the picture properly. The result is that the final document is smoother than the underlying analysis.
In assessing your practice, make sure your TIA process identifies:
A credible TIA is cross-functional. It should combine privacy, legal, technical, supplier and business inputs rather than being treated as a privacy-only exercise.
This is the part of the TIA most likely to draw criticism if it is weak, and the part most likely to make the assessment genuinely meaningful if it is done properly. A poor jurisdiction assessment often asks one shallow question:
“Does this country have a data protection law?”
A stronger jurisdiction assessment asks the right question:
“In light of this particular transfer scenario, can the legal environment of the destination country undermine the level of protection expected under GDPR?”
That distinction matters.
A country may have a modern privacy statute and an active regulator, but still allow forms of state access, surveillance or national-security processing that are relevant to the transfer in question. Equally, the existence of public-authority access powers does not automatically make the transfer unsupportable. The issue is whether those powers, in context, materially affect the ability of the transfer tool to provide an essentially equivalent level of protection.
That is why a strong TIA needs to assess both the general legal environment, and the practical relevance of that environment to the transfer at hand.
A good template addresses public authority access, legal basis, necessity and proportionality, safeguards against excessive access, and case studies or precedents. It should further address the wider legal environment of the recipient country, including dedicated data protection law, rights, regulator independence, judicial remedies, public authority access, surveillance programmes, and limitations and oversight. One part looks at state access and proportionality directly; the other assesses the wider data protection framework of the country.
This is one of the clearest areas where internal AI tooling can improve quality if designed properly. A TIA companion should not allow users to “wing” the foreign-law analysis based on memory or a single source. The sources should usually be layered.
At the top should be the official guidance:
Supporting those should be:
The role of a tool like DataGuidance is important here. It is a research aid, not a final legal conclusion. It is useful for assembling a jurisdiction profile, identifying relevant legal themes and orienting the assessor to the local framework. But it should not replace a real analysis of how public authority access, redress, oversight and practical enforcement interact with the service in question.
A strong assessment should address, at minimum:
The key is to avoid generic analysis. The question is not merely whether a surveillance framework exists in the abstract. The question is whether, in light of the actual transfer scenario, the combination of the country’s legal environment and the recipient’s access to the data undermines the level of protection expected. That is why the facts gathered earlier matter so much. A destination country analysis looks very different depending on whether:
The weakest foreign-jurisdiction sections are usually generic and over-compressed. A paragraph states that the country has a data protection law and some regulator activity, briefly notes surveillance laws, and then concludes that the transfer is supportable. That may look balanced, but it often tells the reader very little about whether the actual risks of the transfer have been understood.
So, review whether your jurisdiction assessments:
The foreign-jurisdiction assessment is the part of the TIA most likely to reveal real residual risk. It should test not only whether the country has a privacy framework, but whether state access powers, oversight and redress materially affect the transfer in context.
The value of a structured probability assessment is that it forces the assessor to identify and weigh the drivers of risk rather than writing in broad, qualitative terms alone. Your template or methodology should reflect this by breaking the analysis into factors such as the legal framework, enforcement practices, surveillance capability and historical precedents, and then asking the user to explain the score reached. This can be very useful, provided the organisation is clear about what the score means and what it does not. A probability score is not an objective truth. It is a structured representation of a judgement based on:
That means the score should never stand alone. If a TIA produces a “low likelihood of unlawful access” score but cannot explain, with sources, why that conclusion was reached, the number adds very little. A more defensible approach is to treat probability scoring as an aid to disciplined reasoning. The assessor should be able to show:
This is also an area where an internal AI companion can be genuinely helpful if designed carefully. It can prompt the user to upload country-law materials, identify the factors, ask the user to justify each factor with evidence, and then draft the rationale. But it should not be allowed to produce a score with no supporting narrative or no acknowledgement of limits.
Weak scoring exercises look numerical but shallow. They average a handful of factors without showing how those factors relate to the service, the accessibility of the data, or the relevance of the legal environment in context. That gives the impression of rigour without delivering much of it.
If you use a probability methodology, make sure it:
Probability scoring can support consistency, but it does not replace judgement. The organisation should be able to explain the factors, assumptions and evidence behind any conclusion that the likelihood of unlawful access is low.
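The scoring discipline described above can be sketched as a function that refuses to produce a number unless every factor carries a written rationale. The factor names, the 1–5 scale and the equal weighting below are assumptions for illustration only, not a recommended methodology.

```python
# Hypothetical factor names mirroring those discussed in the article;
# the 1-5 scale and equal weighting are illustrative placeholders.
FACTORS = ["legal_framework", "enforcement_practice",
           "surveillance_capability", "historical_precedent"]

def probability_score(ratings: dict[str, int],
                      rationales: dict[str, str]) -> tuple[float, list[str]]:
    """Average the factor ratings, but refuse to score any factor
    that has no written, evidence-based rationale attached to it."""
    missing = [f for f in FACTORS if not rationales.get(f, "").strip()]
    if missing:
        raise ValueError("no rationale recorded for: " + ", ".join(missing))
    score = sum(ratings[f] for f in FACTORS) / len(FACTORS)
    narrative = [f"{f}: {ratings[f]}/5 - {rationales[f]}" for f in FACTORS]
    return score, narrative
```

The point of the guard clause is the article’s point: a score with no supporting narrative adds very little, so the tooling should simply not emit one.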
One of the strongest parts of the EDPB’s Recommendations 01/2020 is that they do not treat supplementary measures as abstract compliance decorations. The whole point is whether the measures make the transfer tool effective in context. That is the mindset DPOs need to preserve. The right question is not “Have we listed supplementary measures?” It is “Which measures materially reduce the exposure created by this transfer?” This is where many TIAs become weaker than they appear. Technical, contractual and organisational measures are all listed, but there is little analysis of whether they actually change the importer’s ability to access the data or the practical significance of the destination country’s legal environment.
Technical measures often matter most, but only where they genuinely reduce exposure. Encryption is a classic example. Encryption in transit and at rest is good baseline practice, but if the provider decrypts the data in its own environment and can access it in readable form, the legal relevance of that encryption may be limited. Key management matters. So does whether the importer holds the keys. So does whether the relevant risk is public-authority access obtained through the importer, or access that is prevented by design.
Pseudonymisation can also be meaningful, but only where the importer cannot realistically re-identify the data subject. If the importer can combine the data with other identifiers or is itself given the key to re-identification, then the measure may add less than the TIA suggests.
Minimisation, segmentation, tokenisation and local pre-processing can all be useful where they materially reduce what is exposed.
Contractual clauses can support the position, particularly where they:
But contractual promises do not usually neutralise a foreign-law issue on their own, particularly where the provider can still access the data in clear text.
Organisational controls, such as internal access approvals, support restrictions, logging, escalation routes, and governance around sensitive data inputs, can be important, especially where they reduce frequency and scope of transfer or restrict who can trigger high-risk processing. They matter most when tied to actual process rather than simply listed as good governance principles.
The key to all of this is service-specific analysis. A measure is valuable only if it changes the actual position.
The most common weakness here is that “supplementary measures” are treated as a checklist. Encryption is mentioned, policies are mentioned, contractual clauses are mentioned, and the TIA moves on. But if the provider can still view the data, if the AI service still retains readable content, or if support staff still have access in practice, the analysis is not yet complete.
Review whether your TIA explains:
Supplementary measures are effective only if they materially reduce the real exposure. The organisation should be able to explain how technical, contractual and organisational controls change the transfer risk in practice rather than merely documenting that they exist.
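The encryption reasoning above reduces to a crude screening question: the measure only “counts” if the importer cannot obtain the data in intelligible form. A hedged sketch, with hypothetical parameter names rather than any standard test:

```python
def encryption_reduces_exposure(encrypted_in_transit: bool,
                                encrypted_at_rest: bool,
                                importer_holds_keys: bool,
                                importer_decrypts_to_process: bool) -> bool:
    """Screening check only: encryption materially reduces exposure
    just where the importer cannot obtain intelligible data."""
    baseline = encrypted_in_transit and encrypted_at_rest
    importer_can_read = importer_holds_keys or importer_decrypts_to_process
    return baseline and not importer_can_read
```

A `False` result does not mean the transfer fails; it means the TIA cannot rely on encryption as the measure that changes the position, and must look elsewhere.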
AI-enabled services often need stronger TIAs than ordinary SaaS tools, not weaker ones. The reason is straightforward. The processing chain is usually less transparent, the sub-processor landscape may be broader, the distinction between core functionality and underlying model/infrastructure is harder to see, and the organisation may have less visibility over retention, support access and onward processing than it assumes.
For example, consider a scenario in which a meeting-assistance tool is introduced into your Microsoft stack. The service might sit outside Microsoft’s compliance perimeter and process recordings through US-based infrastructure, raising not only transfer issues but wider concerns around special-category exposure, transparency, cybersecurity, retention and sub-processing through providers such as AWS, GCP, OpenAI and Anthropic. This can happen even where the core M365 environment is configured within an EU boundary: a connected tool can extract meeting content and process it through its own infrastructure, bypassing that perimeter. That is precisely the kind of fact pattern a TIA must surface.
In an AI context, the transfer analysis needs to ask:
A good TIA for an AI-enabled service is therefore not just about where the data goes. It is also about whether the organisation retains meaningful visibility and control once the data enters that environment.
A recurring weakness is governance lag. The organisation approves an AI-enabled feature because it is commercially useful, then tries to retrofit a privacy assessment around whatever documents the vendor is willing to provide. That often produces high-level assurances rather than a grounded understanding of the service.
Make sure AI-related TIAs:
AI-enabled services often require a more rigorous TIA, not a lighter one. Their value may be clear, but the transfer assessment should reflect opaque processing chains, broader sub-processing and reduced visibility over data handling.
A TIA companion can be genuinely helpful, but only if it is designed to improve the assessment rather than flatten it into polished prose. The value of a TIA AI assistant is not that it drafts faster. It is that it can structure the process, force evidence gathering, separate issues properly, and surface where the analysis is weak.
A well-designed tool instructs the user to begin with the country or countries involved, to upload relevant documentation such as DataGuidance notes, agreements and checklists, and then to step through the TIA section by section rather than attempting to draft the whole thing in one pass. It should also anticipate the need for a DPO review checklist at the end of the process.
Whatever the format, whether built in Copilot or as a custom GPT, the assistant should:
A good AI companion should also slow the user down in the right places. In particular, it should not allow the assessor to draft a conclusion before the foreign-jurisdiction module is complete.
This is the most important part of the tool. A good module should:
In other words, the tool should not just summarise the uploaded materials. It should test them against each other and identify where the evidence is thin or conflicting.
A poor TIA assistant will:
That is not a TIA companion. It is a drafting shortcut. The greatest risk with internal AI assistance is that it can make weak analysis look more professional. That is particularly dangerous in TIAs because the document may then appear complete and well reasoned when, in substance, the jurisdiction assessment is underdeveloped.
If you are building an AI assistant for TIAs, design it to, at the very least:
AI assistance can improve consistency in TIAs, but only if the tool is designed to force evidence, challenge assumptions and surface unresolved issues rather than simply producing polished narrative.
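The gating idea described above can be sketched as a small workflow guard. The module names below loosely track the template components described earlier in this article, and the single ordering rule shown (no conclusion before the foreign-jurisdiction module is evidenced) is illustrative rather than a prescribed workflow engine.

```python
# Module names loosely follow the template components discussed in the article.
MODULES = ["transfer_overview", "receiving_jurisdiction", "transfer_details",
           "existing_safeguards", "foreign_law_analysis",
           "supplementary_measures", "probability_assessment", "conclusion"]

class TIAWorkflow:
    """Refuse to record a conclusion until the foreign-jurisdiction module
    is complete, and refuse any module completed without evidence."""

    def __init__(self) -> None:
        self.completed: set[str] = set()

    def complete(self, module: str, evidence: list[str]) -> None:
        if module not in MODULES:
            raise ValueError(f"unknown module: {module}")
        if module == "conclusion" and "foreign_law_analysis" not in self.completed:
            raise RuntimeError("conclusion blocked: foreign-jurisdiction "
                               "module not yet complete")
        if not evidence:
            raise RuntimeError(f"{module}: no supporting evidence supplied")
        self.completed.add(module)
```

The same pattern extends naturally to the other escalation points discussed below: any module that cannot cite evidence simply stays incomplete, which keeps unresolved issues visible instead of letting polished drafting paper over them.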
A defensible TIA process should not assume that every issue can be solved by better drafting. Some issues should trigger escalation, delay or refusal. Examples include:
This is especially important for DPOs. The point of a TIA is not simply to complete the document. It is to identify when the organisation is being asked to accept a risk position it cannot yet justify. Some of the weakest outcomes arise where the commercial decision is already fixed and the TIA is treated as a formality to be completed after the fact. That is where unresolved issues tend to be reframed as drafting issues rather than governance issues.
In your process, create escalation criteria for:
Certain TIA findings should be treated as escalation points rather than drafting problems. These include weak visibility over provider architecture, unsupported jurisdiction analysis, intelligible access to sensitive data and safeguards that do not materially reduce risk.
A strong TIA is not valuable because it produces a completed template. It is valuable because it shows whether the organisation can support a transfer with evidence, judgement and visible governance. That is what makes the foreign-jurisdiction assessment so important. It is the point at which the organisation must move from generic comfort to real analysis. It must show that it understands not only the transfer mechanism, but the legal and practical environment into which the data is moving and whether the safeguards in place actually change the position.
For DPOs, this is one of the clearest indicators of programme maturity. If the organisation can identify the transfer correctly, involve the right parties, assess the foreign jurisdiction properly, test the practical value of supplementary measures, and document the conclusion in a disciplined way, it is much more likely to be operating a privacy programme that can withstand criticism.
That is the real value of a TIA. It does not just measure legal awareness. It measures whether governance is actually working.
This article is intended to support the learning covered in Hour 2 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.