This article accompanies Hour 5: DPIAs in Practice in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.
Most organisations do not need to be persuaded that AI use should be assessed. The harder question, and the more useful one in practice, is whether the DPIA is still telling the truth once the system is live. In AI work, that is usually where the real complexity sits. The document may be complete, the workflow may be followed, and the lawful basis may be clear on paper, but the system itself can still move underneath it through changing outputs, shifting reliance, vendor opacity, uncertain ownership of prompts and outputs, and a level of decision influence that is often more significant in practice than it first appears. That is where the assessment stops being a formality and starts becoming a test of whether the organisation really understands the processing it is trying to stand over.
In most mature organisations, the problem is not that there is no governance process. There is usually a screening route, some form of privacy review, a procurement track, contractual diligence and a familiar DPIA workflow. In many cases, those structures work well for conventional systems because the relationship between purpose, data, system and output is reasonably stable. The organisation can say with some confidence what the system is for, what data it uses, what the output is, and where the main risks sit.
AI systems are harder to assess because they loosen that relationship. The model may be procured for one purpose but used in several. It may sit inside an existing platform rather than arrive as a standalone project. Its outputs may be advisory, but still influential. The vendor relationship may look processor-like at first glance, but on closer inspection the organisation may have only partial visibility into how the model is trained, updated or bounded. Even where the organisation’s own use case is fairly narrow, the system it is relying on may be the product of a much larger lifecycle that sits outside its line of sight.
That is why experienced teams often feel a degree of friction when carrying out AI DPIAs. The process itself is not necessarily wrong. It is that the assessment is being asked to do more than record a fixed set of facts. It is being asked to capture a moving relationship between use, control, vendor assurances, human behaviour and downstream impact.
The practical difficulty is not that the organisation lacks a DPIA. It is that the system is often more fluid than the assessment assumes.
The question is not whether AI needs a separate framework in every case. The more useful question is whether the current DPIA method is deep enough to capture the operational and legal complexity that AI introduces into otherwise familiar governance environments.
One of the first places this becomes visible is in the question of ownership. In ordinary vendor arrangements, the organisation can usually draw a fairly clear line. It controls the purpose. It determines the means to a sufficient degree to act as controller. The vendor provides a service and acts, in broad terms, as processor. That model may still be correct at a high level, but in AI systems it often hides a more complicated reality.
The organisation may control the workflow into which the AI tool is introduced, but it may not own the model, the training conditions, the update cycle, or the terms under which prompts, outputs and telemetry are handled. It may know that a processor agreement is in place and that the platform sits within a defined environment, but still be relying heavily on a set of representations from the vendor about how the model behaves, what happens to prompts, whether outputs are reused for training, and what forms of logging or retention take place.
That is already enough to show why ownership is harder here than a standard controller and processor diagram suggests. The organisation may be fully accountable for the use of the tool in its own processing, while still lacking complete control over the conditions that shape the tool’s behaviour. That does not make deployment impossible. It does mean the DPIA needs to be more candid about the limits of control than many conventional assessments are used to being.
In AI systems, accountability often sits more neatly than control.
If the organisation is being asked to stand over the system, what exactly is it standing over? The workflow in which the system is used? The prompts submitted by staff? The outputs generated? The vendor assurances? The deployment environment? Or all of those together? A DPIA that collapses those layers into one bland statement of processor support is often telling the reader less than they need to know.
In AI DPIAs, the more practical issue is often not who owns the intellectual property in a strict legal sense, but how intellectual property is used to limit visibility into how the system actually works. Vendors will frequently rely on IP protections to avoid disclosing detail about training data, model behaviour, tuning, or how outputs are generated. That position may be legitimate from a commercial perspective, but it creates a gap for the organisation carrying out the DPIA.
The organisation is still expected to assess risk, explain processing, and stand over the system’s use. That becomes more difficult where key elements of the system are effectively treated as a black box.
The organisation is being asked to assess risk in a system it cannot fully see, and that is where the difficulty tends to sit.
This is where the DPIA can become thinner than it appears. The document may describe the use case, the data inputs, and the intended outputs, but still rely heavily on vendor assurances rather than understanding. Where explanations are limited, the assessment often shifts from analysis to trust.
That is not necessarily a failing, but it does need to be recognised for what it is. In practice, this affects several aspects of the risk assessment. It becomes harder to explain how outputs are generated, whether they may reflect underlying training data, how the system may behave in edge cases, or how it might change over time. It also affects the organisation’s ability to explain its own safeguards. Controls such as human review, output checking or restricted use are often relied upon more heavily where the system itself cannot be interrogated.
This is also where the question of prompts and outputs becomes relevant, not primarily as an ownership issue, but as a proxy for understanding how the system is being used and what happens to the data once it enters it. Organisations will often seek confirmation that prompts, outputs and customer data are not reused for model training or service improvement without permission. That is a sensible step, but it is also an indication that the organisation is working within a defined trust boundary rather than full visibility.
At that point, the DPIA is partly an assessment of the system, and partly an assessment of the organisation’s reliance on the vendor.
For a DPO or privacy lead, the more useful question is not whether the organisation has perfect insight into the model. That is rarely achievable. The question is whether the limits of visibility are clearly understood, documented and reflected in how the system is used. If the organisation cannot explain where its understanding ends and where it is relying on vendor assurance, the DPIA is likely to overstate certainty. If it can, the assessment becomes more realistic and more defensible, even where some elements of the system remain opaque.
Most AI DPIAs start where privacy teams are used to starting, namely with input data. What personal data is being submitted to the system? Is special category data involved? Is children's data involved? What is the lawful basis? Are there transfer issues? Are there processor terms? All of this remains necessary.
The difficulty is that, in AI use cases, the output is often where the processing becomes genuinely consequential. The output may summarise, classify, prioritise, predict, recommend or draft. Once that output enters a workflow, it may change how staff behave, how issues are framed, how quickly matters are escalated and how confidently decisions are taken. It may also itself become a new record.
That means a DPIA that does not give sufficient attention to outputs is often assessing only half the story. A system that takes in personal data and returns an answer is not complete when the answer is generated. The real processing chain continues when that answer is used. In some contexts, the output becomes part of the permanent file. In others, it is copied into correspondence or becomes the basis of a recommendation to another decision-maker. In still others, it is used transiently, but still shapes what someone chooses to do next.
In AI systems, outputs are often not the end of processing. They are the beginning of the next stage.
That is one reason why generic phrases such as “AI is used only to assist staff” can be so weak in practice. Assistance can still be consequential. An assistive tool that changes how staff interpret an enquiry, assess a risk, or allocate attention may not be determinative in a legal sense, but it is still part of the machinery that produces outcomes.
Many organisations become overly focused on whether Article 22 is engaged. That is understandable, but it can lead to the wrong emphasis. In practice, the more difficult question is often not whether the decision is fully automated, but how significantly the AI tool is influencing what humans do.
One of the areas where AI DPIAs tend to require more careful judgement is in distinguishing between formal automated decision-making and practical decision influence. Many systems will be positioned, correctly, as assistive rather than determinative. There is a human in the loop. The system does not make final decisions independently. On that basis, the strict threshold for automated decision-making may not be met. That is often where the analysis stops.
In practice, the more relevant question is not whether the system is formally making decisions, but how much it is shaping them. AI tools can influence how information is presented, how cases are prioritised, how risks are framed, and how responses are drafted. Even where a human retains responsibility, the system may still play a significant role in directing attention and shaping judgement.
The absence of fully automated decision-making does not mean the absence of meaningful influence.
This is where DPIAs tend to benefit from going beyond classification. The presence of a human reviewer, by itself, does not say much about how decisions are actually made. The more useful analysis looks at how that review operates in practice: whether outputs are routinely challenged, whether alternative interpretations are considered, and whether the user has both the context and the authority to depart from the system’s output where necessary.
A similar issue arises in other types of system design, where the question is not whether a process is technically automated, but whether it materially shapes outcomes. In those contexts, the focus tends to move away from labels and towards effect. Who sees what, in what order, with what framing, and with what constraints. Those are often the factors that determine how decisions are ultimately reached. The same approach is useful in AI assessments.
The more practical question is not whether the system makes the decision, but how much it shapes the conditions under which the decision is made.
For a DPIA, that shift in perspective allows the assessment to better capture where risk arises in practice. It also provides a clearer basis for identifying safeguards, not only at the point of decision, but in how the system is embedded within the workflow that leads to that decision. The presence of a human does not tell you very much by itself. The more useful question is what room remains for real human judgement once the system’s output is in front of them.
“Human oversight” is one of the most overused phrases in AI documentation, partly because it often sounds reassuring while remaining undefined. A strong DPIA should resist that temptation. The fact that a human looks at an output does not necessarily mean the output is meaningfully challenged. Nor does it mean the risk created by the system has been mitigated to a defensible level.
The better question is whether the human user has the authority, time, context and habit of mind to depart from the AI output where needed. If the workflow, productivity expectation or cultural framing of the tool encourages the user to accept outputs at pace, the control may exist on paper while doing much less in practice. If the system is used repeatedly and begins to feel reliable, challenge may reduce over time even where the initial design assumed active review. If staff are not trained to recognise uncertainty, hallucination, model limitation or output ambiguity, the review may become little more than a sense-check.
The human review itself needs to be designed as a control. It needs to be evidenced. It needs to sit in a workflow where disagreement with the output is legitimate, expected and not quietly penalised by time pressure.
Human oversight only has governance value if it is capable of changing the outcome.
If the reviewer disagreed with the system, what would happen next? If the answer is unclear, then the organisation may be relying on a control it has not really operationalised.
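One hypothetical way to make that answer concrete is to record what the reviewer actually did with each output. The sketch below is a minimal illustration in Python of how such a review record might be structured; the field names, statuses and example values are assumptions made for this article, not part of any prescribed framework or tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReviewOutcome(Enum):
    """Hypothetical outcomes a human reviewer might record for an AI output."""
    ACCEPTED = "accepted"      # output used as generated
    AMENDED = "amended"        # output edited before use
    REJECTED = "rejected"      # output discarded; the human decided differently
    ESCALATED = "escalated"    # referred for a second opinion or privacy input


@dataclass
class OutputReview:
    """One record of a human review of a single AI output (illustrative only)."""
    use_case: str              # e.g. "enquiry triage summarisation"
    reviewer: str              # role or identifier of the human reviewer
    outcome: ReviewOutcome     # what the reviewer did with the output
    reason: str                # why they accepted, amended, rejected or escalated it
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Example entry: evidence that disagreement with the system is both possible and recorded.
example = OutputReview(
    use_case="enquiry triage summarisation",
    reviewer="case officer",
    outcome=ReviewOutcome.REJECTED,
    reason="Summary omitted a vulnerability disclosure in the original enquiry.",
)
```

Even a simple record along these lines gives the DPIA something to point to when it claims that human review is a functioning control rather than a label.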
A further pressure point in AI DPIAs is the tendency to treat legal basis and necessity as fixed once the initial assessment has been completed. In many cases, the original reasoning is sound. The organisation identifies a legitimate purpose, determines that the processing is compatible with that purpose, and documents the lawful basis accordingly.
The difficulty is that AI systems can change how that purpose is operationalised without anyone consciously deciding that the use case has changed. The same tool begins to support more teams. Outputs start to shape decision-making more directly. A summarisation tool begins to act as a risk triage aid. A drafting assistant becomes part of formal communication. A classification tool quietly changes how issues are prioritised. Nothing about the headline purpose may appear to have moved. In practice, the processing may be doing more, or doing something more intrusive, than the original justification assumed.
The lawfulness problem is often not that the organisation chose the wrong lawful basis at the start. It is that the organisation no longer tests whether the current behaviour of the system is still consistent with the reasoning it originally documented.
The purpose statement may remain stable while the operational reality beneath it becomes more ambitious.
In your organisation, revisit one live use case. Ask what the system is now doing in practice. Then ask whether the original necessity analysis still describes that reality.
Vendor dependence is not itself unusual. What makes AI different is the extent to which the organisation may be asked to rely on a system it cannot fully inspect, control or explain, while still carrying the burden of accountability for how it is used. That creates what can fairly be described as borrowed risk.
The organisation may receive assurances about data residency, processor role, model training separation, retention and isolation. Those assurances matter and should be documented. For example, you might look for a data processing agreement, EU-based deployment, and confirmation that prompts and outputs are not used for model training, and treat positive answers as key assurances that materially improve the data protection position. But those assurances do not eliminate the underlying asymmetry. The organisation is still being asked to stand over a tool it did not build and cannot fully interrogate.
That does not mean the DPIA should become speculative. It does mean it should be candid about the trust boundary. Where the organisation is relying on vendor representations, that reliance should be explicit. Where the vendor can update the system in ways that may affect output behaviour, the review trigger should not depend on the organisation discovering change after the fact. Where the organisation cannot fully assess the training lifecycle, that limitation should inform how narrowly or cautiously the tool is deployed.
A vendor assurance can improve the risk position. It does not remove the fact that some of the risk remains borrowed.
At some point, every strong DPIA comes back to evidence. Not because documentation is the goal in itself, but because accountability is difficult to sustain where the organisation cannot show how the stated controls work in practice.
Do not simply settle on the least-worst interim option; look instead to support implementations through formal access approval, SOPs, audit, time-bound permissions, a DPIA addendum as development moves on, and updated transparency measures. Comfort can be linked to documented confirmation of the processor position, data usage, retention and continued human review. The common theme is that defensibility comes from the combination of reasoning and proof.
That matters in AI work because systems can drift quietly. Outputs may be reused. Users may rely more heavily on them. Vendor changes may affect behaviour. If the organisation has no logs, no review trail, no evidence of challenge, no record of who approved expanded use, and no mechanism for revisiting the DPIA when the system changes, then the assessment may still exist but its value under scrutiny is much lower.
A DPIA is strongest where the organisation can still explain the live system six months later and show how its controls actually operate.
Senior management or an executive board does not need another generic assurance that an AI assessment was completed. They need to know whether the organisation can evidence the assumptions on which that assessment rests.
The most useful response to this complexity is usually not to rewrite the template first. It is to test one live use case against present reality. Take a current AI deployment and trace it from input to output to downstream use. Start with the prompts or source data being submitted. Then look at what comes back from the system. Then follow what happens next. Does the output sit transiently on screen, or does it become part of a case file, client communication, recommendation, or record of decision? Who is expected to review it? What would count as disagreement with it? What does the contract or vendor documentation say about prompts, outputs, retention and training? What happens when the vendor changes the service? Which part of that story is clearly owned by the organisation, and which part depends on trust in the vendor environment?
Then read the DPIA against that workflow, not the other way around. If the document still accurately describes the processing, identifies the main points of influence, reflects the real control position and is supported by evidence that the controls operate, the organisation is in a stronger place than many. If it does not, the issue is usually not that the DPIA was pointless. It is that the assessment needs to be reconnected to the live system.
The DPO is the function that tests whether the organisation is still telling itself the truth about the system it is using.
For a DPO, privacy lead or senior compliance professional, the most practical next step is to pick one live AI use case and run a focused reality check against the current DPIA. Do not start with the template. Start with the workflow.
Trace the processing from the moment data is submitted to the system through to the point where an output influences action. Confirm what categories of personal data are actually being used in practice, including any drift from the original use case. Check whether special category data, children’s data, confidential commercial material or legally sensitive content are now entering the tool more often than originally assumed.
Then test the ownership and control picture. Can the organisation explain what it owns, what it merely uses, and what it is relying on vendor assurances to support? Is there a clear position on prompts, outputs, telemetry, retention and model training? Can the organisation explain whether outputs are stored, reused, incorporated into files, or copied into communications? If those points are not clear, the DPIA is already working on incomplete assumptions.
Move next to output handling. Identify whether outputs remain transient or whether they become records, recommendations, drafts, triage inputs, notes or evidence relied upon later. Ask whether the DPIA currently treats outputs as part of the processing chain or still focuses mainly on inputs. In many organisations, this is where the assessment is thinnest.
Then look at decision influence. Forget for a moment whether Article 22 is engaged. Ask instead how much practical weight the output carries. Does it change how quickly a matter is escalated, how attention is allocated, how a person is described, or how a professional frames their judgement? If the answer is yes, then the DPIA should say so in plain terms.
After that, test human oversight as a real control. Who reviews the output? What are they expected to do with it? Can they depart from it without friction? Is there evidence that challenge happens in practice? Have users been trained on limitations, or only on functionality? If human review is being relied upon as the key safeguard, it should be possible to explain how it changes outcomes and where that is evidenced.
Then revisit legal basis and necessity. Ask whether the purpose statement and lawful basis still match how the system is actually being used now. This is particularly important where the same tool has spread to new teams, new contexts or more influential stages of decision-making. If the use has become more ambitious, the original justification may need to be refreshed even where the headline purpose appears unchanged.
Finally, test evidencing and governance connection. Can the organisation show vendor due diligence, approval records, review triggers, logs, audit trails, SOPs, training records, change control, and any escalation or reapproval where the use case expanded? If the answer is largely no, then the DPIA is doing too much work alone.
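For teams that prefer to track this reality check in a structured form, the sketch below shows one hypothetical way the steps above might be captured so that open gaps are visible at a glance. It is written in Python purely as an illustration; the step names, fields and questions paraphrase this article and are not drawn from any framework or tool.

```python
from dataclasses import dataclass


@dataclass
class RealityCheckItem:
    """One step of the DPIA reality check (illustrative structure only)."""
    step: str
    question: str
    finding: str = ""            # what the review actually found
    dpia_updated: bool = False   # whether the DPIA was amended as a result


# Hypothetical structure mirroring the steps described above.
reality_check = [
    RealityCheckItem("data in practice",
                     "What personal data is actually entering the tool, including any drift?"),
    RealityCheckItem("ownership and control",
                     "What do we own, what do we merely use, and where do we rely on vendor assurances?"),
    RealityCheckItem("output handling",
                     "Do outputs stay transient, or do they become records, drafts or triage inputs?"),
    RealityCheckItem("decision influence",
                     "How much practical weight does the output carry in shaping decisions?"),
    RealityCheckItem("human oversight",
                     "Can reviewers depart from the output, and is challenge evidenced in practice?"),
    RealityCheckItem("legal basis and necessity",
                     "Does the original purpose and necessity reasoning still match current use?"),
    RealityCheckItem("evidence and governance",
                     "Can we show approvals, logs, training, change control and review triggers?"),
]

# A simple view of the steps that still have no recorded finding.
open_items = [item.step for item in reality_check if not item.finding]
print(open_items)  # every step stays open until the review has actually been carried out
```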
A practical checklist for that review is below.
That exercise is rarely wasted. Even where the answer is broadly positive, it usually sharpens the organisation’s understanding of where the true points of risk and reliance sit. Where the answer is less comfortable, it gives the DPO or privacy lead something much more useful than another abstract debate about AI governance. It gives them a concrete basis for updating the assessment so that it once again reflects the system the organisation is genuinely trying to stand over.
This article is intended to support the learning covered in Hour 5 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.