AI is already embedded in most organisations. It is not usually introduced as a formal programme. It appears through vendor tools, system updates, or internal use cases that expand over time. In many cases, it is introduced as a feature rather than a decision. Governance tends to lag behind that.
What follows is not a lack of compliance, but a lack of structure. The same issues appear repeatedly: DPIAs completed late or retrospectively, records of processing that no longer reflect how systems operate, vendor arrangements that say little about model training or updates, and references to human oversight that are not defined in practice.
The problem is not that organisations are ignoring their obligations. It is that the way AI is introduced does not align with how governance processes are typically designed. The practical question is how to bring those two things back into line.
There is a tendency to treat AI governance as something new that needs to be built alongside existing GDPR and data governance processes. In practice, that usually creates duplication rather than clarity.
A Data Protection Impact Assessment already requires an organisation to understand what is happening, why it is happening, what data is involved, and what risks arise from that processing. It also requires those risks to be addressed through appropriate controls. Those questions do not change because a system is described as “AI”; what changes is how they need to be applied.
AI systems are often less transparent in how outputs are generated, more dependent on third-party models, and more likely to evolve over time without a clear trigger for reassessment. They also tend to influence decisions in ways that are not always immediately visible. It is this precise combination that creates the governance gap. The EU AI Act does not introduce a separate assessment model to solve this. Its approach is to require ongoing, lifecycle-based risk management, with obligations scaling depending on the level of risk involved. That is consistent with the risk-based structure already present in GDPR and reflected in how different categories of AI systems are treated under the Act. We have already covered the risk categories in more detail here:
https://xpertdpo.com/understanding-minimal-and-limited-risk-under-the-eu-ai-act/
This does not replace specific obligations under the AI Act where they apply, particularly for high-risk systems. However, in practice, many of the same governance questions arise and the issue is less the absence of structure than how existing processes are used. The practical implication is that most organisations do not need a new framework. They need to make existing processes, particularly the DPIA, operate in a way that reflects how AI systems are actually designed, deployed and changed over time.
In most organisations, the issue is not whether a DPIA is completed. It is when and how it is completed. By the time a DPIA is started, the position is often already fixed. The system has been selected, the vendor has been engaged, and the use case has moved beyond its original scope. At that point, the DPIA becomes descriptive rather than decision-making. That is particularly the case with AI.
A tool is introduced for a limited purpose, such as summarisation, triage or internal support, and then expands. It is connected to additional data sources, used in new contexts, or relied on more heavily in decision-making. None of those steps necessarily triggers a formal review, but each of them changes the risk profile. As Stuart often puts it:
“By the time the DPIA lands on your desk, the real decisions have already been made.”
The documentation reflects how the system was originally understood, not how it actually operates. The DPIA describes an earlier version of the use case, the records of processing do the same, and vendor terms may never have been revisited at all. From a governance perspective, those positions gradually drift apart.
That is why the failure point is almost always at intake. If there is no structured way of identifying and assessing AI use cases early, governance becomes reactive. Once the system is embedded, the scope for meaningful change is limited, and the DPIA is reduced to recording decisions rather than shaping them.
For DPOs, the shift is a practical one. The objective is not to produce a better document. It is to ensure that assessment happens early enough to influence what is being built or procured, and that changes to the system trigger review as a matter of course. Without that, even a well-run DPIA process will not produce an accurate or defensible outcome. That is where the limitation of the traditional approach becomes clear.
A DPIA is often treated as a document that is completed at a point in time, but this approach does not hold for AI systems. The issue is not that the structure of the DPIA is wrong; it is that it is applied as if the system it describes were static. In reality, most AI systems change continuously: vendors update models, new data sources are introduced, and use cases expand in ways that are not always formally tracked. If the DPIA does not move with that, it becomes inaccurate very quickly.
The more useful way to approach this is to treat the DPIA as part of a lifecycle that sits alongside the system from the point it is first considered through to ongoing use. This is not to suggest that the DPIA captures every aspect of AI governance. Technical validation, model performance, and system-specific controls will often sit outside it. In practice, however, the DPIA provides a natural anchor point. It is where purpose, data use, risk and decision-making intersect. When used properly, it connects those elements to the organisation’s wider governance processes, including risk management, security, vendor oversight and accountability. The lifecycle itself is not conceptually complex, but applying it consistently in practice requires it to be properly connected to how the organisation actually operates.
It starts at screening. The point is not to produce a detailed assessment, but to ensure that the use case is identified early, ownership is clear, and there is a conscious decision about whether a DPIA is required. That decision should be recorded, not assumed.
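By way of illustration only, the sketch below shows how simple such a screening record can be. It is written in Python with entirely illustrative field names; the point is the structure of the recorded decision, not the tooling.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseScreening:
    """Minimal intake record for a proposed AI use case (illustrative fields)."""
    use_case: str                 # short description of what the tool will do
    owner: str                    # named individual accountable for the system
    vendor: str | None            # third-party provider, if any
    personal_data_involved: bool  # does it touch personal data at all?
    influences_decisions: bool    # do outputs feed into decisions about people?
    dpia_required: bool           # the conscious decision, not an assumption
    decision_rationale: str       # why a DPIA is or is not required
    decided_on: date = field(default_factory=date.today)

# Example entry: the decision is recorded either way, never assumed.
screening = AIUseCaseScreening(
    use_case="Vendor chatbot summarising customer support tickets",
    owner="Head of Customer Operations",
    vendor="ExampleVendor Ltd",
    personal_data_involved=True,
    influences_decisions=False,
    dpia_required=True,
    decision_rationale="Ticket text contains personal data; outputs reused internally.",
)
```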
From there, the focus moves to describing what is actually happening. In practice, that means being clear about purpose, data inputs, outputs, and how those outputs are used in real workflows. A short, accurate description and a simple data flow diagram are usually sufficient. What matters is that this reflects reality, not an initial assumption about how the system will be used.
At this stage, there should already be a link into the organisation’s records of processing. If the introduction of an AI system changes how data is accessed, combined or used, the RoPA should be updated at the same time. Treating the DPIA and the RoPA as separate exercises is one of the most common sources of inconsistency.
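One way to keep the two aligned, sketched below under the assumption of a simple shared record structure (the names are hypothetical), is to check the DPIA description against the RoPA entry whenever either changes, so drift is flagged rather than discovered later.

```python
from dataclasses import dataclass

@dataclass
class DPIADescription:
    """What the DPIA says the system does (illustrative structure)."""
    purpose: str
    data_inputs: set[str]
    outputs: set[str]

@dataclass
class RoPAEntry:
    """The corresponding record of processing (illustrative structure)."""
    purpose: str
    data_categories: set[str]

def ropa_needs_update(dpia: DPIADescription, ropa: RoPAEntry) -> list[str]:
    """Flag mismatches between the DPIA description and the RoPA entry."""
    issues = []
    if dpia.purpose != ropa.purpose:
        issues.append("Purpose in DPIA no longer matches RoPA.")
    missing = dpia.data_inputs - ropa.data_categories
    if missing:
        issues.append(f"Data categories missing from RoPA: {sorted(missing)}")
    return issues

dpia = DPIADescription(
    purpose="Summarise support tickets",
    data_inputs={"ticket text", "customer name", "CRM notes"},  # CRM added later
    outputs={"summary"},
)
ropa = RoPAEntry(purpose="Summarise support tickets",
                 data_categories={"ticket text", "customer name"})

for issue in ropa_needs_update(dpia, ropa):
    print(issue)  # -> Data categories missing from RoPA: ['CRM notes']
```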
The next step is lawfulness and necessity, but again the emphasis is on how this operates in practice. The question is not simply which lawful basis applies, but whether the processing being carried out matches that basis as the system evolves. This is particularly relevant where use cases expand over time or where outputs begin to influence decisions more directly.
AI systems require a more specific assessment of how risks arise in operation: for example, how outputs are used, where over-reliance might develop, or where vendor limitations create blind spots. These risks should not sit only within the DPIA. They need to be visible within the organisation’s wider risk framework, with clear ownership and appropriate escalation where required. Mitigations then need to be defined in a way that integrates with existing governance structures. Controls should not remain in a document. They should be logged, tracked and tested, typically through a GRC system or equivalent process. This is where DPIAs often fail in practice: the control is described, but there is no mechanism to ensure it is implemented or maintained.
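As a minimal sketch of what “logged, tracked and tested” can mean, assuming nothing more than a named owner and a test date per control, the example below flags controls that are described but not maintained. It is not tied to any particular GRC product; the record names are illustrative.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Control:
    """A control taken out of the DPIA and tracked as a live item (illustrative)."""
    control_id: str
    description: str
    owner: str            # a named owner, not a team in the abstract
    implemented: bool
    last_tested: date | None
    test_interval_days: int = 180

    def overdue(self, today: date) -> bool:
        """A control described but never implemented or tested counts as overdue."""
        if not self.implemented or self.last_tested is None:
            return True
        return today - self.last_tested > timedelta(days=self.test_interval_days)

controls = [
    Control("C-01", "Human review of AI summaries before customer contact",
            owner="Support Team Lead", implemented=True,
            last_tested=date(2024, 1, 15)),
    Control("C-02", "Vendor notification of model updates",
            owner="Procurement", implemented=False, last_tested=None),
]

# Anything overdue should surface in the wider risk framework, not stay in the DPIA.
for c in controls:
    if c.overdue(date(2024, 9, 1)):
        print(f"{c.control_id}: escalate - {c.description}")
```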
At this point, the DPIA should feed directly into governance sign-off. That includes recording decisions, documenting any residual risk, and ensuring that there is a clear owner responsible for the system going forward. Where high risk cannot be mitigated, the process should allow for escalation and, where necessary, consultation with the supervisory authority.
From there, the focus shifts to ongoing operation. AI systems require defined review triggers: a model update, a new data source, an expanded use case, or an incident should all lead to reassessment. Without those triggers, the DPIA will not be revisited until it is already out of date. The triggers themselves can be made explicit, as the sketch below illustrates.
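This is a minimal illustration with purely hypothetical event names; the point is that the mapping from change event to review action is explicit and checked, not that it needs to live in code.

```python
# Illustrative mapping of change events to review actions.
REVIEW_TRIGGERS = {
    "model_update": "Reassess DPIA: behaviour and risk profile may have changed.",
    "new_data_source": "Reassess DPIA and update the RoPA before go-live.",
    "expanded_use_case": "Rerun screening: purpose and lawful basis may no longer match.",
    "incident": "Immediate review, with escalation through the risk framework.",
}

def on_change_event(event: str) -> str:
    """Route a change event to its review action; unknown events still get reviewed."""
    return REVIEW_TRIGGERS.get(
        event, "Unclassified change: default to review rather than assume no impact."
    )

print(on_change_event("new_data_source"))
```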
In practice, this is where the process changes. The DPIA stops being treated as a standalone assessment and starts functioning as part of the organisation’s wider governance machinery. That means it is no longer enough for the document itself to be complete. Its outputs need to go somewhere, and they need to continue doing work after sign-off. That is what makes the process auditable rather than merely documented. In a functioning model, this becomes visible in how the DPIA connects to the rest of the organisation: risks feed the risk register, data changes update the RoPA, controls are tracked and tested, and review triggers sit within change management.
It also changes how different functions need to work together. AI governance does not sit with a single team, and treating it as if it does is one of the reasons these processes break down. In practice, responsibility is distributed: privacy or the DPO function retains oversight of the DPIA and its linkage to the RoPA; privacy operations supports screening and process discipline; legal addresses lawful basis and contractual controls; IT security addresses threat modelling, controls and technical assurance; risk functions ensure that issues are visible within enterprise governance structures; and AI and data teams are responsible for mapping, testing and understanding how the system actually behaves. Depending on the context, specialist input may also be needed on fairness, sector-specific impacts or AI Act readiness.
That is not about creating additional layers of governance. It is about ensuring that the right questions are answered once, by the right people, at the right stage. When those roles are aligned and the outputs of the DPIA are embedded into existing systems, the process becomes both workable and defensible. It also avoids the need to create parallel structures. The same set of activities supports GDPR accountability and the lifecycle expectations set out in the EU AI Act, without duplication. The difference is not in the DPIA document itself. It is in how it is used.
In practice, most organisations are not missing components of AI governance. They are missing alignment. When you step back from individual DPIAs or use cases, the same areas come up repeatedly. It is usually not difficult to identify where things are working and where they are not. Those tend to fall into five areas.
In practice, that means being able to point to: ownership records or governance minutes; lawful basis documentation and privacy information; testing outputs and security assurance; DPIA reviews and change logs; and vendor agreements, due diligence and training records. This is also where organisations tend to see the gaps most clearly. It is rarely that something is entirely missing. It is that one or two of these areas are underdeveloped, and that creates a weakness in the overall process. This methodology is a way of identifying where governance is already working, and where it needs to be strengthened.
Governance is not just about identifying risks or describing controls for their own sake. It is about being able to demonstrate, at any point in time, how those risks are being managed in practice. This is where many otherwise well-designed processes fall short.
A DPIA may identify the right issues, and its controls may be described in detail. But if there is no evidence that those controls have been implemented, tested or maintained, the position is difficult to defend. From a regulatory perspective, the question is not what was intended, but what can be shown. In practice, that shifts the focus.
Controls are no longer just statements within a document. They need to generate records: logs showing when DPIAs were reviewed, evidence of training and awareness for those expected to exercise oversight, outputs from testing and monitoring activities, and clear documentation of decisions and governance sign-off. This is also where the integration described earlier becomes important. If risks sit only within a DPIA, and controls sit only within a policy, there is no clear way to demonstrate that they are being managed. Where those elements are connected to risk registers, control tracking, training systems and review logs, the organisation can show how the process operates over time.
The objective is not documentation for its own sake. It is a consistent record of how decisions were made, how controls were implemented, and whether they remain effective. That is what makes the difference between a process that exists on paper and one that can withstand scrutiny.
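Purely as a sketch of the principle, assuming the records listed above exist somewhere retrievable, demonstrability reduces to a simple test: for a given system, can each category of evidence be produced on demand? The structure below is hypothetical.

```python
from datetime import date

# Illustrative evidence index for one AI system. In practice this data would
# come from the GRC system, training platform and review logs, not a dict.
evidence = {
    "dpia_last_reviewed": date(2024, 6, 1),
    "training_completed": ["Support Team Lead", "Procurement"],
    "control_test_outputs": ["C-01 tested 2024-01-15"],
    "signoff_decisions": ["Residual risk accepted, 2024-06-05"],
}

def gaps(evidence: dict) -> list[str]:
    """List the record types that cannot be produced on demand."""
    required = ["dpia_last_reviewed", "training_completed",
                "control_test_outputs", "signoff_decisions"]
    return [k for k in required if not evidence.get(k)]

# An empty list means each category of record can be shown, not that risk is zero.
print(gaps(evidence) or "All evidence categories can be produced.")
```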
So the question is not whether a framework exists, but whether it is functioning. A useful way to approach this is to step through the process described above and test whether the key elements are in place. Is there an intake route that identifies AI use cases before they are embedded? Is ownership clear? Does the DPIA reflect how the system currently operates, and is the RoPA consistent with it? Are controls tracked and tested outside the document? Are review triggers defined, and have they been acted on? Do not treat this as a formal audit, but as a practical sense check of how the system is being governed.
None of these questions is new, but together they make visible whether the process described earlier is operating in practice as intended. Where the answer to several of them is unclear or uncertain, the issue is usually not a single gap, but a breakdown in how the process is connected.
A common concern is that governance will delay deployment, but where it is done with consideration, the opposite is usually true. Where there is a clear intake process, defined ownership, and a consistent approach to risk assessment and controls, decisions can be made earlier and with more certainty. Teams are not waiting for approval at the end. They understand what is expected from the outset and can move within that structure. This approach changes how systems are built.
When a DPIA is used properly, it becomes part of the design process rather than a retrospective check. Decisions about data use, system behaviour and controls are made early, when they can still be shaped. That is what privacy by design looks like in practice, not as a principle, but as part of how systems are engineered. The benefit of the approach tends to become clearer over time.
Systems that have been designed with that level of clarity are easier to extend. New use cases can be assessed more quickly because the original assumptions, risks and controls are already understood and documented. Changes can be made with confidence because there is a clear record of how the system is intended to operate and how it has been governed.
The same is true when scrutiny increases. Whether that comes from regulators, auditors or internal review, the organisation is not trying to reconstruct decisions after the fact. There is a record of how the system was assessed, how risks were managed, and how those controls have been maintained over time. That is where the earlier work pays off. It also changes how governance is experienced internally.
Instead of being seen as a blocker, the DPO function becomes part of how decisions are made. There is less friction, fewer surprises, and a clearer sense of where responsibility sits. Over time, that builds trust both in the process and in the people responsible for it. We like to think that this is where privacy culture becomes visible. That culture is not about policies in isolation. It is about whether people understand when to involve the right functions, whether governance is built into delivery, and whether decisions are made in a way that can be explained later.
The aim is not to restrict AI use. It is to make it workable in a way that supports both innovation and accountability. This is how governance enables organisations to grow their use of AI without losing control of it.
AI governance is often presented as a new challenge, but in practice most organisations already have the structures they need. Frameworks exist, but they are not always applied in a way that reflects how AI systems are actually introduced, used and changed over time. What this comes back to is alignment.
A DPIA that is started early, grounded in how the system actually operates, and maintained as that system evolves will do most of the work. When that is connected to the organisation’s wider governance processes, such as risk, controls, training and oversight, the result is not additional complexity, but clarity.
This is also where the role of the DPO becomes most visible: as the function that ensures these elements are connected and operating as they should, not just a final approval. In practice, that means being involved early enough to influence decisions, maintaining oversight as systems evolve, and ensuring that what is documented continues to reflect what is actually happening. Where that role is understood in those terms, governance becomes more consistent and easier to rely on. Where it is not, the same issues tend to reappear: late assessments, incomplete records, and decisions that are difficult to defend after the fact.
When this is done with Privacy by Design in mind, systems can be developed and deployed without unnecessary delay, and there is a clear and consistent record of how decisions were made and how risks are being managed. Regulators increasingly expect this more mature approach, but it is also what allows organisations to move beyond reacting to individual use cases and towards a stable, repeatable approach to AI.
So, the question is not whether governance exists but rather whether it reflects reality, and whether the DPO is positioned to ensure that it does.