This article accompanies Hour 3: Privacy Program Metrics in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.
One of the most persistent weaknesses in privacy governance is also one of the least candidly addressed. Organisations often speak as though privacy has an owner in the singular. Sometimes that owner is called the DPO. Sometimes it is the privacy team, compliance function or legal lead. Sometimes the language is softer: privacy “sits with” a particular function, or one person is described as “responsible for privacy.” In practice, however, the position is rarely that neat. A privacy programme may be coordinated by one function, but the things that determine whether it is accurate, current, evidenced, operationally real and legally defensible are spread across the organisation. That matters far more than it first appears.
A privacy programme becomes weak very quickly when one function is expected to stand over controls, decisions, evidence and remediation that it does not actually own. The weakness may not show immediately. Governance can still look busy. Reporting can still be produced. Meetings can still happen. Actions can still appear in trackers. But over time the pressure points become obvious. The privacy function is asked to defend a RoPA that depends on business units confirming what actually happens. The DPO is asked to “close” an issue that depends on IT, procurement or management decision. A board paper describes remediation as though it is progressing cleanly, while the real blockers remain unresolved elsewhere. At that point, the organisation does not really have a privacy ownership model. It has a privacy concentration problem.
This is where experienced readers will recognise a familiar dynamic. The more an organisation talks about “the privacy owner” without distinguishing between oversight, coordination, operational control and business accountability, the more likely it is that those functions are being blurred together. That blur is not merely inconvenient. It weakens governance, distorts reporting and makes assurance less credible. It is also one of the reasons privacy programmes can appear more mature on paper than they are in reality.
The answer is not to dilute accountability until everyone is vaguely responsible and no one is clearly answerable. Nor is it to treat the privacy function as the organisational backstop for every gap. The answer is to be much more precise about who owns what, who validates what, who reports what, and who is entitled, and expected, to challenge when the programme is not functioning as it should.
A privacy programme is not made defensible by naming one owner. It becomes defensible when the right people are accountable for the right parts of the system.
Organisations often use the language of ownership too casually. They say privacy is “owned” by the DPO or that the privacy team “has responsibility for” the programme. On one level, that is understandable. There is usually a need to identify who coordinates the agenda, answers the questions, keeps the programme moving and acts as a visible point of contact. The difficulty is that this shorthand quickly becomes misleading.
Ownership in privacy governance can mean several different things. It may refer to legal interpretation, operational coordination, drafting and documentation, oversight and challenge, risk management, control implementation, committee reporting or simply the function expected to respond when an issue arises. Those are not interchangeable. Yet in many organisations they are treated as though they are.
If the privacy function is described as the owner of the programme without further nuance, people elsewhere in the organisation often begin to behave as though privacy has somehow absorbed their accountability. Business owners may assume that if the privacy team has documented the process, it now “owns” the process from a governance perspective. Technical teams may treat privacy as the reporting face of a control environment they themselves must actually operate. Senior management may behave as though privacy reporting is something received from the privacy team rather than something that reflects management’s own decisions about risk, remediation and resourcing.
This is one of the reasons privacy programmes become subtly but materially distorted. The privacy team becomes responsible for describing a system whose accuracy depends on others. It becomes the point of reassurance for positions it cannot fully verify alone. It may even become the de facto owner of unresolved issues simply because no one else is prepared to pick them up. That is not an ownership model. It is a transfer of discomfort.
A more defensible approach begins by resisting the temptation to use “ownership” as a blanket term. Senior organisations are usually more disciplined about this. They distinguish between governance accountability, process ownership, control ownership, escalation responsibility, reporting responsibility and legal oversight. They understand that a privacy programme does need a visible centre, but that centre cannot credibly absorb every responsibility without weakening the rest of the system.
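For readers who maintain their governance registers in structured form, the distinction can be made concrete. The sketch below is purely illustrative: the role names and fields are assumptions for the example, not a prescribed schema. The point it demonstrates is that a single processing activity carries several distinct accountabilities rather than one undifferentiated “owner”.

```python
from dataclasses import dataclass

# Illustrative sketch only. The role names below are assumptions chosen to
# mirror the distinctions in the text; adapt them to your own model.

@dataclass
class ProcessingActivity:
    name: str
    process_owner: str     # business function accountable for what actually happens
    control_owner: str     # team operating the technical/organisational controls
    oversight: str         # function that challenges and escalates (e.g. the DPO)
    reporting_owner: str   # function that prepares governance reporting
    legal_oversight: str   # function accountable for legal interpretation

    def owners(self) -> set:
        """Distinct functions accountable for some part of this activity."""
        return {self.process_owner, self.control_owner, self.oversight,
                self.reporting_owner, self.legal_oversight}

crm = ProcessingActivity(
    name="Customer CRM",
    process_owner="Sales Operations",
    control_owner="IT Security",
    oversight="DPO",
    reporting_owner="Privacy Team",
    legal_oversight="Legal",
)

# A simple concentration check: if one function holds most of these roles,
# ownership has blurred in exactly the way the article describes.
print(len(crm.owners()))
```

Even where no one would build such a register in code, the exercise of asking whether each role resolves to a different named function is a useful test of whether ownership has been distributed or merely declared.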
This is not just a drafting point. It affects how issues are handled in practice.
This is one of the most important governance distinctions in the entire privacy programme, and it is one that organisations still get wrong with surprising frequency.
The DPO has a specific role. Under the GDPR framework, that role includes informing and advising, monitoring compliance, advising where appropriate on impact assessments, cooperating with the supervisory authority and acting as a contact point. That is already a substantial mandate. It is not, however, the same thing as owning every operational weakness, every unresolved process issue, every control failure, every vendor deficiency or every incomplete remediation action across the organisation.
That distinction matters because organisations frequently behave as though appointing a DPO solves the ownership question. In reality, it often only sharpens it. Once a DPO is in place, there is a risk that operational accountability begins to flow towards the role by default. The DPO becomes the person who is expected to “sort” the RoPA, “fix” the DPIA, “close” the issue, “deal with” the vendor concern, “sign off” the governance position, or “take” the matter to the board. Some of this may look like respect for the role. A good deal of it is, in practice, a delegation of ownership from elsewhere, sometimes driven by a lack of confidence or knowledge. That becomes problematic very quickly.
A DPO should be able to oversee, advise, challenge and escalate. That is different from being expected to carry the operational burden of every unresolved matter. If the DPO becomes the substitute owner of gaps that belong to business units, operational teams or management, the oversight function starts to weaken. It becomes harder to preserve independence of judgment when the DPO is also expected to keep the entire programme operationally afloat.
This is not a purely theoretical problem. It has practical consequences for reporting, decision-making and legal defensibility. If the DPO is the person who is always expected to supply the answer, the organisation can begin to treat the DPO’s presence as a proxy for control. That is dangerous. A DPO may be fully sighted on the issue and still not own the operational levers needed to resolve it. A mature governance model does not confuse visibility with ownership.
Experienced readers will recognise that this distinction is well understood in other governance settings. Internal audit does not become the owner of the control weaknesses it identifies. Risk does not become the operator of the business processes it monitors. Compliance does not become the owner of every policy breach it escalates. Privacy should not be treated differently simply because the DPO is often more visible than those other functions.
The DPO should be able to see weaknesses clearly, challenge them and escalate them. That becomes much harder when the DPO is expected to carry the programme operationally on behalf of everyone else.
This does not diminish the DPO’s importance. It protects it. The role is strongest where it is empowered to identify, challenge and escalate without silently absorbing organisational dependency that ought to remain visible.
Another reason privacy governance becomes confused is that many organisations fail to distinguish between running the machinery of the programme and standing back from it critically. Privacy work often includes a large amount of operational coordination. Documents need to be updated. Inputs need to be gathered. Actions need to be tracked. Assessments need to be scheduled. Training needs to be rolled out. Governance packs need to be prepared. Issues need to be logged and followed up. Meetings need to happen. Evidence needs to be collected and stored. All of that work matters. A privacy programme without this operational discipline will drift quickly.
But operational discipline is not the same thing as oversight. Oversight involves something different. It involves asking whether the underlying position is actually sound, whether the evidence is sufficient, whether unresolved dependency has been hidden behind status language, whether the organisation is too willing to treat progress as closure, and whether an issue has reached the point where escalation is warranted because the open exposure is no longer tolerable as an administrative delay.
When those two strands are collapsed together, the organisation can become administratively active while remaining governance-weak. The privacy team may be doing an enormous amount of work, yet the programme still lacks a clear line between coordinating activity and testing whether that activity has produced real control. That is where motion begins to masquerade as assurance.
This distinction matters especially for experienced professionals because they will know how often organisations measure the wrong thing. A privacy function may be praised for “driving” the programme, but if driving the programme means endlessly compensating for missing ownership elsewhere, the governance model is not really improving. It is becoming dependent on one team’s persistence.
This is not to say that privacy operations and oversight must always sit in separate organisational silos. That would be unrealistic in many settings. It is to say that the distinction needs to remain visible in governance. Someone needs to be able to ask whether the apparent progress is real, whether the evidence supports the status being reported, and whether the issue has genuinely moved or merely been described more neatly.
That is one of the clearest markers of maturity. A programme becomes stronger when it can tell the difference between operational movement and genuine assurance.
No privacy programme can be more accurate than the organisation’s understanding of what it is actually doing. That sounds obvious, but it has profound governance consequences. Business units and process owners remain central to whether privacy governance is real. The privacy function cannot invent the purpose of processing, the actual workflow, the data being used informally, the operational workaround people have adopted, the practical retention behaviour, the external sharing that has become routine, or the fact that the documented process differs from what happens in practice.
This is where privacy documentation often drifts. A RoPA may be carefully assembled and then slowly lose accuracy because no one updates the privacy team when the process changes. A notice may remain broadly aligned with the original service design while no longer capturing all the actual uses now embedded in practice. A DPIA may be completed against a process map that is already partly outdated. A retention schedule may say one thing while operational teams continue doing another. The privacy team can ask the right questions, challenge incomplete inputs and try to refresh the record, but it cannot fabricate operational truth where business ownership is weak.
That is why the business cannot be treated as a passive recipient of privacy governance. It is not merely there to be “consulted.” It is an essential part of the evidence chain. If process owners do not engage properly, the privacy programme becomes less accurate, less current and less defensible.
This is also one of the reasons ownership models fail so often in practice. The privacy function may become extremely good at drafting, coordinating and reporting while the business becomes increasingly passive. Over time, the organisation begins to speak as though privacy documents “belong” to the privacy team even where the truth beneath them still belongs operationally to the business. That is an inversion of responsibility, and it is one of the quickest ways to weaken a programme without noticing it.
Seasoned professionals will recognise the governance consequence. Once the business starts to think of privacy as a specialist department’s concern rather than a set of obligations embedded in how the business actually runs, the programme becomes more performative and less reliable. It may still look active, but it becomes harder to trust the evidence base.
Privacy documentation is only as reliable as the operational truth underneath it. Where the business disengages, the programme almost always becomes weaker than the paperwork suggests.
A similar point applies to technical and operational controls. Many of the measures most relevant to privacy are not controlled by the privacy function at all. They sit in IT, security, engineering, procurement, operations or resilience teams.
Access control, system architecture, backup and recovery capability, logging, deletion execution, vendor integrations, permissions management, monitoring practices, configuration decisions, identity management, incident handling and resilience arrangements are all likely to fall substantially outside the privacy team’s direct operational control. The privacy function can ask questions, review positions, seek evidence and report concerns. It cannot, on its own, create technical control where technical ownership is weak.
This matters because privacy reporting often sounds stronger than the technical evidence beneath it. It is easy enough for an organisation to say that technical and organisational measures are in place, that vendors are being managed, or that systems are subject to adequate controls. Those statements may be broadly true. But unless the relevant technical and operational teams are substantively inside the governance model, contributing evidence, confirming practice, validating assumptions and owning remediation, the privacy function may be reporting confidence it cannot fully verify.
That becomes particularly risky where the organisation allows privacy to become the reporting face of technical matters it does not actually control. If the privacy report implies a stronger technical position than the technical owners themselves could comfortably evidence, the organisation has created a governance gap disguised as assurance.
This is also where the overlap with operational resilience and, where relevant, DORA-style governance becomes real. Incidents, third-party dependencies, recovery capability, operational fallback and critical service exposure often sit across multiple governance streams. If privacy reporting is not fed by the teams that actually own those controls, it can become detached from the operational realities that matter most when scrutiny intensifies.
The answer is not to expect privacy teams to become technical specialists in everything. The answer is to ensure that technical and operational owners are genuinely accountable within the model, and that the privacy function is not expected to convert incomplete technical confidence into governance assurance.
A further weakness in many privacy programmes is the tendency to treat legal, compliance and risk as a single, vaguely supportive block. That is a mistake. Each of those functions contributes something different, and the privacy programme becomes more defensible when those differences are respected rather than flattened.
Legal contributes interpretation and defensibility. It helps the organisation understand what the law requires, where the real legal exposure sits, what contractual and jurisdictional factors matter, where the rules are uncertain or contested, how a lawful basis analysis should be approached, and whether a proposed position is one the organisation can stand over. This matters especially where the organisation is operating in grey areas or high-risk environments.
Compliance contributes discipline. It helps turn privacy governance from aspiration into process. It brings follow-up, governance rhythm, challenge on whether required steps have actually happened, and a stronger expectation that obligations will be tracked, not merely noted. A privacy programme without enough compliance discipline often has plenty of policy language and too little operational consequence.
Risk contributes framing and escalation. It helps the organisation articulate residual exposure, locate privacy issues in the wider risk landscape, and decide when a matter has moved beyond an operational inconvenience into something material enough to justify senior attention or formal risk acceptance. Without that framing, privacy issues can remain trapped at the level of repeated discussion without structural consequence.
Those distinctions are not technical for their own sake. They matter because a privacy programme needs interpretation, discipline and escalation. If one of those is weak, the programme tends to drift. If they are all blurred together, the organisation often ends up with broad awareness but weak accountability.
In practice, privacy meetings are held, issues are discussed, legal questions are noted and risks are acknowledged. Yet the matter does not move because the organisation has not been clear about which function is meant to do what next. That is precisely where clearer ownership adds value. It does not just identify who attends the meeting. It identifies who is responsible for moving the issue from recognition to action.
One of the most unhelpful ways of thinking about privacy governance is to treat senior management as though it simply receives privacy reporting. Senior management is not just the audience for the programme. It is one of the forces that determines whether the programme is real.
This should be obvious when stated plainly, but in practice it is often obscured by the mechanics of reporting. Senior leaders decide where resources go, which delays are acceptable, whether repeated slippage is challenged, whether control weaknesses are genuinely addressed, and whether open risk is tolerated because the commercial or operational inconvenience of fixing it is judged too great. Those are not external observations on the programme. They are part of the programme.
This is why senior management cannot be treated as a passive recipient of privacy assurance. A weak privacy culture usually reveals itself not because reports are absent, but because management behaviour shows that the organisation is comfortable living with unresolved ambiguity. Actions remain open for too long. Ownership remains vague. High-risk items are repeatedly discussed but not properly resolved. The privacy function is expected to keep explaining the issue without management making the harder organisational decisions that would change the position.
When that happens, the reporting may remain active while the governance weakens. The board may continue to receive a privacy section. Committees may still discuss key items. But if management does not treat unresolved privacy exposure as something requiring real attention, the programme’s maturity will be overstated by its reporting.
This is particularly important where the board is concerned. Boards do not need privacy activity counts without context. They do not need long lists of tasks completed by the privacy function. What they need is a clear view of whether governance is functioning, where material weaknesses remain open, whether repeated slippage is occurring, whether remediation is credible, and whether management is genuinely addressing the issues being raised.
A board does not need more privacy numbers. It needs to know whether governance is functioning, where risk remains open, and whether remediation is real.
That is one of the most important discipline points for senior professionals. Board reporting should never disguise operational ambiguity as assurance. If the issue depends on business confirmation, technical validation or a management decision that has not yet been made, the board should not be told a stronger story than the evidence supports.
AI-related use cases make existing ownership weaknesses much more visible. This is because AI adoption often outpaces governance. A business team may begin using a tool. Procurement may engage the supplier. IT may manage access or implementation. Security may review the configuration. Legal may review the terms. Privacy may assess the data position. Compliance may raise process questions. Risk may become interested only once the issue has become more visible or more sensitive. By that stage, the ownership model is often already blurred.
That is exactly where problems begin. AI governance does not tolerate vague ownership well. The organisation needs to know who identified the use case, who classified it, who assessed the legal and privacy implications, who considered the technical and operational exposure, who approved its use, who is monitoring it, and who can stop or escalate it if the position changes. If those questions do not have credible answers, the organisation may still produce reporting on AI use, but that reporting will rest on unstable ground.
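The ownership questions above can be treated as a simple checklist against each AI use case. The sketch below is an assumption-laden illustration, not a standard: the field names are invented for the example, and the record is hypothetical. What it shows is how quickly gaps surface when a use case is asked to answer each question with a named function.

```python
# Illustrative sketch: the question keys mirror the ownership questions
# in the text; the field names are assumptions, not an established schema.

AI_USE_CASE_QUESTIONS = [
    "identified_by",           # who identified the use case
    "classified_by",           # who classified it
    "legal_privacy_assessor",  # who assessed the legal and privacy implications
    "technical_assessor",      # who considered technical and operational exposure
    "approved_by",             # who approved its use
    "monitored_by",            # who is monitoring it
    "escalation_owner",        # who can stop or escalate it if the position changes
]

def ownership_gaps(record: dict) -> list:
    """Return the ownership questions this use-case record cannot answer."""
    return [q for q in AI_USE_CASE_QUESTIONS if not record.get(q)]

# A hypothetical, partially governed use case.
chatbot = {
    "identified_by": "Customer Service",
    "classified_by": "Privacy Team",
    "approved_by": "",  # approval was discussed but never formally recorded
}

# Every unanswered question is a point where reporting rests on unstable ground.
print(ownership_gaps(chatbot))
```

A use case that cannot produce a credible answer for each entry is exactly the situation the article describes: reporting on AI use may still be produced, but it rests on an ownership model that has not actually been settled.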
This is particularly important because AI-related use can intersect with privacy, transfers, security, transparency, third-party dependency, employment issues and service resilience all at once. In such settings, it becomes even more dangerous to assume that one function can hold the whole governance picture by itself.
AI does not create the ownership problem. It simply exposes it faster and more harshly. That is why organisations should be stricter, not looser, about ownership where AI is in scope. A privacy team may be central to assessing aspects of the use case. It should not be left holding the entire operational accountability model together through force of effort alone.
A defensible privacy programme is not one where the privacy function has the broadest remit. It is one where accountability is distributed clearly enough that the reporting is credible, the evidence is real and the remediation is not dependent on one function doing everyone else’s work.
In practical terms, that usually means the DPO or privacy lead can oversee, challenge and escalate without being expected to absorb every operational gap. It means the mechanics of privacy operations (workflows, trackers, documentation, evidence collection, follow-up and reporting preparation) are managed deliberately rather than left to happen informally. It means business owners remain accountable for the processing they run and the operational truth beneath the documentation. It means IT, security and resilience teams own the controls they are responsible for and contribute evidence rather than informal reassurance. It means legal contributes defensible interpretation, compliance contributes discipline, risk contributes escalation logic, and senior management makes decisions where the organisation’s exposure remains open.
The value of that model is not aesthetic. It is evidential. It makes the organisation’s reporting more honest, its remediation more realistic, its oversight more credible and its legal position easier to defend. Where the model is weaker, the privacy function ends up writing a story it cannot fully verify. Where the model is stronger, the report reflects a real operating environment with visible ownership and meaningful accountability.
A defensible privacy programme is not one where the privacy team does everything. It is one where the right people cannot avoid doing their part.
For privacy professionals, the most useful next step is not simply to ask whether the privacy function is busy or whether reporting exists. It is to step back and look at the operating model beneath the reporting. A few questions are worth revisiting. Who is accountable for the processing itself, not just the documentation that describes it? Who owns the technical controls and the remediation that depends on them? Who validates the evidence behind the position being reported? And who decides, and who escalates, when an issue stalls?
Those are not minor design points. They usually tell you whether the programme is becoming more defensible or merely more elaborate. If the DPO or privacy lead has become the substitute owner of everything that is difficult to close, the answer is not to work harder. It is to correct the ownership model. A privacy programme becomes more credible not when more tasks sit with the privacy function, but when accountability across the business becomes clearer, more visible and harder to avoid.
A privacy programme is only as credible as the ownership model behind it. If one function is expected to carry the whole system, reporting may continue, but assurance weakens. The organisation becomes better at describing governance than demonstrating it.
Real accountability begins when privacy stops being treated as someone else’s department and starts being governed as a distributed organisational responsibility. That does not diminish the DPO or privacy function. It makes the programme more defensible, the reporting more credible and the organisation better able to stand over its position when it matters.
This article is intended to support the learning covered in Hour 3 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.