When “Low”, “Limited”, or “Minimal” Risk AI Still Needs Explaining

This article accompanies Hour 5: DPIAs in Practice in our full-day CPD programme on XpertAcademy. Completion of the full one-hour session, including the related learning materials, contributes to the one-hour CPD certificate issued for that session. You can access the course here: CPD Event A: Full-Day Regulatory Privacy Training.

The most difficult AI assessments are not always the obviously high-risk ones. They are often the ordinary tools that look assistive, low-impact and operationally convenient until their outputs begin shaping how people understand, prioritise or respond to information. That is where the DPIA becomes more useful than it first appears. This article sets out how to use the DPIA to test transparency, reliance and explainability before small AI tools become governance blind spots.

For many organisations, the instinct is to reserve deeper AI governance work for systems that clearly make decisions about people, process sensitive information at scale, or fall into a higher regulatory risk category. That makes sense as a starting point. Risk classification matters. Proportionality matters. No organisation should treat every small AI feature as though it were a high-risk decision engine.

However, a system can sit outside the highest risk category and still require careful explanation. It may not make final decisions. It may not reject applications, determine eligibility or produce a legally significant outcome. It may only summarise, rank, draft, route or recommend. In practice, however, those activities can still affect how people are treated, particularly where the output changes what a human user sees, how quickly an issue is escalated, or how confidently a conclusion is reached.

“A system does not need to make the final decision to shape the conditions in which that decision is made.”

That is the practical gap this article is concerned with. The issue is not whether every limited or apparently low-risk AI system requires a heavy assessment. It is whether the organisation has used the DPIA well enough to understand what needs to be explained, to whom, and why.

The ICO and Alan Turing Institute guidance on explaining AI-assisted decisions is useful here because it does not limit explanation to fully automated decisions. It is concerned with decisions and services delivered or assisted by AI, and with helping affected individuals understand how those systems are used in practice. That is the right starting point for DPIA work in this area because many of the most common AI deployments sit somewhere between back-office productivity and meaningful influence over a person’s experience. (ico.org.uk)

Risk classification is not the same thing as explanation need

One of the most useful distinctions for a DPO, privacy lead or AI governance function is the difference between risk classification and explanation need.

Risk classification asks what type of AI system this is, what regulatory category it may fall into, and what level of governance attention it attracts. Explanation need asks a slightly different question. It asks whether people need to know that AI is being used, what role it plays, how much weight is placed on its outputs, and what they can do if the output affects them.

Those two questions are related, but they are not the same.

A system may not be classified as high risk, but still be used in a context where explanation matters. A chatbot that answers general opening hours questions is very different from a chatbot that helps people navigate welfare, healthcare, housing, complaints or employment processes. A summarisation tool used to tidy internal meeting notes is different from one used to summarise safeguarding records, grievance material, medical correspondence or customer complaints. A drafting assistant used for generic marketing copy is different from one used to prepare responses to individuals about access, eligibility, delay, debt, disciplinary matters or service refusal. The technology may look similar. The context changes the governance question.

“Low risk is not a conclusion about explanation. It is only one part of the wider assessment.”

This is where the DPIA can add real value. It gives the organisation a structured way to examine not only what the system is, but how it is used. It allows the privacy team to separate the question of formal classification from the more practical question of whether affected people, staff, management or regulators could understand the role the system plays.

The OECD AI Principles make this point in a broader way. Their transparency and explainability principle asks AI actors to provide meaningful information appropriate to context, including information about capabilities and limitations, awareness of interaction with AI systems, and, where feasible and useful, information that helps affected people understand and challenge outputs. That is a helpful framing because it is not confined to the most serious AI deployments. It is about meaningful information in context. (OECD.AI)

For a DPIA, that means the question should not stop at “is this high risk?” It should continue to “what would a person reasonably need to understand about this system, given how it is being used?”

Why assistive systems still deserve proper explanation

The most common AI systems in organisations are often described as assistive. They support staff. They draft first versions. They summarise long records. They flag likely categories. They suggest responses. They help route work. They do not, formally speaking, decide anything. That language is usually accurate, but it can also be incomplete.

It is important to understand, and not underestimate, that:

  • Assistance can still be influential.
  • A summary can change what the reader notices.
  • A triage suggestion can change what is dealt with first.
  • A draft response can shape tone and content.
  • A risk flag can influence how much attention a case receives.
  • A ranking can affect what is seen, what is missed and what is treated as urgent.

The problem is not that assistive systems are inherently inappropriate. Often they are entirely sensible. The problem is that the word “assistive” can make the use sound less consequential than it is.

“Assistive does not mean neutral. It means the system acts through the person who uses it.”

That is the point a DPIA needs to capture. If the system assists a human user, the assessment should explain how that assistance operates. Does it simply reduce administrative effort, or does it frame the substance of the decision? Does the user treat the output as a prompt, or as the starting point for their view? Does the workflow encourage challenge, or does it encourage acceptance?

This is not an academic distinction. It affects transparency, fairness, accountability and evidence. If an individual challenges an outcome, the organisation may need to explain not only the final human decision, but whether AI played a role in shaping the information that decision-maker saw.

That is also why explainability cannot be confined to the technical layer. For many lower-risk systems, the more useful explanation is not a mathematical account of the model. It is a plain account of what the tool does in the process, what it does not do, what weight is placed on the output and what human review remains.

The DPIA as an explainability test

The DPIA is often treated as a risk assessment document. In AI contexts, it should also be treated as an explainability test.

That does not mean turning every DPIA into a public-facing explanation. It means using the DPIA process to test whether the organisation can explain the system at the right level to the right audience. 

Those audiences are different. For example, an affected individual may need to know that AI was used, what role it played, what information was relevant, whether a human reviewed the matter and how they can query or challenge the outcome. A staff member using the system needs to understand what the tool is doing, where its limitations sit and when not to rely on it. A DPO or compliance reviewer needs to understand the lawful basis, data flows, output use, safeguards and review triggers. Senior management needs to understand residual risk, reliance and assurance. A regulator or auditor may need to see how the organisation reached its conclusions and what evidence supports them.

The explanation does not need to be identical for all of those audiences. It does need to be consistent.

“A good DPIA should allow the organisation to explain the same system clearly from several angles.”

This is a practical way to use the ICO and Turing approach. The guidance refers to explaining AI-assisted decisions and services to affected individuals, and includes consideration of what goes into an explanation, contextual factors and how to select priority explanations by considering domain, use case and impact. Those ideas translate naturally into DPIA work because they encourage the organisation to ask what explanation is needed in this particular setting, rather than assuming one generic transparency statement will do. (ico.org.uk)

For lower-risk systems, this may result in a relatively light explanation. That is fine. The point is not to overcomplicate. The point is to make sure the organisation has consciously decided what explanation is proportionate, and can justify that decision.

Use-level explainability matters where model-level transparency is limited

One reason organisations struggle with explainability is that they equate it with full technical transparency. That can make the task feel unrealistic, especially where the system depends on a third-party model and the vendor is unable or unwilling to disclose detailed information about training data, model logic or tuning. Vendor opacity can affect the quality of risk assessment, particularly where the system is used in sensitive contexts or where outputs carry material influence.

But it does not follow that explanation is impossible.

In many lower or limited-risk AI uses, the organisation may not need to explain the full model internals to provide a meaningful account of the system. What it often needs is use-level explainability. That means explaining the purpose of the tool, the information it uses, the type of output it produces, the limits of that output, the role of human review and the way a person can query the outcome.

“Where model-level transparency is limited, the organisation still needs use-level explainability.”

This is a useful distinction because it keeps the DPIA practical. It avoids the unhelpful binary of either having full technical explainability or having none. It recognises that organisations can still provide meaningful information about how the system is being used, even where some technical detail remains unavailable.

NIST’s AI risk work is helpful on this point because it treats explainability and interpretability as part of a wider trustworthy AI picture, alongside accountability, transparency, privacy, fairness, safety, reliability, security and resilience. NIST also distinguishes explainability as information about mechanisms underlying AI operation and interpretability as the meaning of outputs in the context of the system’s purpose. That distinction is particularly useful for DPIAs because the organisation may not be able to explain everything about the model, but it should still be able to explain what the output means in the workflow in which it is used. (NIST AI Resource Center)

This matters for the DPO as much as for the technical team. If the vendor cannot provide full model documentation, the DPIA should not pretend otherwise. It should identify the limit, consider whether it is acceptable in context and adjust the control position accordingly. That may mean limiting the use case, reducing the weight placed on outputs, adding human review, improving staff guidance, increasing monitoring, or deciding that the tool is not suitable for a particular context.

What the DPIA should be able to explain in a lower-risk AI use case

For systems that appear limited, assistive or low risk, the DPIA should still be able to answer a set of practical questions. Not as a theoretical exercise, but because these questions determine whether the organisation understands the role the AI is playing.

It should be able to explain what the tool is doing in plain terms. Not in vendor language, and not in a way that simply repeats the product description. The organisation should be able to say whether the tool summarises, classifies, ranks, generates, recommends, routes, detects or predicts, and where that function sits in the workflow.

It should also explain what the tool is not doing. This is often just as important. If the tool does not make final decisions, does not replace professional judgement, does not determine eligibility and does not remove human review, that should be clear. However, those statements should be tied to how the process actually works. A statement that the system is “assistive only” is not enough unless the organisation can explain what assistance means in practice.

The DPIA should also explain what the output means. A risk score, priority flag, summary or recommendation should not be treated as self-explanatory. The organisation should know what the output is intended to indicate, what it does not indicate, and what the user is expected to do with it.

“An AI output is not explained by naming it. It is explained by saying what role it plays.”

This is particularly important where staff may be tempted to treat outputs as more authoritative than they are. A summary may omit nuance. A recommendation may reflect incomplete data. A ranking may be useful for workflow management but unsuitable as a proxy for importance or merit. If the DPIA does not capture those limitations, the organisation may be relying on staff to infer them.

Finally, the DPIA should be able to explain what happens when the output is wrong. This is often the simplest test of whether the system has been properly understood. Questions you can ask:

  1. If the output is inaccurate, who notices.
  2. If it is misleading, who corrects it.
  3. If it causes a person to be treated differently, how can that be identified.
  4. If someone challenges the outcome, can the organisation reconstruct the role the system played.

These are ordinary but necessary governance questions that become more important where AI is being used.

Context changes the explanation requirement

A useful DPIA should avoid treating the same technology as the same risk in every setting. The issue is not the label attached to the tool. It is the effect of the tool in context. The context of use matters. For example:

  • A chatbot used to answer general website questions may require a different level of explanation from a chatbot used by service users trying to understand entitlements, complaint routes or healthcare options.
  • A summarisation tool used by internal staff to condense non-sensitive material may require a different level of assessment from the same tool used to summarise HR grievances, safeguarding concerns, legal correspondence or medical history.
  • A drafting assistant used to polish language may be different from one used to generate responses explaining why a person has not received a service.

“The same AI function can require a different explanation when the setting changes.”

This is where the DPIA should bring together legal, operational and technical judgement. The legal team may understand transparency and rights implications. The operational team may understand how staff use the output. The technical team may understand system limits. The DPO or privacy lead needs to make sure those perspectives are brought into the same assessment.

This is also where the OECD and NIST language around context is valuable. OECD refers to meaningful information appropriate to context, while NIST frames trustworthy AI as socio-technical, involving human, organisational and technical factors. Those points are helpful because they keep the assessment grounded in use rather than treating AI explainability as a purely technical property. (OECD.AI)

This is often the most useful practical lesson. Do not start by asking whether the tool is generally explainable. Ask whether the organisation can explain this tool, in this context, to the people who need to understand it.

Transparency obligations are not only a high-risk issue

The EU AI Act reinforces the need to separate risk category from transparency requirement. Although the full high-risk regime receives much of the attention, Article 50 includes transparency obligations for certain AI systems, including informing people when they are interacting directly with an AI system, marking certain AI-generated or manipulated content, and informing exposed persons about emotion recognition or biometric categorisation systems. The information must be provided clearly and accessibly. (ai-act-service-desk.ec.europa.eu)

That does not mean every AI DPIA should become an AI Act compliance assessment. It does mean organisations should be careful not to assume that lower risk classification removes the need for transparency thinking.

The GDPR position also remains relevant. Where personal data is processed, transparency is not simply about identifying a lawful basis. It is about enabling people to understand how their information is used in a way that is meaningful enough for them to exercise rights, raise concerns and challenge outcomes where appropriate.

The DPIA is a useful place to bring those points together. It can ask whether the organisation has identified the relevant audience for explanation, whether the explanation is proportionate to the context, and whether the transparency measure is meaningful rather than cosmetic.

“A transparency notice is not the same thing as an explanation.”

That line matters because organisations can sometimes satisfy themselves too quickly by pointing to privacy notices or AI usage disclosures. Those may be necessary, but they may not be sufficient. If a person is affected by a process in which AI plays a meaningful role, the organisation may need to explain not only that AI is involved, but what role it played and how the person can question the result.

Human oversight needs its own explanation

Human oversight is often used as the reason why a system is treated as lower risk. The organisation explains that the AI does not make a final decision, that a person remains responsible, and that outputs are reviewed before action is taken. That may be correct, but it is not yet an explanation.

The DPIA should test what the human reviewer is actually doing. Are they checking accuracy. Are they reviewing fairness. Are they confirming that context has not been lost. Are they comparing the output with source material. Are they empowered to reject or depart from the output. Are they trained on the tool’s limitations. Is there evidence that challenge happens in practice.

“Human oversight is not a transparency answer unless the organisation can explain what the human is actually reviewing.”

This is particularly important where the explanation to an individual depends on the presence of human review. If the organisation says that AI is only used to assist staff and that staff remain responsible, that explanation is only meaningful if the organisation can describe the human role with some precision.

Otherwise, there is a risk that human oversight becomes a reassurance rather than a safeguard. The DPIA should therefore link human oversight to real operational design. It should identify who reviews outputs, what they are expected to review, what information they have available, what happens if they disagree, and how oversight is recorded where the risk profile requires it.

For lower-risk systems, this does not need to be burdensome. A simple, well-understood review process may be enough. But it should still be real.
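
As an illustration only, a proportionate oversight record does not need to be elaborate. The minimal sketch below shows one hypothetical way a team might capture who reviewed an assistive AI output and what they did with it. The OversightRecord structure and its field names are assumptions made for illustration, not a prescribed or regulatory format.

```python
# Hypothetical sketch of a lightweight human-review record for an assistive AI tool.
# Field names are illustrative assumptions, not a prescribed or regulatory format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightRecord:
    tool_name: str                 # which assistive system produced the output
    case_reference: str            # the matter or workflow item the output relates to
    reviewer: str                  # who carried out the human review
    checked_against_source: bool   # did the reviewer compare the output with source material
    output_accepted: bool          # was the output used as produced
    amendments_or_reasons: str     # what was changed or rejected, and why
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a reviewer departs from an AI-generated summary after checking the file.
record = OversightRecord(
    tool_name="meeting-summary-assistant",
    case_reference="COMPLAINT-2024-0417",
    reviewer="j.smith",
    checked_against_source=True,
    output_accepted=False,
    amendments_or_reasons="Summary omitted the request for escalation; redrafted manually.",
)
```

Even a record this simple gives the DPIA something concrete to point to when it claims that human review is real rather than nominal.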

Explainability is also about challenge and correction

One reason explainability matters is that people may need to query, correct or challenge an outcome. This is true even where the AI system is not the final decision-maker. If AI has shaped the information seen by a decision-maker, influenced the prioritisation of a case, generated a draft explanation or classified a person’s issue, then the individual may reasonably need to understand enough about that role to challenge the outcome. This does not mean exposing trade secrets or providing technical detail that would be meaningless to the person. It means providing enough information to make the process intelligible.

The OECD transparency and explainability principle expressly links meaningful information to enabling affected people to understand outputs and, where adversely affected, to challenge them. That connection is useful for DPIA practice because it frames explanation as something functional, not decorative. (OECD.AI)

“An explanation is only useful if it helps someone understand what happened and what they can do about it.”

For a DPO or legal team, this is a strong way to test whether transparency is working. If a person asked why they received a particular response, delay, escalation, recommendation or classification, could the organisation explain the role the AI system played in ordinary language. If the answer is no, the issue is not only communication. It may be that the organisation itself has not sufficiently understood the system’s role.

That is why the DPIA should examine challenge routes. Not only rights under data protection law in the abstract, but practical routes for raising concerns, correcting errors, requesting human review or obtaining a meaningful explanation.

What senior stakeholders should receive

Senior stakeholders do not need the technical mechanics of every AI tool. They do need a clear account of systems that may affect individuals, even where those systems are classified as limited, assistive or low risk.

For board or senior management reporting, the useful information is usually quite focused.

  • What is the system being used for.
  • Who may be affected.
  • What explanation is provided to staff or individuals.
  • How much influence do outputs have.
  • What human review exists.
  • What are the main limits of understanding.
  • What evidence shows that the tool is being used as described.
  • What would trigger review.

“Management does not need a model explanation. It needs a clear explanation of reliance.”

That is a useful governance distinction. The question for management is not whether the model can be explained in technical depth. The question is whether the organisation can explain what it is relying on, why that reliance is acceptable, and how it would know if the position changed.

This is also where DPIAs can improve reporting quality. A good DPIA should produce a short summary that can be understood outside the privacy team. If the summary cannot explain the role of the AI system without resorting to technical jargon or compliance labels, the underlying assessment may need more work.

Using the DPIA proportionately

There is always a risk that AI governance becomes heavier than it needs to be. That is not the aim here. The point is not that every AI feature should be put through a full, high-intensity DPIA. The point is that where a DPIA is being used, or where screening suggests one may be needed, explainability should be part of the assessment.

For some systems, the conclusion may be simple. The tool is used internally, does not involve personal data beyond ordinary staff use, does not influence decisions about individuals, and requires only a basic internal explanation and acceptable use guidance. For others, the same kind of technology may sit closer to individual impact and require a more detailed explanation, stronger oversight and better evidence.

“Proportionate does not mean light-touch by default. It means matched to the role the system plays.”

That is the judgement senior professionals are expected to exercise. Not every system is high risk. Not every system is harmless. The DPIA helps locate the system between those positions by asking what the tool actually does, where it sits, who is affected and what needs to be explained.

Takeaway

For a DPO, privacy lead, legal adviser or senior compliance professional, the most useful way to apply this learning is to run a low-risk AI explainability test against one live or proposed AI use case. The objective is not to turn a modest tool into a major governance exercise. It is to check whether the organisation can explain the system at a level that matches the role it plays. The checklist below is intended to support that review.

1. Role of the tool
Can the organisation explain in plain language what the tool does. Does it summarise, classify, rank, generate, recommend, route, detect or predict. Has the organisation avoided relying only on vendor terminology.

2. Context of use
Is the tool used in an internal administrative context, or does it sit in a process that affects individuals. Does it touch complaints, employment, healthcare, education, vulnerability, eligibility, safeguarding, debt, access to services or other sensitive contexts.

3. Data and input sources
What information is submitted to the system. Does this include personal data, special category data, confidential material, children’s data, employee data or information about vulnerable individuals. Has actual use drifted from the original assumed data set.

4. Output meaning
Can the organisation explain what the output means and what it does not mean. Is a summary treated as complete. Is a score treated as a risk indicator. Is a recommendation treated as a preferred answer. Are users clear on the limits.

5. Decision influence
Does the output affect how work is prioritised, how a person is described, how a file is framed, how quickly a case is escalated, or how a response is drafted. Even if the system does not decide anything formally, does it shape the decision-making environment.

6. Human oversight
Who reviews the output. What are they reviewing for. Do they have authority to depart from it. Are they trained to recognise limitations. Is challenge expected in practice. Is there evidence that human review changes outcomes where necessary.

7. Explanation to affected individuals
Where individuals are affected, what are they told about the role of AI. Is the explanation meaningful, or does it merely state that AI may be used. Can the organisation explain the role of the tool in ordinary language if someone asks.

8. Explanation to staff
Do staff understand what the system is for, how much weight to place on outputs, when to check source material, when not to use the tool and how to escalate concerns. Is the guidance operational rather than generic.

9. Vendor explanation limits
Where the vendor cannot provide detailed model-level transparency, has the organisation recorded what is known, what is not known and why the remaining uncertainty is acceptable in this use case. Has the organisation considered whether use-level explainability is sufficient.

10. Challenge and correction
If an output is wrong or misleading, how is that identified and corrected. Can an affected person query the outcome. Can staff override the system. Is there a route for reporting repeated problems or unexpected behaviour.

11. Evidence and logs
Can the organisation show how the system was used, who reviewed outputs, what guidance was provided, what changes were made and whether controls operated in practice. Is the evidence proportionate to the risk and context.

12. Review triggers
Has the organisation defined what would require reassessment. This may include new data sources, expanded use, increased reliance, changed outputs, vendor updates, complaints, incidents, evidence of bias, or use in a more sensitive context.
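
For teams that prefer to capture this review in a structured form, the sketch below shows one hypothetical way the twelve questions could be recorded against a single use case. The structure, field names and example values are illustrative assumptions, not a mandated template; the substance remains the checklist above.

```python
# Hypothetical, illustrative structure for recording the explainability checklist
# against one AI use case. Field names and values are assumptions, not a mandated template.
explainability_screening = {
    "use_case": "Drafting assistant for customer complaint responses",  # example only
    "role_of_tool": "generates first-draft responses; does not decide outcomes",
    "context_of_use": "customer complaints, including service refusal explanations",
    "data_and_inputs": ["complaint text", "case history", "staff notes"],
    "output_meaning": "draft wording only; not a finding on the merits of the complaint",
    "decision_influence": "shapes tone and content of responses seen by complainants",
    "human_oversight": "case handler must check the draft against the file before sending",
    "explanation_to_individuals": "responses state that drafting tools may be used and how to query them",
    "explanation_to_staff": "guidance on limits, source checking and escalation routes",
    "vendor_explanation_limits": "model internals unavailable; use-level explanation recorded",
    "challenge_and_correction": "complainant can request human re-review of the response",
    "evidence_and_logs": "drafts, amendments and reviewer sign-off retained per case",
    "review_triggers": ["new data sources", "expanded use", "complaints or incidents"],
}
```

Whether the record lives in a form, a spreadsheet or a register matters far less than whether each field is answered honestly for the specific workflow being assessed.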

The most valuable part of this exercise is often the comparison between what the organisation believes the tool does and how it is actually being used. Where those two positions are aligned, the DPIA is likely to be stronger. Where they are not, the issue is usually not the classification label. It is the explanation gap.

So, pick one AI tool that has been described internally as low risk, assistive or administrative, and trace one real workflow. Look at the input, the output, the human review and the final action. Then ask whether the current DPIA or screening record explains that workflow in a way that would make sense to a staff member, a senior manager, an affected person and a regulator.

This article is intended to support the learning covered in Hour 5 of our XpertAcademy CPD programme. The relevant CPD certificate is issued for completion of the full one-hour session on XpertAcademy, rather than for reading this article on its own. You can return to the course here: CPD Event A: Full-Day Regulatory Privacy Training.

Ready to start your Data Protection journey with us?

Data Protection Officer Services