The Council of Europe AI Convention: Why AI Governance Is No Longer Just an AI Act Exercise

Why this matters now

On 13 May 2026, the text of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was published in the EU Official Journal. The EU had already approved the conclusion of the Convention through Council Decision (EU) 2026/1080, adopted on 21 April 2026.

For EU organisations, the important point is that the Convention does not displace the EU AI Act. The EU decision makes clear that the Convention will be applied in the Union through Regulation (EU) 2024/1689, the EU AI Act, and other relevant parts of EU law where appropriate.

The significance is therefore not that organisations now need a separate Convention compliance programme. It is that the Convention confirms the broader governance frame in which AI regulation now sits: human rights, democratic integrity, rule of law, transparency, accountability, remedies and lifecycle risk management.

The AI Act tells organisations much of what they must do; the Convention helps explain what responsible AI governance must be able to prove.

Not a separate compliance track

It is worth being precise about what the Convention does not do. For organisations operating in the EU, it does not replace the AI Act or create a separate operational regime that sits beside it. The practical questions of AI classification, provider and deployer obligations, transparency duties, technical documentation, post-market monitoring and high-risk system controls remain anchored in the AI Act where that Act applies.

Nor should the Convention be treated as a new checklist for every organisation to complete in isolation. The Convention is an international treaty framework. Its direct obligations are framed around Parties to the Convention, while the EU has connected its implementation to the AI Act and other parts of the Union acquis.

It also does not mean that every AI system should be treated as high risk. Proportionality still matters. A low-impact productivity tool should not be governed in the same way as an AI system used to support access to healthcare, employment, education, credit, public services or legal rights.

The value of the Convention is different. It helps clarify the rights-based purpose behind AI governance. It gives organisations a way to test whether existing AI controls can evidence not only classification and compliance status, but also accountability, transparency, oversight, remedies and lifecycle risk management.

A rights-based governance framework

The Convention is best understood as a rights-based governance framework for AI. It places the design, development, deployment and use of AI systems within the wider obligations to protect human rights, democratic processes and the rule of law.

That means the governance question is not limited to whether an AI system falls into a particular regulatory category. Organisations also need to understand how the system may affect people, how those impacts are assessed, what safeguards are in place, and whether there are meaningful routes for oversight, challenge and remedy.

The Convention also takes a lifecycle view. It is concerned not only with the moment a system is procured or launched, but with how risks are identified, documented, mitigated, monitored and reviewed over time.

For DPOs and compliance teams, this is where the Convention becomes practically useful. It gives language for the wider governance evidence that should sit around AI use: not only what the system is, but what it may do to people, who is accountable for it, and how the organisation knows its controls are working.

From classification to evidence

For organisations, the significance of the Convention is not that it creates a separate compliance route. It is that it changes the quality of the governance question.

AI Act classification still matters. Organisations still need to know whether a system is prohibited, high risk, subject to transparency obligations, or outside the more demanding parts of the regime. But knowing the category is only the beginning.

Classification is necessary, but it is not the same as governance.

The Convention points toward a rights-based evidence layer around AI use. Organisations should be able to show how they have considered the impact of an AI system on people, what safeguards are in place, who is accountable, how the system is monitored, and what route exists if something goes wrong.

That evidence may already sit partly within DPIAs, AI impact assessments, vendor assessments, records of processing, technical documentation, testing reports, escalation procedures and governance minutes. The practical task is to make sure those controls join up, rather than leaving AI assurance scattered across disconnected documents.

This aligns closely with the approach discussed in our article on AI Governance and Data Protection Impact Assessments. The issue is rarely the absence of any governance process; more often, the process does not reflect how AI systems are actually introduced, used, changed and relied upon in practice.

What DPOs and compliance teams should look for

For DPOs, the Convention reinforces several governance areas that should already be visible in mature AI oversight.

First, risk and impact assessment must reflect the system in use, not only the system as described at procurement. A DPIA or AI impact assessment should consider the people affected, the context of use, the seriousness of potential impacts, and how risks will be monitored over time.

Second, transparency needs to be meaningful. It is not enough to state that AI may be used somewhere in a process. Individuals should be able to understand when AI is involved, what role it plays, and where to go if they need to question or challenge an outcome. This is particularly important for systems that are not high risk but still influence how people are treated, as discussed in our article on low, limited and minimal risk AI that still needs explaining.

Third, human oversight needs to be real. Organisations should be clear about who can intervene, what information they receive, whether they have authority to change an outcome, and how oversight is recorded. A reference to “human in the loop” is not enough if the person involved has no practical ability to understand, question or correct the output.

Fourth, remedies and escalation routes should be designed before something goes wrong. If an AI-supported process affects access to a service, opportunity, benefit or right, the organisation should know how a person can query, correct or challenge the result.

Finally, vendor and lifecycle governance matter. Many AI systems enter organisations through third-party tools or embedded platform features. DPOs should be involved early enough to understand what is being used, what evidence is available, and how risks will be reviewed as the system changes. This is one of the reasons AI DPIAs often become harder than they first appear: ownership, outputs and decision influence are not always clear at the start.

A practical governance test

A useful practical test is whether the organisation can explain the AI system beyond its regulatory category. A label such as “limited risk” or “high risk” may be necessary, but it does not tell the whole governance story.

For any AI system that could materially affect individuals, organisations should be able to answer the following questions:

  • What does the system do, and where is it used?
  • Whose rights, interests or access to services could be affected?
  • What evidence supports the risk assessment?
  • Who owns the risk and has authority to act?
  • What human oversight, monitoring or review is in place?
  • How can an affected person query, challenge, correct or escalate an outcome?

If those answers are spread across procurement files, DPIAs, vendor documents, governance minutes and technical records, the task is not necessarily to create a new process. It may be to join the existing evidence into a coherent governance picture.
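To make "joining up" concrete, the following is a minimal, hypothetical sketch in Python of what a single register entry might capture for one AI system. The class and field names are illustrative assumptions, not terms drawn from the Convention or the AI Act; the point is simply that each of the six questions above resolves to a field that can point at evidence the organisation already holds.

    from dataclasses import dataclass

    @dataclass
    class AIGovernanceRecord:
        """Illustrative register entry for one AI system.
        All names are hypothetical; map them to your own DPIA,
        vendor and risk documentation."""
        system_name: str
        purpose_and_context: str       # what the system does and where it is used
        affected_people: list[str]     # whose rights, interests or access could be affected
        risk_evidence_refs: list[str]  # e.g. DPIA reference, vendor assessment, testing report
        risk_owner: str                # named owner with authority to act
        oversight_measures: list[str]  # human oversight, monitoring and review arrangements
        challenge_route: str           # how a person can query, correct or escalate an outcome

        def gaps(self) -> list[str]:
            """Return the governance questions this record cannot yet answer."""
            checks = {
                "purpose and context": bool(self.purpose_and_context),
                "affected people": bool(self.affected_people),
                "risk evidence": bool(self.risk_evidence_refs),
                "risk owner": bool(self.risk_owner),
                "oversight": bool(self.oversight_measures),
                "challenge route": bool(self.challenge_route),
            }
            return [question for question, answered in checks.items() if not answered]

In practice the same picture might live in a GRC tool or a simple spreadsheet; the value lies in forcing each question to resolve to a named owner or a document reference rather than an assumption.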

What to share with senior stakeholders and decision-makers

The message for senior stakeholders and decision-makers is not that the organisation needs to become expert in the Convention. It is that AI governance should now be treated as an assurance issue, not only a technical, procurement or compliance issue.

Senior leaders do not need to understand every model parameter. They do need confidence that the organisation has visibility of AI use, clear ownership of AI risk, evidence that impacts have been assessed, and routes for oversight, escalation and remedy.

Senior leaders do not need every technical detail. They need assurance that AI is visible, owned, evidenced and reviewable.

The questions to share with the Board, executive team or senior management group are practical:

  • Do we know where AI is being used, including vendor and embedded tools?
  • Which AI uses could materially affect individuals, services, opportunities or rights?
  • Who owns AI risk at senior and operational level?
  • What evidence shows that risks have been assessed and controls are working?
  • How would a person query, challenge or escalate an AI-supported outcome?

These questions do not require a separate Convention compliance project. They require a clear view of whether existing AI Act, GDPR, DPIA, vendor, risk and accountability processes are joined up well enough to support senior-level assurance.

Bringing it together

The AI Act remains the main operational framework for EU organisations. The Convention adds a wider governance lens. It asks whether AI systems are being managed in a way that protects people, supports oversight, enables remedy and preserves trust in institutions and decision-making.

For DPOs and compliance teams, the practical response is not to create a separate “Convention compliance” process. It is to review whether existing AI governance, DPIAs, vendor assessments, transparency materials, monitoring and escalation routes can evidence the rights-based questions the Convention brings into focus.

The organisations best placed to respond will be those that can show not only what category an AI system falls into, but how it is controlled, who is accountable for it, and what happens when its use affects people.
