At XpertDPO, we use Artificial Intelligence (AI) to support our services in data protection, compliance, and risk management. We are committed to using AI in a responsible, ethical, and transparent way. This policy explains how we use AI, how we keep it safe and fair, and how we make sure it stays under human control.
This policy covers all AI tools and systems we use in our work. That includes tools that help us with data protection audits, GDPR compliance checks, risk analysis, and document processing. It also applies to all employees, contractors, and partners who help develop or use AI as part of XpertDPO’s services.
We are responsible for all AI-generated outputs in our services. AI supports our work but does not replace human decision-making on important matters. Our team, which includes certified AI Governance Professionals, always reviews and approves AI recommendations before they are used.
We work hard to make sure our AI systems do not produce biased or unfair results. We regularly test them to catch and fix any issues that might lead to discrimination.
We aim to make it clear how AI supports your compliance journey. If an AI tool helped with your risk rating or compliance analysis, you can ask us to explain how it worked and why it gave that result.
All AI systems we use comply with data protection rules. Wherever possible, we use anonymised or pseudonymised data to protect privacy, and AI never has access to more data than it needs.
We use human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) models. This means AI assists our experts, but people are always in charge. Clients can always question or ask for a human review of any AI-assisted output.
We follow all legal and regulatory requirements, including the GDPR and the EU AI Act. AI systems are subject to the required risk and ethics assessments. We update our approach as laws and standards change.
When AI is used in the interactions or services you experience, we will clearly let you know. You may see a message or an icon, or hear a notice, indicating that an AI system is involved.
We do not use high-risk AI systems as defined in the EU AI Act. We do not use personal data to train AI systems, and we do not use AI to infer emotions or categorise you biometrically.
You can:
- ask us to explain how an AI tool contributed to your risk rating or compliance analysis;
- question any AI-assisted output and request a human review;
- report problems or concerns about our use of AI.
We keep detailed records about the AI systems we use.
We regularly review how these tools perform and update them as needed. Our team receives training on AI use, ethics, and safety.
We welcome your feedback and take concerns seriously. Clients and staff can report any problems or suggestions related to AI use.
If you have questions about this AI Transparency Policy or wish to exercise your rights, please contact us at:
We may update this policy from time to time to reflect changes in technology, law, or our services. Please check back occasionally to stay informed.