Artificial intelligence: is your organisation governing its use adequately?

Artificial Intelligence (AI) is increasingly present across all areas of professional life—content writing, task automation, prediction, risk detection, image processing, and more. Whether it involves an internal chatbot, a complex decision-making algorithm, or generative AI tools such as ChatGPT or Copilot, AI is reshaping the way employees and executives work within companies and public organisations.
This evolution raises major challenges: legal compliance, ethics, security, data protection, explainability, and reliability. To address them, new requirements are gradually being introduced across organisations.
As the EU AI Act comes into force, what was once seen as best practice is now an organisational imperative. The Regulation notably requires both providers and users of AI systems[1] to take appropriate measures to ensure a sufficient level of AI literacy among their staff, as well as among any entity likely to use or supervise an AI system on their behalf[2].
The AI Office[3] and legislators are also encouraging the drafting of codes of conduct and governance mechanisms applicable to all AI systems, regardless of their risk classification[4].
An AI policy, or AI charter, is currently one of the most effective ways to establish sound AI governance and meet regulatory expectations concerning artificial intelligence.
Why implement an AI policy?
A well-designed AI policy serves several essential purposes, both legally and operationally:
- Clarify responsibilities and ensure accountability within the organisation
- Provide a clear framework for AI usage
- Align AI use with the organisation’s values and internal rules
- Prevent legal, ethical, financial, reputational, or organisational risks
- Optimise internal efficiency through well-governed AI: better-chosen, better-integrated, and better-utilised tools
- Reinforce trust among users, clients, citizens, or employees
In short, an AI policy acts as a reference framework. It clarifies the organisation’s position on AI, prevents uncontrolled usage, mitigates the risk of incidents, and anticipates the increasing demands of regulators.
What should an AI policy include?
The content will vary depending on the organisation; each must define its own standards. However, here are some examples of elements that may be included:
Principles and guidelines
The aim of an AI policy is to establish structured governance of AI systems. To achieve this, it must set out clear commitments regarding AI usage: respect for fundamental rights, transparency, human oversight, explainability, reliability and fairness of systems, staff training, security, and data protection.
These principles serve as a compass for all AI-related decisions. They represent commitments to which the organisation adheres when selecting, using, or developing AI systems.
These guiding principles should also be reflected in the selection of new AI tools within the organisation (e.g. through specific contractual clauses in agreements with AI system providers).
AI systems register
A regularly updated inventory helps the organisation to monitor the AI systems in use: what they are used for, what data they rely on, and who is responsible for them.
This register facilitates risk assessment and compliance monitoring, and promotes transparency both internally and externally.
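By way of illustration, the sketch below shows one possible shape for a register entry, expressed here as a small Python structure. The field names, risk labels, and example values are assumptions chosen for illustration only; they are not a format prescribed by the AI Act, and each organisation should adapt the fields to its own context.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: field names and example values are assumptions,
# not a register format required by the AI Act or any regulator.
@dataclass
class AISystemRecord:
    name: str              # how the system is known internally
    purpose: str           # what the system is used for
    data_used: list[str]   # categories of data it relies on
    owner: str             # person or team responsible for the system
    provider: str          # internal team or external vendor supplying it
    risk_category: str     # e.g. one of the AI Act's four risk categories
    last_reviewed: date    # when this entry was last checked

# A one-entry register, purely as an example.
register = [
    AISystemRecord(
        name="Customer support chatbot",
        purpose="Answer routine customer questions",
        data_used=["customer queries", "product documentation"],
        owner="Customer Service team",
        provider="External SaaS vendor",
        risk_category="limited risk",
        last_reviewed=date(2025, 1, 15),
    ),
]

# A simple overview, e.g. for periodic compliance reviews.
for record in register:
    print(f"{record.name} - owner: {record.owner} - risk: {record.risk_category}")
```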
Rules for employees
The policy must also provide a precise framework for use. For each AI system, it should specify what can be done, for what purpose, and under what conditions.
In particular, it must specify the rules that employees are required to follow regarding the protection of personal data, the verification of AI-generated output, and respect for intellectual property.
If certain actions are prohibited or subject to prior authorisation, these restrictions must also be clearly stated in the document.
An internal point of contact should be clearly identified in case of questions or incidents related to the use of AI systems.
Roles and responsibilities
It is essential to clearly allocate roles and responsibilities throughout the AI system lifecycle—from implementation to monitoring and incident management.
Depending on the organisation’s needs and the type of AI systems used[5], the policy may include risk assessment processes, a staff training plan, or any other relevant element contributing to structured and scalable governance.
In conclusion, adopting an AI policy means:
- Creating an internal culture of responsible AI
- Anticipating evolving regulatory requirements
- Structuring practices to better manage risks
- Strengthening trust among employees, partners, clients or citizens
To prevent harmful AI practices from becoming established through routine use, it is urgent to govern AI with method, consistency, and transparency.
[1] Deployers as defined in Article 3(4) of the AI Act.
[2] Article 4 of the AI Act.
[3] The AI Office is the European centre of expertise on artificial intelligence.
[4] Article 95 of the AI Act.
[5] The AI Act distinguishes four categories of AI systems: unacceptable risk, high risk, limited risk, and minimal risk.
Other sources consulted:
AI Act Explorer: https://artificialintelligenceact.eu
IBM: https://www.ibm.com/fr-fr/think/topics/ai-governance
CNIL, AI how-to sheets: https://www.cnil.fr/fr/ai-how-to-sheets
Belgian Data Protection Authority: https://www.autoriteprotectiondonnees.be/publications/brochure-d-information-sur-les-systemes-d-intelligence-artificielle-et-le-rgpd.pdf
Has your organisation defined internal policies to guide and govern the use of AI?
Our compliance consultants guide you through every step of defining and deploying responsible AI governance frameworks.