The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law: A global standard in AI governance

On May 17, 2024, the Council of Europe’s Committee of Ministers formally adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law—the first legally binding international AI treaty—during its annual ministerial meeting. This landmark treaty aims to establish a comprehensive global legal framework ensuring that AI systems respect fundamental human rights, democratic values, and the rule of law throughout their lifecycle.

On September 5, 2024, the Council of Europe opened the Framework Convention for signature. That same day, it was signed by several Council of Europe member states, including Andorra, Georgia, Iceland, Moldova, Norway, San Marino, and the United Kingdom, as well as by the European Union and by non-member states such as Israel and the United States. The Framework Convention was drafted by the 46 member states of the Council of Europe with the participation of all observer states (Canada, Japan, Mexico, the Holy See, and the United States of America), the European Union, and a significant number of non-member states: Argentina, Australia, Costa Rica, Israel, Peru, and Uruguay.

Scope of the Treaty

The Framework Convention covers AI systems across their entire lifecycle—design, development, deployment, and use. While it focuses on the use of AI systems by public authorities, it also extends to private actors acting on their behalf. Parties to the Framework Convention are not obligated to apply the provisions of the treaty to activities pertaining to the protection of their national security interests. However, they must ensure that such activities comply with international law and uphold democratic institutions and processes. The Framework Convention does not extend to national defense matters or research and development activities, except in cases where the testing of AI systems could potentially impact human rights, democracy, or the rule of law.

Fundamental Principles

The Framework Convention establishes seven guiding principles that Parties must give effect to within their domestic legal systems, ensuring that AI activities align with human rights, democracy, and the rule of law.

  • Human Dignity and Individual Autonomy: Parties shall adopt or maintain measures to safeguard human dignity and individual autonomy in the context of AI systems. This involves ensuring that AI activities do not diminish individual agency or reduce people to mere data points. It emphasizes that AI should enhance rather than interfere with personal autonomy and respect the complexity of human identity and values.
  • Transparency and Oversight: Parties shall implement measures that ensure transparency and oversight in AI systems, tailored to the specific contexts and risks involved. This includes making the operations and decision-making processes of AI systems understandable and accessible to relevant stakeholders. Additionally, it mandates that AI-generated content be identifiable to avoid deception and maintain clarity in AI interactions.
  • Accountability and Responsibility: Parties shall establish mechanisms to ensure accountability for any adverse impacts on human rights, democracy, and the rule of law caused by AI systems. This includes creating frameworks that allow for the attribution of responsibility to individuals or entities involved in AI activities. The principle highlights the need for clear lines of responsibility and the capacity to address and rectify negative outcomes.
  • Equality and Non-Discrimination: Parties should implement and uphold measures to ensure that activities throughout the lifecycle of an AI system respect principles of equality and non-discrimination as established by applicable international and domestic law. Parties are required to implement measures that prevent discrimination and address inequalities to achieve fair, just and equitable outcomes.
  • Privacy and Personal Data Protection: Parties shall protect individuals' privacy and personal data in relation to AI systems. This involves adhering to relevant domestic and international privacy laws and standards, and implementing effective safeguards for data protection. This principle underscores the importance of maintaining individuals' privacy and ensuring that personal data is handled securely throughout the AI lifecycle.
  • Reliability: Parties shall adopt or maintain measures to enhance the reliability of AI systems, including standards for quality and security. Parties should promote trust in AI outputs by ensuring that systems meet rigorous reliability criteria and are subject to appropriate verification and documentation processes. This principle aims to ensure that AI systems perform consistently and safely.
  • Safe Innovation: Parties are encouraged to foster responsible innovation by establishing controlled environments for the development and testing of AI systems under competent authorities. This approach allows for the safe experimentation of AI technologies under regulatory supervision, ensuring that innovation aligns with human rights, democracy and the rule of law. It aims to balance the need for technological advancement with the need for safeguards against potential adverse impacts.

Remedies and Procedural Safeguards

The Framework Convention emphasizes the need for accessible and effective remedies for human rights violations resulting from the activities within the lifecycle of artificial intelligence systems. These measures include documenting and making relevant information about AI systems available to authorized bodies and, when appropriate, to affected individuals. This information must be sufficient for individuals to challenge decisions made or significantly influenced by AI and, where relevant, to contest the use of the AI system itself. Additionally, Parties must provide an effective mechanism for individuals to file complaints with competent authorities regarding AI-related human rights issues.

The Framework Convention further mandates that procedural safeguards in human rights law remain applicable to AI contexts. Parties must ensure that when an AI system significantly affects human rights, individuals affected by the system have access to effective procedural guarantees, safeguards, and rights, in line with applicable international and domestic laws. Furthermore, parties shall ensure that, when appropriate to the context, individuals interacting with artificial intelligence systems are informed that they are engaging with a machine rather than a human.

Implementation and Oversight

With the treaty in place, signatory states now face the task of implementing its provisions. The establishment of independent oversight mechanisms will be crucial to ensuring compliance and monitoring AI systems for risks to human rights and democratic institutions. States are required to submit periodic reports on their progress, ensuring transparency and accountability.

The Framework Convention establishes a Conference of the Parties, composed of official representatives of the Parties to the Convention, which is responsible for assessing the implementation of its provisions. The Conference of the Parties will convene regularly to facilitate international cooperation, exchange best practices, and address emerging challenges in AI regulation. This body will play a key role in keeping the treaty a living instrument, capable of adapting to future technological advancements.

Conclusion

The Council of Europe’s AI treaty represents a critical milestone in global AI governance. The Framework Convention addresses the rapidly evolving nature of AI technologies while ensuring that their development and implementation remain aligned with fundamental human rights by embedding values such as transparency, accountability, and respect for human dignity into the lifecycle of an AI system. By providing a legal framework that fosters responsible AI use, the treaty paves the way for further international cooperation in AI regulation. Its success will depend on the commitment of its signatories, but its broad reception underscores its potential as a blueprint for future AI governance.

If you want to learn more, contact a member of McDonald Hopkins' national Data Privacy and Cybersecurity team.
