The OECD Recommendation on Artificial Intelligence (AI), adopted in May 2019, was the first intergovernmental standard on AI, aimed at fostering innovation and trust through the responsible stewardship of trustworthy AI. The Recommendation's definition of an AI system was updated in November 2023 to keep it technically accurate and reflective of technological developments, notably generative AI. The OECD has been actively involved in AI policy since 2016, conducting empirical and policy work to map the AI landscape, assess its economic and social impacts, and develop policy frameworks that support AI adoption and trust.
The Recommendation takes a human-centered approach to AI, setting out five values-based principles: inclusive growth, sustainable development and well-being; respect for the rule of law, human rights and democratic values; transparency and explainability; robustness, security and safety; and accountability. It also provides recommendations for national policies and international cooperation: investing in AI research and development, fostering inclusive AI ecosystems, shaping an enabling governance and policy environment, preparing for labor market transformation, and promoting international cooperation for trustworthy AI. The Recommendation aims to create a stable policy environment that fosters trust in, and adoption of, AI worldwide.
The 2023 and 2024 revisions updated the Recommendation to address the evolving AI landscape: clarifying the definition of an AI system, addressing misinformation and misuse, strengthening transparency, and emphasizing safety and responsible conduct throughout the AI system lifecycle. To support implementation, the OECD launched the AI Policy Observatory (OECD.AI) and the OECD Network of Experts on AI (ONE AI), which provide forums for policy exchange, interdisciplinary dialogue, and practical guidance. The OECD continues its work on AI policy in coordination with other international organizations to foster the development of trustworthy AI worldwide.