How G20 countries are working to support trustworthy AI

By Sarah Box

Senior Counsellor, OECD Directorate for Science, Technology and Innovation

Artificial intelligence (AI) technologies and tools have played a role in every aspect of the response to the COVID-19 crisis – from detecting and diagnosing the virus, to supporting the search for a vaccine, to monitoring the recovery and improving early warning tools. These technologies have certainly helped bolster efforts to combat COVID-19 across the globe, yet their development, diffusion and use remain at a relatively early stage of maturity across many countries and firms.

Indeed, AI’s full potential is yet to be harnessed – not just within the context of tackling pandemics, but also in improving outcomes in sectors such as health, education, agriculture, energy and beyond. At the same time, these technologies have raised anxieties and ethical concerns about their robustness and trustworthiness. The challenge for policy makers, then, is to help ensure that AI is developed and deployed in ways that are fair, accountable and respectful of human rights and democratic values.

G20 countries are stepping up to this challenge. In 2019, G20 Leaders welcomed the G20 AI Principles drawn from the OECD Recommendation on AI. These principles seek to foster public trust and confidence in AI technologies and to realise their potential by promoting inclusiveness, human-centricity, transparency, robustness and accountability.

This week, G20 Digital Ministers, under the Saudi Arabian presidency, confirmed their commitment to advance the G20 AI Principles. Countries undertook an exercise to collect examples of national strategies and innovative policy initiatives aimed at steering toward responsible stewardship of trustworthy AI.  The OECD’s new report, prepared as an input for discussions in the G20 Digital Economy Task Force (DETF), highlights that governments are actively experimenting with AI strategies and policies to seize the benefits and orient AI towards human-centred outcomes.

The report, Examples of AI National Policies, draws on survey responses or other information from almost all G20 and guest countries, as well as DETF discussions held in 2020. Although the report covers only a sample of the AI initiatives that countries have launched or are developing, it nevertheless sheds light on important trends in their efforts to support trustworthy AI for the benefit of economies and societies.

Our report finds that G20 countries are engaging in a diverse range of efforts to build and support AI ecosystems. Most of these efforts are very recent, and many address multiple G20 AI Principles at once (either implicitly or explicitly). The German AI Strategy, for instance, places a focus on the technology’s benefits for people and the environment, identifies the responsible development and use of AI for the good of society as one of its objectives, and aims to address all five values-based G20 AI Principles. India’s AI Strategy focuses on leveraging AI for inclusive growth, and flags important issues including bias, ethics and privacy as it seeks to build a vibrant ecosystem for AI.  This broad span makes it difficult to categorise initiatives and strategies neatly by Principle, but it is consistent with the G20 AI Principles being complementary and mutually reinforcing. (Improving transparency, for example, may also address issues of fairness and bias.) In addition, addressing the AI Principles “as a package” may allow countries to make faster progress towards the ultimate goal of advancing trustworthy AI.   

Our report also shows that a significant number of initiatives focus on R&D for AI, underscoring that many countries see potential for the technology to develop much further and to address a range of economic and social issues. Public investment in R&D is increasing as well, complementing the strong private sector investment already underway, which may enable a greater understanding of AI-related challenges and facilitate new solutions. At the same time, however, relatively few policies focus primarily on the Principles of robustness, safety and security, and accountability, compared with those that address inclusive growth and human-centred values. This suggests an opportunity to put greater emphasis on these areas going forward.

Overall, the evidence collected suggests that countries will need to deploy a mix of policies to build trustworthy and human-centric AI. At this point, it is still too early to conduct in-depth evaluations of the initiatives undertaken, as very few have been operating for a significant amount of time. Yet this also suggests that there is a strong opportunity for countries to share experiences and learn from one another.

As AI technologies continue to mature and with AI policy making still in an initial phase, G20 countries today have a critical window to continue their leadership and work to advance the AI Principles. The OECD is also taking action to move from principles to practice. Our AI Policy Observatory, launched earlier this year, brings together stakeholders to share insights and collaborate on shaping AI-related policy. The platform features the latest data on AI trends and policies in around 60 countries, as well as material from partners in academia and the private sector. Through this initiative and with its ongoing work to support multistakeholder dialogue on AI, the OECD stands ready to support the G20 as it works to advance trustworthy AI.  
