The OECD Principles on Artificial Intelligence were adopted on 22 May 2019 by OECD member countries upon approval of the OECD Council Recommendation on Artificial Intelligence. The OECD AI Principles are the first such principles signed up to by governments. The OECD's website announcing adoption expressed the hope that the "OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct."
The Recommendation consists of five normative principles (what the OECD terms "values-based") grounded in the sustainability-enhancing notion of responsible stewardship, which has gained considerable traction in the business context among influential leaders in recent years (e.g., here). They include:
1--AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2--AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3--There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4--AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
5--Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
These five principles are then directed to the state, as is the habit of the OECD regulatory form. That direction is summarized in five recommended actions that states can take:
1--Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
2--Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
3--Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
4--Empower people with the skills for AI and support workers for a fair transition.
5--Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.
The OECD Principles on Artificial Intelligence, along with brief reflections, follow.
With states (and their governmental structures) taking the lead, the organization and development of AI can be better driven. One can welcome these principles as another step forward in the public dialogue about the structures and operation of new modalities of regulation represented by AI. They do raise, or in some cases they underline, issues that are also present in the many approaches to AI structuring already being developed by states and other actors. A few are summarized below.
1. It is not clear how these principles will be embedded within the AI principles already being developed among states (e.g., here). The difficulty here is not just with the normative principles themselves, but with the approach that states take to the fundamental issue of structuring activity of this sort--either driven by the state (through planning and meta-legislation), or through market mechanisms (with delegation of regulatory implementation and supervision to private actors). The problem becomes more complex when one deals with state-owned enterprises, with sovereign wealth funds, and with enterprises performing governmental tasks.
2. One wonders, as well, whether, beyond issues of policy coherence across states with sometimes substantially different sets of values, the administrative and regulatory issues around AI structures can be addressed. Already it has become clear that several new approaches to the values underlying trade and economic activity are emerging. Where once, before 2016, there appeared to be a convergence of fundamental values, the trajectory now appears to have reversed. In a world in which one must assume incoherence in values, to what extent are values-based principles effective?
3. The OECD, civil society and states have tended to approach AI from the perspective of constitutionalism (the "values" bit), and from the perspective of property (transparency, accountability, safety, and security issues). But the AI principles continue to fail to treat AI as regulation, especially when undertaken by states, or through private entities seeking to comply with state regulatory mandates. It is to the importation of principles of administrative law that AI policymakers might better turn their attention.
4. One wonders, further, whether private organizations have an autonomous and societal responsibility for AI development beyond that of the state. The template established in the UN Guiding Principles on Business and Human Rights--a state duty to protect, a corporate responsibility to respect, and a joint obligation to remedy--has substantial potential application in the construction of structures for the management of the development (and the policing) of AI systems.
5. The Principles acknowledge AI as property (a res in the old-fashioned sense) but do little to coordinate that character with the stewardship principles directed toward the host of values-based political objectives also specified. One senses an intimation that disclosure regimes are to serve as the basis for dealing with related issues. But at the same time the failures to resolve the tensions between intellectual property rights, for example, and social justice objectives in other areas (pharma, for example), are merely transposed to a new environment.
6. The principles embody an inherent tension between popular sovereignty and a set of political objectives described in the sort of bland and de-fanged terms that have become the common discursive trope for such things in political circles. That tension occurs in two forms. The first is interpretative--the broad objectives of the Principles (take social justice, for example) can be subject to wildly different interpretations depending on the political order in which these terms may be embedded. In some jurisdictions, social justice might be furthered through aggressive cultivation of markets and the protection of property; in others, social justice might be reduced to a set of specific policy and social reconstruction goals to be undertaken under the leadership and direction of the state. To the extent that AI modalities become intertwined with these divergent interpretations, the effects are not clear.
7. There is an uneasy relationship between AI and data. Indeed, because the definition of AI constructs it as a large umbrella concept tied together by a machine making "decisions" on the basis of data, analytics, and algorithms, the problems inherent in each of these (each quite distinct) are conflated in ways that are not necessarily helpful.
* * *
Recommendation of the Council on Artificial Intelligence
THE COUNCIL,

HAVING REGARD to Article 5 b) of the Convention on the Organisation for Economic Co-operation and Development of 14 December 1960;

HAVING REGARD to the OECD Guidelines for Multinational Enterprises [OECD/LEGAL/0144]; Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data [OECD/LEGAL/0188]; Recommendation of the Council concerning Guidelines for Cryptography Policy [OECD/LEGAL/0289]; Recommendation of the Council for Enhanced Access and More Effective Use of Public Sector Information [OECD/LEGAL/0362]; Recommendation of the Council on Digital Security Risk Management for Economic and Social Prosperity [OECD/LEGAL/0415]; Recommendation of the Council on Consumer Protection in E-commerce [OECD/LEGAL/0422]; Declaration on the Digital Economy: Innovation, Growth and Social Prosperity (Cancún Declaration) [OECD/LEGAL/0426]; Declaration on Strengthening SMEs and Entrepreneurship for Productivity and Inclusive Growth [OECD/LEGAL/0439]; as well as the 2016 Ministerial Statement on Building more Resilient and Inclusive Labour Markets, adopted at the OECD Labour and Employment Ministerial Meeting;

HAVING REGARD to the Sustainable Development Goals set out in the 2030 Agenda for Sustainable Development adopted by the United Nations General Assembly (A/RES/70/1) as well as the 1948 Universal Declaration of Human Rights;

HAVING REGARD to the important work being carried out on artificial intelligence (hereafter, "AI") in other international governmental and non-governmental fora;

RECOGNISING that AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;

RECOGNISING that AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;

RECOGNISING that, at the same time, these transformations may have disparate effects within, and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;

RECOGNISING that trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it;

UNDERLINING that certain existing national and international legal, regulatory and policy frameworks already have relevance to AI, including those related to human rights, consumer and personal data protection, intellectual property rights, responsible business conduct, and competition, while noting that the appropriateness of some frameworks may need to be assessed and new approaches developed;

RECOGNISING that given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context;

CONSIDERING that embracing the opportunities offered, and addressing the challenges raised, by AI applications, and empowering stakeholders to engage is essential to fostering adoption of trustworthy AI in society, and to turning AI trustworthiness into a competitive parameter in the global marketplace;

On the proposal of the Committee on Digital Economy Policy:

I. AGREES that for the purpose of this Recommendation the following terms should be understood as follows:

‒ AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

‒ AI system lifecycle: AI system lifecycle phases involve: i) 'design, data and models', which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building; ii) 'verification and validation'; iii) 'deployment'; and iv) 'operation and monitoring'. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.

‒ AI knowledge: AI knowledge refers to the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices, required to understand and participate in the AI system lifecycle.

‒ AI actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

‒ Stakeholders: Stakeholders encompass all organisations and individuals involved in, or affected by, AI systems, directly or indirectly. AI actors are a subset of stakeholders.

Section 1: Principles for responsible stewardship of trustworthy AI

II. RECOMMENDS that Members and non-Members adhering to this Recommendation (hereafter the "Adherents") promote and implement the following principles for responsible stewardship of trustworthy AI, which are relevant to all stakeholders.

III. CALLS ON all AI actors to promote and implement, according to their respective roles, the following Principles for responsible stewardship of trustworthy AI.

IV. UNDERLINES that the following principles are complementary and should be considered as a whole.

1.1. Inclusive growth, sustainable development and well-being

Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.

1.2. Human-centred values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

1.3. Transparency and explainability

AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:

i. to foster a general understanding of AI systems,
ii. to make stakeholders aware of their interactions with AI systems, including in the workplace,
iii. to enable those affected by an AI system to understand the outcome, and,
iv. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

1.4. Robustness, security and safety

a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.

b) To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system's outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.

c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

1.5. Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

Section 2: National policies and international co-operation for trustworthy AI

V. RECOMMENDS that Adherents implement the following recommendations, consistent with the principles in section 1, in their national policies and international co-operation, with special attention to small and medium-sized enterprises (SMEs).

2.1. Investing in AI research and development

a) Governments should consider long-term public investment, and encourage private investment, in research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.

b) Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.

2.2. Fostering a digital ecosystem for AI

Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.

2.3. Shaping an enabling policy environment for AI

a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate.

b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

2.4. Building human capacity and preparing for labour market transformation

a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.

b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market.

c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.

2.5. International co-operation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.

b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.

c) Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.

d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.

VI. INVITES the Secretary-General and Adherents to disseminate this Recommendation.

VII. INVITES non-Adherents to take due account of, and adhere to, this Recommendation.

VIII. INSTRUCTS the Committee on Digital Economy Policy:

a) to continue its important work on artificial intelligence building on this Recommendation and taking into account work in other international fora, and to further develop the measurement framework for evidence-based AI policies;

b) to develop and iterate further practical guidance on the implementation of this Recommendation, and to report to the Council on progress made no later than end December 2019;

c) to provide a forum for exchanging information on AI policy and activities including experience with the implementation of this Recommendation, and to foster multi-stakeholder and interdisciplinary dialogue to promote trust in and adoption of AI; and

d) to monitor, in consultation with other relevant Committees, the implementation of this Recommendation and report thereon to the Council no later than five years following its adoption and regularly thereafter.

Background information
The Recommendation on Artificial Intelligence (AI) – the first intergovernmental standard on AI – was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy (CDEP). The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, the Recommendation focuses on AI-specific issues and sets a standard that is implementable and sufficiently flexible to stand the test of time in this rapidly evolving field.

The OECD's work on Artificial Intelligence and rationale for developing the OECD Recommendation on Artificial Intelligence
The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI and calls on AI actors to promote and implement them:
- inclusive growth, sustainable development and well-being;
- human-centred values and fairness;
- transparency and explainability;
- robustness, security and safety;
- and accountability.
In addition to and consistent with these values-based principles, the Recommendation also provides five recommendations to policy-makers pertaining to national policies and international co-operation for trustworthy AI, namely:
- investing in AI research and development;
- fostering a digital ecosystem for AI;
- shaping an enabling policy environment for AI;
- building human capacity and preparing for labour market transformation;
- and international co-operation for trustworthy AI.
The Recommendation also includes a provision for the development of metrics to measure AI research, development and deployment, and for building an evidence base to assess progress in its implementation.
Artificial Intelligence (AI) is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It is deployed in many sectors ranging from production, finance and transport to healthcare and security.
Alongside benefits, AI also raises challenges for our societies and economies, notably regarding economic shifts and inequalities, competition, transitions in the labour market, and implications for democracy and human rights.
The OECD has undertaken empirical and policy activities on AI in support of the policy debate over the past two years, starting with a Technology Foresight Forum on AI in 2016 and an international conference on AI: Intelligent Machines, Smart Policies in 2017. The Organisation also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.
This work has demonstrated the need to shape a stable policy environment at the international level to foster trust in and adoption of AI in society. Against this background, the OECD Committee on Digital Economy Policy (CDEP) agreed to develop a draft Council Recommendation to promote a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and applies to all stakeholders.
Complementing existing OECD standards already relevant to AI – such as those on privacy and data protection, digital security risk management, and responsible business conduct – the Recommendation focuses on policy issues that are specific to AI and strives to set a standard that is implementable and flexible enough to stand the test of time in a rapidly evolving field. The Recommendation contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system” and “AI actors”, for the purposes of the Recommendation.
More specifically, the Recommendation includes two substantive sections:
- Principles for responsible stewardship of trustworthy AI: the first section sets out five complementary principles relevant to all stakeholders: i) inclusive growth, sustainable development and well-being; ii) human-centred values and fairness; iii) transparency and explainability; iv) robustness, security and safety; and v) accountability. This section further calls on AI actors to promote and implement these principles according to their roles.
- National policies and international co-operation for trustworthy AI: consistent with the five aforementioned principles, this section provides five recommendations to Members and non-Members having adhered to the draft Recommendation (hereafter the "Adherents") to implement in their national policies and international co-operation: i) investing in AI research and development; ii) fostering a digital ecosystem for AI; iii) shaping an enabling policy environment for AI; iv) building human capacity and preparing for labour market transformation; and v) international co-operation for trustworthy AI.
An inclusive and participatory process for developing the Recommendation
The development of the Recommendation was participatory in nature, incorporating input from a broad range of sources throughout the process. In May 2018, the CDEP agreed to form an expert group to scope principles to foster trust in and adoption of AI, with a view to developing a draft Council Recommendation in the course of 2019. The AI Group of experts at the OECD (AIGO) was subsequently established, comprising over 50 experts from different disciplines and different sectors (government, industry, civil society, trade unions, the technical community and academia); see http://www.oecd.org/going-digital/ai/oecd-aigo-membership-list.pdf for the full list. Between September 2018 and February 2019 the group held four meetings: in Paris, France, in September and November 2018; in Cambridge, MA, United States, at the Massachusetts Institute of Technology (MIT) in January 2019, back-to-back with the MIT AI Policy Congress; and finally in Dubai, United Arab Emirates, at the World Government Summit in February 2019. The work benefited from the diligence, engagement and substantive contributions of the experts participating in AIGO, as well as from their multi-stakeholder and multidisciplinary backgrounds.
Drawing on the final output document of the AIGO, a draft Recommendation was developed in the CDEP and with the consultation of other relevant OECD bodies. The CDEP approved a final draft Recommendation and agreed to transmit it to the OECD Council for adoption in a special meeting on 14-15 March 2019. The OECD Council adopted the Recommendation at its meeting at Ministerial level on 22-23 May 2019.
Follow-up, monitoring of implementation and dissemination tools

The OECD Recommendation on AI provides the first intergovernmental standard for AI policies and a foundation on which to conduct further analysis and develop tools to support governments in their implementation efforts. In this regard, it instructs the CDEP to monitor the implementation of the Recommendation and report to the Council on its implementation and continued relevance five years after its adoption and regularly thereafter. The CDEP is also instructed to continue its work on AI, building on this Recommendation, and taking into account work in other international fora, such as UNESCO, the Council of Europe and the initiative to build an International Panel on AI (see https://pm.gc.ca/eng/news/2018/12/06/mandate-international-panel-artificial-intelligence and https://www.gouvernement.fr/en/france-and-canada-create-new-expert-international-panel-on-artificial-intelligence).
In order to support implementation of the Recommendation, the Council instructed the CDEP to develop practical guidance for implementation, to provide a forum for exchanging information on AI policy and activities, and to foster multi-stakeholder and interdisciplinary dialogue. This will be achieved largely through the OECD AI Policy Observatory, an inclusive hub for public policy on AI that aims to help countries encourage, nurture and monitor the responsible development of trustworthy artificial intelligence systems for the benefit of society. It will combine resources from across the OECD with those of partners from all stakeholder groups to provide multidisciplinary, evidence-based policy analysis on AI. The Observatory is planned to launch in late 2019 and will include a live database of AI strategies, policies and initiatives that countries and other stakeholders can share and update, enabling the comparison of their key elements in an interactive manner. It will also be continuously updated with AI metrics, measurements, policies and good practices that could lead to further updates in the practical guidance for implementation.
The Recommendation is open to non-OECD Member adherence, underscoring the global relevance of OECD AI policy work as well as the Recommendation’s call for international co-operation.
Unofficial translation(s): German.
For further information please consult: oecd.ai.
Contact information: ai@oecd.org.