Artificial Intelligence · G7 Summit · News
November 22, 2023
The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems
The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, developed under the Hiroshima AI Process launched at the G7 Summit in Hiroshima in May 2023, mark a significant stride in promoting safe, reliable, and trustworthy AI globally. This comprehensive framework, inclusive of entities from academia, civil society, and the private and public sectors, aims to guide the development and use of sophisticated AI systems, including advanced foundation models and generative AI technologies.
These guiding principles, intended as a dynamic document, build upon the existing OECD AI Principles. They respond to the evolving landscape of advanced AI systems, aiming to maximize their benefits while addressing the inherent risks and challenges. Applicable to all stakeholders in the AI ecosystem, these principles encompass the entire AI lifecycle, from design and development to deployment and usage.
The commitment to further evolve these principles reflects a collaborative approach, welcoming input from global communities and stakeholders across sectors. An international code of conduct based on these principles is also in development, emphasizing a transparent and responsible approach to developing advanced AI systems.
Acknowledging that jurisdictions may take different approaches to implementation, the principles advocate a risk-based methodology and call for organizations to participate actively in applying these guidelines. This includes developing monitoring tools and mechanisms, in collaboration with entities such as the OECD and GPAI, to ensure organizational accountability.
Central to these principles is the adherence to the rule of law, human rights, diversity, fairness, democracy, and a human-centric approach. Organizations are urged not to engage in the development or deployment of AI systems that could undermine democratic values, pose risks to safety and human rights, or facilitate terrorism and criminal misuse.
The principles align with international human rights obligations and frameworks, including the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises. They urge organizations to employ diverse testing methods, address vulnerabilities, and maintain transparency and accountability in AI system development and deployment.
In essence, these principles serve as a comprehensive blueprint for organizations, guiding them towards ethical, secure, and responsible AI development, steering the global community towards a more sustainable and technologically advanced future.