• HARAYAMA Yuko (Specialist in higher education studies, and science, technology and innovation policy)
    Emeritus Professor, Tohoku University

When we want to check on something, we simply type it into a search engine. Targeted advertisements flood our online activities, and we sometimes end up clicking on one of the links displayed. Before going out, we tap our mobile phones to check the local weather forecast. These behaviors have long since become habits, yet it is only recently that we started to hear about Artificial Intelligence (AI) almost every day.

Behind such everyday convenience lie a huge amount of data, high-performance computers to handle it, and AI to orchestrate the whole process. All this time, we have been using AI unwittingly, or so to speak, passively. However, the rise of generative AI has changed our perception of AI: it has become something we can actively use for our own purposes.

It is also worth noting that nowadays even the general public expresses opinions about the possibilities, potential or unknown risks, and social impacts of AI, whereas these used to be topics that specialists discussed within their own circles.

Compared to other technologies, AI is far more dual-natured. It may dramatically transform how society functions, how people live their lives, and how they interact with one another. As such, developers of AI could be held liable for the social impact of the technology, not just for its technical aspects. Governments, in turn, are required to formulate frameworks covering the various issues AI raises, including investment in AI as a state-of-the-art technology, appropriate AI governance that assumes its widespread use in society, safety, fairness, protection of privacy, and ethical considerations.

In response to such social demands, discussions on how to manage AI began in earnest in major countries and at the regional and international levels in the late 2010s.

In Japan, starting with the 2017 report by the Advisory Board on Artificial Intelligence and Human Society, which summarized the ethical, legal, economic, educational, and social implications of AI, the "Social Principles of Human-Centric AI" were formulated in 2019, and the "Action Plan for enhancing global interoperability of AI Governance" was agreed at the G7 Digital Ministerial Meeting held in 2023.

At the OECD, the AI Principles, built on a "human-centered" approach, were formulated in 2019. Japan was deeply involved in their formulation as a member country and participates in the Working Party on AI Governance (AIGO), which supports the implementation of OECD standards relating to AI. Meanwhile, UNESCO adopted the Recommendation on the Ethics of AI in 2021, in whose drafting Japan also participated.

In a similar time frame, the European Union established the High-Level Expert Group on AI in 2018 and formulated the Ethics Guidelines for Trustworthy AI in 2019 to set the direction of AI-related policies, while using the AI Alliance as a forum for policy dialogue. The Ethics Guidelines likewise put forward a human-centric approach, and needless to say, the discussions held to formulate them paved the way for the AI strategy formulated by the European Commission and the AI Act adopted by the European Parliament last month.

The influence of AI spreads instantly throughout the world, and "human-centered" can be interpreted in many different ways; international cooperation is therefore essential from the viewpoint of AI governance. However, looking at global trends surrounding AI, it is now apparent that the discussion has become dichotomous: the European Union focuses on a legal framework while the United States respects the autonomy of the private sector, and some call for hard law that legally binds the parties involved while others prefer soft law that promotes autonomous action.

I would like to conclude this essay by noting that, amid this situation as of June 2023, there is an attempt to overcome these differences in standpoints: the approach of the Committee on AI led by the Council of Europe, which is conducting negotiations toward an AI Convention grounded in human rights, democracy, and the rule of law. The chairperson is a representative from Switzerland, which is not an EU member state, and representatives from observer countries (U.S., Canada, U.K., Japan, Israel), in addition to representatives from the 46 Council of Europe member states, are working on drafting the treaty. While aiming at agreement on fundamental principles, the AI Convention would give ratifying countries a certain level of discretion regarding its scope of application, in view of the differences in their legal frameworks. In this way, the Convention encourages countries to compromise on contentious issues and seeks to increase the number of countries that agree with and ratify it once adopted. I can't wait to see how this all works out.
