Human-Information Technology Ecosystem
2018.09.01

Special Tripartite Discussion: "What are 'responsibility' and the 'subject' in the age of artificial intelligence?"


Kazuya MATSUURA (philosophy), Takako YOSHIDA (psychology), Tatsuhiko INATANI (law)

If artificial intelligence causes an accident, who will take responsibility? In an era when information technology is rapidly penetrating society, this important question has yet to be resolved. To tackle this contemporary problem, three researchers with different areas of expertise - philosophy, psychology, and law - met to discuss what "responsibility" and the "subject" mean in the age of artificial intelligence.

First of all, please tell us about your research and the projects you are carrying out under the "Human-Information Technology Ecosystem" program (hereinafter: HITE).

Kazuya MATSUURA (hereinafter: MATSUURA): My specialty is Greek philosophy, mainly that of Aristotle. While some may feel that Greek philosophy is too remote from the present, I believe the Greek philosophers offer a timeless way of thinking that we still share unconsciously, and that their discussions and values serve as hints for comprehending today's society. At HITE we launched a project titled "Consideration and Suggestion on the Concept of Responsibility in the Sophisticated Information Society" last year, and starting this fiscal year we are carrying out the project "Consideration on the concept of 'responsibility' between autonomous machines and citizenries". These projects aim to clarify and reconsider the concept of "responsibility" in the coming sophisticated information society from the viewpoint of the humanities.

Our project emphasizes human history and culture. When some sort of "autonomous machine," such as a car with AI, spreads throughout our society, the point of argument will not be how the machines actually move or behave, but how ordinary people come to accept them. Therefore, rather than seeking a definition of autonomous machines, we ask a different question: "What abilities does a machine need in order to be regarded as the equivalent of a human being?" This, of course, leads to the more universal question of what a human being is. If we want a society co-existing with autonomous machines to be closer to the ideal, we need to attend not only to the social models offered by modern Western philosophy and political theory, but also to models from other periods and regions, such as ancient Greece, the Edo period of Japan, and ancient India.

Takako YOSHIDA (hereinafter: YOSHIDA): In our laboratory, we are working on machines and systems that operate in accordance with the actions of the human body. At HITE, our theme is to observe the human user's psychological state in order to answer the question: when a human being and an artificial system are integrated and working together as one, which is the subject of a specific action, subjectively and objectively - the human or the artificial system? We are especially interested in the user's subjective feeling of affinity between machines and the human body, approached from brain science. For example, when a wearable power-support robot is attached to the body and the user works with it, under certain conditions the user gradually comes to feel that the machine system is part of his/her own body. In the end, he/she feels strongly that he/she alone is controlling his/her own body, and loses the sense that the machine may also be controlling it. My question is: who is responsible when some socially unwanted event happens in that state? From the person's own perspective, every action feels as if it were made on his/her own intention alone, which leads him/her to feel responsible even if the problem lies physically on the machine side. On the other hand, in some cases a person whose own action caused the result declares it a malfunction of the machine and passes responsibility to the manufacturer. Who judges whether the person really believes this or is lying? Furthermore, he/she can also declare that the machine took over his/her bodily actions. This is a tough situation because, from a third person's point of view, when the machine and the person are working together it is difficult to distinguish which is controlling which. Under such circumstances the boundary between machines and humans becomes ambiguous.
I think it is very important to figure out the "subject" of, and the "responsibility" for, a specific act in this type of human-machine co-operation.


Tatsuhiko INATANI (hereinafter: INATANI): My specialty is criminal justice and criminology. Specifically, I conduct research on substantive and procedural criminal law and legislative theory, while applying theories from areas adjacent to law such as philosophy, economics, sociology, and cognitive science. Nowadays, with the progress of globalization and technology, the idea that "a human being is a rational 'subject' with 'free will'", which has been regarded as a solid premise of the modern legal world, has begun to waver. So, making full use of contemporary philosophy, cognitive psychology, behavioral economics, and other knowledge, I critically examine the current criminal justice system and explore a new one. In the HITE project "Legal being: electronic personhoods of artificial intelligence and robots in NAJIMI society, based on a reconsideration of the concept of autonomy", in which I participate, we have dealt with recent hot legal issues such as whether legal personality should be given to artificial intelligence. We are also developing discussions on what legal responsibility means for the development and use of artificial intelligence, especially on what kind of punishment system is necessary for artificial intelligence to exist in harmony with human beings.

Is there no "free will" for humans?

What is the most important point concerning "responsibility and subject" in the artificial intelligence era?

INATANI: As I mentioned earlier, modern criminal law, based on the modern philosophy of the West, rests on the premise that humans with free will can control objects without being influenced by the external environment. However, artificial intelligence such as deep learning continues to develop, so complete control over it cannot be assumed. Moreover, according to recent neuroscience and brain science, human beings always exist under the influence of the external environment, so it is becoming clear that the existence of a firm free will is doubtful in the first place. The premise of the argument that "if a harmful event occurs, the person who could have controlled the danger should take responsibility" is therefore now questionable.

YOSHIDA: In cognitive science and brain science as well, the possibility that human beings do not actively and subjectively control their own actions as much as they believe has been discussed. We can find a variety of phenomena in which a person's subjective sense of controlling his/her own actions does not correspond to the actual actions in the physical world. In addition, the possibility that people tend to deny responsibility for actions coerced by others, or by agents such as machines and AI, has recently been discussed. Before debating the social question of "who is responsible", it is necessary to carefully consider these characteristics of human cognition and behavior.

MATSUURA: The very concepts of "responsibility" and "subject" are products of modern Western philosophy. One background to these concepts may be "the principle of alternative possibilities," which enables us to blame someone and assign guilt, claiming that he/she could have chosen another action. However, this principle is not accepted by everyone. Recalling an ancient Indian idea, for example, we could say that an accident was caused "because of his/her karma from a previous life." Or, if a slave committed a crime in ancient Rome or Greece, it was the master who was accused and had to pay compensation, on the grounds of a failure of supervision. So our project, referring to social models of the past, reconsiders whether a society formed around the modern concepts of "responsibility", the "subject", and "the principle of alternative possibilities" can really lead us to a happy and prosperous future.

At what point does a machine become "human"?

When we ask about responsibility on the artificial intelligence side, at what level can a machine be judged to be "autonomous"?

MATSUURA: I think that question will become more important as artificial intelligence penetrates people's daily lives. However, rather than defining levels of machine autonomy, or "autonomy" itself, it is more productive to consider the possibility that ordinary people, including me, would come to regard artificial intelligence as "having autonomy and intelligence like human beings", if we aim to adopt such machines into society as "culture". The answer, then, inevitably depends on how each culture sees the "autonomy" or "human beings" in question.

YOSHIDA: There is scientific research on "animacy" that reveals how and when humans "feel" intelligence and life in mechanical objects and computer graphics, and this seems easier to define and study than a precise scientific account of what "true" intelligence and life are. This is also relevant to the Turing test, which may measure "artificial unintelligence" rather than artificial intelligence. In other words, human beings can have the subjective feeling, or illusion, that an AI is a living thing, even when it may not have life. We have to think carefully about whether today's rapidly developing AI should be discussed in the same way.

INATANI: Within the framework of mainstream criminal law, we may try to capture autonomy from the viewpoint of the sanity that human beings must have in order to be legally responsible. However, as I mentioned earlier, the answer to the question of what a human being capable of bearing legal responsibility is, is itself changing. It might be time to rethink the premises of the traditional framework based on modern philosophy, which essentializes the mode of existence of human beings and the sanctions against them.


MATSUURA: If we seriously pursue the idea of "autonomous human beings", it may mean people who are not influenced by the outside world at all. Whether such people exist is doubtful, except perhaps for the great philosopher Immanuel Kant.

YOSHIDA: I never met Kant in person. But if there were a person, or an artificial system such as an AI or a machine, that lived totally independent of its surrounding environment, it might act completely independently of human social common sense. I wonder whether it could live without trouble in ordinary human society.

The gap between technical "safety" and psychological "security"

Recently, there has been debate over whether the manufacturer or the driver is responsible for an accident involving a self-driving car. What do you think?

INATANI: Automated driving is classified by the SAE into technical levels 0 through 5, and the conditions differ completely at each level, so no unconditional answer can be given.
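For readers unfamiliar with the classification Inatani refers to, the SAE levels can be summarized roughly as follows. This is an informal paraphrase of the SAE J3016 taxonomy, not the standard's exact wording, and the helper function is only an illustrative simplification:

```python
# Informal summary of the SAE J3016 levels of driving automation.
SAE_LEVELS = {
    0: "No automation: the human driver performs all driving tasks.",
    1: "Driver assistance: the system handles steering or speed, not both.",
    2: "Partial automation: the system steers and controls speed; the driver monitors at all times.",
    3: "Conditional automation: the system drives in limited conditions; the driver must take over on request.",
    4: "High automation: the system drives itself within limited conditions; no takeover is expected there.",
    5: "Full automation: the system drives everywhere, under all conditions.",
}

def driver_must_stay_ready(level: int) -> bool:
    """Rough rule of thumb: at levels 0-3 a human must remain available to drive."""
    return level <= 3
```

Level 3, discussed below, is the boundary case: the machine drives, yet the human must remain ready, which is exactly the cooperative state Yoshida identifies as problematic.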

YOSHIDA: I am concerned that the design concept of Level 3 semi-automated driving is not widely understood: the machine drives in specific places such as highways, and the person sits in the driver's seat, according to the design philosophy, solely to respond to emergencies. Compared to scenarios in which the human and the artificial system operate relatively independently, it seems possible that more accidents occur in a state where the machine and the human are cooperating. This may be what is known empirically in other automation fields, such as aircraft autopilot incident reports, and a certain number of researchers claim that, compared to fully automated driving, accidents are more likely when human operation intervenes halfway.

INATANI: Regarding Level 3, I think this kind of mindset may ultimately derive, unconsciously, from the modern philosophical premise that control of objects by a human being with free will is better than control of the human being by a machine. In modern law the driver is a human being, so he/she can and should manage the danger of objects based on free will. The idea that a human is responsible for detecting and controlling the danger of the vehicle is then straightforward in a sense, although the problems it causes may be enormous.

YOSHIDA: Even supposing an artificial intelligence that could solve any problem perfectly were driving a train or an aircraft, there is still the question of whether human beings would want to ride it. Some people may think a human operator is necessary to cope with emergencies, since a human should be more responsible than a machine.

INATANI: There is a gap between the psychological sense of security and objective safety. That gap may be the biggest factor complicating the discussion.

YOSHIDA: Especially in the case of semi-automated driving, it is important to consider (1) what cognitive characteristics the driver has, and (2) how to maintain the driver's sense of responsibility in a safe and comfortable manner. Based on (1) and (2), the vehicle system may then be designed to operate while making the best use of these human characteristics.

MATSUURA: Where machines and humans work together, there are also approaches that design machine systems to extend or support human actions and abilities. I expect technology to support and enhance our abilities. Automated driving technology may well proceed in the direction of assisting and improving human driving ability rather than aiming for complete automation. That, I believe, is the very design that more than a few people really want, such as handicapped people and their supporters.

Instead of pursuing blame, creating an ideal social vision

INATANI: Either way, I do not think it is very productive to pursue only the responsibilities of users and developers whenever artificial intelligence causes danger. From now on, rather than trying to determine the essence of machines and human beings, everyone should first think about what kind of society we want, and then discuss the distribution of legal responsibilities appropriate to that purpose. In the case of automated driving, it is better for everyone to start thinking concretely about how they want it to exist and to spread those ideas through society individually. From that point of view, I feel we should loosen the traditional practice of governments establishing preliminary regulations based on idealized and sometimes very rigid images of society. Rather than discussing "how society should be" in order to fix every problem immediately, or even ex ante, we should adopt an approach that lets us gradually develop a society synthesizing human beings and artificial intelligence, centered on the debate over "what we want to be with them".

YOSHIDA: There are also ways of giving quick prototypes to future users, so that we can think while gathering feedback.


INATANI: I agree. Even after autonomous systems are actually distributed, if they cause problems we should thoroughly discuss them with the users, companies, professional technicians, and legal professionals involved: "If we can control it, let's do this"; "If we cannot yet control it, let's try that". I would like to prepare a legal system that embodies the ideal mode of existence little by little, while discussing the direction everyone is aiming for each time.

How to incorporate people's voices

Will artificial intelligence be accepted by society? Will problems arise that cannot be resolved by theory and institutions alone?

INATANI: Who takes part in the discussion is important. For example, opinions can now be gathered widely from the public through social media, and artificial intelligence can analyze those voices as data. It is important to listen not only to experts but also to the general public. On the other hand, if too much emphasis is placed on users' opinions, anxious voices like "Let's eliminate all danger first" may grow too loud and regulations may become too strict, so this also requires attention.

MATSUURA: I strongly agree that reflecting public opinion in the social system is a necessary procedure, but I am concerned about two problems. One is whose opinions are collected; the other is whether the consolidated opinions harmonize with current legal systems and culture. As machine learning shows, the output of an AI depends entirely on the input it is given during learning. If an AI learns from violent people, it will produce violent output; if the people gathered as its teachers are interested in a particular religion, its output will reflect that religion. The same can be true of us. So, while I do not deny that aggregating public opinion is an important process, the most important thing is the method by which those opinions are summarized. Otherwise, I am afraid we would end up making different rules for each region or personal taste, or laws that conform to an overly idealized image of the human being.
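Matsuura's point that the output depends on who supplies the input can be illustrated with a deliberately minimal sketch. This is hypothetical code, not from any project mentioned here: the "model" does nothing but learn the majority label of whatever opinions it is fed, so whoever selects the training sample determines what it says.

```python
from collections import Counter

def train_majority_model(opinions):
    """Toy 'learner': memorize the majority view of the training sample."""
    counts = Counter(opinions)
    majority = counts.most_common(1)[0][0]
    return lambda: majority  # the trained "model" just echoes the majority view

# A skewed sample of opinions produces a correspondingly skewed output.
biased_sample = ["ban it", "ban it", "ban it", "allow it"]
model = train_majority_model(biased_sample)
print(model())  # reflects the skew of the sample, not any balanced consensus
```

The same concern applies to opinion aggregation itself: the method of sampling and summarizing decides the result as much as the opinions do.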

Finally, please tell us about the possibilities for your future collaborative research.

YOSHIDA: I think it is important to create opportunities to listen to people in different fields and not to ignore their ideas. Since students and researchers in Mechanical Engineering at the Tokyo Institute of Technology have few opportunities for contact with humanistic thinking, I would like to invite Dr. Matsuura and Dr. Inatani to the Tokyo Institute of Technology soon to create opportunities to talk with people in different fields, including law and philosophy. I would also like to involve more people in this type of discussion.

MATSUURA: As Dr. Yoshida says, we scholars of the humanities should know more about current technology and have more opportunities to communicate with professional researchers in industry. Through such opportunities, I would like to have discussions that go beyond the boundaries of today's specialized fields. I would be happy if we could devise concrete methods for developing such discussions at educational institutions.

INATANI: In terms of education, I recently gave lectures to high school students. As children of the digital era, they understood the characteristics of information technology and artificial intelligence very quickly and were very open to the social changes these might bring about. Their ideas matter even now, and their flexible minds and everyday contact with artificial intelligence may drive its future in our society. Also, since law is a field that can collaborate with all others, it has the ability to reach everywhere. I hope to learn a lot from specialists in other fields and to work out concrete legal proposals in the future.


MATSUURA: We are in an era in which we must go beyond the framework on which modern law has been built. Rather than presenting a single best answer, we philosophers should present many options from which we can choose, and consider how society would change under each. Today I found much common ground with legal specialists like Dr. Inatani. It seems the time has come when we can create laws and societies more in line with our diverse values and lives.

INATANI: Building on this HITE initiative, I think we may be able to present a unique mode of law based on Japanese culture and philosophy. Applying a theory premised on an ambiguous division between "subject" and "object" to the construction of society, as we have discussed today, is particularly difficult for people in the West, who still try to maintain that division as part of their cultural pedigree. Because Japanese culture and philosophy are less rigid on this point, there may be possibilities to innovate new resolutions to current and future problems caused by artificial intelligence, beyond the framework of Western modern philosophy. I would like to make this collaboration an opportunity to take on initiatives that stimulate and involve the whole world.

Kazuya MATSUURA
Principal Investigator of the HITE project "Consideration on the concept of 'responsibility' between autonomous machines and citizenries". Associate Professor, Toyo University. Specializes in Greek philosophy. Formerly Full-Time Lecturer, Faculty of Teacher Education, Shumei University.

Takako YOSHIDA
Principal Investigator of the HITE project "Which controls which? Sense of agency when humans and semi-automated systems co-operate". Associate Professor, Tokyo Institute of Technology. Specialty is applied brain science.

Tatsuhiko INATANI
Member of the HITE project "Legal being: electronic personhoods of artificial intelligence and robots in NAJIMI society, based on a reconsideration of the concept of autonomy". Associate Professor, Kyoto University. Specialized in criminal justice and criminology.

Date of interview: November 26, 2017

This interview was printed in our Program Introduction booklet Vol.02. If you would like to read the other articles, please click the link below.

Human Information Technology Ecosystem (HITE) Program Introduction booklet - Vol.01 Vol.02