Human Information Technology Ecosystem (人と情報のエコシステム)
2018.09.01

"To the Future of People and Information Technology" Toward building a better relationship between technology and human society.


Dominique Chen: IT entrepreneur, researcher
Arisa Ema: Assistant Professor, Science Interpreter Training Program, Komaba Organization for Educational Excellence, University of Tokyo
Toru Nishigaki: Professor Emeritus of the University of Tokyo and Professor of the Department of Communication at Tokyo Keizai University

We are in an age in which information technology, including artificial intelligence (AI), evolves at an accelerating pace. What meaning and impact does technology have for human beings? We interviewed pioneers of the HITE community, who continue to think about how technologies are implemented in society and who provide venues for dialogue on theoretical questions with no ready answers, about how we can approach building a future society together.

Meaning and social influence of information technology

First, please tell us about your own activities and how you became involved with HITE.

Dominique Chen (hereinafter referred to as Chen): I was originally a researcher in information studies, and since 2008 I have been running an IT startup, Dividual Inc. (*merged into and acquired by SmartNews Inc. in January 2018), whose main business is the development of web and smartphone applications. I am also involved in activities bridging academia and the private sector, including serving as a director of the nonprofit corporation Commonsphere, which provides the "Creative Commons License" (commonly known as "CC") that allows creators to choose for themselves how their works are licensed on the web.

Last year, I saw the public call for proposals for the "Human Information Technology Ecosystem" program (hereafter referred to as "HITE"), and I still remember how exciting it was to see such an effort finally being initiated in Japan. For several years I had watched overseas researchers and businesses come together to create venues for publicly discussing the social risks of advanced information technologies such as AI, with US non-profit organizations such as the Future of Life Institute and OpenAI leading these activities. Aware that Japan, too, needs to keep up with these trends in time, we launched "Development and Dissemination of Information Technology Guidelines for Promoting Japanese-style Wellbeing" together with Associate Professor Hideyuki Ando of Osaka University and the principal investigator, Dr. Junji Watanabe of NTT Communication Science Laboratories, with whom I had been discussing these issues.


Arisa Ema (hereinafter referred to as Ema): My research field is known as "STS," which stands for "Science, Technology and Society" or "Science and Technology Studies." I have been studying the relationship between science and technology and society, focusing on information technology and especially on the social implications of surveillance technology. In 2014, there was an incident (*1) in which the cover design of the journal of the Japanese Society for Artificial Intelligence raised significant controversy. At that time, I noticed that researchers studying the social impacts of information technology, including myself, had no place to interact with the researchers developing the technology itself. Accordingly, my colleagues and I launched a research group called "Acceptable Intelligence with Responsibility (AIR)" to discuss the social implications of information technologies, including AI, from interdisciplinary viewpoints. In our research activities, AIR members visit sites such as a robot hotel and farms where information technology has been introduced, to investigate the social implications of those technologies. We also carried out an oral history study of the second AI boom (*2) in the 1980s.

Toru Nishigaki (hereinafter referred to as Nishigaki): My current specialty is mainly the philosophical study of the information society, but at the start of my working career I was engaged in designing a mathematical model of a mainframe operating system at a major manufacturer. Since then, I have been involved with computers for more than 40 years. In 1980, when I was studying abroad at Stanford University, the second AI boom was under way and knowledge engineering was very popular. Expert systems (*3), which aimed to substitute for human experts such as lawyers and doctors, were being developed especially eagerly. Meanwhile, "STS" was also popular at Stanford, and the impacts of science and technology on society were actively discussed. This gave me the chance to reconsider my research from a social standpoint.

In the 1980s, enormous budgets were poured into AI development in Japan as well, including the Fifth Generation Computer project, at many research institutions. After returning to Japan I joined the boom myself, getting involved for a while in the development of the fifth generation computer. However, the hard work at the factory site damaged my health, and I became a university teacher in the late 1980s. Soon the AI boom ended and the "winter" period of AI research began. I think the biggest reason AI development failed in Japan was that the meaning and the social influence of AI had not been given enough consideration; only intense technological efforts were made. In other words, the philosophy and the ideas behind AI were largely ignored during development.

For that reason, in the 1980s I independently started research on AI from a philosophical viewpoint, drawing especially on contemporary French postmodern philosophy. Later, in 2000, the Graduate School of Interdisciplinary Information Studies (Interfaculty Initiative in Information Studies) at the University of Tokyo was launched, advocating the "integration of the liberal arts and computer engineering." I joined it as one of the founding faculty members, and since then I have been elaborating a new information theory called "Fundamental Informatics," which originated in Japan.


Asking "questions" that opens discussions for everyone

Dr. Chen and Dr. Ema both take up themes related to human subjectivity, such as "well-being" and "diverse values."

Chen: In recent years, mainly in Silicon Valley in the US, a movement to use science to find reproducibility in aspects of personal experience such as well-being and mindfulness, which had long been considered merely subjective, has been gaining momentum. Coincidentally, around the time HITE announced its call for proposals, Dr. Watanabe and I were given the opportunity to supervise the Japanese translation of "Positive Computing: Technology for Wellbeing and Human Potential" (MIT Press), which summarizes these trends, and the book was published in January this year as "ウェルビーイングの設計論−人がよりよく生きるための情報技術" (BNN Shinsha).

However, "Well-being" in Western countries is considered to be the pursuit of magnification of an individual's happiness. Under this concept, the level of positive emotion is evaluated as a foundation of research, but I thought that such numerical values of individualism might not fit in with Japanese people who greatly value the importance of harmony with others. So, while making full use of information technology, it became our research theme to think about how we can build a concept of well-being that fits the local culture that nurtures Japanese values.

Ema: What the interviewer just said raises a question for me: does a "sense of values" really originate in individual subjectivity? I think values are fostered in a society and a community through interactions among people, goods, systems, the environment, and so on. Today, people's values are diversifying at a fast pace, and are also becoming localized and fragmented. In this situation, I believe it is very important to build a foundation for "dialogue" that provides opportunities for intensive discussion across diverse values. Rather than coming up with "solutions," what is essential is creating "questions" that everyone can tackle. The challenge is to create questions of the right size and quality to interest everyone, and that is one of the goals our project would like to pursue.

In order to ask "questions that open discussions for everyone," there is also a need to close the gap between actual research and the images of AI and information technology disseminated in the media.

Chen: Since I am also an IT engineer, I feel that the everyday "physical sensations" of engineers are not well communicated to society. If people argue subjectively about risk or optimism without any real understanding of the technology, the discussion becomes pointless, doesn't it? As a way out of that, I think technology should be discussed on the basis of evidence, such as data from the many studies available, and it is very important to have academic domains like HITE supporting such discussions. Since companies are structured to pursue short-term profits, they have no incentive to consider what technology means for human beings over the medium to long term.

Nishigaki: In any discussion, I believe the most important thing is to go back to "principles" and think about the issue in question radically. For example, when arguing whether the Singularity hypothesis is correct, the discussion becomes confused unless we carefully consider the standpoint of each discussant. People's values are related to their social positions; they are not merely subjective but intersubjective, grounded in social systems and cultures. If we do not pay attention to such factors, problems cannot be solved appropriately. Within HITE as well, I would like to emphasize respect for principles, so that we can deepen mutual understanding while avoiding discussions that diverge on a project-by-project basis.

Ema: I agree that it is important to return to the basic terms of the discussion. When implementing new technologies and systems in society, simply "replacing" existing technologies and systems with new ones is not enough. We are at the stage of going back to principles, as Dr. Nishigaki says, and rethinking things such as democracy and what responsibility means for us. For instance, where there are multiple competing notions of what counts as "justice," it is necessary to sort them out first.

Nishigaki: I agree with you. In 2014, I wrote a book titled "ネット社会の『正義』とは何か 集合知と新しい民主主義 (What Is "Justice" in the Internet Society? Collective Intelligence and New Democracy)" (Kadokawa Sensho). Its ultimate goal is to establish algorithmic standards for things such as justice, safety, and profit by connecting, in some way, the ideas of social science and public philosophy with decisions made by AI. To that end, I suggested that we should keep a balance, through dialogue, between the utilitarian pursuit of value for the whole community and liberal consideration for the basic rights of each of its members.

To create a better relationship between information technology and society

How do you perceive "a society where people and information technology fit in," which is the goal of HITE?


Ema: Current information technology is good at prioritizing optimization and creating efficiency and convenience. However, if optimization is carried out according to today's values, disparities will gradually widen, and we may end up with systems whose benefits only specific groups of people can enjoy. I take it that the "well-being" guidelines Dr. Chen has been proposing aim to embed alternative ideas, with purposes different from such optimization, thoroughly into specific technologies.

Chen: Right. The computers around us are built on a single value: being faster and more convenient. But there is not enough discussion about whether that value is right, or whether it is meaningful. We should also consider the trade-off that convenience may deprive human beings of their independence and autonomy; if we lose the ability to ask questions, our uniqueness in comparison with machines will be increasingly lost.

Ema: Moreover, as machines penetrate our lives without our noticing, we face the problem of not even realizing that we have stopped questioning.

Chen: I agree. The Internet has its pros and cons. Before we started using Google on a daily basis, acquiring knowledge took time and effort, such as going to the library to read books. Now we can easily look up unfamiliar words on Wikipedia and understand them immediately. I have a feeling that this convenient, rapid process of obtaining knowledge may involve trade-offs in the quality and effectiveness of the experiences through which knowledge is acquired. I think we must steadily demonstrate these kinds of effects through social science research.

At the same time, problems today are becoming so complicated that they cannot be worked out without the fusion of different fields and interdisciplinary efforts, and it seems to me that setting up "principles" is becoming more and more difficult. I understand Dr. Ema's dialogue method as a process of first discovering common protocols and then extracting structural patterns, such as prototypes of consensus, from them. That reminds me of the methods of the Internet Engineering Task Force, the international organization that establishes standards for Internet technologies and that supported the dawn of the Internet era. Their approach is to share a broad consensus while everyone keeps developing working code, in other words, while building systems that actually run.

Ema: At first, we would like to develop a rough consensus within a small network, through trial and error. What I care about in this process are "reflexivity," "locality," and "transparency." Reflexivity means constantly checking our own position, values, and expertise against a changing reality. Locality means observing what is actually going on at the site right now; in STS this is also called "local knowledge." It is important to communicate with the people who are actually facing the problems and working on site, while recognizing what our own point of view is and what we can observe from it. And transparency means keeping a record of the process.

Chen: Because each site faces a different reality, the discussions and principles will transform into something different in each place. From a traditional modernist, rationalist way of thinking, someone might call that self-contradictory and mere armchair theorizing. But I believe we need to build theories that can hold such contradictions, and perhaps science in general has reached the point of facing a similar transformation.

Nishigaki: The "Information" tends to be defined on logical absolutism, so it is usually seen as something non-living and mechanical. Therefore a relative point of view has not entered into the current information education. But from now on, we must consider that information often reflects individuality. In addition, we must admit the fact that not only human beings, but also animals like dogs, cats, and even insects, are actually watching the world with their own eyes. It is worth directing our attention to the diverse "umwelt (environmental world)".

Nevertheless, the current drive toward optimization in AI runs in the opposite direction, and I am afraid the technology could advance to the point of spinning completely out of control if left alone. Many writers and artists raise alarms about such a future, but merely expressing anxiety in their works only lets off steam. Those engaged in technological development need to think the issues through from the inside. To that end, I would like HITE to incorporate not only the viewpoints of AI researchers in academia but also those of engineers actually involved in development projects. The problem, however, is that people hard at work in the midst of development have little time to think about the social effects of their products. In this regard, Dr. Chen, you are a rare researcher in Japan who spans both academia and the development site. I think it is important to keep thinking about social effects while implementing the technologies.

Ema: There are many varieties of "the site." In addition to those who develop technology, the experts who use it while it is being developed are also important. For instance, if you ask doctors about a remote medical system, there are so many things that have to be arranged before the system can be implemented: the usability of the interface, insurance coverage, maintenance, technical training, and so on.

At the same time, these are the people who actually reconfigure the systems based on the fundamental question of what medical care should be. I would like to value the process of organizing and recording our interactions with such sites.

Chen: What I have been thinking about recently is how we can overcome the worship of usefulness and incorporate seemingly irrational activities into systems thinking. It has become widely known that Google has adopted mindfulness within the company and sets aside time for employees to meditate. These activities have shown results in team building, helping middle managers become better leaders. However, it is difficult to encourage engineers dealing with a flood of work every day, or ordinary people for that matter, to take up meditation to calm their minds. If we could present evidence of its usefulness, such as showing that practicing meditation lowers blood sugar levels, for example, that would be a strong force for spreading the practice through society. I would like to believe that we need finely tuned prescriptions matched to the state of the human mind, while being careful not to create a passing fad or bad pop science.

Ema: It is also important to be careful about how easily the phrase "fit in" is used. It is of course important for society to fit in with technologies, but if we fit in with them too well, they become invisible. We need to build in mechanisms for looking at things reflexively, so that familiarity does not generate new stereotypes that end up excluding other new technologies.

Chen: On the other hand, we can also say that people working on site genuinely want fundamental ideas. In large companies, specifications are often decided on a hit-or-miss basis, and the outcome is frequently a system that nobody wanted. For example, sales reps and technical staff, who see things from different points of view, often cannot reach agreement because they do not share the same protocol. If they could establish a reliable way of sharing visions and protocols within the team, I think they could build systems without such obstacles.

Ema: Trust is important for everything. The AIR research group spent a year building trust with people from completely different cultures. If we had tried to push discussions forward without establishing trust, they would almost certainly have failed. With trust, agenda setting and decision making can be carried out more appropriately.

Incorporating Humor as a Trigger for Dialogue

Chen: In 2016, two machine-learning researchers at Shanghai Jiao Tong University published a paper claiming that criminality can be inferred through facial recognition, by having AI learn from facial photographs of criminals. In response, data scientists in the US argued that the human process of identifying criminals contains social and racial biases to begin with, and that such a system does nothing more than amplify that distortion through AI. They also argued that we must always be skeptical about whether the data given to AI has been biased by human beings.

Ema: I think those researchers were working in "good faith" to detect criminals. Our HITE projects do not aim to point out such biases head-on, but to create opportunities for people to notice that there are different ways of thinking. For example, there are artists who try to evade surveillance cameras through asymmetric makeup and hairstyles. Their activities make people realize, "If criminals did the same thing, they could not be tracked. Isn't trying to track criminals with these surveillance cameras pointless to begin with?" I think art and humor can draw attention in ways like this.

Chen: I agree that activism and art will play an increasingly important role. The acts of activists who wear such makeup are a kind of speculative design (design meant to pose questions and spark discussion), casting questions from a standpoint different from society's common perceptions, and that is one of the social functions art performs. My own experience leads me to believe that dialogue is born from the provocation of art, and I place my hopes in that.

Ema: Framing the issue as "Let's talk about the disadvantages and ethics of technology in society" is sometimes a top-down attitude; it is too formal, and only people who are already interested show up. I want to know what people are actually thinking, which is why we go out and do fieldwork. I am interested in places where people gather naturally and talk happily, through art and humor.

What kinds of activities do you plan to conduct for HITE's three-year project?

Chen: I would like to establish a platform for addressing problems between information technology and society for which no solutions have yet been offered, and to build the basic discussions that will serve as guidelines for that platform. At first, I was not sure which people from which fields would be the right members for the project, so I would be encouraged if someone like Dr. Ema, who specializes in dialogue methods, could join us.

Ema: I am also looking forward to learning from you! The AIR research group has no leader; if someone comes up with an interesting idea, we form a new team to work on it. For us, dialogue is not a goal but a tool. Through dialogue, we can look back on the past, organize the present, build networks, and take a small step toward the future.

Nishigaki: Personally, my goal is to investigate the principles and universality of information technology while respecting relative values and liberalism. Within HITE, as a researcher, I hope to continue my information studies by communicating with people working on actual sites.


Toru Nishigaki
HITE Advisor. Born in Tokyo in 1948. Graduated from the Department of Mathematical Engineering and Information Physics, School of Engineering, the University of Tokyo. Joined Hitachi, Ltd. Engaged in the research and development of computer software. Stayed temporarily as a visiting scholar at Stanford University. Currently Professor Emeritus of the University of Tokyo and Professor of the Department of Communication at Tokyo Keizai University, after working as a Professor of Graduate School of Interdisciplinary Information Studies (Interfaculty Initiative in Information Studies) at the University of Tokyo. Doctor of Engineering. Specializes in Informatics and Media Theory.

Arisa Ema
Principal Investigator of "Acceptable Intelligence with Responsibility - Values Awareness Support (AIR-VAS)" of HITE Project. Assistant Professor, Science Interpreter Training Program, Komaba Organization for Educational Excellence, University of Tokyo. Director of NPO Citizen's Science Initiative Japan. Co-founder of Acceptable Intelligence with Responsibility Study Group (AIR: http://sig-air.org/) established in 2014, which seeks to address emerging issues and relationships between artificial intelligence and society. Visiting researcher at RIKEN Center for Advanced Intelligence Project since 2017. PhD.

Dominique Chen
Research and development member of "Development and Dissemination of Information Technology Guidelines for Promoting Japanese-style Wellbeing," a HITE Adopted Project. Associate Professor at Waseda University, School of Culture, Media and Society, since 2017. Director of the specified nonprofit organization Commonsphere. Co-founder of Dividual Inc. Participant in the HITE Adopted Project "Building co-creation community Alife Lab. for co-evolution of people and information technology."

*1 / The Japanese Society for Artificial Intelligence cover controversy. In 2014, the illustration on the cover of the journal of the Japanese Society for Artificial Intelligence sparked a major debate on social media over whether it discriminated against women: the image of a "female-type android robot" was criticized as projecting an outdated image of women onto AI and robots. AIR members wrote an article on the incident ("Ethics and Social Responsibility: Case Study of a Journal Cover Design Under Fire," CHI 2015).

*2 / The first, second, and third AI booms. Since the term "artificial intelligence" was first used in 1956, AI has gone through three booms as its technologies developed. The first boom came in the 1960s, when reasoning and search by computer became possible; the second came in the 1980s, when expert systems (*3) became practical. The current third boom has been driven largely by machine learning, which focuses on learning functions such as pattern recognition, perception, and vision rather than logic, and more recently by advances in deep learning.

*3 / Expert systems: an approach to having AI solve real, complex problems by making a computer emulate the knowledge and decision-making ability of human experts.

Date of interview: January 23, 2016

This interview was printed in our Program Introduction booklet Vol.01. If you would like to read the other articles, please click the links below.

Human Information Technology Ecosystem (HITE) Program Introduction booklet - Vol.01 Vol.02