[Trusted quality AI systems] Year Started: 2020

Takayuki Ito

HyperDemocracy: Large-scale Consensus Support Platform based on Social Multiagent Systems.

Research Director
Takayuki Ito

Professor
Graduate School of Informatics
Kyoto University

Collaborator
Susumu Ohnuma Professor
Graduate School of Humanities and Human Sciences
Hokkaido University
Shun Shiramatsu Professor
Graduate School of Engineering
Nagoya Institute of Technology
Tokuro Matsuo Professor
Graduate School of Industrial Technology
Advanced Institute of Industrial Technology
Outline

This project realizes an SNS platform called “HyperDemocracy,” where software agents and humans collaboratively make democratic decisions. Agents that work on behalf of humans are distributed across the SNS and support the consensus-making process (a social multiagent system). These agents protect users from problems such as flaming and fake news, while supporting smarter consensus and group decision making.

Kentaro Inui

AI systems that can explain by language based on knowledge and reasoning

Research Director
Kentaro Inui


Professor
Graduate School of Information Sciences
Tohoku University / Visiting Professor, Mohamed bin Zayed University of Artificial Intelligence

Collaborator
Minao Kukita Associate Professor
Graduate School of Information Sciences
Nagoya University
Sadao Kurohashi Specially Appointed Professor
Graduate School of Informatics
Kyoto University / Director General, National Institute of Informatics
Daisuke Bekki Professor
Faculty of Core Science
Ochanomizu University
Outline

When humans perform knowledge-intensive activities, such as fact-checking and decision support, they can use language to explain the processes and reasons behind their decisions. In this project, we explore a computational paradigm for explaining decision processes with natural language, in a manner analogous to that of humans. By building AI systems that support human judgments with interactive explanations and by employing methods from the humanities and social sciences to understand the prerequisites of “explanation as communication”, we also aim to formulate design principles for explainable AI.

Isao Echizen

Social information technologies to counter infodemics

Research Director
Isao Echizen


Professor
Information and Society Research Division
National Institute of Informatics

Collaborator
Kazutoshi Sasahara Associate Professor
School of Environment and Society
Tokyo Institute of Technology
Noboru Babaguchi Specially Appointed Professor
Institute for Datability Science
Osaka University
Outline

This research establishes social information technologies that support diverse communication and decision-making while appropriately dealing with the potential threats posed by fake media (FM) generated by AI. We will develop FM generation, FM detection, and FM detoxification technologies for fake media of various modalities, classified into three types. Furthermore, by utilizing these technologies, we will establish decision-making support technologies that assist various decisions on SNS.

Masataka Goto

Building a Trusted Explorable Recommendation Foundation Technology

Research Director
Masataka Goto

Prime Senior Researcher
Human Informatics and Interaction Research Institute
National Institute of Advanced Industrial Science and Technology (AIST)

Collaborator
Yoshinori Hijikata Professor
Graduate School of Information Science
University of Hyogo
Shinichi Furuya Researcher
Research Laboratory
Sony Computer Science Laboratories, Inc.
Outline

To promote a society where people can feel secure in receiving personalized support from AI systems, this interdisciplinary research integrating informatics, neurophysiology, and social psychology aims to establish fundamental technologies that allow users of recommender systems to explore recommendation behaviors. The goal is to provide human-centered, controllable, and transparent recommender systems that can be used sustainably by consumers or producers as a trusted social infrastructure.

Kensaku Mori

Reliable Interventional AI Robotics sharing the ambiguity of AI diagnosis with medical professionals

Research Director
Kensaku Mori

Professor
Graduate School of Informatics
Nagoya University

Collaborator
Tadayoshi Aoyama Associate Professor
Graduate School of Engineering
Nagoya University
Yasuhisa Hasegawa Professor
Institute of Innovation for Future Society
Nagoya University
Outline

Our project proposes the concept of Reliable Interventional (RI) AI Robotics, in which a medical professional can diagnose and treat a patient with AI and robots as well as conventional medical devices, while the AI shares the points of ambiguity in its reliable suggestions. We also develop generic “reliable” AI technologies and an AI robot system equipped with these technologies. The aim of this project is to realize a framework and a practical model of “Reliable AI” in medical treatment by turning the ambiguity of AI’s judgments into an advantage and utilizing that ambiguity effectively. Furthermore, while developing the medical robot system with reliable AI, we work toward establishing “Reliable AI” that can be used at actual sites of medical treatment.
