[Trustworthy AI] Year Started: 2020
Assistant Professor
Graduate School of Information Science and Technology
The University of Tokyo
Algorithmic decision-making is being adopted in many contexts, ranging from ride-sharing to peer-review paper assignment. Fairness and credibility are the key criteria for justifying an allocation mechanism. In this project, we consider the problem of allocating indivisible resources among multiple participants. Our goal is to develop a resource allocation mechanism that achieves a good balance between fairness, efficiency, and credibility.
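As a minimal illustration of the kind of guarantee such a mechanism might target (the functions below are illustrative sketches, not the project's actual method), the following Python snippet allocates indivisible goods by round robin and checks the standard fairness criterion of envy-freeness up to one good (EF1) under additive valuations:

```python
# Illustrative sketch: round-robin allocation of indivisible goods and an EF1 check.
from typing import Dict, List

def round_robin(agents: List[str], items: List[str],
                value: Dict[str, Dict[str, float]]) -> Dict[str, List[str]]:
    """Each agent, in a fixed order, picks its most-valued remaining item."""
    remaining = set(items)
    bundles = {a: [] for a in agents}
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        best = max(remaining, key=lambda g: value[agent][g])
        bundles[agent].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

def is_ef1(agents, bundles, value) -> bool:
    """Check envy-freeness up to one good (EF1) under additive valuations."""
    total = lambda a, bundle: sum(value[a][g] for g in bundle)
    for a in agents:
        for b in agents:
            if a == b or not bundles[b]:
                continue
            # a must not envy b once b's item that a values most is removed
            if total(a, bundles[a]) < total(a, bundles[b]) - max(value[a][g] for g in bundles[b]):
                return False
    return True
```

Round robin with additive valuations is known to produce an EF1 allocation; the project's interest lies in mechanisms that additionally balance efficiency and credibility.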
Center for Advanced Intelligence Project
RIKEN
We study deep learning agents that follow intuitive natural-language instructions and solve the complex, systematic tasks specified in them. In particular, we aim to develop embodied agents that understand diverse and complex textual instructions and act in realistic virtual environments. We also relate the agents' actions to language predictions, which inspires a new methodology for visualizing how their predicted actions are grounded in the given instructions. We expect this work to contribute to language-understanding technologies deployed in the real world in the future.
Assistant Professor
National Institute of Informatics
Research Organization of Information and Systems
Deep reinforcement learning has revealed limitations in both robustness and safety. As a fundamental technology for addressing them, quantitative evaluation of robustness and safety is crucial. This project therefore develops a new dynamics model and a regularization mechanism for adversarial learning, and tests them theoretically and statistically at the unit level. The proposed technologies are evaluated from both theoretical and practical perspectives, for instance on real-world problems where humans and robots are in physical contact.
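As a simple, hedged illustration of quantitative robustness evaluation (not the project's dynamics model or regularizer), the sketch below measures how much a policy's return degrades when its observations are perturbed within an epsilon-ball; the minimal env.reset/env.step interface and the policy callable are assumptions made only for illustration:

```python
# Illustrative sketch: worst-case return of a policy under bounded observation noise.
import numpy as np

def perturbed_return(env, policy, epsilon: float, trials: int = 20, horizon: int = 200,
                     rng=np.random.default_rng(0)) -> float:
    """Worst observed return over random observation perturbations of size <= epsilon."""
    worst = np.inf
    for _ in range(trials):
        obs, total = env.reset(), 0.0          # assumed minimal environment interface
        for _ in range(horizon):
            noise = rng.uniform(-epsilon, epsilon, size=np.shape(obs))
            action = policy(obs + noise)       # the policy only sees a perturbed observation
            obs, reward, done = env.step(action)
            total += reward
            if done:
                break
        worst = min(worst, total)
    return worst
```

Comparing this quantity with the unperturbed return gives one crude robustness score; the project's goal is evaluation with theoretical and statistical guarantees rather than such sampling alone.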
Assistant Professor
National Institute of Informatics
We construct explainable natural language understanding datasets that are annotated with the skills required for language understanding, which serve as precise evaluation metrics. To define the skills, we refer to existing tasks and technologies in natural language processing and use formal representations to validate questions. These datasets enable us to improve the interpretability and robustness of language understanding systems in the real world.
Senior Lecturer
Graduate School of Informatics
Kyoto University
We conduct research on spatio-temporal causal inference models for reliable decision-making, a foundational technology for a society in which AI systems connect the vast amounts of spatio-temporal data measured by IoT systems. Specifically, we aim to realize technologies that can make reliable and unbiased predictions from biased spatio-temporal data, and to develop new machine learning models that combine spatio-temporal data analysis techniques with causal inference.
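As one standard building block for correcting such measurement bias (a sketch under assumed names, not the project's model), inverse propensity weighting reweights the observed samples by the estimated probability that each spatio-temporal unit was actually measured:

```python
# Illustrative sketch: inverse-propensity-weighted (IPW) estimate of a mean outcome
# when observations are collected with location/time-dependent bias.
import numpy as np

def ipw_mean(outcomes: np.ndarray, observed: np.ndarray, propensity: np.ndarray) -> float:
    """Estimate the population mean of `outcomes` from biased observations.

    outcomes   : outcome value for every unit (entries for unobserved units may be NaN)
    observed   : 1 if the unit was actually measured, else 0
    propensity : estimated probability that each unit is measured
    """
    w = observed / np.clip(propensity, 1e-6, None)   # up-weight rarely measured units
    return float(np.sum(w * np.nan_to_num(outcomes)) / np.sum(w))
```

The project's models would combine this kind of debiasing with spatio-temporal structure rather than treating units independently as above.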
Senior Researcher
Center for Information and Neural Networks
National Institute of Information and Communications Technology
This project aims to elucidate the brain mechanism that produces humans' subjective trust in AI systems and, based on this finding, to develop a technique for evaluating the trustworthiness of AI systems through brain decoding. In addition, by applying a self-developed method that decodes simulated brain activity, the project seeks to drastically reduce the cost of brain measurement for brain decoding while maintaining higher evaluation accuracy than other existing techniques. This extension can produce an innovative technique that achieves both high accuracy and low cost in evaluating the subjective trustworthiness of AI systems.
Distinguished Researcher
NTT Communication Science Laboratories
NTT Corporation
It is difficult to develop a statistical machine learning model that makes no prediction errors. Developing AI systems with machine learning models is therefore costly, since those systems must cope with prediction errors. On the other hand, some kinds of prediction errors can be checked and corrected using external verification modules. This research aims to develop a machine learning model with concurrent verifiers that can guarantee the absence of externally verifiable errors.
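A minimal sketch of the verify-at-prediction-time idea (a generic wrapper under assumed names, not the project's model, which aims to combine learning and verification more tightly) could look as follows:

```python
# Illustrative sketch: wrap a predictor with an external verifier so that every
# returned output satisfies an externally checkable constraint.
from typing import Callable, Iterable, TypeVar

X, Y = TypeVar("X"), TypeVar("Y")

def predict_with_verifier(predictor: Callable[[X], Iterable[Y]],
                          verifier: Callable[[X, Y], bool]) -> Callable[[X], Y]:
    """Return a model whose outputs always pass the verifier, when possible."""
    def verified(x):
        for candidate in predictor(x):   # candidate outputs, e.g. ranked by confidence
            if verifier(x, candidate):   # accept the first candidate the verifier approves
                return candidate
        raise ValueError("no candidate satisfies the verifier")
    return verified
```

Such a wrapper guarantees that only externally verifiable errors are eliminated; errors the verifier cannot check remain, which is exactly the distinction the project draws.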
Associate Professor
The Institute of Scientific and Industrial Research
Osaka University
The focus of this project is on developing methods for explaining and modifying machine learning models through an “instance-based approach”. By working through individual data instances, the instance-based approach enables users who are experts in a data domain to obtain explanations of the decisions made by machine learning models, or to modify the models to improve their performance. For such user-model interaction, it is essential to quantify how each data instance relates to the model’s behavior. The challenge of this project is to establish metrics that are effective for this quantification.
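One simple, hedged way to quantify that relation is leave-one-out retraining, which influence-function methods approximate without retraining; the sketch below (using an illustrative scikit-learn classifier and numpy arrays, not the project's metric) measures how removing one training instance changes the loss on a single test point:

```python
# Illustrative sketch: leave-one-out influence of a training instance on a test point.
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_one_out_influence(X_train, y_train, x_test, y_test, idx: int) -> float:
    """Change in the test point's log loss when training instance `idx` is removed.

    Assumes numpy arrays and integer class labels 0..K-1.
    """
    def test_loss(model) -> float:
        p = model.predict_proba(x_test.reshape(1, -1))[0, int(y_test)]
        return -np.log(max(p, 1e-12))

    full = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    mask = np.arange(len(X_train)) != idx
    ablated = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
    # Positive values mean removing the instance makes this prediction worse.
    return test_loss(ablated) - test_loss(full)
```

Retraining for every instance is expensive, which is why efficient and reliable instance-level metrics are the project's central challenge.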
Associate Professor
Graduate School of Advanced Science and Technology
Japan Advanced Institute of Science and Technology
This project focuses on the perception of ambiguous figures as a class of reproducible and testable cognitive phenomena in which the perceived structure depends on one’s interpretation. We will propose and empirically test a theoretical account of the perception of ambiguous figures by building a statistical model that estimates adjoint functors, in which data and interpretations are viewed as categories and their relationship as functors. Through this proof of concept of a statistical technique based on category theory, we aim to build the basis for a new research area, “machine understanding”.
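For reference, the standard notion of an adjunction that such an estimation procedure would target can be stated as follows (textbook category-theoretic notation; identifying data with the category C and interpretations with the category D is only an illustrative reading of the abstract):

```latex
% Adjunction F -| G between a category C (e.g., data) and a category D (e.g., interpretations):
% a bijection of hom-sets that is natural in both arguments.
\[
  F : \mathcal{C} \rightleftarrows \mathcal{D} : G,
  \qquad
  \mathrm{Hom}_{\mathcal{D}}(F X,\, Y) \;\cong\; \mathrm{Hom}_{\mathcal{C}}(X,\, G Y)
  \quad \text{naturally in } X \in \mathcal{C},\ Y \in \mathcal{D}.
\]
```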
Associate Professor
Graduate School of Informatics
Nagoya University
The development of measurement technology, which has made it possible to record the multi-agent movements of various organisms and artificial objects, will contribute to various social and scientific fields. However, it can sometimes be difficult for experts in real-world multi-agent motion, such as researchers in biology and team sports coaches, to explain, operate on, and make decisions from such data. In this project, I will develop artificial intelligence technologies with which these experts can explain, operate, discover, verify, and make decisions.
Professor
Graduate School of Engineering
Kyoto University
To realize trustworthy music AI, this project investigates (1) audio-to-symbol music conversion (semi-supervised automatic music transcription based on multi-task learning), (2) symbol-domain representation of music knowledge (unsupervised music grammar induction based on self-similarity), and (3) cooperative symbol manipulation (semi-supervised music creation based on data assimilation).