[Trustworthy AI] Year Started: 2020

Ayumi Igarashi

Credible resource allocation

Researcher
Ayumi Igarashi

Assistant professor
Graduate School of Information Science and Technology
The University of Tokyo

Outline

Algorithmic decision-making is being adopted in many contexts, ranging from ride-sharing to peer-review paper assignment. Fairness and credibility are key criteria for justifying an allocation mechanism. In this project, we consider the problem of allocating indivisible resources among multiple participants. Our goal is to develop a resource allocation mechanism that achieves a good balance between fairness, efficiency, and credibility.
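The abstract does not name a specific fairness criterion, but a standard one in the indivisible-allocation literature is envy-freeness up to one good (EF1). As an illustrative sketch (with hypothetical helper names and additive valuations assumed), an EF1 check looks like:

```python
def is_ef1(valuations, bundles):
    """Check envy-freeness up to one good (EF1): each agent i, after
    removing some single item from agent j's bundle, does not envy j.
    valuations[i][g] = agent i's additive value for item g;
    bundles[i] = list of item indices allocated to agent i."""
    n = len(bundles)
    for i in range(n):
        v = valuations[i]
        own = sum(v[g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(v[g] for g in bundles[j])
            # Envy must vanish after dropping the item i values most in j's bundle.
            best = max((v[g] for g in bundles[j]), default=0)
            if own < other - best:
                return False
    return True

# Two agents, three items; this allocation satisfies EF1.
vals = [[5, 3, 1], [2, 4, 6]]
alloc = [[0, 1], [2]]
print(is_ef1(vals, alloc))  # True
```

EF1 is achievable for indivisible goods (e.g., by round-robin picking), which is why it is a common fairness target when exact envy-freeness is impossible.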

Shuhei Kurita

AI agents that follow given textual instructions and explain their behavior grounded on textual instructions

Researcher
Shuhei Kurita

Researcher
Center for Advanced Intelligence Project
RIKEN

Outline

We study deep learning agents that follow intuitive natural-language instructions and solve the complex, systematic tasks specified in them. In particular, we aim to develop embodied agents that understand varied and complex textual instructions and act in realistic virtual environments. We also map actions back to language predictions, which inspires a new methodology for visualizing how the agents' predicted actions are grounded in the given instructions. We expect this work to contribute to language-understanding technologies in the real world.

Taisuke Kobayashi

Deep reinforcement learning with explicit limitations of robustness and safety

Researcher
Taisuke Kobayashi

Assistant Professor
National Institute of Informatics
Research Organization of Information and Systems

Outline

This project develops deep reinforcement learning that makes the limitations of both robustness and safety explicit. As a fundamental technology, quantitative evaluation of robustness and safety is crucial. We therefore develop a new dynamics model and a regularization mechanism for adversarial learning, and test them theoretically and statistically at the unit level. The proposed technologies are evaluated from both theoretical and practical perspectives, for instance on real-world problems where humans and robots are in physical contact.
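The abstract does not specify the regularization mechanism, so as a loose illustration only: one generic way to quantify local robustness of a policy is to measure how much its output moves under small input perturbations. All names below are hypothetical, and the toy policy stands in for a learned one.

```python
import random

random.seed(1)

def policy(obs, w=1.5):
    # Toy deterministic policy: a scalar linear map standing in
    # for a learned controller.
    return w * obs

def robustness_penalty(obs, eps=0.01, samples=8):
    """Average squared output deviation under uniform input
    perturbations of size eps; small values indicate the policy
    is locally robust around obs."""
    base = policy(obs)
    total = 0.0
    for _ in range(samples):
        delta = random.uniform(-eps, eps)
        total += (policy(obs + delta) - base) ** 2
    return total / samples

print(robustness_penalty(0.5))
```

Such a penalty can be added to a training loss to trade task performance against robustness; the project's actual adversarial formulation is more sophisticated.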

Saku Sugawara

Construction of Benchmarking Datasets for Explainable Natural Language Understanding

Researcher
Saku Sugawara

Assistant professor
National Institute of Informatics

Outline

We construct explainable natural language understanding datasets annotated with the requisite skills for language understanding, which serve as precise evaluation metrics. To define the skills, we refer to existing tasks and technologies in natural language processing and use formal representations to validate questions. These datasets enable us to improve the interpretability and robustness of language understanding systems in the real world.

Koh Takeuchi

Spatio-temporal Causal Modeling for Reliable Decision-Making

Researcher
Koh Takeuchi

Senior Lecturer
Graduate School of Informatics
Kyoto University

Outline

We conduct research on spatio-temporal causal inference models for reliable decision-making, a foundational technology for a society whose AI systems connect the vast amounts of spatio-temporal data measured by IoT systems. Specifically, we aim to realize technologies that can make reliable, unbiased predictions from biased spatio-temporal data, and to develop new machine learning models that combine spatio-temporal data analysis techniques with causal inference.
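As a toy illustration of "unbiased predictions from biased data" (not the project's actual spatio-temporal method), inverse propensity weighting (IPW) corrects a naive estimate when a confounder drives both treatment and outcome; here the propensity is assumed known for simplicity.

```python
import random

# Minimal inverse-propensity-weighting (IPW) sketch for debiasing
# observational data. Illustrative only: the propensity 0.2 + 0.6*x
# is assumed known, and the confounder x is a stand-in for a
# spatio-temporal feature.
random.seed(0)

n = 10000
data = []
for _ in range(n):
    x = random.random()                 # confounder (e.g., a location feature)
    p = 0.2 + 0.6 * x                   # treatment more likely where x is high
    t = 1 if random.random() < p else 0
    y = 2.0 * t + 3.0 * x + random.gauss(0, 0.1)  # true treatment effect: 2.0
    data.append((x, t, y))

# Naive difference of means is biased because x drives both t and y.
treated = [y for x, t, y in data if t]
control = [y for x, t, y in data if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# IPW reweights each unit by the inverse probability of its treatment,
# recovering an (approximately) unbiased effect estimate.
ipw = (sum(y / (0.2 + 0.6 * x) for x, t, y in data if t)
       - sum(y / (1 - (0.2 + 0.6 * x)) for x, t, y in data if not t)) / n

print(f"naive: {naive:.2f}  ipw: {ipw:.2f}")
```

The naive estimate overshoots the true effect of 2.0 because treated units tend to have larger x; the weighted estimate lands near 2.0.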

Satoshi Nishida

Developing a technique to evaluate trustworthiness of AI systems using brain information

Researcher
Satoshi Nishida

Senior Researcher
Center for Information and Neural Networks
National Institute of Information and Communications Technology

Outline

This project aims to elucidate the brain mechanisms that produce humans' subjective trust in AI systems and, based on these findings, to develop a technique for evaluating the trustworthiness of AI systems through brain decoding. In addition, by applying a self-developed method for decoding simulated brain activity, the project seeks to drastically reduce the cost of brain measurement for brain decoding while maintaining higher evaluation accuracy than existing techniques. This extension can yield an innovative technique that achieves both high accuracy and low cost in evaluating the subjective trustworthiness of AI systems.

Masaaki Nishino

Research on machine learning models with concurrent verifiers

Researcher
Masaaki Nishino

Distinguished Researcher
NTT Communication Science Laboratories
NTT Corporation

Outline

It is difficult to develop a statistical machine learning model that never makes prediction errors. Developing AI systems with machine learning models is therefore costly, since the systems must cope with prediction errors. On the other hand, some kinds of prediction errors can be checked and corrected by external verification modules. This research aims to develop a machine learning model with concurrent verifiers that can guarantee the absence of externally verifiable errors.
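A minimal sketch of the idea, with all function names hypothetical: a verifier that checks a hard, decidable constraint runs alongside a (possibly flawed) learned model, and the combined system only emits predictions the verifier accepts, so no verifiable error can escape.

```python
# Hypothetical "model + concurrent verifier" pipeline. The base model
# is deliberately flawed; the verifier enforces a checkable constraint.

def verifier_ok(x, label):
    # Checkable constraint: an even input must be labeled "even",
    # an odd input "odd".
    return label == ("even" if x % 2 == 0 else "odd")

def base_model_scores(x):
    # Toy "learned" model that misclassifies multiples of 4.
    if x % 4 == 0:
        return {"odd": 0.9, "even": 0.1}
    return {"even": 0.8, "odd": 0.2} if x % 2 == 0 else {"odd": 0.8, "even": 0.2}

def predict_with_verifier(x):
    scores = base_model_scores(x)
    # Emit the best-scoring label that the verifier accepts.
    for label in sorted(scores, key=scores.get, reverse=True):
        if verifier_ok(x, label):
            return label
    return None  # no label satisfies the constraint

print([predict_with_verifier(x) for x in [3, 4, 6]])  # ['odd', 'even', 'even']
```

On input 4 the base model prefers the wrong label, but the verifier rejects it and the corrected prediction is returned, so the combined output never violates the checked constraint.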

Satoshi Hara

Explanation and Modification of Machine Learning Models for User-Model Interaction

Researcher
Satoshi Hara

Associate Professor
The Institute of Scientific and Industrial Research
Osaka University

Outline

The focus of this project is on developing methods for explaining and modifying machine learning models through the “instance-based approach”. By working through individual data instances, the instance-based approach enables users who are experts in some data domain to obtain explanations of the decisions made by machine learning models, or to modify the models to improve their performance. For such user-model interaction, it is essential to quantify how each data instance relates to the model’s behavior. The challenge of this project is to establish metrics effective for such quantification.
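The simplest instantiation of "quantifying how each data instance relates to the model's behavior" is leave-one-out influence: refit without an instance and measure how the prediction moves. The project's metrics are more sophisticated; this sketch uses a trivial mean predictor purely for illustration.

```python
# Leave-one-out (LOO) influence of each training target on a fitted
# mean predictor: full-data prediction minus the prediction obtained
# with that instance removed.

def fit_mean(ys):
    return sum(ys) / len(ys)

def loo_influence(ys):
    full = fit_mean(ys)
    influences = []
    for i in range(len(ys)):
        rest = ys[:i] + ys[i + 1:]
        influences.append(full - fit_mean(rest))
    return influences

ys = [1.0, 2.0, 9.0]  # the outlier 9.0 pulls the mean up
print([round(v, 2) for v in loo_influence(ys)])  # [-1.5, -1.0, 2.5]
```

The large positive influence of the third instance flags it as the one most responsible for the model's (here, the mean's) behavior, which is exactly the kind of signal a domain expert can use to inspect or repair a model.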

Shohei Hidaka

Building Statistical Theory to Estimate Adjoint Functors as a Basis of Machine Understanding

Researcher
Shohei Hidaka

Associate Professor
Graduate School of Advanced Science and Technology
Japan Advanced Institute of Science and Technology

Outline

This project focuses on the perception of ambiguous figures as a class of reproducible and testable cognitive phenomena, in which the perceived structure depends on one’s interpretation. We will propose and empirically test a theoretical account for the perception of ambiguous figures, by building a statistical model to estimate adjoint functors, in which data and interpretation are viewed as a category, and their relationship is viewed as a functor. Through the proof of concept of this statistical technique based on category theory, we aim to build a basis for a new research area “machine understanding”.
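For reference, the standard definition of the structure the project proposes to estimate: a pair of adjoint functors. The category-theoretic content below is textbook material; its mapping onto data and interpretation is the project's proposal.

```latex
% Functors F : C \to D and G : D \to C form an adjunction F \dashv G
% (F is left adjoint to G) when there is a bijection
\mathrm{Hom}_{D}\bigl(F(a),\, b\bigr) \;\cong\; \mathrm{Hom}_{C}\bigl(a,\, G(b)\bigr)
% natural in a \in C and b \in D. In the project's framing, one category
% captures the data, the other the interpretations, and the adjoint pair
% encodes the relationship the statistical model must estimate.
```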

Keisuke Fujii

Technologies for explanation and decision-making available to experts in biological multi-agent motions

Researcher
Keisuke Fujii

Associate Professor
Graduate School of Informatics
Nagoya University

Outline

Advances in measurement technology have made it possible to record the multi-agent motions of various organisms and artificial objects, which will contribute to many social and scientific fields. However, it can sometimes be difficult for experts in real-world multi-agent motions, such as researchers in biology and team sports coaches, to explain and operate on such data and to make decisions from it. In this project, I will develop artificial intelligence technologies with which these experts can explain, operate, discover, verify, and make decisions.

Kazuyoshi Yoshii

Ability-Augmented Music Understanding and Creation Based on Human-AI Assimilation

Researcher
Kazuyoshi Yoshii

Professor
Graduate School of Informatics
Kyoto University

Outline

To realize trustworthy music AI, this project investigates (1) audio-to-symbol music conversion (semi-supervised automatic music transcription based on multi-task learning), (2) symbol-domain representation of music knowledge (unsupervised music grammar induction based on self-similarity), and (3) cooperative symbol manipulation (semi-supervised music creation based on data assimilation).
