CREST: Core technologies for trusted quality AI systems [Trusted quality AI systems]
Year Started: 2021
Research Director: Professor, Graduate School of Informatics, Kyoto University
Hiromi Arai | Unit Leader, Center for Advanced Intelligence Project, RIKEN
Satoshi Oyama | Professor, Faculty of Data Science, Nagoya City University
Junichiro Mori | Associate Professor, Graduate School of Information Science and Technology, The University of Tokyo
We aim to establish the foundation of human computation for designing trustworthy human-AI collaborative systems by (i) developing human-in-the-loop machine learning for human-AI collaborative data analytics, (ii) defining and optimizing reliability and trust for human computation systems, (iii) addressing ethical issues for the social acceptance of human computation, and (iv) supporting human intellectual and creative activities and developing human capabilities through human computation.
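The human-in-the-loop machine learning mentioned in (i) can be illustrated with a minimal active-learning loop, in which a model repeatedly asks a human annotator to label the examples it is least certain about. This is only an illustrative sketch under simplifying assumptions, not the project's actual method: the synthetic dataset, the uncertainty-sampling strategy, and the `human_label` oracle are all stand-ins introduced for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data stands in for the collaborative analytics task (assumption).
X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

def human_label(idx):
    """Stand-in oracle for the human annotator (hypothetical helper)."""
    return y_true[idx]

rng = np.random.default_rng(0)
labeled = {i: human_label(i) for i in rng.choice(len(X), size=10, replace=False)}
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # query budget: 20 additional human labels
    idx = list(labeled)
    model.fit(X[idx], [labeled[i] for i in idx])
    # Uncertainty sampling: ask the human about the least confident prediction.
    proba = model.predict_proba(X[unlabeled])[:, 1]
    q = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    labeled[q] = human_label(q)
    unlabeled.remove(q)

print("accuracy on full data:", round(model.score(X, y_true), 3))
```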
Research Director: Associate Professor, Graduate School of Information Science and Technology, The University of Tokyo
Masaaki Imaizumi | Associate Professor, Graduate School of Arts and Sciences, The University of Tokyo
Tomoya Kitani | Associate Professor, Academic Institute, Shizuoka University
Hideki Takase | Associate Professor, Graduate School of Information Science and Technology, The University of Tokyo
Kentaro Yoshioka | Assistant Professor, Faculty of Science and Technology, Keio University
We develop fundamental technologies for energy-efficient distributed AI systems based on federated learning that respect the diversity of users and data as well as spatial and temporal variations in the environment. We define D3-AI as distributed AI with four reliability capabilities: privacy, fairness, dynamic adaptability, and energy efficiency. We pursue research on the fundamental technologies and applications of D3-AI in collaboration with machine learning theory, computer architecture, IoT platforms, and data processing.
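The federated-learning core of such a distributed system can be sketched, in highly simplified form, with federated averaging (FedAvg): each client trains on its own data and only model parameters are sent to the server for aggregation, so raw data never leaves the device. The toy task, client sizes, and hyperparameters below are assumptions made for this sketch, not the D3-AI design.

```python
import numpy as np

# Federated averaging (FedAvg) on a toy linear-regression task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_client(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]  # heterogeneous data sizes

def local_update(w, X, y, lr=0.05, epochs=5):
    """Each client refines the global model on its own data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(3)
for _ in range(30):  # communication rounds
    local_models = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    # Server aggregates parameters only, weighted by client data size.
    w_global = np.average(local_models, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 2))
```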
Research Director: Professor, Graduate School of Engineering, Nagoya University
Atsushi Kawaguchi | Professor, Department of Medicine, Saga University
Jun Sakuma | Professor, School of Computing, Tokyo Institute of Technology
Shigeyuki Matsui | Professor, Department of Medicine, Nagoya University
Hiroaki Miyoshi | Associate Professor, School of Medicine, Kurume University
We develop a mathematical and computational framework for assessing the reliability of AI-driven hypotheses in science and technology by extending the traditional statistical hypothesis testing framework. Our main focus is to develop new theories, algorithms, and software for statistical hypothesis testing that remains valid for adaptively constructed, AI-driven hypotheses. We study AI-driven hypotheses in static and dynamic environments and investigate how to evaluate their reliability from both theoretical and practical perspectives. Based on this framework, we develop a protocol for evaluating the reliability of AI-based medical technology and demonstrate it through the development of an AI-based pathological diagnosis system for malignant lymphoma.
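The core problem can be illustrated with a small simulation: if an AI (here, simple correlation screening as a stand-in) selects the most promising hypothesis from the data and the same data are then used to test it, the naive p-value is invalid. The data-splitting remedy shown below is a standard textbook fix used purely for illustration; it is not the project's proposed method, which extends hypothesis testing theory to handle such adaptively selected hypotheses directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))   # pure noise: no feature is truly associated
y = rng.normal(size=n)

# Naive: select the strongest feature and test it on the same data.
corrs = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(p)])
j_best = int(np.argmax(np.abs(corrs)))
p_naive = stats.pearsonr(X[:, j_best], y)[1]

# Data splitting: select on the first half, test on the held-out second half.
half = n // 2
corrs_sel = np.array([stats.pearsonr(X[:half, j], y[:half])[0] for j in range(p)])
j_sel = int(np.argmax(np.abs(corrs_sel)))
p_split = stats.pearsonr(X[half:, j_sel], y[half:])[1]

print(f"naive p-value (biased toward significance): {p_naive:.4f}")
print(f"data-splitting p-value (valid): {p_split:.4f}")
```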
Research Director: Professor, Digital Content and Media Sciences Research Division, National Institute of Informatics
Tetsuo Ono | Professor, Faculty of Information Science and Technology, Hokkaido University
Hirokazu Kumazaki | Professor, Institute of Biomedical Sciences, Nagasaki University
Kazunori Terada | Professor, Faculty of Engineering, Gifu University
Takashi Hara | Professor, Faculty of Engineering, Gifu University
Our purpose is to build human-AI cooperative decision-making systems for trustworthy AI. In this project, we develop trust calibration AI (TCAI), which detects over- and under-trust by monitoring humans' selection behaviors during human-AI collaboration and adaptively helps humans recalibrate their trust by themselves. TCAI presents calibration cues, stimuli that prompt humans to voluntarily recalibrate their trust. Finally, we apply TCAI to human-AI cooperative imaging diagnosis and medical examination.
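One way to picture the monitoring idea is a loop that compares how often the human follows the AI's advice (observed reliance) with how often the AI is actually correct (its reliability), and emits a calibration cue when the gap is large. The thresholds, the simulated over-reliant human, and the cue texts below are illustrative assumptions only, not the TCAI design.

```python
import random

random.seed(0)
AI_ACCURACY = 0.7   # assumed reliability of the AI adviser (illustrative)
window = []         # recent (human_followed_ai, ai_was_correct) observations

def calibration_cue(reliance, reliability):
    """Decide whether to prompt the user to recalibrate their trust."""
    gap = reliance - reliability
    if gap > 0.15:
        return "over-trust cue: show AI confidence and recent AI errors"
    if gap < -0.15:
        return "under-trust cue: show AI track record and explanation"
    return None

for trial in range(1, 201):
    ai_correct = random.random() < AI_ACCURACY
    followed_ai = random.random() < 0.9   # simulated human who over-relies on AI
    window.append((followed_ai, ai_correct))
    window = window[-50:]                 # sliding window of recent behavior

    reliance = sum(f for f, _ in window) / len(window)
    reliability = sum(c for _, c in window) / len(window)
    cue = calibration_cue(reliance, reliability)
    if cue and trial % 50 == 0:           # report occasionally
        print(f"trial {trial}: reliance={reliance:.2f}, "
              f"AI accuracy={reliability:.2f} -> {cue}")
```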