Progress Report

Development of “Jizai Hon-yaku-ki (At-will Translator)” connecting various minds based on brain and body functions
R&D Theme 3: Functions of Jizai Hon-yaku-ki

Progress until FY2022

1. Outline of the project

In R&D Theme 3, we aim to develop Jizai Hon-yaku-ki itself, and in particular the key functions it needs to support our everyday interactions.

Illustration of Jizai Hon-yaku-ki (when A speaks to B)

Jizai Hon-yaku-ki consists of two components: an interpreter that “reads” the user’s mental state and an expresser that “conveys” it to another user.
The primary task of this R&D Theme is to develop these two parts so that they are sensitive to the diversity of contexts and personalities, allowing Jizai Hon-yaku-ki to assist our everyday communication.
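
As a conceptual sketch only, the following Python fragment shows one way this two-part architecture could be expressed in code. The class names, the MentalState fields, and the placeholder logic are illustrative assumptions, not the project’s actual design.

```python
# Conceptual sketch of the interpreter/expresser split (names are assumptions).
from dataclasses import dataclass

@dataclass
class MentalState:
    stress: float        # e.g., estimated from physiological signals
    attention: float     # e.g., estimated from gaze or posture

class Interpreter:
    def read(self, sensor_data: dict) -> MentalState:
        """Estimate the sender's mental state from sensor streams."""
        return MentalState(stress=0.0, attention=0.0)   # placeholder logic

class Expresser:
    def convey(self, state: MentalState, context: str) -> None:
        """Render the state to the receiver in a context-sensitive way,
        e.g., via a thermal display or modulated voice output."""
        print(f"[{context}] stress={state.stress:.2f} attention={state.attention:.2f}")

Expresser().convey(Interpreter().read({}), context="casual chat")
```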

2. Outcome so far

  1. Succeeded in estimating a caregiver’s stress level from the caregiver’s and her child’s pulse rates;
  2. Developed a warmth-presenting display that is non-contact and highly responsive;
  3. Developed a floor interface to record room-scale human behavior;
  4. Quantitatively analyzed the context-dependent features of our utterances.
Outcome 1:

We used electrocardiogram (ECG) recordings from 51 caregivers and their 3- to 4-year-old children and found that a caregiver’s stress level can be estimated from the caregiver’s and the child’s ECG. Interestingly, physiological data from one person (the child) can improve the estimation of another person’s (the caregiver’s) mental state.
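
To make the dyadic idea concrete, here is a minimal, hypothetical Python sketch: it derives simple heart-rate-variability features (mean RR interval, RMSSD) from both the caregiver’s and the child’s signals and feeds them to a regressor. The feature set, the Ridge model, and the synthetic data are assumptions, not the study’s published pipeline.

```python
# Hypothetical sketch of dyadic stress estimation (not the project's method).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """Simple heart-rate-variability features from RR intervals (ms)."""
    diffs = np.diff(rr_ms)
    rmssd = np.sqrt(np.mean(diffs ** 2))            # short-term variability
    return np.array([rr_ms.mean(), rr_ms.std(), rmssd])

def dyad_features(caregiver_rr: np.ndarray, child_rr: np.ndarray) -> np.ndarray:
    """Concatenate both people's features so the model can use the child's
    physiology when estimating the caregiver's stress."""
    return np.concatenate([hrv_features(caregiver_rr), hrv_features(child_rr)])

# Synthetic stand-in data: 51 dyads with placeholder RR intervals and
# placeholder stress scores (the real study used recorded ECG and labels).
rng = np.random.default_rng(0)
X = np.stack([dyad_features(rng.normal(800, 50, 300),
                            rng.normal(600, 40, 300)) for _ in range(51)])
y = rng.uniform(0, 10, size=51)

model = Ridge(alpha=1.0)
print(cross_val_score(model, X, y, cv=5, scoring="r2"))
```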

Outcome 1 — illustrative summary
Figures by Professor Yukie Nagai (UTokyo)
Outcome 2:

We developed a visual display equipped with non-contact, highly responsive thermal feedback, achieved through rapid control of infrared radiation. It enables the user to “perceive” the interlocutor’s mental state and behavior, assisting situation-driven communication.
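
The following is a hypothetical control-loop sketch of how such gaze-gated thermal feedback might work: emitter power ramps up while the interlocutor’s gaze rests on the agent’s face and decays when it leaves. The face region, ramp rates, and fake gaze samples are placeholders; the actual display relies on fast infrared-radiation control hardware.

```python
# Hypothetical gaze-gated thermal feedback loop (all parameters are assumed).
FACE_REGION = (0.4, 0.6, 0.4, 0.6)   # assumed on-screen face region (x0, x1, y0, y1)

def gaze_on_face(gx: float, gy: float) -> bool:
    """True if the gaze point (normalized screen coordinates) is on the face."""
    x0, x1, y0, y1 = FACE_REGION
    return x0 <= gx <= x1 and y0 <= gy <= y1

def control_step(gx: float, gy: float, power: float,
                 rise: float = 0.3, fall: float = 0.5) -> float:
    """First-order ramp: heat while the gaze rests on the face, cool otherwise."""
    target = 1.0 if gaze_on_face(gx, gy) else 0.0
    rate = rise if target > power else fall
    return power + rate * (target - power)

power = 0.0
for gx, gy in [(0.5, 0.5), (0.5, 0.5), (0.1, 0.9), (0.1, 0.9)]:  # fake gaze samples
    power = control_step(gx, gy, power)
    print(f"IR emitter duty cycle: {power:.2f}")   # a real system drives hardware here
```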

Outcome 3:

We developed a tile-shaped floor interface equipped with force sensors. It measures position, posture, and movement without requiring users to wear any device. This module helps to estimate human behaviors, and the intentions behind them, that are difficult to record visually.
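
As a simple illustration of the underlying estimation, the sketch below computes a standing position as the force-weighted center of pressure over a grid of tile readings. The grid geometry, tile pitch, and readings are assumed values, not the interface’s actual specification.

```python
# Minimal center-of-pressure sketch over an assumed grid of floor tiles.
import numpy as np

TILE_SIZE_M = 0.5   # assumed tile pitch; the real interface may differ

def center_of_pressure(forces: np.ndarray):
    """forces: (rows, cols) array of per-tile force readings in newtons.
    Returns the force-weighted (x, y) position in metres, or None if empty."""
    total = forces.sum()
    if total <= 0:
        return None
    rows, cols = np.indices(forces.shape)
    x = (cols * forces).sum() / total * TILE_SIZE_M
    y = (rows * forces).sum() / total * TILE_SIZE_M
    return x, y

grid = np.zeros((4, 4))
grid[1, 2] = 400.0   # most weight on one tile...
grid[1, 3] = 200.0   # ...leaning toward the neighbouring tile
print(center_of_pressure(grid))   # -> (~1.17, 0.5)
```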

Outcome 4:

We used amplitude modulation (the temporal variation in the intensity of a sound signal) to analyze how the phonetics of our utterances change with situation and context. This finding helps us develop voice outputs that reflect the user’s intention and are easily understood by others.
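
The following minimal Python sketch illustrates the kind of analysis involved: it extracts the amplitude envelope of a signal with the Hilbert transform and reads off the dominant modulation rate from the envelope’s spectrum. The synthetic test signal and parameter choices are assumptions for illustration only.

```python
# Sketch of an amplitude-modulation analysis on a synthetic test signal.
import numpy as np
from scipy.signal import hilbert

fs = 16000                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Carrier at 200 Hz, amplitude-modulated at 4 Hz (roughly syllabic rate).
signal = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)

envelope = np.abs(hilbert(signal))           # temporal intensity variation
envelope -= envelope.mean()                  # remove DC before the FFT

spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
low = freqs < 20                             # modulation rates of interest
print(f"dominant modulation rate: {freqs[low][spectrum[low].argmax()]:.1f} Hz")
```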

Outcome 2 — illustrative summary
An agent (left) feels warm when catching the interlocutor’s eye on the display.
Source: https://doi.org/10.1145/3532721.3535569

3. Future plans

We will continue developing an interpreter and an expresser that are sensitive to contexts and personalities.
In parallel, we will attempt to develop a proof-of-concept product of Jizai Hon-yaku-ki by incorporating findings from the other five R&D Themes.
(The University of Tokyo: Y. Nagai, M. Inami, H. Saito; Tokyo Metropolitan University: F. Homae)