Research Results

Touching 3D images before our eyes

Realization of an alter-ego robot capable of transmitting haptic sensations (FY2016)

photo: Susumu Tachi
Susumu Tachi (Professor Emeritus, The University of Tokyo)
CREST
Advanced Media Technology for Everyday Living "Telexistence Communication Systems", Research Director (2000-2006)
CREST
Creation of Human-Harmonized Information Technology for Convivial Society "Construction and Utilization of Human-harmonized "Tangible" Information Environment", Research Director (2009-2015)
ACCEL
"Embodied Media Technology based on Haptic Primary Colors", Research Director (2014-2019)

Prevalence of 3D displays and the problems they present

The relationship between information media and humans used to involve only the passive reception of audio-visual information; in recent years, however, with the development of virtual reality (VR) and robotic technologies, it has progressed to active audio-visual experiences mediated by the body.

In daily life we experience not only audio-visual sensations but also whole-body movement and physical sensations, including cutaneous sensations such as touch, as we hold objects, walk, run, and interact with our surroundings. Platforms able to record, transmit, and reproduce these sensations would allow us to experience and work in remote places through robots and virtual environments, and would enable the creation of new haptic content.

A research group led by Professor Susumu Tachi has presented revolutionary new media technologies that overcome existing technical challenges. These include the telexistence system TELESAR V and HaptoMIRAGE, an autostereoscopic 3D display.

TELESAR V consists of an avatar robot that synchronizes with a human operator to mimic his or her motions, and a cockpit that transmits the senses of sight, hearing, and touch. Through the robot, the operator can interact with people in a remote location, manipulate objects there, and exchange the feeling of touch, with the sensations conveyed as though he or she were physically present.

This system embodies telexistence, an innovative concept proposed by Professor Tachi in 1980: interactive communication technology that transmits the sense of a user's existence and delivers a highly realistic experience of a remote environment without the user actually being there.
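As a rough illustration of this idea, the telexistence cycle can be pictured as a loop that streams the operator's posture to the remote robot and streams the robot's sight, hearing, and touch back to the cockpit. The sketch below is conceptual only; the class and method names are hypothetical and do not describe the actual TELESAR V software.

```python
# Conceptual sketch of one telexistence cycle (hypothetical names, not the TELESAR V code).
from dataclasses import dataclass

@dataclass
class Posture:
    head: tuple      # operator's head position and orientation
    arms: tuple      # joint angles of both arms
    fingers: tuple   # finger joint angles

@dataclass
class RemoteSensation:
    video: bytes     # stereo camera images from the robot's head
    audio: bytes     # binaural audio from the robot's microphones
    haptics: bytes   # touch readings from the robot's fingertips

def telexistence_step(cockpit, robot):
    """One cycle: the operator's motion drives the robot; the robot's senses drive the cockpit."""
    posture: Posture = cockpit.read_operator_posture()   # measure the operator
    robot.apply_posture(posture)                          # the robot mimics the motion
    sensation: RemoteSensation = robot.capture_senses()   # record what the robot sees, hears, feels
    cockpit.present(sensation)                            # play back via HMD, headphones, haptic gloves
```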

Photo

TELESAR V: The user on the left moves the robot on the right as an embodiment of him/herself. The telexistence effect is achieved by a camera and microphone that move in response to the user's movements and transmit the audio-visual information to the head-mounted display (HMD) and headphones, together with the delivery of haptic sensations.

Photo

Examples of human activity using TELESAR V

Aiming for the realization of a "tangible information environment"

Since 2009, Professor Tachi's team has aimed to realize a "tangible information environment" by developing a display whose 3D content can be touched and operated directly by hand.

Enabling users to touch the 3D content required the display to be constructed so that no physical obstacle lay between the user and the content. Previous methods placed a glass surface between them, which prevented the content from being touched directly, so operations had to be performed from a separate location. One way to overcome this problem would have been to present the content through an HMD; however, users would then be isolated from the real environment around them, creating a separation between real space and the information space.

Thus, in 2010, Professor Tachi and his collaborators developed an autostereoscopic 3D display with multiple viewpoints, named "RePro3D." It solved three problems associated with previous displays, achieving "autostereoscopic 3D content with multiple viewpoints," "superimposition of digital information on real space," and "tactile presentation in which a person can touch what they see at the exact location where they see it." This technology was then extended by a new presentation method, HaptoMIRAGE, which can display 3D content observable over a wide area by multiple people simultaneously.
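The third point, touching exactly what one sees, amounts to checking whether the tracked fingertip occupies the same place as the displayed object and, if so, driving a tactile stimulator at that moment. A minimal sketch under that assumption follows; the stimulator interface and function names are hypothetical.

```python
import numpy as np

def update_tactile(fingertip_pos, object_center, object_radius, stimulator):
    """Drive the fingertip stimulator only when the finger is inside the visible virtual object."""
    distance = np.linalg.norm(np.asarray(fingertip_pos) - np.asarray(object_center))
    if distance <= object_radius:
        # Deeper penetration -> stronger stimulus, so visual and tactile impressions coincide in place.
        stimulator.set_intensity(1.0 - distance / object_radius)
    else:
        stimulator.set_intensity(0.0)
```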

Photo

Display example of the autostereoscopic 3D display with multiple viewpoints, "RePro3D"

Observation of natural 3D content

HaptoMIRAGE provides an interactive experience in which real space is fused three-dimensionally with the information environment; for example, a person can draw a 3D sketch with a pen in mid-air. The resulting 3D virtual object can then be projected onto a real stand and manipulated through that real object: when the stand turns, the virtual object turns with it.
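The coupling with the stand can be pictured as copying the tracked rotation of the real stand onto the virtual drawing every frame, so the sketch turns with it. The following is only an illustrative sketch assuming a tracker that reports the stand's angle; the names are hypothetical.

```python
import math

def sync_virtual_object(stand_angle_deg, drawing_points):
    """Rotate the mid-air drawing by the same angle as the real stand (about the vertical axis)."""
    a = math.radians(stand_angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    rotated = []
    for x, y, z in drawing_points:  # y is taken as the vertical axis here
        rotated.append((x * cos_a - z * sin_a, y, x * sin_a + z * cos_a))
    return rotated
```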

For multiple users to observe a 3D virtual object autostereoscopically, each from their own viewpoint, a group of light beams must be provided to each user's left and right eyes according to where that user is standing. These beams reproduce both binocular parallax (the perception of stereoscopic vision arising from the slightly different images presented to the right and left eyes) and motion parallax (the perception of stereoscopic vision arising from the change of viewpoint as the user's head moves).
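In rendering terms, this means generating a separate image for each eye of each tracked user: the two eye positions are offset from the tracked head position by about half the interpupillary distance (binocular parallax), and both are recomputed whenever the head moves (motion parallax). The sketch below is a simplified illustration; the head tracker and renderer interfaces are assumptions, not HaptoMIRAGE's actual API.

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def render_views(users, renderer):
    """Render one image per eye per user, recomputed from each user's tracked head pose."""
    for user in users:
        head_pos = np.asarray(user.head_position)     # motion parallax: viewpoint follows the head
        right_dir = np.asarray(user.head_right_axis)  # unit vector pointing to the user's right
        left_eye = head_pos - right_dir * (IPD / 2)   # binocular parallax: two offset viewpoints
        right_eye = head_pos + right_dir * (IPD / 2)
        renderer.draw(eye=left_eye, target=user.gaze_point, channel=(user.id, "left"))
        renderer.draw(eye=right_eye, target=user.gaze_point, channel=(user.id, "right"))
```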

Photo

3D virtual drawing in real space

Introduction into the fields of broadcasting, entertainment, telework, medicine, and welfare

In this way, an excellent autostereoscopic 3D display capable of projecting 3D content in mid-air was developed. It is expected to find application in various fields, such as interactive exhibitions in museums, digital signage in public spaces, and entertainment systems (arcade games, etc.).

The research group has also developed technology for transmitting haptic sensations based on the principle of haptic primary colors, which treats haptic sensations as media in the same way as audio-visual sensations. The group has already launched a research and development project on Embodied Media, which aims to realize virtual physical experience by integrating the technologies described above. TELESAR V, first developed in 2012, remains under further development. In the future, it is expected that anyone will be able to take an active role from anywhere by using an alter-ego robot over a network, irrespective of location or time zone.
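The haptic primary colors principle treats cutaneous sensation, by analogy with RGB for vision, as a combination of a small number of physical components (force, vibration, and temperature) that can be measured on the robot side and resynthesized on the user side. The following is a minimal sketch of that record-and-reproduce idea; the sensor and actuator interfaces are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HapticSample:
    """One 'haptic pixel': the three components of the haptic primary colors model."""
    force: float        # contact force (N)
    vibration: float    # high-frequency vibration amplitude
    temperature: float  # surface temperature (deg C)

def transmit_touch(fingertip_sensor, fingertip_display):
    """Measure the components at the robot's fingertip and reproduce them at the user's fingertip."""
    sample = HapticSample(
        force=fingertip_sensor.read_force(),
        vibration=fingertip_sensor.read_vibration(),
        temperature=fingertip_sensor.read_temperature(),
    )
    fingertip_display.apply_force(sample.force)
    fingertip_display.apply_vibration(sample.vibration)
    fingertip_display.apply_temperature(sample.temperature)
```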