Cybernetics and Reality Engineering

Manipulate reality to augment human capabilities

Research Staff

  • Prof. Kiyoshi Kiyokawa

  • Assoc. Prof. Tomokazu Sato

  • Affiliate Assoc. Prof. Yuta Nakashima

  • Assist. Prof. Norihiko Kawai

  • Assist. Prof. Nobuchika Sakata

E-mail { kiyo, tomoka-s, n-yuta, norihi-k, sakata }[at] is.naist.jp

Research Area

Cybernetics is an academic field that unifies humans and systems. We use “reality engineering” as an umbrella term covering virtual reality (VR), augmented reality (AR), mixed reality (MR), and related technologies. In this laboratory, we study all of these, focusing in particular on sensing, display, and interaction technologies (Fig.1).

Long before computers appeared, humans acquired new capabilities by inventing tools and mastering them as if they were part of the body. In this laboratory, we conduct research to create “tools of the future” by making full use of human and environmental sensing, sensory representation, wearable computing, context awareness, machine learning, biological information processing, and other technologies. In particular, by manipulating various sensations such as vision, we aim to offer a “personalized reality” tailored to each person, so that people can live more conveniently, comfortably, and securely. Through such information systems, we hope to contribute to the realization of an inclusive society in which all people can maximize their abilities and help each other.

Our laboratory was newly established in April 2017. We will keep taking on new challenges while inheriting the assets of the Vision and Media Computing Laboratory. (The topics below include work carried out in the former organization.)

Sensing: Measuring people and the environment

We are studying various sensing technologies that acquire human and environmental conditions using computer vision, pattern recognition, machine learning and so on.

  • Estimation of drowsiness and degree of concentration from blinking and body movement

  • Estimation of user’s psychological state from gaze behavior

  • HMD calibration and gaze tracking using corneal reflection images (Fig.2)

  • Three-dimensional reconstruction from video and sensor fusion (Fig.3)

  • Image restoration based on similarity and mesh defect repair of 3D shape models
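One building block of three-dimensional reconstruction from video is triangulation: a point tracked across two calibrated frames is intersected back into 3D. The sketch below is a minimal, hypothetical illustration of linear (DLT) triangulation in NumPy; the camera matrices and the tracked point are made-up toy values, not data from our systems.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel observations (u, v) in each view.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: two cameras, the second shifted one unit along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0])

X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the estimate matches the true point exactly; in real video pipelines the same linear step is followed by robust matching and nonlinear refinement.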

Display: Manipulating perception

We are studying technologies such as virtual reality and augmented reality that freely manipulate and modulate various sensations such as vision and hearing, together with their perceptual effects and display hardware.

  • Super wide field of view optical see-through HMD (Fig.4)

  • View expansion with a fisheye video see-through HMD

  • Exaggeration of facial expressions using an eigenspace method (Fig.5)

  • Diminished reality / object removal using image restoration technology (Fig.6)

  • A non-grounded, encounter-type haptic display using a drone
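The eigenspace approach to expression exaggeration can be pictured as amplifying a face's deviation from the mean face inside a learned subspace. Below is a minimal, hypothetical NumPy sketch assuming faces are represented as fixed-length feature vectors (e.g., stacked landmark coordinates); the random matrix stands in for a real face dataset and is not our method's actual data or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expression" dataset: each row is a flattened feature vector
# (in practice, facial landmark coordinates or image pixels).
data = rng.normal(size=(50, 6))
mean = data.mean(axis=0)

# Eigenspace (PCA) via SVD of the mean-centered data.
# Rows of Vt are orthonormal principal axes.
_, _, components = np.linalg.svd(data - mean, full_matrices=False)

def exaggerate(x, gain=1.5, k=3):
    """Amplify a sample's deviation from the mean within the
    top-k eigenspace, leaving the out-of-subspace residual intact."""
    coeffs = components[:k] @ (x - mean)              # project into eigenspace
    residual = (x - mean) - components[:k].T @ coeffs  # part outside subspace
    return mean + components[:k].T @ (gain * coeffs) + residual

x = data[0]
y = exaggerate(x, gain=2.0)  # expression deviation doubled in eigenspace
```

Scaling only the leading eigenspace coefficients exaggerates the dominant modes of expression variation while leaving fine, idiosyncratic detail (the residual) unchanged.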

Interaction: Creating and using tools

We combine sensing and display technologies to study new forms of interaction between humans, and between humans and the environment.

  • An AR pet that recognizes people and the environment and has its own emotions

  • An AR assembly support system that automatically recognizes assembly status

  • An AR furniture arrangement system that automatically transitions to the optimal viewpoint (Fig.7)

  • A human motion reproduction system using augmented reality (Fig.8)

  • A remote robot manipulation interface with augmented free-viewpoint image synthesis (Fig.9)

Research Equipment

  • A variety of HMDs, a spherical immersive display (Fig.10)

  • Omnidirectional camera systems, a laser range finder (Fig.10)

Research Grants, Collaborations, Social Services etc. (2016)

  • MEXT Grants-in-Aid (Kakenhi) (B, C, Young B)

  • Collaboration (Panasonic)

  • MIC Strategic Information and Communications R&D Promotion Programme (SCOPE)

  • JSPS Program for Advancing Strategic International Networks to Accelerate the Circulation of Talented Researchers (CMU, JHU, TUM)

  • JASSO Student Exchange Support Program (University of Oulu)

  • Steering / Organizing Committee members of international conferences on augmented and virtual reality

Fig.1: Research fields

Fig.2: HMD calibration and gaze tracking using corneal reflection images

Fig.3: Three-dimensional reconstruction from video

Fig.4: Super wide field of view optical see-through HMD

Fig.5: Exaggeration of facial expressions using an eigenspace method

Fig.6: Diminished reality

Fig.7: AR furniture arrangement system that automatically transitions to the optimal viewpoint

Fig.8: AR human motion reproduction

Fig.9: Free viewpoint remote robot manipulation

Fig.10: Part of research equipment