Colloquium B Presentations

Date and time: Monday, July 23, 4th period (15:10–16:40)

Venue: L1

Chair: 久保 尋之
CALUYA NICKO REGINIO M, 2nd presentation, Interactive Media Design: 加藤 博一, 清川 清, Christian Sandor, Alexander Plopski
Title: The Ideal Training Environment: Should We Train Spatial Memory in Augmented or Virtual Reality?
Abstract: Workspace simulations help trainees acquire the skills necessary to perform their tasks efficiently without disrupting the workflow, forgetting important steps during a procedure, or forgetting the location of important information. Such training can be conducted in Augmented and Virtual Reality (AR, VR) to enhance its effectiveness and speed. When the acquired skills carry over to the actual application, this is referred to as positive training transfer. However, thus far it is unclear whether AR or VR training achieves better results in terms of positive training transfer. To compare the effectiveness of AR and VR for spatial memory training in a control-room scenario, users were asked to memorize the locations of buttons and information displays in their surroundings. A within-subject study with 16 participants was conducted, and the impact the training had on short-term and long-term memory was evaluated. The results show that VR outperformed AR when participants were tested in the same medium immediately after training. In a memory transfer test conducted two days later, AR outperformed VR. These findings have implications for the design of future training scenarios and applications.
Language of the presentation: English
 
TY JAYZON FLORES M, 2nd presentation, Interactive Media Design: 加藤 博一, 清川 清, Christian Sandor, Alexander Plopski
Title: Towards Generating Realistic Avatars: A Comparison of Different Body Scanning and Rigging System Combinations
Abstract: Over the past few years, human 3D reconstruction technology has seen rapid development, enabling us to reconstruct humans into 3D models with very high visual fidelity. With the help of automatic model rigging methods, it is possible to automatically generate a skeleton hierarchy, as well as skin weights, in order to make the reconstructed human models move, either through animations or through motion capture. The combination of human 3D reconstruction methods and automatic rigging methods provides a pipeline for generating avatars that are realistic in both appearance and movement. However, given the abundance of methods developed for each component of the pipeline, it can be difficult to determine which human 3D reconstruction system works well with which automatic rigging system for the avatar creation process. This study aims to establish a foundation for answering this question by comparing avatars generated by a small subset of the possible combinations of human 3D reconstruction methods and automatic rigging methods.
Language of the presentation: English
 
CHEN CHEN M, 2nd presentation, Interactive Media Design: 加藤 博一, 向川 康博, Christian Sandor, Alexander Plopski
Title: An Artifact Removal Algorithm for the Near-Eye Microlens Array Display
Abstract: Nowadays, various kinds of head-mounted displays are used as output devices in virtual reality systems. We focus on near-eye microlens array displays. By using a microlens array, a head-mounted display can be made light and compact. However, some limitations remain; for example, because it is very difficult to build an ideal prototype, results cannot be compared conveniently. We therefore use a simulation instead of a physical device. In the simulation, we can render the scene a human would see, called the retina image, which allows us to compare results under different parameters. By inspecting retina images, we found that ghost images can appear in some cases. After applying a ghost-image removal algorithm, we found that the brightness of the retina image is not uniform; this non-uniformity is referred to as artifacts. In this research, we propose an algorithm to remove these artifacts and evaluate the results.
Language of the presentation: English