Seminar Presentations

Date and time: Friday, September 18, 4th period (15:10-16:40)

Venue: L2

Chair: 藤本 まなと
大嶋 晃平 1451023: M, 2nd presentation 加藤 博一, 横矢 直和, Christian Sandor, 山本 豪志朗
title: SharpView: Improved Clarity of Defocused Content on Optical See-Through Head-Mounted Displays
abstract: Optical See-Through Head-Mounted Displays (OST HMDs) suffer from focus blur because of the depth gap between the user's gaze point and the virtual screen. This focus blur reduces the visibility of CG content on OST HMDs. We propose SharpView to overcome this problem. SharpView is a pre-filtering technique that sharpens the CG image according to the eye's point spread function, so that the image appears correctly focused after passing through the eye's lens. Our objectives are to determine the optimal degree of sharpening and to show that SharpView is effective in a real task. (A minimal sketch of the pre-filtering idea follows this entry.)
language of the presentation: English
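
To illustrate the pre-filtering idea, here is a minimal sketch that assumes the defocused eye's point spread function can be approximated by an isotropic Gaussian and that the CG image is a square grayscale array in [0, 1]. The Wiener-style regularized deconvolution and all function names here are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def gaussian_psf(size, sigma):
        # Isotropic Gaussian approximation of the defocused eye's PSF.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    def sharpview_prefilter(image, sigma, k=0.01):
        # Wiener-style deconvolution: divide the image spectrum by the PSF
        # spectrum, with regularizer k to keep high frequencies bounded.
        H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape[0], sigma)))
        W = np.conj(H) / (np.abs(H) ** 2 + k)
        sharpened = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
        # Displayed intensities must stay in the valid range, so clip;
        # this clipping limits how much sharpening can actually help.
        return np.clip(sharpened, 0.0, 1.0)

When the eye's real blur is then applied to the pre-sharpened image, the two approximately cancel; the residual error introduced by clipping is one reason the optimal degree of sharpening has to be determined empirically.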
 
ROMPAPAS DAMIEN CONSTANTINE 1451127: M, 2nd presentation 加藤 博一, 横矢 直和, Christian Sandor, 武富 貴史

title: EyeAR: Physically-Based Depth of Field through Eye Measurements

abstract: Augmented Reality (AR) is a technology which superimposes computer graphics (CG) onto a user's view of the real world. A commonly used AR display device is the Optical See-Through Head-Mounted Display (OST-HMD), a transparent HMD that lets users observe the real world directly with CG added to it. A common problem in such systems is the mismatch between the properties of the user's eyes and the virtual camera used to generate the CG. The goal of our system is to accurately reflect the state of the user's eyes in our renderings. Using an autorefractometer, we measure the user's pupil size and accommodative state and feed these values into a real-time ray tracer. The resulting renderings accurately reflect the depth-of-field (DoF) blur the user perceives in their view of the real world. (The thin-lens geometry behind this blur is sketched after this entry.)

language of the presentation: English
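
The blur in question follows standard thin-lens geometry: the blur circle grows with pupil aperture and with the dioptric difference between the accommodation distance and the object's depth. The sketch below assumes a reduced-eye focal length of about 17 mm; the function and parameter names are hypothetical, and this is the textbook relation, not the EyeAR ray tracer itself:

    def retinal_blur_diameter_mm(pupil_mm, focus_m, object_m):
        # Dioptric defocus between the eye's accommodation distance
        # (measured by the autorefractometer) and the object's depth.
        defocus_diopters = abs(1.0 / object_m - 1.0 / focus_m)
        # Angular blur (radians) ~ pupil aperture (m) * defocus (diopters).
        beta = (pupil_mm * 1e-3) * defocus_diopters
        eye_focal_mm = 17.0  # common approximation for the reduced eye
        return beta * eye_focal_mm

    # Example: a 4 mm pupil focused at 0.5 m viewing a point at 2.0 m
    # gives 1.5 D of defocus and a retinal blur circle of ~0.1 mm.
    print(retinal_blur_diameter_mm(4.0, 0.5, 2.0))

A DoF renderer driven by live eye measurements would blur each CG fragment with a kernel whose size corresponds to this diameter projected into screen space.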

 
SOROKIN NICHOLAS JOHN 1451128: M, 2nd presentation 加藤 博一, 横矢 直和, Christian Sandor, 武富 貴史
title: Tangible Augmented Reality Tabletop Game
abstract: I will talk about the development of a tangible AR tabletop game. Various AR-based tabletop games already exist, but many fail to truly integrate the real objects available: interaction is often limited to a tablet touch screen or to moving printed markers around. My proposed system creates a game environment that reacts to the tabletop, and to the presence and movement of arbitrary objects on it, without the need for printed markers, which allows the players to interact directly with the environment of the game. (One possible markerless detection approach is sketched after this entry.)
language of the presentation: English
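
The abstract does not say how arbitrary objects are detected; one common markerless approach, shown here purely as an illustrative assumption (an RGB-D camera and depth-based background subtraction against a depth map of the empty table), is:

    import numpy as np
    from scipy import ndimage

    def detect_tabletop_objects(depth_mm, table_mm, min_height=10, min_area=200):
        # Pixels significantly closer to the camera than the empty-table
        # depth map are treated as physical objects placed on the table.
        height = table_mm.astype(np.int32) - depth_mm.astype(np.int32)
        mask = height > min_height  # millimetres above the table surface

        # Connected-component labeling separates individual objects.
        labels, n = ndimage.label(mask)
        centroids = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if xs.size >= min_area:  # ignore small sensor-noise blobs
                centroids.append((float(xs.mean()), float(ys.mean())))
        return centroids  # pixel positions the game logic can react to

Tracking these centroids across frames would give the movement information that the game environment reacts to.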
 
DAYRIT FABIAN LORENZO BAYTION 1461015: D, interim presentation 横矢 直和, 加藤 博一, 佐藤 智和, 中島 悠太
title: Free-viewpoint AR human-motion reenactment based on a single RGB-D video stream
abstract: Standard video does not capture the 3D aspect of human motion, which is important for comprehending motions that may otherwise be ambiguous. In this paper, we apply augmented reality (AR) techniques to give viewers insight into 3D motion by allowing them to manipulate the viewpoint of a motion sequence of a human actor using a handheld mobile device. The motion sequence is captured using a single RGB-D sensor, which is easier for a general user, but presents the unique challenge of synthesizing novel views using images captured from a single viewpoint. To address this challenge, our proposed system reconstructs a 3D model of the actor, then uses a combination of the actor's pose and viewpoint similarity to find appropriate images to texture it. We call this novel view of a moving human actor a reenactment. (A sketch of one plausible frame-selection score follows this entry.)
language of the presentation: English
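
The abstract combines pose similarity and viewpoint similarity to select texture images; the sketch below shows one plausible scoring of that kind, where the joint-position pose representation, viewing-direction vectors, and weights are all assumptions rather than the paper's actual formulation:

    import numpy as np

    def frame_cost(q_pose, f_pose, q_view, f_view, w_pose=1.0, w_view=1.0):
        # Pose term: mean distance between corresponding skeleton joints
        # (each pose is an N x 3 array of 3D joint positions).
        pose_d = np.linalg.norm(q_pose - f_pose, axis=1).mean()
        # Viewpoint term: angle between the requested novel-view direction
        # and the direction the frame was captured from (unit vectors).
        view_d = np.arccos(np.clip(np.dot(q_view, f_view), -1.0, 1.0))
        return w_pose * pose_d + w_view * view_d

    def best_texture_frame(q_pose, q_view, frames):
        # frames: list of (pose, view_dir, image) from the RGB-D sequence;
        # the lowest-cost frame is used to texture the reconstructed model.
        return min(frames, key=lambda f: frame_cost(q_pose, f[0], q_view, f[1]))

Weighting the two terms trades off texture correctness (matching pose) against appearance consistency (matching viewpoint) when only a single capture viewpoint is available.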