Colloquium B Presentations

Date/Time: Wednesday, June 17, 3rd period (13:30-15:00)

Venue: L1

Chair: 高橋 慧智
FANG YU M, 1st presentation, Interactive Media Design Lab, 加藤 博一, 清川 清, 神原 誠之, 藤本 雄一郎
title: Disentangling Camera-Invariant Representations for Person Re-identification
abstract: Extracting meaningful factors from diverse data variations has attracted attention in various fields. The extracted factors should be interpretable and usable in downstream tasks. In this study, we focus on disentangled representation learning for a specific task, person re-identification (Re-ID). Re-ID aims to match the same person across different camera views and is difficult due to changes in human pose, viewpoint, background, and illumination. Some related works attempt to disentangle ID-related features, but they cannot guarantee the independence of the separated factors. In addition, paired data is needed to achieve strong disentanglement. To reduce the cost of manual annotation, we propose learning robust camera-invariant features using synthetic and real-world datasets.
language of the presentation: English
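As a rough illustration of the camera-invariant disentanglement idea, here is a minimal PyTorch sketch of one common formulation (not necessarily the presenter's model): split the embedding into ID-related and camera-related parts, supervise each part with its own classifier, and purge camera information from the ID branch with an adversarial gradient-reversal head. All layer sizes and label counts (e.g., num_ids=751) are illustrative assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips the gradient sign in the backward pass."""
        @staticmethod
        def forward(ctx, x):
            return x.clone()
        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    class DisentangledEncoder(nn.Module):
        def __init__(self, feat_dim=2048, id_dim=512, cam_dim=128, num_ids=751, num_cams=6):
            super().__init__()
            self.proj = nn.Linear(feat_dim, id_dim + cam_dim)  # stand-in for a CNN backbone
            self.id_dim = id_dim
            self.id_head = nn.Linear(id_dim, num_ids)     # supervises the ID-related part
            self.cam_head = nn.Linear(cam_dim, num_cams)  # absorbs camera-style information
            self.adv_head = nn.Linear(id_dim, num_cams)   # adversarial: purges camera cues

        def forward(self, x):
            f = self.proj(x)
            f_id, f_cam = f[:, :self.id_dim], f[:, self.id_dim:]
            return self.id_head(f_id), self.cam_head(f_cam), self.adv_head(GradReverse.apply(f_id))

    ce = nn.CrossEntropyLoss()
    def loss_fn(outputs, id_labels, cam_labels):
        id_logits, cam_logits, adv_logits = outputs
        # The reversed gradient from adv_head trains the encoder to make the
        # ID-related part uninformative about which camera took the image.
        return ce(id_logits, id_labels) + ce(cam_logits, cam_labels) + ce(adv_logits, cam_labels)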
 
ZHANG RENJIE M, 1st presentation, Interactive Media Design Lab, 加藤 博一, 清川 清, 神原 誠之, 藤本 雄一郎
title: Differentiable hand image generation
abstract: Hand pose recognition is fundamental to future human-computer interaction. In recent years, RGB-D-based methods have achieved great success, while RGB-based methods still perform poorly. To correct the estimated pose, we would like to check whether the predicted output matches the input; to do this, we need to transform the estimated hand pose into the corresponding image. Currently, there are two kinds of methods for generating such pose-conditioned images: data-based and model-based. Data-based methods tend to generate blurred images, while model-based methods cannot provide gradients for training. In our work, we combine the two approaches and aim for a model that produces clear images while remaining differentiable.
language of the presentation: English
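To make the hybrid idea concrete, the sketch below renders an estimated hand pose with a simple differentiable model-based step (Gaussian splats of 21 joint positions stand in for a real hand renderer) and refines it with a small data-based network; because both stages are differentiable, an image-space loss can push gradients back into the pose. This is a hypothetical PyTorch illustration, and all shapes and layers are assumptions.

    import torch
    import torch.nn as nn

    def render_joints(joints, size=64, sigma=2.0):
        """Differentiable stand-in renderer: joint pixels (B, 21, 2) -> heatmap image (B, 1, H, W)."""
        ys = torch.arange(size, dtype=joints.dtype).view(1, 1, size, 1)
        xs = torch.arange(size, dtype=joints.dtype).view(1, 1, 1, size)
        d2 = (ys - joints[..., 1, None, None]) ** 2 + (xs - joints[..., 0, None, None]) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2)).sum(dim=1, keepdim=True)

    refiner = nn.Sequential(                        # data-based stage: learns texture and shading
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
    )

    pose = (torch.rand(1, 21, 2) * 63).requires_grad_()   # estimated hand pose (leaf tensor)
    image = refiner(render_joints(pose))                  # (1, 3, 64, 64) rendered hand image
    loss = (image - torch.rand_like(image)).abs().mean()  # compare against the input image
    loss.backward()                                       # gradients flow back into `pose`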
 
CALUYA NICKO REGINIO D, interim presentation, Interactive Media Design Lab, 加藤 博一, 清川 清, 神原 誠之, 藤本 雄一郎
title: Visual Perception Issues in Augmented Reality Training Experiences
abstract: In this research, I highlight the importance of visual perception in the overall effectiveness of augmented reality training experiences in two ways. First, I conducted a user study on the effects of the field of view (FOV) of optical see-through head-mounted displays (OST-HMDs) on spatial memory, as an extension of my master's thesis. Using a VR HMD simulation of an AR environment, participants memorized the locations of objects under three FOV sizes of the augmented area (30°, 70°, and 110° diagonal) in three training sessions. Results from recall tests showed that a narrower FOV did not affect users' performance on either the short-term or the transfer tests, but HMD data revealed that users rotated their heads less with the 110° FOV, and the proximity of the objects to memorize had an interaction effect with the smaller FOV sizes. Second, my current in-progress research involves speed perception and its effects on performance when training in augmented reality. In domains such as sports and vehicle operation, speed misperception hinders effective and safe task performance. The goal is to use perception-based augmented reality to compensate when humans incorrectly perceive speed, for example by providing adjustments when they underestimate the speed of incoming objects.
language of the presentation: English
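To make the compensation goal concrete, a minimal sketch, assuming perceived speed can be modeled as a linear gain on true speed (a gain below 1 means underestimation); both the linear model and the 0.8 value are placeholders for illustration, not findings of this research.

    def compensated_speed(true_speed_mps: float, perceptual_gain: float = 0.8) -> float:
        """Speed to visualize so that perceived speed ~= true speed under a linear gain model."""
        return true_speed_mps / perceptual_gain

    # Example: a ball approaching at 10 m/s would be visualized at 12.5 m/s,
    # so an underestimating user's percept lands near the real 10 m/s.
    print(compensated_speed(10.0))  # 12.5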
 
井上 陽平 M, 2nd presentation, Interactive Media Design Lab, 加藤 博一, 清川 清, 神原 誠之, Alexander Plopski, 藤本 雄一郎
title: EUI: Finger-aware multitouch interaction with virtual hands
abstract: In this research, we present Enhanced-User Interaction (EUI), a finger-aware multi-touch interaction with virtual hands drawn on the desktop screen, where functions are dynamically assigned to each finger. The virtual hands are drawn as two-dimensional contours, and finger-aware touch interaction on the desk surface is realized by capturing the user's hands with an RGB-D camera installed diagonally in front of the user. A single finger-aware touch combines target selection, command selection, and argument control. EUI dynamically assigns possible functions to each finger and integrates multi-touch gestures with menu techniques, allowing more direct-manipulation operations than the number of fingers without switching modes. By presenting the functions in an easy-to-understand manner, users can recognize the finger-function correspondence and operate immediately even when functions are switched. We are implementing the system and will conduct qualitative and quantitative studies.
language of the presentation: Japanese
Presentation title (Japanese): EUI: Finger-aware multi-touch interaction with virtual hands
Presentation abstract (Japanese): In this research, we propose, implement, and evaluate Enhanced-User Interaction (EUI), which realizes finger-aware multi-touch interaction for desktop computer use through virtual hands drawn on the screen, with functions dynamically assigned to each finger. An RGB-D camera installed diagonally above and in front of the user captures the hands on the desk; the virtual hands are drawn as two-dimensional contours, enabling finger-identifying touch operation on the desk surface. A single finger-identified touch performs target selection, command selection, and argument input in one combined action. By dynamically assigning the available operations to the fingers and integrating them with conventional multi-touch gestures and menu interaction, EUI enables more direct operations than the number of fingers without mode switching. Presenting the assigned functions in an easy-to-understand manner makes the finger-function correspondence easy to recognize, so users can operate immediately even when the fingers' functions are switched.
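To illustrate the dynamic finger-to-function assignment described above, a minimal Python sketch: once the RGB-D pipeline identifies which finger touched the desk, the command currently bound to that finger for the selected target runs with the touch's argument, so one touch combines target, command, and argument. The targets, finger names, and commands here are invented for illustration, not the presenter's actual implementation.

    from typing import Callable, Dict

    FINGERS = ("thumb", "index", "middle", "ring", "pinky")

    class FingerMapper:
        def __init__(self) -> None:
            self.bindings: Dict[str, Dict[str, Callable[[float], None]]] = {}

        def assign(self, target: str, finger: str, command: Callable[[float], None]) -> None:
            """Bind a command to one finger while `target` is the selected object."""
            self.bindings.setdefault(target, {})[finger] = command

        def on_touch(self, target: str, finger: str, drag_amount: float) -> None:
            """Dispatch: target selection + command selection + argument in one touch."""
            command = self.bindings.get(target, {}).get(finger)
            if command is not None:
                command(drag_amount)

    mapper = FingerMapper()
    mapper.assign("image", "index", lambda d: print(f"rotate by {d:.0f} deg"))
    mapper.assign("image", "middle", lambda d: print(f"scale by {1 + d / 100:.2f}x"))
    mapper.on_touch("image", "index", 30.0)   # -> rotate by 30 deg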