The 9th COE Postdoctoral and Doctoral Researchers
Technical Presentation

Date: Thursday, Jan. 12, 2006 (Rescheduled)
Time: 13:30 - 16:20
Place: L3 Lecture Room
Language: English (Oral Presentation), English/Japanese (Questions)
Chairpersons: Hiroshi Igaki (Software Engineering Lab.: PD)
              Yuichiro Kanzaki (Software Engineering Lab.: D3)

Program (20 mins each: 15 mins presentation and 5 mins discussion)

  1. "Manipulative Familiarization and Fatigue Evaluation Using Contact State Transition"
    Masahiro Kondo (Robotics Lab.:D2)

    [Abstract]

    In this research, we propose a method for generating a template for manipulation recognition that takes familiarization and fatigue into account. Our recognition system allows quantitative comparison of similarities among manipulations by observing the contact state transition on the palm surface. The system detects the contact states on the palm using a tactile sensor sheet attached to the manipulated object. In the experiment, familiarization and fatigue are detected by measuring the variance of the manipulation. The experimental results indicate that the manipulation can be divided into three periods: learning, familiarization, and fatigue. The results also indicate that the variation of the manipulation becomes small when the template is generated from manipulation data recorded during the familiarization period.
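    The three periods above might be separated by tracking the variance of manipulation-similarity scores over time. The following is only an illustrative sketch: the window size, threshold, and labeling rule are my assumptions, not the authors' method.

    ```python
    import statistics

    def segment_periods(similarities, window=5, low_var=0.01):
        """Label each sliding window of similarity scores as 'learning',
        'familiarization', or 'fatigue': variance is high at first, falls
        as the user becomes familiar, then rises again with fatigue."""
        labels = []
        seen_low = False
        for i in range(len(similarities) - window + 1):
            var = statistics.pvariance(similarities[i:i + window])
            if var < low_var:
                seen_low = True
                labels.append("familiarization")
            else:
                labels.append("fatigue" if seen_low else "learning")
        return labels
    ```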


  2. "On the Relation between Robot Bodily Expressions and their Impression on the User"
    Khiat, Abdelaziz (Robotics Lab.: D3)

    [Abstract]

    During an interaction, people usually adapt their behavior according to their interpretation of their partner's bodily expressions. It is not known to what extent similar expressions performed by robots affect a human observer. This presentation explores this issue. The study shows a correlation between the nature of a robot's bodily expressions, as rated in questionnaires, and their effect on brain activity. I will illustrate how unpleasant bodily expressions of the robot elicit unpleasant impressions and vice versa: pleasant expressions were reflected in brain activity in one specific area, and unpleasant expressions in another.


  3. "Evaluation of Semi-automatic Location-based Photo Captioning System"
    Kiyoko Iwasaki (Vision and Media Computing Lab.: D1)

    [Abstract]

    With the spread of digital cameras, taking photos has become an everyday activity. However, there are few methods or systems for managing photos simply, and a huge amount of photo data remains unorganized. We have proposed a semi-automatic photo captioning system that enables users to generate captions easily. Caption candidates are acquired from geographic database retrieval and from relevant words extracted by web retrieval, based on the shooting position and orientation. We built a prototype system to evaluate the proposed photo captioning framework. In this presentation, we show the results of user evaluation experiments.
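    One way to combine candidates from the two retrieval sources is to boost captions that both sources return. This is a minimal sketch under that assumption; the example place names and scores are hypothetical, not from the paper.

    ```python
    def rank_caption_candidates(geo_candidates, web_candidates, top_k=3):
        """Merge (caption, score) candidates from a geographic database and
        from web retrieval, scoring a caption higher when both sources agree,
        and return the top_k captions by combined score."""
        scores = {}
        for name, score in geo_candidates + web_candidates:
            scores[name] = scores.get(name, 0.0) + score
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        return [name for name, _ in ranked[:top_k]]
    ```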


  4. "Novel View Generation from Multiple Omni-directional Videos"
    Tomoya Ishikawa (Vision and Media Computing Lab.:D1)

    [Abstract]

    Generation of novel views from images acquired by multiple cameras has been investigated in the fields of virtual and mixed reality. Most conventional methods require assumptions about the scene, such as a static scene or limited object positions. In this presentation, we propose a method for generating novel view images of a dynamic scene with a wide view. The images acquired from omni-directional cameras are first divided into static regions and dynamic regions. The novel view images are then generated by applying a morphing technique to the static regions and by computing visual hulls for the dynamic regions in real time. In experiments, we show that a prototype system can generate novel view images in real time from live video streams.


  5. ==================== Break (10 min) ====================

  6. "Device Access Control Mechanism for Transparent Device Sharing Technologies"
    Takahiro Hirofuchi (Internet Architecture and Systems Lab.: D2)

    [Abstract]

    USB/IP (USB request over IP networks) is a transparent device sharing technology. Using a virtual peripheral bus driver, users can share a diverse range of devices over networks without any modification to existing operating systems and applications. Our previous work showed that USB/IP has sufficient I/O performance for many USB devices, including isochronous ones. In this presentation, we propose an access control mechanism for USB/IP and other transparent device sharing technologies.
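    A device-sharing server might enforce such access control with a per-device rule list checked before exporting a device to a remote host. The rule format below (client network, bus ID, permission) is my assumption for illustration, not USB/IP's actual mechanism.

    ```python
    import ipaddress

    # Hypothetical access-control list: (client network, device bus ID, permission)
    ACL = [
        ("192.168.0.10", "1-1", "rw"),    # one host gets full access to device 1-1
        ("192.168.0.0/24", "1-2", "ro"),  # the subnet gets read-only access to 1-2
    ]

    def is_allowed(client_ip, busid, want_write):
        """Return True if client_ip may attach the device busid;
        write access additionally requires an 'rw' rule."""
        for net, dev, perm in ACL:
            if dev != busid:
                continue
            if ipaddress.ip_address(client_ip) in ipaddress.ip_network(net):
                return perm == "rw" or not want_write
        return False
    ```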


  7. "Multiple Active Camera Assignment for High Fidelity 3D Video"
    Sofiane Yous(Artificial intelligence Lab.:D2)

    [Abstract]

    In my last presentation, I presented a multiple-camera assignment scheme for high fidelity 3D video of a moving object, mainly an acting human body. The goal was to assign a set of active high-resolution cameras to the different parts of the moving object so as to achieve high visibility of the whole object. The presented scheme is executed at each frame independently, based on a visibility analysis of a roughly reconstructed 3D surface of the object. In this presentation, I will present the temporal extension of this scheme. In addition to the visibility analysis, the new assignment scheme involves each camera's last orientation as an additional constraint. The purpose is to optimize camera movement.


  8. "Wearable Augmented Reality System for Wide Indoor Environments Using Invisible Visual Markers"
    Yusuke Nakazato (Vision and Media Computing Lab.:D1)

    [Abstract]

    To realize an augmented reality (AR) system using a wearable computer, the exact position and orientation of the user are required. We propose a wearable AR system based on an infrared camera and invisible visual markers consisting of translucent retro-reflectors. In this system, the camera captures the light of infrared LEDs, attached to the camera, reflected by the markers. In this presentation, we describe a quantitative evaluation of the proposed localization method using a mechanically controlled infrared camera. We then carry out a localization experiment in real environments with the wearable AR system.


  9. "A recommendation mechanism for distributed agents environment"
    Ismail Arai (Internet Architecture and Systems Lab.: D2)

    [Abstract]

    We propose a recommendation mechanism suited to a distributed-agent environment, which we assume to be a ubiquitous computing environment. Such a mechanism must accommodate the user's dynamic situation, because users' needs change according to their circumstances and preferences.



21st Century COE Program
NAIST Graduate School of Information Science