The 1st COE Postdoctoral and Doctoral Researchers
Technical Presentation

Date: Thursday, April 28, 2005
Time: 13:30 - 15:30
Place: L1 Lecture Room
Language: English (Oral Presentation), English/Japanese (Questions)
Chairperson: Hideki Shimada (Internet Architecture and Systems Lab. : PD),
Sei Ikeda (Vision and Media Computing Lab. : D3)

Program (20 mins each: 15 mins presentation and 5 mins discussion)

  1. "Teleradiology for Remote Quantitative Analysis of PET images -Omission of Arterial Blood Sampling-"
    長縄 美香 ( 像情報処理学講座 : PD )
    Mika Naganawa ( Image Processing Laboratory : PD )

    [Abstract]
    The number of PET scans performed annually will continue to increase as the instrumentation becomes more widely available; there were 121 PET imaging systems in Japan as of November 2004. Kinetic analysis of positron emission tomography (PET) data provides functional parameters in absolute units. I propose a teleradiology system for kinetic analysis to compensate for the shortage of radiologists. Two problems arise in building such a system: the huge computation time needed to process PET images and the necessity of arterial cannulation. In this talk, I will focus on a method for omitting arterial blood sampling using spatial independent component analysis.
  2. "Automatic user location system using Active IR-tag"
    坂田 宗之 ( 像情報処理学講座 : PD )
    Muneyuki Sakata ( Image Processing Laboratory : PD )

    [Abstract]
    In a ubiquitous computing environment, user location is one of the most important pieces of information. The Global Positioning System (GPS) is the most powerful tool outdoors, but it cannot be used indoors. We have proposed ALTAIR, a new user location system that realizes user tracking and identification indoors. In this presentation, I describe experiments showing that ALTAIR can automatically detect and track users' locations and that each user can obtain information based on his/her current place. Because ALTAIR manages users' locations in a database, this information can be applied to many kinds of applications that use users' past and present locations.
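
    The abstract does not specify ALTAIR's actual data model; as a minimal sketch of the idea of a location database that answers both "where is the user now" and "where has the user been" queries, one might imagine something like the following (all names are hypothetical, not the real ALTAIR schema):

    ```python
    import time
    from collections import defaultdict

    class LocationDB:
        """Hypothetical store of timestamped user locations (illustrative only)."""

        def __init__(self):
            # user_id -> list of (timestamp, place), in insertion order
            self._history = defaultdict(list)

        def record(self, user_id, place, timestamp=None):
            ts = timestamp if timestamp is not None else time.time()
            self._history[user_id].append((ts, place))

        def current(self, user_id):
            """Most recently recorded place, or None if the user is unknown."""
            entries = self._history.get(user_id)
            return entries[-1][1] if entries else None

        def history(self, user_id):
            """Past and present locations, enabling history-based applications."""
            return list(self._history[user_id])

    db = LocationDB()
    db.record("alice", "Lab A", timestamp=1)
    db.record("alice", "Lecture Room L1", timestamp=2)
    print(db.current("alice"))  # Lecture Room L1
    ```

    A real system would populate `record` from the IR-tag detections rather than manual calls, but the same two query shapes support both current-location services and applications over past locations.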
  3. "Investigating the role of the Lombard reflex in Non-Audible Murmur (NAM) recognition"
    Panikos Heracleous ( 音情報処理学講座 : PD )
    Panikos Heracleous ( Speech and Acoustics Laboratory : PD )

    [Abstract]
    Previously, we reported experimental results for Non-Audible Murmur (NAM) automatic recognition. Using a small amount of data and adaptation techniques, we achieved 93.9% word accuracy on a 20k dictation task in a clean environment. In this work, we report results in noisy environments and investigate the role of the Lombard reflex in NAM recognition. In noisy environments, speakers attempt to increase the intelligibility of their voice, and as a result their speech characteristics (e.g., fundamental frequency, formants, spectral tilt) change, affecting the performance of a speech recognizer. Our results show that the Lombard reflex has a negative effect on NAM recognition.

  4. "Runtime Feature Interaction Detection and Resolution in Integrated Services of Networked Home Appliances"
    井垣 宏 ( ソフトウェア工学講座 : PD )
    Hiroshi Igaki ( Software Engineering Laboratory : PD )

    [Abstract]
    An HNS (Home Network System) integrated service orchestrates multiple networked appliances to add value to the user's life. When multiple integrated services are executed simultaneously, the features in the services may conflict with each other, resulting in unexpected behaviors. In our previous research, we proposed a formalization and an off-line detection method for appliance interactions and environment interactions. However, off-line detection enables only off-line resolution (rebuilding or deleting scenarios), which significantly limits flexible service creation. In this presentation, we propose an on-line detection method that detects interactions at runtime. We also present several on-line interaction resolution schemes and give a comparative discussion of them.
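
    The abstract leaves the detection mechanism itself abstract. As a hedged sketch (not the authors' formalization), an appliance interaction can be illustrated as two services trying to hold the same appliance property in different states, checked at the moment each command is issued (all service and appliance names below are invented for illustration):

    ```python
    class InteractionDetector:
        """Hypothetical runtime detector: flags two services driving one
        appliance property to different values (an 'appliance interaction')."""

        def __init__(self):
            # (appliance, property) -> (holding service, requested value)
            self._claims = {}

        def request(self, service, appliance, prop, value):
            key = (appliance, prop)
            held = self._claims.get(key)
            if held is not None and held[0] != service and held[1] != value:
                # Conflict detected at runtime; an on-line resolution scheme
                # (priority, negotiation, ...) would decide what happens here.
                return ("conflict", held[0])
            self._claims[key] = (service, value)
            return ("ok", None)

    d = InteractionDetector()
    print(d.request("AirConditioningService", "window", "state", "closed"))
    # -> ('ok', None)
    print(d.request("VentilationService", "window", "state", "open"))
    # -> ('conflict', 'AirConditioningService')
    ```

    The point of performing this check at runtime rather than off-line is that the conflicting scenario need not be rebuilt or deleted in advance; the resolution scheme can react only when the conflict actually occurs.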

  5. "Intuitively Annotating User's Gazed Objects for Wearable AR Systems"
    天目 隆平 ( 視覚情報メディア講座 : D3 )
    Ryuhei Tenmoku ( Vision and Media Computing Laboratory : D3 )

    [Abstract]
    By realizing augmented reality on wearable computers, it becomes possible to overlay annotations on the real world based on the user's current position and orientation. However, it is difficult for the user to intuitively understand the links between annotations and real objects when the scene is complicated or many annotations are overlaid at the same time. In this presentation, I describe a view management method that emphasizes the real objects the user is gazing at, together with their annotations, using 3D models of the scene. The proposed method effectively highlights the object gazed at by the user. In addition, when the gazed object is occluded by other real objects, it is complemented on the overlaid image using an image generated from the 3D models.
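
    The abstract does not detail how occlusion of the gazed object is determined. A minimal sketch, assuming per-pixel depths are available from rendering the scene's 3D models (the function and threshold below are hypothetical, not the presented method):

    ```python
    def occluded_fraction(object_depth, scene_depth, eps=1e-3):
        """Over the gazed object's screen footprint, count pixels where the
        rendered 3D scene model has a surface closer to the camera than the
        object itself. Both arguments are per-pixel depth lists."""
        hidden = sum(1 for o, s in zip(object_depth, scene_depth) if s + eps < o)
        return hidden / len(object_depth)

    # Toy footprint of 4 pixels: the gazed object lies at depth 5.0 everywhere,
    # but another modeled surface at depth 3.0 covers half of it.
    frac = occluded_fraction([5.0, 5.0, 5.0, 5.0], [3.0, 3.0, 5.0, 5.0])
    print(frac)  # 0.5
    if frac > 0.25:
        # Fall back to compositing a model-rendered image of the hidden part
        # of the object onto the overlaid view.
        pass
    ```

    Any such test would feed the complementation step: when enough of the gazed object is hidden, the system substitutes an image generated from the 3D models so the user can still see what is being annotated.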


21st Century COE Program
NAIST Graduate School of Information Science