Seminar Presentations

Date & Time: Monday, June 27, 3rd period (13:30-15:00)


Venue: L1

Chair: 能地 宏
杉山 弘晃 1461201: D, interim presentation, Intelligent Communication: 中村 哲, 松本 裕治, 戸田 智基, Sakriani Sakti, Graham Neubig
title: Development of chat-oriented dialogue systems
abstract: We present our work on developing chat-oriented dialogue systems. In chat-oriented dialogue, people talk about a wide range of topics, and conversations are often driven by questions about the participants' own personalities. To respond to such utterances appropriately, we leverage two proposed methods: open-domain utterance generation based on dependency relations, and question answering about the system's own personality. Our experiments show that both methods are effective for generating appropriate responses.
language of the presentation: Japanese
 
PHAN DUC ANH 1461213: D, interim presentation, Natural Language Processing: 松本 裕治, 中村 哲, 新保 仁, 進藤 裕之
title: Multiple Emotions Detection in Conversation Transcript
abstract: We present a method for predicting multiple emotion labels in conversation transcripts. The transcripts come from a movie dialogue corpus and are partially annotated by three annotators. The method first builds an emotion lexicon, bootstrapped from WordNet following Plutchik's basic emotions and dyads. The lexicon is then adapted to the training data with a simple neural network that fine-tunes the weights toward each basic emotion. We use the adapted lexicon to extract features for a second, deeper network that detects the emotions in the transcripts. Experiments conducted to confirm the effectiveness of the method show that it performs nearly as well as a human annotator; however, further work is needed to improve the inter-annotator agreement score of the corpus.
language of the presentation: English
 
PHI VAN THUY 1451208: M, 2nd presentation, Natural Language Processing: 松本 裕治, 中村 哲, 新保 仁, 進藤 裕之
title: Integrating Word Embedding Offsets into the Espresso System for Part-Whole Relation Extraction
abstract: The part-whole relation, or meronymy, plays an important role in many domains. Among approaches to the part-whole relation extraction task, the Espresso bootstrapping algorithm has proved effective, significantly improving recall while keeping precision high. In our research, we first investigate the effect of fine-grained subtypes and a careful seed-selection step on the performance of part-whole relation extraction; our multi-task learning and careful seed selection were major factors in achieving higher precision. We then improve the Espresso bootstrapping algorithm for this task by integrating a word-embedding approach into its iterations. The key idea is an additional ranker component, a Similarity Ranker, in the instance extraction phase of the Espresso system; this ranker uses embedding-offset information between instance pairs of the part-whole relation. Experiments show that our proposed system achieved a precision of 84.9% for harvesting instances of the part-whole relation, outperforming the original Espresso system. Our system can also be extended to extract other binary relations.
language of the presentation: English
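The embedding-offset idea in the abstract above can be illustrated with a small sketch. This is only an illustration, not the presented system: all embeddings and word pairs below are toy values, and the actual Similarity Ranker operates on embeddings learned from a corpus. A candidate (part, whole) pair is scored by how closely its embedding offset matches the average offset of known seed pairs.

```python
import numpy as np

# Toy embeddings (assumption: the real system uses vectors from a
# trained word-embedding model, not these hand-picked values).
emb = {
    "wheel": np.array([0.9, 0.1, 0.3]),
    "car":   np.array([0.2, 0.8, 0.4]),
    "leaf":  np.array([0.8, 0.2, 0.3]),
    "tree":  np.array([0.1, 0.9, 0.5]),
    "door":  np.array([0.85, 0.15, 0.25]),
    "house": np.array([0.15, 0.85, 0.45]),
    "dog":   np.array([0.5, 0.5, 0.9]),
    "cat":   np.array([0.5, 0.6, 0.8]),
}

def offset(part, whole):
    """Embedding offset of a candidate (part, whole) pair."""
    return emb[whole] - emb[part]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Prototype offset: the average offset over seed part-whole pairs.
seeds = [("wheel", "car"), ("leaf", "tree")]
proto = np.mean([offset(p, w) for p, w in seeds], axis=0)

# Rank candidate pairs by how similar their offset is to the prototype;
# a true part-whole pair should rank above an unrelated pair.
candidates = [("dog", "cat"), ("door", "house")]
ranked = sorted(candidates, key=lambda pw: cosine(offset(*pw), proto), reverse=True)
print(ranked)  # ("door", "house") ranks above the non-meronymic ("dog", "cat")
```

In the actual Espresso iteration, such a score would be one signal combined with the pattern-based reliability measures when deciding which harvested instances to keep.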
 

Venue: L2

Chair: 武富 貴史
FUVATTANASILP VARUNYU 1551202: M, 1st presentation, Interactive Media Design: 加藤 博一
title: Gravity-Aware Annotating for Handheld Augmented Reality in Remote Collaboration
abstract: Remote task assistance using augmented reality allows an expert to collaborate remotely with a local user and guide them through a task. To place a virtual 3D annotation in the real environment at the right position and orientation, the system needs 3D structure information about the target environment, which is difficult to obtain due to hardware limitations. In this presentation, I describe a method for placing 3D annotations without 3D structure information, using gravity information to decide the orientation of each annotation.
language of the presentation: English
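A rough sketch of the gravity idea above (my illustration, not the presenter's actual implementation): the device's accelerometer gives a gravity vector, and the annotation's up axis can be set to oppose it, so the annotation stands upright without any knowledge of the scene's 3D structure. The horizontal facing is left as a free choice here.

```python
import numpy as np

def gravity_aligned_orientation(gravity):
    """Build a 3x3 rotation matrix whose up (y) axis opposes the measured
    gravity vector, so an annotation placed with it stands upright.
    The two horizontal axes are chosen arbitrarily (an assumption of
    this sketch; a real system might face the annotation toward the user)."""
    g = np.asarray(gravity, dtype=float)
    up = -g / np.linalg.norm(g)              # annotation "up" opposes gravity
    # Pick any reference direction not parallel to "up" to derive the rest.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, up)) > 0.9:
        ref = np.array([0.0, 0.0, 1.0])
    right = np.cross(ref, up)
    right /= np.linalg.norm(right)
    forward = np.cross(right, up)            # completes a right-handed frame
    return np.column_stack([right, up, forward])  # columns: x, y, z axes

# With gravity pointing straight down (-y), the up axis comes out as +y.
R = gravity_aligned_orientation([0.0, -9.8, 0.0])
print(R[:, 1])  # -> [0. 1. 0.]
```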
 
PIPATANAKUL KUNAT 1551206: M, 1st presentation, Visual Information Media: 横矢 直和
title: Indirect Augmented Reality With Online Panoramic Image Creation
abstract: Indirect augmented reality (indirect AR) achieves high-quality fusion of the real and virtual worlds without jitter by capturing an omnidirectional image in advance and embedding virtual objects into it. While ordinary augmented reality requires accurate camera pose estimation to superimpose virtual objects in real time, indirect AR needs only a rough estimate of the device pose, because the estimated pose is used solely to crop the omnidirectional image containing the virtual objects. However, with existing indirect AR methods a user cannot run the application in an unknown location, since they require a pre-captured omnidirectional image. In this presentation, we introduce a new indirect AR method that creates a panoramic image with virtual objects online by stitching images, and shows AR images without jitter between the real and virtual worlds. In experiments, we show how the proposed method works in a real-world situation.
language of the presentation: English
 
RONGSIRIGUL THIWAT 1551207: M, 1st presentation, Visual Information Media: 横矢 直和
title: GPU Accelerated Novel View Synthesis for HMDs
abstract: Recently, the proliferation of off-the-shelf head-mounted displays (HMDs) has let end users enjoy virtual reality (VR) applications such as telepresence. Such applications synthesize images of a real-world scene and present them to the user. To synthesize these images, a multi-view stereo technique can be used to reconstruct a 3D model of the scene from captured images, but the resulting model is not always accurate. To mitigate this problem, view-dependent texture mapping (VDTM) is used: it renders a novel view by selecting textures from the most appropriate captured image for the user's current viewing direction. However, this process takes a long time, since VDTM must search every captured image; for stereoscopic HMDs the situation is even worse, because the view must be synthesized twice, which almost doubles the computational cost. In this presentation, we render the reconstructed real-world scene on an HMD, using a GPU to improve the performance of the VDTM process.
language of the presentation: English
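The core of VDTM's view selection described above can be sketched as follows. This is a simplified, single-query CPU illustration of the general technique, not the presenter's implementation: the real system performs this search per surface patch, for both eyes of the stereo HMD, and the contribution here is moving that search onto the GPU.

```python
import numpy as np

def select_best_view(view_dir, camera_dirs):
    """Pick the captured image whose viewing direction is closest (by
    angle) to the user's current viewing direction. VDTM then samples
    its texture from that image. `camera_dirs` holds one direction per
    captured image; in practice this runs per surface patch."""
    v = np.asarray(view_dir, dtype=float)
    v /= np.linalg.norm(v)
    cams = np.asarray(camera_dirs, dtype=float)
    cams = cams / np.linalg.norm(cams, axis=1, keepdims=True)
    # Cosine similarity against every captured view; argmax = smallest angle.
    return int(np.argmax(cams @ v))

cameras = [[1, 0, 0], [0, 1, 0], [0.7, 0.7, 0]]
print(select_best_view([0.9, 0.1, 0.0], cameras))  # -> 0
```

The `cams @ v` step is a single dense dot product over all captured views, which is exactly the kind of data-parallel workload that maps naturally onto a GPU.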