Colloquium B Presentations

Date and time: Monday, June 15, Period 3 (13:30-15:00)


Venue: L1

Chair: 田中 宏季
JOHANES EFFENDI THE D, Interim Presentation, Intelligent Communication, 中村 哲, 渡辺 太郎, Sakriani Sakti
title: Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain

abstract: Previously, a machine speech chain based on sequence-to-sequence deep learning was proposed to mimic speech perception and production behavior. Such a chain processes listening and speaking separately, by automatic speech recognition (ASR) and text-to-speech synthesis (TTS), while enabling the two to teach each other through semi-supervised learning when they receive unpaired data. Unfortunately, this speech chain is limited to the speech and text modalities, whereas natural communication is multimodal and involves both the auditory and visual sensory systems. Moreover, although the speech chain reduces the requirement for fully paired data, it still needs a large amount of unpaired data. In this research, we take a further step and construct a multimodal chain, designing a closely knit architecture that combines ASR, TTS, image captioning, and image production models into a single framework. The framework allows each component to be trained without requiring a large amount of parallel multimodal data. Our experimental results show that an ASR model can be further trained even without speech and text data, and that cross-modal data augmentation remains possible through our proposed chain, which improves ASR performance.

language of the presentation: English
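The closed-loop, semi-supervised principle behind the chain can be illustrated with a toy sketch: each component labels unpaired data so the other can compute a reconstruction loss. The functions below are placeholders of our own (uppercase strings stand in for "speech", lowercase for "text"); real components would be sequence-to-sequence networks, and this is not the presenter's implementation.

```python
def asr(speech):
    """Listening: speech -> text (placeholder model)."""
    return speech.lower()

def tts(text):
    """Speaking: text -> speech (placeholder model)."""
    return text.upper()

def loss(a, b):
    """Reconstruction loss (placeholder: count of mismatched positions)."""
    return sum(x != y for x, y in zip(a, b))

def chain_step(unpaired_speech=None, unpaired_text=None):
    """One speech-chain update: each component supervises the other on unpaired data."""
    losses = {}
    if unpaired_speech is not None:
        pseudo_text = asr(unpaired_speech)              # ASR transcribes the unpaired speech
        losses["tts"] = loss(tts(pseudo_text), unpaired_speech)  # TTS trained to reconstruct it
    if unpaired_text is not None:
        pseudo_speech = tts(unpaired_text)              # TTS synthesizes the unpaired text
        losses["asr"] = loss(asr(pseudo_speech), unpaired_text)  # ASR trained to recover it
    return losses

print(chain_step(unpaired_speech="HELLO", unpaired_text="world"))  # {'tts': 0, 'asr': 0}
```

The multimodal chain extends this loop with image captioning and image production, so a component can also receive supervision routed through the visual modality.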
 
KAING HOUR D, Interim Presentation, Intelligent Communication, 中村 哲, 渡辺 太郎, 須藤 克仁
title: Cross-Lingual Transfer Learning for Language Analysis of Understudied Low-Resource Languages
abstract: Many of the great success stories of natural language processing are found in Indo-European languages such as English. However, there are still many understudied languages that use complex scripts and, above all, lack resources, including Khmer, Myanmar, Thai, and Lao. The basic problem for these languages is the lack of good language analysis tools combined with data scarcity. This study focuses on the Khmer language and aims to solve this problem using cross-lingual transfer learning to induce an analyzer with little or no data. Cross-Lingual Word Embedding (CLWE), which maps the embedding space of a source (low-resource) language into that of a target (rich-resource) language, has been widely used for zero-shot learning in many downstream tasks. This presentation examines the performance of linear transformations for CLWE between Khmer and English. We evaluate two optimization objectives, least squares and orthogonal, on the Bilingual Lexicon Induction (BLI) task with k-nearest-neighbor (k-NN) retrieval. The result is that CLWE cannot even fit the training examples well. According to our analysis, this phenomenon is caused by anisomorphism between Khmer and English, which illustrates the difficulty of using CLWE for a zero-shot language analyzer.
language of the presentation: English
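The two mapping objectives named in the abstract can be sketched with NumPy: least squares solves min ||XW - Y|| directly, while the orthogonal variant constrains W to a rotation via the Procrustes SVD solution, and BLI then retrieves translations by nearest neighbor in the mapped space. The toy embeddings and function names below are our own illustration (here the target is an exact rotation of the source, the easy case; the presenter's finding is that real Khmer-English spaces are far from this ideal), not the presenter's code.

```python
import numpy as np

def least_squares_map(X, Y):
    # W minimizing ||XW - Y||_F (unconstrained linear map)
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def orthogonal_map(X, Y):
    # Orthogonal Procrustes solution: W = U V^T from the SVD of X^T Y
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def knn_translate(x, target_vocab, W, k=1):
    # BLI retrieval: map x into the target space, rank target words by cosine similarity
    q = x @ W
    sims = target_vocab @ q / (np.linalg.norm(target_vocab, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))                    # source-language embeddings
R, _ = np.linalg.qr(rng.standard_normal((50, 50)))    # a ground-truth rotation
Y = X @ R                                             # target embeddings, isomorphic by construction
W = orthogonal_map(X, Y)
print(np.allclose(X @ W, Y, atol=1e-6))               # True: Procrustes recovers the rotation
```

When the two spaces are anisomorphic, as the abstract reports for Khmer and English, no single W of either kind can align them, which is why even the training pairs fit poorly.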
 
WU BIN D, Interim Presentation, Intelligent Communication, 中村 哲, 渡辺 太郎, Sakriani Sakti
title: Unsupervised Phoneme Discovery Using DPGMM-RNN Hybrid Model
abstract: The lack of correspondence between perceptual phonemes and acoustic signals poses a major challenge in designing unsupervised algorithms that distinguish phonemes from sound. Recently, the DPGMM clustering algorithm has achieved the top performance on this unsupervised phoneme discovery task. However, DPGMM clustering suffers from a fragmentation problem. We propose a DPGMM-RNN hybrid model that improves phoneme categorization by relieving the fragmentation problem. Our results show that the DPGMM-RNN hybrid model relieves the fragmentation problem and improves phoneme discriminability.
language of the presentation: English
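As background, the Dirichlet-process GMM clustering that the hybrid model builds on can be approximated with scikit-learn's truncated variational DPGMM, which infers the number of clusters rather than fixing it. The synthetic 2-D "frames" below are our own stand-in for acoustic features, not the presenter's data or setup.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic feature frames: three well-separated clusters standing in for phoneme classes
frames = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(200, 2))
    for c in ([0.0, 0.0], [4.0, 0.0], [0.0, 4.0])
])

# Truncated variational approximation to a Dirichlet-process GMM:
# up to 10 components are allowed, but the DP prior prunes unused ones.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(frames)

labels = dpgmm.predict(frames)
print(len(np.unique(labels)))  # number of clusters the DP prior actually kept
```

The fragmentation problem the abstract refers to arises when such a clustering splits frames of one phoneme across several components; the proposed hybrid adds an RNN to smooth the frame-level assignments.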