Colloquium B Presentations

Date/Time: September 17 (Tue), 4th period (15:10-16:40)

Venue: L1

Chair: Delwar Hossain
TAN HAOTIAN, D, interim presentation, Human-AI Interaction: Sakriani Sakti, 渡辺 太郎, 大内 啓樹, Faisal Mehmood
title: Adaptive Self-regulating Simultaneous Interpretation System based on Neural Feedback Loop Mechanism
abstract: Simultaneous speech translation (SST) systems have made significant progress in bridging language barriers between people speaking different languages. However, they are highly dependent on what they have learned during training and are incapable of adapting to unexpected perturbations or regulating themselves according to the social needs of users. In this research, we draw inspiration from human communication and introduce a neural feedback loop mechanism for SST systems, enabling self-adaptation and self-regulation. This mechanism includes two key components: a private feedback loop between model generation and reception, which allows the SST model to adapt to unexpected perturbations, and a public feedback loop between the model and its users, which facilitates real-time regulation based on social context. Specifically, as a first step, we propose the contrastive feedback mechanism (CFM) as the private loop: a novel method that leverages earlier model predictions as feedback to enhance the quality of subsequent translations. Experimental results across eight languages demonstrate that CFM effectively improves SST performance. Our future work will focus on developing the public loop to enable real-time self-regulation based on user needs.
language of the presentation: English
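The abstract above describes the private feedback loop only at a high level: earlier model predictions are fed back to influence subsequent translations. As a purely illustrative sketch (none of the class or function names below come from the paper, and a trivial stand-in replaces the actual SST model), the loop structure might look like this:

```python
from collections import deque

class PrivateFeedbackLoop:
    """Hypothetical sketch of a private feedback loop: retain recent
    outputs and pass them back to the decoder as extra context."""

    def __init__(self, decode_fn, history_size=3):
        self.decode_fn = decode_fn          # (chunk, feedback) -> translation
        self.history = deque(maxlen=history_size)

    def step(self, source_chunk):
        feedback = list(self.history)       # earlier predictions as feedback
        output = self.decode_fn(source_chunk, feedback)
        self.history.append(output)         # output becomes future feedback
        return output

# Toy decoder standing in for a real SST model: it just uppercases the
# chunk and records how many feedback items it received.
def toy_decode(chunk, feedback):
    return f"{chunk.upper()}|fb={len(feedback)}"

loop = PrivateFeedbackLoop(toy_decode)
outputs = [loop.step(c) for c in ["hallo", "welt", "heute"]]
```

The point of the sketch is only the data flow: each incremental decoding step sees the model's own earlier predictions, which is the precondition for a contrastive feedback signal.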
 
辻 航平, M, 2nd presentation, Social Computing (Multilingual Knowledge Computing): 荒牧 英治☆, 渡辺 太郎, 岩倉 友哉, 鄭 育昌, 若宮 翔子, 矢田 竣太郎
title: SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization
abstract: Many natural language processing (NLP) datasets include annotation errors. Researchers have attempted to develop methods that automatically reduce the effect of such errors. However, existing methods are time-consuming because they require training many models to detect errors. We propose a novel method that reduces the time required for error detection. Specifically, we use a tokenization technique called subword regularization to create multiple pseudo-models, which are then used to detect errors. Our proposed method performs annotation weighting four to five times faster than the existing method. Additionally, it improved performance on both document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, our method detected the incorrect labels adequately.
language of the presentation: Japanese
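The abstract sketches the core idea of SubRegWeigh: instead of training many models, sample multiple tokenizations of the same input (subword regularization) so that a single model behaves like several "pseudo-models", and weight each annotation by how often their predictions agree with the gold label. The following is a minimal, hypothetical illustration of that agreement-based weighting; the random splitter and parity "classifier" are toy stand-ins (a real system would use, e.g., sampled unigram-LM segmentations and a trained model), and no function name here comes from the paper:

```python
import random

def sample_tokenization(text, rng):
    """Toy subword regularization: randomly split the string into short
    chunks, standing in for sampling segmentations from a unigram LM."""
    tokens, i = [], 0
    while i < len(text):
        step = rng.randint(1, 3)
        tokens.append(text[i:i + step])
        i += step
    return tokens

def pseudo_model_predict(tokens):
    """Toy 'pseudo-model': the label depends only on token-count parity,
    standing in for one model run over one sampled tokenization."""
    return len(tokens) % 2

def annotation_weight(text, gold_label, k=10, seed=0):
    """Weight = fraction of k pseudo-models (k sampled tokenizations)
    whose prediction agrees with the gold label; a low weight flags a
    likely annotation error."""
    rng = random.Random(seed)
    agree = sum(
        pseudo_model_predict(sample_tokenization(text, rng)) == gold_label
        for _ in range(k)
    )
    return agree / k

weight = annotation_weight("annotation", gold_label=0)
```

Because only tokenization is resampled, the k "pseudo-models" cost one forward pass each rather than one training run each, which is where the claimed four-to-five-fold speedup over model-ensemble error detection comes from.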