Colloquium B Presentations

Date and time: Friday, June 28, 3rd period (13:30-15:00)


Venue: L2

Chair: 嶋利 一真
平野 颯 D, interim presentation, Natural Language Processing (渡辺 太郎, 荒牧 英治, 上垣外 英剛, 大内 啓樹)
title: Investigating Deep Learning Models Regarding Language Features
abstract: With the increase in the number of parameters in deep learning models, it has become possible to train models that solve natural language processing tasks in multiple languages. However, task-solving ability varies from language to language. This issue stems from the large imbalance in the amount of training data available for each language, a situation that is unlikely to be resolved anytime soon. As a first step, this study assesses how well existing model-building methods generalize language information, based on linguistic features defined across languages.
language of the presentation: Japanese
Presentation title: A Study on the Distribution of Linguistic Features in Deep Learning Models
Presentation abstract: The growth in the number of parameters in deep learning models has made it possible to train models that solve language processing tasks in multiple languages. However, task-solving ability differs from language to language. This problem stems from the large per-language imbalance in the amount of training data, and the situation is unlikely to be resolved anytime soon. As a first step, this study evaluates how well existing model-building methods generalize language information, using features that can be defined across languages.
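As one illustration of "linguistic features defined across languages", the sketch below compares typological feature vectors from the URIEL database via the lang2vec package. The language codes, the feature set, and the similarity measure are illustrative assumptions, not the presenter's actual experimental setup.

```python
# Sketch: comparing cross-lingual typological feature vectors with lang2vec.
# Assumption: a model that generalizes language information well should behave
# more similarly on typologically closer language pairs.
import numpy as np
import lang2vec.lang2vec as l2v

langs = ["eng", "jpn", "fin"]                   # ISO 639-3 codes
feats = l2v.get_features(langs, "syntax_knn")   # kNN-imputed syntactic features

def cosine(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("eng-jpn syntax similarity:", cosine(feats["eng"], feats["jpn"]))
print("eng-fin syntax similarity:", cosine(feats["eng"], feats["fin"]))
```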
 
PUSSEWALA KANKANANGE ASHMARI PRAMODYA M, first presentation, Natural Language Processing (渡辺 太郎, 荒牧 英治, 上垣外 英剛)
title: Translating Movie Subtitles with Large Language Models Using Movie Meta-information
abstract: Large Language Models (LLMs) have significantly advanced natural language processing by understanding, generating, and manipulating written text. These models have evolved from simple beginnings into complex systems with remarkable linguistic abilities. The transition from rule-based systems to neural network architectures has enabled them to learn from extensive datasets, producing context-rich and nuanced text. However, training LLMs for domain-specific purposes involves a computationally intensive pre-training phase. Recently, the focus has shifted towards a "pre-train, prompt, predict" approach, which reduces computational effort and the need for specialized datasets through prompt engineering. Prompt engineering has shown potential for improving translation quality in LLMs, yet the use of translation concepts in prompt design remains largely underexplored, and the translation of movie subtitles in particular is an unexplored area. This study addresses that gap by translating movie subtitles in a way that reflects the story and scenes, taking the movie's meta-information into account during translation. We construct a multilingual dataset for evaluating this task by mapping the OpenSubtitles dataset to Wikipedia articles, and we use this data to evaluate suitable prompts for LLMs. The study will explore how different prompts behave in enhancing subtitle translation quality.
language of the presentation: English
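A minimal sketch of how movie meta-information might be injected into a subtitle-translation prompt follows. The template, the metadata fields, and the example movie are illustrative assumptions, not the presenter's actual prompt design or data.

```python
# Sketch: building a subtitle-translation prompt that conditions on movie
# meta-information (title, genre, plot). Fields and wording are hypothetical.
from textwrap import dedent

def build_prompt(subtitle: str, meta: dict,
                 src: str = "English", tgt: str = "Japanese") -> str:
    return dedent(f"""\
        You are translating movie subtitles from {src} to {tgt}.
        Movie: {meta['title']} ({meta['year']})
        Genre: {meta['genre']}
        Plot summary: {meta['summary']}

        Translate the following subtitle line so that it fits the story and scene:
        {subtitle}""")

meta = {
    "title": "Example Movie",
    "year": 1999,
    "genre": "science fiction",
    "summary": "A hacker discovers the world is a simulation.",
}
print(build_prompt("There is no spoon.", meta))
```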
 
LIU JINGXUAN M, first presentation, Natural Language Processing (渡辺 太郎, 荒牧 英治, 上垣外 英剛)
title: Interpretable Quality Score Estimation in Machine Translation
abstract: As machine translation (MT) output quality continues to improve, traditional surface-level metrics are becoming less reliable for evaluation, so automatic evaluation metrics must evolve rapidly to keep pace. Current MT evaluation metrics often rely on black-box large language models which, despite strong correlations with human judgments, are adopted only hesitantly for system assessment. COMET is now a dominant approach for evaluating MT quality, yet uncertainty remains about how its internal processes relate to the validity of its estimates. My research aims to analyze COMET's representations, exploring what they capture and investigating its ability to recognize particular features of translations.
language of the presentation: English
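For context, the sketch below obtains segment-level COMET quality scores with the unbabel-comet package (following its documented usage); probing what the model's internal representations capture, as this research proposes, would go beyond such black-box scoring. The example sentences are placeholders.

```python
# Sketch: scoring MT output with a public COMET checkpoint (unbabel-comet).
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire.",
    },
]
model_output = model.predict(data, batch_size=8, gpus=0)
print(model_output.scores)        # per-segment quality estimates
print(model_output.system_score)  # corpus-level score
```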
 
橋本 航 D, interim presentation, Natural Language Processing (渡辺 太郎, 池田 和司, 上垣外 英剛)
title: Efficient Nearest Neighbor based Uncertainty Estimation for Natural Language Processing Tasks
abstract: Trustworthy prediction in Deep Neural Networks (DNNs), including Pre-trained Language Models (PLMs), is important for safety-critical applications in the real world. However, DNNs often suffer from poor uncertainty estimation, such as miscalibration. Approaches that require multiple stochastic inferences can mitigate this problem, but their expensive inference cost makes them impractical. In this study, we propose $k$-Nearest Neighbor Uncertainty Estimation ($k$NN-UE), an uncertainty estimation method that uses the distances from neighbors and the label-existence ratio of neighbors. Experiments on sentiment analysis, natural language inference, and named entity recognition show that our proposed method outperforms baselines and recent density-based methods in confidence calibration, selective prediction, and out-of-distribution detection. Moreover, our analyses indicate that introducing dimension reduction or approximate nearest neighbor search, inspired by recent $k$NN-LM studies, reduces the inference overhead without significantly degrading estimation performance when they are combined appropriately.
language of the presentation: Japanese
Presentation title: Uncertainty Estimation Using $k$-Nearest Neighbors for Natural Language Processing Tasks
Presentation abstract: Reliable prediction in deep learning models is important for safety-critical applications. However, deep learning models suffer from miscalibration, a large gap between actual accuracy and predicted confidence. Approaches that require multiple stochastic inferences can mitigate this problem, but their high inference cost makes them impractical. In this study, we propose $k$-Nearest Neighbor Uncertainty Estimation ($k$NN-UE), a method that performs uncertainty estimation efficiently using the distances to neighboring examples and their label information. Experiments on sentiment analysis, natural language inference, and named entity recognition show that the proposed method outperforms baselines and recent density-based methods in confidence calibration, selective prediction, and out-of-distribution detection. Furthermore, we show that appropriately combining dimensionality reduction and approximate nearest neighbor search improves inference speed without significantly degrading uncertainty estimation performance.
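A schematic sketch of the idea described in the abstract follows: adjust a model's softmax confidence using the distances to the $k$ nearest cached training examples and the fraction of those neighbors sharing the predicted label. The synthetic data and the exact weighting function are illustrative assumptions, not the paper's actual formula.

```python
# Schematic sketch of a kNN-based uncertainty adjustment: confidence is
# down-weighted when neighbors are far away or disagree with the prediction.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(1000, 32))       # cached encoder outputs (datastore)
train_labels = rng.integers(0, 3, size=1000)  # their gold labels

knn = NearestNeighbors(n_neighbors=16).fit(train_emb)

def knn_ue(test_emb, softmax_conf, pred, temperature=1.0):
    dists, idx = knn.kneighbors(test_emb)                  # (n, k) each
    label_ratio = (train_labels[idx] == pred[:, None]).mean(axis=1)
    dist_term = np.exp(-dists.mean(axis=1) / temperature)  # closer -> more trust
    return softmax_conf * label_ratio * dist_term          # adjusted confidence

test_emb = rng.normal(size=(5, 32))
conf = rng.uniform(0.5, 1.0, size=5)   # stand-in softmax confidences
pred = rng.integers(0, 3, size=5)      # stand-in predicted labels
print(knn_ue(test_emb, conf, pred))
```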