| 長谷川 遼 | M, 2nd presentation | Natural Language Processing | 渡辺 太郎 | Sakriani Sakti | 上垣外 英剛 | 坂井 優介 |
title: Knowledge Editing Induces Underconfidence in Language Models
abstract: As language models continue to scale, demand has grown for knowledge editing, a method for updating a model's knowledge without retraining. However, since knowledge editing directly alters the token prediction probabilities acquired during pretraining, those probabilities may diverge from the empirical distribution. In this study, we analyze the impact of knowledge editing by comparing the alignment between token prediction probabilities and task accuracy, measuring confidence calibration before and after editing. Our results reveal that, for tasks requiring semantic understanding, the increase in token prediction probabilities tends to be smaller than the improvement in accuracy, suggesting that knowledge editing methods lead to underconfident predictions.
language of the presentation: English
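
A common way to quantify the calibration the abstract refers to is Expected Calibration Error (ECE), which measures the gap between a model's confidence and its empirical accuracy. The sketch below is a minimal NumPy illustration of this idea, not the presenter's actual method or code; `confidences` and `correct` are hypothetical arrays holding the probability the model assigned to each answer and whether that answer was correct.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between mean confidence and
    accuracy over equal-width probability bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Assign each prediction to a bin; clip so a confidence of
    # exactly 1.0 falls into the last bin.
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            # Gap between accuracy and mean confidence in this bin,
            # weighted by the fraction of samples the bin contains.
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Hypothetical usage: computing ECE before and after knowledge editing
# and comparing the two values would reveal an underconfidence shift.
conf_before = np.array([0.9, 0.6, 0.8, 0.4])
hits_before = np.array([1, 1, 0, 0])
print(expected_calibration_error(conf_before, hits_before))
```

Under this framing, underconfidence after editing would show up as bins where accuracy exceeds mean confidence, consistent with the abstract's observation that probabilities rise less than accuracy does.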