Colloquium B Presentations

Date & Time: Friday, September 15, 3rd period (13:30-15:00)

Venue: L1

Chair: 大内 啓樹
佐藤 太一 M, 2nd presentation, Information Security Engineering; 林 優一, 藤川 和利, 安本 慶一, 藤本 大介, KIM Youngwoo (Visiting Assistant Professor)
title: Study on EM probe detection utilizing ring oscillators under voltage fluctuations
abstract: Cryptographic technology is essential to ensure the information security of various communication devices. Cryptographic algorithms are increasingly implemented in dedicated cryptographic modules to accelerate cryptographic processing. On the other hand, many physical attacks on cryptographic modules have been reported. Among these threats, electromagnetic analysis (EMA), which analyzes the electromagnetic (EM) radiation generated by cryptographic processing in the module to reveal secret information, has been reported as a particularly realistic threat. In response to such EMA attacks, EM probe detection methods based on EM radiation have been proposed. A conventional method utilized the capacitive coupling between the IC's external wiring and the probe; however, because ring oscillators are voltage dependent, it could not detect EM probes under voltage fluctuations. In contrast, this study proposes a method that adds a closed ring oscillator inside the IC and detects voltage fluctuations by analyzing the frequency difference between the ring oscillators. Furthermore, we demonstrate the feasibility of attack detection even in environments where the power supply voltage fluctuates due to external noise.
language of the presentation: Japanese
 
米山 樹里 M, 2nd presentation, Interactive Media Design; 加藤 博一, 中村 哲, 神原 誠之, 藤本 雄一郎, 澤邊 太志
title: AR Visual Effects for Mitigating Anxiety of In-person Conversation for Individuals with Social Anxiety Disorder
abstract: Individuals with social anxiety experience heightened anxiety during in-person conversations because of their sensitivity to perceived negative stimuli; for them, even a neutral facial expression can be interpreted as negative. Unlike video conferencing, traditional in-person interactions do not offer the option to obscure the face of the conversational partner. Augmented reality (AR), however, introduces potential solutions. In this study, we explored the impact of manipulating a conversational partner's facial expression to enhance conversational ease and reduce anxiety in in-person dialogues. The AR application we designed operates on an AR HMD that either overlays an anime-style avatar onto the conversational partner or modifies their facial expression into a smile. We conducted a user study with 29 participants. Our analysis yielded two main insights: (1) using an anime-style avatar overlay can potentially enhance conversational ease, and (2) we identified specific traits indicating which individuals might benefit most from our system.
language of the presentation: English
 
田口 和駿 M, 2nd presentation, Interactive Media Design; 加藤 博一, 安本 慶一, 神原 誠之, 藤本 雄一郎, 澤邊 太志
title: Effective generation of interesting dialogue for daily text chat communication with a virtual robot
abstract: This study addresses the high manual labor cost of dialogue, an issue in previous research on maintaining users' desire to continue conversations with dialogue agents. We achieve this by using a large language model (LLM) to generate responses and by utilizing past dialogues to improve the quality of the generated responses, and we validate the method through empirical experiments. In our experiments, response generation with the LLM reduced working time by approximately 30%. As a future direction, we will describe how past dialogues are managed and utilized.
language of the presentation: Japanese
title: Improving the quality and efficiency of dialogue for daily text chat with a virtual robot
abstract: This study addresses the high manual labor cost of dialogue, an issue in previous research, by generating responses with an LLM and improving generation quality through the use of past dialogues, toward maintaining users' desire to continue conversations with dialogue agents. The method is validated through empirical experiments: response generation with the LLM reduced working time by approximately 30%. As a future direction, we will describe how past dialogues are managed and utilized.
 
有隅 惟人 M, 2nd presentation, Cybernetics and Reality Engineering; 清川 清, 加藤 博一, 内山 英昭, Perusquia Hernandez Monica, 平尾 悠太朗

title: AR display of environmental sounds for onomatopoeia education for the hearing impaired
abstract: In Japan, onomatopoeia such as imitative and mimetic words are frequently used in conversation to make speech more concrete. Hearing-impaired people, however, converse in sign language and rarely use onomatopoeia. Schools for the deaf therefore teach onomatopoeia through reading and classroom lessons, but acquisition remains difficult because learners struggle to connect the words to their own experiences or to the atmosphere of a situation. This study proposes an AR device that superimposes onomatopoeia in real time over sounds the user produces and surrounding sound effects. This allows users to encounter a variety of onomatopoeia in daily life and to learn them easily based on their own experiences.