Colloquium B Presentations

Date and time: September 24 (Thu), 4th period (15:10-16:40)

Venue: L2

Chair: 新谷 道広
ZHANG ZHIHUA D, interim presentation, Ubiquitous Computing Systems: 安本 慶一, 中村 哲, 荒川 豊, 藤本 まなと, 松田 裕貴
title: Exploring the Impact of Elaborateness and Indirectness in a Behavior Change Support System
abstract: Numerous technologies exist for promoting a healthier lifestyle. However, while the majority of existing apps use a quantitative data representation, it has been shown that this approach may harm users' motivation and fail to promote behavior change, because the meaning behind the data is hard to understand. It is therefore necessary to supplement the quantitative data with an interpretation. However, different descriptions of the same data may lead to different outcomes. In this paper, we explore the impact of different communication styles for the interpretation of quantitative data on behavior change by developing and evaluating Walkeeper, a web-based application that interprets users' daily step counts at different levels of elaborateness and indirectness to encourage users to walk. Through the quantitative analysis and results of a four-week study, we contribute new knowledge on designing interpretations to accompany quantitative data.
language of the presentation: English
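As an illustration of how such communication styles might be realized, the Python sketch below selects an interpretation message from a step count according to elaborateness and indirectness levels. The function name, template wording, and goal-ratio logic are our own assumptions for illustration, not Walkeeper's actual implementation.

    def interpret_steps(steps: int, goal: int, elaborate: bool, indirect: bool) -> str:
        """Compose an interpretation of today's step count.

        `elaborate` appends an explanation of the underlying numbers;
        `indirect` phrases the advice as a hint rather than an instruction.
        All wording here is hypothetical.
        """
        ratio = steps / goal
        if ratio >= 1.0:
            core = ("It seems today went rather well." if indirect
                    else "Great job, you reached your goal!")
        else:
            core = ("An evening stroll can be a pleasant way to end the day." if indirect
                    else "Try to walk a bit more tomorrow to reach your goal.")
        if elaborate:
            core += f" You walked {steps} steps, {ratio:.0%} of your {goal}-step goal."
        return core

    # Example: a direct, elaborate interpretation.
    print(interpret_steps(6200, 8000, elaborate=True, indirect=False))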
 
矢野 史剛 M, 2nd presentation, Optical Media Interface: 向川 康博, 中村 哲, 舩冨 卓哉, 久保 尋之, 田中 賢一郎
title: A study on automatic inbetweening using a frame-interpolation method assisted by a 3D-model-trained neural network
abstract: In animation production, inbetweening is a time-consuming task. Previous research has achieved high-quality frame interpolation; however, it cannot be applied to frames with large, dynamic movement. Since most animations contain such frames, it is difficult to apply existing interpolation techniques to automatic inbetweening. To handle frames with dynamic movement, we propose an automatic inbetweening method in which frame interpolation is assisted by a neural network trained on a 3D model. The network estimates the target's pose parameters (joint positions, angles, etc.) in an intermediate frame from the key frames. By rendering the target's 3D model with the estimated parameters, we can reduce the apparent motion between key frames and thereby improve the quality of the intermediate frame generated by automatic interpolation. To examine the feasibility of our method, we conducted an experiment testing its fundamental idea under simplified conditions, as well as an experiment applying the method to inbetweening. In this presentation, we report the experimental results and discuss the feasibility of our method.
language of the presentation: Japanese
Presentation title: A study on automatic inbetweening using a neural network trained on a 3D model and frame-interpolation techniques
Presentation abstract: In animation production, inbetweening carries a high time cost. Prior work has produced techniques for high-quality frame interpolation, but they cannot be applied to frames with large movements. Moreover, animations usually contain such frames, which makes it difficult to apply automatic frame interpolation to inbetweening for animations with large movements. We therefore propose an automatic inbetweening method based on frame interpolation assisted by a neural network trained on a 3D model. In our method, the network estimates, from the key frames, the pose parameters of the 3D model (joint angles, positions, etc.) in the inbetween frame. Using an image rendered from these parameters reduces the motion between key frames and thereby improves the quality of the inbetween frames produced by frame interpolation. To examine the feasibility of the method, we verified its basic idea under simple conditions, and we also conducted an experiment generating actual inbetween frames with the method.
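To make the pipeline concrete, here is a minimal Python sketch of the flow as we read it from the abstract. The names estimate_pose, render_model, and interpolate are hypothetical placeholders for the pose network, the 3D renderer, and an off-the-shelf interpolator, and the final combination step is one plausible choice, since the abstract does not specify it.

    import numpy as np

    def estimate_pose(key_a: np.ndarray, key_b: np.ndarray) -> np.ndarray:
        # Placeholder for the 3D-model-trained network that predicts the
        # target's pose parameters (joint positions, angles, ...) midway
        # between the two key frames.
        return np.zeros(24)

    def render_model(pose: np.ndarray) -> np.ndarray:
        # Placeholder for rendering the character's 3D model in that pose.
        return np.zeros((256, 256, 3))

    def interpolate(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
        # Placeholder for a frame-interpolation method; a plain average
        # stands in for it here.
        return (frame_a + frame_b) / 2.0

    def inbetween(key_a: np.ndarray, key_b: np.ndarray) -> np.ndarray:
        """Generate one inbetween frame between two key frames.

        The rendered midpoint halves the apparent motion, so each call
        to the interpolator only has to bridge a small movement.
        """
        mid = render_model(estimate_pose(key_a, key_b))
        half_a = interpolate(key_a, mid)    # small-motion interpolation
        half_b = interpolate(mid, key_b)
        return interpolate(half_a, half_b)  # one plausible way to combine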
 
東 健太 M, 2nd presentation, Cybernetics and Reality Engineering: 清川 清, 中村 哲, 酒田 信親, 磯山 直也
title: A study on enhancing conversational satisfaction by manipulating facial video
abstract: We obtain a great deal of information through vision in interpersonal communication. In particular, it has been shown that conversational satisfaction is strongly affected by factors such as the nodding and gaze of the conversation partner. In this study, we manipulate visual information during conversation in a virtual environment and investigate whether and how this affects conversational satisfaction.
language of the presentation: Japanese
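As one concrete example of such a manipulation (our own hypothetical illustration; the abstract does not describe a specific algorithm), a partner avatar's nodding could be exaggerated or suppressed by rescaling its head-pitch trajectory around the resting posture:

    import numpy as np

    def scale_nodding(pitch_deg: np.ndarray, gain: float) -> np.ndarray:
        """Rescale a head-pitch trajectory (degrees per frame) around its
        mean posture; gain > 1 exaggerates nods, gain < 1 suppresses them."""
        baseline = pitch_deg.mean()
        return baseline + gain * (pitch_deg - baseline)

    # Example: exaggerate nodding by 50% in a toy recorded trajectory.
    track = np.sin(np.linspace(0, 4 * np.pi, 120)) * 5
    print(scale_nodding(track, gain=1.5).max())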
 
宮本 佳奈 M, 2nd presentation, Augmented Human Communication: 中村 哲, 小笠原 司, 作村 諭一, 田中 宏季
title: Music generation and emotion estimation from EEG signals for inducing affective states
abstract: Music is known to be effective for emotion induction. Although emotion induction using music has been studied, the emotions felt when listening to music vary among individuals. We therefore propose a feedback system that generates music from a continuous emotion value estimated from electroencephalogram (EEG) signals, providing personalized emotion induction. To construct the system, we created a music generator for inducing emotions. We also estimated participants' emotions from their EEG while they listened to music produced by the generator. We compared three regression models for emotion estimation: linear regression and convolutional neural networks with and without transfer learning. The convolutional neural network with transfer learning yielded the lowest RMSE between actual and estimated emotion values.
language of the presentation: Japanese
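The Python sketch below illustrates the closed loop we understand from the abstract, with linear regression standing in for the compared models and synthetic data in place of real EEG. The feature shapes and the mapping from emotion to music parameters are assumptions for illustration only.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-in data: 200 EEG feature vectors with continuous
    # emotion labels (a real system would extract features from raw EEG).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=200)

    # Train one of the compared regressors (linear regression here; the
    # abstract's CNNs would replace this) and report RMSE on held-out data.
    model = LinearRegression().fit(X[:150], y[:150])
    pred = model.predict(X[150:])
    rmse = float(np.sqrt(np.mean((pred - y[150:]) ** 2)))
    print(f"held-out RMSE: {rmse:.3f}")

    def music_parameters(estimated: float, target: float) -> dict:
        """Hypothetical feedback step: steer the music generator toward
        the target emotion based on the currently estimated value."""
        gap = target - estimated
        return {"tempo_bpm": 90 + 40 * gap,
                "mode": "major" if gap > 0 else "minor"}

    # Example: generate settings that push the listener toward valence 0.8.
    print(music_parameters(estimated=float(model.predict(X[:1])[0]), target=0.8))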