Colloquium B Presentations

Date & Time: December 14 (Wed), 3rd period (13:30-15:00)


Venue: L2

Chair: 松田 裕貴
米山 樹里 M, 1st presentation, Interactive Media Design Lab: 加藤 博一, 中村 哲, 神原 誠之, 藤本 雄一郎, 澤邊 太志
title: AR support system for reducing the fear of face-to-face conversation
abstract: Social Anxiety Disorder (SAD) is the third most prevalent psychiatric disorder in the world. Patients with SAD feel anxious when conversing face to face. Previous research found that people with SAD revealed more intimate information about themselves when interacting with a virtual human via a video conference system. However, that approach cannot be applied to face-to-face conversation. This research therefore investigates how an AR head-mounted display that morphs the conversation partner's appearance or overlays virtual objects on them can reduce the fear of face-to-face communication.
language of the presentation: English
Presentation title: Development and Evaluation of an AR Support System for Reducing Anxiety in Face-to-Face Conversation
Presentation abstract: Social anxiety disorder is the third most common psychiatric disorder in the world and involves anxiety during face-to-face conversation. Many people feel anxious about face-to-face conversation even without a diagnosis of social anxiety disorder. Previous research reduced conversational anxiety in PC-based video conferencing by overlaying objects such as virtual avatars on the conversation partner, but that approach cannot be used during actual face-to-face conversation. This research develops and evaluates an AR support system that reduces conversational anxiety by using an AR head-mounted display to overlay objects on the partner in real time during actual face-to-face conversation.
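At its core, the real-time overlay this system requires reduces to compositing an avatar patch onto the camera frame at the detected face region. A minimal NumPy sketch of that compositing step (face detection is assumed to be handled elsewhere; `overlay_avatar` and its arguments are illustrative names, not part of the described system):

```python
import numpy as np

def overlay_avatar(frame, avatar_rgba, box):
    """Alpha-blend an RGBA avatar patch onto a camera frame.

    frame: (H, W, 3) uint8 frame from the HMD camera.
    avatar_rgba: (h, w, 4) uint8 patch; the alpha channel masks the avatar.
    box: (top, left) corner of the detected face region (assumed in bounds).
    """
    out = frame.astype(np.float32)
    h, w = avatar_rgba.shape[:2]
    top, left = box
    # Per-pixel opacity in [0, 1]
    alpha = avatar_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[top:top + h, left:left + w]
    blended = alpha * avatar_rgba[:, :, :3].astype(np.float32) + (1 - alpha) * region
    out[top:top + h, left:left + w] = blended
    return out.astype(np.uint8)
```

In a real AR pipeline this blending would run per frame on the HMD, with the box supplied by a face tracker.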
 
PIERRE JUDE CRENER JUNIOR M, 1st presentation, Social Computing Lab: 荒牧 英治, 中村 哲, 若宮 翔子, 矢田 竣太郎, LIEW KONG MENG, SHE WAN JOU

title: Intentional and Unintentional ASMR: A Comparative Analysis Using YouTube Comments

abstract: One of the most popular trends in entertainment and relaxation videos on YouTube is ASMR, although many people do not know what ASMR is. ASMR is a physiological sensation triggered by sensory stimuli, usually audio-visual in the case of YouTube. The term was coined in 2010; some research has been carried out since then, but not much to date, and the phenomenon is not yet fully understood by researchers. We are interested in conducting an in-depth analysis of the reactions of YouTube viewers, taking into account both types of ASMR: the intentional type, caused mainly by specific stimuli produced by ASMR content creators, and the unintentional type, which is purely accidental and comes from ordinary videos that nevertheless give ASMR consumers the same sensations. This study aims to explore how YouTube viewers react to these two categories of ASMR videos, analyze the different stimuli using Natural Language Processing techniques on text data, and analyze the similarities and differences of selected audio extracts from the two categories. These analyses will allow us to learn more about the phenomenon and how viewers perceive it.
Language of the presentation: English
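One simple starting point for the comment comparison described above is contrasting TF-IDF profiles of the two comment corpora. A minimal stdlib-only sketch (the function names and whitespace tokenization are illustrative assumptions, not the authors' method):

```python
import math
from collections import Counter

def tfidf_vectors(corpora):
    """One TF-IDF vector per corpus, where each corpus is a list of comments."""
    docs = [Counter(w for c in corpus for w in c.lower().split())
            for corpus in corpora]
    vocab = set().union(*docs)
    n = len(docs)
    # Smoothed IDF so terms present in every corpus still get weight
    idf = {w: math.log(n / sum(1 for d in docs if w in d)) + 1.0 for w in vocab}
    return [{w: d[w] * idf[w] for w in d} for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Applied to the intentional and unintentional ASMR comment sets, a high cosine value would suggest viewers describe both experiences in similar vocabulary.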

 
LIN CHING-YUAN M, 1st presentation, Ubiquitous Computing Systems Lab: 安本 慶一, 中村 哲, 諏訪 博彦, 松田 裕貴
title: Estimating stress levels at home from audio-visual data during communication with a smart speaker
abstract: Stress has a huge impact on humans. Psychologically, too much stress can lead to depression and low productivity, or even suicidal tendencies; physically, it seriously affects appetite and sleep quality, indirectly leading to other diseases. However, in most cases we do not easily notice the accumulation of stress, and our health may already be in a bad state by the time we realize it, so we think it is necessary to determine people's stress levels every day. In this research, we would like to collect data on human interactions at home while the user is talking to a smart speaker, including facial expressions, voice, and heart rate, and then use multimodal training to estimate the user's stress level. This may allow people to obtain information about their stress at any time at home.
language of the presentation: English
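The multimodal combination of facial, voice, and heart-rate signals can be illustrated with a simple late-fusion sketch: normalize each modality's feature vector, concatenate, and score. Here a fixed logistic score stands in for a trained classifier; all names and weights are illustrative assumptions, not the proposed system:

```python
import numpy as np

def fuse_features(face_feat, voice_feat, hr_feat, weights=(1.0, 1.0, 1.0)):
    """Late fusion: z-normalize each modality, scale it, then concatenate."""
    parts = []
    for feat, w in zip((face_feat, voice_feat, hr_feat), weights):
        f = np.asarray(feat, dtype=np.float64)
        std = f.std()
        # Guard against constant features (e.g. a single heart-rate value)
        f = (f - f.mean()) / std if std > 0 else f - f.mean()
        parts.append(w * f)
    return np.concatenate(parts)

def stress_score(fused, coef, bias=0.0):
    """Logistic score in [0, 1]; a trained model would supply coef and bias."""
    return 1.0 / (1.0 + np.exp(-(fused @ coef + bias)))
```

A real system would replace the logistic stand-in with the multimodal model trained on the collected smart-speaker interaction data.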
 
池山 哲矢 M, 1st presentation, Optical Media Interface Lab: 向川 康博, 中村 哲, 舩冨 卓哉, 藤村 友貴, 北野 和哉
title: Estimation of flood fill areas for automatic colorization
abstract: Current automatic colorization of animation line drawings achieves highly accurate label estimation under the assumption that there are no holes in the line drawing. In reality, however, line drawings may contain holes. Related research has addressed filling holes in line drawings, but there is no guarantee that all holes are reliably closed. Therefore, this study estimates flood-fill regions for automatic colorization in a way that is not affected by holes in the line drawing. This presentation covers a segmentation method that is independent of holes in line drawings, hole filling performed during vectorization of raster line drawings, and research on hole filling in vector line drawings.
language of the presentation: Japanese
Presentation title: Estimating Flood-Fill Regions for Automatic Colorization
Presentation abstract: Current automatic colorization of animation line drawings achieves highly accurate label estimation on the premise that the line drawing has no holes. In practice, however, line drawings can have holes. Related work has studied filling holes in line drawings, but there is no guarantee that every hole is reliably closed. This research therefore estimates flood-fill regions for automatic colorization that are unaffected by holes in the line drawing. This presentation introduces a segmentation method that is robust to holes in line drawings, hole filling during vectorization of raster line drawings, and research on hole filling in vector line drawings.
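The failure mode motivating this work, a single hole letting a flood fill leak between regions, is easy to reproduce with a plain BFS flood fill. A minimal sketch (this is the baseline behavior, not the proposed hole-robust method):

```python
from collections import deque

def flood_fill_regions(line_art):
    """Label enclosed regions of a binary line drawing via BFS flood fill.

    line_art: 2D list where 1 = line pixel, 0 = fillable area.
    Returns a grid of labels (0 for line pixels, 1.. for regions).
    A single hole (gap) in a line merges two regions into one label,
    which is exactly the failure mode this study aims to avoid.
    """
    h, w = len(line_art), len(line_art[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if line_art[sy][sx] == 0 and labels[sy][sx] == 0:
                next_label += 1
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    # 4-connected neighbors keep the fill from crossing lines
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and line_art[ny][nx] == 0 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels
```

With an intact dividing line the fill finds two regions; punching one hole in that line collapses them into one, demonstrating why hole-robust region estimation is needed.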
 
岸下 昂生 M, 1st presentation, Mathematical Informatics Lab: 池田 和司, 笠原 正治, 久保 孝富, 福嶋 誠, 日永田 智絵
title: Theoretical analysis of multi-level attention pooling
abstract: Graph-structured data are found in many fields, such as molecular structural formulas, biochemical reaction pathways, and brain connectivity networks. Because of this ubiquity, machine learning methods on graphs have been developed in a variety of ways. Recently, graph neural networks (GNNs) have rapidly emerged as a new framework for graph representation learning (GRL). One of the core concepts of GNNs is the message passing procedure, which propagates the information in a node to its neighboring nodes. However, message passing has some problems, especially in graph-level prediction, because local information is lost after many message passing steps. In general, real-world graphs have fractal characteristics, so GNNs need to capture both local and global information. In previous studies, models collected node representations with a single graph pooling layer, so they could not utilize the nodes' local information when computing the graph representation. To tackle this problem, the multi-level attention pooling (MLAP) architecture was proposed. MLAP introduces an attention pooling layer for each message passing step to compute layer-wise graph representations, and then aggregates them to compute the final graph representation. Owing to this, MLAP can utilize both local and global information. Experiments showed that the MLAP architecture outperformed previous GNN models such as the JumpingKnowledge Network. However, the role of MLAP has not been clarified either theoretically or experimentally. The goal of this research is to clarify how MLAP works and why it improves performance, analytically or through numerical experiments.
language of the presentation: Japanese
Presentation title: Theoretical Analysis of Multi-Level Attention Pooling
Presentation abstract: Graph structures appear in many settings, for example molecular structural formulas, biochemical reaction pathways, and brain neural networks. Because of this ubiquity, many machine learning methods on graphs have been developed. Recently, graph neural networks (GNNs) have emerged as a new framework for graph representation learning. One of their central concepts is the exchange of information between neighboring nodes via message passing. However, conventional GNNs pass node representations through an aggregation function after several rounds of message passing, with the drawback that local information is lost. Multi-level attention pooling (MLAP) was developed to solve this problem. Each MLAP layer has its own attention pooling, and the final layer aggregates them, so MLAP is expected to utilize both local and global information. Experiments have shown that MLAP achieves higher accuracy than conventional methods such as the JumpingKnowledge Network (JKNet), but the role played by the MLAP structure remains unclear. The goal of this research is to show, analytically or through numerical experiments, how MLAP works and why its performance improves.
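The MLAP readout described in the abstract, one attention pooling per message passing layer followed by aggregation across layers, can be sketched in a few lines of NumPy. In MLAP the attention vector is learned; here it is supplied as a fixed argument, and all names are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, a):
    """Attention readout: weight nodes by softmax(H @ a), then combine.

    H: (num_nodes, dim) node embeddings from one message passing layer.
    a: (dim,) attention vector (learned in MLAP; fixed here).
    """
    w = softmax(H @ a)   # one scalar weight per node
    return w @ H         # (dim,) graph representation for this layer

def mlap_readout(layer_embeddings, a):
    """MLAP: pool each layer's embeddings, then aggregate by summation.

    Each layer keeps its own graph representation, so early (local)
    and late (global) information both reach the final representation.
    """
    per_layer = [attention_pool(H, a) for H in layer_embeddings]
    return np.sum(per_layer, axis=0)
```

Note how a standard single-pooling readout would use only the last element of `layer_embeddings`, which is exactly the information loss MLAP is designed to avoid.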
 
KAUSMALLY MOHAMMAD SHAHOOR HUSAIN M, 1st presentation, Mathematical Informatics Lab: 池田 和司, 松本 健一, 久保 孝富, 福嶋 誠, 日永田 智絵

title: Neural Activation Patterns of Functional Categories of Code and Their Link with Programming Expertise

abstract: In our current society, there is great demand for programmers, which makes understanding the neural mechanisms of programming crucial for training new programmers efficiently. In this study, we investigate the activation patterns of functional categories of code (e.g., Math) using an open dataset of brain activity from programmers of different expertise levels, where expertise is determined by their AtCoder rating. First, we attempt to find activated clusters that correlate with each functional category of code and investigate their relationship with the programmers' expertise. Finally, I will present our plan for future analyses.

language of the presentation: English
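A basic building block for relating cluster activation to expertise is a correlation between each cluster's activation values and the programmers' AtCoder ratings. A minimal stdlib sketch (framing the analysis as Pearson correlation is an assumption for illustration, not stated in the abstract):

```python
import math

def pearson(xs, ys):
    """Pearson correlation, e.g. between cluster activation and AtCoder ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    # Return 0 for constant inputs rather than dividing by zero
    return cov / (sx * sy) if sx and sy else 0.0
```

In practice such per-cluster correlations would be computed across participants and corrected for multiple comparisons.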