上田 裕己 | M, 2回目発表 | ソフトウェア工学 | 松本 健一, 笠原 正治, 石尾 隆, 畑 秀明, Raula G. Kula |
title: Automatic Code Review Based on Code Change History - A Case Study of IF-Conditional Statements -
abstract: Code review is key to ensuring the absence of software defects. To reduce review costs, the goal of this study is to suggest how developers should fix issues in IF conditional statements, which are among the most frequently changed elements in software development. We conduct an empirical study to uncover hidden IF-statement implementation rules from past reviewers' feedback in code review. We analyze the changes to IF conditional statements that are revised through code review. We also present an automatic review bot that suggests code examples for fixing an issue based on these hidden IF-conditional-statement implementation rules. language of the presentation: Japanese | |||
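The abstract does not describe the study's extraction tooling; purely as an illustration of how IF-condition changes might be mined from two revisions of a file, here is a minimal sketch using Python's standard `ast` module. The `if_conditions` helper and the sample snippets are hypothetical, not the authors' method.

```python
import ast

def if_conditions(source):
    """Collect the source text of every IF condition in a code snippet."""
    tree = ast.parse(source)
    return [ast.unparse(node.test)
            for node in ast.walk(tree) if isinstance(node, ast.If)]

# Hypothetical before/after revisions of the same function body.
before = "if x != None:\n    run(x)\n"
after = "if x is not None:\n    run(x)\n"

# Pairing conditions across revisions exposes the reviewer-driven change.
print(if_conditions(before), "->", if_conditions(after))
```

Comparing the extracted conditions across review rounds is one simple way to accumulate "before -> after" pairs from which recurring fix patterns could be learned.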
桂川 大輝 | M, 2回目発表 | ソフトウェア工学 | 松本 健一, 笠原 正治, 石尾 隆, Raula G. Kula |
title: Automatic Third-Party Library Recommendations Using Categories
abstract: Selecting and maintaining third-party libraries is key to sustaining a healthy project and mitigating the risk that its libraries become outdated and obsolete. In this presentation, we propose an approach that uses pre-defined categories (i.e., groupings of libraries that share similar functionality) in library recommendation to aid library selection. Our empirical study covers 8,142 systems and 150 categories of Java libraries. We show that recommending categories is practical and that its suggestion of libraries within a category is comparable to existing techniques. language of the presentation: Japanese | |||
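As a hedged illustration of category-based recommendation (the dependency data, category map, and rank-by-usage heuristic below are invented for the example; the presentation's actual technique is not specified in the abstract):

```python
from collections import Counter

# Hypothetical corpus: each system's declared libraries.
systems = [
    {"junit", "gson", "log4j"},
    {"junit", "jackson-databind"},
    {"testng", "gson"},
]
# Hypothetical pre-defined categories grouping similar libraries.
category = {"junit": "testing", "testng": "testing",
            "gson": "json", "jackson-databind": "json", "log4j": "logging"}

def recommend(cat, k=2):
    """Rank libraries within one category by how many systems use them."""
    counts = Counter(lib for libs in systems for lib in libs
                     if category.get(lib) == cat)
    return [lib for lib, _ in counts.most_common(k)]

print(recommend("json"))  # most-used JSON libraries first
```

The point of the sketch is only the two-step shape: first narrow to a category of functionally similar libraries, then rank candidates within it.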
鈴木 文丈 | M, 2回目発表 | 数理情報学 | 池田 和司☆, 松本 健一, 川人 光男(客員), 森本 淳(客員) |
title: The effect of acute stress on human emotion control
abstract: Stress exerts a strong influence on behavior and brain function: it promotes habitual behavior and fear detection, and suppresses the ability to pay attention and make complex decisions. The purpose of this research is to clarify how stress affects the behavior and brain functions underlying "preference", which is considered to be a basis of human decision making. Using multiple evaluation indices, the six basic emotions in addition to subjective ratings of preference for image stimuli, we compare evaluations between the conditions with and without stress and examine the effect of stress on each index. Furthermore, we measure brain activity while participants view the image stimuli and compare functional coupling between the stress and no-stress conditions. language of the presentation: Japanese | |||
中井 文哉 | M, 2回目発表 | 数理情報学 | 池田 和司☆, 松本 健一, 川人 光男(客員), 森本 淳(客員) |
title: Methodological improvement of Cross Frequency Coupling analysis in MEG signal
abstract: Because they reflect key aspects of neuronal activity and interaction, brain electrical rhythms are widely investigated. Recently, the role and importance of signal phase have drawn increasing attention, especially in terms of cross-frequency coupling, the interaction between low-frequency and high-frequency activity. These neuronal signal analyses are commonly carried out with invasive recording methods such as ECoG or iEEG. Although these methods record electrical neuronal activity precisely, their recording conditions and subjects are limited. Here we propose a novel phase-analysis method for short-window MEG signals; MEG is a non-invasive recording method but has been difficult to analyze in previous studies because of its noisiness. In simulation, the proposed method showed higher precision than the previous method. We also discuss the importance of signal phase analysis with MEG recordings. language of the presentation: Japanese | |||
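The abstract does not specify the analysis pipeline; as one common baseline for cross-frequency coupling, the following sketch estimates theta-gamma phase-amplitude coupling with the Hilbert transform (a mean-vector-length measure). The sampling rate, band choices, and synthetic signals are illustrative assumptions, not the presenter's method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 60)):
    """Mean-vector-length estimate of phase-amplitude coupling."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic check: gamma amplitude modulated by theta phase vs. not.
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = (1 + theta) * np.sin(2 * np.pi * 40 * t) + theta
uncoupled = np.sin(2 * np.pi * 40 * t) + theta
print(round(modulation_index(coupled, fs), 2),
      round(modulation_index(uncoupled, fs), 2))
```

A key practical difficulty the abstract alludes to is that such phase estimates degrade on short, noisy windows, which is exactly the MEG regime the proposed method targets.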
矢倉 晴子 | D, 中間発表 | 知能コミュニケーション | 中村 哲, 松本 裕治, 田中 宏季 |
title: Cognitive state recognition model using EEG
abstract: Emotional state can be estimated using a variety of techniques, such as facial expressions, linguistics, gestures, and physiological signals. One advantage of using physiological signals is that, even if a person does not express his/her emotions through voice, facial expressions, or gestures, changes in physiological state are involuntary and thus detectable. In this study, we propose recognition from EEG signals recorded while listening to the prosody of voices, using machine learning techniques. In particular, we first constrained emotional linguistic information and focused on pure prosodic cues. language of the presentation: Japanese | |||
NURUL FITHRIA LUBIS | D, 中間発表 | 知能コミュニケーション | 中村 哲, 松本 裕治, Sakriani Sakti, 吉野 幸一郎 |
title: Positive Emotion Elicitation with Affective Dialogue Systems
abstract: Dialogue systems started as a way for users to naturally interact with machines to complete certain tasks. However, as the technology develops, the potential of agents to improve the emotional well-being of users has been growing as well. An emotionally competent computer agent could be a valuable assistive technology for various affective tasks, e.g., caring for the elderly, low-cost ubiquitous chat therapy, and providing emotional support in general. In this research, I propose promoting a more positive emotional state through dialogue system interaction. Positive emotion elicitation seeks to improve the user's emotional state through dialogue system interaction, where a chat-based scenario is layered with an implicit goal to address the user's emotional needs. To achieve this goal, I first constructed a corpus to capture dialogue strategy in a positive emotion elicitation scenario. I exploited the data using neural network techniques to construct an end-to-end dialogue system that is aware of the user's emotional state. Three positive emotion elicitation strategies with neural networks are elaborated. The talk concludes with the future direction of the research towards learning an explicit dialogue strategy for positive emotion elicitation. language of the presentation: English
VETTER MARCO | D, 中間発表 | 知能コミュニケーション | 中村 哲, 松本 裕治, Sakriani Sakti |
title: Using Automatically Generated Prosody Labels to Improve Word Segmentation and Lexical Discovery
abstract: Exploring and documenting under-resourced languages is a difficult and time-consuming task. Computer-assisted analysis can facilitate this process; however, in the absence of sufficient training data, low- or zero-resource approaches are required. An important step in automatic language analysis is the segmentation of a speech signal into phonemes and words. Infants seem to employ the previously acquired ability to detect speech prosody to aid in this task. In this work, we investigate the possibility of using automatically generated prosodic information to improve the segmentation of continuous speech into smaller units. To this end, we train artificial neural networks to detect human speech prosody and to generate prosodic break and intonation labels. We will then apply the learned models in a cross-lingual fashion to generate the same type of information for a language that has not been seen in training. language of the presentation: English