Colloquium B Presentations

Date and time: September 18 (Wed), 2nd period (11:00~12:30)

Venue: L1

Chair: 小蔵正輝
幾谷 吉晴 D, interim presentation, Software Engineering, 松本 健一, 池田 和司, 久保 孝富, 石尾 隆, 畑 秀明
title: Toward Identifying the Neural Basis of Programming Expertise: An fMRI Study with Expert Programmers
abstract: Learning a programming language is one of the most attractive activities in the current digitized world, and people spend enormous amounts of time reading and writing such artificially designed languages. However, despite its huge impact on modern society and industry, we know little about the neural bases that differentiate expert programmers from novices. Here, we investigate the neural basis of programming expertise by conducting multi-group comparisons of programmers with different levels of programming expertise. In the experiment, subjects performed source code categorization tasks while their brain activity was measured by functional magnetic resonance imaging (fMRI). Our results show that individual levels of programming expertise are strongly associated with the multi-voxel pattern discriminability specialized for semantic categories of program source code.
language of the presentation: English
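The following is a minimal sketch, not the authors' actual analysis pipeline, of the kind of multi-voxel pattern analysis the abstract refers to: a linear classifier decodes the semantic category of source code from voxel activity patterns, and cross-validated decoding accuracy serves as a proxy for "multi-voxel pattern discriminability". All data, sizes, and parameters below are illustrative assumptions.

```python
# Hypothetical MVPA sketch on synthetic data (not the authors' code or data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels, n_categories = 120, 200, 4      # illustrative sizes
y = rng.integers(0, n_categories, size=n_trials)     # category label per trial
signal = rng.normal(size=(n_categories, n_voxels))   # category-specific voxel pattern
X = signal[y] + rng.normal(scale=3.0, size=(n_trials, n_voxels))  # noisy trial patterns

# Cross-validated decoding accuracy for one (simulated) subject; comparing this
# score across subjects is one way to relate discriminability to expertise.
acc = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_categories:.2f})")
```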
 
池田 裕哉 M, 2nd presentation, Computing Architecture, 中島 康彦, 池田 和司, 中田 尚, Tran Thi Hong, 張 任遠
title: Evaluation of neuromorphic hardware using neural networks and oxide semiconductors
abstract: As the amount of data people handle increases, the conventional von Neumann computer architecture is reaching its limits, and research on hardware implementations of machine learning systems is therefore being actively conducted. In this presentation, I have implemented and evaluated neuromorphic hardware that realizes human brain neurons using an amorphous In-Ga-Zn-O (a-IGZO) oxide semiconductor and a cellular neural network. We confirmed how variations in the initial resistance and the deterioration rate of the oxide semiconductor affect the accuracy of the neuromorphic hardware.
language of the presentation: Japanese
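Below is a minimal software sketch, assuming a simple device model that is not the presenter's hardware or circuit: synaptic weights are stored as oxide-semiconductor conductances, and random spread in initial resistance plus per-device deterioration perturb the weights, degrading inference accuracy on a toy task. All quantities and distributions are illustrative assumptions.

```python
# Hypothetical device-variation sketch (not the presenter's a-IGZO model).
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable task and an ideal trained weight vector.
X = rng.normal(size=(1000, 16))
w_ideal = rng.normal(size=16)
y = (X @ w_ideal > 0).astype(int)

def accuracy(weights):
    return np.mean((X @ weights > 0).astype(int) == y)

for sigma_r in (0.0, 0.05, 0.2):          # relative spread of initial resistance
    for drift in (0.0, 0.1, 0.3):          # maximum per-device deterioration
        # Conductance G = 1/R carries the weight; variation and drift perturb it.
        resistance = 1.0 / np.abs(w_ideal)
        resistance *= 1.0 + sigma_r * rng.normal(size=resistance.shape)
        resistance *= 1.0 + drift * rng.random(size=resistance.shape)
        w_device = np.sign(w_ideal) / resistance
        print(f"sigma_r={sigma_r:.2f} drift={drift:.1f} "
              f"accuracy={accuracy(w_device):.3f}")
```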
 
西本 宏樹 M, 2nd presentation, Computing Architecture, 中島 康彦, 池田 和司, 中田 尚, Tran Thi Hong, 張 任遠
title: Acceleration of Variational Bayesian Gaussian Mixture Models
abstract: Machine learning technology has developed along with the performance improvement of CPUs and GPGPUs, and it is used to analyze big data. Clustering is one such task. The Gaussian Mixture Model (GMM) realizes clustering and is widely used in many applications for probability density modeling and soft clustering; however, the parameters of a GMM must be estimated from data. Among the available estimation methods, Variational Bayesian Gaussian Mixture Models (VB-GMM) is less prone to over-fitting than the others, but it is computationally demanding, so we accelerate VB-GMM. So far, we have implemented it on a GPGPU, and its execution speed is 524 times faster than a CPU implementation.
language of the presentation: Japanese
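For reference, the following is a minimal CPU-side sketch, not the presenter's GPGPU implementation, showing what VB-GMM computes, using scikit-learn's BayesianGaussianMixture on synthetic data. It also illustrates the over-fitting resistance mentioned in the abstract: the number of components is deliberately over-specified and the variational Bayesian treatment prunes the unneeded ones.

```python
# Hypothetical VB-GMM reference run on synthetic data (not the presenter's code).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Three well-separated Gaussian blobs in 2-D.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(300, 2))
               for c in ((0, 0), (5, 0), (0, 5))])

# Over-specify the number of components; the variational Bayesian treatment
# drives the weights of unneeded components toward zero instead of over-fitting.
vbgmm = BayesianGaussianMixture(n_components=10, covariance_type="full",
                                max_iter=500, random_state=0).fit(X)

labels = vbgmm.predict(X)                 # hard labels from the responsibilities
active = np.sum(vbgmm.weights_ > 1e-2)    # components that survived pruning
print(f"clusters found: {len(np.unique(labels))}, active components: {active}")
```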