Seminar Presentations

Date & Time: Wednesday, November 20, 3rd period (13:30-15:00)


Venue: L1

Chair: 武富 貴史
沖 修平 1351021: M, 1st presentation, Network Systems
title: Performance Evaluation of Using the Subspace Method in an LCX-Based Positioning System for Radio Terminals
abstract: Recently, services that use the position information of radio terminals have been provided in many situations because of the spread of smartphones. Existing positioning methods have several problems when used indoors, so demand for indoor position detection of radio terminals is increasing. In this presentation, we propose a positioning method that uses a leaky coaxial cable (LCX) and OFDM (Orthogonal Frequency Division Multiplexing) signals based on the WiFi standard. In addition, we improve the accuracy of this system by applying the subspace method after estimating the channel between the radio terminal and the base station.
language of the presentation: Japanese
発表題目 (presentation title): Performance evaluation using the subspace method in an LCX-based radio-terminal positioning system
発表概要 (presentation abstract): In recent years, with the spread of smartphones, a variety of services that use terminal position information have appeared. However, existing positioning methods face various problems when used indoors, so demand for practical indoor positioning is growing. This research proposes a radio-terminal positioning method that uses a leaky coaxial cable (LCX: Leaky Coaxial Cable) and OFDM (Orthogonal Frequency Division Multiplexing) signals compliant with the wireless LAN standard, and that can therefore provide communication service at the same time. Detection accuracy is further improved by applying the subspace method after estimating the propagation channel between the radio terminal and the base station.
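The entry above applies a subspace method to channel estimates obtained over WiFi-style OFDM subcarriers. A minimal sketch of that family of techniques, assuming a MUSIC-style search over candidate propagation delays with made-up array sizes, subcarrier spacing, and function names (this is not the presented system):

```python
# Minimal MUSIC-style subspace sketch (illustration only, not the presented system).
# Assumption: we have per-subcarrier channel estimates H (snapshots x subcarriers)
# and search over candidate propagation delays, which map to positions along the LCX.
import numpy as np

def music_delay_spectrum(H, subcarrier_spacing_hz, delays_s, num_sources=1):
    """H: complex array (num_snapshots, num_subcarriers) of channel estimates."""
    num_sub = H.shape[1]
    R = H.conj().T @ H / H.shape[0]              # sample covariance over subcarriers
    eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvec[:, : num_sub - num_sources]      # noise subspace
    k = np.arange(num_sub)
    spectrum = []
    for tau in delays_s:
        a = np.exp(-2j * np.pi * k * subcarrier_spacing_hz * tau)  # steering vector
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / max(denom, 1e-12))
    return np.array(spectrum)

# Toy usage: one path at 120 ns observed over 64 subcarriers, 20 noisy snapshots.
rng = np.random.default_rng(0)
spacing = 312.5e3                                 # WiFi-style OFDM subcarrier spacing
true_tau = 120e-9
k = np.arange(64)
snapshots = np.array([np.exp(-2j * np.pi * k * spacing * true_tau) * rng.normal(1.0, 0.1)
                      + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
                      for _ in range(20)])
delays = np.linspace(0, 500e-9, 501)
est = delays[np.argmax(music_delay_spectrum(snapshots, spacing, delays))]
print(f"estimated delay: {est * 1e9:.1f} ns")
```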
 
篠本 渉 1351050: M, 1st presentation, Visual Information Media
title: [Paper Introduction] Simultaneous Super-Resolution of Depth and Images using a Single Camera
abstract: Image resolution is an important factor in the quality of 3D scene reconstruction or camera pose estimation using depth maps and images from a single moving camera. In this paper, the authors propose an optimization framework for the simultaneous estimation of a super-resolved depth map and images, which enables us to reconstruct the 3D structure of the scene or the camera pose with high quality. In the proposed method, depth map estimation and image super-resolution are formulated in a single energy function and solved simultaneously using a first-order primal-dual algorithm. Experimental results show that the proposed method is as accurate as conventional methods while taking much less computation time.
language of the presentation: Japanese
発表題目 (presentation title): [Paper introduction] Simultaneous super-resolution of depth and images using a single camera
発表概要 (presentation abstract): In methods such as 3D scene reconstruction and camera pose estimation that use a depth map and images captured by a single moving camera, image resolution is a major factor determining quality. This paper therefore proposes an optimization framework for simultaneously estimating high-resolution images and a high-resolution depth map, which makes it possible to reconstruct the 3D scene and the camera pose with high quality. In the proposed method, depth map estimation and image super-resolution are formulated as a single energy function, which is solved simultaneously with a first-order primal-dual algorithm. Experiments show that the method matches the accuracy of conventional super-resolution techniques while reducing computation cost.
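The paper solves its joint energy with a first-order primal-dual algorithm. As a self-contained illustration of that solver class only, here is a Chambolle-Pock-style iteration applied to a much simpler TV-denoising energy (the paper's actual energy couples depth estimation and image super-resolution; all names below are mine):

```python
# Minimal first-order primal-dual sketch on a TV-denoising energy (illustration only;
# the paper's energy jointly handles depth estimation and image super-resolution).
import numpy as np

def grad(u):                      # forward differences, Neumann boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):                  # negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise(f, lam=8.0, iters=200):
    """min_u ||grad u||_1 + lam/2 * ||u - f||^2 via primal-dual iterations."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)              # step sizes with tau*sigma*L^2 <= 1
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm                    # project dual variable onto unit balls
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old                     # over-relaxation step
    return u

# Toy usage: denoise a noisy step image.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print("residual error std:", np.std(tv_denoise(noisy) - clean))
```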
 
西垣 友理 1351081: M, 1st presentation, Intelligent Communication
title: A new speech synthesis technique using both sound and text information as input
abstract: I present a new expressive speech synthesis technique intended to expand the means of creating content with speech. Text-to-speech synthesis and voice conversion are typical speech synthesis techniques. Text-to-speech synthesis generates speech from arbitrary input text, while voice conversion converts the voice of a source speaker into that of a target speaker while keeping the linguistic information. Text-to-speech synthesis can produce natural-sounding speech, but its prosodic expressiveness is poor. With voice conversion, on the other hand, prosodic expressiveness can be controlled through the intonation of the source speaker, but the converted speech is degraded in comparison with text-to-speech synthesis. In this research, we propose a new speech synthesis technique that uses both text and speech as input, aiming at higher naturalness and expressiveness.
language of the presentation: Japanese
発表題目 (presentation title): A speech synthesis method that takes text and speech as input
発表概要 (presentation abstract): This presentation describes a new, highly expressive speech synthesis method intended to expand the means of creating content with speech. Representative speech synthesis approaches are text-to-speech synthesis, which synthesizes speech from arbitrary text, and voice conversion, which converts one speaker's voice into another speaker's voice while preserving the linguistic information. Text-to-speech synthesis can produce highly natural speech, but its prosodic expressiveness is poor, making expressive speech difficult to synthesize. In voice conversion, the prosody depends on the input speaker, so richly varied converted speech can be produced according to the input speaker's expressiveness; however, the converted speech is degraded compared with text-to-speech synthesis, and high-quality speech cannot be obtained. Aiming at speech synthesis that is both natural and expressive, this research proposes a new speech synthesis method that uses text and speech as input.
 
小田 悠介 1351023: M, 1st presentation, Intelligent Communication
title: Learning PCCG Semantic Parsing for Automatic Programming
abstract: Creating computer programs requires manual coding, so securing human resources for programming work is an important issue. We think that this burden can be reduced if we construct an automatic programming system that creates programs from specification documents on behalf of humans. In order to create programs corresponding to documents written in natural language, we need to extract the semantics of each document in some way. Combinatory categorial grammar (CCG) treats this task within the framework of syntactic parsing and can obtain semantics together with syntactic information. A previous work proposed a method for generating source code from semantics extracted from a document using CCG. However, it has an extensibility problem because the grammar was written by hand. This research considers a method that acquires the grammar automatically from existing documents using machine learning, obtaining extensibility without man-made rules. We plan to use a probabilistic CCG (PCCG) model for this purpose. In this presentation, we first give an overview of automatic programming and semantic parsing using CCG. Second, we show the results of a preliminary experiment on learning a CCG model for simple arithmetic problems. Finally, we explain our future work.
language of the presentation: Japanese
発表題目 (presentation title): Learning a PCCG semantic parsing model for automatic programming
発表概要 (presentation abstract): Creating computer programs requires hand-written code, and securing the human resources for this work has become an important issue. If an automatic programming system that generates programs from specification documents in place of humans could be built, this burden could be reduced. To generate a program corresponding to a sentence written in natural language, the meaning of the sentence must be extracted in some way. Combinatory categorial grammar (CCG) handles this within the framework of syntactic parsing and can obtain semantics together with syntactic information. Prior work has attempted to generate source code from the semantics obtained with CCG; however, its rules were created by hand, which limits extensibility. This research investigates a method that gains extensibility without hand-written rules by acquiring the rules automatically from existing documents with machine learning, and we plan to use a PCCG model, which introduces probabilities into CCG, for this purpose. In this presentation, I outline automatic programming and semantic parsing with CCG, show the results of a preliminary experiment in which a PCCG model was trained on simple arithmetic problems, and discuss future work.
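The preliminary experiment learns a CCG model for simple arithmetic problems. As a toy illustration of how CCG categories and lambda semantics compose (with a hand-written lexicon, whereas the presented work learns a probabilistic PCCG lexicon), consider:

```python
# Toy CCG-style semantic composition for arithmetic phrases (illustration only).
# The presented work *learns* a probabilistic CCG (PCCG); here the lexicon is
# hand-written just to show how categories and lambda semantics combine.

# A category is either a primitive string ("N") or ("/"|"\\", result, argument).
LEXICON = {
    "two":   ("N", 2),
    "three": ("N", 3),
    "five":  ("N", 5),
    # "plus" wants a number on the right, then a number on the left: (N\N)/N
    "plus":  (("/", ("\\", "N", "N"), "N"), lambda y: lambda x: x + y),
    "times": (("/", ("\\", "N", "N"), "N"), lambda y: lambda x: x * y),
}

def combine(left, right):
    """Try forward application (X/Y Y -> X) and backward application (Y X\\Y -> X)."""
    lcat, lsem = left
    rcat, rsem = right
    if isinstance(lcat, tuple) and lcat[0] == "/" and lcat[2] == rcat:
        return (lcat[1], lsem(rsem))
    if isinstance(rcat, tuple) and rcat[0] == "\\" and rcat[2] == lcat:
        return (rcat[1], rsem(lsem))
    return None

def parse(tokens):
    """Greedy left-to-right shift-reduce parse; enough for flat arithmetic phrases."""
    stack = []
    for tok in tokens:
        stack.append(LEXICON[tok])
        while len(stack) >= 2:
            merged = combine(stack[-2], stack[-1])
            if merged is None:
                break
            stack[-2:] = [merged]
    assert len(stack) == 1 and stack[0][0] == "N", "parse failed"
    return stack[0][1]

print(parse("two plus three".split()))             # -> 5
print(parse("two plus three times five".split()))  # -> 25 (greedy left-to-right)
```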
 
真木 勇人 1351096: M, 1st presentation, Intelligent Communication
title: Multi-Channel EEG Signal Separation Using a Probabilistic Model Considering Statistical Features
abstract: EEG is the recording of electrical activity resulting from current flows within neurons. It is used for rehabilitation support, decoding mental states, brain-machine interfaces, and so on. However, EEG is a highly noisy signal, so a technique to separate the target signals from the noise is needed. Synchronous averaging and independent component analysis both have fundamental problems in the context of EEG. A separation method using a probabilistic model has recently been proposed, but it does not consider statistical features such as the temporal and frequency characteristics peculiar to each separated signal. In this research, we investigate the statistical features arising from various phenomena, propose a probabilistic model that incorporates this prior knowledge, and implement it.
language of the presentation: Japanese
発表題目 (presentation title): Signal separation of multi-channel EEG using a probabilistic model that considers statistical properties
発表概要 (presentation abstract): EEG is a signal that captures brain activity electrically, and a wide range of applications is expected, such as rehabilitation support, estimation of mental states, and next-generation interfaces. However, EEG is an extremely noisy signal source, so a method for separating the target signals from noise is indispensable. The mainstream approaches to EEG separation, ensemble averaging and independent component analysis, both have fundamental problems. A recently proposed separation method based on a probabilistic model does not take into account the statistical characteristics, such as temporal and frequency properties, that are specific to each separated signal. In this research, we investigate the statistical characteristics of EEG signals arising from various phenomena, and then propose and implement a separation method based on a probabilistic model that incorporates this prior knowledge.
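Independent component analysis is named above as one of the conventional separation approaches. A minimal ICA baseline sketch on synthetic multi-channel mixtures, assuming scikit-learn is available (this is the baseline being criticized, not the proposed probabilistic model):

```python
# Minimal ICA baseline sketch for multi-channel signal separation (illustration only;
# the presented work proposes a probabilistic model, not ICA). Requires scikit-learn.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Synthetic "sources": an alpha-like oscillation, a slow drift, and broadband noise.
sources = np.c_[np.sin(2 * np.pi * 10 * t),                   # ~10 Hz rhythm
                0.5 * np.sign(np.sin(2 * np.pi * 0.5 * t)),    # slow artifact-like drift
                0.3 * rng.standard_normal(t.size)]

mixing = rng.standard_normal((4, 3))                           # 4 hypothetical electrodes
observed = sources @ mixing.T                                  # (2000, 4) multi-channel record

ica = FastICA(n_components=3, random_state=0)
estimated = ica.fit_transform(observed)                        # recovered components (2000, 3)

# Crude check: each true source should correlate strongly with some recovered component.
corr = np.corrcoef(sources.T, estimated.T)[:3, 3:]
print(np.round(np.abs(corr).max(axis=1), 2))
```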
 

Venue: L2

Chair: 伊原 彰紀
藤原 寛高 1351095: M, 1st presentation, Internet Engineering
Title: Mechanism of a "Drive-by Download Attack"
Abstract: In Japan, malware infections are increasing in frequency. A "drive-by download attack" is a way for malware to intrude into a system that goes undetected by the user. Furthermore, this kind of attack is difficult to detect with an intrusion detection system (IDS). To address this, in this presentation I introduce the paper entitled "Anatomy of Drive-by Download Attack".
Language of the presentation: Japanese
発表題目 (presentation title): The mechanism of drive-by download attacks
発表概要 (presentation abstract): Reports of malware infections are on the rise in Japan as well. A drive-by download attack is a method of getting malware into a system without the user noticing, and it is difficult to detect even with intrusion detection and prevention systems such as IDS and IPS. In this presentation I introduce the paper "Anatomy of Drive-by Download Attack".
 
Ê¿ÌεÍÎ 1351090: M, 1²óÌÜȯɽ ¥³¥ó¥Ô¥å¡¼¥Æ¥£¥ó¥°¡¦¥¢¡¼¥­¥Æ¥¯¥Á¥ã
title: An introduction to GraphChi and a brief study of the triangle counting algorithm
abstract: These days, with the continuous increase in information volume, data processing has entered a big-data era in which real-time processing becomes increasingly difficult as data sets grow. A library named GraphChi was previously proposed to improve disk I/O performance, in particular to reduce the large random-access delays in big-data processing. With GraphChi, it is possible to use even a single commodity computer for big-data analysis. However, this raises new problems: the previously hidden delays of computation and cache misses may become the new bottlenecks once disk I/O is largely improved by GraphChi. In my research, I will first focus on the triangle counting algorithm, which has a complexity of O(V*E), by studying its cache access pattern and possible cache hierarchies that match that pattern. In this presentation I will briefly introduce previous work, my research plan, and my progress.
language of the presentation: Japanese
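The triangle-counting algorithm mentioned above is easy to state in memory; GraphChi's contribution is running such computations out of core. A minimal in-memory sketch whose irregular set lookups are the kind of access pattern a cache study would profile (function names are mine):

```python
# Minimal triangle counting sketch (in-memory; GraphChi's out-of-core version differs).
# Counts each triangle once by only pairing a vertex with higher-numbered neighbors,
# then testing for the closing edge -- the irregular adjacency lookups are exactly the
# memory access pattern a cache-behavior study would profile.
from collections import defaultdict

def count_triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    total = 0
    for u in adj:
        higher = [v for v in adj[u] if v > u]
        for i, v in enumerate(higher):
            for w in higher[i + 1:]:
                if w in adj[v]:          # third edge closes the triangle
                    total += 1
    return total

# Toy usage: a 4-clique contains 4 triangles.
clique4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(count_triangles(clique4))          # -> 4
```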
 
湯月 亮平 1351110: M, 1st presentation, Software Engineering
 
鷲尾 直大 1351113: M, 1st presentation, Information Infrastructure Systems
title: A Proposal for a Preceding-Route Information Acquisition System Using Vehicle Clustering Based on Trajectories
abstract: In inter-vehicle communication, obtaining important traffic information through passing communication with oncoming vehicles requires communicating, within a very short period of time, with the vehicles that hold the useful information. Aiming to obtain information about traffic accidents and congestion on the route ahead, we propose a preceding-route information acquisition system that uses vehicle clustering based on trajectories.
language of the presentation: Japanese
発表題目 (presentation title): A proposal for a preceding-route information acquisition system based on cooperation with oncoming vehicles using trajectory-based vehicle clustering
発表概要 (presentation abstract): In inter-vehicle communication, to obtain necessary information through passing communication with oncoming vehicles, a vehicle must communicate with the vehicles that hold that information within an extremely short time. To obtain road information such as accidents and congestion on the route ahead, this research proposes a preceding-route information acquisition system that uses vehicle clustering based on driving trajectories.
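The abstract names trajectory-based vehicle clustering without specifying the algorithm. Purely as a toy illustration under my own assumptions (trajectories as lists of road-segment IDs, a shared-suffix rule as the similarity criterion), one simple grouping could look like:

```python
# Toy sketch of trajectory-based vehicle clustering (illustration only; the actual
# clustering criterion of the proposed system is not described in the abstract).
# A trajectory is assumed to be the ordered list of road-segment IDs recently driven.

def shared_suffix_len(a, b):
    """Length of the common trailing sub-sequence of two segment-ID lists."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def cluster_vehicles(trajectories, min_shared=3):
    """Greedy grouping: a vehicle joins the first cluster whose representative
    shares at least `min_shared` trailing segments with it."""
    clusters = []                                 # list of (representative, members)
    for vid, traj in trajectories.items():
        for rep_traj, members in clusters:
            if shared_suffix_len(rep_traj, traj) >= min_shared:
                members.append(vid)
                break
        else:
            clusters.append((traj, [vid]))
    return [members for _, members in clusters]

# Toy usage: three cars that just drove the same last three segments, one that didn't.
trajs = {
    "car_A": [11, 12, 13, 20, 21, 22],
    "car_B": [14, 15, 20, 21, 22],
    "car_C": [16, 20, 21, 22],
    "car_D": [30, 31, 32, 33],
}
print(cluster_vehicles(trajs))    # -> [['car_A', 'car_B', 'car_C'], ['car_D']]
```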
 
坂口 英司 1351046: M, 1st presentation, Software Engineering
title: Paper introduction: Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining
abstract: To develop high-quality software, test code should be added and modified along with production code, because functionality or specification changes may occur during the development process. Nevertheless, methods and tools that recognize the evolution of production code together with its corresponding test code have not been proposed. In this presentation, I'll introduce a paper that proposes three views for gaining insight into the nature of the co-evolution of production and test code. Concretely, the three views combine information from a software project's versioning system, the size of the various artifacts, and the test coverage reports. At the end of the presentation, I'll describe my research plan.
language of the presentation: Japanese
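The paper's first view combines information from the project's versioning system. A minimal repository-mining sketch in that spirit, using my own test-file heuristic and git invocation rather than the paper's tooling:

```python
# Minimal repository-mining sketch in the spirit of the paper's first view: walk the
# version history and record, per commit, whether production files, test files, or both
# were touched. The "test" heuristic (path contains "test") and the git invocation are
# my own assumptions, not the paper's tooling.
import subprocess
from collections import Counter

def co_change_counts(repo_path):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:@%H"],
        capture_output=True, text=True, check=True).stdout
    per_commit, files = [], []
    for line in log.splitlines():
        if line.startswith("@"):                 # start of a new commit entry
            if files:
                per_commit.append(files)
            files = []
        elif line.strip():
            files.append(line.strip())
    if files:
        per_commit.append(files)
    summary = Counter()
    for changed in per_commit:
        has_test = any("test" in f.lower() for f in changed)
        has_prod = any("test" not in f.lower() for f in changed)
        if has_test and has_prod:
            summary["co-evolving commits"] += 1
        elif has_test:
            summary["test-only commits"] += 1
        elif has_prod:
            summary["production-only commits"] += 1
    return summary

# Toy usage on any local clone:
# print(co_change_counts("/path/to/repo"))
```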
 

Venue: L3

Chair: 山本 豪志朗
下山 冬馬 1351057: M, 1st presentation, Ambient Intelligence
title: Estimation of human body condition using pose estimation with depth sensors
abstract: I estimate the state of the human body by using human pose estimation from video. For example, the health of an elderly person's legs is estimated from video of the person walking. Accurate pose estimation is necessary for this purpose. In this presentation, I present previous studies on pose estimation using depth sensors and on estimation of human body condition, and I present my future work.
language of the presentation: Japanese
発表題目 (presentation title): Estimating a person's physical condition using pose estimation with depth sensors
発表概要 (presentation abstract): A person's physical condition is estimated using human pose estimation from video; for example, the health of an elderly person's legs and lower back is estimated from video of the person walking. This requires highly accurate pose estimation, so this research performs human pose estimation using depth sensors. In this presentation, I review prior work on pose estimation with depth sensors and on estimating a person's condition from video, and describe future plans.
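As a toy example of the kind of measurement a depth-sensor skeleton could feed into such an assessment (the joints and the gait quantity are my own illustration, not the presented method):

```python
# Toy sketch: computing a knee angle from 3D joint positions, the kind of quantity a
# depth-sensor skeleton (e.g. hip/knee/ankle joints) could feed into a gait assessment.
# The joint positions below are illustrative assumptions, not the presented method.
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c in 3D."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy usage: hip, knee, ankle positions (metres) from one frame of a tracked skeleton.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.05, 0.55, 0.05), (0.02, 0.10, 0.02)
print(f"knee angle: {joint_angle_deg(hip, knee, ankle):.1f} deg")  # ~180 deg = fully extended
```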
 
永井 洋介 1351073: M, 1st presentation, Ambient Intelligence
title: [Paper Introduction] Adherence to Smartphone Application for Weight Loss Compared to Website and Paper Diary: Pilot Randomized Controlled Trial
abstract: There is growing interest in the use of information communication technologies to treat obesity. Although there have been studies of texting-based intervention and smartphone applications (apps) used as adjuncts to other treatments, there are currently no randomized controlled trials (RCT) of a stand-alone smartphone application for weight loss that focuses primarily on self-monitoring of diet and physical activity. The aim of this pilot study was to collect acceptability and feasibility outcomes of a self-monitoring weight management intervention delivered by smartphone app, compared to a website and paper diary. Finally, I present my research plan.
language of the presentation: Japanese
 
三浦 未来 1351101: M, 1st presentation, Natural Language Processing
title: An introduction to methods for recognizing textual entailment
abstract: In NLP research, recognizing textual entailment (RTE) has been widely investigated. It is expected to improve the performance of methods for other NLP tasks. In this talk, I introduce some RTE methods and explain the direction of my research.
language of the presentation: Japanese
発表題目 (presentation title): A survey of methods for recognizing textual entailment
発表概要 (presentation abstract): In NLP, research on recognizing textual entailment (RTE) between texts is being conducted; the technique is expected to contribute to improving the accuracy of other applied NLP tasks. In this presentation, I introduce currently proposed RTE methods and explain the direction of my own research.
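For orientation only, a naive word-overlap heuristic for the RTE task; the surveyed methods are far more sophisticated, and every name below is my own:

```python
# Minimal word-overlap baseline for recognizing textual entailment (illustration only;
# the survey covers real RTE methods, of which this naive heuristic is not one).

def entails(text, hypothesis, threshold=0.8):
    """Guess 'entailment' when most hypothesis words also occur in the text."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    overlap = len(h_words & t_words) / max(len(h_words), 1)
    return overlap >= threshold

print(entails("a man is playing a guitar on stage", "a man is playing a guitar"))  # True
print(entails("a man is playing a guitar on stage", "a woman is singing"))         # False
```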
 
山嵜 麿与 1351109: M, 1st presentation, Natural Language Processing
title: A survey of extractive, informative multi-document summarization techniques
abstract: The need for automatic text summarization has recently increased due to the proliferation of information on the Internet. In particular, summarization of multiple documents is actively studied because it is difficult to manually select useful information from large-scale document collections. In this report, I survey the main techniques, focusing on informative, extractive summarization of multiple documents, and then describe my future plans.
language of the presentation: Japanese
発表題目 (presentation title): A survey of informative, extractive multi-document summarization methods
発表概要 (presentation abstract): In recent years, with the rapid increase of information on the Internet, demand for automatic summarization has been growing. In particular, because it is difficult to manually select useful information from large document collections, research on summarization of multiple documents is being actively pursued. This report surveys the main techniques, focusing on informative, extractive summarization of multiple documents, and then describes future plans.
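As a minimal sketch of extractive multi-document summarization (a plain frequency-scoring baseline with a simple redundancy penalty, standing in for the surveyed techniques rather than reproducing any of them):

```python
# Minimal extractive multi-document summarization sketch (illustration only; the survey
# covers stronger methods). Sentences are scored by content-word frequency across all
# documents, and a redundancy penalty discourages picking sentences that repeat content.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "and", "for", "on", "by"}

def sentences(doc):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]

def words(sentence):
    return [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]

def summarize(documents, num_sentences=2):
    all_sents = [s for d in documents for s in sentences(d)]
    freq = Counter(w for s in all_sents for w in words(s))
    chosen = []
    while all_sents and len(chosen) < num_sentences:
        covered = set(w for c in chosen for w in words(c))
        def score(s):
            # reward frequent content words, but only ones not already covered
            return sum(freq[w] for w in set(words(s)) if w not in covered)
        best = max(all_sents, key=score)
        chosen.append(best)
        all_sents.remove(best)
    return chosen

docs = [
    "The river flooded the town. Rescue teams reached the town by boat.",
    "Heavy rain caused the river to flood. The town was evacuated overnight.",
]
print(summarize(docs))
```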
 
近藤 雅芳 1351044: M, 1st presentation, Natural Language Processing
title: A new visualization of documents and words that considers word ambiguity
abstract: In this presentation, I will propose a new visualization of documents and words that takes the ambiguity of words into account. In recent years, it has become very important to intuitively understand the overall structure of the massive and varied information generated on the Internet. I first introduce PLSV, a topic-model-based visualization method for document datasets from previous research, and then explain my new visualization method, whose model is constructed with word ambiguity in mind.
language of the presentation: Japanese
発表題目 (presentation title): A study on visualization of words and documents considering word ambiguity
発表概要 (presentation abstract): I explain a new visualization method that takes the polysemy of words into account. Intuitively grasping the overall structure of the huge and diverse body of information generated on the Web via the Internet every day is an extremely important problem. In this presentation, I introduce PLSV, a visualization method based on topic models, as prior work, and then explain the new visualization method proposed in this research, which focuses on the relationships between documents and vocabulary and on the polysemy of words.
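PLSV itself is not reproduced here; as a rough stand-in for topic-model-based document visualization (my own substitution: LDA topic proportions followed by a 2-D MDS projection, assuming scikit-learn is available):

```python
# Rough stand-in for topic-model-based document visualization (illustration only;
# PLSV embeds documents directly inside the topic model, whereas this sketch just
# runs LDA and then projects the topic proportions to 2-D with MDS). Needs scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import MDS

docs = [
    "the cat sat on the mat with another cat",
    "dogs and cats are common pets",
    "stocks fell as the market reacted to interest rates",
    "the central bank raised interest rates again",
    "a new planet was observed by the space telescope",
    "astronomers studied the distant galaxy with a telescope",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)
coords = MDS(n_components=2, random_state=0).fit_transform(topics)

for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc[:40]}")
```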