Colloquium B Presentations

Date and time: September 13 (Fri), Period 4 (15:10-16:40)

Venue: L2

Chair: 江口 僚太
石原 実 (M, 2nd presentation) Optical Media Interfaces: 向川 康博, 松原 崇充, 舩冨 卓哉, 藤村 友貴, 北野 和哉

Title: Representation of Geometric Transform Fields Using Neural Networks

Abstract: Conventional methods that describe geometric transform fields with kernel regression face a computational cost that grows with the number of sample points. In this study, we use a neural network to realize regression whose evaluation cost stays constant as the number of sample points increases. We validated the approach on a registration task with human embryo slice images, confirming that a neural network can describe a geometric transform field. Future work includes verifying the computational advantage of the method on larger datasets.

Language of the presentation: Japanese

Presentation title: Describing Geometric Transform Fields with a Neural Network

Presentation abstract: Conventional kernel-regression methods for describing geometric transform fields suffer from a computational cost that balloons as the number of sample points grows. In this study, we used a neural network to realize regression whose computational cost does not change as sample points increase. We verified the approach on a registration task for human embryo slice images and confirmed that a neural network can describe a geometric transform field. Confirming the computational advantage on large-scale data remains as future work.

Example of object deformation by a geometric transform field
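The contrast drawn in the abstract, kernel regression whose per-query cost grows with the number of sample points versus a fixed-size network whose cost does not, can be sketched as follows. This is a minimal illustration and not the authors' method: the linear ground-truth field, the Nadaraya-Watson kernel, the bandwidth, and the untrained placeholder MLP weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D geometric transform field: displacement d(x) = A x + b (assumed ground truth).
A = np.array([[0.1, -0.05], [0.02, 0.08]])
b = np.array([0.5, -0.2])

# N sample points with observed displacements.
N = 200
X = rng.uniform(-1, 1, size=(N, 2))
D = X @ A.T + b

def kernel_regress(q, X, D, h=0.2):
    """Nadaraya-Watson kernel regression: per-query cost grows with N."""
    w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * h ** 2))  # one weight per sample
    return (w[:, None] * D).sum(axis=0) / w.sum()

# Fixed-size MLP (weights would be fitted by training; random placeholders here):
# evaluation cost depends only on the layer sizes, not on N.
W1 = rng.normal(size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 2)); b2 = np.zeros(2)

def mlp_field(q):
    h = np.tanh(q @ W1 + b1)
    return h @ W2 + b2

q = np.array([0.3, -0.4])
print(kernel_regress(q, X, D))  # O(N) per query
print(mlp_field(q))             # O(1) in N per query
```

The point of the sketch is the cost structure: adding more sample points makes every kernel-regression query more expensive, while the MLP's forward pass stays the same size.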
 
福田 竜平 (M, 2nd presentation) Robot Learning: 松原 崇充, 安本 慶一, 柴田 一騎, 鶴峯 義久, 佐々木 光
Title: Imitation Learning for Long-Term Tasks Using Skill Decomposition
Abstract: Automating long-term tasks with robots is in demand. An essential approach is for the robot to learn by imitating actions demonstrated by a human. However, a demonstration of a long-term task contains complex actions composed of multiple skills, which makes learning difficult. This study proposes an imitation learning method that decomposes a demonstration into individual skills. We verify the method by decomposing demonstrations according to specific skill conditions in an object pick-and-place task.
Language of the presentation: Japanese
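The decomposition step described in the abstract can be illustrated with a toy sketch: splitting a demonstrated trajectory into skill segments wherever a chosen condition changes. The gripper open/closed flag used as the skill condition here is an assumption for illustration only; the abstract does not specify the actual conditions.

```python
from itertools import groupby

# Hypothetical pick-and-place demonstration: each step records a gripper flag
# that marks which skill phase the robot is in (an assumed condition).
demo = [
    {"t": 0, "gripper": "open"},    # reaching toward the object
    {"t": 1, "gripper": "open"},
    {"t": 2, "gripper": "closed"},  # transporting the object
    {"t": 3, "gripper": "closed"},
    {"t": 4, "gripper": "open"},    # after release
]

def decompose(demo):
    """Split a demonstration into skill segments wherever the condition changes."""
    return [list(steps) for _, steps in groupby(demo, key=lambda s: s["gripper"])]

segments = decompose(demo)
print(len(segments))  # 3 segments: reach, transport, release
```

Each segment could then be used to train a separate skill policy, which is the benefit the abstract claims for decomposing long-term demonstrations.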
 
本間 天譲 (M, 2nd presentation) Robot Learning: 松原 崇充, 安本 慶一, 柴田 一騎, 鶴峯 義久, 佐々木 光, 角川 勇貴
Title: Sim-to-Real Reinforcement Learning for Neurochip-Driven Edge Robots
Abstract: Neurochips are computational devices suited to approximating Spiking Neural Networks (SNNs), and their low power consumption makes them promising controllers for edge-robot tasks where battery capacity is limited. To this end, reinforcement learning (RL) methods that learn SNN policies from interaction between the robot and the real environment have been studied. However, for edge-robot tasks with a huge number of possible state-action transition patterns, it is extremely difficult to collect sufficient learning samples in a realistic amount of time. In this study, we therefore extend the conventional method to a Sim-to-Real RL framework that solves this data-collection problem: by learning control policies from interaction between a simulated environment and the robot, sufficient learning samples can be collected in a short amount of time. As a preliminary verification, we evaluated the framework's performance in a large-scale maze simulation environment with a huge number of state-action transition patterns. In the future, we plan to extend the method to transfer the learned control policy to a more complex real-world environment.
Language of the presentation: Japanese
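The core Sim-to-Real idea in the abstract, collecting cheap learning samples in a simulated maze instead of on the real robot, can be sketched with a toy example. This uses plain tabular Q-learning on a tiny grid maze as a stand-in: the SNN policy, the neurochip, and the actual large-scale maze environment from the study are not modeled here.

```python
import random

random.seed(0)
N = 5                      # 5x5 grid maze; start (0, 0), goal (4, 4)
GOAL = (N - 1, N - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = {}                     # tabular action-value function

def step(s, a):
    """Clamp moves to the grid; small step cost, reward 1.0 at the goal."""
    ns = (min(max(s[0] + a[0], 0), N - 1), min(max(s[1] + a[1], 0), N - 1))
    return ns, (1.0 if ns == GOAL else -0.01)

def greedy(s):
    return max(range(4), key=lambda i: Q.get((s, i), 0.0))

# "Simulation" phase: many cheap rollouts, which would be too slow to
# collect on a battery-limited real robot.
for episode in range(500):
    s = (0, 0)
    for _ in range(50):
        ai = random.randrange(4) if random.random() < 0.1 else greedy(s)
        ns, r = step(s, ACTIONS[ai])
        best_next = max(Q.get((ns, i), 0.0) for i in range(4))
        q = Q.get((s, ai), 0.0)
        Q[(s, ai)] = q + 0.5 * (r + 0.9 * best_next - q)
        s = ns
        if s == GOAL:
            break

# "Deployment" phase: the real robot only executes the learned greedy policy.
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    s, _ = step(s, ACTIONS[greedy(s)])
    steps += 1
print(s, steps)
```

The design point mirrors the abstract: all exploration-heavy sample collection happens in the simulator, and only the finished policy runs in the (here, pretend) real environment.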