新任助教講演会(Lectures from New Assistant Professors)

日時(Datetime) 令和3年6月7日(月)3限 (13:30 -- 15:00), 2021/06/07, Monday, 3rd slot
場所(Location) L2
司会(Chair) 黄 銘 (Ming HUANG)

講演者(Presenter) 品川 政太朗(Seitaro Shinagawa), 知能コミュニケーション研究室 (Human Augmented Communication Lab.)
題目(Title) A Conversational System for Interactive Image Editing
概要(Abstract) Systems with natural language interfaces, such as conversational interfaces, are useful in human-system collaboration tasks. Interactive image editing is one such task and a promising application for non-expert users: when users want to create an image they have imagined, they can ask the system to create it, just as they would ask a skilled worker. This thesis presents an interactive image editing system based on neural network image generative models that proactively communicates with users to create the desired image. The interactive image editing task is challenging in two respects: 1) the system has to handle various editing requests expressed in natural language, and 2) the system has to handle the uncertainty of the generated images arising from the diversity of editing requests. For the first problem, we propose an interactive image editing framework based on neural network-based image generative models, which trains a model to automatically learn the relationship between changes in an image and the corresponding natural language editing requests. We demonstrate that our model can successfully edit a given image according to the editing requests. For the second problem, a naive solution is to show multiple images generated by multiple editing models and ask the user to confirm the most relevant image for each editing request; however, this strategy makes the interaction redundant. To solve this problem, we propose a proactive confirmation method that lets the system confirm with the user only when it is uncertain about which image best matches the editing request. We define an uncertainty score based on the entropy of the generated image to decide when the system should ask for confirmation. We demonstrate that our method requires fewer confirmations from the user while producing better image quality through the dialogues.
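The abstract above decides whether to ask the user for confirmation by thresholding an entropy-based uncertainty score over the generated image. The exact entropy computation and threshold are not specified in the abstract, so the following is only a minimal sketch of the idea, assuming a Shannon entropy over the pixel-intensity histogram and a hypothetical threshold value:

```python
import math
from collections import Counter

def pixel_entropy(pixels):
    """Shannon entropy (in bits) of the pixel-intensity histogram.

    A flat image concentrates all mass on one intensity (entropy 0);
    a noisy image spreads mass over many intensities (high entropy).
    """
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def should_confirm(pixels, threshold=4.0):
    """Ask the user to confirm only when uncertainty is high.

    The threshold of 4.0 bits is an illustrative assumption, not a
    value from the talk.
    """
    return pixel_entropy(pixels) > threshold

# A uniform image needs no confirmation; a high-entropy one does.
flat = [128] * 64            # entropy 0.0 bits
noisy = list(range(64))      # 64 distinct intensities -> 6.0 bits
```

In this sketch the system would generate candidate images, score each one, and fall back to a clarifying question only for high-entropy (uncertain) outputs, which is what keeps the number of confirmations low.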

講演者(Presenter) Md Delwar Hossain, サイバーレジリエンス構成学研究室 (Laboratory for Cyber Resilience)
題目(Title) Deep Learning-based Intrusion Detection Systems for In-vehicle CAN Bus Communication
概要(Abstract) The modern automobile is a complex piece of technology that uses the Controller Area Network (CAN) bus as the central system for managing communication between electronic control units (ECUs). Despite its central importance, the CAN bus does not support authentication or authorization mechanisms; that is, CAN messages are broadcast without basic security features. As a result, it is easy for attackers to launch attacks on the CAN bus network. Attackers can compromise the CAN bus system in several ways, including Denial of Service (DoS), fuzzing, and spoofing attacks. It is therefore imperative to devise methodologies that protect modern cars against such attacks. In this research, we propose a Long Short-Term Memory (LSTM)- and Convolutional Neural Network (CNN)-based Intrusion Detection System (IDS) to detect and mitigate CAN bus network attacks. We generate our own dataset by first extracting attack-free data from our experimental cars, then injecting attacks into them and collecting the resulting traffic. We use this dataset to train and test our model. Our experimental results demonstrate that our classifier is effective at detecting attacks on the CAN bus system.
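An LSTM/CNN intrusion detector of the kind described above consumes CAN traffic as fixed-length numeric sequences. The abstract does not describe the preprocessing, so the following is only a sketch of one plausible encoding, assuming standard CAN 2.0A frames (11-bit identifier, up to 8 data bytes) and window-based sequencing; the function names and normalization choices are illustrative, not from the talk:

```python
def frame_to_features(can_id, data):
    """Encode one CAN frame as a fixed-length feature vector.

    The 11-bit arbitration ID is scaled to [0, 1] by its maximum
    value 0x7FF, and the payload bytes are scaled by 255; short
    payloads are zero-padded to the maximum of 8 bytes.
    """
    payload = list(data) + [0] * (8 - len(data))
    return [can_id / 0x7FF] + [b / 255 for b in payload]

def frames_to_windows(frames, window=3):
    """Stack consecutive frame vectors into sliding windows, the kind
    of (sequence length x feature) input an LSTM or 1-D CNN classifier
    would be trained on.
    """
    vecs = [frame_to_features(cid, data) for cid, data in frames]
    return [vecs[i:i + window] for i in range(len(vecs) - window + 1)]

# Example: four captured frames become two overlapping 3-frame windows.
capture = [
    (0x100, b"\x01\x02"),
    (0x200, b"\xff"),
    (0x300, b""),
    (0x7FF, b"\x00" * 8),
]
windows = frames_to_windows(capture, window=3)
```

Attack injection (e.g. flooding high-priority IDs for DoS, or random payloads for fuzzing) would then be labeled at the window level so the classifier learns to separate attack windows from attack-free ones.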