Colloquium B Presentations

Date & Time: Monday, June 14, 3rd period (13:30-15:00)


Venue: L2

Chair: 藤本 大介
赤部 知也 M, 1st presentation, Computing Architecture (中島 康彦, 林 優一, TRAN THI HONG, 張 任遠)
title: Evaluation of Narrow Bit-Width Variation for Training Neural Networks
abstract: In deep-learning inference, narrow bit-width models have produced impressive results in handling massive volumes of input data. In the training process, by contrast, fast iterations with various parameters have been adopted, but these still require high-speed computation over large amounts of data. However, training with conventional IEEE 754 numerical formats has achieved only limited success. Here, we explore an optimal format that maintains the accuracy of training a deep neural network (DNN) model. On CIFAR-10, with a convolutional neural network (CNN) model that takes a slit detector as input, our results highlight that the bit width can be reduced from 32 bits to 15 bits.
language of the presentation: Japanese
Presentation title: Evaluation of Training Neural Networks with Narrow Bit Widths
Presentation abstract: In deep-learning inference, narrow bit-width models have produced impressive results in processing large amounts of input data. In the training process, on the other hand, fast iterations with various parameters are employed, but large amounts of data still have to be computed at high speed. However, training with conventional IEEE 754 numerical formats has not been very successful. Here, we explore an optimal format that maintains the accuracy of training a deep neural network (DNN) model. On CIFAR-10, with a convolutional neural network (CNN) model that takes a slit detector as input, we show that the bit width can be reduced from 32 bits to 15 bits.
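As a rough illustration of the reduced bit-width idea, the sketch below emulates a narrow floating-point format in software by rounding the FP32 mantissa; the 15-bit split mentioned in the comment (1 sign, 6 exponent, 8 mantissa bits) is only an illustrative assumption, not necessarily the format evaluated in this work.

```python
import numpy as np

# Emulate a narrow floating-point format by rounding the FP32 mantissa to
# `mant_bits` bits. Only the mantissa width is emulated here; the exponent
# range is left unchanged. A hypothetical 15-bit layout could be
# 1 sign + 6 exponent + 8 mantissa bits.
def quantize(x, mant_bits=8):
    x = np.asarray(x, dtype=np.float32)
    m, e = np.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mant_bits
    return np.ldexp(np.round(m * scale) / scale, e).astype(np.float32)

w = np.random.randn(4).astype(np.float32)
print(w)
print(quantize(w))                          # weights after mantissa rounding
```

Applying such a quantizer to weights, activations, and gradients during training is one way to estimate how far the bit width can be reduced before accuracy degrades.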
 
PHAN TRIDUNG M, 2nd presentation, Computing Architecture (中島 康彦, 林 優一, TRAN THI HONG, 張 任遠)
title: Design and Evaluation of High-performance SHA-3 System on Chip for Society 5.0
abstract: In this research, we develop a high-performance SHA-3 SoC by increasing the processing rate of both the SHA-3 core and the data flow around it. We enhance the processing speed of the SHA-3 core with a fully unrolled 24-round architecture. Furthermore, we restructure the system process to make the design more efficient, and Direct Memory Access (DMA) is used to shorten the data transfer time. Our system is implemented and evaluated on the Zynq UltraScale+ MPSoC ZCU102 FPGA; the results show that the maximum frequency is improved by 380% and the throughput is increased by 206.25%.
language of the presentation: English
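To show why full unrolling raises throughput, here is a back-of-the-envelope model that assumes the unrolled core absorbs one rate-sized SHA3-256 block per clock cycle in the ideal case; the clock frequency is a made-up example, not the figure measured on the ZCU102.

```python
# Idealized throughput model for a fully unrolled SHA-3 core: with all
# 24 Keccak-f rounds unrolled, one rate-sized block can be absorbed per
# clock cycle (ignoring I/O, padding, and DMA overhead).
RATE_BITS = 1088          # SHA3-256 rate r = 1600 - 2*256
CYCLES_PER_BLOCK = 1      # fully unrolled, ideal case
F_MAX_MHZ = 100           # hypothetical clock frequency

throughput_gbps = RATE_BITS * F_MAX_MHZ * 1e6 / CYCLES_PER_BLOCK / 1e9
print(f"{throughput_gbps:.1f} Gbit/s")      # 108.8 Gbit/s under these assumptions
```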
 
PHAN VAN DAI M, 2nd presentation, Computing Architecture (中島 康彦, 林 優一, TRAN THI HONG, 張 任遠)
title: High Performance Multicore SHA-256 Accelerator using Fully Parallel Computation and Local Memory
abstract: Integrity checking is indispensable in the current technological age. One of the most popular algorithms for integrity checking is SHA-256. To achieve high performance, many applications implement SHA-256 in hardware. However, the processing rate of SHA-256 is often low due to the large number of computations. Besides, the data must pass through many loop iterations to generate a hash, which requires transferring data multiple times between the accelerator and off-chip memory if no local memory is used. In this research, an ALU combining fully parallel computation and pipeline layers is proposed to increase the SHA-256 processing rate. Moreover, local memory is attached near the ALU to reduce off-chip memory access during the iterations of the computation. To achieve a high hash rate, we design an SoC-based multicore SHA-256 accelerator. As a result, our proposed accelerator improves throughput by more than 40% and achieves 2x higher hardware efficiency compared with the state-of-the-art design.
language of the presentation: English
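For reference, the SHA-256 message schedule below (written directly from the FIPS 180-4 definition) shows the word recurrence whose shifts, rotations, and additions dominate the computation; it is a plain software rendering, not the proposed parallel ALU.

```python
MASK32 = 0xFFFFFFFF

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK32

def small_sigma0(x):            # sigma_0 from FIPS 180-4
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def small_sigma1(x):            # sigma_1 from FIPS 180-4
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def message_schedule(block):
    """Expand the sixteen 32-bit words of one 512-bit block into W[0..63]."""
    w = list(block)
    for t in range(16, 64):
        w.append((small_sigma1(w[t - 2]) + w[t - 7]
                  + small_sigma0(w[t - 15]) + w[t - 16]) & MASK32)
    return w

print(len(message_schedule([0] * 16)))      # 64 schedule words per block
```

Keeping the message block and intermediate W words in local memory next to the ALU avoids re-fetching them from off-chip memory on every iteration.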
 
KAN YIRONG D, interim presentation, Computing Architecture (中島 康彦, 林 優一, TRAN THI HONG, 張 任遠)
title: A Multi-Grained Reconfigurable Accelerator for Approximate Computing
abstract: An elastic neural network is implemented on an FPGA to construct a multi-grained reconfigurable accelerator (MGRA). On the basis of a novel bisection neural network (BNN) topology, the entire network on the hardware is efficiently partitioned into arbitrary diamond-shaped pieces (referred to as "DiaNets"), which perform regressions to retrieve arbitrary approximate calculations in parallel. By organizing massive numbers of DiaNets, the entire network is reconfigurable at fine grain (the function of each DiaNet), medium grain (DiaNet features), and coarse grain (the organization of DiaNets) without redundancy. In this work, a proof-of-concept BNN with 8x8 processing elements (PEs) is implemented on an FPGA to run six calculation units (CUs) in parallel. Over various approximate computing tasks with one, two, and three operands, all calculations are retrieved with an inaccuracy of less than 3.1%. The maximum hardware utilization of a single CU is reduced to 1.7%, 17.9%, and 7.6% of that of a general arithmetic logic unit (ALU), of approximate computing units based on a domain-specific architecture (DSA), and of a neural network, respectively.
language of the presentation: English
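As a toy illustration of the DiaNet idea (a network piece that performs regression to retrieve an approximate calculation), the sketch below trains a small neural-network regressor to approximate the product of two operands; the network size, learning rate, and target operation are illustrative assumptions only.

```python
import numpy as np

# Toy regressor: approximate y = x1 * x2 on [0, 1] with one hidden layer.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4096, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros((1, 1))
lr = 0.05
for _ in range(5000):                       # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y                 # d(MSE)/d(pred), up to a constant
    gW2 = h.T @ err / len(X); gb2 = err.mean(0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

test = np.array([[0.3, 0.8]])
approx = np.tanh(test @ W1 + b1) @ W2 + b2
print(float(approx), 0.3 * 0.8)             # approximate vs. exact product
```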
 

Venue: L3

Chair: Tran Thi Hong
東山 恵祐 D, interim presentation, Augmented Human Communication (中村 哲, 渡辺 太郎, 須藤 克仁)
title: NLchain: Data augmentation method for Data-to-Text
abstract: The E2E dataset has been published for the MR (Meaning Representation)-to-Text task. However, it includes only 30K paired examples for training, far fewer than the millions of examples available for tasks such as machine translation. To tackle this problem, we have developed a method that generates paired data from unpaired data by training an MR-to-Text model and a Text-to-MR model simultaneously and iteratively.
language of the presentation: Japanese
Presentation title: NLchain: A Study of a Data Augmentation Method Based on Simultaneous Training of MR-to-Text and Text-to-MR
Presentation abstract: The E2E dataset has been released as a dataset for the MR (Meaning Representation)-to-Text task, but it does not provide abundant MR-Text paired data and is therefore insufficient for training a highly accurate text generation model. In this study, we train a semantic parsing model that infers the MR from text together with the text generation model, generate the corresponding Text or MR from unpaired data consisting of only MRs or only texts, and use the resulting augmented training data to improve the accuracy of the text generation model.
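A hedged sketch of the iterative loop described above; `train_mr2text`, `train_text2mr`, and the model callables they return are hypothetical stand-ins for real seq2seq training and decoding, not the actual NLchain implementation.

```python
# Sketch of NLchain-style data augmentation: train both directions on the
# paired data, label the unpaired data with the current models to form
# pseudo-pairs, and retrain on the enlarged set.
def augment(paired, unpaired_mr, unpaired_text,
            train_mr2text, train_text2mr, rounds=3):
    data = list(paired)                          # (mr, text) tuples
    for _ in range(rounds):
        mr2text = train_mr2text(data)            # returns a callable MR -> Text
        text2mr = train_text2mr(data)            # returns a callable Text -> MR
        pseudo = [(mr, mr2text(mr)) for mr in unpaired_mr]
        pseudo += [(text2mr(t), t) for t in unpaired_text]
        data = list(paired) + pseudo             # keep gold pairs, add pseudo-pairs
    return data
```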
 
笹田 大翔 M, 2nd presentation, Laboratory for Cyber Resilience (門林 雄基, 渡辺 太郎, 妙中 雄三, 宮本 大輔 (Visiting))
title: A Study on Differentially-Private Transformation of Text to Prevent Private-Attribute Inference
abstract: With the spread of social media such as Twitter and Facebook, an enormous amount of text is generated by users on a daily basis. However, since user-generated text may contain sensitive attribute information that can lead to personal identification, privacy-protecting processing is required before the text is provided to a third-party organization. Anonymization by suppression or generalization is one such privacy-protecting method, but it requires assumptions about the attacker's knowledge and cannot deal with unanticipated cases. In this study, we construct a deep generative model with differential privacy to transform text into privacy-protected text. However, differential privacy has the property that the amount of added noise increases when many unique values are included; when the text contains unique expressions such as names of people, places, and organizations, a large amount of noise must be added to the gradient when building the generative model. In order to reduce the amount of noise required for differential privacy, we generalize the named entities before training to create multiple duplicates and thereby satisfy k-anonymity in advance. By adding noise to the gradient during training, we build a text generation model that satisfies differential privacy and transform the text into pseudo-text in which individuals cannot be identified. In this presentation, we describe our evaluation experiments and discuss the consumption of the privacy budget.
language of the presentation: *** English or Japanese (choose one) ***
Presentation title: A Study on Differentially-Private Transformation of Text to Prevent Private-Attribute Inference
Presentation abstract: With the spread of social media such as Twitter and Facebook, an enormous amount of text is generated by users every day. However, because user-generated text may contain sensitive attribute information that can lead to identification of the individual, privacy-protecting processing that prevents identification is required when providing the text to a third-party organization. Anonymization by suppression or generalization is one such method, but it requires assumptions about the attacker's knowledge and cannot deal with cases outside those assumptions. In this study, we therefore construct a differentially private deep generative model and attempt privacy-protecting transformation of text. However, differential privacy has the property that the amount of added noise increases when many unique values are included, so for text containing unique named entities such as person, place, and organization names, a large amount of noise must be added to the gradient when building the generative model. To reduce the amount of noise required for differential privacy, we generalize the named entities before training to create multiple duplicates and satisfy k-anonymity in advance. By adding noise to the gradient during training, we build a text generation model that satisfies differential privacy and aim to transform text into pseudo-text in which individuals cannot be identified. In this presentation, we describe the evaluation experiments we conducted and discuss the consumption of the privacy budget.
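A minimal sketch of the gradient-noising step (DP-SGD style) on a logistic-regression example, assuming a per-example clipping norm C and noise multiplier sigma; the model and hyperparameters are illustrative and not the deep generative model used in the study.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    """One differentially private SGD step: clip each per-example gradient
    to L2 norm C, average, add Gaussian noise, then update the weights."""
    grads = []
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))        # logistic-regression example
        g = (p - yi) * xi                        # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / C)  # clip to norm at most C
        grads.append(g)
    noise = rng.normal(0.0, sigma * C / len(X), size=w.shape)
    return w - lr * (np.mean(grads, axis=0) + noise)

w = dp_sgd_step(np.zeros(3), np.eye(3), np.array([1.0, 0.0, 1.0]))
print(w)                                         # noisy, privacy-preserving update
```

Generalizing named entities before training plays a separate role: it reduces the number of unique values in the text, so less noise is needed for the same privacy budget.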
 
LIAO HUNG-YI M, 2nd presentation, Ubiquitous Computing Systems (安本 慶一, 杉本 謙二, 藤本 まなと, 松田 裕貴)
title: A Smart Lighting Control System to Improve Emotional Status
abstract: Recently, smart lighting systems have been attracting attention as one way to improve quality of life (QoL). Lighting color has a great impact on people's emotions; if the lighting color is not suitable, their emotional state may deteriorate and their work motivation and performance may decrease. In this study, we propose a novel smart lighting system that helps people live more comfortably and work more efficiently. The key idea of our proposed system is to dynamically change the lighting color according to the user's emotion. Specifically, our system uses reinforcement learning to find the best lighting for each user based on the user's responses. In each episode, the system sets a lighting color, and the user inputs their emotion when they see this lighting. After training for about 30 episodes, the system converges and provides the best color for the user. So far, we have implemented an initial prototype of the proposed system consisting of reinforcement-learning and lighting-control subsystems. We plan to evaluate the performance of our system in a smart home testbed.
language of the presentation: English
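A hedged sketch of the reinforcement-learning loop described above, modeled as an epsilon-greedy bandit over a few candidate lighting colors with the user's self-reported emotion as the reward; the color list, episode count, and `get_user_emotion` callback are illustrative assumptions, not the actual implementation.

```python
import random

COLORS = ["warm white", "cool white", "blue", "green", "orange"]

def choose_color(q, epsilon=0.2):
    if random.random() < epsilon:
        return random.choice(COLORS)             # explore a random color
    return max(COLORS, key=lambda c: q[c])       # exploit the best color so far

def train(get_user_emotion, episodes=30, alpha=0.3):
    q = {c: 0.0 for c in COLORS}
    for _ in range(episodes):
        color = choose_color(q)
        reward = get_user_emotion(color)         # user's emotion rating in [0, 1]
        q[color] += alpha * (reward - q[color])  # incremental value update
    return max(COLORS, key=lambda c: q[c])       # best color after training

# Simulated user who prefers warm white light; the real system would ask the user.
print(train(lambda color: 1.0 if color == "warm white" else 0.3))
```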