Seminar Presentation

Date: June 17 (Mon), 3rd Period (13:30-15:00)


Venue: L1

Chair: 大和 勇太
Tanvir Ahmed 1261027: D, mid-term presentation. Committee: 中島 康彦, 井上 美智, 姚 駿, 原 祐子
title: Selective Check of Data-Path for Permanent Fault Locating and Improvement in Robustness of Partial Redundancy by SDC Prediction
abstract: Fault tolerance plays an increasingly important role in covering the rising soft/hard error rates that accompany advances in process technology, and error detection is therefore a required function for maintaining execution correctness. Many highly dependable methods for covering permanent faults are over-designed, relying on very frequent checking because they have no awareness of the fault probability of the circuits used by pending executions. In this research, we introduce a metric for permanent defects, the operation defective probability (ODP), to quantitatively guide the placement of check operations at critical positions only. With this selective checking approach, we achieve near-100% dependability with about 53% fewer check operations than the ideal reliable method, reducing power consumption by 21.7% by avoiding the non-critical checks of the over-designed method. Partial redundancy, on the other hand, addresses errors from single event effects (SEEs) on critical data while leaving less important data unprotected as an energy trade-off. Under a low SEE rate the method provides cost-effective fault tolerance, but under a high fault rate many silent data corruptions (SDCs) may occur because of the incomplete fault coverage. This research proposes a system-level approach that additionally covers SDCs in partial redundancy through lightweight error prediction. Simulation results under a stress radiation test condition show that, at an average energy cost of 8%, we can reduce the SDC rate from 12% to 0.37% for the studied workloads.
language of the presentation: English
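The selective-checking idea can be illustrated with a toy simulation: operations whose defect probability exceeds a threshold receive a check, the rest run unchecked, and fault injection estimates how many faults escape. This is only a minimal sketch under assumed data; the threshold, the ODP values, and the detection model below are editorial simplifications, not the presenter's actual method.

    # Illustrative sketch only: ODP values, threshold, and detection model are assumptions.
    import random

    def place_checks(odp_per_op, threshold=0.005):
        """Place a check only on operations whose defect probability (ODP)
        exceeds the threshold; the rest run unchecked."""
        return {i for i, p in enumerate(odp_per_op) if p > threshold}

    def undetected_rate(odp_per_op, checked, trials=100000):
        """Estimate the fraction of injected permanent faults that escape
        detection because they hit an unchecked operation."""
        injected = detected = 0
        for _ in range(trials):
            for i, p in enumerate(odp_per_op):
                if random.random() < p:
                    injected += 1
                    if i in checked:
                        detected += 1
        return 1.0 - detected / injected if injected else 0.0

    if __name__ == "__main__":
        odp = [random.random() * 0.01 for _ in range(64)]  # hypothetical per-operation ODP values
        checked = place_checks(odp)
        print("checks placed: %d/%d" % (len(checked), len(odp)))
        print("undetected-fault rate: %.4f" % undetected_rate(odp, checked))

Raising the threshold trades detection coverage for fewer checks, which is the cost/dependability trade-off the abstract quantifies.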
 
徐 浩 1151213: M, 2nd presentation. Committee: 中島 康彦, 井上 美智, 姚 駿, 原 祐子
 
関 賀 1151211: M, 2nd presentation. Committee: 中島 康彦, 井上 美智, 姚 駿, 原 祐子
 

Venue: L2

Chair: Sakriani Sakti
Liu Xiaodong 1161205: D, mid-term presentation. Committee: 松本 裕治, 中村 哲, 新保 仁, Kevin Duh
title: A Novel Framework for Extracting Bilingual Dictionary from Comparable Corpus
abstract: We propose a flexible and effective framework for extracting a bilingual dictionary from comparable corpora. Our approach is based on a novel combination of topic modeling and word alignment techniques. Intuitively, our approach works by converting a comparable document-aligned corpus into a parallel topic-aligned corpus, then learning word alignments using co-occurrence statistics. This topic-aligned corpus is similar in structure to the sentence-aligned corpus frequently used in statistical machine translation, enabling us to exploit advances in word alignment research. Unlike much previous work, our framework does not require any language-specific knowledge for initialization. Furthermore, our framework attempts to handle polysemy by allowing multiple translation probability models for each word. On a large-scale Wikipedia corpus, we demonstrate that our framework reliably extracts high-precision translation pairs under a wide variety of comparable data conditions.
language of the presentation: English
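The alignment step over a topic-aligned corpus can be sketched roughly as follows: once source- and target-language words are grouped under shared topics, co-occurrence statistics over those groups score candidate translation pairs. The Dice scoring, the toy data, and all names below are illustrative assumptions, not the authors' actual models.

    # Illustrative sketch only: scoring function and data are assumptions.
    from collections import Counter
    from itertools import product

    def dice_translation_scores(topic_aligned_pairs):
        """topic_aligned_pairs: list of (source_words, target_words) bags
        that the topic model assigned to the same topic."""
        src_freq, tgt_freq, joint = Counter(), Counter(), Counter()
        for src_bag, tgt_bag in topic_aligned_pairs:
            src_set, tgt_set = set(src_bag), set(tgt_bag)
            src_freq.update(src_set)
            tgt_freq.update(tgt_set)
            joint.update(product(src_set, tgt_set))
        # Dice coefficient between a source word and a target word.
        return {(s, t): 2.0 * c / (src_freq[s] + tgt_freq[t])
                for (s, t), c in joint.items()}

    if __name__ == "__main__":
        # Toy topic-aligned bags (hypothetical data).
        pairs = [(["dog", "cat"], ["inu", "neko"]),
                 (["dog", "run"], ["inu", "hashiru"]),
                 (["cat", "fish"], ["neko", "sakana"])]
        best = sorted(dice_translation_scores(pairs).items(), key=lambda kv: -kv[1])[:5]
        for (s, t), score in best:
            print("%s -> %s: %.2f" % (s, t, score))

In the actual framework, a word-alignment model would replace the simple co-occurrence score; the sketch only shows why topic-aligned bags make such statistics usable.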
 
進藤 裕之 1261202: D, mid-term presentation. Committee: 松本 裕治, 中村 哲, 新保 仁, Kevin Duh
title: Statistical grammar induction based on a Bayesian approach and its application to parsing and language modeling
abstract: Statistical grammar induction is an important process for parsing and language modeling in natural language processing. We have proposed probabilistic models and inference algorithms based on a Bayesian approach for extracting tree insertion grammars and symbol-refined tree substitution grammars. In this talk, we focus on an algorithm for inferring these probabilistic grammars from treebank data. The Gibbs sampler is widely used for learning grammatical models; however, because it samples only one variable at a time, it tends to get trapped in local optima due to the strong dependencies among variables. We tackle this problem with a novel pseudo blocked subtree sampler. Our method collects subtrees of the same type in each iteration and updates them simultaneously from the approximate posterior distribution over grammar rules. Experimental results show that our method achieves better performance than conventional methods.
language of the presentation: Japanese
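The contrast with one-variable-at-a-time Gibbs sampling can be sketched as follows: sites of the same type are grouped, and each group is resampled together from a posterior computed once per block (hence "approximate"). The Dirichlet prior, the notion of site type, and the toy data below are editorial assumptions, not the presenter's actual model.

    # Illustrative sketch only: prior, site types, and data are assumptions.
    import random
    from collections import Counter, defaultdict

    ALPHA = 0.5                  # symmetric Dirichlet hyperparameter (assumed)
    RULES = ["r1", "r2", "r3"]   # toy grammar rules

    def posterior(counts):
        """Collapsed posterior over rules given the current counts."""
        weights = [counts[r] + ALPHA for r in RULES]
        total = sum(weights)
        return [w / total for w in weights]

    def pseudo_blocked_pass(assignments, site_types):
        """Group sites by type; remove a block's counts, compute one
        approximate posterior for the whole block, and resample every
        site in the block simultaneously from it."""
        by_type = defaultdict(list)
        for site, t in site_types.items():
            by_type[t].append(site)
        counts = Counter(assignments.values())
        for sites in by_type.values():
            for s in sites:                  # remove the block's counts
                counts[assignments[s]] -= 1
            probs = posterior(counts)        # one posterior per block (the approximation)
            for s in sites:                  # simultaneous update
                assignments[s] = random.choices(RULES, probs)[0]
                counts[assignments[s]] += 1
        return assignments

    if __name__ == "__main__":
        sites = {"s%d" % i: random.choice(["NP", "VP"]) for i in range(20)}  # toy site types
        state = {s: random.choice(RULES) for s in sites}
        for _ in range(10):
            state = pseudo_blocked_pass(state, sites)
        print(Counter(state.values()))

Because a whole block moves at once, strongly coupled sites can escape configurations where single-site updates would stall, which is the motivation stated in the abstract.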