Colloquium B Presentations

Date and Time: September 25 (Fri), Period 1 (9:20-10:50)

Venue: L1

Chair: 張 任遠
清光 将生 M, 2nd presentation, Large-Scale Systems Management, 笠原 正治, 藤川 和利, 笹部 昌弘
title: Analysis of Minimum Distribution Time of Two-Class Tit-for-Tat-based P2P File Distribution
abstract: Due to security concerns, (periodic) software updates over the Internet have become increasingly important. When a new update is released, many users tend to access the distribution servers simultaneously, which makes the servers a bottleneck. Several systems (e.g., Windows Update) have started applying the peer-to-peer (P2P) file distribution paradigm, in which users (i.e., peers) assist the file distribution. Since peers consume their access-link capacity to upload fragments of the file (i.e., pieces) to others, an appropriate incentive mechanism must be designed to realize such cooperative P2P file distribution. In this research, we focus on Tit-for-Tat (TFT)-based P2P file distribution, which encourages an equivalent amount of piece exchange between each pair of peers. In previous work, a linear program (LP) for minimizing the file distribution time of TFT-based P2P file distribution was developed based on a fluid model, and it succeeded in revealing the system performance of large-scale TFT-based P2P file distribution. In this research, to clarify the relationship between the system parameters (e.g., the number of peers and the upload-capacity distribution) and the minimum distribution time of TFT-based P2P file distribution, we newly derive explicit equations for the minimum file distribution time of two-class TFT-based P2P file distribution, under the assumption that the system contains only two classes of peers: those with high upload capacity and those with low upload capacity. Through numerical results, we verify the validity of the derived equations and compare the system performance of TFT-based P2P file distribution with that of traditional client-server and P2P systems.
language of the presentation: Japanese
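As background to the comparison with traditional client-server and fully cooperative P2P distribution mentioned in the abstract above, the following minimal Python sketch computes the classical lower bound on minimum distribution time (the maximum of the server-upload, slowest-download, and aggregate-upload constraints). It is an illustrative baseline only, not the two-class TFT-specific equations derived in this work, and the file size and capacity values are hypothetical.

def min_distribution_time(file_size, server_up, peer_ups, peer_downs):
    """Classical lower bound on minimum distribution time for fully
    cooperative (non-TFT) P2P file distribution.
    All quantities share consistent units, e.g. Mbit and Mbit/s."""
    n = len(peer_ups)
    t_server = file_size / server_up                             # server must upload the file at least once
    t_slowest = file_size / min(peer_downs)                      # slowest peer must download the whole file
    t_aggregate = n * file_size / (server_up + sum(peer_ups))    # total upload capacity must serve all peers
    return max(t_server, t_slowest, t_aggregate)

# Hypothetical two-class example: 30 "high" peers at 10 Mbit/s and 70 "low" peers at 2 Mbit/s upload.
ups = [10.0] * 30 + [2.0] * 70
downs = [100.0] * 100
print(min_distribution_time(file_size=800.0, server_up=50.0, peer_ups=ups, peer_downs=downs))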
 
馬場 柾也 M, 2nd presentation, Large-Scale Systems Management, 笠原 正治, 藤川 和利, 笹部 昌弘
title: Service Chaining for Energy-Efficient and Highly Available NFV Networks
abstract: Network functions virtualization (NFV) is a framework for realizing low-cost, flexible, and agile network services by decoupling network functions from dedicated hardware and executing them on generic hardware as virtual network functions (VNFs). In NFV networks, a network service can be realized as a sequence of VNFs called a service chain. Service chaining is the problem of finding a service path from an origin to a destination and allocating the physical resources (servers and communication links) appropriately so that the VNFs are executed at intermediate nodes in the requested order. There are many studies on service chaining with different objectives. In this research, we formulate the service chaining problem as an integer linear program (ILP) to achieve energy-efficient and highly available NFV networks. We solve the ILP with an existing solver and evaluate the fundamental characteristics of the proposed service chaining through numerical results.
language of the presentation: Japanese
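The following toy Python/PuLP sketch illustrates the general shape of a service-chaining ILP with an energy proxy (the number of active servers). It is not the authors' formulation: the node set, VNF chain, capacities, and the omission of routing and availability constraints are all assumptions made for illustration.

# Toy VNF-placement ILP solved with the open-source PuLP package (default CBC solver).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

nodes = ["n1", "n2", "n3"]              # candidate servers (hypothetical)
cpu_cap = {"n1": 4, "n2": 2, "n3": 4}   # CPU capacity of each server
chain = ["fw", "nat", "ids"]            # requested VNF chain (hypothetical)
cpu_req = {"fw": 2, "nat": 1, "ids": 2}  # CPU demand of each VNF

prob = LpProblem("toy_service_chaining", LpMinimize)
x = {(f, n): LpVariable(f"x_{f}_{n}", cat=LpBinary) for f in chain for n in nodes}  # VNF f placed on node n
y = {n: LpVariable(f"y_{n}", cat=LpBinary) for n in nodes}                          # node n switched on

prob += lpSum(y[n] for n in nodes)                   # energy proxy: minimize the number of active servers
for f in chain:                                      # each VNF is placed on exactly one node
    prob += lpSum(x[f, n] for n in nodes) == 1
for n in nodes:                                      # capacity constraint; placement forces the node on
    prob += lpSum(cpu_req[f] * x[f, n] for f in chain) <= cpu_cap[n] * y[n]

prob.solve()
print({f: next(n for n in nodes if value(x[f, n]) > 0.5) for f in chain})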
 
前田 健登 M, 2nd presentation, Software Design and Analysis, 飯田 元, 藤川 和利, 市川 昊平, 髙橋 慧智
title: Cloud Gaming System using Volunteer Computing
abstract: In recent years, cloud gaming services, which run games on cloud servers and enable play over networks, have been attracting attention. In cloud gaming, the player's device only renders the game video streamed from the cloud gaming server and sends the player's operations back to the server. For this reason, a high-quality gaming experience can be expected even on a low-performance device; on the other hand, the latency between player input and screen display is a major problem. In this research, we propose a cloud gaming framework that uses idle computers owned by nearby volunteers, instead of the conventional cloud gaming architecture centralized in data centers. This reduces the delay between the player and the data center and thereby aims to improve the playability of cloud gaming.
language of the presentation: Japanese
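The following minimal Python sketch illustrates one possible element of such a framework: selecting the volunteer host with the lowest measured round-trip time instead of a distant data-center server. It is an assumption-based illustration, not the proposed system; the endpoints and port are hypothetical.

import socket, time

def tcp_rtt(host, port, timeout=1.0):
    """Rough RTT estimate via TCP connect time; returns None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_game_host(candidates):
    """candidates: list of (host, port) pairs for volunteer machines and a cloud fallback."""
    measured = [(tcp_rtt(h, p), (h, p)) for h, p in candidates]
    reachable = [(rtt, hp) for rtt, hp in measured if rtt is not None]
    return min(reachable)[1] if reachable else None

# Hypothetical endpoints: two nearby volunteer machines and a remote data-center server.
print(pick_game_host([("192.168.1.20", 7000), ("192.168.1.35", 7000), ("cloud.example.net", 7000)]))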
 
中川 豊 D, interim presentation, Network Systems, 岡田 実, 藤川 和利, 東野 武史, Duong Quang Thang