Improving and Expanding Gaming Experiences Based on Cloud Gaming

Kar Long Chan (1561022)


Ideally, Cloud Gaming is a promising service for delivering novel gaming experiences, but in reality various technological barriers make it infeasible for some types of games, such as first-person shooters, which demand fast responsiveness. In addition, most existing cloud services, which stream encoded video sequences back to the client, struggle to keep up with the rising demand for graphics quality.

In this dissertation, we aim to address both of the above issues by providing solutions that could improve the user's experience of playing games delivered via Cloud Gaming, as well as extending Cloud Gaming to novel gaming experiences. First, to address the graphics-quality issue, we propose a Hybrid-Streaming System that combines the respective benefits of Cloud Gaming and traditional gaming to provide highly accessible gaming with close-to-original graphics quality. The system distributes rendering operations between the player's PC and the cloud server, utilising graphics processing power on both sides to achieve the desired improvement. Quantitative results show that the proposed system improves graphics quality over a traditional Cloud Gaming system while keeping network bandwidth consumption acceptable. Moreover, since rendering tasks are reasonably distributed, the server's workload is reduced. Second, we explore methods for making Cloud Gaming available to VR gaming, which has comparatively stricter latency requirements. To this end, we propose a Recurrent-Neural-Network-based head-motion prediction model that compensates for the inevitable latency of a Cloud Gaming environment and the random nature of players' head motion. Our results show that, assuming a typical Cloud Gaming environment with 150 ms of latency, the model not only predicts head motion well within that period but also fits different players' motion well.
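To illustrate the latency-compensation idea, the sketch below trains a simple recurrent predictor to forecast a head-yaw trace 150 ms ahead. This is not the dissertation's actual model or data: the echo-state-network architecture, the 50 ms sampling period, the reservoir size, and the synthetic sinusoidal yaw signal are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic head-yaw trace (radians), sampled every 50 ms, so predicting
# 3 steps ahead covers a 150 ms round-trip latency (an assumed setup).
STEP_MS, HORIZON = 50, 3
t = np.arange(2000) * (STEP_MS / 1000.0)
yaw = 0.6 * np.sin(0.9 * t) + 0.2 * np.sin(2.3 * t + 1.0)

# Echo-state network: a fixed random recurrent reservoir with a trained
# linear readout, standing in for the dissertation's RNN.
N = 100
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Feed the scalar yaw sequence through the reservoir, collect states."""
    x = np.zeros(N)
    states = []
    for v in u:
        x = np.tanh(W_in * v + W @ x)
        states.append(x.copy())
    return np.array(states)

X = run_reservoir(yaw)
washout = 100  # discard initial transient states

# Train the readout by least squares: state at time k -> yaw at k + HORIZON.
A = X[washout:-HORIZON]
b = yaw[washout + HORIZON:]
w_out, *_ = np.linalg.lstsq(A, b, rcond=None)

pred = X[:-HORIZON] @ w_out
err = np.abs(pred[washout:] - yaw[washout + HORIZON:])
print(f"mean 150 ms-ahead prediction error: {np.degrees(err.mean()):.3f} deg")
```

With the client predicting the pose the server's frame will be displayed at, the rendered view can match the player's actual head orientation despite the round trip; the same readout-training step could be repeated per player to fit individual motion styles.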