Skeleton-Guided GANs for Person Re-identification

Qiu Jie (1651210)


Person re-identification (ReID) plays a critical role in many video-surveillance applications. However, ReID faces many technical challenges that limit its performance and hinder its deployment in real systems. Pose variations caused by body movements and camera viewpoint changes are the most significant of these challenges. Although pose has been considered in prior ReID work, existing solutions use it only for alignment during feature extraction. In this paper, we explore a new direction and showcase a concrete example of it: pose-adaptive whole-sample generation for ReID. Our model, the Skeleton-Guided Deconvolutional Generative Adversarial Network (SG-DGAN), uses deconvolutional networks to generate coarse images and generative adversarial networks to refine image details for any given target pose. To preserve personal details under the target pose and make the generated images realistic, we introduce a Siamese training framework and a pre-definition loss in the refinement stage. Experiments on representative large-scale benchmarks, namely Market-1501, Duke, and CUHK03, demonstrate the superiority of the proposed SG-DGAN, and a preliminary transfer-learning experiment on VIPeR shows encouraging generalization ability.
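To make the two-stage design in the abstract concrete, the sketch below illustrates a coarse-to-fine, pose-conditioned generation pipeline in PyTorch: a deconvolutional stage produces a coarse image from a source image and a target-pose skeleton map, and a GAN refinement stage adds detail. All module names, layer counts, and tensor shapes here are illustrative assumptions for exposition, not the paper's actual SG-DGAN implementation (the Siamese framework and pre-definition loss are omitted).

```python
# Hypothetical sketch of the coarse-to-fine, pose-conditioned generation idea.
# Names, layer sizes, and shapes are assumptions, not the authors' code.
import torch
import torch.nn as nn

class CoarseGenerator(nn.Module):
    """Stage 1: encode (image, target skeleton), deconvolve a coarse image."""
    def __init__(self, in_ch=3 + 1):  # RGB image + 1-channel skeleton heatmap
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(  # deconvolution (transposed conv) layers
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, img, skeleton):
        return self.decoder(self.encoder(torch.cat([img, skeleton], dim=1)))

class Refiner(nn.Module):
    """Stage 2: GAN generator that sharpens the coarse output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, coarse, img):
        # Predict a residual so the refiner only has to add missing detail.
        return torch.clamp(coarse + self.net(torch.cat([coarse, img], dim=1)), -1, 1)

class Discriminator(nn.Module):
    """Adversary judging whether refined images look realistic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake map
        )

    def forward(self, x):
        return self.net(x)

# Toy forward pass: random tensors stand in for a ReID image and a rendered
# skeleton map of the target pose.
img = torch.randn(2, 3, 128, 64)       # source person image
skeleton = torch.randn(2, 1, 128, 64)  # target-pose skeleton heatmap
coarse = CoarseGenerator()(img, skeleton)
refined = Refiner()(coarse, img)
real_or_fake = Discriminator()(refined)
print(coarse.shape, refined.shape, real_or_fake.shape)
```

The residual connection in the refiner reflects the coarse-then-refine split described in the abstract: the first stage fixes global pose and layout, so the second stage can focus on texture and identity detail rather than regenerating the whole image.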