Seminar I Lectures

Date & Time: Monday, October 5, 2009, 3rd period (13:30 -- 15:00)
Venue: L1

Speaker 1: Edmond Boyer (Associate Professor, INRIA Rhone-Alpes, Grenoble, France)
Title: Multi-camera 3D modeling for 3D video
Abstract: The platform called Grimage combines multi-camera 3D modeling, physical simulation, and parallel execution for a new immersive experience. Put any object into the interaction space, and it is instantaneously modeled in 3D and injected into a virtual world populated with solid and soft objects. Push them, catch them, and squeeze them. In this way a 3D video can be generated that provides total visibility over anything you film. If you put your hand in front of your face, you can look at it from any angle by turning your hand. 3D video gives you this total visibility. It's like having a million video cameras around whatever you're filming. You can look from any angle, you can look in real time while you are filming, and of course you can play it back and view it from a different angle later.
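To give a flavour of what silhouette-based multi-camera 3D modeling involves, the sketch below carves a voxel grid against the foreground silhouettes seen by several calibrated cameras (a visual-hull computation). It is a minimal, hedged illustration only: the function name, NumPy usage, and camera setup are assumptions for the example, not a description of the Grimage pipeline itself.

    import numpy as np

    def carve_visual_hull(silhouettes, projections, grid_pts):
        """Keep the voxels whose projections fall inside every camera's silhouette.

        silhouettes: list of HxW boolean foreground masks, one per camera
        projections: list of 3x4 camera projection matrices (same order)
        grid_pts:    Nx3 array of candidate voxel centres in world coordinates
        """
        hom = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # N x 4 homogeneous points
        inside = np.ones(len(grid_pts), dtype=bool)
        for mask, P in zip(silhouettes, projections):
            uvw = hom @ P.T                                       # project into this camera: N x 3
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)       # pixel column
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)       # pixel row
            h, w = mask.shape
            valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)       # projects inside the image
            inside &= valid                                       # out-of-view voxels are carved away
            inside[valid] &= mask[v[valid], u[valid]]             # keep only foreground hits
        return grid_pts[inside]                                   # voxels on or inside the visual hull

In a real-time system of the kind described in the abstract, each camera stream would supply a fresh silhouette every frame, and the resulting 3D model would then be handed to the physics simulation that drives the interaction.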
Speaker bio: Edmond Boyer is an associate professor of computer science at Grenoble Universites (France), where he conducts research on computer vision with the PERCEPTION team at INRIA Rhone-Alpes. His research interests include 3D modeling from images and videos, motion capture, and motion recognition, among other topics. Before Grenoble, he spent 1998 in Cambridge (UK), working with the SVR group in the Engineering Department. From 1994 to 1998 he was in Nancy (France), where he obtained a PhD in 1996 with the ISA team at INRIA Lorraine. He was a member of the organizing committee of ECCV 2008, and has regularly served as an area chair and PC member for ICCV, CVPR, ECCV, and BMVC.

Speaker 2: Ming-Hsuan Yang (Assistant Professor, UC Merced, California, USA)
Title: Toward Robust Online Visual Tracking
Abstract: Human beings are capable of tracking objects in dynamic scenes effortlessly, yet visual tracking remains a challenging problem in computer vision. The main reason can be attributed to the difficulty of handling appearance variation of a target object. Intrinsic appearance changes include out-of-plane motion and shape deformation of the target object, whereas extrinsic factors such as illumination change, camera motion, camera viewpoint, and occlusion inevitably cause large appearance variation.
Speaker bio: Ming-Hsuan Yang is an assistant professor of Electrical Engineering and Computer Science at the University of California, Merced. He received his PhD degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC). Prior to joining UC Merced, he held positions at the Honda Research Institute in Mountain View, California, and in Computer Science and Information Engineering at National Taiwan University. While at UIUC, he was awarded the Ray Ozzie Fellowship, given to outstanding graduate students in Computer Science. He has co-authored the book Face Detection and Gesture Recognition for Human-Computer Interaction (Kluwer Academic Publishers) and co-edited a special issue of Computer Vision and Image Understanding on face recognition. He serves as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and of Image and Vision Computing, served as an area chair for CVPR 2008, CVPR 2009, and ACCV 2009, and is a publication co-chair for CVPR 2010. His research interests include computer vision, pattern recognition, robotics, cognitive science, and machine learning.

Speaker 3: Yaser Sheikh (Assistant Professor, Carnegie Mellon University, Pittsburgh, PA, USA)
Title: Linear Models for Dynamic Scene Reconstruction from Monocular Views
Abstract: 3D reconstruction has been studied in the computer vision literature almost from its inception. For monocular video, the majority of this research has focused on reconstructing static scenes, yet typical real scenes are dynamic --- people walk around, trees sway in the wind, and cars drive around. In this talk, I discuss recent research into using linear deformation models for dynamic scene reconstruction and present a new approach that uses a dual deformation space, which we call the trajectory space. I will also highlight the principal unsolved problems in reconstructing dynamic scenes to encourage future research in this area.
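As a rough illustration of the trajectory-space idea, the sketch below represents a single point's 3D trajectory over F frames as a linear combination of K low-frequency DCT basis vectors, so the point is described by 3K coefficients instead of 3F coordinates. The choice of a DCT basis and the sizes used are illustrative assumptions for this example, not necessarily the exact formulation presented in the talk.

    import numpy as np

    F, K = 100, 10   # frames observed, trajectory basis vectors retained (illustrative sizes)

    # Orthonormal DCT trajectory basis Theta, shape (F, K): column k samples a cosine
    # of increasing temporal frequency over the F frames.
    t = np.arange(F)
    Theta = np.stack(
        [np.sqrt((1.0 if k == 0 else 2.0) / F)
         * np.cos(np.pi * (2 * t + 1) * k / (2 * F)) for k in range(K)],
        axis=1)

    # A smooth synthetic 3D trajectory (x, y, z per frame), shape (F, 3).
    traj = np.stack([np.sin(0.05 * t), 0.01 * t, np.cos(0.03 * t)], axis=1)

    # In trajectory space the point is described by K coefficients per axis
    # (3K unknowns instead of 3F), obtained here by projecting onto the basis.
    coeffs = Theta.T @ traj          # shape (K, 3)
    recon = Theta @ coeffs           # low-dimensional reconstruction, shape (F, 3)

    print("RMS reconstruction error:", np.sqrt(np.mean((traj - recon) ** 2)))

Running the snippet shows that a smooth trajectory is captured accurately by a handful of coefficients, which is what makes such low-dimensional linear models attractive for reconstructing dynamic scenes from a single view.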
Speaker bio: Yaser Sheikh is an assistant research professor in the Robotics Institute at Carnegie Mellon University. His research interests are in computer vision, primarily in analyzing dynamic scenes, including scene reconstruction, the geometry of mobile camera networks, and nonrigid motion estimation, with a particular focus on analyzing human activity. He obtained his doctoral degree from the University of Central Florida in May 2006, and from May 2006 to May 2008 he was a postdoctoral fellow at Carnegie Mellon University. He is a recipient of the Hillman award for excellence in computer science research.
