Third Workshop on Computer Vision for AR/VR

June 17, 2019, Long Beach, CA

Organized in conjunction with CVPR 2019



Program

Long Beach Convention Center / Main Program: Room 202C / Posters and Demos: Pacific Ballroom and 2F Lobby

 

Morning Session

  • 8:00-8:30 AM Breakfast

  • 8:30-8:35 AM Welcome and Opening Remarks

  • 8:35-9:10 AM Spotlight Talks (2 mins. each, see details below) 

Augmented Reality

  • 10:25-10:35 AM Break

  • 12:20-1:00 PM Lunch

Afternoon Session

  • 1:00-1:50 PM Posters and Demos @ Pacific Ballroom and 2F Lobby (see details below)

Virtual Reality

  • 3:15-3:25 PM Break

 

Spotlights, Posters, and Demos

 

Spotlight Talks

8:35-9:10 AM (Room 202C), 2 minutes each

Presentation order:

  1. BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs

  2. Towards Scalable Sharing of Immersive Live Telepresence Experiences Beyond Room-scale based on Efficient Real-time 3D Reconstruction and Streaming

  3. Lightweight Real-time Makeup Try-on in Mobile Browsers with Tiny CNN Models for Facial Tracking

  4. Human Hair Segmentation In The Wild Using Deep Shape Prior

  5. The Advantages of a Joint Direct and Indirect VSLAM in AR

  6. Efficient 2.5D Hand Pose Estimation via Auxiliary Multi-Task Training for Embedded Devices

  7. Practical 3D Photography

  8. Real-time Hair Segmentation and Recoloring on Mobile GPUs

  9. HoloPose: Real Time Holistic 3D Human Reconstruction In-The-Wild

Posters and Demos

1:00 - 1:50 PM (Pacific Ballroom and 2F Lobby)

  • Poster boards #236 and onwards in the Pacific Ballroom, see poster numbers

  • All posters that do not have a demo will be in the Pacific Ballroom

  • The demos will be split between the Pacific Ballroom and 2F Lobby

  • Each demo will have a poster board for its poster

 

Extended Abstracts

 
 

Speakers

 

Marc Pollefeys (Microsoft)

Marc Pollefeys is Director of Science leading a team of scientists and engineers to develop advanced perception capabilities for HoloLens.  He is also a Professor of Computer Science at ETH Zurich and was elected Fellow of the IEEE in 2012.

He is best known for his work in 3D computer vision, having been the first to develop a software pipeline to automatically turn photographs into 3D models, but also works on robotics, graphics and machine learning problems.  Other noteworthy projects he worked on with collaborators at UNC Chapel Hill and ETH Zurich are real-time 3D scanning with mobile devices, a real-time pipeline for 3D reconstruction of cities from vehicle-mounted cameras, camera-based self-driving cars and the first fully autonomous vision-based drone.  Most recently his academic research has focused on combining 3D reconstruction with semantic scene understanding.  He has published over 250 peer-reviewed publications and holds several patents.  His lab at ETH Zurich also developed the PixHawk auto-pilot which can be found in over half a million drones and he has co-founded several computer vision start-ups.

 

 

Jean-Yves Bouguet (Magic Leap)

Jean-Yves Bouguet has been leading the computer vision group at Magic Leap since 2013. He heads a team of researchers and engineers developing all of the core computer vision components of the company's Mixed Reality device, including visual inertial odometry, environment reconstruction and mapping, object detection, eye tracking, and sensor calibration technologies. The company's first product, the Magic Leap One Creator Edition, is now available for purchase.

From 2007 to 2013, he was a member of the founding Street View team at Google, where he designed, implemented, and maintained the data processing pipeline for Street View. He was in charge of processing all of the data collected by the Street View vehicles driving around the world.

From 1999 to 2007, he worked at Intel Research, where he conducted research on several projects in computer vision, machine learning, and graphics. In particular, he was one of the founding members of the Open Source Computer Vision Library (OpenCV), which includes his widely used Camera Calibration Toolkit.

Jean-Yves Bouguet holds a Ph.D. from the California Institute of Technology (Caltech) where he conducted research on several topics of Geometric Computer Vision under the supervision of Prof. Pietro Perona.

He holds several patents in the field of machine vision, including a patent for a 3D scanning system based on shadows cast by a pencil, and a patent on hardware-accelerated visualization of surface light fields.

 

 

Antonio Torralba (MIT)

Antonio Torralba is a Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), the MIT director of the MIT-IBM Watson AI Lab, and the inaugural director of the MIT Quest for Intelligence, an MIT campus-wide initiative to discover the foundations of intelligence.  He received his degree in telecommunications engineering from Telecom BCN, Spain, in 1994 and his Ph.D. degree in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. From 2000 to 2005, he was a postdoctoral researcher at the Brain and Cognitive Sciences Department and the Computer Science and Artificial Intelligence Laboratory at MIT, where he is now a professor. Prof. Torralba is an Associate Editor of the International Journal of Computer Vision, and served as program chair for the Computer Vision and Pattern Recognition conference in 2015. He received the 2008 National Science Foundation (NSF) CAREER award, the best student paper award at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in 2009, and the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition (IAPR). In 2017, he received the Frank Quick Faculty Research Innovation Fellowship and the Louis D. Smullin ('39) Award for Teaching Excellence.

 

 

Shahram Izadi (Google)

Dr. Shahram Izadi is a director at Google within the AR/VR division. Prior to Google he was CTO and co-founder of perceptive.io, a Bay-Area startup specializing in real-time computer vision and machine learning techniques for AR/VR. His company was acquired by Alphabet/Google in 2017. Previously he was a partner and research manager at Microsoft Research (both Redmond US and Cambridge UK) for 11 years, where he led the interactive 3D technologies (I3D) group. His research focuses on building new sensing technologies and systems for AR/VR. Typically, this meant developing new sensing hardware (depth cameras and imaging sensors) alongside practical computer-vision or machine-learning algorithms and techniques for these technologies. He was at Xerox PARC in 2000-2002, and obtained his PhD from the Mixed Reality Lab at the University of Nottingham, UK, in 2004. In 2009, he was named to the TR35, MIT Technology Review's annual list of the world's top 35 innovators under the age of 35. He has published over 120 research papers (see DBLP & Google Scholar), and more than 120 patents. His work has led to products and projects such as the Kinect for Windows, Kinect Fusion, Microsoft Touch Mouse, Microsoft Surface Sensor-in-Pixel and most recently HoloLens and Holoportation.

 

 

Richard Newcombe (Facebook)

Richard Newcombe is Director of Machine Perception at Facebook Reality Labs and an Affiliate Assistant Professor at the University of Washington. His team at Facebook Reality Labs is developing a new generation of machine perception technologies, devices, and infrastructure to unlock the potential of Augmented Reality and contextualized AI. He received his PhD from Imperial College London, held a Postdoctoral Fellowship at the University of Washington, and went on to co-found Surreal Vision Ltd., which was acquired by Facebook in 2015. His original research introduced the dense SLAM paradigm, demonstrated in KinectFusion and DynamicFusion, which has influenced a generation of real-time and interactive systems being developed in the emerging fields of AR/VR and robotics. He received the best paper award at ISMAR 2011, the best demo award at ICCV 2011, the best paper award at CVPR 2015, and the best robotic vision paper award at ICRA 2017. His interests span sub-disciplines across machine perception and machine learning, from hardware-software sensor device design to computer vision algorithms and novel infrastructure research.

 

 

Matt Miesnieks (6D.ai)

Matt is renowned as one of the AR industry's thought leaders through his influential blog posts. He is co-founder & CEO of 6D.ai, the leading AR Cloud platform and his third AR startup. He also helped found SuperVentures (investing in AR), built AR system prototypes at Samsung, and had a long executive and technical career in mobile software infrastructure before jumping into AR infrastructure back in 2009.

 

 

Gordon Wetzstein (Stanford)

Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He is the leader of the Stanford Computational Imaging Lab, an interdisciplinary research group focused on advancing imaging, microscopy, and display systems. At the intersection of computer graphics, machine vision, optics, scientific computing, and perception, Prof. Wetzstein's research has a wide range of applications in next-generation consumer electronics, scientific imaging, human-computer interaction, remote sensing, and many other areas. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist in the Camera Culture Group at the MIT Media Lab. He received a Ph.D. in Computer Science from the University of British Columbia in 2011 and graduated with honors from Bauhaus-Universität Weimar, Germany, before that. He is the recipient of an Alain Fournier Ph.D. Dissertation Award, an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.

 

 

Yaser Sheikh (Facebook)

Yaser Sheikh is an Associate Professor at the Robotics Institute, Carnegie Mellon University. He also directs the Facebook Reality Lab in Pittsburgh, which is devoted to achieving photorealistic social interactions in AR and VR. His research broadly focuses on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. With colleagues and students, he has won the Honda Initiation Award (2010), Popular Science’s "Best of What’s New" Award, best student paper award at CVPR (2018), best paper awards at WACV (2012), SAP (2012), SCA (2010), ICCV THEMIS (2009), best demo award at ECCV (2016), and he received the Hillman Fellowship for Excellence in Computer Science Research (2004). Yaser has served as a senior committee member at leading conferences in computer vision, computer graphics, and robotics including SIGGRAPH (2013, 2014), CVPR (2014, 2015, 2018), ICRA (2014, 2016), ICCP (2011), and served as an Associate Editor of CVIU. His research has been featured by various media outlets including The New York Times, BBC, MSNBC, Popular Science, and in technology media such as WIRED, The Verge, and New Scientist.

 

 

Christian Theobalt (Max-Planck-Institute)

Christian Theobalt is a Professor of Computer Science and the head of the research group "Graphics, Vision, & Video" at the Max-Planck-Institute (MPI) for Informatics, Saarbrücken, Germany. He is also a Professor of Computer Science at Saarland University, Germany.

In his research he looks at algorithmic problems that lie at the intersection of Computer Graphics, Computer Vision and machine learning, such as: static and dynamic 3D scene reconstruction, marker-less motion and performance capture, virtual and augmented reality, computer animation, appearance and reflectance modelling, intrinsic video and inverse rendering, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, as well as image- and physically-based rendering. He is also interested in using reconstruction techniques for human computer interaction.

For his work, he has received several awards, including the Otto Hahn Medal of the Max-Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, the German Pattern Recognition Award in 2012, and the Karl Heinz Beckurts Award in 2017. He received an ERC Starting Grant in 2013 and an ERC Consolidator Grant in 2017. Christian Theobalt is also a co-founder of an award-winning spin-off company from his group (www.thecaptury.com) that is commercializing one of the most advanced solutions for marker-less motion and performance capture.

 

 

Andrew Fitzgibbon (Microsoft)

Andrew Fitzgibbon is a scientist with HoloLens at Microsoft, Cambridge, UK. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou”, to Kinect for Xbox 360, and to the hand tracking in HoloLens 2, but his interests are broad, spanning computer vision, graphics, machine learning, and occasionally a little neuroscience. His recent work has included new numerical algorithms for Eigen, and compilation of functional programs to runtimes without garbage collection.

He has published numerous highly-cited papers, and received many awards for his work, including ten “best paper” prizes at various venues, the Silver medal of the Royal Academy of Engineering, and the BCS Roger Needham award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.

 

 

Matthias Niessner (TUM)

Prof. Dr. Matthias Nießner heads the Visual Computing Lab at the Technical University of Munich (TUM). He obtained his PhD from the University of Erlangen-Nuremberg in 2013 and was a Visiting Assistant Professor at Stanford University from 2013 to 2017. Since 2017 he has been a Professor at TUM, focusing on cutting-edge research at the intersection of computer vision, graphics, and machine learning. He is particularly interested in novel techniques for 3D reconstruction, semantic 3D scene understanding, and video editing. In addition to his academic career, Prof. Nießner is a co-founder and director of Synthesia Inc., a startup empowering storytellers with AI. Prof. Nießner is a TUM-IAS Rudolph Moessbauer Fellow and has received the Google Faculty Award for Machine Perception (2017), the Nvidia Professor Partnership Award (2018), as well as an ERC Starting Grant (2018).

 

 

Paul Debevec (Google)

Paul Debevec is Senior Staff Engineer at Google Daydream and Adjunct Research Professor of Computer Science at the University of Southern California, working within the Vision and Graphics Laboratory at the USC Institute for Creative Technologies. Debevec's computer graphics research has been recognized with ACM SIGGRAPH's first Significant New Researcher Award in 2001 for "Creative and Innovative Work in the Field of Image-Based Modeling and Rendering", a Scientific and Engineering Academy Award® in 2010 for "the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures", the SMPTE Progress Medal in 2017 in recognition of "pioneering techniques for illuminating computer-generated objects based on measurement of real-world illumination and their effective commercial application in Hollywood films", and an Academy Technical Achievement Award in 2019 "for the invention of the Light Stage X Polarized Spherical Gradient Illumination facial appearance capture method".

 

Workshop Sponsors