StyleGAN-Based Portrait Image and Video Style Transfer

Speaker

Shuai Yang received the B.S. and Ph.D. degrees (Hons.) in computer science from Peking University, Beijing, China, in 2015 and 2020, respectively. He is currently a postdoctoral research fellow with the S-Lab, Nanyang Technological University. Dr. Yang was a Visiting Scholar with Texas A&M University from Sep. 2018 to Sep. 2019, and a Visiting Student with the National Institute of Informatics, Japan, from Mar. 2017 to Aug. 2017. He received the Excellent Doctoral Dissertation Award of the China Society of Image and Graphics in 2020 and the Excellent Doctoral Dissertation Award of Peking University in 2020. He also received the IEEE ICME 2020 Best Paper Award and the IEEE MMSP 2015 Top 10% Paper Award. His current research interests include image stylization and image translation.

Homepage: https://williamyang1991.github.io/.

Abstract

Portrait style transfer aims to render artistic portraits from real faces; such portraits are ubiquitous in daily life as well as in creative industries, in the form of artworks, social media avatars, movies, and entertainment advertising. In this talk, I will introduce our two portrait style transfer models, DualStyleGAN and VToonify. We first propose DualStyleGAN, a novel architecture that characterizes and controls the intrinsic and extrinsic styles for exemplar-based high-resolution portrait style transfer, requiring only a few hundred style examples. Then, building on DualStyleGAN, we propose the VToonify framework for controllable high-resolution portrait video style transfer, supporting unaligned faces and variable video sizes.
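
To make the dual-style idea concrete, the following is a minimal PyTorch sketch of the general dual-path conditioning that DualStyleGAN is built around: an intrinsic style code inverted from the real face, an extrinsic style code taken from an artistic exemplar, and per-layer weights that control how strongly each generator layer is stylized. The Encoder and Generator here are toy stand-ins, and the simple per-layer interpolation is only illustrative; it is not the actual DualStyleGAN API or architecture, which injects the extrinsic style through a dedicated style path rather than by naive latent blending.

# Minimal sketch of dual-path style conditioning (illustrative only).
# Encoder/Generator are toy stand-ins, NOT the real DualStyleGAN modules.

import torch
import torch.nn as nn

N_LAYERS, DIM = 18, 512  # W+ latent layout of StyleGAN2 at 1024x1024

class Encoder(nn.Module):
    """Toy stand-in for a GAN-inversion encoder (e.g. pSp) that maps an
    image to a W+ latent code of shape (N_LAYERS, DIM)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, N_LAYERS * DIM)

    def forward(self, img):
        h = self.conv(img).mean(dim=(2, 3))          # global average pool
        return self.fc(h).view(-1, N_LAYERS, DIM)

class Generator(nn.Module):
    """Toy stand-in for a StyleGAN2 generator consuming per-layer styles."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(DIM, 3 * 64 * 64)

    def forward(self, styles):
        h = styles.mean(dim=1)                       # collapse layer axis
        return self.fc(h).view(-1, 3, 64, 64)

encoder, generator = Encoder(), Generator()

face = torch.randn(1, 3, 256, 256)       # real portrait photo
exemplar = torch.randn(1, 3, 256, 256)   # artistic style exemplar

w_intrinsic = encoder(face)       # identity/content code of the real face
w_extrinsic = encoder(exemplar)   # artistic code of the style exemplar

# Per-layer weights control the degree and aspect of stylization:
# early layers govern geometry/structure, late layers color/texture.
weights = torch.zeros(N_LAYERS, 1)
weights[7:] = 0.75                # stylize texture, keep coarse structure

styles = (1 - weights) * w_intrinsic + weights * w_extrinsic
stylized = generator(styles)
print(stylized.shape)             # torch.Size([1, 3, 64, 64])

In the actual systems, such per-layer weights are what expose the degree and the aspects (color versus structure) of stylization to the user, and VToonify extends the approach to video by adapting the StyleGAN backbone into a fully convolutional pipeline so that whole, unaligned frames of variable size can be processed.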

Video