Multi-View Face Synthesis via Progressive Face Flow

Yangyang Xu, Xuemiao Xu, Jianbo Jiao, Keke Li, Cheng Xu, Shengfeng He

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Existing GAN-based multi-view face synthesis methods rely heavily on 'creating' faces, and thus they struggle to reproduce faithful facial texture and fail to preserve identity under large-angle rotations. In this paper, we combat this problem by dividing the challenging large-angle face synthesis into a series of easy small-angle rotations, each guided by a face flow to maintain faithful facial details. In particular, we propose a Face Flow-guided Generative Adversarial Network (FFlowGAN) that is specifically trained for small-angle synthesis. The proposed network consists of two modules: a face flow module that computes a dense correspondence between the input and target faces, and a face synthesis module that uses this correspondence as strong guidance to emphasize salient facial texture. We apply FFlowGAN multiple times to progressively synthesize different views, so that facial features are propagated to the target view from the very beginning. All these executions are cascaded and trained end-to-end with a unified back-propagation, ensuring that each intermediate step contributes to the final result. Extensive experiments demonstrate that the proposed divide-and-conquer strategy is effective, and our method outperforms the state of the art on four benchmark datasets both qualitatively and quantitatively.

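The progressive scheme described in the abstract lends itself to a compact cascade: a large rotation is split into several small-angle steps, and each step runs a flow-prediction stage followed by a synthesis stage whose output feeds the next step. The sketch below is a minimal PyTorch illustration of that idea only; the module names (FaceFlowModule, FaceSynthesisModule, ProgressiveSynthesizer), the layer choices, the pose_delta conditioning, and the omission of the discriminator and training losses are all assumptions made for illustration, not the paper's actual FFlowGAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FaceFlowModule(nn.Module):
    """Predicts a dense 2-D flow field from the current face to the next small-angle view."""

    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.flow_head = nn.Conv2d(channels, 2, 3, padding=1)  # (dx, dy) per pixel

    def forward(self, face, pose_delta):
        # Broadcast the per-sample pose change to a spatial map and condition the encoder on it.
        n, _, h, w = face.shape
        pose_map = pose_delta.view(-1, 1, 1, 1).expand(n, 1, h, w)
        return self.flow_head(self.encoder(torch.cat([face, pose_map], dim=1)))


def warp(face, flow):
    """Warps the face with the predicted flow via bilinear grid sampling."""
    n, _, h, w = face.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1).to(face)
    return F.grid_sample(face, base + flow.permute(0, 2, 3, 1), align_corners=True)


class FaceSynthesisModule(nn.Module):
    """Refines the flow-warped face into the synthesized small-angle view."""

    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, warped_face):
        return self.net(warped_face)


class ProgressiveSynthesizer(nn.Module):
    """Splits a large rotation into small steps; each step is flow prediction + synthesis."""

    def __init__(self, num_steps=4):
        super().__init__()
        self.num_steps = num_steps
        self.flow_module = FaceFlowModule()
        self.synth_module = FaceSynthesisModule()

    def forward(self, face, total_rotation):
        # total_rotation: per-sample yaw change, shape (N,), split evenly across the cascade.
        step = total_rotation / self.num_steps
        intermediate_views = []
        for _ in range(self.num_steps):
            flow = self.flow_module(face, step)
            face = self.synth_module(warp(face, flow))
            intermediate_views.append(face)  # kept so every step can receive supervision
        return intermediate_views  # the last entry is the final target view


# Usage: rotate a batch of two 128x128 faces by 60 degrees in four 15-degree steps.
model = ProgressiveSynthesizer(num_steps=4)
views = model(torch.randn(2, 3, 128, 128), torch.full((2,), 60.0))
```

Returning every intermediate view mirrors the abstract's point that the cascade is trained end-to-end with a unified back-propagation: losses can be attached to each step's output so that every small-angle stage contributes to the final result.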
Original language: English
Article number: 9466401
Pages (from-to): 6024-6035
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 30
Early online date: 28 Jun 2021
DOIs
Publication status: Published - 2 Jul 2021

Bibliographical note

Funding Information:
Manuscript received June 23, 2020; revised January 14, 2021; accepted June 9, 2021. Date of publication June 28, 2021; date of current version July 2, 2021. This work was supported in part by the Key-Area Research and Development Program of Guangdong Province, China, under Grant 2020B010166003, Grant 2020B010165004, and Grant 2018B010107003; in part by the National Natural Science Foundation of China under Grant 61772206, Grant U1611461, Grant 61472145, and Grant 61972162; in part by the Guangdong International Science and Technology Cooperation Project under Grant 2021A0505030009; in part by the Guangdong Natural Science Foundation under Grant 2021A1515012625; in part by the Guangzhou Basic and Applied Research Project under Grant 202102021074; and in part by the China Computer Federation (CCF)-Tencent Open Research Fund. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Christophoros Nikou. (Corresponding authors: Xuemiao Xu; Shengfeng He.) Yangyang Xu, Keke Li, Cheng Xu, and Shengfeng He are with the School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China (e-mail: cnnlstm@gmail.com; cslikeke@mail.scut.edu.cn; cschengxu@gmail.com; hesfe@scut.edu.cn).

Publisher Copyright:
© 1992-2012 IEEE.

Keywords

  • Multi-view face synthesis
  • pose-invariant face recognition
  • face reconstruction

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
