There are five features to consider when using generative adversarial networks to apply makeup to photos of the human face: (1) facial components, (2) interactive color adjustments, (3) makeup variations, (4) robustness to poses and expressions, and (5) the use of multiple reference images. To address these key features, we propose SLGAN, a novel style- and latent-guided generative adversarial network for makeup transfer and removal. We introduce a novel perceptual makeup loss and a style-invariant decoder that transfers makeup styles via histogram matching, thereby avoiding the identity-shift problem. Our experiments show that SLGAN performs better than or comparably to state-of-the-art methods. Furthermore, we show that our approach can interpolate facial makeup images to identify unique features, compare against existing methods, and help users find desirable makeup configurations.
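The histogram matching mentioned above, applied per color channel, can be sketched roughly as follows. This is a minimal illustration of generic histogram matching in NumPy, not the paper's actual implementation; the function name and single-channel interface are assumptions for the example.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` values so their distribution matches `reference`.

    Both inputs are single-channel images (2-D arrays); in a makeup-transfer
    setting this would be applied per channel within a facial region.
    This is an illustrative sketch, not the paper's implementation.
    """
    # Unique values, their positions, and their counts in the source.
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True
    )
    # Unique values and counts in the reference.
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical cumulative distribution functions of both images.
    s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
    r_cdf = np.cumsum(r_counts).astype(np.float64) / reference.size

    # For each source CDF value, find the reference value at the same quantile.
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)
    return matched_vals[s_idx].reshape(source.shape)
```

Because only value distributions are exchanged, the spatial layout of the source face is untouched, which is what makes histogram-based transfer attractive for preserving identity.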