High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs, 2018 (NVIDIA)
Abstract
We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs).
Conditional GANs have enabled a variety of applications, but the results are often limited to low resolution and still far from realistic.
In this work, we generate 2048 × 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures.
Furthermore, we extend our framework to interactive visual manipulation with two additional features.
First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category.
Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively.
Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
1. Introduction
Photo-realistic image rendering using standard graphics techniques is an involved process, since geometry, materials, and light transport must be simulated explicitly.
Although existing graphics algorithms excel at the task, building and editing virtual environments is expensive and time-consuming.
That is because we have to model every aspect of the world explicitly.
If we were able to render photo-realistic images using a model learned from data, we could turn the process of graphics rendering into a model learning and inference problem.
Then, we could simplify the process of creating new virtual worlds by training models on new datasets.
We could even make it easier to customize environments by allowing users to simply specify overall semantic structure rather than modeling geometry, materials, or lighting.
In this paper, we discuss a new approach that produces high-resolution images from semantic label maps.
This method has a wide range of applications.
For example, we can use it to create synthetic training data for training visual recognition algorithms, since it is much easier to create semantic labels for desired scenarios than to generate training images.
Using semantic segmentation methods, we can transform images into a semantic label domain, edit the objects in the label domain, and then transform them back to the image domain.
This method also gives us new tools for higher-level image editing, e.g., adding objects to images or changing the appearance of existing objects.
To synthesize images from semantic labels, one can use the pix2pix method, an image-to-image translation framework [21] which leverages generative adversarial networks (GANs) [16] in a conditional setting.
Recently, Chen and Koltun [5] suggest that adversarial training might be unstable and prone to failure for high-resolution image generation tasks. Instead, they adopt a modified perceptual loss [11, 13, 22] to synthesize images, which are high-resolution but often lack fine details and realistic textures.
Here we address two main issues of the above state-of-the-art methods: (1) the difficulty of generating high-resolution images with GANs [21] and (2) the lack of details and realistic textures in the previous high-resolution results [5].
We show that through a new, robust adversarial learning objective together with new multi-scale generator and discriminator architectures, we can synthesize photo-realistic images at 2048 × 1024 resolution, which are more visually appealing than those computed by previous methods [5, 21].
We first obtain our results with adversarial training only, without relying on any hand-crafted losses [44] or pre-trained networks (e.g. VGGNet [48]) for perceptual losses [11, 22] (Figs. 9c, 10b).
Then we show that adding perceptual losses from pre-trained networks [48] can slightly improve the results in some circumstances (Figs. 9d, 10c), if a pre-trained network is available. Both results outperform previous works substantially in terms of image quality.
Furthermore, to support interactive semantic manipulation, we extend our method in two directions.
First, we use instance-level object segmentation information, which can separate different object instances within the same category.
This enables flexible object manipulations, such as adding/removing objects and changing object types.
Second, we propose a method to generate diverse results given the same input label map, allowing the user to edit the appearance of the same object interactively.
We compare against state-of-the-art visual synthesis systems [5, 21], and show that our method outperforms these approaches regarding both quantitative evaluations and human perception studies.
We also perform an ablation study regarding the training objectives and the importance of instance-level segmentation information.
In addition to semantic manipulation, we test our method on edge2photo applications (Figs. 2, 13), which shows the generalizability of our approach. Code and data are available at our website.
2. Related Work
Generative adversarial networks Generative adversarial networks (GANs) [16] aim to model the natural image distribution by forcing the generated samples to be indistinguishable from natural images.
GANs enable a wide variety of applications such as image generation [1, 42, 62], representation learning [45], image manipulation [64], object detection [33], and video applications [38, 51, 54].
Various coarse-to-fine schemes [4] have been proposed [9,19,26,57] to synthesize larger images (e.g. 256 × 256) in an unconditional setting.
Inspired by their successes, we propose a new coarse-to-fine generator and multi-scale discriminator architectures suitable for conditional image generation at a much higher resolution.
Image-to-image translation Many researchers have leveraged adversarial learning for image-to-image translation [21], whose goal is to translate an input image from one domain to another domain given input-output image pairs as training data.
Compared to L1 loss, which often leads to blurry images [21, 22], the adversarial loss [16] has become a popular choice for many image-to-image tasks [10, 24, 25, 32, 41, 46, 55, 60, 66].
The reason is that the discriminator can learn a trainable loss function and automatically adapt to the differences between the generated and real images in the target domain.
For example, the recent pix2pix framework [21] used image-conditional GANs [39] for different applications, such as transforming Google maps to satellite views and generating cats from user sketches.
Various methods have also been proposed to learn an image-to-image translation in the absence of training pairs [2, 34, 35, 47, 50, 52, 56, 65].
Recently, Chen and Koltun [5] suggest that it might be hard for conditional GANs to generate high-resolution images due to the training instability and optimization issues.
To avoid this difficulty, they use a direct regression objective based on a perceptual loss [11, 13, 22] and produce the first model that can synthesize 2048 × 1024 images.
The generated results are high-resolution but often lack fine details and realistic textures.
Motivated by their success, we show that using our new objective function as well as novel multi-scale generators and discriminators, we not only largely stabilize the training of conditional GANs on high-resolution images, but also achieve significantly better results compared to Chen and Koltun [5].
Side-by-side comparisons clearly show our advantage (Figs. 1, 9, 8, 10).
Deep visual manipulation Recently, deep neural networks have obtained promising results in various image processing tasks, such as style transfer [13], inpainting [41], colorization [58], and restoration [14].
However, most of these works lack an interface for users to adjust the current result or explore the output space.
To address this issue, Zhu et al. [64] developed an optimization method for editing the object appearance based on the priors learned by GANs.
Recent works [21, 46, 59] also provide user interfaces for creating novel imagery from low-level cues such as color and sketch.
All of the prior works report results on low-resolution images. Our system shares the same spirit as this past work, but we focus on object-level semantic editing, allowing users to interact with the entire scene and manipulate individual objects in the image.
As a result, users can quickly create a new scene with minimal effort.
Our interface is inspired by prior data-driven graphics systems [6, 23, 29].
But our system allows more flexible manipulations and produces high-res results in real-time.
3. Instance-Level Image Synthesis
We propose a conditional adversarial framework for generating high-resolution photo-realistic images from semantic label maps. We first review our baseline model pix2pix (Sec. 3.1).
We then describe how we increase the photorealism and resolution of the results with our improved objective function and network design (Sec. 3.2).
Next, we use additional instance-level object semantic information to further improve the image quality (Sec. 3.3).
Finally, we introduce an instance-level feature embedding scheme to better handle the multi-modal nature of image synthesis, which enables interactive object editing (Sec. 3.4).
3.1. The pix2pix Baseline
The pix2pix method [21] is a conditional GAN framework for image-to-image translation. It consists of a generator G and a discriminator D. For our task, the objective of the generator G is to translate semantic label maps to realistic-looking images, while the discriminator D aims to distinguish real images from the translated ones.
The framework operates in a supervised setting. In other words, the training dataset is given as a set of pairs of corresponding images {(si , xi)}, where si is a semantic label map and xi is a corresponding natural photo.
Conditional GANs aim to model the conditional distribution of real images given the input semantic label maps via the following minimax game:

min_G max_D L_GAN(G, D),    (1)

where the objective function L_GAN(G, D) is given by

L_GAN(G, D) = E_{(s,x)}[log D(s, x)] + E_s[log(1 − D(s, G(s)))].    (2)
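As a concrete illustration (a minimal sketch, not the authors' released code), the conditional objective above could be written in PyTorch roughly as follows. The generator G and discriminator D are assumed to be nn.Module instances, the discriminator takes the channel-wise concatenation of label map and image (as described below), and the standard binary cross-entropy form of Eq. (2) is used here, whereas the implementation discussed in Sec. 4 uses LSGAN:

```python
import torch
import torch.nn.functional as F

def gan_losses(G, D, label_map, real_image):
    """Standard conditional GAN losses (Eqs. 1-2), non-saturating variant.

    label_map:  one-hot semantic label map, shape (B, C_label, H, W)
    real_image: corresponding photo, shape (B, 3, H, W)
    """
    fake_image = G(label_map)

    # The discriminator sees the channel-wise concatenation of condition and image.
    d_real = D(torch.cat([label_map, real_image], dim=1))
    d_fake = D(torch.cat([label_map, fake_image.detach()], dim=1))

    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))

    # The generator tries to make D classify its output as real.
    d_fake_for_g = D(torch.cat([label_map, fake_image], dim=1))
    g_loss = F.binary_cross_entropy_with_logits(d_fake_for_g, torch.ones_like(d_fake_for_g))
    return d_loss, g_loss
```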
The pix2pix method adopts U-Net [43] as the generator and a patch-based fully convolutional network [36] as the discriminator.
The input to the discriminator is a channelwise concatenation of the semantic label map and the corresponding image.
However, the resolution of the generated images on Cityscapes [7] is up to 256 × 256.
We tested directly applying the pix2pix framework to generate high-resolution images but found the training unstable and the quality of generated images unsatisfactory.
Therefore, we describe how we improve the pix2pix framework in the next subsection.
3.2. Improving Photorealism and Resolution
We improve the pix2pix framework by using a coarse-to-fine generator, a multi-scale discriminator architecture, and a robust adversarial learning objective function.
Coarse-to-fine generator We decompose the generator into two sub-networks: G1 and G2. We term G1 the global generator network and G2 the local enhancer network.
The generator is then given by the tuple G = {G1, G2} as visualized in Fig. 3.
The global generator network operates at a resolution of 1024 × 512, and the local enhancer network outputs an image with a resolution that is 4× the output size of the previous one (2× along each image dimension).
For synthesizing images at an even higher resolution, additional local enhancer networks could be utilized. For example, the output image resolution of the generator G = {G1, G2} is 2048 × 1024, and the output image resolution of G = {G1, G2, G3} is 4096 × 2048.
Our global generator is built on the architecture proposed by Johnson et al. [22], which has been proven successful for neural style transfer on images up to 512 × 512.
It consists of 3 components: a convolutional front-end G1^(F), a set of residual blocks G1^(R) [18], and a transposed convolutional back-end G1^(B).
A semantic label map of resolution 1024×512 is passed through the 3 components sequentially to output an image of resolution 1024 × 512.
The local enhancer network also consists of 3 components: a convolutional front-end G2^(F), a set of residual blocks G2^(R), and a transposed convolutional back-end G2^(B).
The resolution of the input label map to G2 is 2048 × 1024. Different from the global generator network, the input to the residual blocks G2^(R) is the element-wise sum of two feature maps: the output feature map of G2^(F), and the last feature map of the back-end of the global generator network G1^(B).
This helps integrate the global information from G1 into G2.
During training, we first train the global generator and then train the local enhancer in the order of their resolutions.
We then jointly fine-tune all the networks together.
We use this generator design to effectively aggregate global and local information for the image synthesis task.
We note that such a multi-resolution pipeline is a well-established practice in computer vision [4] and two scales are often enough [3].
Similar ideas but different architectures could be found in recent unconditional GANs [9, 19] and conditional image generation [5, 57].
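The following is a compact sketch of how the two-scale generator could be composed; the real networks use more downsampling layers, instance normalization, and reflection padding, and the channel widths, block counts, and average-pool downsampling here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class GlobalGenerator(nn.Module):
    """G1: conv front-end, residual blocks, transposed-conv back-end."""
    def __init__(self, in_ch, ngf=64, n_blocks=9):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv2d(in_ch, ngf, 7, padding=3), nn.ReLU(True),
            nn.Conv2d(ngf, 2 * ngf, 3, stride=2, padding=1), nn.ReLU(True))
        self.res = nn.Sequential(*[ResBlock(2 * ngf) for _ in range(n_blocks)])
        self.back = nn.Sequential(
            nn.ConvTranspose2d(2 * ngf, ngf, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(True))
        self.to_rgb = nn.Sequential(nn.Conv2d(ngf, 3, 7, padding=3), nn.Tanh())

    def features(self, x):
        # Last feature map of the back-end G1^(B), reused by the local enhancer.
        return self.back(self.res(self.front(x)))

    def forward(self, x):
        return self.to_rgb(self.features(x))

class LocalEnhancer(nn.Module):
    """G2: its residual blocks take the element-wise sum of its own front-end
    features and the last feature map of G1's back-end."""
    def __init__(self, in_ch, g1, ngf=32, n_blocks=3):
        super().__init__()
        self.g1 = g1
        self.front = nn.Sequential(
            nn.Conv2d(in_ch, ngf, 7, padding=3), nn.ReLU(True),
            nn.Conv2d(ngf, 2 * ngf, 3, stride=2, padding=1), nn.ReLU(True))
        self.res = nn.Sequential(*[ResBlock(2 * ngf) for _ in range(n_blocks)])
        self.back = nn.Sequential(
            nn.ConvTranspose2d(2 * ngf, ngf, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(True),
            nn.Conv2d(ngf, 3, 7, padding=3), nn.Tanh())

    def forward(self, x_fine):
        # G1 runs on the 2x-downsampled label map; G2 refines at full resolution.
        x_coarse = nn.functional.avg_pool2d(x_fine, 3, stride=2, padding=1)
        fused = self.front(x_fine) + self.g1.features(x_coarse)
        return self.back(self.res(fused))
```

Following the training schedule above, one would first train GlobalGenerator, then LocalEnhancer, and finally fine-tune both jointly.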
Multi-scale discriminators High-resolution image synthesis poses a significant challenge to the GAN discriminator design.
To differentiate high-resolution real and synthesized images, the discriminator needs to have a large receptive field.
This would require either a deeper network or larger convolutional kernels, both of which would increase the network capacity and potentially cause overfitting.
Also, both choices demand a larger memory footprint for training, which is already a scarce resource for high-resolution image generation.
To address the issue, we propose using multi-scale discriminators.
We use 3 discriminators that have an identical network structure but operate at different image scales.
We will refer to the discriminators as D1, D2 and D3.
Specifically, we downsample the real and synthesized high-resolution images by a factor of 2 and 4 to create an image pyramid of 3 scales. The discriminators D1, D2 and D3 are then trained to differentiate real and synthesized images at the 3 different scales, respectively.
Although the discriminators have an identical architecture, the one that operates at the coarsest scale has the largest receptive field.
It has a more global view of the image and can guide the generator to generate globally consistent images.
On the other hand, the discriminator at the finest scale encourages the generator to produce finer details.
This also makes training the coarse-to-fine generator easier, since extending a low-resolution model to a higher resolution only requires adding a discriminator at the finest level, rather than retraining from scratch.
Without the multi-scale discriminators, we observe that many repeated patterns often appear in the generated images.
With the discriminators, the learning problem in Eq. (1) then becomes the multi-task learning problem:

min_G max_{D_1, D_2, D_3} Σ_{k=1,2,3} L_GAN(G, D_k).    (3)
Using multiple GAN discriminators at the same image scale has been proposed in unconditional GANs [12]. Iizuka et al. [20] add a global image classifier to conditional GANs to synthesize globally coherent content for inpainting.
Here we extend the design to multiple discriminators at different image scales for modeling high-resolution images.
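A minimal sketch of the three-discriminator setup, assuming PatchGAN-style discriminators (layer widths are illustrative) whose input x is the channel-wise concatenation of the label map and the real or synthesized image:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """A small PatchGAN-style discriminator; layer sizes are illustrative."""
    def __init__(self, in_ch, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ndf, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ndf, 2 * ndf, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(2 * ndf, 4 * ndf, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(4 * ndf, 1, 4, padding=1))  # per-patch real/fake logits

    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    """D1, D2, D3 share the same architecture but see a 3-level image pyramid."""
    def __init__(self, in_ch, num_d=3):
        super().__init__()
        self.discs = nn.ModuleList([PatchDiscriminator(in_ch) for _ in range(num_d)])

    def forward(self, x):
        outputs = []
        for k, d in enumerate(self.discs):
            outputs.append(d(x))
            if k != len(self.discs) - 1:
                x = F.avg_pool2d(x, 3, stride=2, padding=1)  # downsample by 2 for the next scale
        return outputs  # Eq. (3): the GAN losses over D1, D2, D3 are summed
```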
Improved adversarial loss We improve the GAN loss in Eq. (2) by incorporating a feature matching loss based on the discriminator.
This loss stabilizes the training as the generator has to produce natural statistics at multiple scales.
Specifically, we extract features from multiple layers of the discriminator and learn to match these intermediate representations from the real and the synthesized image.
For ease of presentation, we denote the ith-layer feature extractor of discriminator D_k as D_k^(i) (from the input to the ith layer of D_k).
The feature matching loss L_FM(G, D_k) is then calculated as:

L_FM(G, D_k) = E_{(s,x)} Σ_{i=1}^{T} (1/N_i) ||D_k^(i)(s, x) − D_k^(i)(s, G(s))||_1,    (4)

where T is the total number of layers and N_i denotes the number of elements in each layer.
Our GAN discriminator feature matching loss is related to the perceptual loss [11, 13, 22], which has been shown to be useful for image super-resolution [32] and style transfer [22].
In our experiments, we discuss how the discriminator feature matching loss and the perceptual loss can be jointly used for further improving the performance. We note that a similar loss is used in VAE-GANs [30].
Our full objective combines both the GAN loss and the feature matching loss as:

min_G ( ( max_{D_1, D_2, D_3} Σ_{k=1,2,3} L_GAN(G, D_k) ) + λ Σ_{k=1,2,3} L_FM(G, D_k) ),    (5)

where λ controls the importance of the two terms.
Note that for the feature matching loss L_FM, D_k only serves as a feature extractor and does not maximize the loss L_FM.
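A hedged sketch of Eq. (4) in PyTorch, assuming each discriminator D_k returns its list of intermediate feature maps for a given input:

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(disc_features_real, disc_features_fake):
    """Eq. (4): L1 distance between intermediate discriminator features of the
    real and synthesized image, averaged per layer. The real features are
    detached, so D_k only acts as a feature extractor for this term."""
    loss = 0.0
    for f_real, f_fake in zip(disc_features_real, disc_features_fake):
        loss = loss + F.l1_loss(f_fake, f_real.detach())
    return loss / len(disc_features_real)

# Full generator objective (Eq. 5), assuming each D_k exposes per-layer features:
# g_loss = sum_k gan_g_loss_k + 10.0 * sum_k feature_matching_loss(real_feats_k, fake_feats_k)
```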
3.3. Using Instance Maps
Existing image synthesis methods only utilize semantic label maps [5,21,25], an image where each pixel value represents the object class of the pixel.
This map does not differentiate objects of the same category.
On the other hand, an instance-level semantic label map contains a unique object ID for each individual object.
To incorporate the instance map, one can directly pass it into the network, or encode it into a one-hot vector.
However, both approaches are difficult to implement in practice, since different images may contain different numbers of objects of the same category.
Alternatively, one can pre-allocate a fixed number of channels (e.g., 10) for each class, but this method fails when the number is set too small, and wastes memory when the number is too large.
Instead, we argue that the most critical information the instance map provides, which is not available in the semantic label map, is the object boundary.
For example, when objects of the same class are next to one another, looking at the semantic label map alone cannot tell them apart.
This is especially true for the street scene since many parked cars or walking pedestrians are often next to one another, as shown in Fig. 4a.
However, with the instance map, separating these objects becomes an easier task.
Therefore, to extract this information, we first compute the instance boundary map (Fig. 4b).
In our implementation, a pixel in the instance boundary map is 1 if its object ID is different from any of its 4-neighbors, and 0 otherwise.
The instance boundary map is then concatenated with the one-hot vector representation of the semantic label map, and fed into the generator network.
Similarly, the input to the discriminator is the channel-wise concatenation of instance boundary map, semantic label map, and the real/synthesized image.
Figure 5b shows an example demonstrating the improvement by using object boundaries.
Our user study in Sec. 4 also shows the model trained with instance boundary maps renders more photo-realistic object boundaries.
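For reference, the boundary-map computation described above takes only a few lines of NumPy; inst_map is assumed to be an integer array of per-pixel object IDs:

```python
import numpy as np

def instance_boundary_map(inst_map):
    """A pixel is 1 if its object ID differs from any of its 4-neighbors (Sec. 3.3).

    inst_map: (H, W) integer array of per-pixel instance IDs.
    Returns an (H, W) float array with 1 on instance boundaries, 0 elsewhere.
    """
    edge = np.zeros_like(inst_map, dtype=bool)
    edge[:, 1:]  |= inst_map[:, 1:]  != inst_map[:, :-1]   # compare with left neighbor
    edge[:, :-1] |= inst_map[:, :-1] != inst_map[:, 1:]    # compare with right neighbor
    edge[1:, :]  |= inst_map[1:, :]  != inst_map[:-1, :]   # compare with top neighbor
    edge[:-1, :] |= inst_map[:-1, :] != inst_map[1:, :]    # compare with bottom neighbor
    return edge.astype(np.float32)
```

The resulting map is then concatenated channel-wise with the one-hot label map before being fed to the generator and discriminators.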
3.4. Learning an Instance-level Feature Embedding
Image synthesis from semantic label maps is a one-to-many mapping problem.
An ideal image synthesis algorithm should be able to generate diverse, realistic images using the same semantic label map.
Recently, several works learn to produce a fixed number of discrete outputs given the same input [5,15] or synthesize diverse modes controlled by a latent code that encodes the entire image [66].
Although these approaches tackle the multi-modal image synthesis problem, they are unsuitable for our image manipulation task mainly for two reasons.
First, the user has no intuitive control over which kinds of images the model would produce [5, 15]. Second, these methods focus on global color and texture changes and allow no object-level control over the generated contents.
To generate diverse images and allow instance-level control, we propose adding additional low-dimensional feature channels as the input to the generator network.
We show that, by manipulating these features, we can have flexible control over the image synthesis process.
Furthermore, note that since the feature channels are continuous quantities, our model is, in principle, capable of generating infinitely many images.
To generate the low-dimensional features, we train an encoder network E to find a low-dimensional feature vector that corresponds to the ground truth target for each instance in the image.
Our feature encoder architecture is a standard encoder-decoder network.
To ensure the features are consistent within each instance, we add an instance-wise average pooling layer to the output of the encoder to compute the average feature for the object instance.
The average feature is then broadcast to all the pixel locations of the instance.
Figure 6 visualizes an example of the encoded features.
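A simple (unoptimized) sketch of the instance-wise average pooling step, assuming a dense instance-ID map aligned with the encoder output:

```python
import torch

def instance_average_pool(features, inst_map):
    """Instance-wise average pooling: every pixel of an object instance receives
    that instance's mean feature vector, which is then broadcast over the instance.

    features: (B, C, H, W) encoder output
    inst_map: (B, 1, H, W) integer instance IDs
    """
    pooled = torch.zeros_like(features)
    for b in range(features.size(0)):
        feat = features[b]            # (C, H, W)
        ids = inst_map[b, 0]          # (H, W)
        for inst_id in ids.unique():
            mask = (ids == inst_id)   # (H, W) boolean mask of this instance
            region = feat[:, mask]    # (C, n_pixels)
            pooled[b][:, mask] = region.mean(dim=1, keepdim=True).expand_as(region)
    return pooled
```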
We replace G(s) with G(s, E(x)) in Eq. (5) and train the encoder jointly with the generators and discriminators.
After the encoder is trained, we run it on all instances in the training images and record the obtained features.
Then we perform a K-means clustering on these features for each semantic category.
Each cluster thus encodes the features for a specific style, for example, the asphalt or cobblestone texture for a road.
At inference time, we randomly pick one of the cluster centers and use it as the encoded features.
These features are concatenated with the label map and used as the input to our generator.
We tried to enforce the Kullback-Leibler loss [28] on the feature space for better test-time sampling, as used in the recent work [66], but found it quite involved for users to adjust the latent vectors for each object directly.
Instead, for each object instance, we present K modes for users to choose from.
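A sketch of the clustering step using scikit-learn; the feature array here is random and merely stands in for the pooled 3-dimensional encoder features collected from all instances of one semantic class (e.g., road):

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: 'road_features' would be the pooled encoder features gathered
# from every road instance in the training set, shape (num_instances, 3).
road_features = np.random.randn(1000, 3).astype(np.float32)

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(road_features)
centers = kmeans.cluster_centers_   # K=10 "style" modes for the road class

# At inference, pick one center (e.g., a cobblestone-like mode) and paint it over
# all road pixels of the feature channels fed to the generator.
chosen_style = centers[np.random.randint(len(centers))]
```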
4. Results
We first provide a quantitative comparison against leading methods in Sec. 4.1.
We then report a subjective human perceptual study in Sec. 4.2.
Finally, we show a few examples of interactive object editing results in Sec. 4.3.
Implementation details We use LSGANs [37] for stable training. In all experiments, we set the weight λ = 10 (Eq. (5)) and K = 10 for K-means.
We use 3-dimensional vectors to encode features for each object instance.
We experimented with adding a perceptual loss λ Σ_{i=1}^{N} (1/M_i) ||F^(i)(x) − F^(i)(G(s))||_1 to our objective (Eq. (5)), where λ = 10 and F^(i) denotes the i-th layer with M_i elements of the VGG network.
We observe that this loss slightly improves the results. We name these two variants as ours and ours (w/o VGG loss).
Please find more training and architecture details in the appendix.
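A hedged sketch of the VGG perceptual term, assuming a recent torchvision; the chosen layer indices approximate relu1_1 through relu5_1, and the per-layer weights of the actual implementation may differ:

```python
import torch
import torchvision

class VGGPerceptualLoss(torch.nn.Module):
    """L1 distance between VGG19 activations of the real and synthesized image.
    Layer choice and weighting are illustrative; inputs are assumed to be
    normalized the same way the VGG network expects."""
    def __init__(self, layers=(1, 6, 11, 20, 29)):
        super().__init__()
        vgg = torchvision.models.vgg19(
            weights=torchvision.models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layers = set(layers)

    def forward(self, fake, real):
        loss, x, y = 0.0, fake, real
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layers:
                loss = loss + torch.nn.functional.l1_loss(x, y)
        return loss
```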
Datasets We conduct extensive comparisons and ablation studies on the Cityscapes dataset [7] and the NYU Indoor RGBD dataset [40].
We report additional qualitative results on the ADE20K dataset [63] and the Helen Face dataset [31, 49].
Baselines We compare our method with two state-of-the-art algorithms: pix2pix [21] and CRN [5].
We train pix2pix models on high-res images with the default setting.
We produce the high-res CRN images via the authors’ publicly available model.
4.1. Quantitative Comparisons
We adopt the same evaluation protocol from previous image-to-image translation works [21, 65].
To quantify the quality of our results, we perform semantic segmentation on the synthesized images and compare how well the predicted segments match the input.
The intuition is that if we can produce realistic images that correspond to the input label map, an off-the-shelf semantic segmentation model (e.g., PSPNet [61] that we use) should be able to predict the ground truth label.
Table 1 reports the calculated segmentation accuracy. As can be seen, for both pixel-wise accuracy and mean intersection-over-union (IoU), our method outperforms the other methods by a large margin.
Moreover, our result is very close to the result of the original images, the theoretical “upper bound” of the realism we can achieve.
This justifies the superiority of our algorithm.
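For clarity, the segmentation-based metrics can be computed from a confusion matrix as sketched below; this assumes label predictions from an off-the-shelf model such as PSPNet and ignores any void label:

```python
import numpy as np

def segmentation_scores(pred_labels, gt_labels, num_classes):
    """Pixel accuracy and mean IoU between predicted segmentations of the
    synthesized images and the input (ground-truth) label maps.

    pred_labels, gt_labels: lists of (H, W) integer label arrays.
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(pred_labels, gt_labels):
        idx = gt.reshape(-1) * num_classes + pred.reshape(-1)
        conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

    pixel_acc = np.diag(conf).sum() / conf.sum()
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    iou = np.diag(conf) / np.maximum(union, 1)      # avoid division by zero
    mean_iou = iou[union > 0].mean()                # average only over present classes
    return pixel_acc, mean_iou
```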
4.2. Human Perceptual Study
We further evaluate our algorithm via a human subjective study.
We perform pairwise A/B tests deployed on the Amazon Mechanical Turk (MTurk) platform on the Cityscapes dataset [7].
We follow the same experimental procedure as described in Chen and Koltun [5].
More specifically, two different kinds of experiments are conducted: unlimited time and limited time, as explained below.
Unlimited time For this task, workers are given two images at once, each of which is synthesized by a different method for the same label map. We give them unlimited time to select which image looks more natural.
The left-right order and the image order are randomized to ensure fair comparisons.
All 500 Cityscapes test images are compared 10 times, resulting in 5,000 human judgments for each method. In this experiment, we use the model trained on labels only (without instance maps) to ensure a fair comparison.
Table 2 shows that both variants of our method outperform the other methods significantly.
Limited time Next, for the limited time experiment, we compare our result with CRN and the original image (ground truth). In each comparison, we show results of two methods for a short period of time.
We randomly select a duration between 1/8 seconds and 8 seconds, as adopted by prior work [5].
This evaluates how quickly the difference between the images can be perceived. Fig. 7 shows the comparison results at different time intervals.
As the given time increases, the differences between these three types of images become more apparent and easier to observe.
Figures 9 and 10 show some example results.
Analysis of the loss function We also study the importance of each term in our objective function using the unlimited time experiment.
Specifically, our final loss contains three components: GAN loss, discriminator-based feature matching loss, and VGG perceptual loss.
We compare our final implementation to the results using (1) only GAN loss, and (2) GAN + feature matching loss (i.e., without VGG loss).
The obtained preference rates are 68.55% and 58.90%, respectively.
As can be seen, adding the feature matching loss substantially improves the performance, while adding perceptual loss further enhances the results.
However, note that using the perceptual loss is not critical, and we are still able to generate visually appealing results even without it (e.g., Figs. 9c, 10b).
Using instance maps We compare the results using instance maps to results without using them.
We highlight the car regions in the images and ask the participants to choose which region looks more realistic.
We obtain a preference rate of 64.34%, which indicates that using instance maps improves the realism of our results, especially around the object boundaries.
Analysis of the generator We compare results of different generators with all the other components fixed.
In particular, we compare our generator with two state-of-the-art generator architectures: U-Net [21, 43] and CRN [5].
We evaluate the performance regarding both semantic segmentation scores and human perceptual study results.
Table 3 and Table 4 show that our coarse-to-fine generator outperforms other networks by a large margin.
Analysis of the discriminator Next, we also compare results using our multi-scale discriminators and results using only one discriminator while we keep the generator and the loss function fixed.
The segmentation scores on Cityscapes [7] (Table 5) demonstrate that using multi-scale discriminators helps produce higher quality results as well as stabilize the adversarial training.
We also perform pairwise A/B tests on the Amazon Mechanical Turk platform.
69.2% of the participants prefer our results with multi-scale discriminators over the single-discriminator results.
Additional datasets To further evaluate our method, we perform unlimited time comparisons on the NYU dataset.
We obtain preference rates of 86.7% and 63.7% against pix2pix and CRN, respectively. Fig. 8 shows some example images.
Finally, we show results on the ADE20K [63] dataset (Fig. 11).
4.3. Interactive Object Editing
Our feature encoder allows us to perform interactive instance editing on the resulting images.
For example, we can change the object labels in the image to quickly create novel scenes, such as replacing trees with buildings (Fig. 1b).
We can also change the colors of individual cars or the textures of the road (Fig. 1c). Please check out our interactive demos on our website.
In addition, we apply our interactive object editing feature to the Helen Face dataset, where labels for different facial parts are available [49] (Fig. 12).
This makes it easy to edit human portraits, e.g., changing the face color to mimic different make-up effects or adding a beard to a face.
5. Discussion and Conclusion
The results in this paper suggest that conditional GANs can synthesize high-resolution photo-realistic imagery without any hand-crafted losses or pre-trained networks.
We have observed that incorporating a perceptual loss [22] can slightly improve the results.
Our method allows many applications and will be potentially useful for domains where high-resolution results are in demand but pre-trained networks are not available (e.g., medical imaging [17] and biology [8]).
This paper also shows that an image-to-image synthesis pipeline can be extended to produce diverse outputs and enable interactive image manipulation given appropriate training input-output pairs (e.g., instance maps in our case).
Without ever being told what a “texture” is, our model learns to stylize different objects, which may generalize to other datasets as well (i.e., using textures in one dataset to synthesize images in another dataset).
We believe these extensions can be potentially applied to other image synthesis problems.