[2104.03310] Regularizing Generative Adversarial Networks under Limited Data

Significance

Regularizing the discriminator stabilizes GAN training with a limited dataset

Keypoints

  • Propose a discriminator regularization function for training GANs under limited data
  • Theoretically show the relationship between the regularized loss and the LeCam divergence
  • Experimentally show improvement in GAN training dynamics

Review

Background

The generative adversarial network (GAN) is a class of generative models that employs adversarial training between a generator $G$ and a discriminator $D$ to minimize the divergence between the true data distribution $p_{data}$ and the generated data distribution $p_{G(z)}$, where random noise $z$ serves as the prior. Although GANs are capable of generating realistic images, training them requires a large amount of data and the learning dynamics are not very stable. To overcome the instability of the training dynamics, prior work has proposed alternative loss functions (LSGAN, WGAN) or regularization of the weight matrices (spectral normalization). However, few works have addressed training GANs with a limited dataset. This work presents a simple way to improve the quality of images generated by GANs by adding a regularization term to the discriminator loss during training.

Keypoints

Propose a discriminator regularization function for training GANs under limited data

Training the GAN alternates between maximizing the discriminator objective $V_{D}$ and the generator objective $V_{G}$: \begin{align} \underset{D}{\max}\,V_{D}, \quad V_{D} &= \underset{\mathbf{x}\sim\mathcal{T}}{\mathbb{E}} [ f_{D}(D(\mathbf{x})) ] + \underset{\mathbf{z}\sim\mathcal{N}(0,1)}{\mathbb{E}} [ f_{G}(D(G(\mathbf{z}))) ] \\ \underset{G}{\max}\,V_{G}, \quad V_{G} &= \underset{\mathbf{z}\sim\mathcal{N}(0,1)}{\mathbb{E}} [ g_{G}(D(G(\mathbf{z}))) ], \end{align} where $\mathcal{T}$ is the training dataset distribution and $f_{D}$, $f_{G}$, $g_{G}$ are the loss functions that define the particular GAN variant.
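The specific choices of $f_{D}$, $f_{G}$, and $g_{G}$ determine the GAN variant. Below is a minimal PyTorch-style sketch of the alternating updates, not the authors' code, instantiating these functions with the hinge loss used by BigGAN; the models `G` and `D`, the optimizers, and `z_dim` are assumed to be defined elsewhere.

```python
import torch
import torch.nn.functional as F

def d_step(D, G, opt_D, real, z_dim):
    # Maximize V_D by minimizing -V_D, with the hinge-loss choices
    # f_D(t) = min(0, -1 + t) and f_G(t) = min(0, -1 - t).
    z = torch.randn(real.size(0), z_dim, device=real.device)
    fake = G(z).detach()  # do not backpropagate into G during the D update
    loss_d = F.relu(1.0 - D(real)).mean() + F.relu(1.0 + D(fake)).mean()
    opt_D.zero_grad()
    loss_d.backward()
    opt_D.step()
    return loss_d.item()

def g_step(D, G, opt_G, batch_size, z_dim, device):
    # Maximize V_G with g_G(t) = t by minimizing -E[D(G(z))].
    z = torch.randn(batch_size, z_dim, device=device)
    loss_g = -D(G(z)).mean()
    opt_G.zero_grad()
    loss_g.backward()
    opt_G.step()
    return loss_g.item()
```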

To improve performance when only a limited amount of data can be sampled from the training distribution $\mathcal{T}$, the authors propose adding a regularization term $R_{LC}$ to $V_{D}$, giving the final discriminator loss $L_{D}$: \begin{equation}\label{eq:proposed} L_{D} = -V_{D} + \lambda R_{LC}, \end{equation} where $\lambda$ is a coefficient hyperparameter and $R_{LC}$ is defined as \begin{equation} R_{LC} = \underset{\mathbf{x}\sim\mathcal{T}}{\mathbb{E}} [ || D(\mathbf{x}) - \alpha_{F}||^{2} ] + \underset{\mathbf{z}\sim\mathcal{N}(0,1)}{\mathbb{E}} [ || D(G(\mathbf{z})) - \alpha_{R}||^{2} ]. \end{equation} Here $\alpha_{R}$ and $\alpha_{F}$ are exponential moving averages, $\alpha^{(t)} = \gamma\,\alpha^{(t-1)} + (1-\gamma)\,v^{(t)}$, of the discriminator predictions on real and fake (generated) inputs, respectively. Note that the predictions on real images are pulled toward the moving average of the fake predictions $\alpha_{F}$, and vice versa.
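A minimal sketch of how the regularizer and its moving-average anchors could be implemented is given below; this is an illustration under assumptions (e.g. the EMA decay `gamma`, the placement of the anchor update, and the coefficient `lambda_lc`), not the official implementation.

```python
import torch

class LeCamRegularizer:
    """Tracks EMA anchors of the discriminator outputs and computes R_LC."""

    def __init__(self, gamma=0.99):
        self.gamma = gamma   # EMA decay; 0.99 is an assumed value, not from the paper
        self.alpha_r = 0.0   # alpha_R: EMA of D's predictions on real images
        self.alpha_f = 0.0   # alpha_F: EMA of D's predictions on generated images

    def update(self, d_real, d_fake):
        # alpha^(t) = gamma * alpha^(t-1) + (1 - gamma) * v^(t)
        self.alpha_r = self.gamma * self.alpha_r + (1 - self.gamma) * d_real.mean().item()
        self.alpha_f = self.gamma * self.alpha_f + (1 - self.gamma) * d_fake.mean().item()

    def __call__(self, d_real, d_fake):
        # Real predictions are anchored to alpha_F, fake predictions to alpha_R.
        return ((d_real - self.alpha_f) ** 2).mean() + ((d_fake - self.alpha_r) ** 2).mean()

# Usage inside the discriminator step (lambda_lc is the coefficient lambda above,
# and d_hinge_loss stands for the -V_D term from the previous sketch):
#   d_real, d_fake = D(real), D(G(z).detach())
#   reg.update(d_real, d_fake)
#   loss_d = d_hinge_loss(d_real, d_fake) + lambda_lc * reg(d_real, d_fake)
```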

Theoretically show the relationship between the regularized loss and the LeCam divergence

The authors provide two propositions to show that:

  1. Minimizing the WGAN objective with the proposed regularization corresponds to minimizing the LeCam (LC) divergence
  2. The LC divergence is an $f$-divergence with properties suited for GAN training (a common definition is given below).
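For reference, the LeCam divergence (also known as the triangular discrimination) is commonly defined as the following $f$-divergence; the paper's exact normalization may differ by a constant factor: \begin{equation} \Delta(p\,\|\,q) = \int \frac{\left(p(\mathbf{x}) - q(\mathbf{x})\right)^{2}}{p(\mathbf{x}) + q(\mathbf{x})}\,d\mathbf{x}, \qquad f(u) = \frac{(u-1)^{2}}{u+1}. \end{equation} Under this definition the divergence is bounded above by $2$, in contrast to, for example, the unbounded $\chi^{2}$-divergence.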

The derivations of the two propositions are not complicated, so the reader is referred to the original paper. After establishing the relationship between the regularized WGAN objective and the LC divergence, the authors argue that minimizing the LC divergence is more stable when the number of training examples is limited because the divergence is more robust to extreme values.

210408-1 LC-divergence is more robust to extreme values than other $f$-divergences

In my opinion, it is not obvious that the robustness of the divergence function translates into stable training when the amount of data is limited. Furthermore, if we accept that assumption, the figure above implies that training the vanilla GAN (JS-divergence) should be more stable than training LSGAN ($\chi^{2}$-divergence) or EBGAN (total variation).

Experimentally show improvement in GAN training dynamics

Improvement in GAN training dynamics is demonstrated by experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets with BigGAN (conditional) and StyleGAN2 (unconditional) models, using the Inception score (IS) and the Fréchet inception distance (FID) as metrics; a brief sketch of the FID computation is given after the summary below. The authors demonstrate the utility of the proposed regularization \eqref{eq:proposed} through extensive quantitative and qualitative experiments, which can be summarized as follows:

  1. Image generation quality is relatively well preserved when the training dataset is limited
  2. The regularization further complements data augmentation strategies for GANs
  3. The training dynamics of the GAN are stabilized
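For reference, the FID compares the Gaussian statistics of Inception features extracted from real and generated images. Below is a minimal sketch of the metric, unrelated to the paper's evaluation code, where `feats_real` and `feats_fake` are assumed to be precomputed `(N, d)` feature arrays.

```python
import numpy as np
from scipy import linalg

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    # FID = ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^{1/2})
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))
```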

210408-2 Qualitative and quantitative performance with a limited dataset, along with the synergistic effect with data augmentation (DA)

210408-3 Training dynamics are stabilized, especially when the training dataset is limited
