Kevin Widholm & Gisela Vallejo
9 November 2018
8 min read

GANs – Applied on Novatec employees

What does the average Novatec employee look like? In this post we take a deeper look at generative models in order to answer this question. Generative Adversarial Nets (GANs) can be understood as an adversarial process for estimating generative models. Here, we show how we trained such a GAN model in Python so that it creates fake images of Novatec employees. But first, let's take a look at the main idea behind GANs; then we'll return to our question.

What are Generative Adversarial Networks?

According to Yann LeCun, Facebook's AI research director, Generative Adversarial Networks are "the most interesting idea in the last 10 years in ML" (2016). GANs were introduced by Ian Goodfellow and his colleagues in the 2014 paper "Generative Adversarial Nets" (https://arxiv.org/abs/1406.2661). Their main feature is a deep neural net architecture comprised of two separate nets, defined by default as multi-layer perceptrons, that are pitted against each other.

How do GANs work?

As the name suggests, a GAN system consists of two adversarial models: a generator and a discriminator. The generative part learns the distribution of the data and tries to fool the discriminator by creating fake samples. Its counterpart, the discriminator, classifies its input as a real or a fake sample. In other words, the discriminative part learns the boundary between the classes and evaluates its input data (alternating between samples from the generator and from the real dataset) for authenticity.

It's a min-max game: the generator wants to minimize the success of the discriminator, while the discriminator tries to maximize it. A great metaphor for understanding the mechanism of GANs is Goodfellow's forger-detective metaphor: the generator is a team of forgers trying to produce fake paintings, while the discriminator is a team of detectives trying to tell the difference between real and fake. The forgers never get to see the real paintings, only the feedback of the detectives. They are blind forgers.
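Formally, Goodfellow et al. describe this as a two-player minimax game over a value function V(D, G), where D(x) is the discriminator's estimated probability that x is real and G(z) is the generator's output for noise z:

\[
\min_G \max_D V(D, G) \;=\; \mathbb{E}_{x \sim p_\mathrm{data}(x)}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]

The discriminator maximizes V by pushing D(x) towards 1 for real samples and D(G(z)) towards 0 for generated ones; the generator minimizes V by producing samples for which D(G(z)) approaches 1.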

Over time both models get better, until the generator becomes a master forger and the discriminator can no longer tell whether an input is real or fake. At this point, the discriminator should output a probability of around 50% for every input. The trained generator has then become very good at producing realistic outputs that cannot be distinguished from the training data.

Implementation of the GAN model

As an implementation example, we developed an application that generates fake images of Novatec employees. To do so, we fed real Novatec employee images into the discriminator and trained the generator until it was able to create fake images on its own.

Setting up

As a starting point for this model, we used the code from this repository: https://github.com/FelixMohr/Deep-learning-with-Python/blob/master/DCGAN-face-creation.ipynb. Let's take a look at the code and see how to build a GAN model.

The dataset we used was a collection of pictures of Novatec employees. To allow for a reasonable training time, we downscaled the images to 40 x 40 pixels. Our hyperparameters can be found in the following table:

Hyperparameter                             Value
Activation function                        leaky ReLU
Dropout rate                               0.6
Feature maps per filter (Discriminator)    256, 128, 64
Feature maps per filter (Generator)        256, 128, 64, 3
Filter width                               5
Mini-batch size                            16
Random noise                               16

At the time we ran our experiments, the leaky ReLU activation wasn't yet available in TensorFlow, so we used the self-implemented function from the repository. We also ran experiments with the standard ReLU, but then the generator only produced black squares.
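For reference, such a hand-rolled leaky ReLU is a one-liner. A minimal sketch in TensorFlow 1.x style, with the slope of 0.2 chosen purely for illustration:

```python
import tensorflow as tf

def leaky_relu(x, alpha=0.2):
    # For x > 0 this returns x, otherwise alpha * x, so negative
    # inputs keep a small gradient instead of dying
    return tf.maximum(alpha * x, x)
```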

Discriminator

First, we implement our detective: the discriminator. On the one hand, it takes real employee images from the dataset as input; on the other hand, it is also fed fake images created by our generator. Here we don't apply the default MLP architecture but use a series of convolutions instead, which results in a special type of GAN called Deep Convolutional Generative Adversarial Network, or simply DCGAN. We use sigmoid as the activation function of the last layer to calculate the probability that the input image is a real profile picture of a Novatec employee.
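To make the architecture concrete, here is a minimal sketch of such a DCGAN discriminator, assuming TensorFlow 1.x and the hyperparameters from the table above; the variable scope name and the use of tf.layers are illustrative choices, not necessarily identical to the repository code. It reuses the leaky_relu helper from above:

```python
def discriminator(img_in, reuse=None):
    # Sketch of a DCGAN discriminator for 40 x 40 RGB images
    with tf.variable_scope("discriminator", reuse=reuse):
        x = tf.reshape(img_in, shape=[-1, 40, 40, 3])
        # Three strided 5x5 convolutions with the feature-map
        # counts from the hyperparameter table
        for n_filters in [256, 128, 64]:
            x = tf.layers.conv2d(x, filters=n_filters, kernel_size=5,
                                 strides=2, padding="same",
                                 activation=leaky_relu)
            x = tf.layers.dropout(x, rate=0.6, training=True)
        x = tf.layers.flatten(x)
        # Sigmoid output: probability that the input image is real
        return tf.layers.dense(x, units=1, activation=tf.nn.sigmoid)
```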

Generator

The generator, our blind forger, takes random noise and learns to transform this noise into images that look as similar as possible to the real training examples. Its parameters have to be tuned to train it effectively. For example, we included batch normalization and experimented with the dimensions of the different layers. Here is how we got our best result:
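Again as a hedged sketch rather than the exact repository code: a generator matching the table above can upsample a 16-dimensional noise vector to a 40 x 40 image via transposed convolutions with batch normalization. The sigmoid output activation is an assumption that keeps pixel values in [0, 1]:

```python
def generator(z, is_training=True):
    # Sketch: upsample noise from 5x5 -> 10x10 -> 20x20 -> 40x40
    with tf.variable_scope("generator"):
        x = tf.layers.dense(z, units=5 * 5 * 256, activation=leaky_relu)
        x = tf.reshape(x, shape=[-1, 5, 5, 256])
        for n_filters in [128, 64]:
            x = tf.layers.conv2d_transpose(x, filters=n_filters,
                                           kernel_size=5, strides=2,
                                           padding="same",
                                           activation=leaky_relu)
            x = tf.layers.batch_normalization(x, training=is_training)
        # Last layer maps to 3 colour channels
        return tf.layers.conv2d_transpose(x, filters=3, kernel_size=5,
                                          strides=2, padding="same",
                                          activation=tf.nn.sigmoid)
```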

Losses

After defining the discriminator and generator functions, we instantiate them and wire them together. It is necessary to create two discriminator objects: one for real images and one for the fake images the generator produces. The idea is that both discriminators share their variables, so the reuse flag has to be set to True for the second one. From these objects we calculate the losses. We need one loss for real images, for which the discriminator learns to output values near one, meaning the image is real. The other loss function is for fake images and values near zero; in that case the discriminator is confident the image comes from the generator and is fake.

The generator, in contrast, tries to make the discriminator fail, so that it assigns values near one to fake images. To be able to save and restore our model afterwards, we also create a saver object.
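Under the same assumptions as above (the placeholder names z and X are ours), the wiring and the two loss functions could look like this:

```python
z = tf.placeholder(tf.float32, shape=[None, 16])         # random noise
X = tf.placeholder(tf.float32, shape=[None, 40, 40, 3])  # real images

g = generator(z)
d_real = discriminator(X)              # discriminator on real images
d_fake = discriminator(g, reuse=True)  # shared variables, fake images

eps = 1e-8  # avoid log(0)
# Discriminator: push real images towards 1, fake images towards 0
loss_d = -tf.reduce_mean(tf.log(d_real + eps)
                         + tf.log(1.0 - d_fake + eps))
# Generator: fool the discriminator into outputting values
# near 1 for fake images
loss_g = -tf.reduce_mean(tf.log(d_fake + eps))

saver = tf.train.Saver()  # to save and restore the model later
```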

Training

Now let's train our net! We feed random noise into our generator, which learns to create employee images from that noise. We apply loss balancing, so that the generator and the discriminator learn at a similar pace and neither of them becomes much stronger than the other.
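A minimal training loop with one possible balancing scheme might look as follows; the optimizer, learning rate, 1.5 threshold and the next_batch helper are illustrative assumptions, not the repository's exact choices:

```python
import numpy as np

opt_d = tf.train.RMSPropOptimizer(1e-4).minimize(
    loss_d, var_list=tf.get_collection(
        tf.GraphKeys.TRAINABLE_VARIABLES, scope="discriminator"))
opt_g = tf.train.RMSPropOptimizer(1e-4).minimize(
    loss_g, var_list=tf.get_collection(
        tf.GraphKeys.TRAINABLE_VARIABLES, scope="generator"))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(60000):
        noise = np.random.uniform(-1.0, 1.0, size=[16, 16])
        batch = next_batch(16)  # hypothetical helper: real images
        feed = {X: batch, z: noise}
        d_loss, g_loss = sess.run([loss_d, loss_g], feed_dict=feed)
        # Loss balancing: skip the update of whichever network
        # is currently far ahead of its opponent
        if d_loss * 1.5 < g_loss:
            sess.run(opt_g, feed_dict=feed)  # only train generator
        elif g_loss * 1.5 < d_loss:
            sess.run(opt_d, feed_dict=feed)  # only train discriminator
        else:
            sess.run([opt_d, opt_g], feed_dict=feed)
```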

Results

Here are the images drawn by our generator after a training duration of 20 hours on a Tesla K80 GPU. Given that our training images were rescaled to 40 x 40 pixels, the generated images don't have a high resolution. Note that without a strong GPU, training would take much longer. Our next step will be to run experiments with higher-resolution images.

If you take a look at the images, you may recognize some new Novatec coworkers. They consist of mixed features of real Novatec employees. Imagine one of them as a teammate you'd like to spend your next coffee break with! Keep in mind that the neural network had never seen any people before, let alone any Novatec employee, and how little implementation effort it took to develop a model that knows what characterizes a Novatecler.

We, the ML community at Novatec, think this technique clearly has a lot of potential for amazing new applications in the future. And yes, we agree with Mr. LeCun: GANs are the most interesting idea of the last decade in ML.
