There are many variations of Generative Adversarial Networks. The GAN Zoo has grown so large that merely scrolling through all the papers built on this concept is a workout for your finger. Jokes aside, the core ideas behind GANs changed the world of deep learning. Their simple architecture, two neural networks competing against each other, opened a completely new chapter in the history of neural networks.
Adversarial training, however, was not a new idea even at the moment GANs emerged. It can be traced back to machine learning legend Arthur L. Samuel. His two main papers (Samuel 1959; Samuel 1967) are landmarks in Artificial Intelligence. In his 1959 paper, which explored computer checkers, he described the problem of an agent playing a game of checkers against itself. This is a typical example of an adversarial process. Ian Goodfellow, the inventor of GANs, defined the adversarial process as "training a model in a worst-case scenario, with inputs chosen by an adversary".
Check out this article if you want to learn how exactly GANs use this approach. So far in our GAN journey, we have had a chance to explore and implement several architectures. Apart from the standard GAN, we explored DCGAN and Adversarial Autoencoders. In the previous article, we got familiar with a special case in this niche – Cycle GAN. These networks are not used for generating data from scratch but rather for transferring certain characteristics of images from one domain to images of another domain. This problem is called Unpaired Image-to-Image Translation. Before we dive into the implementation, let's remind ourselves a little bit about the nature and structure of this type of neural network.
Cycle GAN Architecture
In order to solve the problem of transferring the style and characteristics of images from one domain to another and vice versa, we create two Generator–Discriminator pairs. The first Generator (G) has the task of transforming images from the X domain to the Y domain (G: X → Y), and the second Generator (F) has the task of transferring images from the Y domain to the X domain (F: Y → X). Their respective adversarial Discriminators are Dy and Dx.
Discriminator Dy pushes generator G to translate inputs from X into outputs that look like images from Y. The second discriminator, Dx, forces generator F to translate inputs from Y into outputs that look like images from X. Here is what the architecture looks like:
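Put compactly, the four networks pair up like this:

Generator G: X → Y, trained against discriminator Dy
Generator F: Y → X, trained against discriminator Dx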
However, this process would be highly unstable if we left it at that, meaning that the mapping of images from one domain to the other needs to be regularized. This is done using two cycle-consistency losses. These losses guarantee that an image transferred from one domain to the other, and back again, stays (approximately) the same.
The first loss is called the forward cycle-consistency loss (x → G(x) → F(G(x)) ≈ x), and the second one is called the backward cycle-consistency loss (y → F(y) → G(F(y)) ≈ y). Using this mechanism, Cycle GAN is actually pushing its generators to be consistent with each other. If you want to learn more about the theory and math behind Cycle GAN, check out this article.
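For reference, the cycle-consistency objective from the original Cycle GAN paper is an L1 penalty on both reconstructions:

\mathcal{L}_{cyc}(G, F) =
  \mathbb{E}_{x \sim p_{data}(x)}\left[\lVert F(G(x)) - x \rVert_1\right] +
  \mathbb{E}_{y \sim p_{data}(y)}\left[\lVert G(F(y)) - y \rVert_1\right]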
Technologies, Dataset and Helpers
Now that we have had a small recap of how Cycle GAN works, let's look at the technologies and data we will use in this article. Apart from that, we will explore one helper class that is used for image manipulation. In this implementation, we are using Python 3.6.5, TensorFlow 1.10.0 and Keras 2.1.6. If you need help with TensorFlow installation, follow this article.
Regarding the dataset, we will use one of the datasets provided by the authors of the architecture – monet2photo. This dataset contains paintings by Monet and photos of landscapes. We will transfer Monet's style to landscape photos and make Monet's paintings more real, so to say. This and other datasets of this type can be downloaded from here.
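Once extracted, the dataset has the folder layout that our helper class below relies on:

monet2photo/
    trainA/   - Monet paintings used for training
    trainB/   - landscape photos used for training
    testA/    - Monet paintings used for testing
    testB/    - landscape photos used for testing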
In some of the previous articles, we used a helper class for image manipulation. In this one, we use a similar class. It is more complicated than in previous examples, but it is still quite straightforward:
import os
import numpy as np
from glob import glob
import scipy.misc    # imread/imresize require scipy <= 1.1 and Pillow
import matplotlib.pyplot as plt

class ImageHelper(object):

    def save_image(self, plot_images, epoch):
        # Save a 2x2 grid: original and transformed image for each domain
        os.makedirs('cycle_gan_images', exist_ok=True)
        titles = ['Original', 'Transformed']
        fig, axs = plt.subplots(2, 2)
        cnt = 0
        for i in range(2):
            for j in range(2):
                axs[i, j].imshow(plot_images[cnt])
                axs[i, j].set_title(titles[j])
                axs[i, j].axis('off')
                cnt += 1
        fig.savefig("cycle_gan_images/{}".format(epoch))
        plt.close()

    def plot20(self, images_paths_array):
        # Plot the first 20 images from the given list of paths
        plt.figure(figsize=(10, 8))
        for i in range(20):
            img = plt.imread(images_paths_array[i])
            plt.subplot(4, 5, i + 1)
            plt.imshow(img)
            plt.title(img.shape)
            plt.xticks([])
            plt.yticks([])
        plt.tight_layout()
        plt.show()

    def load_image(self, path):
        return scipy.misc.imread(path, mode='RGB').astype(np.float)

    def load_testing_image(self, path):
        self.img_res = (128, 128)
        path_X = glob(path + "/testA/*.jpg")
        path_Y = glob(path + "/testB/*.jpg")
        image_X = np.random.choice(path_X, 1)
        image_Y = np.random.choice(path_Y, 1)
        img_X = self.load_image(image_X[0])
        img_X = scipy.misc.imresize(img_X, self.img_res)
        # Random horizontal flip for augmentation
        if np.random.random() > 0.5:
            img_X = np.fliplr(img_X)
        # Normalize to [-1, 1]
        img_X = np.array(img_X) / 127.5 - 1.
        img_X = np.expand_dims(img_X, axis=0)
        img_Y = self.load_image(image_Y[0])
        img_Y = scipy.misc.imresize(img_Y, self.img_res)
        if np.random.random() > 0.5:
            img_Y = np.fliplr(img_Y)
        img_Y = np.array(img_Y) / 127.5 - 1.
        img_Y = np.expand_dims(img_Y, axis=0)
        return (img_X, img_Y)

    def load_batch_of_train_images(self, path, batch_size=1):
        self.img_res = (128, 128)
        path_X = glob(path + "/trainA/*.jpg")
        path_Y = glob(path + "/trainB/*.jpg")
        self.n_batches = int(min(len(path_X), len(path_Y)) / batch_size)
        total_samples = self.n_batches * batch_size
        path_X = np.random.choice(path_X, total_samples, replace=False)
        path_Y = np.random.choice(path_Y, total_samples, replace=False)
        for i in range(self.n_batches - 1):
            batch_A = path_X[i * batch_size:(i + 1) * batch_size]
            batch_B = path_Y[i * batch_size:(i + 1) * batch_size]
            imgs_A, imgs_B = [], []
            for img_A, img_B in zip(batch_A, batch_B):
                img_A = self.load_image(img_A)
                img_B = self.load_image(img_B)
                img_A = scipy.misc.imresize(img_A, self.img_res)
                img_B = scipy.misc.imresize(img_B, self.img_res)
                imgs_A.append(img_A)
                imgs_B.append(img_B)
            # Normalize the whole batch to [-1, 1]
            imgs_A = np.array(imgs_A) / 127.5 - 1.
            imgs_B = np.array(imgs_B) / 127.5 - 1.
            yield imgs_A, imgs_B
Here is the explanation of the functions provided by this class:
- save_image – Saves images produced during training. The original and translated images are passed in, and the method arranges them in a grid and writes the figure to disk.
- plot20 – Plots 20 images from the defined path.
- load_image – In essence, this method is just a wrapper around scipy.misc.imread, i.e. it loads an image into memory from the given location.
- load_testing_image – Loads one random image per domain from the test folders.
- load_batch_of_train_images – Loads batches of training images (from the train folders) for both domains; see the quick sketch below.
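As a quick sanity check, here is a minimal sketch of how the helper can be exercised on its own (assuming the monet2photo dataset described above is extracted next to the script):

helper = ImageHelper()

# One random test image per domain, resized to 128x128 and normalized to [-1, 1]
img_X, img_Y = helper.load_testing_image("monet2photo")
print(img_X.shape, img_Y.shape)   # (1, 128, 128, 3) (1, 128, 128, 3)

# Training images arrive in batches from a generator
for imgs_A, imgs_B in helper.load_batch_of_train_images("monet2photo", batch_size=1):
    print(imgs_A.shape, imgs_B.shape)   # (1, 128, 128, 3) (1, 128, 128, 3)
    break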
Implementation
The implementation of Cycle GAN is located inside a Python class with the same name – CycleGAN. Note that this is one large class; we will go through the important parts of the implementation separately. Ready? Ok, here it is:
from __future__ import print_function, division

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Keras modules
from keras.layers import Input, LeakyReLU, UpSampling2D, Conv2D, Concatenate
from keras_contrib.layers.normalization import InstanceNormalization
from keras.models import Model
from keras.optimizers import Adam

class CycleGAN():

    def __init__(self, image_shape, cycle_lambda, image_helper):
        self.optimizer = Adam(0.0002, 0.5)
        self.cycle_lambda = cycle_lambda
        self.id_lambda = 0.1 * self.cycle_lambda
        self._image_helper = image_helper
        self.img_shape = image_shape

        # Calculate the output shape of the PatchGAN discriminator
        patch = int(self.img_shape[0] / 2**4)
        self.disc_patch = (patch, patch, 1)

        print("Build Discriminators...")
        self._discriminatorX = self._build_discriminator_model()
        self._compile_discriminator_model(self._discriminatorX)
        self._discriminatorY = self._build_discriminator_model()
        self._compile_discriminator_model(self._discriminatorY)

        print("Build Generators...")
        self._generatorXY = self._build_generator_model()
        self._generatorYX = self._build_generator_model()

        print("Build GAN...")
        self._build_and_compile_gan()

    def train(self, epochs, batch_size, train_data_path):
        # PatchGAN ground truths: a grid of ones for real, zeros for fake
        real = np.ones((batch_size,) + self.disc_patch)
        fake = np.zeros((batch_size,) + self.disc_patch)
        history = []
        for epoch in range(epochs):
            for i, (imagesX, imagesY) in enumerate(self._image_helper.load_batch_of_train_images(train_data_path, batch_size)):
                print("---------------------------------------------------------")
                print("******************Epoch {} | Batch {}***************************".format(epoch, i))
                print("Generate images...")
                fakeY = self._generatorXY.predict(imagesX)
                fakeX = self._generatorYX.predict(imagesY)

                print("Train Discriminators...")
                discriminatorX_loss_real = self._discriminatorX.train_on_batch(imagesX, real)
                discriminatorX_loss_fake = self._discriminatorX.train_on_batch(fakeX, fake)
                discriminatorX_loss = 0.5 * np.add(discriminatorX_loss_real, discriminatorX_loss_fake)
                discriminatorY_loss_real = self._discriminatorY.train_on_batch(imagesY, real)
                discriminatorY_loss_fake = self._discriminatorY.train_on_batch(fakeY, fake)
                discriminatorY_loss = 0.5 * np.add(discriminatorY_loss_real, discriminatorY_loss_fake)
                mean_discriminator_loss = 0.5 * np.add(discriminatorX_loss, discriminatorY_loss)

                print("Train Generators...")
                generator_loss = self.gan.train_on_batch([imagesX, imagesY],
                                                         [real, real,
                                                          imagesX, imagesY,
                                                          imagesX, imagesY])

                print("Discriminator loss: {}".format(mean_discriminator_loss[0]))
                print("Generator loss: {}".format(generator_loss[0]))
                print("---------------------------------------------------------")
                history.append({"D": mean_discriminator_loss[0], "G": generator_loss[0]})
                if i % 100 == 0:
                    self._save_images("{}_{}".format(epoch, i), train_data_path)
        self._plot_loss(history)

    def _encode_layer(self, input_layer, filters):
        # Downsampling block: strided convolution -> LeakyReLU -> InstanceNorm
        layer = Conv2D(filters, kernel_size=4, strides=2, padding='same')(input_layer)
        layer = LeakyReLU(alpha=0.2)(layer)
        layer = InstanceNormalization()(layer)
        return layer

    def _decode_transform_layer(self, input_layer, forward_layer, filters):
        # Upsampling block with a U-Net style skip connection
        layer = UpSampling2D(size=2)(input_layer)
        layer = Conv2D(filters, kernel_size=4, strides=1, padding='same', activation='relu')(layer)
        layer = InstanceNormalization()(layer)
        layer = Concatenate()([layer, forward_layer])
        return layer

    def _build_generator_model(self):
        generator_input = Input(shape=self.img_shape)
        print("Build Encoder...")
        encode_layer_1 = self._encode_layer(generator_input, 32)
        encode_layer_2 = self._encode_layer(encode_layer_1, 64)
        encode_layer_3 = self._encode_layer(encode_layer_2, 128)
        encode_layer_4 = self._encode_layer(encode_layer_3, 256)
        print("Build Transformer - Decoder...")
        decode_transform_layer1 = self._decode_transform_layer(encode_layer_4, encode_layer_3, 128)
        decode_transform_layer2 = self._decode_transform_layer(decode_transform_layer1, encode_layer_2, 64)
        decode_transform_layer3 = self._decode_transform_layer(decode_transform_layer2, encode_layer_1, 32)
        generator_model = UpSampling2D(size=2)(decode_transform_layer3)
        generator_model = Conv2D(self.img_shape[2], kernel_size=4, strides=1, padding='same', activation='tanh')(generator_model)
        final_generator_model = Model(generator_input, generator_model)
        final_generator_model.summary()
        return final_generator_model

    def _build_discriminator_model(self):
        discriminator_input = Input(shape=self.img_shape)
        discriminator_model = Conv2D(64, kernel_size=4, strides=2, padding='same')(discriminator_input)
        discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
        discriminator_model = Conv2D(128, kernel_size=4, strides=2, padding='same')(discriminator_model)
        discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
        discriminator_model = InstanceNormalization()(discriminator_model)
        discriminator_model = Conv2D(256, kernel_size=4, strides=2, padding='same')(discriminator_model)
        discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
        discriminator_model = InstanceNormalization()(discriminator_model)
        discriminator_model = Conv2D(512, kernel_size=4, strides=2, padding='same')(discriminator_model)
        discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
        discriminator_model = InstanceNormalization()(discriminator_model)
        # One-channel patch output: each cell judges one patch of the input
        discriminator_model = Conv2D(1, kernel_size=4, strides=1, padding='same')(discriminator_model)
        return Model(discriminator_input, discriminator_model)

    def _compile_discriminator_model(self, model):
        model.compile(loss='binary_crossentropy',
                      optimizer=self.optimizer,
                      metrics=['accuracy'])
        model.summary()

    def _build_and_compile_gan(self):
        imageX = Input(shape=self.img_shape)
        imageY = Input(shape=self.img_shape)
        fakeY = self._generatorXY(imageX)
        fakeX = self._generatorYX(imageY)
        # Cycle: translate to the other domain and back again
        reconstructedX = self._generatorYX(fakeY)
        reconstructedY = self._generatorXY(fakeX)
        # Identity mapping: feed each generator an image from its target domain
        imageX_id = self._generatorYX(imageX)
        imageY_id = self._generatorXY(imageY)
        # Freeze the discriminators while the combined model trains the generators
        self._discriminatorX.trainable = False
        self._discriminatorY.trainable = False
        validX = self._discriminatorX(fakeX)
        validY = self._discriminatorY(fakeY)
        self.gan = Model(inputs=[imageX, imageY],
                         outputs=[validX, validY,
                                  reconstructedX, reconstructedY,
                                  imageX_id, imageY_id])
        self.gan.compile(loss=['mse', 'mse',
                               'mae', 'mae',
                               'mae', 'mae'],
                         loss_weights=[1, 1,
                                       self.cycle_lambda, self.cycle_lambda,
                                       self.id_lambda, self.id_lambda],
                         optimizer=self.optimizer)
        self.gan.summary()

    def _save_images(self, epoch, path):
        (img_X, img_Y) = self._image_helper.load_testing_image(path)
        fake_Y = self._generatorXY.predict(img_X)
        fake_X = self._generatorYX.predict(img_Y)
        plot_images = np.concatenate([img_X, fake_Y, img_Y, fake_X])
        # Rescale from [-1, 1] back to [0, 1]
        plot_images = 0.5 * plot_images + 0.5
        self._image_helper.save_image(plot_images, epoch)

    def _plot_loss(self, history):
        hist = pd.DataFrame(history)
        plt.figure(figsize=(20, 5))
        for colnm in hist.columns:
            plt.plot(hist[colnm], label=colnm)
        plt.legend()
        plt.ylabel("loss")
        plt.xlabel("epochs")
        plt.show()
That is a lot of code, right? Let's split it into smaller chunks and check out the most important parts. There are two main entry points to this class – the constructor and the train method. Everything starts with the constructor, where the whole model is created. So, let's explore it first:
def __init__(self, image_shape, cycle_lambda, image_helper):
    self.optimizer = Adam(0.0002, 0.5)
    self.cycle_lambda = cycle_lambda
    self.id_lambda = 0.1 * self.cycle_lambda
    self._image_helper = image_helper
    self.img_shape = image_shape

    # Calculate the output shape of the PatchGAN discriminator
    patch = int(self.img_shape[0] / 2**4)
    self.disc_patch = (patch, patch, 1)

    print("Build Discriminators...")
    self._discriminatorX = self._build_discriminator_model()
    self._compile_discriminator_model(self._discriminatorX)
    self._discriminatorY = self._build_discriminator_model()
    self._compile_discriminator_model(self._discriminatorY)

    print("Build Generators...")
    self._generatorXY = self._build_generator_model()
    self._generatorYX = self._build_generator_model()

    print("Build GAN...")
    self._build_and_compile_gan()
As you can see, the two discriminators are built and compiled first. This is done using the _build_discriminator_model and _compile_discriminator_model methods. Then two generator models are created with the help of the _build_generator_model method. Finally, all these graphs are connected together into the architecture described in the previous article. Of course, inside the constructor, class fields like cycle_lambda, _image_helper and optimizer are initialized as well. Now, let's see those helper methods that we used to build our model. First, let's explore the _build_discriminator_model method:
def _build_discriminator_model(self):
    discriminator_input = Input(shape=self.img_shape)
    discriminator_model = Conv2D(64, kernel_size=4, strides=2, padding='same')(discriminator_input)
    discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
    discriminator_model = Conv2D(128, kernel_size=4, strides=2, padding='same')(discriminator_model)
    discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
    discriminator_model = InstanceNormalization()(discriminator_model)
    discriminator_model = Conv2D(256, kernel_size=4, strides=2, padding='same')(discriminator_model)
    discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
    discriminator_model = InstanceNormalization()(discriminator_model)
    discriminator_model = Conv2D(512, kernel_size=4, strides=2, padding='same')(discriminator_model)
    discriminator_model = LeakyReLU(alpha=0.2)(discriminator_model)
    discriminator_model = InstanceNormalization()(discriminator_model)
    discriminator_model = Conv2D(1, kernel_size=4, strides=1, padding='same')(discriminator_model)
    return Model(discriminator_input, discriminator_model)
The discriminator, in this case, is a standard convolutional neural network. Several convolutional layers detect features and, based on them, decide whether the input image comes from the desired domain. Simple as that. Building the generator is a little more complicated, so we will now go through three methods: _encode_layer, _decode_transform_layer and _build_generator_model.
def _encode_layer(self, input_layer, filters):
    layer = Conv2D(filters, kernel_size=4, strides=2, padding='same')(input_layer)
    layer = LeakyReLU(alpha=0.2)(layer)
    layer = InstanceNormalization()(layer)
    return layer

def _decode_transform_layer(self, input_layer, forward_layer, filters):
    layer = UpSampling2D(size=2)(input_layer)
    layer = Conv2D(filters, kernel_size=4, strides=1, padding='same', activation='relu')(layer)
    layer = InstanceNormalization()(layer)
    layer = Concatenate()([layer, forward_layer])
    return layer

def _build_generator_model(self):
    generator_input = Input(shape=self.img_shape)
    print("Build Encoder...")
    encode_layer_1 = self._encode_layer(generator_input, 32)
    encode_layer_2 = self._encode_layer(encode_layer_1, 64)
    encode_layer_3 = self._encode_layer(encode_layer_2, 128)
    encode_layer_4 = self._encode_layer(encode_layer_3, 256)
    print("Build Transformer - Decoder...")
    decode_transform_layer1 = self._decode_transform_layer(encode_layer_4, encode_layer_3, 128)
    decode_transform_layer2 = self._decode_transform_layer(decode_transform_layer1, encode_layer_2, 64)
    decode_transform_layer3 = self._decode_transform_layer(decode_transform_layer2, encode_layer_1, 32)
    generator_model = UpSampling2D(size=2)(decode_transform_layer3)
    generator_model = Conv2D(self.img_shape[2], kernel_size=4, strides=1, padding='same', activation='tanh')(generator_model)
    final_generator_model = Model(generator_input, generator_model)
    final_generator_model.summary()
    return final_generator_model
Essentially, our generator is built from a so-called encoding part, a transformer part and a decoder part. This can be visualized like this:
So, we need several encoding layers for down-sampling, several transformation layers for applying styles, and several upsampling or decoding layers. These layers are created in the _encode_layer and _decode_transform_layer methods. In the first one, the encoding blocks are built using strided convolutional layers; in the second one, the transformation and decoding blocks are created using upsampling layers with skip connections. All of these are connected inside the _build_generator_model method.
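To make the encoder-decoder symmetry concrete, here is a shape trace for a (128, 128, 3) input, derived from the layer definitions above (note that Concatenate doubles the channel count at every decode step):

encode_layer_1:          (64, 64, 32)     Conv2D, stride 2
encode_layer_2:          (32, 32, 64)
encode_layer_3:          (16, 16, 128)
encode_layer_4:          (8, 8, 256)      bottleneck
decode_transform_layer1: (16, 16, 256)    upsample + Conv2D(128) + skip from encode_layer_3
decode_transform_layer2: (32, 32, 128)    upsample + Conv2D(64) + skip from encode_layer_2
decode_transform_layer3: (64, 64, 64)     upsample + Conv2D(32) + skip from encode_layer_1
output:                  (128, 128, 3)    upsample + Conv2D(3, activation='tanh')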
In the end, all created generators and discriminators are connected using the _build_and_compile_gan method:
def _build_and_compile_gan(self):
    imageX = Input(shape=self.img_shape)
    imageY = Input(shape=self.img_shape)
    fakeY = self._generatorXY(imageX)
    fakeX = self._generatorYX(imageY)
    # Cycle: translate to the other domain and back again
    reconstructedX = self._generatorYX(fakeY)
    reconstructedY = self._generatorXY(fakeX)
    # Identity mapping: feed each generator an image from its target domain
    imageX_id = self._generatorYX(imageX)
    imageY_id = self._generatorXY(imageY)
    # Freeze the discriminators while the combined model trains the generators
    self._discriminatorX.trainable = False
    self._discriminatorY.trainable = False
    validX = self._discriminatorX(fakeX)
    validY = self._discriminatorY(fakeY)
    self.gan = Model(inputs=[imageX, imageY],
                     outputs=[validX, validY,
                              reconstructedX, reconstructedY,
                              imageX_id, imageY_id])
    self.gan.compile(loss=['mse', 'mse',
                           'mae', 'mae',
                           'mae', 'mae'],
                     loss_weights=[1, 1,
                                   self.cycle_lambda, self.cycle_lambda,
                                   self.id_lambda, self.id_lambda],
                     optimizer=self.optimizer)
    self.gan.summary()
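A few things are worth noting here. The discriminators are frozen inside this combined model, so only the generators are updated when it trains. The identity outputs (imageX_id, imageY_id) encourage each generator to leave an image unchanged when it already belongs to that generator's output domain, which helps preserve colors. Putting the weights together, the generators minimize roughly the following (a sketch in comment form, using the compiled weights above):

# total_generator_loss =
#     1.0          * mse(Dx(F(y)), real) + 1.0          * mse(Dy(G(x)), real)  # adversarial
#   + cycle_lambda * mae(F(G(x)), x)     + cycle_lambda * mae(G(F(y)), y)      # cycle-consistency
#   + id_lambda    * mae(F(x), x)        + id_lambda    * mae(G(y), y)         # identity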
Finally, let's examine the only public method of this class – the train method:
def train(self, epochs, batch_size, train_data_path):
    # PatchGAN ground truths: a grid of ones for real, zeros for fake
    real = np.ones((batch_size,) + self.disc_patch)
    fake = np.zeros((batch_size,) + self.disc_patch)
    history = []
    for epoch in range(epochs):
        for i, (imagesX, imagesY) in enumerate(self._image_helper.load_batch_of_train_images(train_data_path, batch_size)):
            print("---------------------------------------------------------")
            print("******************Epoch {} | Batch {}***************************".format(epoch, i))
            print("Generate images...")
            fakeY = self._generatorXY.predict(imagesX)
            fakeX = self._generatorYX.predict(imagesY)

            print("Train Discriminators...")
            discriminatorX_loss_real = self._discriminatorX.train_on_batch(imagesX, real)
            discriminatorX_loss_fake = self._discriminatorX.train_on_batch(fakeX, fake)
            discriminatorX_loss = 0.5 * np.add(discriminatorX_loss_real, discriminatorX_loss_fake)
            discriminatorY_loss_real = self._discriminatorY.train_on_batch(imagesY, real)
            discriminatorY_loss_fake = self._discriminatorY.train_on_batch(fakeY, fake)
            discriminatorY_loss = 0.5 * np.add(discriminatorY_loss_real, discriminatorY_loss_fake)
            mean_discriminator_loss = 0.5 * np.add(discriminatorX_loss, discriminatorY_loss)

            print("Train Generators...")
            generator_loss = self.gan.train_on_batch([imagesX, imagesY],
                                                     [real, real,
                                                      imagesX, imagesY,
                                                      imagesX, imagesY])

            print("Discriminator loss: {}".format(mean_discriminator_loss[0]))
            print("Generator loss: {}".format(generator_loss[0]))
            print("---------------------------------------------------------")
            history.append({"D": mean_discriminator_loss[0], "G": generator_loss[0]})
            if i % 100 == 0:
                self._save_images("{}_{}".format(epoch, i), train_data_path)
    self._plot_loss(history)
In this method, the model is put to work along with the image helper. First, we define the ground-truth variables used during training (real and fake). Once that is done, we read a batch of images from the training folders and push them through the generator models to obtain translated images. After that, we proceed with training the discriminators. Once the discriminators are trained, we train the generators, and we repeat the whole thing for the defined number of epochs.
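To make the ground truths concrete: with batch_size=1 and 128×128 inputs, disc_patch is (8, 8, 1), so the targets are small grids of patch decisions rather than single labels (a quick sketch):

import numpy as np

disc_patch = (8, 8, 1)                # int(128 / 2**4) = 8
real = np.ones((1,) + disc_patch)     # shape (1, 8, 8, 1): every patch labeled real
fake = np.zeros((1,) + disc_patch)    # shape (1, 8, 8, 1): every patch labeled fake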
Usage
Because we extracted image handling into a separate class and the complicated parts of model creation into private methods, the CycleGAN class is quite easy to use:
import numpy as np
from glob import glob

from image_helper_cycle_gan import ImageHelper
from cycle_gan import CycleGAN

image_helper = ImageHelper()

print("Plotting the images...")
filenames = np.array(glob('monet2photo/testA/*.jpg'))
image_helper.plot20(filenames)

generative_adversarial_network = CycleGAN((128, 128, 3), 10.0, image_helper)
generative_adversarial_network.train(100, 1, "monet2photo")
First, we create an ImageHelper instance, which we inject into the CycleGAN object. After that, we just run the training method. If you want to use any other dataset, all you have to do is download it and update the path that points to it. Note that we used a 128×128 image size for processing; if you want to experiment with another size, keep in mind that img_res is also set inside ImageHelper and needs to match.
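For example, switching to another of the authors' datasets with the same folder layout, such as summer2winter_yosemite, would look like this (a hypothetical variation, assuming you have downloaded that dataset as well):

filenames = np.array(glob('summer2winter_yosemite/testA/*.jpg'))
image_helper.plot20(filenames)

generative_adversarial_network = CycleGAN((128, 128, 3), 10.0, image_helper)
generative_adversarial_network.train(100, 1, "summer2winter_yosemite")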
Results
So let's see how our solution to the Unpaired Image-to-Image Translation problem turned out. As expected, the results at the beginning were catastrophic. In our first epoch, we got these results:
However, already by the fifth epoch, we were able to see a lot of improvement:
When training reached the 40th epoch, even on small 128×128 images, we were able to see considerable improvement:
Notice how the sky in the first transformed image has a more realistic feel to it, and how you can make out Monet's quick brush strokes in the second transferred image. The results got even better by epoch 70. Check it out:
Once again, notice the desired change in both transformed images. Finally, here is what we got in epoch 100:
Conclusion
In this article, we applied some of the theoretical and mathematical knowledge from the previous article and implemented the Cycle GAN architecture using Python, TensorFlow and Keras.
Thank you for reading!
This article is part of the Artificial Neural Networks Series, which you can check out here.
Read more posts from the author at Rubik’s Code.