At the ITU AI Center, one of our current projects in the field of Deep Learning and Computer Vision involves creating new images with AI algorithms. We design new deep neural network models to:
(i) transfer the style of one image (e.g. a miniature) to a given image;
(ii) fill in missing regions or holes in data, particularly in visual data;
(iii) generate high-resolution images from low-resolution images.
Image inpainting is a method for filling missing or corrupted regions of an image with plausible content. To perform this task, we use GANs (Generative Adversarial Networks), in which two adversarial networks play a zero-sum game: a generator produces candidate images while a discriminator tries to tell them apart from real ones. GANs enable us to generate images, text, etc. at a quality close to human-made.
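As a minimal sketch of the inpainting setup (the mask handling below is an illustrative assumption, not our exact pipeline), a binary mask marks the missing pixels, the generator proposes values everywhere, and known pixels are copied straight from the damaged input so that only the holes are filled:

```python
import numpy as np

def complete_image(image, mask, generated):
    """Combine a damaged image with generator output.

    image     : H x W array with arbitrary values in the holes
    mask      : H x W binary array, 1 where pixels are missing
    generated : H x W generator output (hypothetical values)
    """
    # Keep known pixels; take generator output only inside the holes.
    return (1 - mask) * image + mask * generated

# Toy 2x2 example: only the top-left pixel is missing.
image = np.array([[0.0, 0.5], [0.5, 1.0]])
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
generated = np.full((2, 2), 0.7)

completed = complete_image(image, mask, generated)
```

In a trained model the `generated` array would come from the generator network; the same masking trick is also commonly used to restrict the reconstruction loss to the hole region.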
The GIF shown above presents our model's results. Missing regions are completed with two content-aware discriminators: one enforces global consistency across the whole image, while the other enforces patch-wise consistency around the filled region.
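The two-discriminator idea can be sketched as follows (the scoring functions, weights, and hole layout here are placeholders invented for illustration, not our trained networks): the global discriminator scores the entire completed image, the local one scores only a crop around the filled hole, and the generator is trained against a weighted sum of both scores:

```python
import numpy as np

def crop(image, top, left, size):
    """Extract the patch around the completed hole for the local discriminator."""
    return image[top:top + size, left:left + size]

def combined_adversarial_loss(d_global, d_local, completed, hole,
                              w_global=1.0, w_local=1.0):
    """Weighted sum of global and patch-wise discriminator scores.

    d_global, d_local : hypothetical discriminator score functions
    completed         : completed image (H x W)
    hole              : (top, left, size) of the filled region
    """
    patch = crop(completed, *hole)
    # The generator wants both discriminators to score the completion
    # as real, so it minimises the negative combined score.
    return -(w_global * d_global(completed) + w_local * d_local(patch))

# Dummy discriminators: mean intensity stands in for a "realness" score.
d_global = lambda img: float(img.mean())
d_local = lambda patch: float(patch.mean())

completed = np.ones((8, 8)) * 0.5
loss = combined_adversarial_loss(d_global, d_local, completed, hole=(2, 2, 4))
```

Weighting the two terms separately lets training trade off overall scene plausibility against fine local texture in the completed region.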
Neural style transfer is a technique for generating an image in the style of another. The neural-style algorithm takes a content image and a style image as input, and returns the content image rendered as if it were painted in the artistic style of the style image. A sample result from our model, which produces a miniature-styled image, is shown above.
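A common way the neural-style algorithm measures style (whether our model uses exactly this loss is an assumption here) is through the Gram matrix of convolutional feature maps, which records how strongly feature channels co-activate regardless of spatial position:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a C x H x W feature tensor.

    Entry (i, j) is the inner product of channels i and j,
    normalised by the number of spatial positions.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)

def style_loss(feat_generated, feat_style):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean(
        (gram_matrix(feat_generated) - gram_matrix(feat_style)) ** 2))

# Toy feature maps: 3 channels on a 4x4 grid (random stand-ins for
# activations that would normally come from a pretrained CNN).
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4, 4))
g = gram_matrix(feats)
```

In practice the features come from several layers of a pretrained network, and the output image is optimised so that its Gram matrices match the style image's while its raw features stay close to the content image's.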