The code below is implemented in PyTorch for personal study.
Deokyun Kim, Minseon Kim, Gihyun Kwon and Dae-Shik Kim, Progressive Face Super-Resolution via Attention to Facial Landmark, BMVC 2019
Abstract: Face Super-Resolution (SR) is a subfield of the SR domain that specifically targets the reconstruction of face images. The main challenge of face SR is to restore essential facial features without distortion. We propose a novel face SR method that generates photo-realistic 8× super-resolved face images with fully retained facial details. To that end, we adopt a progressive training method, which allows stable training by splitting the network into successive steps, each producing output with a progressively higher resolution. We also propose a novel facial attention loss and apply it at each step to focus on restoring facial attributes in greater details by multiplying the pixel difference and heatmap values. Lastly, we propose a compressed version of the state-of-the-art face alignment network (FAN) for landmark heatmap extraction. With the proposed FAN, we can extract the heatmaps suitable for face SR and also reduce the overall training time. Experimental results verify that our method outperforms state-of-the-art methods in both qualitative and quantitative measurements, especially in perceptual quality.
Github(Official): https://github.com/DeokyunKim/Progressive-Face-Super-Resolution
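The facial attention loss described in the abstract weights the per-pixel SR/HR difference by landmark heatmap values, so errors near facial landmarks are penalized more heavily. A minimal NumPy sketch of that idea follows; the exact distance metric and the way the per-landmark heatmaps are reduced to one attention map (max is used here) are my assumptions, not taken from the paper:

```python
import numpy as np

def facial_attention_loss(sr, hr, heatmaps):
    """Sketch of a facial attention loss: multiply the pixel-wise
    difference between the super-resolved (sr) and ground-truth (hr)
    images by landmark heatmap values, then average.
    Illustrative shapes: sr/hr are (C, H, W), heatmaps is (K, H, W)
    for K facial landmarks."""
    # Reduce the K landmark heatmaps to a single attention map
    # (max over landmarks is an assumption for this sketch).
    attention = heatmaps.max(axis=0)          # (H, W)
    diff = np.abs(sr - hr)                    # (C, H, W), L1 difference
    # Errors at high-heatmap (landmark) pixels contribute more.
    return float((diff * attention).mean())
```

In the actual training loop this term would be added to the usual reconstruction and adversarial losses at each progressive step.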
To learn single-image super-resolution (SISR) with deep learning, I read the following representative SISR papers and implemented them in code.
Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network, CVPR 2016.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, CVPR 2017 Oral presentation.
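The core operation of the first paper above (efficient sub-pixel convolution) is the periodic rearrangement of low-resolution feature maps into a high-resolution image, exposed in PyTorch as `torch.nn.PixelShuffle`. A small NumPy sketch of that rearrangement, written for clarity rather than as the repository's actual implementation:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement (PixelShuffle) from Shi et al. 2016:
    reshape (C*r^2, H, W) feature maps into a (C, H*r, W*r) image.
    Follows the channel ordering used by torch.nn.PixelShuffle."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (c, r, r), then interleave the two
    # r-sized axes with the spatial axes.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)            # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

A convolution producing `C*r^2` channels followed by this shuffle upscales by a factor of `r` without transposed convolutions, which is why it is fast enough for real-time SR.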
I am trying to solve image translation from NIR images to RGB images. The key questions are:
- How can a deep neural network infer color information from NIR images?
- As a follow-up: shouldn't the same object be inferred to have different colors, depending on conditions such as weather and time of day?
In addition, research so far has found that the deep neural network becomes biased toward green (vegetation) and blue (sky), which make up the majority of the images.
Patricia L. Suarez, Angel D. Sappa, and Boris X. Vintimilla, Infrared Image Colorization based on a Triplet DCGAN Architecture, CVPR 2017 Workshop
Visual correction of photographs is the domain of well-trained experts. Referring to the following papers, I trained a deep neural network on pairs of noisy and well-corrected clean images. Because the noisy images are mostly dark and the clean images mostly bright, I found that the trained network makes bright images even brighter. I therefore trained the network with a color-wise normalization method to make it robust to image brightness.
Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Kenneth Vanhoey, and Luc Van Gool. DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks, ICCV 2017
Yu-Sheng Chen, Yu-Ching Wang, Man-Hsin Kao, and Yung-Yu Chuang. Deep Photo Enhancer: Unpaired Learning for Image Enhancement from Photographs with GANs, CVPR 2018
MIT-Adobe FiveK Dataset: Learning Photographic Global Tonal Adjustment with a Database of Input/Output Image Pairs
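The color-wise normalization mentioned above can be sketched as per-channel standardization, so the network's input statistics are decoupled from overall image brightness. This is an illustrative interpretation; the exact normalization scheme used in the experiments may differ:

```python
import numpy as np

def color_wise_normalize(img, eps=1e-8):
    """Normalize each color channel independently to zero mean and
    unit variance. `img` has shape (H, W, 3). A dark and a bright
    version of the same scene then map to similar network inputs,
    which is the robustness-to-brightness property described above."""
    mean = img.mean(axis=(0, 1), keepdims=True)   # (1, 1, 3)
    std = img.std(axis=(0, 1), keepdims=True)     # (1, 1, 3)
    return (img - mean) / (std + eps)             # eps avoids /0
```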
Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR 2015