xUnit: Learning a Spatial Activation Function for Efficient Image Restoration
Please refer to our papers for more details.
If you use this code for your research, please cite our papers:
@inproceedings{kligvasser2018xunit,
  title={xUnit: Learning a Spatial Activation Function for Efficient Image Restoration},
  author={Kligvasser, Idan and Rott Shaham, Tamar and Michaeli, Tomer},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2433--2442},
  year={2018}
}
@article{kligvasser2018dense,
  title={Dense xUnit Networks},
  author={Kligvasser, Idan and Michaeli, Tomer},
  journal={arXiv preprint arXiv:1811.11051},
  year={2018}
}
Clone this repository to any location you like and install the requirements:
git clone https://github.com/kligvasser/xUnit
cd xUnit
python -m pip install -r requirements.txt
This code requires PyTorch 1.0+ and Python 3+.
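As a quick sanity check (a minimal sketch, not part of the repository), you can verify the interpreter and PyTorch versions from Python:

import sys
import torch

# Confirm the version constraints stated above: Python 3+ and PyTorch 1.0+.
assert sys.version_info[0] >= 3, "Python 3+ is required"
assert int(torch.__version__.split(".")[0]) >= 1, "PyTorch 1.0+ is required"
print("Python", sys.version.split()[0], "| PyTorch", torch.__version__)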
Pretrained models are available at: LINK.
For the super-resolution task, the dataset should contain low- and high-resolution image pairs, organized in the following folder structure:
train
├── img
├── img_x2
└── img_x4
val
├── img
├── img_x2
└── img_x4
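For reference, here is a minimal sketch of how such LR/HR pairs could be loaded. The class name, the assumption that LR and HR files share the same file name, and the use of torchvision transforms are illustrative choices, not necessarily how main.py reads the data:

import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor

class PairedSRDataset(Dataset):
    """Illustrative loader pairing <root>/img/<name> with <root>/img_x<scale>/<name>."""
    def __init__(self, root, scale=4):
        self.hr_dir = os.path.join(root, 'img')
        self.lr_dir = os.path.join(root, 'img_x{}'.format(scale))
        self.names = sorted(os.listdir(self.hr_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        name = self.names[index]
        hr = Image.open(os.path.join(self.hr_dir, name)).convert('RGB')
        lr = Image.open(os.path.join(self.lr_dir, name)).convert('RGB')
        return to_tensor(lr), to_tensor(hr)

# Example: iterate over the training split of a 4x dataset.
# dataset = PairedSRDataset('<path-to-dataset>/train', scale=4)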
You may prepare your own data using the MATLAB script:
./super-resolution/scripts/matlab/bicubic_subsample.m
Or download a prepared dataset based on the BSD and VOC datasets from LINK.
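If MATLAB is not available, a rough Python alternative is sketched below. It relies on PIL's bicubic resize, which is not bit-identical to MATLAB's imresize, so results may differ slightly from the prepared dataset:

import os
from PIL import Image

def make_lr_folder(split_dir, scale):
    """Create <split_dir>/img_x<scale> by bicubic-downsampling every image in <split_dir>/img."""
    hr_dir = os.path.join(split_dir, 'img')
    lr_dir = os.path.join(split_dir, 'img_x{}'.format(scale))
    os.makedirs(lr_dir, exist_ok=True)
    for name in sorted(os.listdir(hr_dir)):
        hr = Image.open(os.path.join(hr_dir, name)).convert('RGB')
        # Crop to a size divisible by the scale factor, then downsample.
        w, h = hr.size
        w, h = w - w % scale, h - h % scale
        hr = hr.crop((0, 0, w, h))
        hr.resize((w // scale, h // scale), Image.BICUBIC).save(os.path.join(lr_dir, name))

# Replace '<path-to-dataset>' with your dataset root.
for split in ('train', 'val'):
    for scale in (2, 4):
        make_lr_folder(os.path.join('<path-to-dataset>', split), scale)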
Training a PSNR-oriented model (reconstruction loss only):
python3 main.py --root <path-to-dataset> --g-model g_xsrgan --d-model d_xsrgan --model-config "{'scale':4, 'gen_blocks':10, 'dis_blocks':5}" --scale 4 --reconstruction-weight 1.0 --perceptual-weight 0 --adversarial-weight 0 --crop-size 40
Training a GAN-based model (WGAN with gradient penalty), initialized from a PSNR-pretrained generator:
python3 main.py --root <path-to-dataset> --g-model g_xsrgan --d-model d_xsrgan_ad --model-config "{'scale':4, 'gen_blocks':10, 'dis_blocks':5}" --scale 4 --reconstruction-weight 1.0 --perceptual-weight 1.0 --adversarial-weight 0.005 --crop-size 64 --epochs 1200 --step-size 900 --gen-to-load <path-to-psnr-pretrained-pt> --wgan --penalty-weight 10
Training a GAN-based model with spectral normalization in the discriminator:
python3 main.py --root <path-to-dataset> --g-model g_xsrgan --d-model d_xsrgan --model-config "{'scale':4, 'gen_blocks':10, 'dis_blocks':5, 'spectral':True}" --scale 4 --reconstruction-weight 1.0 --perceptual-weight 1.0 --adversarial-weight 0.01 --crop-size 40 --epochs 2000 --step-size 800 --gen-to-load <path-to-psnr-pretrained-pt> --dis-betas 0 0.9
Evaluating a pretrained model:
python3 main.py --root <path-to-dataset> --g-model g_xsrgan --d-model d_xsrgan --model-config "{'scale':4, 'gen_blocks':10, 'dis_blocks':5}" --scale 4 --evaluation --gen-to-load <path-to-pretrained-pt>
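The commands above distinguish PSNR-oriented training from GAN-based training. For reference, PSNR between a restored image and its ground truth can be computed as follows (a minimal NumPy sketch, assuming 8-bit images; the evaluation inside main.py may differ in details such as border cropping or the color channel used):

import numpy as np

def psnr(restored, reference, peak=255.0):
    """Peak signal-to-noise ratio between two images given as uint8/float arrays."""
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)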
Pretrained models are available at: LINK.
For the denoising task, the dataset should contain only clean images, organized in the following folder structure:
train
└── img
val
└── img
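Because the dataset holds only clean images, Gaussian noise is synthesized during training according to --noise-sigma. A minimal sketch of such a corruption step (illustrative; the exact handling inside main.py, e.g. clamping, may differ):

import torch

def add_gaussian_noise(clean, sigma):
    """Add white Gaussian noise of standard deviation sigma (on the 0-255 scale) to a [0, 1] tensor."""
    return clean + torch.randn_like(clean) * (sigma / 255.0)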
Training a PSNR-oriented grayscale denoiser for noise level 50:
python3 main.py --root <path-to-dataset> --g-model g_xdncnn --d-model d_xdncnn --model-config "{'gen_blocks':10, 'dis_blocks':4, 'in_channels':1}" --reconstruction-weight 1.0 --perceptual-weight 0 --adversarial-weight 0 --crop-size 50 --gray-scale --noise-sigma 50 --epochs 500 --step-size 150
Training a PSNR-oriented color denoiser for noise level 75:
python3 main.py --root <path-to-dataset> --g-model g_xdncnn --d-model d_xdncnn --model-config "{'gen_blocks':10, 'dis_blocks':4, 'in_channels':3}" --reconstruction-weight 1.0 --perceptual-weight 0 --adversarial-weight 0 --crop-size 64 --noise-sigma 75 --epochs 1000 --step-size 300
Training a GAN-based color denoiser (WGAN with gradient penalty), initialized from a PSNR-pretrained generator:
python3 main.py --root <path-to-dataset> --g-model g_xdncnn --d-model d_xdncnn --model-config "{'gen_blocks':10, 'dis_blocks':4, 'in_channels':3}" --reconstruction-weight 1.0 --perceptual-weight 1.0 --adversarial-weight 0.01 --crop-size 72 --noise-sigma 75 --epochs 1000 --step-size 300 --gen-to-load <path-to-psnr-pretrained-pt> --wgan --penalty-weight 10
Training a blind grayscale denoiser:
python3 main.py --root <path-to-dataset> --g-model g_xdncnn --d-model d_xdncnn --model-config "{'gen_blocks':10, 'dis_blocks':5, 'in_channels':1}" --reconstruction-weight 1.0 --perceptual-weight 0 --adversarial-weight 0 --crop-size 50 --gray-scale --noise-sigma 50 --blind --epochs 500 --step-size 150
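In the blind setting (--blind) the noise level is not assumed known at test time. A common training recipe, sketched here as an assumption rather than the exact behavior of main.py, is to draw a random sigma per sample up to the value given by --noise-sigma:

import torch

def blind_noisy(clean, max_sigma=50.0):
    """Corrupt a [0, 1] tensor with AWGN whose sigma is drawn uniformly from [0, max_sigma]."""
    sigma = torch.empty(1).uniform_(0.0, max_sigma).item()
    return clean + torch.randn_like(clean) * (sigma / 255.0)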