
Self-Attention GAN

TensorFlow implementation for reproducing the main results in the paper Self-Attention Generative Adversarial Networks by Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena.
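
Below is a minimal sketch of the self-attention (non-local) block the paper adds to the generator and discriminator, written in TensorFlow 1.x style to match this repo. The scope/layer names and the channel-reduction factor of 8 are illustrative assumptions, not taken from this codebase:

```python
import tensorflow as tf

def self_attention(x, name='self_attention'):
    """Self-attention over the spatial positions of a feature map.

    x: float tensor of shape [batch, height, width, channels] with static
    spatial dimensions. Returns a tensor of the same shape.
    """
    with tf.variable_scope(name):
        _, h, w, c = x.get_shape().as_list()
        # 1x1 convolutions produce query, key, and value projections.
        f = tf.layers.conv2d(x, c // 8, 1, name='f_query')
        g = tf.layers.conv2d(x, c // 8, 1, name='g_key')
        v = tf.layers.conv2d(x, c, 1, name='h_value')

        # Flatten the spatial grid so attention runs over h*w positions.
        f_flat = tf.reshape(f, [-1, h * w, c // 8])
        g_flat = tf.reshape(g, [-1, h * w, c // 8])
        v_flat = tf.reshape(v, [-1, h * w, c])

        # Attention map over all pairs of spatial positions.
        attn = tf.nn.softmax(tf.matmul(f_flat, g_flat, transpose_b=True))

        # Weighted sum of values, reshaped back into a feature map.
        o = tf.reshape(tf.matmul(attn, v_flat), [-1, h, w, c])

        # Learned scale, initialized to zero per the paper, so the block
        # starts as an identity mapping and attention is phased in.
        gamma = tf.get_variable('gamma', [],
                                initializer=tf.zeros_initializer())
        return x + gamma * o
```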

Dependencies

Python 3.6

TensorFlow 1.5

Data

Download the ImageNet dataset and preprocess the images into TFRecord files as instructed in improved-gan. Put the TFRecord files into ./data.
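
If you are preparing the TFRecord files yourself, the sketch below shows the general serialization pattern in TensorFlow 1.x. The feature keys ('image', 'label') are illustrative assumptions; the exact schema expected by train_imagenet.py follows the improved-gan preprocessing scripts:

```python
import tensorflow as tf

def write_tfrecord(examples, path):
    """Serialize (encoded_image_bytes, integer_label) pairs into one file.

    examples: iterable of (bytes, int) pairs, e.g. JPEG-encoded images.
    """
    with tf.python_io.TFRecordWriter(path) as writer:
        for image_bytes, label in examples:
            features = tf.train.Features(feature={
                'image': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[image_bytes])),
                'label': tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[label])),
            })
            example = tf.train.Example(features=features)
            writer.write(example.SerializeToString())
```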

Training

The current batch size is 64x4 = 256, i.e. 64 images per GPU across four GPUs (see the sketch after the training command). A larger batch size seems to give better performance, but may require retuning the generator and discriminator learning rates. Note: training for one million steps usually takes several weeks.

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_imagenet.py --generator_type test --discriminator_type test --data_dir ./data
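
The 64x4 batch size reflects standard TensorFlow 1.x data parallelism: each of the four visible GPUs processes a 64-image shard and the host averages the gradients. The sketch below illustrates that pattern generically; the loss_fn placeholder and single Adam optimizer are simplifications, since the actual script trains separate generator and discriminator optimizers:

```python
import tensorflow as tf

def average_gradients(tower_grads):
    """Average per-GPU gradients. tower_grads is a list (one entry per
    GPU) of (gradient, variable) lists from Optimizer.compute_gradients."""
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars if g is not None]
        averaged.append((tf.reduce_mean(tf.stack(grads), axis=0),
                         grads_and_vars[0][1]))
    return averaged

def build_train_op(loss_fn, batch, num_gpus=4):
    """Split a 256-image batch into four 64-image shards, one per GPU."""
    opt = tf.train.AdamOptimizer(1e-4)  # placeholder learning rate
    tower_grads = []
    for i, shard in enumerate(tf.split(batch, num_gpus)):
        # Reuse the same model variables across all towers.
        with tf.device('/gpu:%d' % i), tf.variable_scope(
                tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
            tower_grads.append(opt.compute_gradients(loss_fn(shard)))
    return opt.apply_gradients(average_gradients(tower_grads))
```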

Evaluation

CUDA_VISIBLE_DEVICES=4 python eval_imagenet.py --generator_type test --data_dir ./data

Citing Self-attention GAN

If you find Self-Attention GAN useful in your research, please consider citing:

@article{Han18,
  author    = {Han Zhang and
               Ian J. Goodfellow and
               Dimitris N. Metaxas and
               Augustus Odena},
  title     = {Self-Attention Generative Adversarial Networks},
  year      = {2018},
  journal   = {arXiv:1805.08318},
}

References

  • Spectral Normalization for Generative Adversarial Networks
  • cGANs with Projection Discriminator
  • Non-local Neural Networks
