This is the official codebase for running the small, filtered-data GLIDE model from *GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models*.
For details on the pre-trained models in this repository, see the Model Card.
To install this package, clone this repository and then run:
```
pip install -e .
```
For detailed usage examples, see the notebooks directory.
- The text2im notebook shows how to use GLIDE (filtered) with classifier-free guidance to produce images conditioned on text prompts (a condensed sampling sketch follows this list).
- The inpaint notebook shows how to use GLIDE (filtered) to fill in a masked region of an image, conditioned on a text prompt.
- The clip_guided notebook shows how to use GLIDE (filtered) + a filtered noise-aware CLIP model to produce images conditioned on text prompts.
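As a quick orientation, the sketch below condenses the base-model sampling loop from the text2im notebook: load the 64x64 model, tokenize a prompt alongside an empty prompt, and sample with classifier-free guidance by running both halves of the batch through a wrapped model function. It assumes the helpers used in the notebooks (`create_model_and_diffusion`, `model_and_diffusion_defaults`, `load_checkpoint`, the model's `tokenizer`, and `diffusion.p_sample_loop`) behave as shown there; the prompt and output filename are illustrative, and the 256x256 upsampling stage is omitted. The notebooks remain the authoritative reference.

```python
# Condensed sketch of base-model sampling with classifier-free guidance,
# adapted from notebooks/text2im.ipynb. Prompt and output path are examples.
from PIL import Image
import torch as th

from glide_text2im.download import load_checkpoint
from glide_text2im.model_creation import (
    create_model_and_diffusion,
    model_and_diffusion_defaults,
)

has_cuda = th.cuda.is_available()
device = th.device("cuda" if has_cuda else "cpu")

# Create the base (64x64) model and its diffusion process.
options = model_and_diffusion_defaults()
options["use_fp16"] = has_cuda
options["timestep_respacing"] = "100"  # fewer diffusion steps for faster sampling
model, diffusion = create_model_and_diffusion(**options)
model.eval()
if has_cuda:
    model.convert_to_fp16()
model.to(device)
model.load_state_dict(load_checkpoint("base", device))

prompt = "an oil painting of a corgi"  # example prompt
batch_size = 1
guidance_scale = 3.0

# Tokenize the prompt plus an empty (unconditional) prompt, then stack them
# into one batch: the first half is conditional, the second half is not.
tokens = model.tokenizer.encode(prompt)
tokens, mask = model.tokenizer.padded_tokens_and_mask(tokens, options["text_ctx"])
uncond_tokens, uncond_mask = model.tokenizer.padded_tokens_and_mask([], options["text_ctx"])
model_kwargs = dict(
    tokens=th.tensor([tokens] * batch_size + [uncond_tokens] * batch_size, device=device),
    mask=th.tensor(
        [mask] * batch_size + [uncond_mask] * batch_size, dtype=th.bool, device=device
    ),
)

def model_fn(x_t, ts, **kwargs):
    # Classifier-free guidance: move from the unconditional epsilon toward
    # the conditional epsilon by guidance_scale, then reuse it for both halves.
    half = x_t[: len(x_t) // 2]
    combined = th.cat([half, half], dim=0)
    model_out = model(combined, ts, **kwargs)
    eps, rest = model_out[:, :3], model_out[:, 3:]
    cond_eps, uncond_eps = th.split(eps, len(eps) // 2, dim=0)
    half_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
    return th.cat([th.cat([half_eps, half_eps], dim=0), rest], dim=1)

full_batch_size = batch_size * 2
samples = diffusion.p_sample_loop(
    model_fn,
    (full_batch_size, 3, options["image_size"], options["image_size"]),
    device=device,
    clip_denoised=True,
    progress=True,
    model_kwargs=model_kwargs,
)[:batch_size]

# Rescale from [-1, 1] to [0, 255] and save the first sample.
scaled = ((samples + 1) * 127.5).round().clamp(0, 255).to(th.uint8)
image = scaled.permute(0, 2, 3, 1).cpu().numpy()[0]
Image.fromarray(image).save("glide_sample.png")
```

The doubled batch is what drives classifier-free guidance: each denoising step predicts conditional and unconditional noise together, and the guided combination replaces both halves before the next step.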