
Add PLMS sampling and do one 2x size batch per sampling step #51


Merged — rromb merged 4 commits into CompVis:main on Apr 15, 2022

Conversation

crowsonkb (Contributor)

Changes in this PR:

  • PLMS sampling (https://arxiv.org/abs/2202.09778) for high-quality outputs in 50 steps, or acceptable-quality outputs in 25-35 steps.
  • The sampling code for classifier-free guidance previously did two forward passes per sampling step; this PR combines them into a single 2x-size batch (see the sketch after this list).
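For illustration, here is a minimal sketch of how the two changes fit together in one sampling step. The helper names are hypothetical; `apply_model` follows this repo's eps-prediction interface, and the multistep coefficients are the standard Adams-Bashforth ones used by PLMS. Treat this as a sketch under those assumptions, not the PR's exact code.

```python
import torch

@torch.no_grad()
def guided_eps(model, x, t, cond, uncond, scale):
    # One forward pass on a 2x-size batch instead of two separate passes:
    # unconditional and conditional inputs are concatenated along dim 0.
    x_in = torch.cat([x] * 2)
    t_in = torch.cat([t] * 2)
    c_in = torch.cat([uncond, cond])
    eps_uncond, eps_cond = model.apply_model(x_in, t_in, c_in).chunk(2)
    # Classifier-free guidance: move the prediction away from the
    # unconditional output, towards the conditional one.
    return eps_uncond + scale * (eps_cond - eps_uncond)

def plms_eps(eps, old_eps):
    # Pseudo linear multistep: combine the current eps prediction with up
    # to three previous ones (4th-order Adams-Bashforth once enough history
    # has accumulated; lower-order formulas bootstrap the first steps).
    if len(old_eps) == 0:
        return eps  # first step (the paper warms up with a Runge-Kutta-like step)
    if len(old_eps) == 1:
        return (3 * eps - old_eps[-1]) / 2
    if len(old_eps) == 2:
        return (23 * eps - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12
    return (55 * eps - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24
```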

```diff
@@ -57,6 +57,7 @@ Quality, sampling speed and diversity are best controlled via the `scale`, `ddim
 As a rule of thumb, higher values of `scale` produce better samples at the cost of a reduced output diversity.
 Furthermore, increasing `ddim_steps` generally also gives higher quality samples, but returns are diminishing for values > 250.
 Fast sampling (i.e. low values of `ddim_steps`) while retaining good quality can be achieved by using `--ddim_eta 0.0`.
+Faster sampling (i.e. even lower values of `ddim_steps`) while retaining good quality can be achieved by using `--ddim_eta 0.0` and `--plms` (see [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778)).
```
Wow, love it!
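As a usage note, an invocation exercising the new flag might look something like the following. This is a sketch based on the README's `scripts/txt2img.py` example; the prompt and parameter values are placeholders, and the flag set is assumed from the README text above.

```
python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --plms --ddim_eta 0.0 --ddim_steps 50 --scale 5.0
```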

rromb (Collaborator) commented on Apr 15, 2022:

Thank you very much, this looks great :)

@rromb rromb merged commit 4c8bff3 into CompVis:main Apr 15, 2022
YiqiaoYAN pushed a commit to YiqiaoYAN/latent_diffusion that referenced this pull request on Apr 23, 2025: "Add PLMS sampling and do one 2x size batch per sampling step"