LoRA
LoRA, short for Low-Rank Adaptation, is a method used to fine-tune Generative AI models on specific styles or concepts while keeping the process efficient and lightweight.
Here’s how it works in simple terms:
The Problem:
- Fine-tuning a huge model like Stable Diffusion to recognize or replicate new styles or concepts (e.g., making it draw in the style of a specific artist or recognize unique objects) usually requires a lot of computational power and storage.
The LoRA Solution:
- Instead of tweaking all the internal parameters of the model, LoRA focuses only on a small subset of them. Think of it as adding a "style filter" to the model that can be applied or removed as needed.
- It reduces the complexity by breaking down large changes into smaller, simpler steps.
- These smaller steps don't interfere with the original model, meaning you don't lose the model's core abilities.
Why it's Cool:
- Efficient: it uses way less memory and is faster than traditional fine-tuning methods.
- Flexible: you can train multiple LoRA "filters" for different styles or concepts and swap them in and out without modifying the base model.
- Compatible: LoRA modules can be shared or reused easily, so artists and developers can collaborate or try out others' custom styles.
Example Use Case
- Say you want to teach a model to draw in the style of a fictional artist.
- You can train a LoRA on a handful of sample images in that style.
- Once trained, the LoRA module acts like a plug-in: load it into the model, and it starts generating images in that style!
In short, LoRA makes it easy to teach models new tricks without overwhelming your computer or altering the original model. It’s a user-friendly way to get customized results!
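Under the hood, those "smaller, simpler steps" are two small low-rank matrices whose product is added on top of the frozen base weights. Here is a minimal PyTorch sketch of the idea for a single layer (shapes and names are illustrative only, not SD.Next code):

```python
import torch

# Frozen base weight of one layer, e.g. a 768x768 attention projection
W = torch.randn(768, 768)

# A LoRA stores two small matrices instead of a full-size weight delta
rank = 8                      # "r", much smaller than 768
A = torch.randn(rank, 768)    # trained down-projection
B = torch.zeros(768, rank)    # trained up-projection (starts at zero)
alpha = 1.0                   # user-facing strength multiplier

# Applying the LoRA adds the low-rank product on top of the frozen weight;
# removing it is just subtracting (or never adding) the same product
W_effective = W + alpha * (B @ A)
```

Because only the two small matrices are trained and stored, a LoRA file is tiny compared to the base model, and it can be applied or removed without touching the original weights.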
There are many LoRA types; some of the most common ones are: LoRA, DoRA, LoCon, HaDa, gLoRA, LoKR, LyCoris
They vary in:
- Which model components are being trained: typically the UNet, but it can be the text encoder (TE) as well
- Which layers of the model are being trained: each LoRA type trains different layers of the model
- The math algorithm used to extract LoRA weights for the specific trained layers
Warning
A LoRA must always match the base model used for its training.
For example, you cannot use an SD1.5 LoRA with an SD-XL model.
Warning
SD.Next attempts to automatically detect and apply the correct LoRA type.
However, new LoRA types are popping up all the time.
If you find a LoRA that is not compatible, please report it so we can add support for it.
- Using the UI: go to the Networks tab, open the LoRA section, and select the LoRA you want; it will be added to the prompt.
- Manually: you can also add the LoRA yourself by adding <lora:lora_name:strength> to the prompt.
Some (not all) LoRAs associate specific words during training, so the same words can be used to trigger specific behavior from the LoRA.
SD.Next displays these trigger words in the UI -> Networks -> LoRA, but they can also be used manually in the prompt.
You can combine any number of LoRAs in a single prompt to get the desired output.
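Example (hypothetical LoRA names): watercolor painting of a castle <lora:watercolor_style:0.8> <lora:castle_details:0.4>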
Tip
If you want to automatically apply trigger words/tags to the prompt, you can use the auto-apply feature in "Settings -> Networks"
Tip
You can change the strength of the LoRA by setting the number in <lora:name:x.x> to the desired value
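For example, <lora:test_lora:0.7> applies the LoRA at 70% of its full strength.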
Tip
If you're combining multiple LoRAs, you can also "export" them as a single LoRA via "Models -> Extract LoRA"
Typically, :strength is applied uniformly to all components of the LoRA.
However, you can also specify individual component weights by adding :comp=x.x to the LoRA tag.
Example: <lora:test_lora:te=0.5:unet=1.5>
Instead of using a simple :strength, you can specify individual block weights for the LoRA by adding :in=x.x:mid=y.y:out=z.z to the LoRA tag.
Example: <lora:test_lora:1.0:in=0:mid=1:out=0>
LoRA can also be applied with full per-step control by adding step-specific instructions to the LoRA tag.
Example: <lora:test_lora:te=0.1@1,0.6@6>
This applies the LoRA to the text encoder with strength 0.1 at step 1 and then switches to strength 0.6 at step 6.
For any LoRA-related issues, please follow the procedure below:
- set the environment variable SD_LORA_DEBUG=true
- start SD.Next as usual and run it until the problem occurs
- create a GitHub issue
- upload the full sdnext.log as well as any console exception messages
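For example, on Linux/macOS you can run export SD_LORA_DEBUG=true in the terminal before launching SD.Next; on Windows, use set SD_LORA_DEBUG=true in the same command prompt.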
SD.Next has multiple methods for how LoRA is loaded and applied, all available in "Settings -> Networks".
You must pick only one of the methods below:
- LoRA fuse directly to model: enabled (default)
  - Applies the LoRA the first time it is needed, without keeping a backup of the model weights
  - Unapply works by subtracting the LoRA from the model weights
  - This can result in numerical instabilities and/or model degradation when switching LoRAs if the LoRA's numerical range is not normalized or the LoRA is badly trained, but it saves 2x memory compared to the older load/apply/unapply methods
- LoRA load using legacy method: disabled
  - Applies the LoRA the first time it is needed and creates a backup of the model weights in system memory
  - Unapply works by restoring the model weights
  - Most stable method, but can use up to 2x system memory
- LoRA load using legacy method: enabled
  - Applies the LoRA on-the-fly during model execution and creates a backup of the model weights in system memory
  - Unapply works by restoring the model weights
  - This was the default method prior to the 12/2024 release
- LoRA load using Diffusers method: enabled
  - Does not use SD.Next native LoRA processing; relies on the underlying diffusers library for all LoRA actions
  - Does not support all LoRA types
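For reference, the Diffusers method corresponds roughly to the standard LoRA API of the diffusers library. A minimal sketch outside of SD.Next (the model ID is real, the LoRA directory and file name are hypothetical):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a base SD-XL model
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights on top of the base model (hypothetical path/file)
pipe.load_lora_weights("path/to/lora_dir", weight_name="test_lora.safetensors")

# Optionally fuse the LoRA into the model weights (same idea as SD.Next
# "fuse directly to model"); unfuse subtracts it back out
pipe.fuse_lora(lora_scale=0.8)
image = pipe("watercolor painting of a castle").images[0]
pipe.unfuse_lora()

# Or keep it unfused and simply unload it when no longer needed
pipe.unload_lora_weights()
```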
Tip
If you see visual model degradation after unapplying a LoRA, you can restore the model weights using "System -> Reload Model"