# Usage
Assuming you have set up everything correctly, you can run any UI (interchangeably, but not in parallel) using the command:

```bash
docker compose --profile [ui] up --build
```

where `[ui]` is one of `hlky`, `auto`, `auto-cpu`, or `lstein`.

The `data` and `output` folders are always mounted into the container as `/data` and `/output`; use them if you want to transfer anything to or from the container.
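For example, to start the `auto` UI and grab results from the host side:

```bash
# start the AUTOMATIC1111 (auto) UI
docker compose --profile auto up --build

# generated images land in ./output on the host, no docker cp needed
ls output/
```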
(Optional) If you want to customize the behaviour of the UIs, you can create a `docker-compose.override.yml` and override the `CLI_ARGS` variable (or anything else) from the main `docker-compose.yml` file. Example:
```yaml
services:
  auto:
    environment:
      - CLI_ARGS=--lowvram
```
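Docker Compose merges `docker-compose.override.yml` into the main file automatically, so the usual command picks up the change:

```bash
# no extra flags needed, the override file is merged automatically
docker compose --profile auto up --build
```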
Possible configuration:
By default, `--medvram` is passed, which allows you to use this model on a 6GB GPU; you can also use `--lowvram` for lower-end GPUs. You can find the full list of CLI arguments here.
Custom models are also supported: put the weights in the `data/StableDiffusion` folder, then change the model from the settings tab.
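A minimal sketch (the checkpoint file name is hypothetical):

```bash
# hypothetical checkpoint, any weights placed here show up in the model list
cp ~/Downloads/my-model.ckpt data/StableDiffusion/
```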
There are multiple files in `data/config/auto`, such as `config.json` and `ui-config.json`, which contain additional configuration for the UI.
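Because these files live in the mounted `data` folder, you can inspect and edit them directly on the host, and changes survive container rebuilds:

```bash
# the UI's configuration is stored on the host, not inside the container
ls data/config/auto
```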
Put your scripts in `data/config/auto/scripts` and restart the container.
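For example (the script name is hypothetical):

```bash
# hypothetical custom script, restart so the UI picks it up
cp my_script.py data/config/auto/scripts/
docker compose --profile auto up --build
```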
First, you have to add `--enable-insecure-extension-access` to your `CLI_ARGS` in your `docker-compose.override.yml`:
```yaml
services:
  auto:
    environment:
      # put whatever other flags you want
      - CLI_ARGS=--enable-insecure-extension-access --allow-code --medvram --xformers
```
Then, put your extensions in `data/config/auto/extensions`. There is also the option to create a script `data/config/auto/startup.sh`, which will be called on container startup, in case you want to install any additional dependencies for your extensions or anything else. An example `startup.sh` might look like this:
```bash
# install all packages for the extensions
for s in extensions/*/install.py; do
    python "$s"
done
```
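Installing an extension then amounts to cloning it into the mounted folder and restarting the container (the repository URL below is only an illustration):

```bash
# URL is illustrative, clone whatever extension you actually want
git clone https://github.com/some-user/some-extension data/config/auto/extensions/some-extension

# restart so startup.sh runs and the UI loads the extension
docker compose --profile auto up --build
```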
I maintain neither the UI nor the extensions; I can't help you with issues in them.
`auto-cpu` is a CPU instance of the above; some features might not work, use at your own risk.
For `hlky`, `--optimized-turbo` is passed by default, which allows you to use this model on a 6GB GPU. However, some features might not be available in this mode. You can find the full list of CLI arguments here.
The `lstein` fork might require a preload to work, see #72.
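If you hit that, a sketch of the preload step, assuming a `download` profile exists in the main `docker-compose.yml`:

```bash
# one-time preload of models and other required files (assumes a `download` profile)
docker compose --profile download up --build
```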