⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

Running a Stable Diffusion model using Hugging Face Diffusers in TorchServe.

Step 1: Download model

Set the access token generated from Hugging Face in the Download_model.py file.

Install dependencies

pip install -r requirements.txt
python Download_model.py
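
For reference, Download_model.py is roughly a thin wrapper around the Diffusers download API. Below is a minimal sketch, assuming the CompVis/stable-diffusion-v1-4 checkpoint and that the weights are saved into the Diffusion_model directory used in the next step (both are assumptions; adjust to the actual script):

# Minimal sketch of what Download_model.py roughly does (checkpoint name and
# output directory are assumptions).
from diffusers import StableDiffusionPipeline

HF_ACCESS_TOKEN = "<your Hugging Face access token>"

# Download the pipeline weights from the Hugging Face Hub using the access token.
pipeline = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", use_auth_token=HF_ACCESS_TOKEN
)

# Save the weights locally so they can be zipped and packaged into the MAR file.
pipeline.save_pretrained("./Diffusion_model")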

Step 2: Compress downloaded model

NOTE: Install the zip CLI tool.

Navigate to the downloaded model directory.

cd Diffusion_model
zip -r ../model.zip *

Step 3: Generate MAR file

Navigate up one level to the diffusers directory.

torch-model-archiver --model-name stable-diffusion --version 1.0 --handler stable_diffusion_handler.py --extra-files model.zip -r requirements.txt
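
TorchServe loads .mar files from a model store directory. Assuming config.properties points model_store at a local model_store folder (a sketch of that file appears in the next step), the generated archive can be moved there:

mkdir -p model_store
mv stable-diffusion.mar model_store/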

Step 4: Start TorchServe

Update config.properties and start TorchServe.

torchserve --start --ts-config config.properties --disable-token-auth  --enable-model-api
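
The exact contents of config.properties depend on your deployment. A minimal sketch with commonly used keys follows; the values below are assumptions, adjust them to your setup:

inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
# assumed local directory containing stable-diffusion.mar
model_store=model_store
load_models=stable-diffusion.mar
# install the handler's requirements.txt per model
install_py_dep_per_model=true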

Step 5: Run inference

python query.py --url "http://localhost:8080/predictions/stable-diffusion" --prompt "a photo of an astronaut riding a horse on mars"

The generated image is written to a timestamped file, e.g. output-20221027213010.jpg.
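
For reference, query.py is essentially a small HTTP client. Below is a minimal sketch, assuming the handler returns the generated image as raw image bytes in the response body (an assumption; the actual script may post-process differently):

# Minimal sketch of a client along the lines of query.py.
import argparse
from datetime import datetime

import requests

parser = argparse.ArgumentParser()
parser.add_argument("--url", required=True, help="TorchServe prediction endpoint")
parser.add_argument("--prompt", required=True, help="text prompt for image generation")
args = parser.parse_args()

# POST the prompt to the stable-diffusion endpoint.
response = requests.post(args.url, data=args.prompt)
response.raise_for_status()

# Write the returned image to a timestamped file.
filename = "output-{}.jpg".format(datetime.now().strftime("%Y%m%d%H%M%S"))
with open(filename, "wb") as f:
    f.write(response.content)
print("Image written to", filename)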

NOTE: For the KServe implementation, use the following sample inputs for the v1 and v2 protocols:

KServe v1 protocol - sample_v1.json
KServe v2 protocol - sample_v2.json