SD.Next extension to send compute tasks to remote inference servers. It aims to be universal across providers; feel free to request support for additional ones.
> [!NOTE]
> This project is still a work in progress, please report issues.
- (WIP) SD.Next (someone else running SD.Next API)
- (WIP) ComfyUI (someone else running ComfyUI API)
- StableHorde (free)
- (WIP) NovitaAI (paid, affiliated)
- (WIP) ComfyICU (paid, affiliated)
- (WIP) Others :D
| | SD.Next API | ComfyUI API | StableHorde | NovitaAI | ComfyICU |
|---|---|---|---|---|---|
| **Model browsing** | | | | | |
| Checkpoints browser | ✅ | 🆗 | ✅ | ✅ | ✅ |
| Loras browser | ✅ | 🆗 | ⭕ | ✅ | ✅ |
| Embeddings browser | ✅ | 🆗 | ⭕ | ✅ | ✅ |
| **Generation** | | | | | |
| Txt2img | ✅ | 🆗+ | ✅ | ✅ | ✅ |
| Second pass (hires) | 🆗+ | 🆗 | ✅ | ✅ | 🆗 |
| Second pass (refiner) | 🆗 | 🆗 | ❌ | 🆗+ | 🆗 |
| Img2img | ✅ | 🆗+ | ✅ | ✅ | 🆗+ |
| Inpainting | 🆗+ | 🆗+ | ✅ | ✅ | 🆗+ |
| Outpainting | 🆗 | 🆗 | 🆗 | 🆗+ | 🆗 |
| Upscale & postprocess | 🆗 | 🆗 | ✅ | 🆗 | 🆗 |
| AnimateDiff | 🆗 | 🆗 | ❌ | ❌ | 🆗 |
| **Generation control** | | | | | |
| Loras and TIs | 🆗 | 🆗 | ✅ | ✅ | ✅ |
| ControlNet | 🆗 | 🆗 | 🆗 | | |
| IpAdapter | 🆗 | 🆗 | ❌ | 🆗+ | 🆗 |
| **User** | | | | | |
| Balance (credits/kudos) | ⭕ | ⭕ | ✅ | ✅ | ✅ |
| Generation cost estimation | ⭕ | ⭕ | 🆗 | 🆗 | ❌ |
- ✅ functional
- ⚠️ partial functionality
- 🆗+ work in progress
- 🆗 roadmap
- ⭕ not needed
- ❌ not supported
- StableHorde worker settings
- Dynamic samplers/upscalers lists
- API calls caching
- Hide NSFW networks option
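The "API calls caching" feature above can be sketched as a small time-to-live cache around the provider requests. This is an illustrative sketch only, not the extension's actual code; the decorator name, TTL values, and `fetch_model_list` helper are all hypothetical.

```python
import time
import functools

def ttl_cache(ttl_seconds=300):
    """Cache a function's result for ttl_seconds, then refetch.

    Hypothetical sketch of caching remote API responses (e.g. model
    lists) so repeated UI refreshes don't re-hit the provider.
    """
    def decorator(fn):
        cache = {}  # args -> (timestamp, value)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # still fresh, reuse cached value
            value = fn(*args)
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=60)
def fetch_model_list(service):
    # Placeholder for a real network call to the provider's API
    return [f"{service}-model-A", f"{service}-model-B"]
```

A TTL cache (rather than an unbounded one) keeps the dynamic samplers/upscalers/model lists reasonably current while still avoiding a network round trip on every dropdown refresh.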
There are already plenty of AI Horde integrations. The point of this extension is to bring all remote providers into the same familiar UI instead of relying on other websites. Eventually I'd also like to add support for other SD.Next extensions like dynamic prompts, deforum, tiled diffusion, adetailer and regional prompter (UI extensions like aspect ratio, image browser, canvas zoom or openpose editor should already be supported).
- Installation
  - Go to extensions > manual install > paste `https://github.com/BinaryQuantumSoul/sdnext-remote` > install
  - Go to extensions > manage extensions > apply changes & restart server
  - Go to system > settings > remote inference > set the right API endpoints & keys
- Usage
  - Select the desired remote inference service in the dropdown, refresh the model list and select a model
  - Set generation parameters as usual and click generate
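Under the hood, a generation request to a remote SD.Next instance goes over its A1111-compatible HTTP API. A minimal sketch of such a call is below; the `/sdapi/v1/txt2img` route and the payload field names follow that API, but the helper functions, parameter defaults, and `base_url` are illustrative assumptions, not this extension's actual code.

```python
import base64
import json
from urllib import request

def build_txt2img_payload(prompt, negative="", steps=20, width=512, height=512):
    """Assemble a minimal txt2img payload for the /sdapi/v1/txt2img route."""
    return {
        "prompt": prompt,
        "negative_prompt": negative,
        "steps": steps,
        "width": width,
        "height": height,
    }

def remote_txt2img(base_url, payload):
    """POST a payload to a remote SD.Next instance and decode the first image."""
    req = request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # Generated images come back base64-encoded in the "images" list
    return base64.b64decode(result["images"][0])
```

Usage would look like `remote_txt2img("http://127.0.0.1:7860", build_txt2img_payload("a photo of a cat"))`, with the URL replaced by the endpoint configured in the settings above.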
> [!NOTE]
> You can launch SD.Next with `--debug` to follow API requests.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.