[Feature Request]: Retrieve meta-data for models from a YAML file #8029
Comments
I've actually started something similar to this using a fork of the model-keywords extension (though using JSON). My scope is a little less ambitious but very similar. That experience tells me most of this could be implemented as an extension (and probably should be). What that looks like right now is this:

```json
{
  "title": "MeinaMix",
  "tags": [
    "anime",
    "illustration",
    "mix",
    "semi-realistic",
    "model"
  ],
  "author": "Meinaaa",
  "type": "checkpoint merge",
  "description": "This model may do nsfw art! (add nsfw in the negative prompt if you don't wish for nsfw art )My main objectives for my model is:1- Not need a long prompt to generate good images and relay less in luck, using the prompt only to fine-tune the results.2- Be capable of generating wallpaper like images!However making models, merging and testing takes a lot of time, so i made a ko-fi page in case you like my model and want me to support me improve it by helping me stay awake by giving me coffee <3 , it will be very much appreciated: https://ko-fi.com/meinaRecommendations of use:for the negative: (worst quality, low quality:1.4), (malformed hands:1.4),(poorly drawn hands:1.4),(mutated fingers:1.4),(extra limbs:1.35),(poorly drawn face:1.4), The best samplers in most of the generations is DPM++ SDE/DPM++ SDE Karass at 20 to 50 steps, Euler A at 50 steps, with a CFG scale of 5 up to 10. ( Clip skip 1 or 2. )As for the upscaler in most of the scenarios is R-ESRGAN 4x, with 10 steps at 0.4 up to 0.6 denoising.I've been testing with the Orangemix VAE, it will be added in the download option in case you don't have it. I changed the VAE and it will be baked in all of the versions starting now with the 2.1! I'll love to see the images everyone can generate using it and help me find situations where the model needs improving, it will help for the next versions of Meina to be better!!!In the merged models list: Meina Version 1, Kenshi, AbyssOrangeMix2, PastelMix and Grapefruit, i do not have the exact recipe because i did multiple mixings using block weighted merges with multiple settings and kept the better version of each merge.",
  "link": "https://civitai.com/models/7240/meinamix",
  "version": "Meina V4.1 - Baked VAE",
  "updated": "2023-02-16T12:59:05.832Z",
  "trigger": [],
  "settings": {
    "negative_prompt": [ "(worst quality, low quality:1.4), (malformed hands:1.4),(poorly drawn hands:1.4),(mutated fingers:1.4),(extra limbs:1.35),(poorly drawn face:1.4)" ]
  },
  "suggested": {
    "sampler": ["DPM++ SDE", "DPM++ SDE Karras"],
    "steps": [20, 50],
    "clip_skip": [1, 2]
  },
  "base": "SD 1.5",
  "preview": "https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b6e80b63-4ac2-42f0-9a34-b89568aae000/width=400",
  "files": [
    {
      "id": 10454,
      "filename": "Meina V4.1 - Baked VAE.safetensors",
      "url": "https://civitai.com/api/download/models/11187?type=Model&format=SafeTensor",
      "type": "Model",
      "format": "SafeTensor"
    }
  ]
}
```

(These are generated from a small CLI I wrote that downloads the files associated with a specific version on CivitAI and writes the metadata file alongside the files it downloaded. It also downloads the first image from the site as the preview.) The extension currently reads the settings key for prompt and negative_prompt, though I intend to at least add clip_skip to that. My thought for the suggested section was to expose them in the UI (but I haven't got as far as thinking about what that would look like exactly. I like your ideas around tags in the filter section in particular.)
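For what it's worth, reading such a side-car file is only a few lines. A sketch in Python; the function name and the "same stem, `.json` extension" convention are assumptions based on the description above, not the extension's actual code:

```python
import json
from pathlib import Path


def load_model_metadata(model_path):
    """Load the JSON side-car written next to a downloaded model.

    Assumed convention: "<model stem>.json" in the same directory,
    matching the example above.
    """
    meta_path = Path(model_path).with_suffix(".json")
    if not meta_path.exists():
        return {}  # no side-car file: treat the model as having no metadata
    with open(meta_path, encoding="utf-8") as f:
        return json.load(f)
```

Anything reading the result would then look up keys such as `settings` or `suggested` and fall back gracefully when they are absent.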
Thanks for your reply, Skeula!
Would you mind sharing this?
Personally I also prefer JSON over YAML (don't even get me started), but my thoughts were:
Not sure about that; for now my scope is very limited to "read the data".
I trust you here, as I have no experience with this project so far. Will other extensions still be able to access the data?
That's such a cool idea, or as the kids would say, "shut up and take my money!" ;)
Right, I'm awful at UI/UX; my approach would have been "color the sliders" for all the values that have a slider.
Thanks! Though I didn't think of those as "tags"; my approach was to actually have those exact 5 keys. But the more I think about it, the more over-engineered that sounds :) just having this as tags is simpler. Thanks again for your thoughts!
+1 on this. Model and config management are real gaps in all the new SD interfaces. I have ideas I'll type up over the weekend... I totally agree with the proposed approach here tho.
Oh, yes, sure... https://github.com/Skeula/model-specific-prompts/
Honestly this is a pretty good point.
There are probably ways, but extensions mostly don't interact much directly.
I've put it up here: https://github.com/Skeula/stable-diffusion-webui/blob/skeula/get-civit I saw that there's a module for browsing civitai directly from the ui, so I've been thinking that integrating with that would be best.
Added some lines to my fork of stable-diffusion-webui; see the branch there. It reads a meta-data file next to each checkpoint, and that data can then be used elsewhere in the webui.
Just updated my branch to also do this for LoRAs.
Pushed the updates for the other 2 types (embeddings and hypernetworks). I would consider this patch "complete" for now, i.e. it makes the webui read meta-data, so that other parts can access it. btw: Just today, someone else tried a completely different approach: butaixianran/Stable-Diffusion-Webui-Civitai-Helper
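A rough sketch of what "read meta-data for all four model types so that other parts can access it" could look like. This is illustrative only, not the patch itself; the directory names follow the usual webui layout, and `load_meta` is injected so the metadata format (JSON, YAML, ...) stays pluggable:

```python
from pathlib import Path

# Usual webui model folders, keyed by model type (an assumption about layout).
MODEL_DIRS = {
    "checkpoint": "models/Stable-diffusion",
    "lora": "models/Lora",
    "embedding": "embeddings",
    "hypernetwork": "models/hypernetworks",
}


def build_registry(root, load_meta):
    """Scan the model folders under `root` and attach side-car metadata.

    `load_meta` is a callable taking a model file path and returning a dict.
    """
    registry = {}
    for kind, rel in MODEL_DIRS.items():
        base = Path(root) / rel
        if not base.is_dir():
            continue  # folder may not exist in every install
        for f in sorted(base.iterdir()):
            if f.suffix in (".safetensors", ".ckpt", ".pt"):
                registry[f.stem] = {
                    "type": kind,
                    "path": str(f),
                    "meta": load_meta(f),  # side-car data, if any
                }
    return registry
```

Other parts of the UI (or extensions) would then read `registry[name]["meta"]` rather than re-scanning the disk themselves.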
Is there an existing issue for this?
What would your feature do?
Allow storing meta-data for models (i.e. Checkpoints, LoRAs, Hypernets,
Embeddings) which can be useful for the WebUI, in a YAML file together with the
model (in the same directory, with extension `.webui.yaml`).

This would allow solving a whole bunch of existing feature requests; I found at
least these:
#3121 #3497 #3522 #4996 #5237 #5922 #6013 #6729 #7169
It might also help with:
#4476 #3443 #4286 #1800 #6574
There already exists a pull request, #7953, which stores 1 piece of meta-data
(a description) in a `.txt` file, similar to sd-model-preview, but we might
want to store much more, e.g.:
- tags (e.g. showing any models with that tag)
- the base model (which checkpoints does it work with?)
(and that list is just what I can come up with, I'm sure others will have
a lot of other great ideas!)
Proposed workflow
User has to manually create the meta-data file (at least at first; later on,
there might be extensions that allow doing this via the UI, or maybe model
authors will start providing such files too).
Whenever the WebUI creates a list of models by scanning a directory, it will
need to also load and parse the meta-data file and add the retrieved
information to the corresponding model object.
This would then allow other parts of the WebUI (and of course extensions) to
use this data (see the list above for some ideas).
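A minimal sketch of the loading step described above (Python; the helper names are hypothetical, and the `.webui.yaml` naming follows the proposed convention):

```python
from pathlib import Path


def metadata_path(model_file):
    # "<model>.webui.yaml" in the same directory, per the proposed convention
    return Path(model_file).with_suffix(".webui.yaml")


def load_metadata(model_file):
    """Return the parsed side-car metadata, or {} if there is none."""
    path = metadata_path(model_file)
    if not path.exists():
        return {}
    # PyYAML; assumed available, since the webui already pulls it in
    import yaml
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f) or {}
```

The scan step would call `load_metadata` once per model file and stash the resulting dict on the model object, so UI code never touches the file system directly.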
Additional information
I don't expect to simply write this feature request and then have others do all
the work :) I wanted to put this out there before starting any coding.
So this is more of an RFC:
Here's an example file for AOM3A (`AOM3A3.webui.yaml`).
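The attached YAML file itself was not captured here. Purely as an illustration (field names borrowed from the JSON example earlier in the thread; all values hypothetical, not the original attachment), such a file might look like:

```yaml
# Illustrative sketch only; field names mirror the JSON example above
title: AOM3A3
type: checkpoint merge
base: SD 1.5
tags:
  - anime
  - illustration
settings:
  negative_prompt:
    - "(worst quality, low quality:1.4)"
suggested:
  clip_skip: [1, 2]
```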