Using a Model to Generate Prompts for AI Image-Generation Applications (Midjourney, Stable Diffusion, etc.)
Like the official Midjourney feature, it supports parsing prompts from images, as well as expanding on an existing prompt. It also lets you write prompts directly in Chinese and get back English prompt text that produces better generation results.
In previous articles, I have mentioned my preferred development environment, which is based on Docker and Nvidia's official deep learning base containers. I won't repeat the details here; if you're interested, see my earlier article on getting started with Docker-based deep learning environments. Long-time readers should already be quite familiar with it.
Of course, since part of this article can be run on a CPU alone, you can also refer to "Playing with the Stable Diffusion Model on MacBook Devices with M1 and M2 Chips" from a few months ago to set up your environment.
Once your Docker environment is configured, we can move on to the fun part.
In a suitable directory, use `git clone` or download the Zip archive to get the "Docker Prompt Generator" project code onto your local machine:
git clone https://github.com/soulteary/docker-prompt-generator.git
# or
curl -sL -o docker-prompt-generator.zip https://github.com/soulteary/docker-prompt-generator/archive/refs/heads/main.zip
Next, enter the project directory and use Nvidia's official PyTorch Docker image as the base to build the environment. Compared to pulling a pre-built image directly from Docker Hub, building it yourself saves a lot of time.
Execute the following commands in the project directory to complete the model application build:
# Build the base image
docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base
# Build the CPU application
docker build -t soulteary/prompt-generator:cpu . -f docker/Dockerfile.cpu
# Build the GPU application
docker build -t soulteary/prompt-generator:gpu . -f docker/Dockerfile.gpu
Then, depending on your hardware, run one of the following commands to start the model application with its web UI:
# Run the CPU image (no --gpus flag needed)
docker run --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -it -p 7860:7860 soulteary/prompt-generator:cpu
# Run the GPU image
docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -it -p 7860:7860 soulteary/prompt-generator:gpu
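If you prefer Docker Compose, the GPU `docker run` command above can be expressed as the following compose file. This is a sketch: the flags are taken directly from the command above, but the file name (`docker-compose.yml`) and service name are my own choices, not part of the project.

```yaml
# docker-compose.yml — the GPU run command expressed as Compose config
services:
  prompt-generator:
    image: soulteary/prompt-generator:gpu
    ipc: host                     # --ipc=host
    ulimits:
      memlock: -1                 # --ulimit memlock=-1
      stack: 67108864             # --ulimit stack=67108864
    ports:
      - "7860:7860"               # -p 7860:7860
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia      # --gpus all
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up` from the directory containing the file.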
Open the host's IP address on port 7860 (for example, http://localhost:7860) in your browser, and you can start using the tool.
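Besides the browser, you can also call the running container programmatically. The sketch below uses only the Python standard library; note that the endpoint path (`/api/predict`) and the payload shape are assumptions based on the generic Gradio HTTP API, so check them against your Gradio version before relying on this.

```python
# Minimal sketch of calling the tool's web UI over HTTP.
# The endpoint and payload shape are assumptions (generic Gradio API).
import json
import urllib.request


def build_payload(text: str) -> bytes:
    """Encode the input text as a Gradio-style JSON payload."""
    return json.dumps({"data": [text]}).encode("utf-8")


def generate_prompt(text: str, host: str = "http://localhost:7860") -> str:
    """POST the text to the running container and return the generated prompt."""
    req = urllib.request.Request(
        f"{host}/api/predict",  # assumed Gradio endpoint; verify for your version
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"][0]


if __name__ == "__main__":
    # Requires the container from the previous step to be running.
    print(generate_prompt("a cat astronaut"))
```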
Models:
- Prompt model: succinctly/text2image-prompt-generator
- Translation model: Helsinki-NLP/opus-mt-zh-en
- CLIP model: laion/CLIP-ViT-H-14-laion2B-s32B-b79K
Datasets:
- succinctlyai/midjourney-texttoimage
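To make the division of labor concrete, here is a minimal sketch of how the translation and prompt models chain together, assuming the standard Hugging Face `transformers` pipeline API. The model names come from the list above; the chaining logic is illustrative and is not the project's actual implementation.

```python
# Sketch: Chinese input -> English translation -> expanded image prompt.
# Assumes `pip install transformers` plus a PyTorch or TensorFlow backend.


def load_pipelines():
    """Lazily load the translator and prompt generator (heavy downloads)."""
    from transformers import pipeline  # imported here to defer the heavy dependency

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
    generator = pipeline(
        "text-generation", model="succinctly/text2image-prompt-generator"
    )
    return translator, generator


def zh_to_prompt(text_zh, translator, generator, max_length=77):
    """Translate Chinese input to English, then expand it into a richer prompt."""
    text_en = translator(text_zh)[0]["translation_text"]
    result = generator(text_en, max_length=max_length, num_return_sequences=1)
    return result[0]["generated_text"]


if __name__ == "__main__":
    t, g = load_pipelines()  # downloads the models on first use
    print(zh_to_prompt("一只在月球上弹吉他的猫", t, g))
```

The CLIP model listed above handles the reverse direction (image-to-prompt), which this sketch does not cover.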