chore: add standardized vscode devcontainer for development #7944
Open · dsisco11 wants to merge 63 commits into invoke-ai:main from dsisco11:sisco/feat/vscode-devcontainer
+2,201 −1,503
Conversation
This is awesome. Thanks for contributing - we will take a look at it this week.
Hell yeah, thank you for working on this! I'll take some time to test next week.
Looking good! Left a few comments.
keturn reviewed Apr 24, 2025
Commits (some titles truncated):
- …ents of src directory
- … for conveniance.
- chore: use container mount path variable instead of hardcoded strings
- chore: add proper env file initialization for devcontainer
- chore: explicitly create other cache folders
- chore remove file so it can be recommitted properly
- chore: commit init file with executable mode set
- chore: remove unneeded debug launch target
- chore: add more recommended extensions
- chore: remove unneeded git files
- chore: fix python preinstall
- chore: update cuda index to match main
- chore: drop redundant devcontainer specific extensions list and just use global workspace extensions list.
- refactor: move to hatchling for python compiler-backend
- chore: oh we actually DO need to specify extensions in the devcontainer.json
- refactor: enable UV to handle device-specific torch index resolution.
- chore: whoops, fix the named volume name
- chore: add vscode action for "run currently focused python test file"
- chore: fix permissions in devcontainer image
- chore: fix devcontainer image brace expansion
- chore: improve vscode test task
- chore: add reasonable minimum requirements
- chore: add commented podman workarounds
- chore: setup vscode test discovery for python
- …n problem for WSL
- …: Frontend, Backend, AppData
- …nstalled uv packages
Labels: invocations · python · python-deps · Root
Summary
This PR adds a standardized VSCode devcontainer setup for the repository.
It allows developers to check out branches (or mount the repo from a local folder) into a sterile, isolated container that provides a standard configuration of all tools required for development.
Included is a debugger setup for both the frontend & backend, located under the usual "Run & Debug" menu.
Additionally, a few VSCode build tasks (Ctrl+Shift+B) have been included for common development actions.
Discord Thread
There is a discussion thread about this work in the Discord forum:
https://discord.com/channels/1020123559063990373/1360184927538249809
Dependency Caching
The devcontainer is configured such that all current packaging systems for the project (UV & PNPM) have their package-store directories mounted into a global named volume, "invokeai-dev-cache".
What this means is that packages are downloaded into this volume and then linked into any devcontainer instance that needs them.
A named volume is used here instead of a bind mount due to the performance penalties incurred when Docker has to transfer between its virtual file system and the host file system.
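As a rough sketch of the mechanism (the exact mount target and environment wiring in this PR may differ), the shared cache is a named volume declared in devcontainer.json, with the package managers' store/cache paths pointed somewhere underneath it:

```jsonc
// .devcontainer/devcontainer.json -- illustrative sketch, not the PR's exact contents
{
  "mounts": [
    // One shared named volume holds the UV and PNPM package stores, so downloads
    // survive rebuilds and are shared between container instances.
    "source=invokeai-dev-cache,target=/var/cache/invokeai-dev,type=volume"
  ],
  "containerEnv": {
    // Point UV's cache at the shared volume (path is illustrative);
    // the pnpm store directory would be configured similarly.
    "UV_CACHE_DIR": "/var/cache/invokeai-dev/uv"
  }
}
```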
InvokeAI Data Persistence
In order to keep the devcontainer isolated & sterile, the InvokeAI app data & models are only persisted for a given container instance.
What this means is that if you rebuild the container or open a new container instance, the previous app data, such as the database and downloaded models, will be deleted.
This isn't a bad thing: it's good to be able to work on multiple branches without the risk of one environment impacting another.
Proposal: Model-Store
Admittedly, having to redownload AI models is frustrating.
This is why I propose we implement a relatively simple feature: a .model-store directory, similar to the existing .download-cache directory. This model-store would function just like the PNPM or UV package-store.
The proposed behavior is that when Invoke goes to download a model, it first checks the model-store; if the desired model folder is found, it is linked into the usual download destination.
This would allow us to configure the devcontainer to do the same thing it already does for the UV/PNPM stores and mount a global named volume at Invoke's model-store, allowing downloaded models to be shared across all container instances.
Thus, reinstalling models when you create a new container (after a rebuild or such) will complete instantly!
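If the model-store is implemented, the devcontainer side would likely amount to one more named-volume mount. A minimal sketch, with a hypothetical volume name and target path:

```jsonc
// Hypothetical addition to .devcontainer/devcontainer.json
{
  "mounts": [
    // Shared across all container instances, so each model only has to be downloaded once.
    "source=invokeai-dev-models,target=/invokeai-root/.model-store,type=volume"
  ]
}
```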
Custom Node Development
The devcontainer also facilitates easy custom-node development by allowing custom-node folders or git repos to be placed within .devcontainer/mounts/custom-nodes. Node packages within this path are automatically mounted into the devcontainer.
This means the devcontainer can be used to both launch the development version of InvokeAI and develop custom nodes.
Source files for custom nodes within the devcontainer gain the benefit of being fully compatible with all dev tools such as Pylance and the Python debugger.
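Conceptually, this is just a bind mount from the host folder into wherever Invoke discovers custom nodes inside the container. A minimal sketch (the target path is illustrative; the actual devcontainer wires this up automatically):

```jsonc
// Illustrative sketch of the custom-nodes mount
{
  "mounts": [
    {
      "source": "${localWorkspaceFolder}/.devcontainer/mounts/custom-nodes",
      "target": "/invokeai-root/nodes",
      "type": "bind"
    }
  ]
}
```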
EDIT: Hot-reloading for custom nodes now works!
Differences
There are a few things done differently in the devcontainer when compared to the traditional setup for InvokeAI.
Project Install
The project is installed using only the uv sync command, purposely excluding uv pip install. This is because uv sync already installs the project in editable mode by default.
Removing Setuptools
Using setuptools as the build backend was causing excessive build times within the container (upwards of 50 minutes). This is possibly related to bugs such as this one.
The project has been migrated to Hatchling, resulting in build times of around 2 minutes.
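For reference, the Hatchling build backend needs only a minimal build-system table in pyproject.toml (the PR may also carry wheel/sdist target settings not shown here):

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```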
UV Index
Traditionally, the correct packages to install for a given compute device (CPU/CUDA/ROCm) have been determined by setting a special environment variable, UV_INDEX, during container builds. The problem is that logic is required to determine the value of this variable, and it has to be set at multiple different stages.
Additionally, specifying an override index like this would cause uv lock to produce a different lockfile for different developers (based on the compute device they use, which determines their UV_INDEX value). To fix these issues, I have created a package-resolution setup in the UV section of pyproject.toml. This setup uses a few custom 'extras' along with system/platform identifiers to determine which index to pull from.
Now, rather than manually setting an environment variable, one of the following extras can be specified to indicate which compute device type is being used:
--extra using-cuda
--extra using-rocm
--extra using-cpu
If none of these extras is specified, the system simply defaults to CPU.
For macOS (sys_platform == 'darwin'), no special index is used and the default index is chosen. This completely eliminates the need to manually specify a UV index, and the switching logic now lives inside the pyproject config!
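Roughly, the mechanism looks like the pyproject.toml sketch below. The extra names come from this PR, but the index URLs, marker details, and the CPU default wiring are illustrative rather than the PR's exact configuration:

```toml
[project.optional-dependencies]
# Empty "selector" extras; they only steer index resolution below.
using-cuda = []
using-rocm = []
using-cpu = []

[tool.uv]
# The compute-device extras are mutually exclusive.
conflicts = [
  [{ extra = "using-cuda" }, { extra = "using-rocm" }, { extra = "using-cpu" }],
]

[tool.uv.sources]
torch = [
  # On macOS (sys_platform == 'darwin') no entry applies, so the default index is used.
  { index = "torch-cuda", extra = "using-cuda", marker = "sys_platform != 'darwin'" },
  { index = "torch-rocm", extra = "using-rocm", marker = "sys_platform == 'linux'" },
  { index = "torch-cpu",  extra = "using-cpu",  marker = "sys_platform != 'darwin'" },
]

[[tool.uv.index]]
name = "torch-cuda"
url = "https://download.pytorch.org/whl/cu128"  # CUDA version here is illustrative
explicit = true

[[tool.uv.index]]
name = "torch-rocm"
url = "https://download.pytorch.org/whl/rocm6.2"  # ROCm version here is illustrative
explicit = true

[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```

With something like this in place, uv sync --extra using-cuda (or using-rocm / using-cpu) pulls torch from the matching index; the PR's default-to-CPU behavior when no extra is given is not reproduced in this sketch.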