This is an up-to-date Devcontainer template for developing Python projects (with Git and Poetry).
It has optional support for TensorFlow and PyTorch on GPU-enabled machines.
## Instructions

1. Create a new repository using this GitHub template.
2. Clone the repository and open it using VSCode.
3. Run `python .devcontainer/change_gpu_config.py` to select your GPU mode.
4. Press `Ctrl+Shift+P` to open the command palette.
5. Search for *Dev Containers: Rebuild and Reopen in Container*.
6. Follow the instructions in the terminal to install TensorFlow or PyTorch.
## Extensions

| Name | Description | ID |
| --- | --- | --- |
| Python | Python language support | ms-python.python |
| Pylint | Static code analyser | ms-python.vscode-pylint |
| Black | Code formatter | ms-python.black-formatter |
| Jupyter | Jupyter extension pack | ms-toolsai.jupyter |
| Prettier | Code formatter | esbenp.prettier-vscode |
## Common errors

| ❌ Error | ✅ Solution |
| --- | --- |
| Shell scripts fail to run or complain about `\r` characters | Check that the End of Line sequence of the scripts is set to LF in VSCode. |
| `poetry shell` fails or is not recognized as a command | Poetry's shell command was moved to a plugin (January 2025). Run `pip install poetry-plugin-shell` in the terminal. |
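If a script has already picked up Windows line endings, they can also be stripped from the terminal. The sketch below defines a small helper; `fix_eol` is a name invented for this example, and the usage path is illustrative.

```shell
# fix_eol strips trailing carriage returns (\r) from a file in place,
# converting CRLF line endings to LF. (Helper name invented for this sketch.)
fix_eol() {
  sed -i 's/\r$//' "$1"
}

# Usage (path is illustrative):
# fix_eol .devcontainer/my_script.sh
```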
## Versioning

| CUDA Toolkit | Linux Driver Version | Windows Driver Version |
| --- | --- | --- |
| CUDA 12.x | >= 525.60.13 | >= 527.41 |
| CUDA 11.x | >= 450.80.02 | >= 452.39 |
The CUDA Toolkits have minor version compatibility with the drivers. This means that CUDA Toolkit 12.6 should work with a driver that's designed for CUDA 12.0 because they have the same major version.
The CUDA Toolkits are backwards compatible with the drivers. This means that CUDA Toolkit 11.8 will still work with a newer driver that's designed for CUDA 12.x versions.
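To see how a machine lines up with the table above, you can compare the installed driver version against the minimum. The helper below is a sketch: `version_ge` is a name invented for this example, and it relies on GNU `sort -V` for version ordering.

```shell
# version_ge succeeds when $1 >= $2 for dotted numeric versions.
# (Helper name invented for this sketch; requires GNU sort -V.)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# On a real machine, read the version from nvidia-smi, e.g.:
# driver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)
driver="535.104.05"   # example value; replace with the nvidia-smi output
if version_ge "$driver" "525.60.13"; then
  echo "driver $driver satisfies the CUDA 12.x minimum (>= 525.60.13)"
else
  echo "driver $driver is too old for CUDA 12.x"
fi
```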
| Latest version | Python version | CUDA |
| --- | --- | --- |
| TensorFlow 2.17.0 | 3.9-3.12 | 12.3 |
| PyTorch 2.5.0 | 3.9-3.12 | 12.4 |
The most recent Python version supported by both frameworks is 3.12.
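Once the container is up, a quick way to confirm that the framework you installed actually sees the GPU is the snippet below; both imports are attempted, so it works regardless of which framework you chose.

```shell
# Sketch: check GPU visibility from inside the container.
# Only the framework you installed will be importable.
python3 - <<'EOF'
try:
    import torch
    print("PyTorch sees CUDA:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed")

try:
    import tensorflow as tf
    print("TensorFlow sees GPUs:", tf.config.list_physical_devices("GPU"))
except ImportError:
    print("TensorFlow not installed")
EOF
```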
## GPU-accelerated containers
The host's NVIDIA driver gets passed to the container using the NVIDIA Container Toolkit.
You can validate your Container Toolkit installation by checking the Docker daemon configuration file on your server: /etc/docker/daemon.json.
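One way to script that validation is sketched below; `check_nvidia_runtime` is a helper name invented for this example, and `docker info` offers an equivalent check.

```shell
# check_nvidia_runtime reports whether the "nvidia" runtime appears in a
# Docker daemon config file (defaults to /etc/docker/daemon.json).
# (Helper name invented for this sketch.)
check_nvidia_runtime() {
  grep -q '"nvidia"' "${1:-/etc/docker/daemon.json}" 2>/dev/null
}

if check_nvidia_runtime; then
  echo "NVIDIA runtime is registered with Docker"
else
  echo "NVIDIA runtime not found; (re)install the NVIDIA Container Toolkit"
fi

# Alternative check: docker info | grep -i runtimes
```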
To spin up a GPU-accelerated container, append the `--gpus=all` and `--runtime=nvidia` arguments to your `docker run` command.
Luckily, these arguments are already preconfigured in devcontainer.json.
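For reference, the relevant part of such a `devcontainer.json` looks roughly like this (a minimal sketch; the template's actual file contains more settings):

```json
{
  "runArgs": ["--gpus=all", "--runtime=nvidia"]
}
```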
The NVIDIA driver on the A5000 server supports CUDA 12.0.
A GPU-enabled container requires the NVIDIA CUDA Toolkit (contains cuFFT, cuBLAS, etc.) and cuDNN in the container itself.