Checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pixi, using pixi --version.
Reproducible example
Using this example Dockerfile:
```dockerfile
FROM nvcr.io/nvidia/l4t-base:r36.2.0

RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-compiler-12.2 \
    cuda-minimal-build-12.2 \
    cuda-libraries-dev-12.2

RUN mkdir -p /usr/local/bin && wget https://github.com/prefix-dev/pixi/releases/download/v0.40.3/pixi-aarch64-unknown-linux-musl.tar.gz -O - | tar -C /usr/local/bin -xzf -

CMD [ "pixi", "info", "-vvv"]
```
Launching it like this:
```shell
# Optional: if you want to run the image on a non-arm64 machine, run this once:
# docker run --pull always --rm --privileged multiarch/qemu-user-static --reset -p yes -c yes
docker run --tty --rm --platform linux/arm64 $(docker build --quiet .)
```
Results in:
```
DEBUG pixi_config: Loading config from /etc/pixi/config.toml
DEBUG pixi_config: Loading config from /root/.config/pixi/config.toml
DEBUG pixi_config: Loading config from /root/.pixi/config.toml
System
------------
 Pixi version: 0.40.3
 Platform: linux-aarch64
 Virtual packages: __unix=0=0
                 : __linux=6.8.0=0
                 : __glibc=2.35=0
                 : __archspec=1=aarch64
 Cache dir: /root/.cache/rattler/cache
 Auth storage: /root/.rattler/credentials.json
 Config locations: No config files found

Global
------------
 Bin dir: /root/.pixi/bin
 Environment dir: /root/.pixi/envs
 Manifest dir: /root/.pixi/manifests/pixi-global.toml
```
When I manually call the functions that rattler uses to extract the version, inside the libnvidia-ml.so that does exist in the container (although not in the expected location), I see the warning below, because the installed library is only a stub (a sketch of such a manual probe follows the warning). There is also no nvidia-smi available.
```
WARNING:

You should always run with libnvidia-ml.so that is installed with your
NVIDIA Display Driver. By default it's installed in /usr/lib and /usr/lib64.
libnvidia-ml.so in GDK package is a stub library that is attached only for
build purposes (e.g. machine that you build your application doesn't have
to have Display Driver installed).
```
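For context, this is roughly what that manual probe looks like. A minimal sketch in Python with ctypes, assuming the NVML entry points nvmlInit_v2 and nvmlSystemGetCudaDriverVersion_v2; which call rattler actually makes is not confirmed here, and the library path is simply whichever copy of libnvidia-ml.so the container provides (on this image, the toolkit stub):

```python
import ctypes

# Load whichever libnvidia-ml.so the container provides; on the L4T base
# image plus the CUDA toolkit packages this resolves to the stub library.
nvml = ctypes.CDLL("libnvidia-ml.so.1")  # adjust the name/path as needed

ret = nvml.nvmlInit_v2()
if ret != 0:  # NVML_SUCCESS == 0; the stub fails instead of initializing
    raise SystemExit(f"nvmlInit_v2 failed with NVML error code {ret}")

version = ctypes.c_int()
ret = nvml.nvmlSystemGetCudaDriverVersion_v2(ctypes.byref(version))
if ret != 0:
    raise SystemExit(f"nvmlSystemGetCudaDriverVersion_v2 failed with code {ret}")

# NVML encodes 12.2 as 12020 (major * 1000 + minor * 10).
print(f"__cuda={version.value // 1000}.{(version.value % 1000) // 10}")
nvml.nvmlShutdown()
```

On a machine with a real driver this prints something like __cuda=12.2; against the stub it just emits the warning above and fails, which is presumably why the detection comes up empty.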
Issue description
Currently, when running pixi inside an NVIDIA L4T image (e.g. nvcr.io/nvidia/l4t-base:r36.2.0), it fails to detect the CUDA version. I believe this is because the libnvidia-ml.so that rattler uses for CUDA version extraction is only a stub.
A potentially relevant, similar issue for WSL that has since been resolved: #1355
Expected behavior
I would expect the __cuda virtual package to be populated with the correct version so that I can build software.
As mentioned in the linked issue, a potential solution could be to allow setting the virtual package versions in the system-wide /etc/pixi/config.toml file.
Alternatively, the NVIDIA CUDA compiler is available; perhaps that could be used as a fallback (a sketch follows the output below)?
```
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:08:11_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
```
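A minimal sketch of that fallback idea, assuming only that nvcc is on PATH; this is an illustration of the suggestion, not how pixi or rattler currently behave:

```python
import re
import subprocess

# Fallback idea only: derive a CUDA version from `nvcc --version`
# when libnvidia-ml.so is just a build-time stub.
out = subprocess.run(
    ["nvcc", "--version"], capture_output=True, text=True, check=True
).stdout

match = re.search(r"release (\d+\.\d+)", out)
if match:
    print(f"__cuda={match.group(1)}")  # prints __cuda=12.2 for the output above
else:
    raise SystemExit("could not parse a CUDA release from nvcc output")
```

Note that the toolkit release reported by nvcc is not necessarily the same as the CUDA version supported by the installed driver, which is presumably why detection goes through libnvidia-ml.so in the first place.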
I'll have to check whether that is the correct/expected behavior, as I thought it required the actual drivers and not just the compilers. But for the time being, you can "fake" it by setting: