@@ -39,7 +39,7 @@ The NVIDIA device plugin for Kubernetes is a Daemonset that allows you to automa
 - Run GPU enabled containers in your Kubernetes cluster.
 
 This repository contains NVIDIA's official implementation of the [Kubernetes device plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
-As of v0.16.0 this repository also holds the implementation for GPU Feature Discovery labels,
+As of v0.16.1 this repository also holds the implementation for GPU Feature Discovery labels,
 for further information on GPU Feature Discovery see [here](docs/gpu-feature-discovery/README.md).
 
 Please note that:
@@ -123,7 +123,7 @@ Once you have configured the options above on all the GPU nodes in your
 cluster, you can enable GPU support by deploying the following Daemonset:
 
 ```shell
-$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.16.0/deployments/static/nvidia-device-plugin.yml
+$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.16.1/deployments/static/nvidia-device-plugin.yml
 ```
 
 **Note:** This is a simple static daemonset meant to demonstrate the basic
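Once the daemonset above is running, workloads consume GPUs by requesting the `nvidia.com/gpu` extended resource that the plugin advertises. A minimal pod-spec sketch for context (the pod name, container image, and command are illustrative assumptions, not taken from this diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # assumed image tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # request one GPU advertised by the plugin
```

The scheduler will place this pod only on a node where the device plugin has registered at least one allocatable `nvidia.com/gpu`.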
@@ -558,11 +558,11 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 $ helm repo update
 ```
 
-Then verify that the latest release (`v0.16.0`) of the plugin is available:
+Then verify that the latest release (`v0.16.1`) of the plugin is available:
 ```
 $ helm search repo nvdp --devel
 NAME                        CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin   0.16.0         0.16.0       A Helm chart for ...
+nvdp/nvidia-device-plugin   0.16.1         0.16.1       A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -573,7 +573,7 @@ The most basic installation command without any options is then:
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
   --namespace nvidia-device-plugin \
   --create-namespace \
-  --version 0.16.0
+  --version 0.16.1
 ```
 
 **Note:** You only need to pass the `--devel` flag to `helm search repo`
@@ -582,7 +582,7 @@ version (e.g. `<version>-rc.1`). Full releases will be listed without this.
 
 ### Configuring the device plugin's `helm` chart
 
-The `helm` chart for the latest release of the plugin (`v0.16.0`) includes
+The `helm` chart for the latest release of the plugin (`v0.16.1`) includes
 a number of customizable values.
 
 Prior to `v0.12.0` the most commonly used values were those that had direct
@@ -592,7 +592,7 @@ case of the original values is then to override an option from the `ConfigMap`
 if desired. Both methods are discussed in more detail below.
 
 The full set of values that can be set is found here:
-[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.0/deployments/helm/nvidia-device-plugin/values.yaml).
+[here](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.1/deployments/helm/nvidia-device-plugin/values.yaml).
 
 #### Passing configuration to the plugin via a `ConfigMap`.
@@ -631,7 +631,7 @@
 And deploy the device plugin via helm (pointing it at this config file and giving it a name):
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set-file config.map.config=/tmp/dp-example-config0.yaml
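The hunk above points helm at `/tmp/dp-example-config0.yaml`, but the diff does not show that file's contents. A minimal sketch of what such a config file might look like, assuming the plugin's `version: v1` config schema and its time-slicing `sharing` stanza (the flag and replica values are illustrative, not taken from this diff):

```yaml
version: v1
flags:
  migStrategy: "none"        # example flag; other plugin flags follow the same pattern
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 4            # oversubscribe each physical GPU 4 ways (illustrative value)
```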
@@ -653,7 +653,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 ```
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.name=nvidia-plugin-configs
@@ -681,7 +681,7 @@
 And redeploy the device plugin via helm (pointing it at both configs with a specified default).
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.default=config0 \
@@ -700,7 +700,7 @@ $ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
 ```
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set config.default=config0 \
@@ -783,7 +783,7 @@ chart values that are commonly overridden are:
 ```
 
 Please take a look in the
-[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.0/deployments/helm/nvidia-device-plugin/values.yaml)
+[`values.yaml`](https://github.com/NVIDIA/k8s-device-plugin/blob/v0.16.1/deployments/helm/nvidia-device-plugin/values.yaml)
 file to see the full set of overridable parameters for the device plugin.
 
 Examples of setting these options include:
@@ -792,7 +792,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
 100ms of CPU time and a limit of 512MB of memory.
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set compatWithCPUManager=true \
@@ -803,7 +803,7 @@ $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
 Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set compatWithCPUManager=true \
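With the `mixed` `migStrategy` shown above, MIG devices are advertised as their own resource types rather than as plain `nvidia.com/gpu`. A hedged pod-spec sketch requesting one such device, assuming a `1g.5gb` MIG profile exists on the node (the resource name is inferred from the `MIG-1g.5gb` label shown elsewhere in this diff; pod name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-test             # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # assumed image tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1   # request one 1g.5gb MIG slice
```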
@@ -822,7 +822,7 @@ Discovery to perform this labeling.
 To enable it, simply set `gfd.enabled=true` during helm install.
 ```
 helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --namespace nvidia-device-plugin \
     --create-namespace \
     --set gfd.enabled=true
@@ -867,7 +867,7 @@ nvidia.com/gpu.product = A100-SXM4-40GB-MIG-1g.5gb-SHARED
 
 #### Deploying gpu-feature-discovery in standalone mode
 
-As of v0.16.0, the device plugin's helm chart has integrated support to deploy
+As of v0.16.1, the device plugin's helm chart has integrated support to deploy
 [`gpu-feature-discovery`](https://gitlab.com/nvidia/kubernetes/gpu-feature-discovery/-/tree/main).
 
 When deploying gpu-feature-discovery in standalone mode, begin by setting up the
When gpu-feature-discovery in deploying standalone, begin by setting up the
@@ -878,13 +878,13 @@ $ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
 $ helm repo update
 ```
 
-Then verify that the latest release (`v0.16.0`) of the plugin is available
+Then verify that the latest release (`v0.16.1`) of the plugin is available
 (Note that this includes the GFD chart):
 
 ```shell
 $ helm search repo nvdp --devel
 NAME                        CHART VERSION  APP VERSION  DESCRIPTION
-nvdp/nvidia-device-plugin   0.16.0         0.16.0       A Helm chart for ...
+nvdp/nvidia-device-plugin   0.16.1         0.16.1       A Helm chart for ...
 ```
 
 Once this repo is updated, you can begin installing packages from it to deploy
@@ -894,7 +894,7 @@ The most basic installation command without any options is then:
 
 ```
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version 0.16.0 \
+    --version 0.16.1 \
     --namespace gpu-feature-discovery \
     --create-namespace \
     --set devicePlugin.enabled=false
@@ -905,7 +905,7 @@ the default namespace.
 
 ```shell
 $ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
-    --version=0.16.0 \
+    --version=0.16.1 \
     --set allowDefaultNamespace=true \
     --set nfd.enabled=false \
     --set migStrategy=mixed \
@@ -928,31 +928,31 @@ Using the default values for the flags:
 $ helm upgrade -i nvdp \
     --namespace nvidia-device-plugin \
     --create-namespace \
-    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.16.0.tgz
+    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.16.1.tgz
 ```
 
 ## Building and Running Locally
 
 The next sections are focused on building the device plugin locally and running it.
 It is intended purely for development and testing, and is not required by most users.
-It assumes you are pinning to the latest release tag (i.e. `v0.16.0`), but can
+It assumes you are pinning to the latest release tag (i.e. `v0.16.1`), but can
 easily be modified to work with any available tag or branch.
 
 ### With Docker
 
 #### Build
 Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
 ```shell
-$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.16.0
-$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.16.0 nvcr.io/nvidia/k8s-device-plugin:devel
+$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.16.1
+$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.16.1 nvcr.io/nvidia/k8s-device-plugin:devel
 ```
 
 Option 2, build without cloning the repository:
 ```shell
 $ docker build \
     -t nvcr.io/nvidia/k8s-device-plugin:devel \
     -f deployments/container/Dockerfile.ubuntu \
-    https://github.com/NVIDIA/k8s-device-plugin.git#v0.16.0
+    https://github.com/NVIDIA/k8s-device-plugin.git#v0.16.1
 ```
 
 Option 3, if you want to modify the code: