cannot update snapshot metadata #147
/kind bug
@gman0 Thanks for reporting the issue! Does this "failed to update snapshot default/new-nfs-share-snap creation timestamp" error appear multiple times in the log? By default it should retry 5 times.
/assign @zhucan
@gman0 Can you paste your RBAC YAML file for the snapshotter?
@xing-yang here's the whole log: https://pastebin.com/8TqMZqwt

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openstack-manila-csi-controllerplugin
  labels:
    app: openstack-manila-csi
    component: controllerplugin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openstack-manila-csi-controllerplugin
  labels:
    app: openstack-manila-csi
    component: controllerplugin
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.manila.csi.openstack.org/aggregate-to-openstack-manila-csi-controllerplugin: "true"
rules: []
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openstack-manila-csi-controllerplugin-rules
  labels:
    app: openstack-manila-csi
    component: controllerplugin
    rbac.manila.csi.openstack.org/aggregate-to-openstack-manila-csi-controllerplugin: "true"
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["update"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create", "list", "watch", "delete", "get", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openstack-manila-csi-controllerplugin
  labels:
    app: openstack-manila-csi
    component: controllerplugin
subjects:
  - kind: ServiceAccount
    name: openstack-manila-csi-controllerplugin
    namespace: default
roleRef:
  kind: ClusterRole
  name: openstack-manila-csi-controllerplugin
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openstack-manila-csi-controllerplugin
  labels:
    app: openstack-manila-csi
    component: controllerplugin
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openstack-manila-csi-controllerplugin
  labels:
    app: openstack-manila-csi
    component: controllerplugin
subjects:
  - kind: ServiceAccount
    name: openstack-manila-csi-controllerplugin
    namespace: default
roleRef:
  kind: Role
  name: openstack-manila-csi-controllerplugin
  apiGroup: rbac.authorization.k8s.io
```
@gman0 In v1.2.0, we added a new feature to support the status subresource. In order for that to work, you'll have to deploy the newer version of the snapshot CRDs. Since you already have the v1.1.0 snapshot CRDs installed in your k8s cluster, they will not be installed again when you deploy external-snapshotter v1.2.0, so the snapshot controller and the CRDs are out of sync. Is it possible for you to restart your k8s cluster, deploy external-snapshotter 1.2.0+, and test again? We'll fix this so it is backward compatible.
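One way to confirm such a mismatch (a sketch, assuming the v1.2.0 CRDs declare a status subresource while the v1.1.0 ones do not):

```sh
# Prints the subresources stanza of the installed VolumeSnapshot CRD;
# empty output suggests the CRD predates external-snapshotter v1.2.0.
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  -o jsonpath='{.spec.subresources}'
```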
@gman0 @xing-yang I have tested this in v1.15.0. I had already installed the v1.0.1 snapshot CRDs (the image version was v1.0.1) in my k8s cluster and snapshot creation succeeded, but when I upgraded the image to v1.2.0, the CRDs were not upgraded and the old CRDs were still used. So I deleted the snapshot pod and the old CRDs and recreated the snapshot pod, and it works. There is no need to restart the k8s cluster, only to delete the CRDs. @xing-yang Maybe we should delete the old CRDs when creating the snapshot pod if the CRD version differs from the image version?
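A sketch of that workaround, with placeholder pod name and namespace (note that deleting the CRDs also deletes any existing snapshot objects):

```sh
# Remove the stale CRDs left behind by the older snapshotter.
kubectl delete crd \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io \
  volumesnapshots.snapshot.storage.k8s.io
# Restart the snapshotter pod so the v1.2.0 container recreates the CRDs.
kubectl delete pod <csi-snapshotter-pod> -n <namespace>
```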
@xing-yang @zhucan ah I see, I'll try it out today, thanks!
@zhucan Sure. Please work on a fix. Thanks.
@xing-yang @zhucan removed the volumesnapshot CRDs, all is working now! Thank you both! :) A fix would be nice though :p Closing.
@gman0 I'm going to re-open this issue to keep track of the bug fix. Once it is fixed, we can close this again.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
kubectl get volumesnapshots shows "ReadyToUse false". Please suggest whether the version of the snapshotter I am using is correct.
@Sathishkunisai: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I don't think what you hit is the same problem reported in this very old issue.
Thanks @xing-yang. To give more insight: my cluster is running in Azure Kubernetes Service (AKS), using Azure Disk to mount rook volumes (OSDs), with the feature gate enabled in the API server [ volumesnapshot=true ]. I followed the procedure here in the link as well. Apart from that, CSI_ENABLE_SNAPSHOTTER: "true" is enabled in operator.yaml. What am I missing? Please advise.
@xing-yang upgraded to 2.1.1, the result is the same:

```
NAME                            READYTOUSE   SOURCEPVC                                    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT   CREATIONTIME   AGE
datadir-mongo-pvc-snapshot-15   false        datadir-mongo-replica-mongodb-replicaset-0                                         csi-rbdplugin-snapclass                                    3h7m
```
apiVersion: snapshot.storage.k8s.io/v1beta1
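When a snapshot stays at READYTOUSE false, something like the following usually narrows it down (the snapshot name comes from the output above; the pod name and namespace are placeholders):

```sh
# Events on the VolumeSnapshot often state why it never became ready.
kubectl describe volumesnapshot datadir-mongo-pvc-snapshot-15
# The csi-snapshotter sidecar logs show errors returned by the CSI driver.
kubectl logs <csi-provisioner-pod> -c csi-snapshotter -n <namespace>
```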
@Sathishkunisai As this is more specific to Ceph, I would suggest you open an issue in the rook repository and try to get help in the rook Slack.
My attempt at creating a new VolumeSnapshot from a PVC source resulted in the following error message in external-snapshotter:

```
failed to update snapshot default/new-nfs-share-snap creation timestamp
```
The snapshot is successfully created by the driver, but external-snapshotter is having trouble updating the snapshot object metadata.
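For reference, a minimal v1alpha1 VolumeSnapshot of the kind being created here; the snapshot name matches the error above, while the class and PVC names are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-nfs-share-snap
  namespace: default
spec:
  snapshotClassName: <snapshot-class>  # placeholder
  source:
    kind: PersistentVolumeClaim
    name: <source-pvc>                 # placeholder: the PVC to snapshot
```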
I'm using external-snapshotter v1.2.0-0-gb3f591d8 in k8s 1.15.0 running with the VolumeSnapshotDataSource=true feature gate. The previous version of external-snapshotter, 1.1.0, works just fine though. Is this a regression or a mis-configuration on my part? Always happy to debug more or provide more logs! Thanks!
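For completeness, a sketch of how that feature gate is typically enabled on the API server in k8s 1.15 (the exact flag placement depends on how the control plane is deployed):

```sh
# kube-apiserver invocation with the VolumeSnapshotDataSource feature gate;
# all other required API-server flags are omitted here.
kube-apiserver --feature-gates=VolumeSnapshotDataSource=true
```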