Add new SC Names and Pools for Ceph-FS #1770
Conversation
With this change coming in, we will need changes in ocm-ramen-samples for the ceph-fs sc name.
@@ -12,4 +12,4 @@ spec:
   resources:
     requests:
       storage: 1Gi
-  storageClassName: rook-cephfs
+  storageClassName: rook-cephfs-test-fs1
Maybe also pass the filesystem name as a parameter, like rook-cephfs-$fsname?
I had thought about it earlier. I was thinking it would spin off more changes, but then again it's just one file. I will make the change.
Made the changes.
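For context, a minimal sketch of what deriving the storage class name from the filesystem name could look like in a test helper; the template and helper names here are hypothetical, not the actual change.

import string

# Hypothetical PVC template; the point is that the storage class name
# is built from the filesystem name instead of being hardcoded.
PVC_TEMPLATE = string.Template("""\
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs-$fsname
""")

def pvc_yaml(fsname):
    # Render the PVC manifest for a given Ceph filesystem name, e.g.
    # pvc_yaml("test-fs1") -> storageClassName: rook-cephfs-test-fs1
    return PVC_TEMPLATE.substitute(fsname=fsname)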
test/addons/rook-cephfs/test
Outdated
"--kustomize", | ||
"provision-test", | ||
context=cluster, | ||
f"--namespace={NAMESPACE}", "--filename=-", input=yaml, context=cluster | ||
) |
Why not keep the code as is, and modify the PVC to use "test-fs1"?
Hmm, the changes are not much, and it's a separate commit. I will drop it.
Removed the commit.
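For reference, a minimal sketch of the two apply styles discussed in this thread, using subprocess directly rather than the drenv kubectl wrapper; the function names are hypothetical.

import subprocess

def apply_kustomize(path, context):
    # Apply a kustomize directory, as the original code did.
    subprocess.run(
        ["kubectl", "apply", "--kustomize", path, "--context", context],
        check=True,
    )

def apply_stdin(yaml_text, namespace, context):
    # Pipe generated yaml through stdin, as the dropped commit did.
    subprocess.run(
        ["kubectl", "apply", f"--namespace={namespace}", "--filename=-",
         "--context", context],
        input=yaml_text.encode(),
        check=True,
    )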
Force-pushed from 5e47da2 to 731b972
Looks good, need to squash the commits (second commit seems like a fix for the first).
The commit adds new storage classes and pools for CephFS, which allows the ramen code base to exercise the filtering logic when there are multiple storage classes in the environment. This also prepares the environment for scenarios where more than two workloads use different storage classes. The older Ceph-FS names have been renamed, and the corresponding changes have been made to the other test files. Signed-off-by: rakeshgm <rakeshgm@redhat.com>
Force-pushed from 731b972 to 4ab422d
Looks good but I would like another ack.
LGTM
In the referenced PRs [1] and [2], two StorageClasses were created for RBD and two for CephFS. Both RBD and CephFS StorageClasses had duplicate storageIDs. This commit resolves the duplicate storageID issue. References: [1] RamenDR#1756 [2] RamenDR#1770 Signed-off-by: rakeshgm <rakeshgm@redhat.com>
This commit adds new storage classes and pools for CephFS, which
allows the ramen code base to exercise the filtering logic when there
are multiple storage classes in the environment. This also prepares the
environment for scenarios where more than two workloads use different
storage classes.
The older Ceph-FS names have been renamed, and the corresponding
changes have been made to the other test files, such as drenv-self-tests
and e2e.
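To illustrate the two concerns raised across these commits, here is a hypothetical sketch of what filtering storage classes and detecting duplicate storageIDs might look like; the label key and data layout are assumptions, not ramen's actual implementation.

# Assumed label key; ramen's real key may differ.
STORAGE_ID_LABEL = "ramendr.openshift.io/storageid"

def filter_by_provisioner(storage_classes, provisioner):
    # Keep only the storage classes handled by the given CSI driver,
    # e.g. "rook-ceph.cephfs.csi.ceph.com" for CephFS.
    return [sc for sc in storage_classes if sc["provisioner"] == provisioner]

def duplicate_storage_ids(storage_classes):
    # Map each storageID to the storage classes carrying it, and
    # report the IDs claimed by more than one class.
    by_id = {}
    for sc in storage_classes:
        sid = sc["metadata"]["labels"].get(STORAGE_ID_LABEL)
        if sid is not None:
            by_id.setdefault(sid, []).append(sc["metadata"]["name"])
    return {sid: names for sid, names in by_id.items() if len(names) > 1}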