
Use upstream conformance image when >= v1.13 #617

Merged
merged 1 commit into from
Mar 5, 2019
Conversation

johnSchnake
Contributor

Transition to using the upstream conformance image. They started
publishing it after v1.13.0 so for versions before that we still
need to support the heptio/kube-conformance image.

This PR adds the conditional logic for choosing the right image
as well as extends the version-getting logic so that for versions
at/after v1.13 we provide all 3 segments of the semver since
we now track exactly with each release.

Fixes #593

Signed-off-by: John Schnake &lt;jschnake@vmware.com&gt;

Release note:

This changes Sonobuoy to use the upstream Kubernetes conformance image if: the version requested is >= v1.13, latest, or auto and the server reports a version >= v1.13.

// Not sure that this would be hit but default to adding the last
// segment as 0 per convention (upstream + semver).
if len(segments) < 3 {
return fmt.Sprintf("v%d.%d.%d", segments[0], segments[1], 0), nil
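The excerpt above shows only the fallback branch. A self-contained sketch of the padding behavior under discussion (`padVersion` is a hypothetical name for illustration, not the function in the PR; it assumes purely numeric segments):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// padVersion normalizes a gitVersion such as "v1.11" or "v1.13.2" to the
// full three-segment form. If the patch segment is missing, it defaults
// to 0 per semver convention, as the PR comment describes.
func padVersion(gitVersion string) (string, error) {
	trimmed := strings.TrimPrefix(gitVersion, "v")
	parts := strings.Split(trimmed, ".")
	segments := make([]int, 0, 3)
	for _, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return "", fmt.Errorf("invalid version segment %q in %q", p, gitVersion)
		}
		segments = append(segments, n)
	}
	if len(segments) < 2 {
		return "", fmt.Errorf("version %q has too few segments", gitVersion)
	}
	// Default the missing patch segment to 0.
	if len(segments) < 3 {
		return fmt.Sprintf("v%d.%d.%d", segments[0], segments[1], 0), nil
	}
	return fmt.Sprintf("v%d.%d.%d", segments[0], segments[1], segments[2]), nil
}

func main() {
	v, _ := padVersion("v1.11")
	fmt.Println(v) // v1.11.0
	v, _ = padVersion("v1.13.2")
	fmt.Println(v) // v1.13.2
}
```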
Contributor Author

@dims this should be fine, don't you think? Does the server ever report a version with only 2 segments (e.g. 1.11 instead of 1.11.0)? I didn't see any conformance images tagged that way, so I thought it safer to add in the extra 0, just in case. The existing logic seemed to assume 2 segments could happen.


we will need to ask folks like @hoegaarden @calebamiles @spiffxp as i am not quite sure.

Contributor

The server version reports major.minor; the gitVersion reports the whole thing (e.g. v1.13.0).

Contributor Author

Yes, you're right, I wasn't writing very accurately; the question is really about the gitVersion specifically, since that is where these segments come from. I wanted to make sure it would always have 3 segments or, if it only ever reports 2, whether it would be OK to add the 0.

Contributor

But the real question is do you care about the "dot" release or only major.minor? I think you'd always want to run the latest release which matches your major.minor.

Contributor Author

So Heptio wanted to track releases more precisely than major/minor, but because it was a manual process they settled on just major.minor, pretty much. It wasn't from any desire or belief that major/minor was all that mattered.

It seems most intuitive that you'd want to run the tests that match your server version; it seems like, at the time of release, Kubernetes is stating that those are the requirements for your cluster to be conformant.

It seems like a slightly more awkward requirement to make someone use the test image correlating to the most recent patch release of their major.minor if that image wasn't even out when they got their cluster.

That would also make the Sonobuoy logic much more difficult. We'd have to reach out to the server, analyze/filter the list of releases, and try to choose an image.

I just don't see a benefit to adding that when major.minor.patch works just fine and is intuitive if we have those available.

@@ -32,6 +32,9 @@ const (

// DefaultKubeConformanceImageURL is the URL of the docker image to run for the kube conformance tests.
DefaultKubeConformanceImageURL = "gcr.io/heptio-images/kube-conformance"
// UpstreamKubeConformanceImageURL is the URL of the docker image to run for
// the kube conformance tests which is maintained by upstream Kubernetes.
UpstreamKubeConformanceImageURL = "gcr.io/google-containers/conformance"
// DefaultKubeConformanceImageTag is the default tag of the conformance image
DefaultKubeConformanceImageTag = "latest"
Contributor

I don't think we should ever run `latest` on an image. If someone makes a change upstream and breaks the tests, users will get false positives.

Contributor Author

Yea, latest probably isn't the smartest default. Should we just pin it to the latest tag as of a given release and then update it each time we release Sonobuoy? We'll just need to add that to the release notes. (If you agree I'll make an issue for that too.)

Contributor Author

So I'm so glad you brought up the questions w.r.t. latest.

TIL: Sonobuoy gen defaults to using latest and sonobuoy run defaults to auto. I'm assuming the reason is that auto normally gets resolved at a later time in the sonobuoy run logic and so gen wanted a sane default that could work -> latest.

I'll look into it but based on that understanding I think it makes sense to make a new issue to change/track that. An ideal solution in my mind looks like changing the logic of gen so that it can have the same resolution logic as run and only if it can't reach out to the server does it default to something. In which case we could specify a default like latest or just another static tag like 1.13.0 or whatever is the most recent release when we build/release sonobuoy.

Contributor

My question is then why does sonobuoy gen and sonobuoy run use different logic? They should follow the same logic path. That could be the core issue here.

Contributor Author

Agree; until this I didn't realize they differed here. I was confused why gen was using latest when I thought the default was auto.

cmd/sonobuoy/app/gen.go Show resolved Hide resolved
// image instead of our own heptio/kube-conformance one. They started
// publishing it for v1.13.
switch {
case imageVersion == ConformanceImageVersionLatest:
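The excerpt above cuts off mid-switch. A minimal sketch of the selection logic under review (the constant names are taken from the diff; the function name and exact shape are assumed, not the merged code):

```go
package main

import "fmt"

// Constants as they appear in the diff.
const (
	DefaultKubeConformanceImageURL  = "gcr.io/heptio-images/kube-conformance"
	UpstreamKubeConformanceImageURL = "gcr.io/google-containers/conformance"
	ConformanceImageVersionLatest   = "latest"
)

// conformanceImageURL chooses the image repo for a version string already
// normalized to vMAJOR.MINOR.PATCH, or the sentinel "latest". Note the
// raw string comparison against "v1.13" is only safe for two-digit minor
// versions (v1.10..v1.99); e.g. "v1.9.0" would compare as >= "v1.13".
func conformanceImageURL(imageVersion string) string {
	switch {
	case imageVersion == ConformanceImageVersionLatest:
		// "latest" must be handled before the string comparison:
		// lexically "latest" < "v1.13", which would otherwise
		// select the heptio image.
		return UpstreamKubeConformanceImageURL
	case imageVersion < "v1.13":
		return DefaultKubeConformanceImageURL
	default:
		return UpstreamKubeConformanceImageURL
	}
}

func main() {
	fmt.Println(conformanceImageURL("latest"))  // upstream image
	fmt.Println(conformanceImageURL("v1.11.0")) // heptio image
	fmt.Println(conformanceImageURL("v1.13.2")) // upstream image
}
```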
Contributor

I think we should move to versioned image tags. But if not, we could simplify this to:

If < 1.13, then use DefaultKubeConformanceImageURL else use UpstreamKubeConformanceImageURL

Contributor Author

The only problem is if they specify they want to use latest; currently they can provide just a version override (not image+version).

Since this is just a string we have to play with here, and under a lexical comparison "latest" < "v1.13", they would get the heptio image then.

Contributor Author

Not that latest is a great idea, but it is a string which is common for users to specify for an image tag so we should handle that appropriately.

Contributor

Yeah but are you sure latest is the latest release? My guess is that latest probably points to something in 1.14 or 1.15 already which is invalid for a 1.13 cluster (even though it might work).

Contributor Author

Totally agree; again, it's not that I think we should be passing latest, but if someone overrides and says latest (which seems very possible) I'd rather give them the upstream image than the heptio one.

Contributor

If someone says they want latest that's fine, but Sonobuoy should not default to that. For example, if I run sonobuoy gen, this is what the plugin gives me; the conformance image is gcr.io/google-containers/conformance:latest, which is what version?

apiVersion: v1
data:
  e2e.yaml: |
    sonobuoy-config:
      driver: Job
      plugin-name: e2e
      result-type: e2e
    spec:
      env:
      - name: E2E_FOCUS
        value: '\[Conformance\]'
      - name: E2E_SKIP
        value: 'Alpha|\[(Disruptive|Feature:[^\]]+|Flaky)\]'
      - name: E2E_PARALLEL
        value: '1'
      command: ["/run_e2e.sh"]
      image: gcr.io/google-containers/conformance:latest
      imagePullPolicy: Always
      name: e2e
      volumeMounts:
      - mountPath: /tmp/results
        name: results
        readOnly: false
      tolerations:
        - operator: "Exists"

Contributor Author

100%, I guess we're having 2 different conversations. I thought this note was specifically about how to handle the switch/if statement.

There is another note above about the use of latest as a default value which I support shifting.

I think we're in agreement, right?

@stevesloka stevesloka requested a review from akutz March 4, 2019 16:37
@akutz

akutz commented Mar 4, 2019

Hi @stevesloka,

Thank you for inviting me to look at this PR!

So, in all honesty, I'm ignorant of the history of the two images, so perhaps my immediate concern isn't a concern at all? Born out of my lack of context, I'm curious as to the topic of compatibility.

sk8 uses Sonobuoy for e2e conformance testing via a custom manifest (sonobuoy generate) that contains extra-volumes in order to bind mount certain assets into the container. The kube-conformance container then relies on the host to provide binaries such as kubectl and e2e.test. For example:

    extra-volumes:
    - name: kubectl
      hostPath:
        path: /opt/bin/kubectl
        type: File
    - name: ginkgo
      hostPath:
        path: /var/lib/kubernetes/platforms/linux/amd64/ginkgo
        type: File
    - name: e2e-test
      hostPath:
        path: /var/lib/kubernetes/platforms/linux/amd64/e2e.test
        type: File
    - name: cluster
      hostPath:
        path: /var/lib/kubernetes/cluster
        type: Directory
      volumeMounts:
      - mountPath: /tmp/results
        name: results
        readOnly: false
      - name: kubectl
        mountPath: /usr/local/bin/kubectl
        readOnly: true
      - name: e2e-test
        mountPath: /usr/local/bin/e2e.test
        readOnly: true
      - name: ginkgo
        mountPath: /usr/local/bin/ginkgo
        readOnly: true
      - name: cluster
        mountPath: /kubernetes/cluster
        readOnly: false

Will this continue to work, or will there be a breaking change with respect to expected paths and such?

@johnSchnake
Contributor Author

@akutz ,

I may not completely follow the concern for compatibility; it seems like in both cases (before and after this) you are using Sonobuoy as the CLI/driver but not using the default kube-conformance image, right? So right now you aren't using the heptio/kube-conformance image, and afterward you won't use the upstream one either, right?

If that's the case then you're correct in your intuition that this won't cause any problems. There is a special line of logic elsewhere in the code that says if the user specified an image to use, don't bother with any of this and just do what they say :) So I think you're still good.

Just in case you are curious you can compare the two:

https://github.com/kubernetes/kubernetes/tree/master/cluster/images/conformance
https://github.com/heptio/kube-conformance

Heptio was maintaining this and pushed to get it upstream. There weren't major changes beyond getting it to run/build in their flow.

Let me know if I didn't address your question accurately or fully.

@akutz

akutz commented Mar 4, 2019

Hi @johnSchnake,

I'm sorry if I gave the impression I'm not using the Heptio image. I'm using gcr.io/heptio-images/kube-conformance:latest. Is this not the image we're discussing? Now, currently I'm using akutz/kube-conformance which is simply the aforementioned image with the patch from heptio/kube-conformance#33 pulled into it, but still, it's Heptio's kube-conformance image.

@johnSchnake
Contributor Author

johnSchnake commented Mar 4, 2019

@akutz ,

Ok, I guess I was just a little confused [edit: writing this clarified it for myself a tad :) ]. I think the answer is still yes: things will work if you specify the image you want explicitly. Otherwise Sonobuoy, by default, may switch you over to the newer image.

The reason I thought that you were maybe using a different image was that you were talking about having the host provide the kubectl, e2e.test, and ginkgo.

I was thinking that you were rewriting the image to use those, but I see now in your YAML that you were specifying the mounts so your files line up with where the image expects them anyway. So yours are overwriting what's already there? (If that's not right let me know; I haven't done that much.)

In that case, looking at the dockerfile it does look like you're still good to go. The files are all in the same spots as far as I see in a glance:

https://github.com/kubernetes/kubernetes/blob/master/cluster/images/conformance/Dockerfile#L17-L22
https://github.com/heptio/kube-conformance/blob/master/Dockerfile#L23-L28

Thanks for the clarification; let me know if I'm still missing the mark.

@stevesloka
Contributor

@akutz I think you could rely on each host to have the binaries, but I'm not sure that everyone deploys them everywhere.

@akutz

akutz commented Mar 4, 2019

Hi @johnSchnake / @stevesloka,

The reason I thought that you were maybe using a different image was that you were talking about having the host provide the kubectl, e2e.test, and ginkgo.

I guess my original explanation wasn't clear. I call out that I'm doing these things using the standard image: sk8 uses Sonobuoy for e2e conformance testing via a custom manifest (sonobuoy generate). So using the custom manifest I simply execute kubectl apply -f sonobuoy.yaml as opposed to sonobuoy run. This way I can modify the pod definition that is generated by sonobuoy generate -- the same yaml that sonobuoy run uses.

I think you could rely on each host to have the binaries, but I'm not sure that everyone deploys them everywhere.

Yep, but sk8-deployed clusters do, hence why I can rely on them to be there for what I'm doing.

@stevesloka
Contributor

@akutz yup you are right! The problem that is really confusing is that Sonobuoy is plugin-based, and the conformance plugins are 1st class citizens. In its generic sense, Sonobuoy is only a plugin executor, but there's lots of added logic to make running conformance easier.

We sort of need a sonobuoy plugin repo which allows the plugins to manage their own manifests (e.g. sonobuoy e2e gen where the execution logic of e2e gen is handled by the e2e repo, similar to how kubectl plugins work).

@akutz

akutz commented Mar 4, 2019

Hi @johnSchnake,

In that case, looking at the dockerfile it does look like you're still good to go.

Thank you for the links. I checked as well, and you're right, it looks like things should work. I can specify a custom image easily, so I may try this out at some point today.

@akutz

akutz commented Mar 4, 2019

Hi @stevesloka,

I realize I'm not executing sonobuoy run, so my concern isn't impacted directly by this PR. My above concerns were related to the move to the standard image in general, as this is simply the first step to make that move a permanent one for 1.13+. Because of that, I wanted to better understand the upstream image to make sure that the custom manifest I mentioned above continues to work. It looks like it should.

Thank you both for your patience in helping me understand things. I appreciate it!

switch {
case imageVersion == ConformanceImageVersionLatest:
return config.UpstreamKubeConformanceImageURL
case imageVersion < "v1.13":

Can imageVersion ever have an alpha, beta, or some other qualifier? If so, I believe this will fail the intended logic (link):

package main

import (
	"fmt"
)

func main() {
	fmt.Println("1.13.alpha.0" < "1.13")
}

The above program prints false because when comparing lexicographically, 1.13.alpha.0 is greater than 1.13.

Contributor Author

That shouldn't be a problem; the alpha and such always come after the major.minor.patch. Upstream is good, as far as I can tell, about actually following the semver spec: https://semver.org/#spec-item-9

If that were to ever happen I'd say it's a bug in the release flow.
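Since prerelease qualifiers come after major.minor.patch, a parse-then-compare approach sidesteps the lexical pitfalls raised above entirely. A sketch under that assumption (`atLeast113` and the regex are illustrative helpers, not Sonobuoy's actual code):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// gitVersionRE captures the numeric major.minor (and optional patch) from a
// gitVersion, ignoring any prerelease suffix such as "-alpha.0".
var gitVersionRE = regexp.MustCompile(`^v?(\d+)\.(\d+)(?:\.(\d+))?`)

// atLeast113 reports whether a gitVersion is >= v1.13 by comparing parsed
// integers rather than raw strings, so "v1.9.3" compares correctly (it is
// lexically greater than "v1.13" but numerically older). Prereleases of
// v1.13.0 count as v1.13 here, matching the segment-based approach.
func atLeast113(gitVersion string) (bool, error) {
	m := gitVersionRE.FindStringSubmatch(gitVersion)
	if m == nil {
		return false, fmt.Errorf("unparseable version %q", gitVersion)
	}
	major, _ := strconv.Atoi(m[1])
	minor, _ := strconv.Atoi(m[2])
	return major > 1 || (major == 1 && minor >= 13), nil
}

func main() {
	for _, v := range []string{"v1.9.3", "v1.13.0", "v1.13.0-alpha.0"} {
		ok, _ := atLeast113(v)
		fmt.Println(v, ok)
	}
}
```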

@johnSchnake
Contributor Author

Rebased to get updated to tip; it doesn't look like there are any other concerns/issues, so I'm looking for an approval. Let me know if there are other issues though.

Contributor

@stevesloka stevesloka left a comment

I was waiting on the changes from #621 so these need to go together. I don't like the idea that we push latest conformance image. Ideally, we'd fix those comments in the same PR.

@stevesloka stevesloka merged commit 3874014 into vmware-tanzu:master Mar 5, 2019
@johnSchnake johnSchnake deleted the useUpstreamConformanceImage branch March 5, 2019 18:35