Proposal: refactor conformance tests #548
Comments
I've volunteered to work on the registry server tests when time permits (additional help is always appreciated). Someone else would need to write the tests for clients. LF/OCI may help with a resource to work on the website, but a volunteer to help with that would also be useful.
Some notes about Ginkgo, the testing framework we use in the conformance tests. It is quite powerful, and we may be using only a subset of its features.
https://onsi.github.io/ginkgo/#filtering-specs
https://onsi.github.io/ginkgo/#spec-labels
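For context on the labels feature, here is a minimal sketch of Ginkgo v2 spec labels; the spec names are illustrative, not taken from the actual conformance suite:

```go
// A minimal sketch of Ginkgo v2 spec labels (names illustrative).
package conformance_test

import (
	. "github.com/onsi/ginkgo/v2"
)

var _ = Describe("Pull", Label("pull"), func() {
	// Labels compose: this spec carries both "pull" and "setup".
	It("responds to GET on the base endpoint", Label("setup"), func() {
		// ... issue the request and assert on the response ...
	})

	It("retrieves a manifest by tag", func() {
		// ...
	})
})
```

Specs can then be selected at run time, e.g. `ginkgo --label-filter='pull && !setup'` runs only the second spec.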
My preference is to use the Go stdlib where possible. My goal is to minimize the number of external libraries and frameworks someone needs to learn to submit a PR to the project.
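For contrast, the same kind of check needs nothing beyond the stdlib `testing` package. A minimal sketch, assuming the suite's existing OCI_ROOT_URL environment variable points at the registry under test:

```go
// A minimal sketch using only the Go stdlib.
package conformance

import (
	"net/http"
	"os"
	"testing"
)

func TestBaseEndpoint(t *testing.T) {
	base := os.Getenv("OCI_ROOT_URL")
	if base == "" {
		t.Skip("OCI_ROOT_URL not set")
	}
	resp, err := http.Get(base + "/v2/")
	if err != nil {
		t.Fatalf("GET /v2/: %v", err)
	}
	defer resp.Body.Close()
	// The spec allows 200 (open registry) or 401 (auth required) here.
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
		t.Errorf("unexpected status %d for /v2/", resp.StatusCode)
	}
}
```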
I would like to volunteer.
Oh, I also wasn't volunteering to redesign the configuration around a configuration file. My plan was to stick with environment variables, and maybe CLI parameters, to simplify the integration with GitHub Actions and similar CI tooling. Someone else would need to work on that task.

I'm worried this issue is very broad and unlikely to be fully resolved (who's writing the conformance tests for client tools?), particularly not by a single PR. Complex tasks that require a lot of work are being listed as a single bullet point with little detail on the requirements. This may be better off being closed, with individual issues or PRs opened under the conformance label (assuming they don't already exist).

As an aside, I may try rewriting it as a standard Go binary instead of with the Go test interface it uses now; a sketch of that idea follows. That would simplify building it and also allow the conformance test to be tested itself.
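A minimal sketch of the standard-binary idea, not the actual suite: checks modeled as plain functions driven by `main()`, so the tool builds with `go build` and its pieces can themselves be unit-tested. All names are illustrative:

```go
// Hypothetical sketch: conformance checks as plain functions behind main().
package main

import (
	"fmt"
	"net/http"
	"os"
)

type check struct {
	name string
	run  func(base string) error
}

func checkBaseEndpoint(base string) error {
	resp, err := http.Get(base + "/v2/")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	base := os.Getenv("OCI_ROOT_URL") // same env var the suite uses today
	checks := []check{
		{"base endpoint", checkBaseEndpoint},
		// ... further checks registered here ...
	}
	failed := 0
	for _, c := range checks {
		if err := c.run(base); err != nil {
			failed++
			fmt.Printf("FAIL %s: %v\n", c.name, err)
			continue
		}
		fmt.Printf("PASS %s\n", c.name)
	}
	if failed > 0 {
		os.Exit(1)
	}
}
```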
I have the opposite concern: that we may NIH (not-invented-here) this and end up with the same functionality.
Okay, I'll hold off on doing anything pending agreement on these concerns.
Some more notes... Consider test configuration passed as env vars vs. as a config file (which can be downloaded, shared, derived, and shared again), e.g.:

```json
{
  "pull": {
```
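A config file like the truncated snippet above could map onto today's env vars. A minimal sketch in Go, where OCI_CONFIG_FILE and the struct fields are hypothetical and the fallback env vars are the ones the suite uses today:

```go
// Hypothetical sketch of config-file-driven configuration with env fallback.
package conformance

import (
	"encoding/json"
	"os"
)

type Config struct {
	RootURL   string `json:"rootURL"`
	Namespace string `json:"namespace"`
	Pull      struct {
		Enabled bool `json:"enabled"`
	} `json:"pull"`
}

// LoadConfig prefers a config file (shareable, diffable, derivable) and
// falls back to environment variables for CI setups that want them.
func LoadConfig() (Config, error) {
	var cfg Config
	if path := os.Getenv("OCI_CONFIG_FILE"); path != "" {
		data, err := os.ReadFile(path)
		if err != nil {
			return cfg, err
		}
		return cfg, json.Unmarshal(data, &cfg)
	}
	cfg.RootURL = os.Getenv("OCI_ROOT_URL")
	cfg.Namespace = os.Getenv("OCI_NAMESPACE")
	cfg.Pull.Enabled = os.Getenv("OCI_TEST_PULL") == "1"
	return cfg, nil
}
```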
I have been looking at the tests and have found the following issues:
In general
Although I am not a fan of adding too many libraries, and initially wondered why we don't simply use the Go stdlib... I want to use the JUnit reporting on GitLab (the system used at my company), and the result coming from Ginkgo is much more readable than the one from the stdlib. So I see advantages in both approaches.
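For reference, Ginkgo v2 can emit JUnit XML either via the CLI (`ginkgo --junit-report=report.xml`) or from inside the suite. A minimal sketch of the in-suite variant:

```go
// A minimal sketch: writing a JUnit-style XML report from a Ginkgo v2
// suite so CI systems such as GitLab can render the results.
package conformance_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/reporters"
)

var _ = ReportAfterSuite("junit report", func(report Report) {
	// GenerateJUnitReport serializes the suite report as JUnit XML.
	_ = reporters.GenerateJUnitReport(report, "junit.xml")
})
```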
I have had a deeper look at the tests and have done some experiments, resulting in the PoC that you can find in this branch of my fork. It is a single commit b59cf09 only covering part of the tests, but hopefully enough to illustrate my point. It basically demonstrates:
Finer granularity

If you compare the report of the current tests (see report.main.github.html.zip for the whole HTML report) with those of my branch (see report.silvanoc.github.html.zip for the whole HTML report), you will quickly see that all the uploads of manifests with custom artifact types and subject are failing on GitHub. These screenshots illustrate it. But having finer granularity alone is not enough...

Better verbosity

Looking at the above screenshot of my branch, you see that all failing tests are reporting that a response header is missing. If you compare it with the results I get running my branch on GitLab.com, you see that the PUT request has failed. That is because GitLab.com as of now rejects manifests with an unknown artifact type (no matter if over ...).

So thanks to the added changes, GitHub and GitLab can better understand what they still need to fix, and clients can better understand what will work (occasionally being permissive, as with clients ignoring the response header missing on GitHub) and what will not. I am aware that some of the issues becoming visible with my PoC are related to the image-spec and not to the distribution-spec, but...
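To make the "finer granularity" point concrete, here is a minimal sketch, not the PoC itself, that asserts the PUT status and the `OCI-Subject` response header as separate expectations, so a report can distinguish "upload rejected" (the GitLab.com behavior above) from "upload accepted but subject not acknowledged" (the GitHub behavior above). The fixtures are illustrative stand-ins:

```go
// A minimal sketch: separate expectations for the upload result and for
// the subject acknowledgement.
package conformance_test

import (
	"net/http"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Illustrative fixtures, assumed to be populated by suite setup.
var (
	putManifestReq *http.Request // PUT of a manifest carrying a subject
	subjectDigest  string        // digest of the subject manifest
)

var _ = Describe("Manifests with subject", func() {
	It("accepts the upload and acknowledges the subject", func() {
		resp, err := http.DefaultClient.Do(putManifestReq)
		Expect(err).ToNot(HaveOccurred())
		defer resp.Body.Close()
		// First expectation: the upload itself succeeded.
		Expect(resp.StatusCode).To(Equal(http.StatusCreated))
		// Second, separate expectation: the registry acknowledged the
		// subject via the OCI-Subject response header.
		Expect(resp.Header.Get("OCI-Subject")).To(Equal(subjectDigest))
	})
})
```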
I would much rather see a holistic approach than adding more patches onto the existing conformance tests. Adding tests for referrers already stretched the existing model beyond the initial design. Trying to add something like sha512 tests, where everything would need to be duplicated to ensure blobs, manifests, tags, and referrers all work with the alternative digest, would be difficult and error-prone to maintain.
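One way a redesign could avoid that duplication is to parametrize the shared flows over the digest algorithm. A minimal sketch using Ginkgo's table-driven specs, with the helper and entries being illustrative:

```go
// A minimal sketch: one shared spec body, run once per digest algorithm,
// instead of duplicating blob/manifest/tag/referrers tests for sha512.
package conformance_test

import (
	"crypto/sha256"
	"crypto/sha512"
	"encoding/hex"
	"hash"

	. "github.com/onsi/ginkgo/v2"
)

// ociDigest renders an OCI digest string ("<alg>:<hex>") for a blob.
func ociDigest(alg string, blob []byte) string {
	var h hash.Hash
	switch alg {
	case "sha256":
		h = sha256.New()
	case "sha512":
		h = sha512.New()
	}
	h.Write(blob)
	return alg + ":" + hex.EncodeToString(h.Sum(nil))
}

var _ = DescribeTable("blob round-trip per digest algorithm",
	func(alg string) {
		blob := []byte("conformance test blob")
		d := ociDigest(alg, blob)
		_ = d // used by the elided steps below
		// ... push the blob under d, pull it back, verify the content;
		// manifests, tags, and referrers would be parametrized the same way ...
	},
	Entry("sha256", "sha256"),
	Entry("sha512", "sha512"),
)
```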
I can understand your reasons, and a holistic approach is probably needed... But I have my reasons for sharing here my proposals for specific changes on top of the existing tests:
The feedback about the conformance tests is that they have been useful, but the UX in general is terrible.
Here are some UX requirements from the last OCI meeting (08/01/2024):
[ ] Root-cause analysis of test failures should be easier/clearer
[ ] Conformance tests should be configuration (config-file) driven so that
[ ] Clients' expectations (of registries) from conformance tests
[ ] Registries' expectations (of clients) from conformance tests
[ ] A website revamp for the results/config matrix