golang:1.12-alpine does not have layers for linux/arm64/v8 #269
Looks like an instance of docker-library/official-images#3835. The manifest is updated now:

```console
$ docker pull arm64v8/golang:1.12-alpine
1.12-alpine: Pulling from arm64v8/golang
3b00a3925ee4: Already exists
7809c1a4c8e2: Pull complete
8c00b1d46f44: Pull complete
955cc90a48f7: Pull complete
72f16051d572: Pull complete
Digest: sha256:05f1d1242721f0042550fcc84f35cbe87f39ef5e6f75852d0608b92f4a2d1878
Status: Downloaded newer image for arm64v8/golang:1.12-alpine
```
The interesting thing is that I could pull 1.12 and 1.11 with the arm64v8/ prefix without a problem, but I can't do my builds using buildkit because it relies entirely on the published manifests. Here is buildkit's error:

```
error: failed to solve: rpc error: code = Unknown desc = failed to copy: httpReaderSeeker: failed open: could not fetch content descriptor sha256:7cf1f7ccf392bd834eb91f02892f48992d3c2ba292c2198315a4637bb9454c30 (application/vnd.docker.distribution.manifest.v1+json) from remote: not found
```
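To see why buildkit trips over this, one can look at the published manifest list directly. This is a minimal sketch (not from the original report), assuming curl and jq are available and using an anonymous pull token; any entry whose mediaType is the schema 1 type is a descriptor buildkit cannot fetch as-is:

```sh
# Get an anonymous pull token for library/golang from Docker Hub's auth service.
token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/golang:pull" | jq -r .token)

# Fetch the manifest list for the 1.12-alpine tag and print each entry's
# platform and mediaType; schema 1 entries show up as
# application/vnd.docker.distribution.manifest.v1+json.
curl -s \
  -H "Authorization: Bearer ${token}" \
  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
  "https://registry-1.docker.io/v2/library/golang/manifests/1.12-alpine" \
  | jq '.manifests[] | {platform: (.platform.os + "/" + .platform.architecture), mediaType: .mediaType}'
```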
After much digging, it turns out the manifest is technically fine. The item in question is a v1 manifest:
And works fine for

cc @tianon, perhaps we need to somehow disable v1 pushing on our build jobs (this won't fix this now, but it will prevent it in the future). Edit: this would probably require custom patches to Docker itself 😞
This is partly a bug in

I'm really not very keen on patching Docker for our builders; that's kind of heinous. 😞 😱
(Especially given that this was a blip in the Hub that caused these to be pushed in the first place, and
Could someone fix the current version of that

cc @dmcgowan for possible docker push-side validation for this.
Is it still? There was a bump since then that should've pushed a new image that isn't schema1.
@tianon Yes,
I'm kind of surprised the Hub still allowed a schema1 push 😅
fwiw it seems likely to me that this configuration (a manifest list pointing to a v1 manifest) will never be supported by containerd. Tracked in containerd/containerd#3100
@cowsrule any ideas? ^^
Also surprised; we should be blocking all v1 pushes. Will follow up internally. We also published this today: https://engineering.docker.com/2019/03/registry-v1-api-deprecation/
@cowsrule The issue here is a v2/schema1 image, not an actual v1 image.
Closing with estesp/manifest-tool#75
I have seen this failure for several other images, so I don't think it's quite fixed? @thaJeztah @tianon For example,
This looks like a down-converted schema 1 image that was built 2020-06-02:
I was thinking about just scanning the official images to look for all manifest lists that reference schema 1 images, but I am pretty sure I would hit rate limits almost immediately 😄 -- I'm sure this would be an easier query for a Docker Hub maintainer than something to discover via the registry API, if someone is interested in enumerating and fixing these (can we backfill?). I'm kind of surprised Docker Hub accepts this, but I understand this is a valid manifest list per the spec. I'm more surprised Docker Hub is still accepting schema 1 image uploads. This is tangentially related to opencontainers/distribution-spec#212 (comment) @justincormack
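For reference, a rough sketch of what such a scan could look like against a single repository, assuming curl and jq; the repository name and page size are placeholders, only the first page of tags is covered, and a real run across all official images would indeed hit Hub rate limits quickly:

```sh
#!/bin/sh
repo="library/nginx"   # placeholder repository

# Anonymous pull token for the repository.
token=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull" | jq -r .token)

# First page of tags from the Hub API (a real scan would paginate).
tags=$(curl -s "https://hub.docker.com/v2/repositories/${repo}/tags/?page_size=25" | jq -r '.results[].name')

for tag in $tags; do
  # Pull the manifest list and collect any entries whose mediaType is schema 1.
  bad=$(curl -s \
          -H "Authorization: Bearer ${token}" \
          -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
          "https://registry-1.docker.io/v2/${repo}/manifests/${tag}" \
        | jq -r '.manifests[]? | select(.mediaType == "application/vnd.docker.distribution.manifest.v1+json") | .digest')
  [ -n "$bad" ] && echo "${repo}:${tag} references schema 1 manifest(s): ${bad}"
done
```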
I believe the official NGINX image is now maintained by NGINX, Inc. (@tianon probably knows). Not at my computer right now, but I'm wondering if they use some different build system to build the images that produces these 🤔
No official images maintainer builds/pushes their own images -- all official images are built with very stock |
Here's another non-nginx image:
Edit: I see this was from the timeframe when this was fixed. What's the policy on updating old images? It would be great if docker or manifest-tool had a flag to disable pulling/pushing of schema 1 images (just check
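No such flag exists as far as I know, but here is a minimal sketch of what an external pre-push guard could look like, assuming a Docker CLI with docker manifest inspect plus jq (the image name is only an example):

```sh
#!/bin/sh
image="golang:1.12-alpine"   # example image; substitute the tag being verified

# Inspect the published manifest and fail if it is, or references, a schema 1 manifest.
docker manifest inspect "${image}" \
  | jq -e '
      .schemaVersion == 2
      and ([.manifests[]?.mediaType]
           | all(. != "application/vnd.docker.distribution.manifest.v1+json"
                 and . != "application/vnd.docker.distribution.manifest.v1+prettyjws"))
    ' > /dev/null \
  || { echo "refusing: ${image} is or references a schema 1 manifest" >&2; exit 1; }
```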
I don't think old images are updated; they're mostly kept as an archive of older versions (and I'm not sure it's worth the effort to update them). I think the overall problem being discussed in opencontainers/distribution-spec#212 (comment) is the automatic conversion of content based on

(thinking out loud) The only "problem" with that could be that an image that could previously be pulled as a v2/schema1 image (through automatic conversion) would no longer be available in that format, and would only be available as a v2/schema2 image; however, any current client should be able to use v2/schema2 images. Of course, a pre-announcement would be needed, and some time window for users to make sure they're running "current" versions ("current" in quotes, meaning: upgraded in the last 4-5 years).

edit: 4-5 years; "off by one"
So I guess more concretely, should Hub be rejecting pushes of v2/schema1 images now?
Absolutely. It would be great to get the ball rolling on this soon-ish if we all agree it needs to happen eventually.
Rejecting schema 1 pushes is a good start. We see a very small number of schema 1 pushes. I expect they might be unintentional due to bugs like this. Docker put out a deprecation a while ago, and I'm curious if there's a process in place already or a plan or what the next steps are.
So I'm guessing newer clients will stop being able to push and pull schema 1 images, but will Docker Hub ever stop doing the down-conversion? I'm trying to decide if it's worth the effort to fix a client that doesn't support manifest lists that reference schema 1 images, or if I should just wait because the problem will eventually go away.
@jonjohnsonjr to help get the ball rolling, could you open a ticket in our roadmap? https://github.com/docker/roadmap/issues I think there's a review session tomorrow for the roadmap, and I can bring it up.
Sure! Done: docker/roadmap#173
Thank you!
The result of running
Please note that the number of layers is zero. The same for golang:1.11-alpine.