
Fallback to default registry endpoint is broken when using "*" wildcard mirror in registries.yaml with containerd 2.0 #11857

Open
lirtistan opened this issue Feb 28, 2025 · 20 comments
Labels
kind/upstream-issue This issue appears to be caused by an upstream bug

Comments

@lirtistan

Environmental Info:
K3s Version:

k3s version v1.32.2+k3s1 (381620ef)
go version go1.23.6

Node(s) CPU architecture, OS, and Version:

2-node test cluster; uname -r reports 6.1.0-30-amd64

Both installed with a minimal Debian 12 OS (ansible deployment)

root@staging1:~# kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
staging1   Ready    control-plane,master   61m   v1.32.2+k3s1   172.16.0.1    172.16.0.1    Debian GNU/Linux 12 (bookworm)   6.1.0-30-amd64   containerd://2.0.2-k3s2
staging2   Ready    control-plane,master   60m   v1.32.2+k3s1   172.16.0.2    172.16.0.2    Debian GNU/Linux 12 (bookworm)   6.1.0-30-amd64   containerd://2.0.2-k3s2

Cluster Configuration:

  • 2-node cluster (both nodes as control-plane,master) running as VMs on stock KVM (libvirt)
  • every node has, besides lo, two interfaces: eth0 for WAN traffic and eth1 for LAN traffic
  • Host: staging1 | LAN-IP: 172.16.0.1 | WAN-IP: 192.168.122.246
  • Host: staging2 | LAN-IP: 172.16.0.2 | WAN-IP: 192.168.122.90
  • The following Helm charts are installed...
    root@staging2:~# helm list -A
    NAME            NAMESPACE         REVISION  UPDATED                                  STATUS    CHART                 APP VERSION
    cilium          cilium-system     1         2025-02-28 07:28:32.804174608 +0100 CET  deployed  cilium-1.17.1         1.17.1
    ingress-nginx   ingress-nginx     1         2025-02-28 07:31:31.713591888 +0100 CET  failed    ingress-nginx-4.12.0  1.12.0
    longhorn        longhorn-system   1         2025-02-28 07:30:47.23270258 +0100 CET   deployed  longhorn-1.8.0        v1.8.0
    

Describe the bug:
New workload deployments in K3s v1.32.2+k3s1 are failing/hanging in ContainerCreating status; something must have changed in how the registries.yaml config is handled.

root@staging1:~# kubectl get pods -A -o wide
NAMESPACE         NAME                                       READY   STATUS              RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
cilium-system     cilium-envoy-krklb                         0/1     ContainerCreating   0          4m29s   172.16.0.2   staging2   <none>           <none>
cilium-system     cilium-envoy-vmckv                         0/1     ContainerCreating   0          5m48s   172.16.0.1   staging1   <none>           <none>
cilium-system     cilium-operator-85bf6f5694-sc5x8           0/1     ContainerCreating   0          5m48s   172.16.0.2   staging2   <none>           <none>
cilium-system     cilium-operator-85bf6f5694-x6k7v           0/1     ContainerCreating   0          5m48s   172.16.0.1   staging1   <none>           <none>
cilium-system     cilium-p9fpg                               0/1     Init:0/6            0          4m29s   172.16.0.2   staging2   <none>           <none>
cilium-system     cilium-pnpn6                               0/1     Init:0/6            0          5m48s   172.16.0.1   staging1   <none>           <none>
cilium-system     hubble-relay-75d4f954d-gnlsg               0/1     ContainerCreating   0          5m48s   <none>       staging1   <none>           <none>
cilium-system     hubble-relay-75d4f954d-slc5r               0/1     ContainerCreating   0          5m48s   <none>       staging1   <none>           <none>
ingress-nginx     ingress-nginx-admission-create-n2852       0/1     ContainerCreating   0          2m51s   <none>       staging2   <none>           <none>
kube-system       coredns-ff8999cc5-mwjg6                    0/1     ContainerCreating   0          6m11s   <none>       staging1   <none>           <none>
kube-system       coredns-ff8999cc5-nzpfl                    0/1     ContainerCreating   0          6m11s   <none>       staging1   <none>           <none>
kube-system       local-path-provisioner-774c6665dc-bzlcr    0/1     ContainerCreating   0          6m11s   <none>       staging1   <none>           <none>
kube-system       metrics-server-6f4c6675d5-zjdpk            0/1     ContainerCreating   0          6m11s   <none>       staging1   <none>           <none>
longhorn-system   longhorn-driver-deployer-b8bc4675f-wfhw2   0/1     Init:0/1            0          3m34s   <none>       staging2   <none>           <none>
longhorn-system   longhorn-manager-qcg9t                     0/2     ContainerCreating   0          3m34s   <none>       staging1   <none>           <none>
longhorn-system   longhorn-manager-zdjmp                     0/2     ContainerCreating   0          3m34s   <none>       staging2   <none>           <none>
longhorn-system   longhorn-ui-7749bb466f-52gcb               0/1     ContainerCreating   0          3m34s   <none>       staging1   <none>           <none>
longhorn-system   longhorn-ui-7749bb466f-gjk9k               0/1     ContainerCreating   0          3m34s   <none>       staging2   <none>           <none>

Output from a cilium-agent Pod describe:

 Warning  FailedCreatePodSandBox  7s (x9 over 110s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = NotFound desc = failed to start sandbox "eb7a7309006ec34550497ac44a5aea5b18e4348f076babd763cb1c1a19fe5d6d": failed to get sandbox image "rancher/mirrored-pause:3.6": failed to pull image "rancher/mirrored-pause:3.6": failed to pull and unpack image "docker.io/rancher/mirrored-pause:3.6": failed to resolve reference "docker.io/rancher/mirrored-pause:3.6": docker.io/rancher/mirrored-pause:3.6: not found

So I moved /etc/rancher/k3s/registries.yaml to another location, restarted the k3s.service, and voilà, everything got pulled.

Content of the registries.yaml:

mirrors:
  "*":
    endpoint:
    - "http://localhost:5000"
configs:
  "docker.io":
  "quay.io":
  "*":
    tls:
      insecure_skip_verify: true

Steps To Reproduce:

See the bug description above.

Expected behavior:

registries.yaml hasn't changed between my earlier deployments, nor does the documentation mention any change.
So everything should be working.

Actual behavior:

Images can't be pulled; the root cause is unknown. Currently I don't have much time to dive into the code.

Additional context / logs:

See the bug description above.

@lirtistan
Author

After changing the registries.yaml from...

mirrors:
  "*":
    endpoint:
    - "http://localhost:5000"
configs:
  "docker.io":
  "quay.io":
  "*":
    tls:
      insecure_skip_verify: true

to...

#mirrors:
#  "*":
#    endpoint:
#    - "http://localhost:5000"
configs:
  "docker.io":
  "quay.io":
  "*":
    tls:
      insecure_skip_verify: true

all newer Deployments are working now.

@brandond
Member

brandond commented Feb 28, 2025

failed to resolve reference "docker.io/rancher/mirrored-pause:3.6": docker.io/rancher/mirrored-pause:3.6: not found

Did you disable fallback to the default endpoint with disable-default-endpoint? It should be falling back to docker hub if your mirror fails.

I would probably increase the containerd log level by setting CONTAINERD_LOG_LEVEL=debug or CONTAINERD_LOG_LEVEL=trace in your k3s service environment. Restore your old mirror config, restart k3s, and check the containerd log file to see what all is actually going on when it fails to pull.
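As an illustration, a minimal sketch of doing that, assuming a systemd-based install whose unit reads an environment file such as /etc/systemd/system/k3s.service.env or /etc/default/k3s (the exact path depends on how k3s was installed):

# enable debug logging for the embedded containerd (adjust the env file path to your install)
echo 'CONTAINERD_LOG_LEVEL=debug' >> /etc/systemd/system/k3s.service.env
systemctl restart k3s

# on a default install the embedded containerd logs to this file
tail -f /var/lib/rancher/k3s/agent/containerd/containerd.log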

@lirtistan
Author

lirtistan commented Feb 28, 2025

Hi Brandon,

No, I didn't disable the fallback. Here is my /etc/rancher/k3s/config.yaml...

root@staging1:~# cat /etc/rancher/k3s/config.yaml 
flannel-backend: none
disable-kube-proxy: true
disable-network-policy: true
disable-helm-controller: true
disable:
- servicelb
- traefik
tls-san:
- localhost
- 127.0.0.1
- cluster
- 10.10.1.254
- staging1
- 172.16.0.1
- 192.168.122.246
- staging2
- 172.16.0.2
- 192.168.122.90
bind-address: "172.16.0.1"
node-ip: "172.16.0.1"
node-external-ip: "172.16.0.1"
#kubelet-arg:
#- "node-ip=172.16.0.1"
#cluster-cidr: "10.42.0.0/16" # <- managed by cilium ipam
cluster-dns : "10.43.0.10"
service-cidr: "10.43.0.0/16"
egress-selector-mode: cluster
debug: false
#datastore-endpoint: "mysql://k3s:aeh0Eu$p3O@tcp(cluster:3306)/k3s"
datastore-endpoint: "http://172.16.0.1:2379,https://172.16.0.2:2379"

That's why I am wondering why this worked before in version v1.32.1+k3s1 and older.

@brandond
Member

brandond commented Feb 28, 2025

If you look at the release notes, you should see that we upgraded from containerd 1.7 to 2.0 in this release. Lots of changes there.

Check the logs to see what exactly it's doing. Could be a regression in fallback to the default. Do you see the same thing if you explicitly list your mirror as a mirror for docker hub, instead of using the wildcard?

@lirtistan
Copy link
Author

So I assume registries.yaml isn't read by k3s itself then; I wasn't aware of that. But that makes sense. I started a new deployment, so I can give feedback in a couple of minutes.
Please be patient ❤

@brandond
Member

brandond commented Feb 28, 2025

It is read by k3s, but its contents are pretty much exclusively used to generate the containerd configuration file. It is containerd that actually pulls and runs images.
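To see the result of that generation, one can inspect the rendered files directly; a sketch assuming default k3s paths (the certs.d directory and the _default entry for the "*" wildcard are referenced later in this thread):

# main containerd config rendered by k3s
cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml
# per-registry host configuration generated from registries.yaml
ls /var/lib/rancher/k3s/agent/etc/containerd/certs.d/
cat /var/lib/rancher/k3s/agent/etc/containerd/certs.d/_default/hosts.toml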

@lirtistan
Author

OK, I can't find much; the containerd.log itself just shows lots of entries like this...

time="2025-02-28T09:53:17.696253365+01:00" level=debug msg="PullImage using normalized image ref: \"docker.io/rancher/mirrored-pause:3.6\""
time="2025-02-28T09:53:17.696341778+01:00" level=debug msg="PullImage \"docker.io/rancher/mirrored-pause:3.6\" with snapshotter overlayfs"
time="2025-02-28T09:53:17.699224004+01:00" level=debug msg="do request" host="localhost:5000" request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.d
istribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/v2.0.2-k3s2 request.method=HEAD u
rl="http://localhost:5000/v2/rancher/mirrored-pause/manifests/3.6?ns=docker.io"
time="2025-02-28T09:53:17.712288147+01:00" level=debug msg="fetch response received" host="localhost:5000" response.header.content-length=93 response.header.content-type="application/json; charse
t=utf-8" response.header.date="Fri, 28 Feb 2025 08:53:17 GMT" response.header.docker-distribution-api-version=registry/2.0 response.header.x-content-type-options=nosniff response.status="404 Not 
Found" url="http://localhost:5000/v2/rancher/mirrored-pause/manifests/3.6?ns=docker.io"
time="2025-02-28T09:53:17.715636010+01:00" level=info msg="stop pulling image docker.io/rancher/mirrored-pause:3.6: active requests=0, bytes read=0"
time="2025-02-28T09:53:17.718366420+01:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmvb7,Uid:1d4c20ec-378a-44de-8a6d-8dd2ed600662,Namespace:cilium-system,Attempt:0,} fa
iled, error" error="rpc error: code = NotFound desc = failed to start sandbox \"38df0ba78c0a8d2eb2abc66145cf9e5a1e663360db71e4bb0c2742a92e7b5560\": failed to get sandbox image \"rancher/mirrored-
pause:3.6\": failed to pull image \"rancher/mirrored-pause:3.6\": failed to pull and unpack image \"docker.io/rancher/mirrored-pause:3.6\": failed to resolve reference \"docker.io/rancher/mirrore
d-pause:3.6\": docker.io/rancher/mirrored-pause:3.6: not found"

... then later ...

time="2025-02-28T10:05:40.894947940+01:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-85bf6f5694-nvqmm,Uid:985d0cb3-d635-4383-b765-d9f30a1725be,Namespace:cilium-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to start sandbox \"860703f87aa49621b67e870690523b73a960e4eb3230516d78d7985a82385d16\": failed to get sandbox image \"rancher/mirrored-pause:3.6\": failed to pull image \"rancher/mirrored-pause:3.6\": failed to pull and unpack image \"docker.io/rancher/mirrored-pause:3.6\": failed to resolve reference \"docker.io/rancher/mirrored-pause:3.6\": docker.io/rancher/mirrored-pause:3.6: not found"
time="2025-02-28T10:05:41.179652534+01:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:hubble-relay-75d4f954d-br5hf,Uid:10ef6411-55d2-4f9d-8b83-9c2868b4a975,Namespace:cilium-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44242ba6700eff5b7798bc46cc25fbb2a575841823d87d4eeee5be10b4395a74\": plugin type=\"cilium-cni\" failed (add): unable to connect to Cilium agent: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http://localhost/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?"

So it seems it can't use the local registry as a mirror; the reason is unknown. The registry container itself is the latest version...

root@staging1:~# podman inspect registry | grep -i image
          "Image": "282bd1664cf1fccccf9f225118e31f9352f1f93e4d0ad485c92e74ec6b11ebd1",
          "ImageDigest": "sha256:319881be2ee9e345d5837d15842a04268de6a139e23be42654fc7664fc6eaf52",
          "ImageName": "docker.io/library/registry:2",
               "Image": "docker.io/library/registry:2",
                    "com.docker.official-images.bashbrew.arch": "amd64",
                    "org.opencontainers.image.base.digest": "sha256:8164b151b49623c7200fbc89aff76e34d981c240633fdd6cf060ed00ae8cf2b0",
                    "org.opencontainers.image.base.name": "alpine:3.18",
                    "org.opencontainers.image.created": "2023-10-02T18:42:41Z",
                    "org.opencontainers.image.revision": "39dd72feaab7066334829d6945c54bc51a0aee98",
                    "org.opencontainers.image.source": "https://github.com/distribution/distribution-library-image.git#39dd72feaab7066334829d6945c54bc51a0aee98:.",
                    "org.opencontainers.image.stopSignal": "15",
                    "org.opencontainers.image.url": "https://hub.docker.com/_/registry",
                    "org.opencontainers.image.version": "2.8.3"

@lirtistan
Author

lirtistan commented Feb 28, 2025

root@staging1:~# podman search localhost:5000/
NAME                   DESCRIPTION
localhost:5000/alpine 

This is just an Alpine test image I created earlier.

So now it looks like the fallback doesn't work anymore, correct!?

@lirtistan
Author

Here is the containerd.log in trace mode from the staging1 node:

containerd.log

@lirtistan
Author

And here is the k3s journalctl output (just today) from the staging1 node:

k3s.journalctl.log

@lirtistan
Author

I am currently reading the containerd migration docs; there is a section at https://containerd.io/releases/#deprecated-features which mentions a change, suggesting we should use CONTAINERD_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE=1.

So I added that environment variable to /etc/default/k3s, but no luck yet; still the same errors...

@ricariel

Also affects v1.31.6+k3s1 with containerd://2.0.2-k3s2

After deleting /etc/rancher/k3s/registries.yaml and restarting agents/servers, everything works fine, but without the integrated mirror.

@sholdee

sholdee commented Feb 28, 2025

Same problem here after upgrading from 1.31.5 to 1.31.6. The embedded registry mirror is busted, and any images that are not already cached get a 404 Not Found immediately when I try to pull with crictl. Pulling images with ctr works fine.

This is all that's in my registries.yaml:

mirrors:
  "*":

@brandond
Member

brandond commented Feb 28, 2025

@sholdee what do you mean by "embedded registry mirror is busted"? The conversation here so far has not involved the embedded registry mirror (spegel) at all. If you are having a similar problem with that, please provide concrete details.

@lirtistan were you able to try listing your registry as a mirror for docker hub, instead of using the wildcard? I suspect perhaps only wildcard support is broken in containerd 2.0.

I will also note that you don't need to set insecure_skip_verify as your registry is using an HTTP endpoint with no TLS. If you're not using TLS then there is no verification to skip.

Try this:

mirrors:
  docker.io:
    endpoint:
    - "http://localhost:5000"
  quay.io:
    endpoint:
    - "http://localhost:5000"

@lirtistan
Author

Sure, give me a few minutes to verify; I'm actually at dinner.

@brandond brandond changed the title Cluster deploment of version v1.32.2+k3s1 is hanging in status "ContainerCreating" (because /etc/rancher/k3s/registries.yaml is somehow malformed or wrong configured) Fallback to default registry endpoint is broken when using "*" in registries.yaml with containerd 2.0 Feb 28, 2025
@brandond brandond changed the title Fallback to default registry endpoint is broken when using "*" in registries.yaml with containerd 2.0 Fallback to default registry endpoint is broken when using "*" wildcard mirror in registries.yaml with containerd 2.0 Feb 28, 2025
@lirtistan
Author

lirtistan commented Feb 28, 2025

@brandond I can verify that the Deployments are now working with your suggestion, tyvm for investigation ❤️

@sholdee

sholdee commented Feb 28, 2025

> @sholdee what do you mean by "embedded registry mirror is busted"? The conversation here so far has not involved the embedded registry mirror (spegel) at all. If you are having a similar problem with that, please provide concrete details.
>
> @lirtistan were you able to try listing your registry as a mirror for docker hub, instead of using the wildcard? I suspect perhaps only wildcard support is broken in containerd 2.0.
>
> I will also note that you don't need to set insecure_skip_verify as your registry is using an HTTP endpoint with no TLS. If you're not using TLS then there is no verification to skip.
>
> Try this:
>
> mirrors:
>   docker.io:
>     endpoint:
>     - "http://localhost:5000"
>   quay.io:
>     endpoint:
>     - "http://localhost:5000"

After 1.31.5 > 1.31.6 upgrade, all images that are not already cached fail to pull with crictl:

ethan@k3s-master-0:~ $ sudo crictl --debug pull ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26
DEBU[0000] Asset dir /var/lib/rancher/k3s/data/410f44dd07767cc5e4d0b873d7cda6910bed0745092fc30b00e30dc83ca89b18
DEBU[0000] Running /var/lib/rancher/k3s/data/410f44dd07767cc5e4d0b873d7cda6910bed0745092fc30b00e30dc83ca89b18/bin/crictl [crictl --debug pull ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26]
DEBU[0000] get image connection
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Auth:nil,SandboxConfig:nil,}
E0227 23:03:24.749002 3562586 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26\": failed to resolve reference \"ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26\": ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26: not found" image="ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26": failed to resolve reference "ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26": ghcr.io/sholdee/caddy-proxy-cloudflare:2025.02.26: not found

Pulling images with ctr, like: ctr -n k8s.io --address /run/k3s/containerd/containerd.sock --debug image pull quay.io/prometheus/prometheus:v3.2.1 works fine, and the image is then listed by crictl images list and is distributed by the embedded registry mirror.

My containerd/registries configuration is all default except for enabling Spegel and the following registries.yaml on all my nodes:

ethan@k3s-worker-1:~ $ cat /etc/rancher/k3s/registries.yaml.bak
mirrors:
  "*":

Either removing registries.yaml and restarting k3s, or removing /var/lib/rancher/k3s/agent/etc/containerd/certs.d, resolves the issue, causing crictl to pull images normally. This config has worked fine on multiple k3s versions for months, and my registries.yaml is the same as the k3s docs example, so it seems pretty clear there is a regression somewhere.
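Spelled out, those two workarounds look roughly like this (a sketch; both disable the mirror config, so keep a backup):

# option 1: move the source file aside and restart so k3s stops generating mirror config
sudo mv /etc/rancher/k3s/registries.yaml /etc/rancher/k3s/registries.yaml.bak
sudo systemctl restart k3s-agent

# option 2: remove the generated certs.d directory; containerd consults it per pull,
# but note k3s regenerates it from registries.yaml on the next restart
sudo rm -rf /var/lib/rancher/k3s/agent/etc/containerd/certs.d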

@brandond
Member

brandond commented Feb 28, 2025

@sholdee please see the comment you responded to, and let me know if using explicit mirror entries instead of the wildcard works around the issue for you.

This does not appear to have anything to do with the embedded registry, but rather containerd 2.0 is failing to fall back to the default endpoint when using the * (or _default) config to look up mirrors.

@sholdee

sholdee commented Feb 28, 2025

> @sholdee please see the comment you responded to, and let me know if using explicit mirror entries instead of the wildcard works around the issue for you.
>
> This does not appear to have anything to do with the embedded registry, but rather containerd 2.0 is failing to fall back to the default endpoint when using the * (or _default) config to look up mirrors.

This does seem to fix the issue:

ethan@k3s-worker-1:~ $ sudo cat /etc/rancher/k3s/registries.yaml
mirrors:
  "*":
ethan@k3s-worker-1:~ $ sudo crictl pull quay.io/coreos/etcd:v3.6.0-rc.1
E0228 12:43:09.049821 1023110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"quay.io/coreos/etcd:v3.6.0-rc.1\": failed to resolve reference \"quay.io/coreos/etcd:v3.6.0-rc.1\": quay.io/coreos/etcd:v3.6.0-rc.1: not found" image="quay.io/coreos/etcd:v3.6.0-rc.1"
FATA[0000] pulling image: rpc error: code = NotFound desc = failed to pull and unpack image "quay.io/coreos/etcd:v3.6.0-rc.1": failed to resolve reference "quay.io/coreos/etcd:v3.6.0-rc.1": quay.io/coreos/etcd:v3.6.0-rc.1: not found
ethan@k3s-worker-1:~ $ sudo nano /etc/rancher/k3s/registries.yaml
ethan@k3s-worker-1:~ $ sudo cat /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
  quay.io:
  ghcr.io:
  gcr.io:
  registry.k8s.io:
  public.ecr.aws:
  oci.external-secrets.io:

ethan@k3s-worker-1:~ $ sudo systemctl restart k3s-agent
ethan@k3s-worker-1:~ $ sudo crictl pull quay.io/coreos/etcd:v3.6.0-rc.1
Image is up to date for sha256:f3788da74c9c2dce76fc84ff0ff64636641ed52412438d88cb69c475e848cd56
ethan@k3s-worker-1:~ $

@brandond
Member

brandond commented Feb 28, 2025

Thanks, that confirms what I thought was going on. I can take a look at where this is broken; I suspect we may need to open an issue/PR against https://github.com/containerd/containerd to resolve this regression if it is not already addressed in the upcoming v2.0.3 release.

@brandond brandond moved this from New to Accepted in K3s Development Feb 28, 2025
@brandond brandond self-assigned this Feb 28, 2025
@brandond brandond added this to the 2025-03 Release Cycle milestone Feb 28, 2025
@brandond brandond added the kind/upstream-issue This issue appears to be caused by an upstream bug label Feb 28, 2025
@brandond brandond pinned this issue Feb 28, 2025