
broken header error #13143

Closed

ahaj-98 opened this issue Apr 2, 2025 · 2 comments
Labels
needs-kind — Indicates a PR lacks a `kind/foo` label and requires one.
needs-priority
needs-triage — Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


ahaj-98 commented Apr 2, 2025

What happened:

The controller logs the following error continuously:

[error] 32#32: *10068 broken header: "" while reading PROXY protocol, client: 10.19.1.128, server: 0.0.0.0:443

What you expected to happen: No broken-header errors in the controller logs.

I have five ingress-nginx controllers with load balancers installed in my EKS cluster, all installed as Helm charts. However, only one ingress-nginx controller, and only in the PROD environment, produces the above error continuously; it does not appear to affect any of the services.
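To see which client IPs trigger the rejects, the controller logs can be filtered. A minimal sketch (the grep patterns assume the log format quoted above):

~ % kubectl -n ingress-nginx logs deploy/ingress-nginx-controller \
      | grep 'broken header' | grep -oE 'client: [0-9.]+' | sort | uniq -c | sort -rn

If the offending addresses are in-VPC IPs like 10.19.1.128, plausible candidates are load balancer health checks or VPC scanners speaking plain TCP/TLS without a PROXY header.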

NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):

NGINX Ingress controller
Release: v1.12.1
Build: 51c2b81
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5

Kubernetes version (use kubectl version): v1.31

Environment: Production

  • Cloud provider or hardware configuration: AWS - EKS
  • OS (e.g. from /etc/os-release): OS (Architecture): linux (amd64), OS image Bottlerocket OS 1.34.0 (aws-k8s-1.31)
  • Kernel (e.g. uname -a): Kernel version: 6.1.128
  • Install tools:
    • Please mention how/where the cluster was created (kubeadm/kops/minikube/kind, etc.)
  • Basic cluster related info:
    • kubectl version
    • kubectl get nodes -o wide

The EKS cluster was created, and is always upgraded, using Terraform (source = "terraform-aws-modules/eks/aws", version = "20.33.1")
~ % kubectl version

Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.31.6-eks-bc803b4

~ % kubectl get nodes -o wide

NAME                                            STATUS   ROLES    AGE    VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                KERNEL-VERSION   CONTAINER-RUNTIME
ip-10-11-10-44.eu-central-1.compute.internal    Ready    <none>   18h    v1.31.4-eks-0f56d01   10.11.10.44    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-13-63.eu-central-1.compute.internal    Ready    <none>   95m    v1.31.4-eks-0f56d01   10.11.13.63    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-15-230.eu-central-1.compute.internal   Ready    <none>   35m    v1.31.4-eks-0f56d01   10.11.15.230   <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-19-214.eu-central-1.compute.internal   Ready    <none>   42h    v1.31.4-eks-0f56d01   10.11.19.214   <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-19-68.eu-central-1.compute.internal    Ready    <none>   22h    v1.31.4-eks-0f56d01   10.11.19.68    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-2-232.eu-central-1.compute.internal    Ready    <none>   42h    v1.31.4-eks-0f56d01   10.11.2.232    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-20-128.eu-central-1.compute.internal   Ready    <none>   48m    v1.31.4-eks-0f56d01   10.11.20.128   <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-20-5.eu-central-1.compute.internal     Ready    <none>   16h    v1.31.4-eks-0f56d01   10.11.20.5     <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-22-197.eu-central-1.compute.internal   Ready    <none>   17h    v1.31.4-eks-0f56d01   10.11.22.197   <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-4-107.eu-central-1.compute.internal    Ready    <none>   19h    v1.31.4-eks-0f56d01   10.11.4.107    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-5-168.eu-central-1.compute.internal    Ready    <none>   20h    v1.31.4-eks-0f56d01   10.11.5.168    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-5-178.eu-central-1.compute.internal    Ready    <none>   42h    v1.31.4-eks-0f56d01   10.11.5.178    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-7-177.eu-central-1.compute.internal    Ready    <none>   170m   v1.31.4-eks-0f56d01   10.11.7.177    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-8-246.eu-central-1.compute.internal    Ready    <none>   42h    v1.31.4-eks-0f56d01   10.11.8.246    <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
ip-10-11-9-64.eu-central-1.compute.internal     Ready    <none>   41h    v1.31.4-eks-0f56d01   10.11.9.64     <none>        Bottlerocket OS 1.34.0 (aws-k8s-1.31)   6.1.128          containerd://1.7.25+bottlerocket
  • How was the ingress-nginx-controller installed:
    • If helm was used then please show output of helm ls -A | grep -i ingress
    • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
    • If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
    • If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all instances

The ingress-nginx Helm chart was installed using Argo CD with the following GitOps CI/CD commands:

$ export ARGOCD_SERVER=argocd-server.argocd.svc.cluster.local:80 
$ (printf "=== Wait if already in progress ${APP_OF_APPS} ===\n" && argocd ${ARGOCLI_FLAGS} app wait ${APP_OF_APPS} ${ARGOCLI_FLAGS_WAIT} && printf "=== Force sync ${APP_OF_APPS} ===\n" && argocd ${ARGOCLI_FLAGS} app get ${APP_OF_APPS} ${ARGOCLI_FLAGS_GET})
$ printf "=== Wait sync ${APP_OF_APPS} ===" && argocd ${ARGOCLI_FLAGS} app wait ${APP_OF_APPS} ${ARGOCLI_FLAGS_WAIT} && printf "======== Sync ${APP_OF_APPS} OK and healthy ========\n" ||  
$ (printf "=== Wait if already in progress ${APP_NAME} ===\n" && argocd ${ARGOCLI_FLAGS} app wait ${APP_NAME} ${ARGOCLI_FLAGS_WAIT} && printf "=== Force sync ${APP_NAME} ===\n" && argocd ${ARGOCLI_FLAGS} app get ${APP_NAME} ${ARGOCLI_FLAGS_GET})
$ argocd ${ARGOCLI_FLAGS} app wait ${APP_NAME} ${ARGOCLI_FLAGS_WAIT} && printf "======== Sync ${APP_NAME} OK and healthy ========\n" || 

variables:
ARGOCLI_FLAGS: "--plaintext --insecure"
ARGOCLI_DIFF_FLAGS: "--hard-refresh"
ARGOCLI_FLAGS_GET: "--refresh"
ARGOCLI_FLAGS_WAIT: "--sync --health"
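Note that a chart managed by Argo CD is rendered with helm template, so helm ls -A usually shows nothing for it; the effective values can instead be read from the Application resource. A minimal sketch, assuming the Application is named ingress-nginx and lives in the argocd namespace:

~ % kubectl -n argocd get application ingress-nginx \
      -o jsonpath='{.spec.source.helm.values}'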

  • Current State of the controller:
    • kubectl describe ingressclasses
~ % kubectl describe ingressclasses
Name:         alb
Labels:       app.kubernetes.io/instance=aws-load-balancer-controller
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=aws-load-balancer-controller
              app.kubernetes.io/version=v2.12.0
              helm.sh/chart=aws-load-balancer-controller-1.12.0
              team=ops
Annotations:  <none>
Controller:   ingress.k8s.aws/alb
Parameters:
  APIGroup:  elbv2.k8s.aws
  Kind:      IngressClassParams
  Name:      alb
Events:      <none>


Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.12.1
              helm.sh/chart=ingress-nginx-4.12.0
              team=ops
Annotations:  <none>
Controller:   k8s.io/ingress-nginx
Events:       <none>


Name:         nginx-internal
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx-internal
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.12.1
              helm.sh/chart=ingress-nginx-4.12.0
              team=ops
Annotations:  meta.helm.sh/release-name: cfe-nginx-internal
              meta.helm.sh/release-namespace: ingress-nginx-internal
Controller:   k8s.io/ingress-nginx-internal
Events:       <none>


Name:         nginx-mtls
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx-mtls
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.12.1
              helm.sh/chart=ingress-nginx-4.12.0
              team=ops
Annotations:  <none>
Controller:   k8s.io/ingress-nginx-mtls
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
~ % kubectl get all -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE     IP             NODE                                           NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-564f65f795-2t8hb   1/1     Running   0          3h32m   10.11.21.174   ip-10-11-20-5.eu-central-1.compute.internal    <none>           2/2
pod/ingress-nginx-controller-564f65f795-nrv8m   1/1     Running   0          19h     10.11.1.116    ip-10-11-4-107.eu-central-1.compute.internal   <none>           2/2

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                        PORT(S)                      AGE     SELECTOR
service/ingress-nginx-controller             LoadBalancer   172.20.154.39    k8s-ingressn-ingressn-aae43b2ab6-636154ca0f4ac376.elb.eu-central-1.amazonaws.com   80:30455/TCP,443:30531/TCP   2y69d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP      172.20.66.106    <none>                                                                             443/TCP                      2y69d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics     ClusterIP      172.20.111.119   <none>                                                                             10254/TCP                    2y19d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                                                                                     SELECTOR
deployment.apps/ingress-nginx-controller   2/2     2            2           2y69d   controller   registry.k8s.io/ingress-nginx/controller:v1.12.1@sha256:d2fbc4ec70d8aa2050dd91a91506e998765e86c96f32cffb56c503c9c34eed5b   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/ingress-nginx-controller-55445f84fc   0         0         0       6d15h   controller   registry.k8s.io/ingress-nginx/controller:v1.12.1@sha256:d2fbc4ec70d8aa2050dd91a91506e998765e86c96f32cffb56c503c9c34eed5b   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=55445f84fc
replicaset.apps/ingress-nginx-controller-564f65f795   2         2         2       4d22h   controller   registry.k8s.io/ingress-nginx/controller:v1.12.1@sha256:d2fbc4ec70d8aa2050dd91a91506e998765e86c96f32cffb56c503c9c34eed5b   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=564f65f795
replicaset.apps/ingress-nginx-controller-5788dc65d5   0         0         0       249d    controller   registry.k8s.io/ingress-nginx/controller:v1.11.1@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5788dc65d5
replicaset.apps/ingress-nginx-controller-5c879bbb54   0         0         0       47d     controller   registry.k8s.io/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5c879bbb54
replicaset.apps/ingress-nginx-controller-5d9859d7c5   0         0         0       6d16h   controller   registry.k8s.io/ingress-nginx/controller:v1.12.1@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5d9859d7c5
replicaset.apps/ingress-nginx-controller-6bdd87bb78   0         0         0       6d16h   controller   registry.k8s.io/ingress-nginx/controller:v1.12.1@sha256:d2fbc4ec70d8aa2050dd91a91506e998765e86c96f32cffb56c503c9c34eed5b   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6bdd87bb78
replicaset.apps/ingress-nginx-controller-6cc565b5d8   0         0         0       47d     controller   registry.k8s.io/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6cc565b5d8
replicaset.apps/ingress-nginx-controller-7679fffd     0         0         0       691d    controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7679fffd
replicaset.apps/ingress-nginx-controller-79cb476787   0         0         0       399d    controller   registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=79cb476787
replicaset.apps/ingress-nginx-controller-cc7b849fc    0         0         0       40d     controller   registry.k8s.io/ingress-nginx/controller:v1.11.1@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=cc7b849fc
replicaset.apps/ingress-nginx-controller-f78576c95    0         0         0       2y19d   controller   registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=f78576c95
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
~ % kubectl describe pod ingress-nginx-controller-56b9c4f89b-5zh7v -n ingress-nginx
Name:             ingress-nginx-controller-56b9c4f89b-5zh7v
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             ip-10-19-16-98.eu-central-1.compute.internal/10.19.16.98
Start Time:       Wed, 26 Mar 2025 18:22:37 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.12.1
                  helm.sh/chart=ingress-nginx-4.12.0
                  pod-template-hash=56b9c4f89b
                  team=ops
Annotations:      kubectl.kubernetes.io/restartedAt: 2024-05-27T17:46:59Z
                  kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory limit for container controller
Status:           Running
IP:               10.19.19.227
IPs:
  IP:           10.19.19.227
Controlled By:  ReplicaSet/ingress-nginx-controller-56b9c4f89b
Containers:
  controller:
    Container ID:    containerd://bcd267c351780bfb9fcf503dab608acd671d250f3c71ff153065e9d2349499f5
    Image:           registry.k8s.io/ingress-nginx/controller:v1.12.1@sha256:d2fbc4ec70d8aa2050dd91a91506e998765e86c96f32cffb56c503c9c34eed5b
    Image ID:        registry.k8s.io/ingress-nginx/controller@sha256:d2fbc4ec70d8aa2050dd91a91506e998765e86c96f32cffb56c503c9c34eed5b
    Ports:           80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --enable-annotation-validation=false
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --enable-metrics=true
    State:          Running
      Started:      Wed, 26 Mar 2025 18:22:41 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                     ingress-nginx-controller-56b9c4f89b-5zh7v (v1:metadata.name)
      POD_NAMESPACE:                ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:                   /usr/local/lib/libmimalloc.so
      AWS_STS_REGIONAL_ENDPOINTS:   regional
      AWS_DEFAULT_REGION:           eu-central-1
      AWS_REGION:                   eu-central-1
      AWS_ROLE_ARN:                 arn:aws:iam::<account_id>:role/AmazonEKSLoadBalancerControllerRole
      AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-25hjp (ro)
Readiness Gates:
  Type                                                           Status
  target-health.elbv2.k8s.aws/k8s-ingressn-ingressn-0c8f121c18   True 
  target-health.elbv2.k8s.aws/k8s-ingressn-ingressn-f02aa14c7b   True 
Conditions:
  Type                                                           Status
  target-health.elbv2.k8s.aws/k8s-ingressn-ingressn-f02aa14c7b   True 
  target-health.elbv2.k8s.aws/k8s-ingressn-ingressn-0c8f121c18   True 
  PodReadyToStartContainers                                      True 
  Initialized                                                    True 
  Ready                                                          True 
  ContainersReady                                                True 
  PodScheduled                                                   True 
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-25hjp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              dedicated=default
                             kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
~ % kubectl describe svc ingress-nginx-controller -n ingress-nginx
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.12.1
                          helm.sh/chart=ingress-nginx-4.12.0
                          team=ops
Annotations:              service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
                          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
                          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
                          service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: 2
                          service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: 2
                          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
                          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
                          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                          service.beta.kubernetes.io/aws-load-balancer-ssl-cert:
                            arn:aws:acm:eu-central-1:<account_id>:certificate/8feeec5a-69a5-418d-a868-85d44c2f8110
                          service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
                          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
                          service.beta.kubernetes.io/aws-load-balancer-type: external
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.20.179.227
IPs:                      172.20.179.227
LoadBalancer Ingress:     k8s-ingressn-ingressn-afa5790ad3-943ec4f4164566e7.elb.eu-central-1.amazonaws.com
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32322/TCP
Endpoints:                10.19.19.227:80,10.19.4.231:80
Port:                     https  443/TCP
TargetPort:               http/TCP
NodePort:                 https  31082/TCP
Endpoints:                10.19.19.227:80,10.19.4.231:80
Session Affinity:         None
External Traffic Policy:  Local
Internal Traffic Policy:  Cluster
HealthCheck NodePort:     30505
Events:                   <none>
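The service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: * annotation above makes the NLB prepend a PROXY protocol v2 header to every connection it forwards, so the controller must be configured to expect it. A minimal check-and-enable sketch (ConfigMap name taken from the --configmap flag in the pod spec above; via Helm the equivalent setting is controller.config."use-proxy-protocol"):

~ % kubectl -n ingress-nginx get configmap ingress-nginx-controller \
      -o jsonpath='{.data.use-proxy-protocol}'
~ % kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
      --type merge -p '{"data":{"use-proxy-protocol":"true"}}'

If the first command already prints true, the ConfigMap side is consistent, and the broken headers come from something connecting without the NLB in the path.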
  • Current state of ingress object, if applicable:
    • kubectl -n <appnamespace> get all,ing -o wide
    • kubectl -n <appnamespace> describe ing <ingressname>
~ % kubectl describe ingress usweb -n usweb                     
Name:             usweb-ingress-origin
Labels:           app.kubernetes.io/created-by=argocd
                  app.kubernetes.io/instance=usweb
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=usweb
                  app.kubernetes.io/version=4.7
                  helm.sh/chart=usweb-0.1.0
                  team=ops
Namespace:        usweb
Address:          k8s-ingressn-ingressn-afa5790ad3-943ec4f4164566e7.elb.eu-central-1.amazonaws.com
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  origin-usweb.prod.<vhost>
                               /   usweb:443 (<none>)
Annotations:                   nginx.ingress.kubernetes.io/backend-protocol: HTTPS
                               nginx.ingress.kubernetes.io/configuration-snippet:
                                 expires 30d;
                                 add_header Cache-Control public;
                                 add_header Pragma public;
                                 add_header Cache-Control public;
                                 etag on;
                               nginx.ingress.kubernetes.io/force-ssl-redirect: true
                               nginx.ingress.kubernetes.io/secure-backends: true
                               nginx.ingress.kubernetes.io/server-snippet:
                                 proxy_ssl_name <vhost>;
                                 proxy_ssl_server_name on;
                               nginx.ingress.kubernetes.io/upstream-vhost: <vhost>
                               nginx.ingress.kubernetes.io/whitelist-source-range: <some-ips-here>
Events:                        <none>
  • If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to it with the -v flag

  • Others:

    • Any other related information, such as:
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help

How to reproduce this issue:

/remove-kind bug

@ahaj-98 added the kind/bug label on Apr 2, 2025
@k8s-ci-robot added the needs-kind and needs-triage labels and removed the kind/bug label on Apr 2, 2025
k8s-ci-robot (Contributor) commented:

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Gacko (Member) commented Apr 2, 2025

I assume something's accessing these pods without sending the PROXY protocol header. Such connection attempts get rejected, and NGINX logs that. Sadly, there's no way to make the PROXY protocol header optional: either you have PROXY protocol enabled and every client provides the header on the TCP stream, or you cannot connect to NGINX.
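One way to confirm that is to connect to a controller pod directly, bypassing the NLB, so that no PROXY header is ever sent. A minimal sketch, reusing a controller pod IP from the output above (the curlimages/curl image is just an example):

~ % kubectl run proxy-test --rm -it --restart=Never --image=curlimages/curl -- \
      curl -vk https://10.11.21.174/

With use-proxy-protocol enabled, NGINX tries to parse the TLS ClientHello as a PROXY header, rejects the connection, and should log the same "broken header ... while reading PROXY protocol" error with the test pod's IP as the client.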
