By adding some affinity policies to the watcher deployment (I'll create a separate issue/PR for that), my team is on a path to properly launching the watcher with multiple replicas for HA purposes, with the knative leader election kicking in correctly.
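For reference, a rough sketch of the kind of affinity policy I mean for the watcher, using soft pod anti-affinity so replicas prefer to land on different nodes (the label key/value here is hypothetical; match your actual deployment labels):

```yaml
spec:
  replicas: 2
  template:
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" rather than "required" so the deployment still
          # schedules on a small cluster with fewer nodes than replicas.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: tekton-results-watcher  # illustrative label
              topologyKey: kubernetes.io/hostname
```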
However, the non-knative, gRPC-based api server is a whole different animal.
This feature asks for someone to prototype standing up a gRPC load balancer in front of a multi-replica api server, verify that the setup (with proper affinity policies to ensure spread across multiple k8s nodes) handles things correctly, and then document the procedure for others to use.
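For the api server side of that prototype, an alternative (or complement) to pod anti-affinity is a topology spread constraint, which caps the replica-count skew across nodes. A sketch, again with a hypothetical label selector:

```yaml
spec:
  replicas: 3
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                       # at most one replica of imbalance per node
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway  # soft constraint; don't block scheduling
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: tekton-results-api  # illustrative label
```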
There is also the notion of exposing the api server differently to different clients, where you define separate Services. Examples of that in the doc could be useful.
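As one possible example of a per-client Service: an external-facing Service whose target port is the kube-rbac-proxy sidecar, so outside traffic is always authenticated by the proxy (names and port names below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tekton-results-api-external   # illustrative name
spec:
  selector:
    app.kubernetes.io/name: tekton-results-api
  ports:
  - name: grpc
    port: 8080
    # Route to the kube-rbac-proxy sidecar's named container port,
    # not the api server's gRPC port directly.
    targetPort: kube-rbac-proxy
```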
Use case
HA of any system component is a pretty standard requirement for any hosted service.
I want external clients to go through the kube-rbac-proxy, but I do not require the watcher's communication with the api server to go through the kube-rbac-proxy.
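The complement to the external path would be an in-cluster Service for the watcher that targets the api server's gRPC container port directly, bypassing the proxy (again, names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tekton-results-api-internal   # illustrative name
spec:
  selector:
    app.kubernetes.io/name: tekton-results-api
  ports:
  - name: grpc
    port: 8080
    # Hit the api server's gRPC container port directly; the watcher's
    # in-cluster traffic skips the kube-rbac-proxy sidecar.
    targetPort: grpc-api
```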
/label feature
I also started a thread in Slack at https://tektoncd.slack.com/archives/C01GCEH0FLK/p1707507954291439, but there has been no interest or commentary from others in the community, so I'm opening this item for reference at minimum.
As best I can tell, gRPC in general can support use of a load balancer per https://github.com/grpc/grpc/blob/master/doc/load-balancing.md, where the balancing happens on a per-call basis, not a per-connection basis.
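One common way to get per-call balancing inside k8s is client-side: a headless Service makes DNS return every pod IP, and a gRPC client configured with the round_robin policy then picks a backend per call rather than pinning a single connection. A sketch with a hypothetical Service name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tekton-results-api-headless   # illustrative name
spec:
  clusterIP: None   # headless: DNS resolves to all ready pod IPs
  selector:
    app.kubernetes.io/name: tekton-results-api
  ports:
  - name: grpc
    port: 8080
```

A client would then dial something like `dns:///tekton-results-api-headless.<namespace>.svc.cluster.local:8080` with a round_robin service config. For clients that can't do client-side balancing, an L7 proxy (e.g. Envoy) that understands HTTP/2 can balance per request instead; either approach could be what the prototype validates.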