Created due to kubernetes/org#715.
See the KEP proposal for architecture and details.
Learn how to engage with the Kubernetes community on the community page.
You can reach the maintainers of this project at:
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
As of the 0.28.0 release, the apiserver-network-proxy project is changing its versioning and release process. Going forward the project will adhere to these rules:
- This project follows semantic versioning (e.g. `x.y.z`) for releases and tags.
- Tags indicate readiness for a release, and project maintainers will create corresponding releases.
- Releases and tags align with the Kubernetes minor release version (the `y` in `x.y.z`). For instance, if Kubernetes releases version `1.99.0`, the corresponding release and tag for apiserver-network-proxy will be `0.99.0`.
- Branches will be created when the minor release version (the `y` in `x.y.z`) is increased, and follow the pattern `release-x.y`. For instance, if version `0.99.0` has been released, the corresponding branch will be named `release-0.99`.
- Patch-level versions for releases and tags will be updated when patches are applied to the specific release branch. For example, if patches must be applied to the `release-0.99` branch and a new release is created, the version will be `0.99.1`. In this manner the patch-level version number (the `z` in `x.y.z`) may not match the Kubernetes patch level.
For Kubernetes version `1.28.0+`, we recommend using the tag that corresponds to the same minor version number. For example, if you are working with Kubernetes version `1.99`, use the latest `0.99` tag and refer to the `release-0.99` branch. Note that the patch level may differ between apiserver-network-proxy and Kubernetes.
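For example, continuing the illustrative `1.99` / `0.99` scenario above, working against that Kubernetes version would mean checking out the matching release branch of this repository:

git checkout release-0.99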
For Kubernetes version `<=1.27`, it is recommended to match apiserver-network-proxy server & client minor release versions. With Kubernetes, this means:
- Kubernetes versions v1.26 through v1.27: `0.1.X` tags, `release-0.1` branch.
- Kubernetes versions v1.23 through v1.25: `0.0.X` tags, `release-0.0` branch.
- Kubernetes versions up to v1.23: apiserver-network-proxy versions up to `v0.0.30`. Refer to the Kubernetes go.mod file for the specific release version.
Please make sure you have the `REGISTRY` and `PROJECT_ID` environment variables set. For local builds these can be set to anything. For image builds these determine the location of your image. For GCE, the registry should be `gcr.io` and `PROJECT_ID` should be the project you want to use the images in.
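For example (the project ID below is only a placeholder; substitute your own):

export REGISTRY=gcr.io
export PROJECT_ID=my-gcp-project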
Please ensure the Go bin directory (usually `~/go/bin`) is in your `PATH`.
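One common way to do this, assuming a default Go installation, is:

export PATH="$PATH:$(go env GOPATH)/bin"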
The `mockgen` tool must be installed on your system. Currently, we are using `go.uber.org/mock/mockgen@v0.4.0`:

go install go.uber.org/mock/mockgen@v0.4.0
Proto definitions are compiled with `protoc`. Please ensure you have protoc installed (Instructions) and the `protoc-gen-go` and `protoc-gen-go-grpc` plugins at the appropriate versions.

Currently, we are using `protoc-gen-go@v1.27.1`:

go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.27.1

Currently, we are using `protoc-gen-go-grpc@v1.2`:

go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2
make clean
make certs
make gen
make build
make docker-build
The current examples run two actual services as well as a sample client on one end and a sample destination for requests on the other.
- Proxy service: The proxy service takes the API server requests and forwards them appropriately.
- Agent service: The agent service connects to the proxy and then allows traffic to be forwarded to it.
Frontend client =HTTP over GRPC=> (:8090) proxy (:8091) <=GRPC= agent =HTTP=> http-test-server(:8000)
  |                                                                                ^
  |                                     Tunnel                                     |
  +--------------------------------------------------------------------------------+
- Start Simple test HTTP Server (Sample destination)
./bin/http-test-server
- Start proxy service
./bin/proxy-server --server-ca-cert=certs/frontend/issued/ca.crt --server-cert=certs/frontend/issued/proxy-frontend.crt --server-key=certs/frontend/private/proxy-frontend.key --cluster-ca-cert=certs/agent/issued/ca.crt --cluster-cert=certs/agent/issued/proxy-frontend.crt --cluster-key=certs/agent/private/proxy-frontend.key
- Start agent service
./bin/proxy-agent --ca-cert=certs/agent/issued/ca.crt --agent-cert=certs/agent/issued/proxy-agent.crt --agent-key=certs/agent/private/proxy-agent.key
- Run client (mTLS enabled sample client)
./bin/proxy-test-client --ca-cert=certs/frontend/issued/ca.crt --client-cert=certs/frontend/issued/proxy-client.crt --client-key=certs/frontend/private/proxy-client.key
Frontend client =HTTP over GRPC+UDS=> (/tmp/uds-proxy) proxy (:8091) <=GRPC= agent =HTTP=> SimpleHTTPServer(:8000)
  |                                                                                            ^
  |                                           Tunnel                                           |
  +--------------------------------------------------------------------------------------------+
- Start Simple test HTTP Server (Sample destination)
./bin/http-test-server
- Start proxy service
./bin/proxy-server --server-port=0 --uds-name=/tmp/uds-proxy --cluster-ca-cert=certs/agent/issued/ca.crt --cluster-cert=certs/agent/issued/proxy-frontend.crt --cluster-key=certs/agent/private/proxy-frontend.key
- Start agent service
./bin/proxy-agent --ca-cert=certs/agent/issued/ca.crt --agent-cert=certs/agent/issued/proxy-agent.crt --agent-key=certs/agent/private/proxy-agent.key
- Run client (mTLS enabled sample client)
./bin/proxy-test-client --proxy-port=0 --proxy-uds=/tmp/uds-proxy --proxy-host=""
Frontend client =HTTP-CONNECT=> (:8090) proxy (:8091) <=GRPC= agent =HTTP=> SimpleHTTPServer(:8000)
  |                                                                              ^
  |                                    Tunnel                                    |
  +------------------------------------------------------------------------------+
- Start SimpleHTTPServer (Sample destination)
./bin/http-test-server
- Start proxy service
./bin/proxy-server --mode=http-connect --server-ca-cert=certs/frontend/issued/ca.crt --server-cert=certs/frontend/issued/proxy-frontend.crt --server-key=certs/frontend/private/proxy-frontend.key --cluster-ca-cert=certs/agent/issued/ca.crt --cluster-cert=certs/agent/issued/proxy-frontend.crt --cluster-key=certs/agent/private/proxy-frontend.key
- Start agent service
./bin/proxy-agent --ca-cert=certs/agent/issued/ca.crt --agent-cert=certs/agent/issued/proxy-agent.crt --agent-key=certs/agent/private/proxy-agent.key
- Run client (mTLS & http-connect enabled sample client)
./bin/proxy-test-client --mode=http-connect --proxy-host=127.0.0.1 --ca-cert=certs/frontend/issued/ca.crt --client-cert=certs/frontend/issued/proxy-client.crt --client-key=certs/frontend/private/proxy-client.key
- Run curl client (curl using an mTLS http-connect proxy)
curl -v -p --proxy-key certs/frontend/private/proxy-client.key --proxy-cert certs/frontend/issued/proxy-client.crt --proxy-cacert certs/frontend/issued/ca.crt --proxy-cert-type PEM -x https://127.0.0.1:8090 http://localhost:8000/success
See the following README.md.
See this README.md for an example that creates a local Kubernetes cluster using kind and deploys the proxy agent on a worker node and the proxy server on a control plane node.
See this README.md for a similar example that creates a kind cluster with a user-configurable number of control plane and worker nodes, and optionally sideloads custom proxy agent and server images.
`apiserver-network-proxy` components are intended to run as standalone binaries and should not be imported as a library. Clients communicating with the network proxy can import the `konnectivity-client` module.
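For example, a client that needs to dial through the proxy would add that module to its own go.mod (pinning to the tag that matches its Kubernetes minor version, per the versioning policy above):

go get sigs.k8s.io/apiserver-network-proxy/konnectivity-client

The gRPC tunnel client used by consumers such as kube-apiserver lives in that module's `pkg/client` package.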