
Fix Client#get_resources for custom resources that are not yet defined #14

Merged: 16 commits from fix/get-custom-resources-404 into master, Aug 13, 2018

Conversation

SpComb (Contributor) commented Aug 13, 2018

Fixes #13

Client#get_resources uses apis(prefetch_resources: true, skip_missing: true) to handle the GET /apis/certmanager.k8s.io/v1alpha1 => HTTP 404 error case when prefetching api_resources, and skips the GET requests for those undefined custom resources. A resource whose API is not yet defined cannot exist yet, so it is valid to return nil for such undefined resources.

There's some complexity around the new Util.compact_map implementation because get_resources must return an array matching the resources argument while also skipping over some requests, but this should all have complete spec coverage.
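
For reference, a minimal sketch of such a compact_map helper, reconstructed from this description and the excerpt quoted in the review below; the exact merged implementation may differ:

module K8s
  module Util
    # Yield the non-nil values of args to the block, then map the block's
    # return values back onto the original positions, with nil for every
    # arg that was nil.
    def self.compact_map(args)
      func_args = args.compact
      values = yield func_args

      # Hash{arg => value}
      value_map = Hash[func_args.zip(values)]

      args.map{|arg| value_map[arg] }
    end
  end
end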

SpComb added the bug (Something isn't working) label Aug 13, 2018
SpComb (Contributor, Author) commented Aug 13, 2018

Note that apis(..., prefetch_resources: true, skip_missing: true) is perfectly safe: the client will still explode with the correct error if you try to map some undefined resource:

  1) K8s::Client for a mocked API #get_resources for custom resources which do not yet exist returns nils
     Failure/Error: raise error_class.new(method, path, response.status, "#{response.reason_phrase}: #{response_data}")
     
     K8s::Error::NotFound:
       GET /apis/pharos-test.k8s.io/v0 => HTTP 404 : 404 page not found
     # ./lib/k8s/transport.rb:171:in `parse_response'
     # ./lib/k8s/transport.rb:185:in `request'
     # ./lib/k8s/transport.rb:247:in `get'
     # ./lib/k8s/api_client.rb:42:in `api_resources!'
     # ./lib/k8s/api_client.rb:51:in `api_resources'
     # ./lib/k8s/api_client.rb:77:in `client_for_resource'
     # ./lib/k8s/client.rb:130:in `block in get_resources'
     # ./lib/k8s/client.rb:130:in `map'
     # ./lib/k8s/client.rb:130:in `get_resources'
     # ./spec/pharos/kube/client_spec.rb:340:in `block (6 levels) in <top (required)>'

This fix relies on get_resources explicitly selecting only those API clients with prefetched/defined resources.
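
In outline, the fixed get_resources could look like the following sketch. The apis(...) call, client_for_resource, Util.compact_map and @transport.requests are all quoted elsewhere in this PR; the api_resources? predicate and the get_request helper are assumptions for illustration, not necessarily the merged code:

# Sketch, assuming apis() returns one API client per resource apiVersion:
def get_resources(resources)
  # prefetch API groups, tolerating HTTP 404 for undefined CRD APIs
  resource_apis = apis(resources.map(&:apiVersion), prefetch_resources: true, skip_missing: true)

  # build a GET request per resource, or nil if its API is not (yet) defined
  requests = resources.zip(resource_apis).map{ |resource, api_client|
    next nil unless api_client.api_resources? # assumed predicate: "resources were prefetched"

    api_client.client_for_resource(resource).get_request(resource) # assumed request-building helper
  }

  # pipeline the non-nil requests, keeping positions aligned with resources
  Util.compact_map(requests) { |reqs|
    @transport.requests(*reqs, skip_missing: true)
  }
end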

SpComb (Contributor, Author) commented Aug 13, 2018

The behavior of the fixed K8s::Client#get_resources should be exactly equivalent to a loop over K8s::Client#get_resource that rescues K8s::Error::NotFound, except that it's pipelined for performance:

resources.map{|resource|
  begin
    client.get_resource(resource)
  rescue K8s::Error::NotFound
    nil
  end
}

This is because the same K8s::Error::NotFound is raised both by client_for_resource => APIClient#api_resources for undefined resources and by ResourceClient#get for defined but non-existing resources.

Without this fix, client.get_resources(resources) would still correctly return nil for resources that are defined but missing, but it would raise K8s::Error::NotFound for resources that are not defined.

# Hash{arg => value}
value_map = Hash[func_args.zip(values)]

args.map{|arg| value_map[arg] }
SpComb (Contributor, Author) commented on this hunk:
Because of how the hash is used, any duplicate args will map to the same return value... shouldn't be a problem when used with idempotent methods like GET requests, but it's a limitation.
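
A quick illustration of that limitation, assuming the compact_map sketch above:

# Duplicate args collapse onto a single hash key, so both positions get
# the same (last) value back — harmless for idempotent GET requests:
K8s::Util.compact_map(['a', nil, 'a']) { |args|
  args.each_with_index.map{ |arg, i| "#{arg}#{i}" }
}
# => ["a1", nil, "a1"]  (rather than ["a0", nil, "a1"])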

responses = @transport.requests(*requests, skip_missing: true)

# map non-nil requests to response objects, or nil for nil request options
Util.compact_map(requests) { |requests|
A reviewer (Contributor) commented on this hunk:

This is shadowing an outer local variable... usually bad practice.
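
For instance, renaming the block parameter would avoid the shadowing (a hypothetical rename, not the merged fix):

# The block parameter no longer shadows the outer `requests` variable:
Util.compact_map(requests) { |compacted_requests|
  @transport.requests(*compacted_requests, skip_missing: true)
}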

jakolehm (Contributor) left a review:

LGTM

SpComb (Contributor, Author) commented Aug 13, 2018

Tested to (still) fix kontena/pharos-cluster#523 with git 6d2f7ae... note the initial pipelined 404, and then the new GET /apis/certmanager.k8s.io/v1alpha1 later on, just before POSTing the actual custom resource:

==> Enabling addon cert-manager
I, [2018-08-13T12:28:35.130475 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: [GET /apis/certmanager.k8s.io/v1alpha1] => HTTP [404] in 0.047s
I, [2018-08-13T12:28:35.249014 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: [GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/certificates.certmanager.k8s.io, GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/clusterissuers.certmanager.k8s.io, GET /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/cert-manager, GET /apis/rbac.authorization.k8s.io/v1beta1/clusterroles/cert-manager, GET /apis/extensions/v1beta1/namespaces/kube-system/deployments/cert-manager, GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/issuers.certmanager.k8s.io, GET /api/v1/namespaces/kube-system/serviceaccounts/cert-manager] => HTTP [404, 404, 404, 404, 404, 404, 404] in 0.112s
I, [2018-08-13T12:28:35.249778 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/certificates.certmanager.k8s.io in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.310063 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.058s
I, [2018-08-13T12:28:35.310860 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/clusterissuers.certmanager.k8s.io in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.376897 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.063s
I, [2018-08-13T12:28:35.377749 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource rbac.authorization.k8s.io/v1beta1:ClusterRoleBinding/cert-manager in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.439484 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.059s
I, [2018-08-13T12:28:35.440394 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource rbac.authorization.k8s.io/v1beta1:ClusterRole/cert-manager in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.513995 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/rbac.authorization.k8s.io/v1beta1/clusterroles <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.071s
I, [2018-08-13T12:28:35.514835 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource extensions/v1beta1:Deployment/cert-manager in namespace kube-system with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.602347 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/extensions/v1beta1/namespaces/kube-system/deployments <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.085s
I, [2018-08-13T12:28:35.603488 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/issuers.certmanager.k8s.io in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.661367 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.055s
I, [2018-08-13T12:28:35.662501 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource certmanager.k8s.io/v1alpha1:Issuer/letsencrypt in namespace default with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.714858 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: GET /apis/certmanager.k8s.io/v1alpha1 => HTTP 200: <K8s::API::MetaV1::APIResourceList> in 0.050s
I, [2018-08-13T12:28:35.789797 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /apis/certmanager.k8s.io/v1alpha1/namespaces/default/issuers <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.073s
I, [2018-08-13T12:28:35.790810 #1]  INFO -- Pharos::Kube::Stack<cert-manager>: Create resource v1:ServiceAccount/cert-manager in namespace kube-system with checksum=37de1f40cb3981e898be450e25bea72a
I, [2018-08-13T12:28:35.848169 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: POST /api/v1/namespaces/kube-system/serviceaccounts <K8s::Resource> => HTTP 201: <K8s::Resource> in 0.055s
W, [2018-08-13T12:28:36.137552 #1]  WARN -- K8s::Transport<https://167.99.39.233:6443>: Retry GET /apis/metrics.k8s.io/v1beta1/nodes => HTTP 503 Service Unavailable: Error: 'context canceled'
Trying to reach: 'https://10.250.163.130:443/apis/metrics.k8s.io/v1beta1/nodes?labelSelector=pharos.kontena.io%2Fstack%3Dcert-manager' in 0.275s
I, [2018-08-13T12:28:36.207731 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: GET /apis/metrics.k8s.io/v1beta1/nodes?labelSelector=pharos.kontena.io%2Fstack%3Dcert-manager => HTTP 200: <K8s::API::MetaV1::List> in 0.070s
I, [2018-08-13T12:28:36.208298 #1]  INFO -- K8s::Transport<https://167.99.39.233:6443>: [GET /api/v1/componentstatuses, GET /api/v1/configmaps, GET /api/v1/endpoints, GET /api/v1/events, GET /api/v1/limitranges, GET /api/v1/namespaces, GET /api/v1/nodes, GET /api/v1/persistentvolumeclaims, GET /api/v1/persistentvolumes, GET /api/v1/pods, GET /api/v1/podtemplates, GET /api/v1/replicationcontrollers, GET /api/v1/resourcequotas, GET /api/v1/secrets, GET /api/v1/serviceaccounts, GET /api/v1/services, GET /apis/apiregistration.k8s.io/v1/apiservices, GET /apis/extensions/v1beta1/daemonsets, GET /apis/extensions/v1beta1/deployments, GET /apis/extensions/v1beta1/ingresses, GET /apis/extensions/v1beta1/networkpolicies, GET /apis/extensions/v1beta1/podsecuritypolicies, GET /apis/extensions/v1beta1/replicasets, GET /apis/apps/v1/controllerrevisions, GET /apis/apps/v1/daemonsets, GET /apis/apps/v1/deployments, GET /apis/apps/v1/replicasets, GET /apis/apps/v1/statefulsets, GET /apis/events.k8s.io/v1beta1/events, GET /apis/autoscaling/v1/horizontalpodautoscalers, GET /apis/batch/v1/jobs, GET /apis/certificates.k8s.io/v1beta1/certificatesigningrequests, GET /apis/networking.k8s.io/v1/networkpolicies, GET /apis/policy/v1beta1/poddisruptionbudgets, GET /apis/policy/v1beta1/podsecuritypolicies, GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings, GET /apis/rbac.authorization.k8s.io/v1/clusterroles, GET /apis/rbac.authorization.k8s.io/v1/rolebindings, GET /apis/rbac.authorization.k8s.io/v1/roles, GET /apis/storage.k8s.io/v1/storageclasses, GET /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations, GET /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations, GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions, GET /apis/scheduling.k8s.io/v1beta1/priorityclasses, GET /apis/pharos-test.k8s.io/v0/tests, GET /apis/crd.projectcalico.org/v1/hostendpoints, GET /apis/crd.projectcalico.org/v1/ippools, GET /apis/crd.projectcalico.org/v1/clusterinformations, GET /apis/crd.projectcalico.org/v1/networkpolicies, GET /apis/crd.projectcalico.org/v1/bgpconfigurations, GET /apis/crd.projectcalico.org/v1/globalnetworksets, GET /apis/crd.projectcalico.org/v1/bgppeers, GET /apis/crd.projectcalico.org/v1/globalnetworkpolicies, GET /apis/crd.projectcalico.org/v1/felixconfigurations, GET /apis/metrics.k8s.io/v1beta1/nodes, GET /apis/metrics.k8s.io/v1beta1/pods] => HTTP [200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 503, 200] in 0.275s
D, [2018-08-13T12:28:36.209866 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource v1:ServiceAccount/cert-manager in namespace kube-system with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.210130 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource extensions/v1beta1:Deployment/cert-manager in namespace kube-system with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.210326 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource apps/v1:Deployment/cert-manager in namespace kube-system with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.210446 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource rbac.authorization.k8s.io/v1:ClusterRoleBinding/cert-manager in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.210617 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource rbac.authorization.k8s.io/v1:ClusterRole/cert-manager in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.210741 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/certificates.certmanager.k8s.io in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.210891 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/clusterissuers.certmanager.k8s.io in namespace  with checksum=37de1f40cb3981e898be450e25bea72a
D, [2018-08-13T12:28:36.211001 #1] DEBUG -- Pharos::Kube::Stack<cert-manager>: List resource apiextensions.k8s.io/v1beta1:CustomResourceDefinition/issuers.certmanager.k8s.io in namespace  with checksum=37de1f40cb3981e898be450e25bea72a

SpComb merged commit 202a2e8 into master on Aug 13, 2018
SpComb deleted the fix/get-custom-resources-404 branch on August 13, 2018 at 12:38