
Getting Error While Running Kube-Bench on AWS from local machine #136

Closed
abhinavwwefan opened this issue Jun 13, 2018 · 7 comments

@abhinavwwefan

I have a cluster configured in the AWS environment, running the Kubernetes infrastructure.

Now I am connecting to pods and nodes through kubectl using the kube-conf file.

I have installed kube-bench on my local machine and am trying to execute it against the master and nodes, but I get the response below.

abhinav:kube-bench abhinav$ ./kube-bench master
No CIS spec for 1.10 - using tests from CIS 1.2.0 spec for Kubernetes 1.8

need apiserver executable but none of the candidates are running
abhinav:kube-bench abhinav$ ./kube-bench node
No CIS spec for 1.10 - using tests from CIS 1.2.0 spec for Kubernetes 1.8

need kubelet executable but none of the candidates are running

Can you help me run kube-bench against this infrastructure from my local machine, or do I have to set it up on the Kubernetes infra or on my EC2 instance?

@ttousai (Contributor) commented Jun 13, 2018

Hello @abhinavwwefan, kube-bench must be run on your infrastructure. You can use the following commands to run it there:

Run the master check

kubectl run --rm -i -t kube-bench-master --image=aquasec/kube-bench:latest --restart=Never --overrides="{ \"apiVersion\": \"v1\", \"spec\": { \"hostPID\": true, \"nodeSelector\": { \"kubernetes.io/role\": \"master\" }, \"tolerations\": [ { \"key\": \"node-role.kubernetes.io/master\", \"operator\": \"Exists\", \"effect\": \"NoSchedule\" } ] } }" -- master --version 1.8

Run the node check

kubectl run --rm -i -t kube-bench-node --image=aquasec/kube-bench:latest --restart=Never --overrides="{ \"apiVersion\": \"v1\", \"spec\": { \"hostPID\": true } }" -- node --version 1.8
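
If an interactive attach (the -i -t flags) isn't available from where you run kubectl, a variant (a sketch, reusing the pod name from above) is to keep the pod around and read its logs afterwards:

kubectl run kube-bench-node --image=aquasec/kube-bench:latest --restart=Never --overrides="{ \"apiVersion\": \"v1\", \"spec\": { \"hostPID\": true } }" -- node --version 1.8
kubectl logs kube-bench-node
kubectl delete pod kube-bench-node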

@lizrice (Contributor) commented Jun 28, 2018

There isn't a set of tests defined by the CIS for 1.10 and at the moment we require you to specify the version (with the --version 1.8 flag).

@ttousai I think it would be better if we automatically defaulted to 1.8 tests if the version number is higher (for now - there should be a new CIS spec coming out soon for 1.11). Wdyt?
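
Until then, the workaround for the original report is to pass the flag explicitly, e.g.:

./kube-bench master --version 1.8
./kube-bench node --version 1.8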

@ttousai (Contributor) commented Jun 28, 2018

@lizrice I think that will work. What do you think about making it the latest CIS version that kube-bench supports?

@lizrice (Contributor) commented Jun 29, 2018

@ttousai the logic should be to pick the tests for the highest version that is less than or equal to the currently running Kubernetes version.

So for example:

  • if you're running 1.10 with our config files as provided, it should use the 1.8 tests.
  • if you're running 1.10 and you added your own config files for 1.10, it should use those.
  • if you're running 1.11 at the moment with our config files as provided, it should use the 1.8 tests.
  • when we create tests for 1.11 (when CIS publishes a new benchmark), it should use the 1.11 tests.

If you specify the tests explicitly with the --version flag, that should take precedence.

Sound right to you?
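
A minimal sketch of that selection rule in shell, assuming the supported benchmark versions are 1.6, 1.7 and 1.8 (the real list and config layout live in kube-bench's cfg/ directory and may differ):

#!/bin/sh
# Pick the highest supported CIS benchmark version that is less than or
# equal to the running Kubernetes version. sort -V compares version
# strings numerically, so 1.8 sorts before 1.10 (GNU coreutils).
running="1.10"            # e.g. as reported by the API server
best=""
for v in 1.6 1.7 1.8; do  # supported benchmark versions, ascending
  if [ "$(printf '%s\n%s\n' "$v" "$running" | sort -V | head -n 1)" = "$v" ]; then
    best="$v"
  fi
done
# An explicit --version flag would override $best here.
echo "No CIS spec for $running - using tests from CIS spec for Kubernetes $best"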

lizrice self-assigned this Jun 29, 2018

@ttousai (Contributor) commented Jun 29, 2018

@lizrice the flow sounds right.

@skam-github commented Mar 15, 2019

Hi All,

I am facing a similar issue on OpenShift Container Platform 3.10.

Issue 1:
[root@user kube-bench]# ./kube-bench master
need apiserver executable but none of the candidates are running

Issue 2:
[root@user1 kube-bench]# ./kube-bench node

need proxy executable but none of the candidates are running

I checked for an apiserver process, and the response is:
ps -ef | grep apiserver
root 45678 910112 0 06:01 pts/1 00:00:00 grep --color=auto apiserver

Then I listed the pods in all namespaces, and the response is:
oc get pods --all-namespaces
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
default docker-registry-1-4qq 1/1 Running 0 2d
default docker-registry-2-deploy 0/1 Error 0 2d
default kube-bench-master 0/1 Pending 0 5h
default master 0/1 Pending 0 3h
default registry-console-1-479 1/1 Running 1 2d
default router-1-d7zdg 1/1 Running 0 2d
default router-1-rhg2m 1/1 Running 0 2d
default router-1-vz45m 1/1 Running 0 2d
kube-system kube-storage-controller-doryd-7c8c6d5dc-5fkjg 1/1 Running 0 2d
kube-system master-api-user1.something.local 1/1 Running 1 2d
kube-system master-api-user2.something.local 1/1 Running 0 2d
kube-system master-api-user3.something.local 1/1 Running 2 2d
kube-system master-controllers-user1.something.local 1/1 Running 1 2d
kube-system master-controllers-user2.something.local 1/1 Running 0 2d
kube-system master-controllers-user3.something.local 1/1 Running 2 2d
openshift-node sync-AAAAA 1/1 Running 0 2d
openshift-node sync-BBBBB 1/1 Running 1 2d
openshift-node sync-CCCCC 1/1 Running 0 2d
openshift-sdn sdn-DDDDD 1/1 Running 2 2d
openshift-sdn sdn-EEEEE 1/1 Running 0 2d
openshift-sdn sdn-FFFFF 1/1 Running 0 2d
openshift-web-console webconsole-6ff6ff-fhrhb 1/1 Running 1 2d
openshift-web-console webconsole-6ff6ff-tdd42 1/1 Running 1 2d
openshift-web-console webconsole-6ff6ff-tflz6 1/1 Running 0 2d

Then running oc status returns:
In project default on server https://user1.something.local:8443

https://docker-registry-default.router.default.svc.cluster.local (passthrough) (svc/docker-registry)
dc/docker-registry deploys aaa.aaa.aaaa/openshift3/ose-docker-registry:v3.10.111
deployment #2 failed 2 days ago: config change
deployment #1 deployed 2 days ago - 1 pod

svc/kubernetes - XXX.XX.X.X ports 443->8443, 53->8053, 53->8053

https://registry-console-default.router.default.svc.cluster.local (passthrough) (svc/registry-console)
dc/registry-console deploys aaa.aaa.aaaa/openshift3/registry-console:v3.10
deployment #1 deployed 2 days ago - 1 pod

svc/router - YYY.YY.YY.Y ports 80, 443, 1936
dc/router deploys registry.access.redhat.com/openshift3/ose-haproxy-router:v3.10.111
deployment #1 deployed 2 days ago - 3 pods

pod/master runs aquasec/kube-bench:latest

pod/kube-bench-master runs aquasec/kube-bench:latest

Then I checked that kubectl is up and running:
kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+b81c8f8", GitCommit:"b81c8f8", GitTreeState:"clean", BuildDate:"2019-02-07T18:49:53Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}

Please let me know how to proceed in resolving this issue.

@lizrice (Contributor) commented Mar 15, 2019

@skam-github for OpenShift, at the moment you'll need to explicitly specify --version ocp-3.10 to pick up the configuration that includes the executables OpenShift uses.
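
For example, on a master host (the ocp-3.10 value is taken from the comment above; the node check takes the same flag):

./kube-bench master --version ocp-3.10
./kube-bench node --version ocp-3.10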
