Description
What happened
I executed the kube-hunter test suite on a K8S cluster where a strict PSP is defined.
podman run -it --env-file ~/opnfv/env \
-v ~/opnfv/ca.pem:/home/opnfv/functest/ca.pem:Z \
-v ~/opnfv/config:/root/.kube/config:Z \
-v ~/opnfv/results:/home/opnfv/functest/results:Z \
-v ~/opnfv/repositories.yml:/home/opnfv/functest/repositories.yml:Z \
-v ~/opnfv/cluster-admin.pem:/home/opnfv/functest/cluster-admin.pem:Z \
-v ~/opnfv/cluster-admin-key.pem:/home/opnfv/functest/cluster-admin-key.pem:Z \
opnfv/functest-kubernetes-security:v1.23 /bin/bash
run_tests -t kube_hunter
The log functest-kubernetes.debug.log:
...
2023-02-27 10:53:20,130 - kubernetes.client.rest - DEBUG - response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container \"kube-hunter\" in pod \"kube-hunter-mz5jk\" is waiting to start: CreateContainerConfigError","reason":"BadRequest","code":400}
2023-02-27 10:53:20,130 - functest_kubernetes.security.security - ERROR - Cannot run kube-hunter
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/functest_kubernetes/security/security.py", line 100, in run
self.deploy_job()
File "/usr/lib/python3.9/site-packages/functest_kubernetes/security/security.py", line 92, in deploy_job
self.pod_log = self.corev1.read_namespaced_pod_log(
File "/usr/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 22929, in read_namespaced_pod_log
return self.read_namespaced_pod_log_with_http_info(name, namespace, **kwargs) # noqa: E501
File "/usr/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 23048, in read_namespaced_pod_log_with_http_info
return self.api_client.call_api(
File "/usr/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/usr/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/usr/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/usr/lib/python3.9/site-packages/kubernetes/client/rest.py", line 239, in GET
return self.request("GET", url,
File "/usr/lib/python3.9/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '486f38ad-c75f-469c-8393-0fa6c71004f8', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 27 Feb 2023 10:53:25 GMT', 'Content-Length': '217'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container \"kube-hunter\" in pod \"kube-hunter-mz5jk\" is waiting to start: CreateContainerConfigError","reason":"BadRequest","code":400}
2023-02-27 10:53:20,151 - functest_kubernetes.security.security - ERROR - Cannot process results
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/functest_kubernetes/security/security.py", line 188, in run
self.process_results(**kwargs)
File "/usr/lib/python3.9/site-packages/functest_kubernetes/security/security.py", line 144, in process_results
self.details = json.loads(self.pod_log.splitlines()[-1])
IndexError: list index out of range
2023-02-27 10:53:20,151 - xtesting.ci.run_tests - INFO - Test result:
+---------------------+------------------+------------------+----------------+
| TEST CASE | PROJECT | DURATION | RESULT |
+---------------------+------------------+------------------+----------------+
| kube_hunter | functest | 20:00 | FAIL |
+---------------------+------------------+------------------+----------------+
kubectl describe pod -n kube-hunter-svrzt kube-hunter-mz5jk
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 58s default-scheduler Successfully assigned kube-hunter-svrzt/kube-hunter-mz5jk to cbis-sut1-worker-02
Normal AddedInterface 58s multus Add eth0 [192.168.72.105/32] from calico-network
Normal Pulled 6s (x7 over 57s) kubelet Container image "docker.io/aquasec/kube-hunter:0.3.1" already present on machine
Warning Failed 6s (x7 over 57s) kubelet Error: container has runAsNonRoot and image will run as root (pod: "kube-hunter-mz5jk_kube-hunter-svrzt(5090797f-f697-4841-9f73-a2d10c39f037)", container: kube-hunter)
The pod was admitted with the annotation kubernetes.io/psp: restricted.
PSP
kubectl get psp restricted -o yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy","metadata":{"annotations":{"seccomp.security.alpha.kubernetes.io/allowedProfileNames":"*","seccomp.security.alpha.kubernetes.io/defaultProfileName":"runtime/default"},"name":"restricted"},"spec":{"allowPrivilegeEscalation":false,"allowedHostPaths":[{"pathPrefix":"/etc/localtime","readOnly":true},{"pathPrefix":"/usr/share/zoneinfo/","readOnly":true},{"pathPrefix":"/dev/urandom","readOnly":true},{"pathPrefix":"/opt/bin/process-starter","readOnly":true}],"fsGroup":{"ranges":[{"max":65535,"min":1}],"rule":"MustRunAs"},"hostIPC":false,"hostNetwork":false,"hostPID":false,"privileged":false,"readOnlyRootFilesystem":false,"requiredDropCapabilities":["ALL"],"runAsUser":{"rule":"MustRunAsNonRoot"},"seLinux":{"rule":"RunAsAny"},"supplementalGroups":{"ranges":[{"max":65535,"min":1}],"rule":"MustRunAs"},"volumes":["configMap","emptyDir","projected","secret","downwardAPI","persistentVolumeClaim","hostPath","ephemeral"]}}
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
creationTimestamp: "2023-02-07T22:57:09Z"
name: restricted
resourceVersion: "473"
uid: e8242603-ca5d-4c49-a91e-adda30012e7f
spec:
allowPrivilegeEscalation: false
allowedHostPaths:
- pathPrefix: /etc/localtime
readOnly: true
- pathPrefix: /usr/share/zoneinfo/
readOnly: true
- pathPrefix: /dev/urandom
readOnly: true
- pathPrefix: /opt/bin/process-starter
readOnly: true
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
- hostPath
- ephemeral
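For context, the admission failure above comes from this PSP's runAsUser: MustRunAsNonRoot rule: the kube-hunter image declares no user and therefore runs as root, which the kubelet rejects. Below is a minimal sketch of the kind of securityContext the kube-hunter container would need to satisfy this PSP, written with the same kubernetes Python client that functest_kubernetes already uses to create the job. The UID and the --pod args are assumptions on my side, and whether kube-hunter 0.3.1 actually works when forced to a non-root UID is untested here.

from kubernetes import client

# Assumption: a non-root securityContext like this would pass the "restricted" PSP.
security_context = client.V1SecurityContext(
    run_as_non_root=True,                       # satisfies runAsUser: MustRunAsNonRoot
    run_as_user=65534,                          # arbitrary non-root UID (assumption)
    allow_privilege_escalation=False,           # matches allowPrivilegeEscalation: false
    capabilities=client.V1Capabilities(drop=["ALL"]),  # matches requiredDropCapabilities
)

container = client.V1Container(
    name="kube-hunter",
    image="docker.io/aquasec/kube-hunter:0.3.1",
    args=["--pod"],                             # in-pod scan mode (assumption about how the job is run)
    security_context=security_context,
)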
When I turned off PSP system-wide (by removing it from the kubelet config), the test case executed successfully.
Expected behavior
My question: is there any chance to make this test suite runnable on a K8S cluster where PSP is configured? If not, is there a way to skip the parts that require higher privileges and execute the remaining parts (if any) of the test suite? Or is running this test suite simply not applicable on such systems? In that case, would it be possible to add a check that runs before the test case itself, so it can print "This test suite is not applicable on a PSP-preconfigured K8S cluster" without even trying to run it? A rough sketch of what such a pre-check could look like is below.
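This is only an illustration of the idea, not functest code: it assumes a kubernetes Python client that still exposes policy/v1beta1 (the PSP API is removed in K8S 1.25+), assumes kubeconfig loading is handled as in the existing functest code, and the function name is hypothetical.

from kubernetes import client, config

def restrictive_psp_present():
    # List PSPs through policy/v1beta1; if the API group is absent
    # (e.g. K8S 1.25+), there is nothing to check.
    policy = client.PolicyV1beta1Api()
    try:
        psps = policy.list_pod_security_policy().items
    except client.exceptions.ApiException:
        return False
    # Treat any PSP that forbids running as root as "restrictive" for kube-hunter.
    return any(p.spec.run_as_user.rule == "MustRunAsNonRoot" for p in psps)

config.load_kube_config()
if restrictive_psp_present():
    print("This test suite is not applicable on a PSP-preconfigured K8S cluster")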