
Overview

Kubernetes, a container management platform offering strong scalability and stability, has firmly established itself as a core cloud component. Although many enterprises run Kubernetes, security for containers and security policies for Kubernetes itself have yet to take root, and security breaches keep occurring as a result.

For individuals and small to mid-sized companies, fully addressing these security issues is hard: everything from building a dedicated organization to establishing processes is no small task, so many still struggle to formulate countermeasures.

In this post we will therefore look at how a combination of open source tools can diagnose at least the most common problems and verify them in advance, examining security-related open source software that can be applied to real projects and how to put it to use.


kube-hunter (cluster security vulnerabilities)

kube-hunter is an open source tool that helps harden a Kubernetes cluster by discovering security vulnerabilities.

a. Installation

kube-hunter can be installed easily with the Python package manager (pip).

[root@ip-192-168-84-159 kube-hunter]# yum install python-pip
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Package python2-pip-20.2.2-1.amzn2.0.3.noarch already installed and latest version
Nothing to do
[root@ip-192-168-84-159 kube-hunter]# python3 -m ensurepip
Looking in links: /tmp/tmp_nw7te6m
Requirement already satisfied: setuptools in /usr/lib/python3.7/site-packages (49.1.3)
Requirement already satisfied: pip in /usr/lib/python3.7/site-packages (20.2.2)
[root@ip-192-168-84-159 kube-hunter]# pip3 --version
pip 20.2.2 from /usr/lib/python3.7/site-packages/pip (python 3.7)
[root@ip-192-168-84-159 kube-hunter]# pip3 install --user kube-hunter
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting kube-hunter
  Downloading kube_hunter-0.6.5-py3-none-any.whl (72 kB)
Collecting ruamel.yaml
  Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting future
  Downloading future-0.18.2.tar.gz (829 kB)
Collecting PrettyTable
  Downloading prettytable-3.2.0-py3-none-any.whl (26 kB)
Collecting urllib3>=1.24.3
  Downloading urllib3-1.26.8-py2.py3-none-any.whl (138 kB)
Collecting pluggy
  Downloading pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting kubernetes==12.0.1
  Downloading kubernetes-12.0.1-py2.py3-none-any.whl (1.7 MB)
Collecting netaddr
  Downloading netaddr-0.8.0-py2.py3-none-any.whl (1.9 MB)
Collecting dataclasses
  Downloading dataclasses-0.6-py3-none-any.whl (14 kB)
Collecting requests
  Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting netifaces
Collecting packaging
  Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting scapy>=2.4.3
  Downloading scapy-2.4.5.tar.gz (1.1 MB)
Collecting ruamel.yaml.clib>=0.2.6; platform_python_implementation == "CPython" and python_version < "3.11"
  Downloading ruamel.yaml.clib-0.2.6-cp37-cp37m-manylinux1_x86_64.whl (546 kB)
Collecting importlib-metadata; python_version < "3.8"
  Downloading importlib_metadata-4.11.2-py3-none-any.whl (17 kB)
Collecting wcwidth
  Downloading wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting pyyaml>=3.12
  Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
Collecting six>=1.9.0
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting python-dateutil>=2.5.3
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting google-auth>=1.0.1
  Downloading google_auth-2.6.0-py2.py3-none-any.whl (156 kB)
Collecting requests-oauthlib
  Downloading requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Requirement already satisfied: setuptools>=21.0.0 in /usr/lib/python3.7/site-packages (from kubernetes==12.0.1->kube-hunter) (49.1.3)
Collecting websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0
  Downloading websocket_client-1.3.1-py3-none-any.whl (54 kB)
Collecting certifi>=14.05.14
  Downloading certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting charset-normalizer~=2.0.0; python_version >= "3"
  Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5; python_version >= "3"
  Downloading idna-3.3-py3-none-any.whl (61 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
  Downloading pyparsing-3.0.7-py3-none-any.whl (98 kB)
Collecting zipp>=0.5
  Downloading zipp-3.7.0-py3-none-any.whl (5.3 kB)
Collecting typing-extensions>=3.6.4; python_version < "3.8"
  Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Collecting rsa<5,>=3.1.4; python_version >= "3.6"
  Downloading rsa-4.8-py3-none-any.whl (39 kB)
Collecting pyasn1-modules>=0.2.1
  Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting cachetools<6.0,>=2.0.0
  Downloading cachetools-5.0.0-py3-none-any.whl (9.1 kB)
Collecting oauthlib>=3.0.0
  Downloading oauthlib-3.2.0-py3-none-any.whl (151 kB)
Collecting pyasn1>=0.1.3
  Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
Using legacy 'setup.py install' for scapy, since package 'wheel' is not installed.
Installing collected packages: ruamel.yaml.clib, ruamel.yaml, future, zipp, typing-extensions, importlib-metadata, wcwidth, PrettyTable, urllib3, pluggy, pyyaml, six, python-dateutil, pyasn1, rsa, pyasn1-modules, cachetools, google-auth, charset-normalizer, certifi, idna, requests, oauthlib, requests-oauthlib, websocket-client, kubernetes, netaddr, dataclasses, netifaces, pyparsing, packaging, scapy, kube-hunter
    Running setup.py install for future ... done
  WARNING: The scripts pyrsa-decrypt, pyrsa-encrypt, pyrsa-keygen, pyrsa-priv2pub, pyrsa-sign and pyrsa-verify are installed in '/root/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script normalizer is installed in '/root/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script wsdump is installed in '/root/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script netaddr is installed in '/root/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
    Running setup.py install for scapy ... done
  WARNING: The script kube-hunter is installed in '/root/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed PrettyTable-3.2.0 cachetools-5.0.0 certifi-2021.10.8 charset-normalizer-2.0.12 dataclasses-0.6 future-0.18.2 google-auth-2.6.0 idna-3.3 importlib-metadata-4.11.2 kube-hunter-0.6.5 kubernetes-12.0.1 netaddr-0.8.0 netifaces-0.11.0 oauthlib-3.2.0 packaging-21.3 pluggy-1.0.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pyparsing-3.0.7 python-dateutil-2.8.2 pyyaml-6.0 requests-2.27.1 requests-oauthlib-1.3.1 rsa-4.8 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.6 scapy-2.4.5 six-1.16.0 typing-extensions-4.1.1 urllib3-1.26.8 wcwidth-0.2.5 websocket-client-1.3.1 zipp-3.7.0
[root@ip-192-168-84-159 kube-hunter]#

b. Registering kube-hunter on the PATH

Once installation completes, the kube-hunter executable is placed under $HOME/.local/bin. Add that directory to the PATH (e.g. in ~/.bash_profile), as shown below, before using the tool.

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:$HOME/.local/bin

export PATH

c. kube-hunter test list

[root@ip-192-168-84-159 ~]# kube-hunter --list

Passive Hunters:
----------------
* API Service Discovery
  Checks for the existence of K8s API Services

* K8s Dashboard Discovery
  Checks for the existence of a Dashboard

* Etcd service
  check for the existence of etcd service

* Host Discovery when running as pod
  Generates ip adresses to scan, based on cluster/scan type

* Host Discovery
  Generates ip adresses to scan, based on cluster/scan type

* Kubectl Client Discovery
  Checks for the existence of a local kubectl client

* Kubelet Discovery
  Checks for the existence of a Kubelet service, and its open ports

* Port Scanning
  Scans Kubernetes known ports to determine open endpoints for discovery

* Proxy Discovery
  Checks for the existence of a an open Proxy service

* Kubelet Readonly Ports Hunter
  Hunts specific endpoints on open ports in the readonly Kubelet server

* Kubelet Secure Ports Hunter
  Hunts specific endpoints on an open secured Kubelet

* AKS Hunting
  Hunting Azure cluster deployments using specific known configurations

* API Server Hunter
  Checks if API server is accessible

* API Server Hunter
  Accessing the API server using the service account token obtained from a compromised pod

* Api Version Hunter
  Tries to obtain the Api Server's version directly from /version endpoint

* Pod Capabilities Hunter
  Checks for default enabled capabilities in a pod

* Certificate Email Hunting
  Checks for email addresses in kubernetes ssl certificates

* Kubectl CVE Hunter
  Checks if the kubectl client is vulnerable to specific important CVEs

* Dashboard Hunting
  Hunts open Dashboards, gets the type of nodes in the cluster

* Etcd Remote Access
  Checks for remote availability of etcd, its version, and read access to the DB

* Mount Hunter - /var/log
  Hunt pods that have write access to host's /var/log. in such case, the pod can traverse read files on the host machine

* Proxy Hunting
  Hunts for a dashboard behind the proxy

* Access Secrets
  Accessing the secrets accessible to the pod

[root@ip-192-168-84-159 ~]#

The list above shows the Kubernetes detection targets that kube-hunter supports.
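Conceptually, the "Port Scanning" hunter above works by probing the well-known Kubernetes ports on each candidate host. Below is a minimal Python sketch of the idea; the port list is illustrative, not kube-hunter's exact internal list.

```python
import socket

# Well-known Kubernetes ports (illustrative selection, not kube-hunter's exact list).
K8S_PORTS = {
    443: "API Server (HTTPS)",
    2379: "etcd client",
    6443: "API Server",
    10250: "Kubelet API",
    10255: "Kubelet read-only",
}

def scan_host(host: str, timeout: float = 0.5) -> dict:
    """Return {port: description} for every port that accepts a TCP connection."""
    open_ports = {}
    for port, desc in K8S_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports[port] = desc
    return open_ports

print(scan_host("127.0.0.1"))
```

On a host with no cluster running, this prints an empty dict; on a control-plane node it would surface the same services kube-hunter reports as "open service" findings.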

d. Using kube-hunter

kube-hunter supports three options for scanning a cluster (each can also be selected non-interactively with the --remote, --interface, and --cidr command-line flags):

  • Remote scanning of a cluster at one or more specific addresses
  • Interface scanning of the subnets on all local network interfaces
  • Scanning of a given IP range
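The CIDR input used by IP range scanning accepts exclusions prefixed with `!` (e.g. `192.168.0.0/16,!192.168.0.8/32`). As a sketch of how such a spec can be expanded, here is a version using Python's ipaddress module; kube-hunter itself uses the netaddr package, so its exact behavior may differ.

```python
import ipaddress

def expand_cidr_spec(spec: str) -> list:
    """Expand a spec such as '192.168.0.0/24,!192.168.0.8/32' into the
    host addresses to scan; entries prefixed with '!' are excluded."""
    include, exclude = set(), set()
    for part in spec.split(","):
        part = part.strip()
        target = exclude if part.startswith("!") else include
        target.update(ipaddress.ip_network(part.lstrip("!"), strict=False).hosts())
    return sorted(include - exclude)

ips = expand_cidr_spec("192.168.0.0/29,!192.168.0.2/32")
print([str(ip) for ip in ips])  # 192.168.0.1, .3, .4, .5, .6
```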

Let us first look at Remote Scanning.

[root@ip-192-168-84-159 ~]# kube-hunter
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Interface scanning   (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 192.168.84.159
2022-03-13 14:13:50,012 INFO kube_hunter.modules.report.collector Started hunting
2022-03-13 14:13:50,012 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2022-03-13 14:13:50,023 INFO kube_hunter.modules.report.collector Found open service "Etcd" at 192.168.84.159:2379
2022-03-13 14:13:50,045 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.84.159:10250

Nodes
+-------------+----------------+
| TYPE        | LOCATION       |
+-------------+----------------+
| Node/Master | 192.168.84.159 |
+-------------+----------------+

Detected Services
+-------------+----------------------+----------------------+
| SERVICE     | LOCATION             | DESCRIPTION          |
+-------------+----------------------+----------------------+
| Kubelet API | 192.168.84.159:10250 | The Kubelet is the   |
|             |                      | main component in    |
|             |                      | every Node, all pod  |
|             |                      | operations goes      |
|             |                      | through the kubelet  |
+-------------+----------------------+----------------------+
| Etcd        | 192.168.84.159:2379  | Etcd is a DB that    |
|             |                      | stores cluster's     |
|             |                      | data, it contains    |
|             |                      | configuration and    |
|             |                      | current              |
|             |                      |     state            |
|             |                      | information, and     |
|             |                      | might contain        |
|             |                      | secrets              |
+-------------+----------------------+----------------------+

No vulnerabilities were found
[root@ip-192-168-84-159 ~]#

With Remote Scanning, you enter IP addresses directly to probe the target servers. The session above enters the IP on which minikube is running and checks the result.

Next, IP Range Scanning.

[root@ip-192-168-78-195 ~ (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kube-hunter
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Interface scanning   (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 3
CIDR separated by a ',' (example - 192.168.0.0/16,!192.168.0.8/32,!192.168.1.0/24): 192.168.0.0/16
2022-03-14 03:26:23,664 INFO kube_hunter.modules.report.collector Started hunting
2022-03-14 03:26:23,664 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2022-03-14 03:41:47,778 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.119.45:10250
2022-03-14 03:41:47,790 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.178.164:10250
2022-03-14 03:41:47,838 INFO kube_hunter.modules.report.collector Found open service "API Server" at 192.168.74.138:443
2022-03-14 03:41:47,868 INFO kube_hunter.modules.report.collector Found vulnerability "K8s Version Disclosure" in 192.168.74.138:443
2022-03-14 03:41:47,883 INFO kube_hunter.modules.report.collector Found open service "API Server" at 192.168.183.221:443
2022-03-14 03:41:47,930 INFO kube_hunter.modules.report.collector Found vulnerability "K8s Version Disclosure" in 192.168.183.221:443

Nodes
+-------------+-----------------+
| TYPE        | LOCATION        |
+-------------+-----------------+
| Node/Master | 192.168.183.221 |
+-------------+-----------------+
| Node/Master | 192.168.178.164 |
+-------------+-----------------+
| Node/Master | 192.168.119.45  |
+-------------+-----------------+
| Node/Master | 192.168.74.138  |
+-------------+-----------------+

Detected Services
+-------------+----------------------+----------------------+
| SERVICE     | LOCATION             | DESCRIPTION          |
+-------------+----------------------+----------------------+
| Kubelet API | 192.168.178.164:1025 | The Kubelet is the   |
|             | 0                    | main component in    |
|             |                      | every Node, all pod  |
|             |                      | operations goes      |
|             |                      | through the kubelet  |
+-------------+----------------------+----------------------+
| Kubelet API | 192.168.119.45:10250 | The Kubelet is the   |
|             |                      | main component in    |
|             |                      | every Node, all pod  |
|             |                      | operations goes      |
|             |                      | through the kubelet  |
+-------------+----------------------+----------------------+
| API Server  | 192.168.74.138:443   | The API server is in |
|             |                      | charge of all        |
|             |                      | operations on the    |
|             |                      | cluster.             |
+-------------+----------------------+----------------------+
| API Server  | 192.168.183.221:443  | The API server is in |
|             |                      | charge of all        |
|             |                      | operations on the    |
|             |                      | cluster.             |
+-------------+----------------------+----------------------+

Vulnerabilities
For further information about a vulnerability, search its ID in:
https://avd.aquasec.com/
+--------+---------------------+----------------------+----------------------+----------------------+---------------------+
| ID     | LOCATION            | MITRE CATEGORY       | VULNERABILITY        | DESCRIPTION          | EVIDENCE            |
+--------+---------------------+----------------------+----------------------+----------------------+---------------------+
| KHV002 | 192.168.74.138:443  | Initial Access //    | K8s Version          | The kubernetes       | v1.21.5-eks-bc4871b |
|        |                     | Exposed sensitive    | Disclosure           | version could be     |                     |
|        |                     | interfaces           |                      | obtained from the    |                     |
|        |                     |                      |                      | /version endpoint    |                     |
+--------+---------------------+----------------------+----------------------+----------------------+---------------------+
| KHV002 | 192.168.183.221:443 | Initial Access //    | K8s Version          | The kubernetes       | v1.21.5-eks-bc4871b |
|        |                     | Exposed sensitive    | Disclosure           | version could be     |                     |
|        |                     | interfaces           |                      | obtained from the    |                     |
|        |                     |                      |                      | /version endpoint    |                     |
+--------+---------------------+----------------------+----------------------+----------------------+---------------------+

[root@ip-192-168-78-195 ~ (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

IP Range Scanning detects clusters deployed inside the given CIDR. The output above was produced by entering the CIDR in which an AWS EKS cluster runs and checking the result.

As shown above, four nodes were discovered. Two of them expose the API Server: even though the EKS control plane is fully managed, the API Server endpoints are still reachable as scannable hosts. The Worker Nodes running the Kubelet API were detected as well. This demonstrates the security exposure that follows from publicly reachable endpoints.
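Besides the table output, kube-hunter can emit a machine-readable report (the --report json flag in recent releases), which makes it easy to post-process findings. Below is a sketch that tallies findings per vulnerability ID; the sample mirrors the KHV002 findings above, and the field names ("vid", "location", "severity", ...) are an assumption based on recent releases, so check your version's actual output.

```python
import json

# Sample modeled on the KHV002 findings above; field names are assumed,
# not guaranteed to match every kube-hunter release.
sample = json.loads("""
{
  "vulnerabilities": [
    {"vid": "KHV002", "location": "192.168.74.138:443",
     "vulnerability": "K8s Version Disclosure", "evidence": "v1.21.5-eks-bc4871b"},
    {"vid": "KHV002", "location": "192.168.183.221:443",
     "vulnerability": "K8s Version Disclosure", "evidence": "v1.21.5-eks-bc4871b"}
  ]
}
""")

def summarize(report: dict) -> dict:
    """Count findings per vulnerability ID so issues repeated across nodes stand out."""
    counts = {}
    for v in report.get("vulnerabilities", []):
        counts[v["vid"]] = counts.get(v["vid"], 0) + 1
    return counts

print(summarize(sample))  # {'KHV002': 2}
```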


kube-bench (cluster security vulnerabilities)

The next tool to examine is kube-bench, open source software that, like kube-hunter, can be used to audit a cluster for security weaknesses; it checks the cluster against the CIS Kubernetes Benchmark.

a. Installation

Installation is as simple as downloading the release binary and copying it into a directory on the PATH, as shown below.

[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# wget https://github.com/aquasecurity/kube-bench/releases/download/v0.6.6/kube-bench_0.6.6_linux_amd64.tar.gz
--2022-03-14 04:16:51--  https://github.com/aquasecurity/kube-bench/releases/download/v0.6.6/kube-bench_0.6.6_linux_amd64.tar.gz
Resolving github.com (github.com)... 52.78.231.108
Connecting to github.com (github.com)|52.78.231.108|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/94779471/f3f9478c-ab50-4a05-9482-6976183c9e3e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220314%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220314T041651Z&X-Amz-Expires=300&X-Amz-Signature=350eb286f29247cfc83f31e6f3c229b296809f10b06b2490b2799d371a891df0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=94779471&response-content-disposition=attachment%3B%20filename%3Dkube-bench_0.6.6_linux_amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2022-03-14 04:16:51--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/94779471/f3f9478c-ab50-4a05-9482-6976183c9e3e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220314%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220314T041651Z&X-Amz-Expires=300&X-Amz-Signature=350eb286f29247cfc83f31e6f3c229b296809f10b06b2490b2799d371a891df0&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=94779471&response-content-disposition=attachment%3B%20filename%3Dkube-bench_0.6.6_linux_amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10491759 (10M) [application/octet-stream]
Saving to: kube-bench_0.6.6_linux_amd64.tar.gz

100%[=====================================================================================================================================================================>] 10,491,759  24.2MB/s   in 0.4s

[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# ls
kube-bench_0.6.6_linux_amd64.tar.gz
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# tar -xzvf kube-bench_0.6.6_linux_amd64.tar.gz
cfg/ack-1.0/config.yaml
cfg/ack-1.0/controlplane.yaml
cfg/ack-1.0/etcd.yaml
cfg/ack-1.0/managedservices.yaml
cfg/ack-1.0/master.yaml
cfg/ack-1.0/node.yaml
cfg/ack-1.0/policies.yaml
cfg/aks-1.0/config.yaml
cfg/aks-1.0/controlplane.yaml
cfg/aks-1.0/managedservices.yaml
cfg/aks-1.0/master.yaml
cfg/aks-1.0/node.yaml
cfg/aks-1.0/policies.yaml
cfg/cis-1.20/config.yaml
cfg/cis-1.20/controlplane.yaml
cfg/cis-1.20/etcd.yaml
cfg/cis-1.20/master.yaml
cfg/cis-1.20/node.yaml
cfg/cis-1.20/policies.yaml
cfg/cis-1.5/config.yaml
cfg/cis-1.5/controlplane.yaml
cfg/cis-1.5/etcd.yaml
cfg/cis-1.5/master.yaml
cfg/cis-1.5/node.yaml
cfg/cis-1.5/policies.yaml
cfg/cis-1.6/config.yaml
cfg/cis-1.6/controlplane.yaml
cfg/cis-1.6/etcd.yaml
cfg/cis-1.6/master.yaml
cfg/cis-1.6/node.yaml
cfg/cis-1.6/policies.yaml
cfg/config.yaml
cfg/eks-1.0.1/config.yaml
cfg/eks-1.0.1/controlplane.yaml
cfg/eks-1.0.1/managedservices.yaml
cfg/eks-1.0.1/master.yaml
cfg/eks-1.0.1/node.yaml
cfg/eks-1.0.1/policies.yaml
cfg/gke-1.0/config.yaml
cfg/gke-1.0/controlplane.yaml
cfg/gke-1.0/etcd.yaml
cfg/gke-1.0/managedservices.yaml
cfg/gke-1.0/master.yaml
cfg/gke-1.0/node.yaml
cfg/gke-1.0/policies.yaml
cfg/gke-1.2.0/config.yaml
cfg/gke-1.2.0/controlplane.yaml
cfg/gke-1.2.0/managedservices.yaml
cfg/gke-1.2.0/master.yaml
cfg/gke-1.2.0/node.yaml
cfg/gke-1.2.0/policies.yaml
cfg/rh-0.7/config.yaml
cfg/rh-0.7/master.yaml
cfg/rh-0.7/node.yaml
cfg/rh-1.0/config.yaml
cfg/rh-1.0/controlplane.yaml
cfg/rh-1.0/etcd.yaml
cfg/rh-1.0/master.yaml
cfg/rh-1.0/node.yaml
cfg/rh-1.0/policies.yaml
kube-bench
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# ls -la
total 34480
drwxr-xr-x  3 root root         78 Mar 14 04:17 .
dr-xr-x--- 18 root root       4096 Mar 14 04:17 ..
drwxr-xr-x 12 root root        178 Mar 14 04:17 cfg
-rwxr-xr-x  1 1001 docker 24801491 Jan 12 12:58 kube-bench
-rw-r--r--  1 root root   10491759 Jan 13 07:49 kube-bench_0.6.6_linux_amd64.tar.gz
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cp kube-bench /usr/bin/
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kube-bench --help
This tool runs the CIS Kubernetes Benchmark (https://www.cisecurity.org/benchmark/kubernetes/)

Usage:
  kube-bench [flags]
  kube-bench [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  help        Help about any command
  run         Run tests
  version     Shows the version of kube-bench.

Flags:
      --alsologtostderr                  log to standard error as well as files
      --asff                             Send the results to AWS Security Hub
      --benchmark string                 Manually specify CIS benchmark version. It would be an error to specify both --version and --benchmark flags
  -c, --check string                     A comma-delimited list of checks to run as specified in CIS document. Example --check="1.1.1,1.1.2"
      --config string                    config file (default is ./cfg/config.yaml)
  -D, --config-dir string                config directory (default "/etc/kube-bench/cfg")
      --exit-code int                    Specify the exit code for when checks fail
  -g, --group string                     Run all the checks under this comma-delimited list of groups. Example --group="1.1"
  -h, --help                             help for kube-bench
      --include-test-output              Prints the actual result when test fails
      --json                             Prints the results as JSON
      --junit                            Prints the results as JUnit
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --logtostderr                      log to standard error instead of files (default true)
      --noremediations                   Disable printing of remediations section
      --noresults                        Disable printing of results section
      --nosummary                        Disable printing of summary section
      --nototals                         Disable printing of totals for failed, passed, ... checks across all sections
      --outputfile string                Writes the JSON results to output file
      --pgsql                            Save the results to PostgreSQL
      --scored                           Run the scored CIS checks (default true)
      --skip string                      List of comma separated values of checks to be skipped
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
      --unscored                         Run the unscored CIS checks (default true)
  -v, --v Level                          log level for V logs
      --version string                   Manually specify Kubernetes version, automatically detected if unset
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

Use "kube-bench [command] --help" for more information about a command.
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kube-bench version
0.6.6
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

b. Applying a ruleset

The rulesets used for the checks can be downloaded from the project's git repository.

[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# mkdir git-repo
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cd git-repo/
[root@ip-192-168-78-195 git-repo (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# git clone https://github.com/aquasecurity/kube-bench
Cloning into 'kube-bench'...
remote: Enumerating objects: 4373, done.
remote: Counting objects: 100% (178/178), done.
remote: Compressing objects: 100% (101/101), done.
remote: Total 4373 (delta 84), reused 133 (delta 62), pack-reused 4195
Receiving objects: 100% (4373/4373), 8.67 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (2730/2730), done.
[root@ip-192-168-78-195 git-repo (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

c. Running the EKS check

[root@ip-192-168-78-195 git-repo (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cd kube-bench/
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml
[INFO] 3 Worker Node Security Configuration
[INFO] 3.1 Worker Node Configuration Files
[WARN] 3.1.1 Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Manual)
[WARN] 3.1.2 Ensure that the kubelet kubeconfig file ownership is set to root:root (Manual)
[WARN] 3.1.3 Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Manual)
[WARN] 3.1.4 Ensure that the kubelet configuration file ownership is set to root:root (Manual)
[INFO] 3.2 Kubelet
[FAIL] 3.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
[FAIL] 3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[WARN] 3.2.3 Ensure that the --client-ca-file argument is set as appropriate (Manual)
[WARN] 3.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[WARN] 3.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[FAIL] 3.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[FAIL] 3.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[WARN] 3.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 3.2.9 Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)
[WARN] 3.2.10 Ensure that the --rotate-certificates argument is not set to false (Manual)
[WARN] 3.2.11 Ensure that the RotateKubeletServerCertificate argument is set to true (Manual)

== Remediations node ==
3.1.1 Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 644 /etc/kubernetes/kubelet.conf

3.1.2 Run the below command (based on the file location on your system) on the each worker node.
For example,
chown root:root /etc/kubernetes/kubelet.conf

3.1.3 Run the following command (using the config file location identified in the Audit step)
chmod 644 /var/lib/kubelet/config.yaml

3.1.4 Run the following command (using the config file location identified in the Audit step)
chown root:root /var/lib/kubelet/config.yaml

3.2.1 If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

3.2.2 If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

3.2.3 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
3.2.4 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
3.2.5 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
3.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

3.2.7 If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

3.2.8 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
3.2.9 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
3.2.10 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
3.2.11 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1

== Summary node ==
0 checks PASS
4 checks FAIL
11 checks WARN
0 checks INFO

[INFO] 4 Policies
[INFO] 4.1 RBAC and Service Accounts
[WARN] 4.1.1 Ensure that the cluster-admin role is only used where required (Manual)
[WARN] 4.1.2 Minimize access to secrets (Manual)
[WARN] 4.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
[WARN] 4.1.4 Minimize access to create pods (Manual)
[WARN] 4.1.5 Ensure that default service accounts are not actively used. (Manual)
[WARN] 4.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
[INFO] 4.2 Pod Security Policies
[WARN] 4.2.1 Minimize the admission of privileged containers (Automated)
[WARN] 4.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
[WARN] 4.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
[WARN] 4.2.4 Minimize the admission of containers wishing to share the host network namespace (Automated)
[WARN] 4.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
[WARN] 4.2.6 Minimize the admission of root containers (Automated)
[WARN] 4.2.7 Minimize the admission of containers with the NET_RAW capability (Automated)
[WARN] 4.2.8 Minimize the admission of containers with added capabilities (Automated)
[WARN] 4.2.9 Minimize the admission of containers with capabilities assigned (Manual)
[INFO] 4.3 CNI Plugin
[WARN] 4.3.1 Ensure that the latest CNI version is used (Manual)
[WARN] 4.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
[INFO] 4.4 Secrets Management
[WARN] 4.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
[WARN] 4.4.2 Consider external secret storage (Manual)
[INFO] 4.5 Extensible Admission Control
[WARN] 4.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
[INFO] 4.6 General Policies
[WARN] 4.6.1 Create administrative boundaries between resources using namespaces (Manual)
[WARN] 4.6.2 Apply Security Context to Your Pods and Containers (Manual)
[WARN] 4.6.3 The default namespace should not be used (Automated)

== Remediations policies ==
4.1.1 Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]

4.1.2 Where possible, remove get, list and watch access to secret objects in the cluster.

4.1.3 Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.

4.1.4 Where possible, remove create access to pod objects in the cluster.

4.1.5 Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false

4.1.6 Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.

4.2.1 Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.

4.2.2 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.

4.2.3 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.

4.2.4 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.

4.2.5 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.

4.2.6 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.

4.2.7 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.

4.2.8 Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.

4.2.9 Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.

4.3.1 Review the documentation of AWS CNI plugin, and ensure latest CNI version is used.

4.3.2 Follow the documentation and create NetworkPolicy objects as you need them.

4.4.1 If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.

4.4.2 Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.

4.5.1 Follow the Kubernetes documentation and setup image provenance.

4.6.1 Follow the documentation and create namespaces for objects in your deployment as you need
them.

4.6.2 Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.

4.6.3 Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.


== Summary policies ==
0 checks PASS
0 checks FAIL
23 checks WARN
0 checks INFO

[INFO] 5 Managed Services
[INFO] 5.1 Image Registry and Image Scanning
[WARN] 5.1.1 Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third-party provider (Manual)
[WARN] 5.1.2 Minimize user access to Amazon ECR (Manual)
[WARN] 5.1.3 Minimize cluster access to read-only for Amazon ECR (Manual)
[WARN] 5.1.4 Minimize Container Registries to only those approved (Manual)
[INFO] 5.2 Identity and Access Management (IAM)
[WARN] 5.2.1 Prefer using dedicated Amazon EKS Service Accounts (Manual)
[INFO] 5.3 AWS Key Management Service (KMS)
[WARN] 5.3.1 Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS (Manual)
[INFO] 5.4 Cluster Networking
[WARN] 5.4.1 Restrict Access to the Control Plane Endpoint (Manual)
[WARN] 5.4.2 Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)
[WARN] 5.4.3 Ensure clusters are created with Private Nodes (Manual)
[WARN] 5.4.4 Ensure Network Policy is Enabled and set as appropriate (Manual)
[WARN] 5.4.5 Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)
[INFO] 5.5 Authentication and Authorization
[WARN] 5.5.1 Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes (Manual)
[INFO] 5.6 Other Cluster Configurations
[WARN] 5.6.1 Consider Fargate for running untrusted workloads (Manual)

== Remediations managedservices ==
5.1.1 No remediation
5.1.2 No remediation
5.1.3 No remediation
5.1.4 No remediation
5.2.1 No remediation
5.3.1 No remediation
5.4.1 No remediation
5.4.2 No remediation
5.4.3 No remediation
5.4.4 No remediation
5.4.5 No remediation
5.5.1 No remediation
5.6.1 No remediation

== Summary managedservices ==
0 checks PASS
0 checks FAIL
13 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
4 checks FAIL
47 checks WARN
0 checks INFO

[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

d. Running the Check on Minikube

[root@ip-192-168-78-195 kube-bench (minikube:default)]# kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml
[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[FAIL] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
[FAIL] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
[PASS] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
[PASS] 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
[FAIL] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
[FAIL] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
[WARN] 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
[WARN] 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
[FAIL] 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
[FAIL] 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
[INFO] 4.2 Kubelet
[FAIL] 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated)
[FAIL] 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[FAIL] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[WARN] 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[WARN] 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[FAIL] 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[WARN] 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
[WARN] 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
[WARN] 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
[WARN] 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
[WARN] 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)

== Remediations node ==
4.1.1 Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

4.1.2 Run the below command (based on the file location on your system) on the each worker node.
For example,
chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

4.1.5 Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 644 /etc/kubernetes/kubelet.conf

4.1.6 Run the below command (based on the file location on your system) on the each worker node.
For example,
chown root:root /etc/kubernetes/kubelet.conf

4.1.7 Run the following command to modify the file permissions of the
--client-ca-file chmod 644 <filename>

4.1.8 Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>

4.1.9 Run the following command (using the config file location identified in the Audit step)
chmod 644 /var/lib/kubelet/config.yaml

4.1.10 Run the following command (using the config file location identified in the Audit step)
chown root:root /var/lib/kubelet/config.yaml

4.2.1 If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

4.2.2 If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

4.2.3 If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

4.2.4 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.5 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

4.2.7 If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

4.2.8 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.9 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.10 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.11 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.12 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1
4.2.13 audit test did not run: failed to run: "/bin/ps -fC $kubeletbin", output: "error: list of command names must follow -C\n\nUsage:\n ps [options]\n\n Try 'ps --help <simple|list|output|threads|misc|all>'\n  or 'ps --help <s|l|o|t|m|a>'\n for additional help text.\n\nFor more details see ps(1).\n", error: exit status 1

== Summary node ==
2 checks PASS
11 checks FAIL
10 checks WARN
0 checks INFO

[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[WARN] 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
[WARN] 5.1.2 Minimize access to secrets (Manual)
[WARN] 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
[WARN] 5.1.4 Minimize access to create pods (Manual)
[WARN] 5.1.5 Ensure that default service accounts are not actively used. (Manual)
[WARN] 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
[WARN] 5.1.7 Avoid use of system:masters group (Manual)
[WARN] 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
[INFO] 5.2 Pod Security Policies
[WARN] 5.2.1 Minimize the admission of privileged containers (Automated)
[WARN] 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
[WARN] 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
[WARN] 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Automated)
[WARN] 5.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
[WARN] 5.2.6 Minimize the admission of root containers (Automated)
[WARN] 5.2.7 Minimize the admission of containers with the NET_RAW capability (Automated)
[WARN] 5.2.8 Minimize the admission of containers with added capabilities (Automated)
[WARN] 5.2.9 Minimize the admission of containers with capabilities assigned (Manual)
[INFO] 5.3 Network Policies and CNI
[WARN] 5.3.1 Ensure that the CNI in use supports Network Policies (Manual)
[WARN] 5.3.2 Ensure that all Namespaces have Network Policies defined (Manual)
[INFO] 5.4 Secrets Management
[WARN] 5.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
[WARN] 5.4.2 Consider external secret storage (Manual)
[INFO] 5.5 Extensible Admission Control
[WARN] 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
[INFO] 5.7 General Policies
[WARN] 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
[WARN] 5.7.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)
[WARN] 5.7.3 Apply Security Context to Your Pods and Containers (Manual)
[WARN] 5.7.4 The default namespace should not be used (Manual)

== Remediations policies ==
5.1.1 Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]

5.1.2 Where possible, remove get, list and watch access to secret objects in the cluster.

5.1.3 Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.

5.1.4 Where possible, remove create access to pod objects in the cluster.

5.1.5 Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false

5.1.6 Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.

5.1.7 Remove the system:masters group from all users in the cluster.

5.1.8 Where possible, remove the impersonate, bind and escalate rights from subjects.

5.2.1 Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.

5.2.2 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.

5.2.3 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.

5.2.4 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.

5.2.5 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.

5.2.6 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.

5.2.7 Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.

5.2.8 Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.

5.2.9 Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.

5.3.1 If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.

5.3.2 Follow the documentation and create NetworkPolicy objects as you need them.

5.4.1 if possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.

5.4.2 Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.

5.5.1 Follow the Kubernetes documentation and setup image provenance.

5.7.1 Follow the documentation and create namespaces for objects in your deployment as you need
them.

5.7.2 Use security context to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
  securityContext:
    seccompProfile:
      type: RuntimeDefault

5.7.3 Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.

5.7.4 Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.


== Summary policies ==
0 checks PASS
0 checks FAIL
26 checks WARN
0 checks INFO

== Summary total ==
2 checks PASS
11 checks FAIL
36 checks WARN
0 checks INFO

[root@ip-192-168-78-195 kube-bench (minikube:default)]#

Compared with EKS, Minikube produces far more FAIL results. From a cluster-security standpoint it is best to remediate findings down to the WARN level, but the FAIL results in particular should always be fixed before operating the cluster.
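For the file permission and ownership FAILs in particular, the remediation commands that kube-bench prints can be applied directly on each worker node; a minimal sketch, assuming the file locations reported in the output above:

```shell
# Remediation sketch for the kubelet file permission/ownership FAILs above.
# File locations are taken from the kube-bench remediation output; verify
# them on your own nodes before running.
chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
chmod 644 /etc/kubernetes/kubelet.conf
chown root:root /etc/kubernetes/kubelet.conf
chmod 644 /var/lib/kubelet/config.yaml
chown root:root /var/lib/kubelet/config.yaml

# After any kubelet flag or config change, reload and restart the service.
systemctl daemon-reload
systemctl restart kubelet.service
```

Re-running kube-bench afterwards should turn the corresponding 4.1.x checks from FAIL to PASS.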


kyverno (Policy Management)

Kyverno means "governance" in Greek and is a policy engine designed for Kubernetes. It helps cluster administrators manage environment-specific configurations independently and apply configuration best practices across the cluster. Kyverno can be used to scan existing workloads against best practices, or it can enforce them by blocking or mutating API requests.
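Besides validation, Kyverno rules can also mutate incoming requests. As a minimal sketch (a hypothetical policy, not one used elsewhere in this post), the following would add a default `team` label to Pods created without one; the `+(key)` anchor in `patchStrategicMerge` means "add only if the key is not already present":

```shell
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  # hypothetical example policy
  name: add-default-team-label
spec:
  rules:
  - name: add-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # add only if the Pod does not already set a team label
            +(team): unknown
EOF
```

In this post, however, we will stick to validate rules.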

a. Installation

It can be installed easily by applying a YAML manifest.

[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/install.yaml
namespace/kyverno created
customresourcedefinition.apiextensions.k8s.io/clusterpolicies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clusterpolicyreports.wgpolicyk8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterreportchangerequests.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/generaterequests.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/policies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/policyreports.wgpolicyk8s.io created
customresourcedefinition.apiextensions.k8s.io/reportchangerequests.kyverno.io created
serviceaccount/kyverno-service-account created
role.rbac.authorization.k8s.io/kyverno:leaderelection created
clusterrole.rbac.authorization.k8s.io/kyverno:admin-generaterequest created
clusterrole.rbac.authorization.k8s.io/kyverno:admin-policies created
clusterrole.rbac.authorization.k8s.io/kyverno:admin-policyreport created
clusterrole.rbac.authorization.k8s.io/kyverno:admin-reportchangerequest created
clusterrole.rbac.authorization.k8s.io/kyverno:events created
clusterrole.rbac.authorization.k8s.io/kyverno:generate created
clusterrole.rbac.authorization.k8s.io/kyverno:policies created
clusterrole.rbac.authorization.k8s.io/kyverno:userinfo created
clusterrole.rbac.authorization.k8s.io/kyverno:view created
clusterrole.rbac.authorization.k8s.io/kyverno:webhook created
rolebinding.rbac.authorization.k8s.io/kyverno:leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:events created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:generate created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:policies created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:userinfo created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:view created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:webhook created
configmap/kyverno created
configmap/kyverno-metrics created
service/kyverno-svc created
service/kyverno-svc-metrics created
deployment.apps/kyverno created
[root@ip-192-168-78-195 kube-bench (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

b. Applying a label policy

Let's add a policy that validates that the Kubernetes-recommended label app.kubernetes.io/name is present, as shown below.

# Recommended labels

https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/common-labels/

 


[root@ip-192-168-78-195 kyverno (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cat policy.sh
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-labels
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'app.kubernetes.io/name' is required"
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
EOF
[root@ip-192-168-78-195 kyverno (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# sh policy.sh
clusterpolicy.kyverno.io/require-labels created
[root@ip-192-168-78-195 kyverno (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

Policy samples can be found at the URL below; a total of 128 are currently provided.

# Policy samples

https://kyverno.io/policies/

 


Now let's try applying a Deployment that does not include the app.kubernetes.io/name label.

[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
    # namespace: app-test
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        command: ["/bin/bash"]
        args: ["-c", "echo \"<p>Hello from $(hostname)</p>\" > index.html; sleep 30000 && python -m SimpleHTTPServer 8080"]
        ports:
        - containerPort: 80
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl apply -f nginx-deployment.yaml
Error from server: error when creating "nginx-deployment.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/default/my-nginx was blocked due to the following policies

require-labels:
  autogen-check-for-labels: 'validation error: label ''app.kubernetes.io/name'' is
    required. Rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/'
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

A validation error is raised by the require-labels policy, as expected. Let's fix the Deployment and apply it again.

[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
    # namespace: app-test
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
        app.kubernetes.io/name: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        command: ["/bin/bash"]
        args: ["-c", "echo \"<p>Hello from $(hostname)</p>\" > index.html; sleep 30000 && python -m SimpleHTTPServer 8080"]
        ports:
        - containerPort: 80
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl apply -f nginx-deployment.yaml
deployment.apps/my-nginx created
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

This time the Deployment is created successfully.
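To confirm that the required label actually landed on the running Pod, it can be checked with a label selector, for example:

```shell
# List only Pods carrying the recommended label, showing all labels.
kubectl get pods -l app.kubernetes.io/name=nginx --show-labels
```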

c. Applying a replicas policy

Next, let's apply a policy on the number of replicas.

[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# cat policy-replica.sh
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deployment-has-multiple-replicas
  annotations:
    policies.kyverno.io/title: Require Multiple Replicas
    policies.kyverno.io/category: Sample
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Deployment
    policies.kyverno.io/description: >-
      Deployments with a single replica cannot be highly available and thus the application
      may suffer downtime if that one replica goes down. This policy validates that Deployments
      have more than one replica.
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: deployment-has-multiple-replicas
      match:
        resources:
          kinds:
          - Deployment
      validate:
        message: "Deployments should have more than one replica to ensure availability."
        pattern:
          spec:
            replicas: ">1"
EOF
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

Note that validationFailureAction was set to enforce for the earlier label policy, but to audit for the replica policy. With enforce, a violation is treated as a failure and the deployment is blocked; with audit, the failure is recorded but the deployment is still allowed to proceed.
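An existing policy's action can also be flipped in place rather than re-creating it; a sketch, assuming a merge patch against the ClusterPolicy resource:

```shell
# Switch the replica policy from audit to enforce in place.
kubectl patch clusterpolicy deployment-has-multiple-replicas --type merge \
  -p '{"spec":{"validationFailureAction":"enforce"}}'
```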

[root@ip-192-168-78-195 kyverno (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl get clusterpolicy
NAME                               BACKGROUND   ACTION    READY
deployment-has-multiple-replicas   true         audit     true
require-labels                     true         enforce   true
[root@ip-192-168-78-195 kyverno (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl get policyreport -A
NAMESPACE   NAME               PASS   FAIL   WARN   ERROR   SKIP   AGE
app-test    polr-ns-app-test   1      1      0      0       0      56s
default     polr-ns-default    1      1      0      0       0      60m
[root@ip-192-168-78-195 kyverno (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

As shown above, the ClusterPolicies can be listed, and for the app-test and default namespaces, whose Deployments are registered with a single replica, each PolicyReport records PASS 1 (require-labels) and FAIL 1 (deployment-has-multiple-replicas).
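The same replica rule can also be roughly mimicked client-side before a manifest is ever pushed. The sketch below (file path and manifest content are illustrative, and this is no substitute for Kyverno's server-side validation) simply extracts spec.replicas from a manifest and flags a single-replica Deployment, mirroring the ">1" pattern:

```shell
# Write a minimal Deployment manifest to a temp file (illustrative content).
cat > /tmp/nginx-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
EOF

# Naive client-side version of the deployment-has-multiple-replicas rule:
# grab the replicas value and fail, like Kyverno's ">1" pattern, if it is not >1.
replicas=$(awk '$1 == "replicas:" {print $2}' /tmp/nginx-deployment.yaml)
if [ "${replicas:-1}" -gt 1 ]; then
  result=PASS
else
  result=FAIL
fi
echo "deployment-has-multiple-replicas: $result"
# → deployment-has-multiple-replicas: FAIL
```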

[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# policy-replica.sh
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deployment-has-multiple-replicas
  annotations:
    policies.kyverno.io/title: Require Multiple Replicas
    policies.kyverno.io/category: Sample
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Deployment
    policies.kyverno.io/description: >-
      Deployments with a single replica cannot be highly available and thus the application
      may suffer downtime if that one replica goes down. This policy validates that Deployments
      have more than one replica.
spec:
  validationFailureAction: enforce
  background: true
  rules:
    - name: deployment-has-multiple-replicas
      match:
        resources:
          kinds:
          - Deployment
      validate:
        message: "Deployments should have more than one replica to ensure availability."
        pattern:
          spec:
            replicas: ">1"
EOF
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl apply -f nginx-deployment.yaml
Error from server: error when creating "nginx-deployment.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/app-test/my-nginx was blocked due to the following policies

deployment-has-multiple-replicas:
  deployment-has-multiple-replicas: 'validation error: Deployments should have more
    than one replica to ensure availability. Rule deployment-has-multiple-replicas
    failed at path /spec/replicas/'
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]# kubectl get policyreport -A
NAMESPACE   NAME               PASS   FAIL   WARN   ERROR   SKIP   AGE
app-test    polr-ns-app-test   0      0      0      0       0      4m57s
default     polr-ns-default    1      1      0      0       0      64m
[root@ip-192-168-78-195 app (iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io:default)]#

As shown above, once the policy is reapplied with enforce, a validation error is raised for the deployment-has-multiple-replicas rule and the deployment is rejected.
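To get past the enforce-mode policy, the Deployment simply needs more than one replica; a minimal fix to nginx-deployment.yaml would be:

```yaml
spec:
  replicas: 2   # any value >1 satisfies the deployment-has-multiple-replicas pattern
```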


kube-score (YAML configuration recommendations)

kube-score is a tool that performs static analysis of Kubernetes object definitions. It analyzes the objects you apply to Kubernetes, i.e. the YAML files, and suggests improvements to make them more secure and resilient.

a. Installation

kube-score can be installed manually, but one-step installation is also possible with krew, the kubectl plugin manager.

[root@ip-192-168-78-195 ~ (kubeapp:default)]# kubectl krew install score
Updated the local copy of plugin index.
Installing plugin: score
Installed plugin: score
\
 | Use this plugin:
 |      kubectl score
 | Documentation:
 |      https://github.com/zegl/kube-score
/
WARNING: You installed plugin "score" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 ~ (kubeapp:default)]#

b. Copy the binary

[root@ip-192-168-78-195 / (kubeapp:default)]# cp /root/.krew/store/score/v1.10.0/kube-score /usr/local/bin/
[root@ip-192-168-78-195 / (kubeapp:default)]#

c. Run the scan

Let's run kube-score against the nginx-deployment.yaml file from earlier and review the results.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  # namespace: app-test
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        command: ["/bin/bash"]
        args: ["-c", "echo \"<p>Hello from $(hostname)</p>\" > index.html; sleep 30000 && python -m SimpleHTTPServer 8080"]
        ports:
        - containerPort: 80

Running kube-score produces the following recommendations for the YAML file:

[root@ip-192-168-78-195 app (kubeapp:default)]# kube-score score nginx-deployment.yaml
apps/v1/Deployment my-nginx
    [CRITICAL] Container Security Context
        . my-nginx -> Container has no configured security context
            Set securityContext to run the container in a more secure context.
    [CRITICAL] Container Resources
        . my-nginx -> CPU limit is not set
            Resource limits are recommended to avoid resource DDOS. Set resources.limits.cpu
        . my-nginx -> Memory limit is not set
            Resource limits are recommended to avoid resource DDOS. Set resources.limits.memory
        . my-nginx -> CPU request is not set
            Resource requests are recommended to make sure that the application can start and run without crashing. Set resources.requests.cpu
        . my-nginx -> Memory request is not set
            Resource requests are recommended to make sure that the application can start and run without crashing. Set resources.requests.memory
    [CRITICAL] Container Image Tag
        . my-nginx -> Image with latest tag
            Using a fixed tag is recommended to avoid accidental upgrades
    [CRITICAL] Pod NetworkPolicy
        . The pod does not have a matching NetworkPolicy
            Create a NetworkPolicy that targets this pod to control who/what can communicate with this pod. Note, this feature needs to be supported by the CNI implementation used in the Kubernetes cluster to have an effect.
[root@ip-192-168-78-195 app (kubeapp:default)]#

As shown above, recommendations are produced for the securityContext configuration, resource limits, image tag, Pod NetworkPolicy, and so on.
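Addressing those findings, a hardened version of the manifest might look like the following. The pinned tag, resource values, UID/GID, and NetworkPolicy name are illustrative choices, not values kube-score prescribes; also note that the stock nginx image needs extra tweaks (writable temp directories, a non-privileged listen port) to actually run under these constraints.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.21.6          # fixed tag instead of an implicit :latest
        securityContext:             # addresses "Container Security Context"
          runAsUser: 10000
          runAsGroup: 10000
          readOnlyRootFilesystem: true
        resources:                   # addresses "Container Resources"
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy                  # addresses "Pod NetworkPolicy"
metadata:
  name: my-nginx-allow-ingress
spec:
  podSelector:
    matchLabels:
      run: my-nginx
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 80
```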

The full list of supported checks is as follows:

ingress-targets-service (Ingress, default): Makes sure that the Ingress targets a Service
cronjob-has-deadline (CronJob, default): Makes sure that all CronJobs have a configured deadline
container-resources (Pod, default): Makes sure that all pods have resource limits and requests set. The --ignore-container-cpu-limit flag can be used to disable the requirement of having a CPU limit
container-resource-requests-equal-limits (Pod, optional): Makes sure that all pods have the same requests as limits on resources set
container-cpu-requests-equal-limits (Pod, optional): Makes sure that all pods have the same CPU requests as limits set
container-memory-requests-equal-limits (Pod, optional): Makes sure that all pods have the same memory requests as limits set
container-image-tag (Pod, default): Makes sure that an explicit non-latest tag is used
container-image-pull-policy (Pod, default): Makes sure that the pullPolicy is set to Always. This makes sure that imagePullSecrets are always validated
container-ephemeral-storage-request-and-limit (Pod, default): Makes sure all pods have ephemeral-storage requests and limits set
container-ephemeral-storage-request-equals-limit (Pod, optional): Makes sure all pods have matching ephemeral-storage requests and limits
container-ports-check (Pod, optional): Container ports checks
statefulset-has-poddisruptionbudget (StatefulSet, default): Makes sure that all StatefulSets are targeted by a PodDisruptionBudget
deployment-has-poddisruptionbudget (Deployment, default): Makes sure that all Deployments are targeted by a PodDisruptionBudget
poddisruptionbudget-has-policy (PodDisruptionBudget, default): Makes sure that PodDisruptionBudgets specify minAvailable or maxUnavailable
pod-networkpolicy (Pod, default): Makes sure that all Pods are targeted by a NetworkPolicy
networkpolicy-targets-pod (NetworkPolicy, default): Makes sure that all NetworkPolicies target at least one Pod
pod-probes (Pod, default): Makes sure that all Pods have safe probe configurations
container-security-context-user-group-id (Pod, default): Makes sure that all pods have a security context with valid UID and GID set
container-security-context-privileged (Pod, default): Makes sure that all pods have an unprivileged security context set
container-security-context-readonlyrootfilesystem (Pod, default): Makes sure that all pods have a security context with a read-only filesystem set
container-seccomp-profile (Pod, optional): Makes sure that all pods have a seccomp policy configured
service-targets-pod (Service, default): Makes sure that all Services target a Pod
service-type (Service, default): Makes sure that the Service type is not NodePort
stable-version (all, default): Checks if the object is using a deprecated apiVersion
deployment-has-host-podantiaffinity (Deployment, default): Makes sure that a podAntiAffinity has been set that prevents multiple pods from being scheduled on the same node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/)
statefulset-has-host-podantiaffinity (StatefulSet, default): Makes sure that a podAntiAffinity has been set that prevents multiple pods from being scheduled on the same node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/)
deployment-targeted-by-hpa-does-not-have-replicas-configured (Deployment, default): Makes sure that Deployments using a HorizontalPodAutoscaler don't have a statically configured replica count set
statefulset-has-servicename (StatefulSet, default): Makes sure that StatefulSets have an existing headless serviceName
deployment-pod-selector-labels-match-template-metadata-labels (Deployment, default): Ensures the Deployment selector labels match the template metadata labels
statefulset-pod-selector-labels-match-template-metadata-labels (StatefulSet, default): Ensures the StatefulSet selector labels match the template metadata labels
label-values (all, default): Validates label values
horizontalpodautoscaler-has-target (HorizontalPodAutoscaler, default): Makes sure that the HPA targets a valid object

As shown above, kube-score evaluates each item in a YAML file against best practices and presents recommendations.


Conclusion

Establishing countermeasures for Kubernetes security vulnerabilities is not an easy task. Above all, if an operations team handles this manually, it is costly, and in terms of stability, human error cannot be prevented.

Most of the tools introduced above integrate fully into a Kubernetes environment, so it is recommended to incorporate them into a CI/CD pipeline wherever possible and build an automated validation process.
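As one possible shape for that automation (the stage name, image tag, and manifest path below are illustrative assumptions), a CI job could run kube-score before anything is deployed; since kube-score exits non-zero when it finds critical issues, the pipeline fails early:

```yaml
# Illustrative GitLab CI job: validate manifests under deploy/ with kube-score.
# A non-zero exit (critical findings) fails the pipeline before deployment.
kube-score:
  stage: validate
  image: zegl/kube-score:latest
  script:
    - kube-score score deploy/*.yaml
```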
