
Using Kubernetes Tools

Kubernetes is built from a large number of APIs, and by combining them you can create a variety of tools for operating Kubernetes efficiently. Many plugins have already been developed and are in active use, and they work well in development and production environments alike. In this post, let's look at plugins that are easy to adopt in a Kubernetes environment and set them up ourselves.


kubectl krew

First, let's look at krew, the package manager for plugins of kubectl, the Kubernetes CLI tool. With krew, Kubernetes-related plugins can be installed with ease.

https://github.com/kubernetes-sigs/krew


https://krew.sigs.k8s.io/


More than 130 kubectl plugins are currently available.

a. Installation

Installation is a single step: run the nine-line command below, from the opening "(" to the closing ")".

[root@ip-192-168-78-195 ~ (kubeapp:default)]# (
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
++ mktemp -d
+ cd /tmp/tmp.DxLnkaVs48
++ uname
++ tr '[:upper:]' '[:lower:]'
+ OS=linux
++ uname -m
++ sed -e s/x86_64/amd64/ -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/'
+ ARCH=amd64
+ KREW=krew-linux_amd64
+ curl -fsSLO https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_amd64.tar.gz
+ tar zxvf krew-linux_amd64.tar.gz
./LICENSE
./krew-linux_amd64
+ ./krew-linux_amd64 install krew
Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
Updated the local copy of plugin index.
Installing plugin: krew
Installed plugin: krew
\
 | Use this plugin:
 |      kubectl krew
 | Documentation:
 |      https://krew.sigs.k8s.io/
 | Caveats:
 | \
 |  | krew is now installed! To start using kubectl plugins, you need to add
 |  | krew's installation directory to your PATH:
 |  | 
 |  |   * macOS/Linux:
 |  |     - Add the following to your ~/.bashrc or ~/.zshrc:
 |  |         export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
 |  |     - Restart your shell.
 |  | 
 |  |   * Windows: Add %USERPROFILE%\.krew\bin to your PATH environment variable
 |  | 
 |  | To list krew commands and to get help, run:
 |  |   $ kubectl krew
 |  | For a full list of available plugins, run:
 |  |   $ kubectl krew search
 |  | 
 |  | You can find documentation at
 |  |   https://krew.sigs.k8s.io/docs/user-guide/quickstart/.
 | /
/
[root@ip-192-168-78-195 ~ (kubeapp:default)]#

b. Apply the environment variable

[root@ip-192-168-78-195 ~ (kubeapp:default)]# export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
[root@ip-192-168-78-195 ~ (kubeapp:default)]# kubectl krew
krew is the kubectl plugin manager.
You can invoke krew through kubectl: "kubectl krew [command]..."

Usage:
  kubectl krew [command]

Available Commands:
  completion  generate the autocompletion script for the specified shell
  help        Help about any command
  index       Manage custom plugin indexes
  info        Show information about an available plugin
  install     Install kubectl plugins
  list        List installed kubectl plugins
  search      Discover kubectl plugins
  uninstall   Uninstall plugins
  update      Update the local copy of the plugin index
  upgrade     Upgrade installed plugins to newer versions
  version     Show krew version and diagnostics

Flags:
  -h, --help      help for krew
  -v, --v Level   number for the log level verbosity

Use "kubectl krew [command] --help" for more information about a command.
[root@ip-192-168-78-195 ~ (kubeapp:default)]#

As shown above, the installation completes with little effort. In the tool walkthroughs below, we will keep using krew to install plugins.
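
To make the PATH change survive new shells, the export line from the install caveats above can be appended to ~/.bashrc (a minimal sketch; adjust for zsh or other shells):

# Persist krew's bin directory across logins, as the install caveats suggest
echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc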


Tools Well Suited to Multi-Cluster Environments

1) kubectx (context management), kubens (namespace management)

kubectx and kubens are a pair of tools, and they are among the most widely used in multi-cluster environments.

kubectx lists the contexts registered in kubeconfig. To switch contexts, run kubectx [CONTEXT_NAME]. The -c option prints the current context, the -d option deletes a context, and the - argument returns to the previous context. kubectx is similar to the kubectl config get-contexts command, but it does not support some operations such as set.

kubens shows the namespaces of the current context. To change the context's default namespace, run kubens [NAMESPACE_NAME]. The -c option prints the current namespace, and the - argument returns to the previous namespace.
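
The options described above can be summarized as a quick reference (a sketch; context and namespace names are placeholders):

kubectx                 # list all contexts in kubeconfig
kubectx my-context      # switch to the context named my-context
kubectx -c              # print the current context
kubectx -               # switch back to the previous context
kubens                  # list namespaces in the current context
kubens kube-system      # make kube-system the context's default namespace
kubens -c               # print the current default namespace
kubens -                # switch back to the previous namespace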

Both can be installed via krew, as covered earlier, or manually, which is useful for air-gapped networks. Let's first see how simple the krew installation is, and then do a manual installation as well.

> Install kubectx
[root@ip-192-168-78-195 ~ (kubeapp:default)]# kubectl krew install ctx
Updated the local copy of plugin index.
Installing plugin: ctx
Installed plugin: ctx
\
 | Use this plugin:
 |      kubectl ctx
 | Documentation:
 |      https://github.com/ahmetb/kubectx
 | Caveats:
 | \
 |  | If fzf is installed on your machine, you can interactively choose
 |  | between the entries using the arrow keys, or by fuzzy searching
 |  | as you type.
 |  | See https://github.com/ahmetb/kubectx for customization and details.
 | /
/
WARNING: You installed plugin "ctx" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 ~ (kubeapp:default)]#

> Install kubens
[root@ip-192-168-78-195 ~ (kubeapp:default)]# kubectl krew install ns
Updated the local copy of plugin index.
Installing plugin: ns
Installed plugin: ns
\
 | Use this plugin:
 |      kubectl ns
 | Documentation:
 |      https://github.com/ahmetb/kubectx
 | Caveats:
 | \
 |  | If fzf is installed on your machine, you can interactively choose
 |  | between the entries using the arrow keys, or by fuzzy searching
 |  | as you type.
 | /
/
WARNING: You installed plugin "ns" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 ~ (kubeapp:default)]#

As shown above, the plugins install with nothing more than the kubectl krew install ctx and kubectl krew install ns commands. Next, let's walk through the manual installation.

a. git clone

[root@ip-192-168-78-195 eks]# sudo git clone https://github.com/ahmetb/kubectx ./kubectx
Cloning into './kubectx'...
remote: Enumerating objects: 1457, done.
remote: Counting objects: 100% (172/172), done.
remote: Compressing objects: 100% (115/115), done.
remote: Total 1457 (delta 85), reused 97 (delta 51), pack-reused 1285
Receiving objects: 100% (1457/1457), 905.30 KiB | 1.75 MiB/s, done.
Resolving deltas: 100% (817/817), done.
[root@ip-192-168-78-195 eks]#

b. Grant execute permission

[root@ip-192-168-78-195 kubectx]# chmod +x kubectx kubens
[root@ip-192-168-78-195 kubectx]#

c. Copy the binaries

[root@ip-192-168-78-195 kubectx]# cp kubectx /usr/local/sbin/
[root@ip-192-168-78-195 kubectx]# cp kubens /usr/local/sbin/
[root@ip-192-168-78-195 kubectx]#

d. Using kubectx

[root@ip-192-168-78-195 ~]# kubectl config get-contexts
CURRENT   NAME                                                          CLUSTER                                      AUTHINFO                                                      NAMESPACE
          eks-default                                                   NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   kube-system
*         iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   kube-system
          minikube                                                      minikube                                     minikube                                                      default
[root@ip-192-168-78-195 ~]# kubectx
eks-default
iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
minikube
[root@ip-192-168-78-195 ~]# kubectx -c
iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
[root@ip-192-168-78-195 ~]# kubectx minikube
Switched to context "minikube".
[root@ip-192-168-78-195 ~]# kubectx -c
minikube
[root@ip-192-168-78-195 ~]# kubectl config get-contexts
CURRENT   NAME                                                          CLUSTER                                      AUTHINFO                                                      NAMESPACE
          eks-default                                                   NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   kube-system
          iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   kube-system
*         minikube                                                      minikube                                     minikube                                                      default
[root@ip-192-168-78-195 ~]#

e. Using kubens

> Check namespaces
[root@ip-192-168-78-195 ~]# kubectl config get-contexts
CURRENT   NAME                                                          CLUSTER                                      AUTHINFO                                                      NAMESPACE
          eks-default                                                   NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   kube-system
          iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io   kube-system
*         minikube                                                      minikube                                     minikube                                                      default
[root@ip-192-168-78-195 ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   8d
kube-node-lease   Active   8d
kube-public       Active   8d
kube-system       Active   8d
[root@ip-192-168-78-195 ~]# 

> Add a namespace to minikube
[root@ip-192-168-84-159 ~]# kubectl apply -f namespace.yaml 
namespace/app-test created
[root@ip-192-168-84-159 ~]# 
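
The namespace.yaml applied above is not shown in the transcript; a minimal manifest that creates the app-test namespace would look like this (an assumption consistent with the output that follows):

apiVersion: v1
kind: Namespace
metadata:
  name: app-test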

> Check namespaces
[root@ip-192-168-78-195 app]# kubectl get namespaces
NAME              STATUS   AGE
app-test          Active   15s
default           Active   8d
kube-node-lease   Active   8d
kube-public       Active   8d
kube-system       Active   8d
[root@ip-192-168-78-195 app]# kubectl get pods
No resources found in default namespace.
[root@ip-192-168-78-195 app]# kubens
app-test
default
kube-node-lease
kube-public
kube-system
[root@ip-192-168-78-195 app]# kubens kube-system
Context "minikube" modified.
Active namespace is "kube-system".
[root@ip-192-168-78-195 app]# kubectl get pods
NAME                                                                        READY   STATUS    RESTARTS        AGE
coredns-64897985d-25ndz                                                     1/1     Running   1 (7d14h ago)   8d
etcd-ip-192-168-84-159.ap-northeast-2.compute.internal                      1/1     Running   1 (7d14h ago)   8d
kube-apiserver-ip-192-168-84-159.ap-northeast-2.compute.internal            1/1     Running   1 (7d14h ago)   8d
kube-controller-manager-ip-192-168-84-159.ap-northeast-2.compute.internal   1/1     Running   1 (7d14h ago)   8d
kube-proxy-srclw                                                            1/1     Running   1 (7d14h ago)   8d
kube-scheduler-ip-192-168-84-159.ap-northeast-2.compute.internal            1/1     Running   1 (7d14h ago)   8d
storage-provisioner                                                         1/1     Running   2 (10h ago)     8d
[root@ip-192-168-78-195 app]#

As shown above, kubens sets the chosen namespace as the default for the current context.
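
And, per the - option described earlier, you can hop back at any point:

kubens -    # return to the previous default namespace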

2) kube_ps1 (prompt display)

kube_ps1 displays the current context and current namespace directly in the shell prompt (PS1), so you can see at a glance where you are working. In multi-cluster environments this reduces operator mistakes and shortens working time.

a. git clone

[root@ip-192-168-78-195 eks]# git clone https://github.com/jonmosco/kube-ps1.git
Cloning into 'kube-ps1'...
remote: Enumerating objects: 585, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 585 (delta 10), reused 15 (delta 6), pack-reused 565
Receiving objects: 100% (585/585), 7.22 MiB | 10.36 MiB/s, done.
Resolving deltas: 100% (304/304), done.
[root@ip-192-168-78-195 eks]#

b. Add to .bashrc and apply

# Note: in shells without color support, set the three KUBE_PS1_XXX_COLOR environment variables to null, as below.

> Add to .bashrc
~~~~~
source ~/eks/kube-ps1/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '
KUBE_PS1_SYMBOL_ENABLE=false
KUBE_PS1_SYMBOL_COLOR=null
KUBE_PS1_CTX_COLOR=null
KUBE_PS1_NS_COLOR=null
~~~~~

> Apply
[root@ip-192-168-78-195 ~ ]# source ./.bashrc
[root@ip-192-168-78-195 ~ (kube:default)]#
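
Per the kube-ps1 documentation, the prompt segment can also be toggled per shell with the kubeon/kubeoff helper functions the script defines (a sketch):

kubeoff    # hide the (context:namespace) segment in this shell
kubeon     # show it again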

c. Rename the context

[root@ip-192-168-78-195 ~ (kube:default)]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ETXdOVEU1TURNd00xb1hEVE15TURNd01qRTVNRE13TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTExPClZzZGV0Rk9rNzRNRXdHa1R2WGgzMXBMWG50TisvNDdLUzZIbno2dEJCWXJMMHNPaXV0MXQvQWRlcCs5ckNlZmsKNytKWUxNR3JnN3piUzZCUys0UGZ4eTMyMldiL2FNMXEyVGN3ZVRPZWtzdlcyejRjcGZEZE51dldsYWJyVFpPZgppZVJZVG1TMmkwWXpCL1ZPZ3Z5YjZPT3ZjejBHMnp1TVVsZkcrVWE5TDdNM3pIK1c1ZVllOThjVFlhM1hxdnZDCno0TFRBSmVJZndleENYanFOYjdWVDBQS3hTUGkvbnRpb2xFbFk1MnJXamhwSTVvNitUeFlJLzZoYVhFWFg4ZVYKblhMRUdlSU91a3RmSHl6MDUzaU9tdlVOMGU2UlFUMldQTUVWcEdHd3F4eEdNODVNS1I5TG1UYjAyQURlK01hYwpjV3V2SFJ4OUVXYmYrZXI5MWw4Q0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNY3NDTTAyU21kZ1JIbmZwMXd3WnR2MmtRZWdNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBZXZxT0k1d1VMOTNOeEt3bXhpYWUxQzQyeXRsaFJhRCtxdXV2NHVFZ1RwUUo0MXJTUgpQL2FLWUFzZmVBUmxCNnlXZDA2Tzl5UXVhOEF0QktTbEkxVHdUVjlUVVlSYnU2VHgwQ0RMTEczSU9aMXJLWUxBCjZtSGZyVFdHU0QwdXN5ejcxZDFiUzd6M2JjK2hRdnY3RkJzVGp0VHMvV2xRbjUrckdhanZLSTNNa0o1VEFBRlYKamQ0cUZIa1pWV3U5L0dWeUV0TndZdi9iWUdiYndTdzc2cWphQWFVY0c2dmxTNk9uNDFOdkVGTkx0dHZVRk0wegpxZFl6bHllaE5JNENJcUJ2aWtIZDVrOWw1QzlUVTlrWTJobE5KQUZVS2I1MXp3bTZ4bU9QREx5UjFoay80ZHM2CnpablQyZmNCQmszNHFGUTNGS1Q1WE5pbXZoQlRidHZucUx4TQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://407B39524D80486F1EECD325C3180677.yl4.ap-northeast-2.eks.amazonaws.com
  name: NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
contexts:
- context:
    cluster: NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
    namespace: kube-system
    user: iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
  name: eks-default
- context:
    cluster: NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
    namespace: default
    user: iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
  name: kube	=================> change to kubeapp
current-context: kube
kind: Config
preferences: {}
users:
- name: iam-root-account@NRSON-EKS-CLUSTER.ap-northeast-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - NRSON-EKS-CLUSTER
      - --region
      - ap-northeast-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
[root@ip-192-168-78-195 ~ (kube:default)]#

After changing the context name from kube to kubeapp in ~/.kube/config as above, the change can be picked up by switching to the renamed context, as below.

[root@ip-192-168-78-195 ~ (kube:default)]# kubectx
eks-default
kubeapp
[root@ip-192-168-78-195 ~ (kube:default)]# kubectx -c
kube
[root@ip-192-168-78-195 ~ (kube:default)]# kubectx kubeapp
Switched to context "kubeapp".
[root@ip-192-168-78-195 ~ (kubeapp:default)]#
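
Instead of editing ~/.kube/config by hand, kubectx can also rename a context directly with its NEW_NAME=OLD_NAME form (per the kubectx documentation):

kubectx kubeapp=kube    # rename context "kube" to "kubeapp"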

Tools Useful for Maintenance

1) neat (tidying manifests)

neat is a kubectl plugin that tidies Kubernetes manifests into a clean, readable form. From here on we will use krew wherever possible to keep installation simple.

a. Installation

[root@ip-192-168-78-195 ~ (kubeapp:default)]# kubectl krew install neat
Updated the local copy of plugin index.
Installing plugin: neat
Installed plugin: neat
\
 | Use this plugin:
 |      kubectl neat
 | Documentation:
 |      https://github.com/itaysk/kubectl-neat
/
WARNING: You installed plugin "neat" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 ~ (kubeapp:default)]#

b. kubectl get -o yaml vs kubectl neat

First, let's look at plain -o yaml output. The deployment.yaml that was applied is as follows.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  # namespace: app-test
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        command: ["/bin/bash"]
        args: ["-c", "echo \"<p>Hello from $(hostname)</p>\" > index.html; sleep 30000 && python -m SimpleHTTPServer 8080"]
        ports:
        - containerPort: 80

The deployment.yaml we applied is quite simple, but the -o yaml output below is very long, because the current status is stored and printed along with the spec. At maintenance time, this means the unnecessary parts of the YAML have to be stripped out before it can be edited and reused.

# kubectl get deployment my-nginx -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"run":"my-nginx"}},"template":{"metadata":{"labels":{"run":"my-nginx"}},"spec":{"containers":[{"args":["-c","echo \"\u003cp\u003eHello from $(hostname)\u003c/p\u003e\" \u003e index.html; sleep 30000 \u0026\u0026 python -m SimpleHTTPServer 8080"],"command":["/bin/bash"],"image":"nginx","name":"my-nginx","ports":[{"containerPort":80}]}]}}}}
  creationTimestamp: "2022-03-07T15:00:07Z"
  generation: 1
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:run: {}
          f:spec:
            f:containers:
              k:{"name":"my-nginx"}:
                .: {}
                f:args: {}
                f:command: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":80,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl
    operation: Update
    time: "2022-03-07T15:00:07Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-03-07T15:00:16Z"
  name: my-nginx
  namespace: default
  resourceVersion: "362051"
  uid: a4b0096f-9830-474c-add6-049ae870530d
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: my-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: my-nginx
    spec:
      containers:
      - args:
        - -c
        - echo "<p>Hello from $(hostname)</p>" > index.html; sleep 30000 && python
          -m SimpleHTTPServer 8080
        command:
        - /bin/bash
        image: nginx
        imagePullPolicy: Always
        name: my-nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-03-07T15:00:16Z"
    lastUpdateTime: "2022-03-07T15:00:16Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-03-07T15:00:07Z"
    lastUpdateTime: "2022-03-07T15:00:16Z"
    message: ReplicaSet "my-nginx-6c6c46694f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
#

With kubectl neat, the YAML can be extracted as below instead.

# kubectl get deployment my-nginx -o yaml | kubectl neat
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  name: my-nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: my-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: my-nginx
    spec:
      containers:
      - args:
        - -c
        - echo "<p>Hello from $(hostname)</p>" > index.html; sleep 30000 && python
          -m SimpleHTTPServer 8080
        command:
        - /bin/bash
        image: nginx
        imagePullPolicy: Always
        name: my-nginx
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
#

As shown above, the output is far more concise, yet it still includes important fields that were hidden defaults; the status section is excluded. Because this strips the state as of backup time, it is well suited for validating YAML or for re-deploying to another environment.

However, if you want to back up the current status and later restore to that exact state, it is better to back up with the plain -o yaml option.
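
Combining the two commands above, a simple backup loop can export every Deployment in the current namespace as a cleaned manifest (a sketch using only the commands shown above; file names are arbitrary):

# Export each deployment as a neat-cleaned manifest named after the object
for d in $(kubectl get deployments -o name); do
  kubectl get "$d" -o yaml | kubectl neat > "$(basename "$d").yaml"
done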

2) kail (multi-pod log monitoring)

Among the telemetry concerns that must be addressed in container environments such as Kubernetes, logging is especially important. Logging is useful from the development stage onward, and visualization is commonly provided by stacks such as EFK. But not every environment can visualize logs, and sometimes you want to inspect them directly from the CLI. kail supports troubleshooting, and is useful when you want to view the logs of all Pods in a service together, merged in order.

a. Installation

[root@ip-192-168-78-195 go (kubeapp:default)]# kubectl krew install tail
Updated the local copy of plugin index.
Installing plugin: tail
Installed plugin: tail
\
 | Use this plugin:
 |      kubectl tail
 | Documentation:
 |      https://github.com/boz/kail
/
WARNING: You installed plugin "tail" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 go (kubeapp:default)]#

b. Using kail

[root@ip-192-168-78-195 go (kubeapp:default)]# kail
...
...
kube-logging/elasticsearch-master-745c995d88-ksldq[elasticsearch-master]: {"type": "server", "timestamp": "2022-03-07T15:41:50,374+0000", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "4D8M6FE7Tsqmml7IaddNww",  "message": "added {{elasticsearch-client}{tHAltQHwSK-JAbfVGY-lTw}{DvjfXlbHQiubOqm2HdNCkw}{192.168.161.189}{192.168.161.189:9300}{i}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true},{elasticsearch-data}{lykb7kBAR5SvmTuSFs_O-g}{cgBjsrwIRWuIs2Ysf6kgkQ}{192.168.167.246}{192.168.167.246:9300}{d}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true},}, term: 1, version: 12, reason: Publication{term=1, version=12}"  }
kube-logging/elasticsearch-master-745c995d88-ksldq[elasticsearch-master]: {"type": "server", "timestamp": "2022-03-07T15:41:50,412+0000", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "4D8M6FE7Tsqmml7IaddNww",  "message": "adding index lifecycle policy [watch-history-ilm-policy]"  }
kube-logging/elasticsearch-client-578dd48f84-lnmx8[elasticsearch-client]: {"type": "server", "timestamp": "2022-03-07T15:41:50,601+0000", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-client", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "tHAltQHwSK-JAbfVGY-lTw",  "message": "waiting for elected master node [{elasticsearch-master}{4D8M6FE7Tsqmml7IaddNww}{2CWU2-ZqTeK1gCd6uklb0A}{192.168.144.190}{192.168.144.190:9300}{m}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)"  }
kube-logging/elasticsearch-data-0[elasticsearch-data]: {"type": "server", "timestamp": "2022-03-07T15:41:50,642+0000", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-data", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "lykb7kBAR5SvmTuSFs_O-g",  "message": "waiting for elected master node [{elasticsearch-master}{4D8M6FE7Tsqmml7IaddNww}{2CWU2-ZqTeK1gCd6uklb0A}{192.168.144.190}{192.168.144.190:9300}{m}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)"  }
kube-logging/elasticsearch-client-578dd48f84-lnmx8[elasticsearch-client]: {"type": "server", "timestamp": "2022-03-07T15:41:50,928+0000", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-client", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "tHAltQHwSK-JAbfVGY-lTw",  "message": "license [1e86dffb-7ce3-40e5-b95f-e79a05cb5a40] mode [basic] - valid"  }
kube-logging/elasticsearch-client-578dd48f84-lnmx8[elasticsearch-client]: {"type": "server", "timestamp": "2022-03-07T15:41:50,929+0000", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "elasticsearch", "node.name": "elasticsearch-client", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "tHAltQHwSK-JAbfVGY-lTw",  "message": "Active license is now [BASIC]; Security is enabled"  }
kube-logging/elasticsearch-client-578dd48f84-lnmx8[elasticsearch-client]: {"type": "server", "timestamp": "2022-03-07T15:41:50,973+0000", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-client", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "tHAltQHwSK-JAbfVGY-lTw",  "message": "waiting for elected master node [{elasticsearch-master}{4D8M6FE7Tsqmml7IaddNww}{2CWU2-ZqTeK1gCd6uklb0A}{192.168.144.190}{192.168.144.190:9300}{m}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)"  }
kube-logging/elasticsearch-data-0[elasticsearch-data]: {"type": "server", "timestamp": "2022-03-07T15:41:50,996+0000", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-data", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "lykb7kBAR5SvmTuSFs_O-g",  "message": "license [1e86dffb-7ce3-40e5-b95f-e79a05cb5a40] mode [basic] - valid"  }
kube-logging/elasticsearch-data-0[elasticsearch-data]: {"type": "server", "timestamp": "2022-03-07T15:41:51,007+0000", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "elasticsearch", "node.name": "elasticsearch-data", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "lykb7kBAR5SvmTuSFs_O-g",  "message": "Active license is now [BASIC]; Security is enabled"  }
kube-logging/elasticsearch-data-0[elasticsearch-data]: {"type": "server", "timestamp": "2022-03-07T15:41:51,028+0000", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-data", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "lykb7kBAR5SvmTuSFs_O-g",  "message": "waiting for elected master node [{elasticsearch-master}{4D8M6FE7Tsqmml7IaddNww}{2CWU2-ZqTeK1gCd6uklb0A}{192.168.144.190}{192.168.144.190:9300}{m}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)"  }
kube-logging/elasticsearch-master-745c995d88-ksldq[elasticsearch-master]: {"type": "server", "timestamp": "2022-03-07T15:41:51,125+0000", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "4D8M6FE7Tsqmml7IaddNww",  "message": "license [1e86dffb-7ce3-40e5-b95f-e79a05cb5a40] mode [basic] - valid"  }
kube-logging/elasticsearch-master-745c995d88-ksldq[elasticsearch-master]: {"type": "server", "timestamp": "2022-03-07T15:41:51,130+0000", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "4D8M6FE7Tsqmml7IaddNww",  "message": "Active license is now [BASIC]; Security is enabled"  }
kube-logging/elasticsearch-client-578dd48f84-lnmx8[elasticsearch-client]: {"type": "server", "timestamp": "2022-03-07T15:41:51,242+0000", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-client", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "tHAltQHwSK-JAbfVGY-lTw",  "message": "waiting for elected master node [{elasticsearch-master}{4D8M6FE7Tsqmml7IaddNww}{2CWU2-ZqTeK1gCd6uklb0A}{192.168.144.190}{192.168.144.190:9300}{m}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)"  }
kube-logging/elasticsearch-data-0[elasticsearch-data]: {"type": "server", "timestamp": "2022-03-07T15:41:51,252+0000", "level": "INFO", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elasticsearch", "node.name": "elasticsearch-data", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "lykb7kBAR5SvmTuSFs_O-g",  "message": "waiting for elected master node [{elasticsearch-master}{4D8M6FE7Tsqmml7IaddNww}{2CWU2-ZqTeK1gCd6uklb0A}{192.168.144.190}{192.168.144.190:9300}{m}{ml.machine_memory=8124866560, ml.max_open_jobs=20, xpack.installed=true}] to setup local exporter [default_local] (does it have x-pack installed?)"  }
kube-logging/elasticsearch-master-745c995d88-ksldq[elasticsearch-master]: {"type": "server", "timestamp": "2022-03-07T15:41:53,041+0000", "level": "INFO", "component": "o.e.c.m.MetaDataCreateIndexService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "4D8M6FE7Tsqmml7IaddNww",  "message": "[.monitoring-es-7-2022.03.07] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [_doc]"  }
kube-logging/elasticsearch-master-745c995d88-ksldq[elasticsearch-master]: {"type": "server", "timestamp": "2022-03-07T15:41:54,352+0000", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master", "cluster.uuid": "w4yuNKxZRi2BxpDgrM37eg", "node.id": "4D8M6FE7Tsqmml7IaddNww",  "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-7-2022.03.07][0]] ...])."  }
...
...

As shown above, when no options are given, kail tails logs from every pod in the cluster. The options below narrow the scope.

-l, --label LABEL-SELECTOR match pods based on a standard label selector
-p, --pod NAME match pods by name
-n, --ns NAMESPACE-NAME match pods in the given namespace
--svc NAME match pods belonging to the given service
--rc NAME match pods belonging to the given replication controller
--rs NAME match pods belonging to the given replica set
-d, --deploy NAME match pods belonging to the given deployment
--sts NAME match pods belonging to the given statefulset
-j, --job NAME match pods belonging to the given job
--node NODE-NAME match pods running on the given node
--ing NAME match pods belonging to services targeted by the given ingress
-c, --containers CONTAINER-NAME restrict which containers logs are shown for
--ignore LABEL-SELECTOR Ignore pods that the selector matches. (default: kail.ignore=true)
--current-ns Match pods in the namespace specified in Kubernetes' "current context"
--ignore-ns NAME Ignore pods in the given namespaces. Overridden by --ns, --current-ns. (default: kube-system)
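
For example, the output from the earlier transcript could be narrowed like this (a sketch; the service and deployment names are assumptions based on the pods shown above):

kail --ns kube-logging              # only pods in the kube-logging namespace
kail --svc elasticsearch-master     # only pods behind the elasticsearch-master service
kail -d my-nginx -c my-nginx        # one deployment, restricted to one container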

3) sniff (capturing tcpdump)

sniff is a command that captures a TCP dump inside a Pod. It is handy for capturing network activity between services while developing microservices.

a. Installation

[root@ip-192-168-78-195 app (kubeapp:default)]# kubectl krew install sniff
Updated the local copy of plugin index.
Installing plugin: sniff
Installed plugin: sniff
\
 | Use this plugin:
 |      kubectl sniff
 | Documentation:
 |      https://github.com/eldadru/ksniff
 | Caveats:
 | \
 |  | This plugin needs the following programs:
 |  | * wireshark (optional, used for live capture)
 | /
/
WARNING: You installed plugin "sniff" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 app (kubeapp:default)]#

b. Capturing a tcp dump with sniff

[root@ip-192-168-78-195 app (kubeapp:default)]# kubectl sniff my-nginx-6c6c46694f-qv69b 
INFO[0000] using tcpdump path at: '/root/.krew/store/sniff/v1.6.2/static-tcpdump' 
INFO[0000] no container specified, taking first container we found in pod. 
INFO[0000] selected container: 'my-nginx'               
INFO[0000] sniffing method: upload static tcpdump       
INFO[0000] sniffing on pod: 'my-nginx-6c6c46694f-qv69b' [namespace: 'default', container: 'my-nginx', filter: '', interface: 'any'] 
INFO[0000] uploading static tcpdump binary from: '/root/.krew/store/sniff/v1.6.2/static-tcpdump' to: '/tmp/static-tcpdump' 
INFO[0000] uploading file: '/root/.krew/store/sniff/v1.6.2/static-tcpdump' to '/tmp/static-tcpdump' on container: 'my-nginx' 
INFO[0000] executing command: '[/bin/sh -c test -f /tmp/static-tcpdump]' on container: 'my-nginx', pod: 'my-nginx-6c6c46694f-qv69b', namespace: 'default' 
INFO[0000] command: '[/bin/sh -c test -f /tmp/static-tcpdump]' executing successfully exitCode: '1', stdErr :'' 
INFO[0000] file not found on: '/tmp/static-tcpdump', starting to upload 
INFO[0000] verifying file uploaded successfully         
INFO[0000] executing command: '[/bin/sh -c test -f /tmp/static-tcpdump]' on container: 'my-nginx', pod: 'my-nginx-6c6c46694f-qv69b', namespace: 'default' 
INFO[0000] command: '[/bin/sh -c test -f /tmp/static-tcpdump]' executing successfully exitCode: '0', stdErr :'' 
INFO[0000] file found: ''                               
INFO[0000] file uploaded successfully                   
INFO[0000] tcpdump uploaded successfully                
INFO[0000] spawning wireshark!                          
INFO[0000] starting sniffer cleanup                     
INFO[0000] sniffer cleanup completed successfully       
Error: exec: "wireshark": executable file not found in $PATH
[root@ip-192-168-78-195 app (kubeapp:default)]#

As shown above, the static-tcpdump file can be found at the path below.

[root@ip-192-168-78-195 app (kubeapp:default)]# ls -lah /root/.krew/store/sniff/v1.6.2/static-tcpdump
-rwxr-xr-x 1 root root 2.8M Mar  7 16:51 /root/.krew/store/sniff/v1.6.2/static-tcpdump
[root@ip-192-168-78-195 app (kubeapp:default)]#

If wireshark or tshark is available, sniff can hand the capture to it and start viewing immediately. One thing to watch: the capture file is overwritten on each run, so if possible it is recommended to pair sniff with a shell script that copies the generated file aside.
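
When no graphical wireshark is available, as in the transcript above, ksniff's -o option writes the capture to a file instead; pairing it with a copy keeps each run from overwriting the last (a sketch; the pod name comes from the transcript and the file names are arbitrary):

kubectl sniff my-nginx-6c6c46694f-qv69b -o capture.pcap
cp capture.pcap "capture-$(date +%Y%m%d-%H%M%S).pcap"   # keep a timestamped copy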

4) tree (object ownership)

The tree plugin lays out the ownership relationships between Kubernetes objects.

a. Installation

[root@ip-192-168-78-195 app (kubeapp:default)]# kubectl krew install tree
Updated the local copy of plugin index.
Installing plugin: tree
Installed plugin: tree
\
 | Use this plugin:
 |      kubectl tree
 | Documentation:
 |      https://github.com/ahmetb/kubectl-tree
 | Caveats:
 | \
 |  | * For resources that are not in default namespace, currently you must
 |  |   specify -n/--namespace explicitly (the current namespace setting is not
 |  |   yet used).
 | /
/
WARNING: You installed plugin "tree" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
[root@ip-192-168-78-195 app (kubeapp:default)]#

b. Trying the tree plugin

[root@ip-192-168-78-195 app (kubeapp:default)]# kubectl tree deployment my-nginx        
NAMESPACE  NAME                                 READY  REASON  AGE 
default    Deployment/my-nginx                  -              136m
default    └─ReplicaSet/my-nginx-6c6c46694f   -              136m
default      └─Pod/my-nginx-6c6c46694f-qv69b  True           136m
[root@ip-192-168-78-195 app (kubeapp:default)]#

As shown above, the Deployment > ReplicaSet > Pod ownership chain is displayed as a tree, which makes the relationships between objects easy to grasp.
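
Per the install caveats, objects outside the default namespace currently need an explicit -n; for example (assuming a coredns Deployment exists in kube-system, as in the minikube cluster above):

kubectl tree deployment coredns -n kube-system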

5) kubespy (Kubernetes object status monitoring)

kubespy is a tool that monitors state changes of Kubernetes objects.

a. Installation

[root@ip-192-168-84-159 kubespy]# wget https://github.com/pulumi/kubespy/releases/download/v0.6.0/kubespy-v0.6.0-linux-amd64.tar.gz
--2022-03-16 09:34:08--  https://github.com/pulumi/kubespy/releases/download/v0.6.0/kubespy-v0.6.0-linux-amd64.tar.gz
Resolving github.com (github.com)... 15.164.81.167
Connecting to github.com (github.com)|15.164.81.167|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/149165150/0800f600-0a0f-11eb-932f-06ec8a7b8a47?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220316%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220316T093408Z&X-Amz-Expires=300&X-Amz-Signature=c848a739784f0a6b314b6aa0de5042a8ccf1bdecffee1fd3c0354c74d5cbde7f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=149165150&response-content-disposition=attachment%3B%20filename%3Dkubespy-v0.6.0-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2022-03-16 09:34:08--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/149165150/0800f600-0a0f-11eb-932f-06ec8a7b8a47?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220316%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220316T093408Z&X-Amz-Expires=300&X-Amz-Signature=c848a739784f0a6b314b6aa0de5042a8ccf1bdecffee1fd3c0354c74d5cbde7f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=149165150&response-content-disposition=attachment%3B%20filename%3Dkubespy-v0.6.0-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16761346 (16M) [application/octet-stream]
Saving to: 'kubespy-v0.6.0-linux-amd64.tar.gz'

100%[=====================================================================================================================================================================>] 16,761,346  --.-K/s   in 0.1s

2022-03-16 09:34:08 (107 MB/s) - 'kubespy-v0.6.0-linux-amd64.tar.gz' saved [16761346/16761346]

[root@ip-192-168-84-159 kubespy]# tar -xzvf kubespy-v0.6.0-linux-amd64.tar.gz
LICENSE
README.md
kubespy
[root@ip-192-168-84-159 kubespy]# cp kubespy /usr/bin/
[root@ip-192-168-84-159 kubespy]# kubespy --help
Spy on your Kubernetes resources

Usage:
  kubespy [command]

Available Commands:
  changes     Displays changes made to a Kubernetes resource in real time. Emitted as JSON diffs
  help        Help about any command
  status      Displays changes to a Kubernetes resources's status in real time. Emitted as JSON diffs
  trace       Traces status of complex API objects
  version     Displays version information for this tool

Flags:
  -h, --help   help for kubespy

Use "kubespy [command] --help" for more information about a command.
[root@ip-192-168-84-159 kubespy]#

b. Using kubespy trace

First, let's look at kubespy trace. The trace command monitors an object's rollout state. When the my-nginx deployment is first added, it appears with status ADDED.

[root@ip-192-168-84-159 minikube]# kubespy trace deployment my-nginx
[ADDED apps/v1/Deployment]  default/my-nginx
    Rolling out Deployment revision 1
    Deployment is currently available
    Rollout successful: new ReplicaSet marked 'available'

ROLLOUT STATUS:
- [Current rollout | Revision 1] [ADDED]  default/my-nginx-65cff45899
    ReplicaSet is available [2 Pods available of a 2 minimum]
       - [Ready] my-nginx-65cff45899-cmf4w
       - [Ready] my-nginx-65cff45899-kvv2t

This is my-nginx in its running state: two Pods are up and operating in the Ready state.

[root@ip-192-168-84-159 minikube]# kubectl delete -f deployment.yaml
deployment.apps "my-nginx" deleted
[root@ip-192-168-84-159 minikube]#

Now delete the my-nginx application, as above.

[root@ip-192-168-84-159 minikube]# kubespy trace deployment my-nginx
[DELETED apps/v1/Deployment]  default/my-nginx
    Deployment does not have minimum replicas (0 out of 0)
    Deployment has not begun to roll out the change

Waiting for Deployment controller to create ReplicaSet

As shown, the status changes to DELETED in real time.

[root@ip-192-168-84-159 minikube]# kubectl apply -f deployment.yaml
deployment.apps/my-nginx created
[root@ip-192-168-84-159 minikube]#

When my-nginx is deployed again as above,

[root@ip-192-168-84-159 minikube]# kubespy trace deployment my-nginx
[MODIFIED apps/v1/Deployment]  default/my-nginx
    Rolling out Deployment revision 1
    Deployment is currently available
    Rollout successful: new ReplicaSet marked 'available'

ROLLOUT STATUS:
- [Current rollout | Revision 1] [MODIFIED]  default/my-nginx-65cff45899
    ReplicaSet is available [2 Pods available of a 2 minimum]
       - [Ready] my-nginx-65cff45899-kgs7l
       - [Ready] my-nginx-65cff45899-95zcc

you can see the status change to MODIFIED.

c. Using kubespy status

Next, kubespy status. The status command watches a Kubernetes object's status block and prints the changes in real time as JSON diffs.

apiVersion: apps/v1
kind: Deployment
metadata: 
  name: my-nginx
  # namespace: app-test
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.21.6
        command: ["/bin/bash"]
        args: ["-c", "echo \"<p>Hello from $(hostname)</p>\" > index.html; sleep 30000 && python -m SimpleHTTPServer 8080"]
        ports:
        - containerPort: 80

Let's watch the monitoring output when the replicas of this Deployment, which uses the nginx:1.21.6 image, are scaled from 1 to 2.
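
The post does not show how the replica count was raised; one standard way to trigger the change watched below would be:

kubectl scale deployment my-nginx --replicas=2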

[root@ip-192-168-84-159 minikube]# kubespy status apps/v1 Deployment my-nginx
Watching status of apps/v1 Deployment my-nginx
CREATED
{
  "availableReplicas": 1,
  "conditions": [
    {
      "lastTransitionTime": "2022-03-16T09:36:33Z",
      "lastUpdateTime": "2022-03-16T09:36:33Z",
      "message": "Deployment has minimum availability.",
      "reason": "MinimumReplicasAvailable",
      "status": "True",
      "type": "Available"
    },
    {
      "lastTransitionTime": "2022-03-16T09:36:32Z",
      "lastUpdateTime": "2022-03-16T09:36:33Z",
      "message": "ReplicaSet \"my-nginx-65cff45899\" has successfully progressed.",
      "reason": "NewReplicaSetAvailable",
      "status": "True",
      "type": "Progressing"
    }
  ],
  "observedGeneration": 1,
  "readyReplicas": 1,
  "replicas": 1,
  "updatedReplicas": 1
}
MODIFIED
 {
   "availableReplicas": 1,
   "conditions": [
-    {
-      "lastTransitionTime": "2022-03-16T09:36:33Z",
-      "lastUpdateTime": "2022-03-16T09:36:33Z",
-      "message": "Deployment has minimum availability.",
-      "reason": "MinimumReplicasAvailable",
-      "status": "True",
-      "type": "Available"
-    },
+    {
+      "lastTransitionTime": "2022-03-16T09:43:30Z",
+      "lastUpdateTime": "2022-03-16T09:43:30Z",
+      "message": "Deployment does not have minimum availability.",
+      "reason": "MinimumReplicasUnavailable",
+      "status": "False",
+      "type": "Available"
+    }
   ],
-  "observedGeneration": 1,
+  "observedGeneration": 2,
   "readyReplicas": 1,
   "replicas": 1,
   "updatedReplicas": 1
 }

MODIFIED
 {
   "availableReplicas": 1,
   "conditions": [
     {
       "lastTransitionTime": "2022-03-16T09:36:32Z",
       "lastUpdateTime": "2022-03-16T09:36:33Z",
       "message": "ReplicaSet "my-nginx-65cff45899" has successfully progressed.",
       "reason": "NewReplicaSetAvailable",
       "status": "True",
       "type": "Progressing"
     },
     {
       "lastTransitionTime": "2022-03-16T09:43:30Z",
       "lastUpdateTime": "2022-03-16T09:43:30Z",
       "message": "Deployment does not have minimum availability.",
       "reason": "MinimumReplicasUnavailable",
       "status": "False",
       "type": "Available"
     }
   ],
   "observedGeneration": 2,
   "readyReplicas": 1,
   "replicas": 1,
   "updatedReplicas": 1
+  "unavailableReplicas": 1
 }

MODIFIED
 {
   "availableReplicas": 1,
   "conditions": [
     {
       "lastTransitionTime": "2022-03-16T09:36:32Z",
       "lastUpdateTime": "2022-03-16T09:36:33Z",
       "message": "ReplicaSet "my-nginx-65cff45899" has successfully progressed.",
       "reason": "NewReplicaSetAvailable",
       "status": "True",
       "type": "Progressing"
     },
     {
       "lastTransitionTime": "2022-03-16T09:43:30Z",
       "lastUpdateTime": "2022-03-16T09:43:30Z",
       "message": "Deployment does not have minimum availability.",
       "reason": "MinimumReplicasUnavailable",
       "status": "False",
       "type": "Available"
     }
   ],
   "observedGeneration": 2,
   "readyReplicas": 1,
-  "replicas": 1,
+  "replicas": 2,
   "unavailableReplicas": 1,
-  "updatedReplicas": 1
+  "updatedReplicas": 2
 }
MODIFIED
 {
-  "availableReplicas": 1,
+  "availableReplicas": 2,
   "conditions": [
     {
       "lastTransitionTime": "2022-03-16T09:36:32Z",
       "lastUpdateTime": "2022-03-16T09:36:33Z",
       "message": "ReplicaSet "my-nginx-65cff45899" has successfully progressed.",
       "reason": "NewReplicaSetAvailable",
       "status": "True",
       "type": "Progressing"
     },
     {
-      "lastTransitionTime": "2022-03-16T09:43:30Z",
+      "lastTransitionTime": "2022-03-16T09:43:31Z",
-      "lastUpdateTime": "2022-03-16T09:43:30Z",
+      "lastUpdateTime": "2022-03-16T09:43:31Z",
-      "message": "Deployment does not have minimum availability.",
+      "message": "Deployment has minimum availability.",
-      "reason": "MinimumReplicasUnavailable",
+      "reason": "MinimumReplicasAvailable",
-      "status": "False",
+      "status": "True",
       "type": "Available"
     }
   ],
   "observedGeneration": 2,
-  "readyReplicas": 1,
+  "readyReplicas": 2,
   "replicas": 2,
-  "unavailableReplicas": 1,
   "updatedReplicas": 2
 }

As shown above, whenever a change is detected, the differences in status are displayed in real time.

d. Using kubespy changes

Unlike kubespy status, kubespy changes detects and prints changes to the entire object definition, not just the status information. In other words, where status diffs only the object's status block, changes diffs the whole object.
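
Mirroring the kubespy status invocation above, a hypothetical run would be:

kubespy changes apps/v1 Deployment my-nginx   # diff the whole object, not just .status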

With kubespy, Kubernetes objects that change in real time can be monitored at a glance from the CLI.
