Overview
This post looks at Ansible commands that can be applied to a Kubernetes environment.
Ansible is a leading configuration management system that makes it easy to manage hundreds of servers through its many modules.
In particular, a multi-node Kubernetes cluster can be installed easily with Kubespray, which is an automated provisioning setup built on Ansible.
For the Ansible-based Kubernetes installation guide covered earlier, see:
Kubernetes - Building Kubernetes with Kubespray
Building on that setup, let's look at how to use Ansible.
Main Content
Ansible organizes compute resources and modules through Playbooks. Unlike configuring every node one by one, the configuration is managed as code, so nothing gets missed and the exact same environment can be built every time.
While the ansible command runs ad-hoc commands from the CLI, ansible-playbook runs code defined in a YAML file.
Let's look at the ansible command first.
1. Ansible Configuration
By default, Ansible uses the /etc/ansible/hosts file to manage hosts.
[root@kubemaster kubespray]# cat /etc/ansible/hosts
kubemaster ansible_ssh_host=192.168.56.102 ip=192.168.56.102
kubeworker1 ansible_ssh_host=192.168.56.103 ip=192.168.56.103
kubeworker2 ansible_ssh_host=192.168.56.101 ip=192.168.56.101
[kube-master]
kubemaster
[etcd]
kubemaster
[kube-node]
kubeworker1
kubeworker2
[k8s-cluster:children]
kube-node
kube-master
[root@kubemaster kubespray]#
After writing the Ansible hosts file as above, you can check which hosts Ansible currently references with the ansible --list-hosts command.
- Listing all hosts
[root@kubemaster kubespray]# ansible --list-hosts all
hosts (3):
kubemaster
kubeworker1
kubeworker2
[root@kubemaster kubespray]#
- Listing hosts by group
[root@kubemaster kubespray]# ansible --list-hosts kube-master
hosts (1):
kubemaster
[root@kubemaster kubespray]# ansible --list-hosts kube-node
hosts (2):
kubeworker1
kubeworker2
[root@kubemaster kubespray]#
- Listing individual hosts
[root@kubemaster kubespray]# ansible --list-hosts kubemaster
hosts (1):
kubemaster
[root@kubemaster kubespray]# ansible --list-hosts kubeworker1
hosts (1):
kubeworker1
[root@kubemaster kubespray]#
As shown above, Ansible commands can be run against all managed hosts, a single node, or a group.
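Under the hood, a pattern like all, a group name, or a host name is simply resolved against the inventory. As a rough illustration only (this is not Ansible's actual code; all function names here are made up), a minimal resolver for the inventory format shown above could look like this in Python:

```python
# Illustrative sketch only -- not Ansible's real inventory code.
# Parses a simple INI-style inventory like the one above and resolves
# patterns such as "all", a group name, a host name, or "a,b" lists.

def parse_inventory(text):
    """Return ({group: [hosts]}, {group: [child groups]})."""
    hosts = {"all": []}     # group name -> member hosts
    children = {}           # group name -> child group names
    section, is_children = "all", False
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            name = line[1:-1]
            is_children = name.endswith(":children")
            section = name[: -len(":children")] if is_children else name
            (children if is_children else hosts).setdefault(section, [])
            continue
        entry = line.split()[0]  # drop ansible_ssh_host=... variables
        (children if is_children else hosts)[section].append(entry)
    return hosts, children

def resolve(pattern, hosts, children):
    """Expand a pattern into host names, like `ansible --list-hosts`."""
    out = []
    for name in pattern.split(","):
        if name in hosts or name in children:
            stack = [name]            # expand the group and its children
            while stack:
                group = stack.pop()
                out.extend(hosts.get(group, []))
                stack.extend(children.get(group, []))
        else:
            out.append(name)          # a bare host name
    return list(dict.fromkeys(out))   # de-duplicate, keep order

inventory = """\
kubemaster ansible_ssh_host=192.168.56.102 ip=192.168.56.102
kubeworker1 ansible_ssh_host=192.168.56.103 ip=192.168.56.103
kubeworker2 ansible_ssh_host=192.168.56.101 ip=192.168.56.101
[kube-master]
kubemaster
[kube-node]
kubeworker1
kubeworker2
[k8s-cluster:children]
kube-node
kube-master
"""

hosts, children = parse_inventory(inventory)
print(resolve("all", hosts, children))        # ['kubemaster', 'kubeworker1', 'kubeworker2']
print(resolve("kube-node", hosts, children))  # ['kubeworker1', 'kubeworker2']
```

The real resolver supports far more (variables, regex patterns, exclusions), but the group/children expansion above is the essential idea.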
2. Running Ansible Commands in Bulk
As seen above, Ansible can manage nodes by running commands across them in bulk.
The shell module (-m shell) sends a command to the hosts.
ansible [all|group|node] -m shell -a "[command]"
[root@kubemaster kubespray]# ansible all -m shell -a 'hostname'
kubeworker1 | CHANGED | rc=0 >>
kubeworker1
kubeworker2 | CHANGED | rc=0 >>
kubeworker2
kubemaster | CHANGED | rc=0 >>
kubemaster
[root@kubemaster kubespray]# ansible kube-node -m shell -a 'hostname'
kubeworker1 | CHANGED | rc=0 >>
kubeworker1
kubeworker2 | CHANGED | rc=0 >>
kubeworker2
[root@kubemaster kubespray]# ansible kubemaster -m shell -a 'hostname'
kubemaster | CHANGED | rc=0 >>
kubemaster
[root@kubemaster kubespray]#
The above shows the hostname command being sent to all hosts, a group, and a single node, along with the responses.
The ping module (-m ping) checks the state of the hosts.
ansible [all|group|node] -m ping
To check node connectivity, send -m ping as follows:
[root@kubemaster kubespray]# ansible all -m ping
kubeworker1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
kubemaster | SUCCESS => {
"changed": false,
"ping": "pong"
}
kubeworker2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
[root@kubemaster kubespray]#
Since the Kubernetes cluster here was built with Kubespray, the inventory.ini file is managed in a separate location.
In that case, the -i option tells Ansible to read that inventory.ini file instead:
[root@kubemaster kubespray]# ansible -i inventory/tec/inventory.ini all -m ping
kubeworker1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
kubemaster | SUCCESS => {
"changed": false,
"ping": "pong"
}
kubeworker2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
[root@kubemaster kubespray]#
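Rather than passing -i on every invocation, the inventory path can also be set once in an ansible.cfg file in the working directory (a minimal sketch, assuming the same path used above):

```ini
[defaults]
inventory = inventory/tec/inventory.ini
```

With this in place, a plain `ansible all -m ping` reads the same inventory.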
The copy module (-m copy) transfers files to the host servers; the -b flag escalates privileges (in the examples below, -b and -a are combined as -ba).
ansible [all|group|node] -m copy -b -a 'src=[/path/filename] dest=[/path/filename] mode=[permissions]'
The following copies a file to all the servers at once:
[root@kubemaster kubespray]# ansible -i inventory/tec/inventory.ini all -m copy -ba 'src=playbooktest.yml dest=/root/playbooktest.yml mode=0644'
kubeworker1 | CHANGED => {
"changed": true,
"checksum": "c80df42bfbda00d81f7eb430989d4c5e65444e4b",
"dest": "/root/playbooktest.yml",
"gid": 0,
"group": "root",
"md5sum": "032658cd597de911874ab2ce69af0240",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:admin_home_t:s0",
"size": 216,
"src": "/root/.ansible/tmp/ansible-tmp-1582872650.1903136-24517647913429/source",
"state": "file",
"uid": 0
}
kubemaster | CHANGED => {
"changed": true,
"checksum": "c80df42bfbda00d81f7eb430989d4c5e65444e4b",
"dest": "/root/playbooktest.yml",
"gid": 0,
"group": "root",
"md5sum": "032658cd597de911874ab2ce69af0240",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:admin_home_t:s0",
"size": 216,
"src": "/root/.ansible/tmp/ansible-tmp-1582872650.0484235-143011356764154/source",
"state": "file",
"uid": 0
}
kubeworker2 | CHANGED => {
"changed": true,
"checksum": "c80df42bfbda00d81f7eb430989d4c5e65444e4b",
"dest": "/root/playbooktest.yml",
"gid": 0,
"group": "root",
"md5sum": "032658cd597de911874ab2ce69af0240",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:admin_home_t:s0",
"size": 216,
"src": "/root/.ansible/tmp/ansible-tmp-1582872650.2475631-184860444096294/source",
"state": "file",
"uid": 0
}
[root@kubemaster kubespray]# ansible all -m shell -a 'ls -la /root/play*'
kubemaster | CHANGED | rc=0 >>
-rw-r--r--. 1 root root 216 Feb 28 15:50 /root/playbooktest.yml
kubeworker1 | CHANGED | rc=0 >>
-rw-r--r--. 1 root root 216 Feb 28 15:50 /root/playbooktest.yml
kubeworker2 | CHANGED | rc=0 >>
-rw-r--r--. 1 root root 216 Feb 28 15:50 /root/playbooktest.yml
[root@kubemaster kubespray]#
As shown above, the file was copied to every node, and the -m shell module covered earlier confirmed that the file actually arrived on each server.
Finally, several nodes and groups can also be targeted at once:
[root@kubemaster kubespray]# ansible -i inventory/tec/inventory.ini kubemaster,kubeworker1 -m copy -ba 'src=playbooktest.yml dest=/root/playbooktest2.yml mode=0644'
kubeworker1 | CHANGED => {
"changed": true,
"checksum": "c80df42bfbda00d81f7eb430989d4c5e65444e4b",
"dest": "/root/playbooktest2.yml",
"gid": 0,
"group": "root",
"md5sum": "032658cd597de911874ab2ce69af0240",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:admin_home_t:s0",
"size": 216,
"src": "/root/.ansible/tmp/ansible-tmp-1582872805.6563017-66110464015983/source",
"state": "file",
"uid": 0
}
kubemaster | CHANGED => {
"changed": true,
"checksum": "c80df42bfbda00d81f7eb430989d4c5e65444e4b",
"dest": "/root/playbooktest2.yml",
"gid": 0,
"group": "root",
"md5sum": "032658cd597de911874ab2ce69af0240",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:admin_home_t:s0",
"size": 216,
"src": "/root/.ansible/tmp/ansible-tmp-1582872805.603079-80664195257004/source",
"state": "file",
"uid": 0
}
[root@kubemaster kubespray]# ansible all -m shell -a 'ls -la /root/play*'
kubeworker1 | CHANGED | rc=0 >>
-rw-r--r--. 1 root root 216 Feb 28 15:53 /root/playbooktest2.yml
-rw-r--r--. 1 root root 216 Feb 28 15:50 /root/playbooktest.yml
kubemaster | CHANGED | rc=0 >>
-rw-r--r--. 1 root root 216 Feb 28 15:53 /root/playbooktest2.yml
-rw-r--r--. 1 root root 216 Feb 28 15:50 /root/playbooktest.yml
kubeworker2 | CHANGED | rc=0 >>
-rw-r--r--. 1 root root 216 Feb 28 15:50 /root/playbooktest.yml
[root@kubemaster kubespray]#
As shown above, by listing targets separated by commas, playbooktest2.yml was copied only to kubemaster and kubeworker1, which the listing confirms.
3. Ansible Playbook Configuration
A playbook defines multiple tasks in a YAML file and runs them as a deployment with Ansible.
Playbooks are useful when complex work has to be repeated across many nodes.
The following YAML file installs Docker CE on all servers assigned to the kube-node group:
---                                       # start of document
- name: playbook test sample              # description
  hosts: kube-node                        # target Ansible hosts to apply to
  become: true                            # use sudo
  remote_user: root                       # specify the remote user
  tasks:                                  # start of task list
    - name: install docker                # first task description
      yum: name=docker-ce state=present   # install docker-ce via yum
    - name: run docker version            # second task description
      shell: docker version               # check the installed Docker version
Running this with ansible-playbook returns the following result:
[root@kubemaster kubespray]# ansible-playbook -i inventory/tec/inventory.ini playbooktest.yml
PLAY [playbook test sample] ****************************************************************************************************************************************************************************************************************
TASK [install docker] **********************************************************************************************************************************************************************************************************************
Friday 28 February 2020 15:44:12 +0900 (0:00:00.380) 0:00:00.380 *******
ok: [kubeworker1]
ok: [kubeworker2]
TASK [run docker version] ******************************************************************************************************************************************************************************************************************
Friday 28 February 2020 15:44:14 +0900 (0:00:01.353) 0:00:01.733 *******
changed: [kubeworker1]
changed: [kubeworker2]
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
kubeworker1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubeworker2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Friday 28 February 2020 15:44:14 +0900 (0:00:00.501) 0:00:02.235 *******
===============================================================================
install docker ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.35s
run docker version ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.50s
[root@kubemaster kubespray]#
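One thing worth noticing in the recap above: the run docker version task reports changed=1 on every run, because the shell module always reports a change. As a sketch (not part of the original playbook), the check can be made idempotent with the command module and changed_when:

```yaml
    - name: run docker version
      command: docker version     # no shell features needed here
      register: docker_version    # keep the output for later inspection
      changed_when: false         # a read-only check never changes the host
```

With changed_when: false, repeated runs report ok instead of changed, which makes the recap easier to read.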
4. Adding a Kubernetes Node
As we have seen, Kubespray is a predefined way of deploying Kubernetes as code in an Ansible environment. The installation steps are written out as YAML so the whole environment can be provisioned automatically, and ansible-playbook makes installation straightforward.
For the installation itself, refer to the link at the top; here we look at how to add a new node to the Kubernetes cluster.
Adding a node is very simple.
ansible-playbook -i inventory/[inventory.ini_path] scale.yml -v
Running a command of this form adds the desired node to the cluster.
The scale.yml file is defined as follows:
---
- hosts: localhost
  gather_facts: False
  become: no
  tasks:
    - name: "Check ansible version >=2.7.8"
      assert:
        msg: "Ansible must be v2.7.8 or higher"
        that:
          - ansible_version.string is version("2.7.8", ">=")
      tags:
        - check
  vars:
    ansible_connection: local

- hosts: bastion[0]
  gather_facts: False
  roles:
    - { role: kubespray-defaults}
    - { role: bastion-ssh-config, tags: ["localhost", "bastion"]}

- name: Bootstrap any new workers
  hosts: kube-node
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false
  roles:
    - { role: kubespray-defaults}
    - { role: bootstrap-os, tags: bootstrap-os}

- name: Generate the etcd certificates beforehand
  hosts: etcd
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
    - { role: kubespray-defaults}
    - { role: etcd, tags: etcd, etcd_cluster_setup: false }

- name: Target only workers to get kubelet installed and checking in on any new nodes
  hosts: kube-node
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
    - { role: kubespray-defaults}
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: container-engine, tags: "container-engine", when: deploy_container_engine|default(true) }
    - { role: download, tags: download, when: "not skip_downloads" }
    - { role: etcd, tags: etcd, etcd_cluster_setup: false, when: "not etcd_kubeadm_enabled|default(false)" }
    - { role: kubernetes/node, tags: node }
    - { role: kubernetes/kubeadm, tags: kubeadm }
    - { role: network_plugin, tags: network }
    - { role: kubernetes/node-label }
  environment: "{{ proxy_env }}"
Checking the nodes currently running:
[root@kubemaster kubespray]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 60m v1.16.3
kubeworker1 Ready <none> 59m v1.16.3
[root@kubemaster kubespray]#
The cluster looks like the above; now let's add a node.
Before adding it, register the new node's information in the inventory.ini file:
kubemaster ansible_ssh_host=192.168.56.102 ip=192.168.56.102
kubeworker1 ansible_ssh_host=192.168.56.103 ip=192.168.56.103
kubeworker2 ansible_ssh_host=192.168.56.101 ip=192.168.56.101
[kube-master]
kubemaster
[etcd]
kubemaster
[kube-node]
kubeworker1
kubeworker2
[k8s-cluster:children]
kube-node
kube-master
Here kubeworker2 is added as a host entry and is also added to the [kube-node] group.
Then run ansible-playbook:
[root@kubemaster kubespray]# ansible-playbook -i inventory/tec/inventory.ini scale.yml -v
Using /root/git_repo/kubespray/ansible.cfg as config file
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [Check ansible version >=2.7.8] *******************************************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:40 +0900 (0:00:00.228) 0:00:00.228 *******
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
[WARNING]: Could not match supplied host pattern, ignoring: bastion
PLAY [bastion[0]] **************************************************************************************************************************************************************************************************************************
skipping: no hosts matched
PLAY [Bootstrap any new workers] ***********************************************************************************************************************************************************************************************************
TASK [download : prep_download | Set a few facts] ******************************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:40 +0900 (0:00:00.245) 0:00:00.473 *******
TASK [download : Set image info command for containerd] ************************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:40 +0900 (0:00:00.161) 0:00:00.635 *******
TASK [download : Register docker images info] **********************************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:40 +0900 (0:00:00.161) 0:00:00.797 *******
TASK [download : prep_download | Create staging directory on remote node] ******************************************************************************************************************************************************************
Friday 28 February 2020 14:35:41 +0900 (0:00:00.158) 0:00:00.955 *******
TASK [download : prep_download | Create local cache for files and images] ******************************************************************************************************************************************************************
Friday 28 February 2020 14:35:41 +0900 (0:00:00.160) 0:00:01.116 *******
TASK [download : prep_download | On localhost, check if passwordless root is possible] *****************************************************************************************************************************************************
Friday 28 February 2020 14:35:41 +0900 (0:00:00.115) 0:00:01.232 *******
...
...
...
...
...
TASK [kubernetes/node-label : Set label to node] *******************************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:15 +0900 (0:00:00.142) 0:03:31.415 *******
RUNNING HANDLER [network_plugin/cilium : restart kubelet] **********************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:15 +0900 (0:00:00.058) 0:03:31.473 *******
RUNNING HANDLER [network_plugin/calico : reset_calico_cni] *********************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:15 +0900 (0:00:00.057) 0:03:31.531 *******
changed: [kubeworker2] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002886", "end": "2020-02-28 14:35:16.178277", "rc": 0, "start": "2020-02-28 14:35:16.175391", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
RUNNING HANDLER [network_plugin/calico : delete 10-calico.conflist] ************************************************************************************************************************************************************************
Friday 28 February 2020 14:35:15 +0900 (0:00:00.245) 0:03:31.776 *******
changed: [kubeworker2] => {"changed": true, "path": "/etc/cni/net.d/10-calico.conflist", "state": "absent"}
RUNNING HANDLER [network_plugin/calico : docker | delete calico-node containers] ***********************************************************************************************************************************************************
Friday 28 February 2020 14:35:15 +0900 (0:00:00.247) 0:03:32.024 *******
changed: [kubeworker2] => {"attempts": 1, "changed": true, "cmd": "docker ps -af name=k8s_POD_calico-node* -q | xargs --no-run-if-empty docker rm -f", "delta": "0:00:00.239852", "end": "2020-02-28 14:35:16.903643", "rc": 0, "start": "2020-02-28 14:35:16.663791", "stderr": "", "stderr_lines": [], "stdout": "fb6008a0e2d2", "stdout_lines": ["fb6008a0e2d2"]}
RUNNING HANDLER [network_plugin/calico : containerd | delete calico-node containers] *******************************************************************************************************************************************************
Friday 28 February 2020 14:35:16 +0900 (0:00:00.489) 0:03:32.513 *******
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
kubemaster : ok=29 changed=2 unreachable=0 failed=0 skipped=50 rescued=0 ignored=0
kubeworker1 : ok=323 changed=16 unreachable=0 failed=0 skipped=366 rescued=0 ignored=0
kubeworker2 : ok=296 changed=24 unreachable=0 failed=0 skipped=321 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Friday 28 February 2020 14:35:16 +0900 (0:00:00.063) 0:03:32.577 *******
===============================================================================
kubernetes/preinstall : Install packages requirements ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 13.00s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 7.51s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.91s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.86s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.54s
kubernetes/kubeadm : Update server field in kubelet kubeconfig ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.27s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.81s
network_plugin/calico : Calico | Write Calico cni config ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.37s
download : download_container | Download image if required -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.35s
kubernetes/kubeadm : Join to cluster ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.01s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.99s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.51s
kubernetes/preinstall : Create kubernetes directories ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.49s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.44s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.43s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.43s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.41s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.18s
container-engine/docker : Ensure old versions of Docker are not installed. | RedHat ------------------------------------------------------------------------------------------------------------------------------------------------- 1.18s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.18s
[root@kubemaster kubespray]#
As shown above, kubeworker2 was added using the Ansible script.
Checking the added node:
[root@kubemaster kubespray]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 101m v1.16.3
kubeworker1 Ready <none> 100m v1.16.3
kubeworker2 Ready <none> 4m14s v1.16.3
[root@kubemaster kubespray]#
kubeworker2 has joined in the Ready state.
5. Removing a Kubernetes Node
Removing a node from the Kubernetes cluster is just as simple.
The nodes currently in the cluster are as follows:
[root@kubemaster kubespray]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 54m v1.16.3
kubeworker1 Ready <none> 53m v1.16.3
kubeworker2 Ready <none> 53m v1.16.3
[root@kubemaster kubespray]#
As shown above, the cluster consists of one master node and two worker nodes.
ansible-playbook -i inventory/[inventory.ini_path] remove-node.yml --extra-vars "node=[nodeName]"
Running a command of this form removes the desired node from the cluster.
[root@kubemaster kubespray]# ansible-playbook -i inventory/tec/inventory.ini remove-node.yml --extra-vars "node=kubeworker2"
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [Check ansible version >=2.7.8] *******************************************************************************************************************************************************************************************************
Friday 28 February 2020 13:54:52 +0900 (0:00:00.235) 0:00:00.235 *******
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
Are you sure you want to delete nodes state? Type 'yes' to delete nodes. [no]:
The example above removes the kubeworker2 node from the Kubernetes cluster.
Typing yes:
[root@kubemaster kubespray]# ansible-playbook -i inventory/tec/inventory.ini remove-node.yml --extra-vars "node=kubeworker2"
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
TASK [Check ansible version >=2.7.8] *******************************************************************************************************************************************************************************************************
Friday 28 February 2020 13:54:52 +0900 (0:00:00.235) 0:00:00.235 *******
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
Are you sure you want to delete nodes state? Type 'yes' to delete nodes. [no]: yes
PLAY [kubeworker2] *************************************************************************************************************************************************************************************************************************
TASK [check confirmation] ******************************************************************************************************************************************************************************************************************
Friday 28 February 2020 13:55:39 +0900 (0:00:47.183) 0:00:47.418 *******
PLAY [kube-master] *************************************************************************************************************************************************************************************************************************
TASK [download : prep_download | Set a few facts] ******************************************************************************************************************************************************************************************
Friday 28 February 2020 13:55:39 +0900 (0:00:00.094) 0:00:47.512 *******
...
...
...
...
...
TASK [kubespray-defaults : Configure defaults] *********************************************************************************************************************************************************************************************
Friday 28 February 2020 13:56:26 +0900 (0:00:01.160) 0:01:34.615 *******
ok: [kubeworker2] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
TASK [remove-node/post-remove : Lookup node IP in kubernetes] ******************************************************************************************************************************************************************************
Friday 28 February 2020 13:56:26 +0900 (0:00:00.098) 0:01:34.713 *******
TASK [remove-node/post-remove : Set node IP] ***********************************************************************************************************************************************************************************************
Friday 28 February 2020 13:56:26 +0900 (0:00:00.057) 0:01:34.770 *******
TASK [remove-node/post-remove : Delete node] ***********************************************************************************************************************************************************************************************
Friday 28 February 2020 13:56:26 +0900 (0:00:00.097) 0:01:34.868 *******
changed: [kubeworker2 -> None]
TASK [remove-node/post-remove : Lookup etcd member id] *************************************************************************************************************************************************************************************
Friday 28 February 2020 13:56:27 +0900 (0:00:00.349) 0:01:35.217 *******
TASK [remove-node/post-remove : Remove etcd member from cluster] ***************************************************************************************************************************************************************************
Friday 28 February 2020 13:56:27 +0900 (0:00:00.102) 0:01:35.319 *******
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
kubemaster : ok=3 changed=2 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
kubeworker2 : ok=28 changed=20 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Friday 28 February 2020 13:56:27 +0900 (0:00:00.024) 0:01:35.344 *******
===============================================================================
Check ansible version >=2.7.8 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 47.18s
remove-node/pre-remove : remove-node | Drain node except daemonsets resource ------------------------------------------------------------------------------------------------------------------------------------------------------- 13.37s
reset : reset | delete some files and directories ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 10.25s
reset : reset | restart docker if needed -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.67s
reset : reset | stop services ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.47s
reset : reset | Restart network ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.39s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.23s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.17s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.16s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.16s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.16s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.14s
remove-node/pre-remove : cordon-node | Mark all nodes as unschedulable before drain ------------------------------------------------------------------------------------------------------------------------------------------------- 0.90s
reset : reset | remove services ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.86s
reset : reset | remove all containers ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.80s
reset : flush iptables -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.66s
reset : reset | remove dns settings from dhclient.conf ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.63s
reset : reset | stop etcd services -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.60s
reset : reset | unmount kubelet dirs ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.58s
reset : reset | remove docker dropins ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.57s
[root@kubemaster kubespray]#
As shown above, the Ansible playbook runs and removes the target node from the cluster.
Checking the currently running nodes once more:
[root@kubemaster kubespray]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 60m v1.16.3
kubeworker1 Ready <none> 59m v1.16.3
[root@kubemaster kubespray]#
As shown above, the node has been removed from the cluster.
Conclusion
So far we have looked at various ways Ansible can be applied in a Kubernetes cluster environment.
In my own case, I once used Ansible to apply insecure-registries to Docker's daemon.json across several hundred servers at once, with commands like the following:
- ansible -i inventory/hosts.ini kube-worker -m shell -b -a 'systemctl status docker'
- ansible -i inventory/hosts.ini kube-worker -m copy -b -a 'src=daemon.json dest=/etc/docker/daemon.json mode=0644'
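The daemon.json case above can be expanded into a runnable sketch. The registry address below is a placeholder, and the inventory path and `kube-worker` group name are assumptions matching the commands above:

```shell
# Write the daemon.json we want to push out; the registry address is a placeholder.
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.com:5000"]
}
EOF

# Push the file to every kube-worker and restart Docker so the setting takes effect.
# Guarded so the sketch is harmless on a machine without ansible or this inventory.
if command -v ansible >/dev/null 2>&1 && [ -f inventory/hosts.ini ]; then
  ansible -i inventory/hosts.ini kube-worker -m copy -b -a 'src=daemon.json dest=/etc/docker/daemon.json mode=0644'
  ansible -i inventory/hosts.ini kube-worker -m systemd -b -a 'name=docker state=restarted'
fi
```

Using the `systemd` module for the restart (rather than `-m shell -a 'systemctl restart docker'`) keeps the operation idempotent and lets Ansible report the service state.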
I have also added a spare machine, beyond the nodes already in production, to the cluster as a separate development environment, managing it with its own label so dev workloads stay off the production nodes.
As these examples show, Ansible can be put to use in many ways depending on your needs, but because it sends commands to every targeted server at once, it is equally important to put management and permission restrictions in place.
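The dev-node idea can be sketched as follows. The node name `devworker1` and the `env=dev` label are hypothetical; only pods carrying the matching nodeSelector will be scheduled onto that node:

```shell
# Label the spare node so it can be addressed separately (node name is an example).
# Guarded so the sketch is safe to run without a reachable cluster.
if kubectl get nodes >/dev/null 2>&1; then
  kubectl label node devworker1 env=dev
fi

# A pod pinned to the dev node via nodeSelector:
cat > dev-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dev-test
spec:
  nodeSelector:
    env: dev
  containers:
  - name: app
    image: nginx
EOF
```

Production pods simply omit the selector, so the labeled node stays reserved for development workloads (unless you also taint it to repel unlabeled pods).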