
In this post, we will install the Kubernetes Master Node.

The Master Node manages every node in the cluster and acts as its administrator, orchestrating the creation and deletion of Services, Pods, and other resources.

Prerequisites

1. First, register the Master Node (222.234.124.18) in /etc/hosts.

[/etc/hosts file entry]


127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
222.234.124.18  guruson
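
After editing /etc/hosts, it is worth confirming that the entry is really in place. A minimal sketch (the IP and hostname are the ones from the example above; on a live node you could simply run `getent hosts guruson`):

```shell
# Sketch: verify that a hosts-style file contains an entry for "guruson".
# Runs against a temporary copy so the live /etc/hosts is untouched.
hosts_file=$(mktemp)
printf '222.234.124.18  guruson\n' > "$hosts_file"
if grep -qE '^[0-9.]+[[:space:]]+guruson([[:space:]]|$)' "$hosts_file"; then
  echo "hosts entry present"
fi
rm -f "$hosts_file"
```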


2. Next, define the firewall ports to open using firewall-cmd.


[root@guruson ~]# firewall-cmd --zone=public --permanent --add-port=6443/tcp
[root@guruson ~]# firewall-cmd --zone=public --permanent --add-port=2379-2380/tcp
[root@guruson ~]# firewall-cmd --zone=public --permanent --add-port=10250/tcp
[root@guruson ~]# firewall-cmd --zone=public --permanent --add-port=10251/tcp
[root@guruson ~]# firewall-cmd --zone=public --permanent --add-port=10252/tcp
[root@guruson ~]# firewall-cmd --reload

[root@guruson ~]# 
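
The five rules above can also be written as a loop, which keeps the port list in one place. This is a dry-run sketch that only prints the commands; remove the echo to actually apply them as root (the port list is exactly the one used above):

```shell
# Dry-run sketch: print the firewall-cmd invocation for each required
# control-plane port, then the reload. Drop "echo" to execute for real.
for port in 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp; do
  echo firewall-cmd --zone=public --permanent --add-port="$port"
done
echo firewall-cmd --reload
```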


If the firewall settings give you trouble, it is also acceptable to proceed with SELinux disabled, as below (the reboot afterwards is required for the change to take effect).


[root@guruson ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 


[root@guruson ~]# reboot


3. Disable swap memory using swapoff.


[root@nrson ~]# swapoff -a

[root@nrson ~]#


The following commands manage swap memory:

> Create swap: mkswap

> Enable swap: swapon

> Disable swap: swapoff
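
Note that swapoff -a only lasts until the next reboot; a common companion step is commenting out the swap line in /etc/fstab. Sketched here on a temporary copy (the device path is a typical CentOS default, used only as an example):

```shell
# Sketch: comment out the swap entry so swap stays off after reboot.
# Runs against a temp copy; point it at /etc/fstab on a real node.
fstab=$(mktemp)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$fstab"
sed -i 's@^\([^#].*[[:space:]]swap[[:space:]].*\)@#\1@' "$fstab"
grep '^#' "$fstab"    # the swap line is now commented out
rm -f "$fstab"
```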

4. Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 so that bridged traffic passes through iptables.


[root@nrson ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

[root@nrson ~]#
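
Writing directly to /proc is likewise lost on reboot. The usual persistent form is a file under /etc/sysctl.d/ loaded with sysctl --system; sketched here against a temp file (the k8s.conf filename is conventional, not mandated):

```shell
# Sketch: persist the bridge-nf-call-iptables setting. On a real node the
# target file would be /etc/sysctl.d/k8s.conf, followed by "sysctl --system".
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
grep -q 'bridge-nf-call-iptables = 1' "$conf" && echo "sysctl entry written"
rm -f "$conf"
```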


5. Install Docker, which will run the Kubernetes components.


# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
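
Before restarting Docker it is worth syntax-checking daemon.json, since a stray comma leaves the daemon unable to start. A sketch using python3's stdlib JSON tool (assumed to be available; jq would work equally well) against a temp copy of the file above:

```shell
# Sketch: validate a daemon.json before handing it to Docker.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json OK"
rm -f "$conf"
```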


The Docker installation steps above follow the official Kubernetes documentation.

On CentOS, systemd service files are kept under /usr/lib/systemd/system/.

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

 


Installing the Kubernetes Master Node

1. Next, create kubernetes.repo under /etc/yum.repos.d for the Kubernetes packages.


[root@guruson ~]# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
[root@guruson ~]# 
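
Two details of this repo file are easy to miss: GPG checking is on, and exclude=kube* deliberately keeps the kube packages out of routine yum update runs (the install step below bypasses it with --disableexcludes=kubernetes). A quick sanity-check sketch on a temp copy of the relevant lines:

```shell
# Sketch: confirm the repo file keeps GPG checking enabled and the
# kube* exclude in place.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[kubernetes]
gpgcheck=1
repo_gpgcheck=1
exclude=kube*
EOF
grep -q '^gpgcheck=1' "$repo" && grep -q '^exclude=kube\*' "$repo" && echo "repo file OK"
rm -f "$repo"
```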


2. Install kubelet, kubeadm, and kubectl.

kubeadm: the command that bootstraps the cluster.

kubelet: the agent that runs on every machine in the cluster and performs tasks such as starting Pods and containers.

kubectl: the command-line utility for talking to the cluster.


[root@guruson ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Loaded plugins: fastestmirror, langpacks

Loading mirror speeds from cached hostfile

 * base: data.aonenetworks.kr

 * extras: data.aonenetworks.kr

 * updates: mirror.kakao.com

kubernetes/signature                                                                          |  454 B  00:00:00

Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg

Importing GPG key 0xA7317B0F:

 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"

 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f

 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg

Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

kubernetes/signature                                                                          | 1.4 kB  00:00:00 !!!

kubernetes/primary                                                                            |  37 kB  00:00:00

kubernetes                                                                                                   263/263

Resolving Dependencies

--> Running transaction check

---> Package kubeadm.x86_64 0:1.12.1-0 will be installed

--> Processing Dependency: kubernetes-cni >= 0.6.0 for package: kubeadm-1.12.1-0.x86_64

--> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.12.1-0.x86_64

---> Package kubectl.x86_64 0:1.12.1-0 will be installed

---> Package kubelet.x86_64 0:1.12.1-0 will be installed

--> Processing Dependency: socat for package: kubelet-1.12.1-0.x86_64

--> Running transaction check

---> Package cri-tools.x86_64 0:1.12.0-0 will be installed

---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed

---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed

--> Finished Dependency Resolution

 

Dependencies Resolved

 

===================================================================================================================

 Package                       Arch                  Version                         Repository                 Size

===================================================================================================================

Installing:

 kubeadm                       x86_64                1.12.1-0                        kubernetes                7.2 M

 kubectl                       x86_64                1.12.1-0                        kubernetes                7.7 M

 kubelet                       x86_64                1.12.1-0                        kubernetes                 19 M

Installing for dependencies:

 cri-tools                     x86_64                1.12.0-0                        kubernetes                4.2 M

 kubernetes-cni                x86_64                0.6.0-0                         kubernetes                8.6 M

 socat                         x86_64                1.7.3.2-2.el7                   base                      290 k

 

Transaction Summary

===================================================================================================================

Install  3 Packages (+3 Dependent packages)

 

Total download size: 47 M

Installed size: 237 M

Downloading packages:

warning: /var/cache/yum/x86_64/7/kubernetes/packages/53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY

Public key for 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm is not installed

(1/6): 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x8 | 4.2 MB  00:00:00

(2/6): ed7d25314d0fc930c9d0bae114016bf49ee852b3c4f243184630cf2c6cd62d43-kubectl-1.12.1-0.x86_ | 7.7 MB  00:00:00

(3/6): 9c31cf74973740c100242b0cfc8d97abe2a95a3c126b1c4391c9f7915bdfd22b-kubeadm-1.12.1-0.x86_ | 7.2 MB  00:00:02

(4/6): socat-1.7.3.2-2.el7.x86_64.rpm                                                         | 290 kB  00:00:00

(5/6): c4ebaa2e1ce38cda719cbe51274c4871b7ccb30371870525a217f6a430e60e3a-kubelet-1.12.1-0.x86_ |  19 MB  00:00:01

(6/6): fe33057ffe95bfae65e2f269e1b05e99308853176e24a4d027bc082b471a07c0-kubernetes-cni-0.6.0- | 8.6 MB  00:00:00

---------------------------------------------------------------------------------------------------------------------

Total                                                                                 17 MB/s |  47 MB  00:00:02

Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg

Importing GPG key 0xA7317B0F:

 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"

 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f

 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg

Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Importing GPG key 0x3E1BA8D5:

 Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"

 Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5

 From       : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : socat-1.7.3.2-2.el7.x86_64                                                                        1/6

  Installing : kubernetes-cni-0.6.0-0.x86_64                                                                     2/6

  Installing : kubelet-1.12.1-0.x86_64                                                                           3/6

  Installing : kubectl-1.12.1-0.x86_64                                                                           4/6

  Installing : cri-tools-1.12.0-0.x86_64                                                                         5/6

  Installing : kubeadm-1.12.1-0.x86_64                                                                           6/6

  Verifying  : cri-tools-1.12.0-0.x86_64                                                                         1/6

  Verifying  : kubectl-1.12.1-0.x86_64                                                                           2/6

  Verifying  : kubeadm-1.12.1-0.x86_64                                                                           3/6

  Verifying  : kubelet-1.12.1-0.x86_64                                                                           4/6

  Verifying  : kubernetes-cni-0.6.0-0.x86_64                                                                     5/6

  Verifying  : socat-1.7.3.2-2.el7.x86_64                                                                        6/6

 

Installed:

  kubeadm.x86_64 0:1.12.1-0             kubectl.x86_64 0:1.12.1-0             kubelet.x86_64 0:1.12.1-0

 

Dependency Installed:

  cri-tools.x86_64 0:1.12.0-0          kubernetes-cni.x86_64 0:0.6.0-0          socat.x86_64 0:1.7.3.2-2.el7

 

Complete!

[root@guruson ~]#


3. Once the installation completes successfully, enable and start kubelet.


[root@guruson ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@guruson ~]# systemctl start kubelet
[root@guruson ~]# systemctl -l status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-07-24 01:13:40 KST; 9s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 14528 (kubelet)
    Tasks: 14
   CGroup: /system.slice/kubelet.service
           └─14528 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

Jul 24 01:13:46 guruson kubelet[14528]: E0724 01:13:46.452115   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://222.234.124.18:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:47 guruson kubelet[14528]: E0724 01:13:47.461613   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://222.234.124.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:47 guruson kubelet[14528]: E0724 01:13:47.461619   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://222.234.124.18:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:47 guruson kubelet[14528]: E0724 01:13:47.461662   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://222.234.124.18:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:48 guruson kubelet[14528]: E0724 01:13:48.462663   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://222.234.124.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:48 guruson kubelet[14528]: E0724 01:13:48.464568   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://222.234.124.18:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:48 guruson kubelet[14528]: E0724 01:13:48.465170   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://222.234.124.18:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:49 guruson kubelet[14528]: E0724 01:13:49.495486   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://222.234.124.18:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:49 guruson kubelet[14528]: E0724 01:13:49.495714   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://222.234.124.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:13:49 guruson kubelet[14528]: E0724 01:13:49.496217   14528 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://222.234.124.18:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
[root@guruson ~]# systemctl -l status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-07-24 01:15:02 KST; 22s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 14705 (kubelet)
    Tasks: 18
   CGroup: /system.slice/kubelet.service
           └─14705 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

Jul 24 01:15:24 guruson kubelet[14705]: I0724 01:15:24.334267   14705 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jul 24 01:15:24 guruson kubelet[14705]: I0724 01:15:24.338362   14705 kubelet_node_status.go:72] Attempting to register node guruson
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.338633   14705 kubelet_node_status.go:94] Unable to register node "guruson" with API server: Post https://222.234.124.18:6443/api/v1/nodes: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.433624   14705 kubelet.go:2248] node "guruson" not found
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.534870   14705 kubelet.go:2248] node "guruson" not found
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.553323   14705 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://222.234.124.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.554753   14705 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://222.234.124.18:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dguruson&limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.554854   14705 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://222.234.124.18:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 222.234.124.18:6443: connect: connection refused
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.635809   14705 kubelet.go:2248] node "guruson" not found
Jul 24 01:15:24 guruson kubelet[14705]: E0724 01:15:24.736652   14705 kubelet.go:2248] node "guruson" not found
[root@guruson ~]#


Until kubeadm init runs, kubelet cannot connect to the API server on the Master Node, so it repeatedly logs connection-refused errors; this is expected at this point.

4. Bootstrap the Kubernetes cluster with kubeadm init.


[root@guruson ~]# kubeadm init
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [guruson localhost] and IPs [222.234.124.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [guruson localhost] and IPs [222.234.124.110 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [guruson kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 222.234.124.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 45.506400 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node guruson as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node guruson as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: deb19a.7yfa212rg0exg0c9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 222.234.124.110:6443 --token deb19a.7yfa212rg0exg0c9 \
    --discovery-token-ca-cert-hash sha256:20d38dd05c158fe88fecd1b219ba9a5e02e5ea66ad612b404678571d104754c3 
[root@guruson ~]#


Running kubeadm init starts the modules that manage the Kubernetes Master Server (API server, etcd, scheduler, controller manager, and so on).


[root@guruson ~]# docker ps -a
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
553b2f5013c8        89a062da739d           "/usr/local/bin/ku..."   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-7plbr_kube-system_f946a39f-b320-4d7e-80be-8ade3a224eec_0
bd2e127e576c        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-proxy-7plbr_kube-system_f946a39f-b320-4d7e-80be-8ade3a224eec_0
8273ba04d071        b0b3c4c404da           "kube-scheduler --..."   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-guruson_kube-system_ecae9d12d3610192347be3d1aa5aa552_0
50819117a16c        68c3eb07bfc3           "kube-apiserver --..."   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-guruson_kube-system_771894cf22aa788ac60030c63ac93c0a_0
ef2e5b80418f        2c4adeb21b4f           "etcd --advertise-..."   3 minutes ago       Up 3 minutes                            k8s_etcd_etcd-guruson_kube-system_739bfe2c63b2d1d1b1e833c60fd75422_0
34dc84306624        d75082f1d121           "kube-controller-m..."   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-guruson_kube-system_1eef4eca35083fec456d0af4bccd851c_0
eb959a163fa1        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-scheduler-guruson_kube-system_ecae9d12d3610192347be3d1aa5aa552_0
3825e239a71e        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-controller-manager-guruson_kube-system_1eef4eca35083fec456d0af4bccd851c_0
0941ac568e1b        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_etcd-guruson_kube-system_739bfe2c63b2d1d1b1e833c60fd75422_0
9014baf49b6c        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-apiserver-guruson_kube-system_771894cf22aa788ac60030c63ac93c0a_0
[root@guruson ~]#


Once the Kubernetes cluster is running normally, the kubelet log also shows successful connection activity.


......

Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.380863   16209 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Jul 24 01:21:02 guruson kubelet[16209]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.385439   16209 kuberuntime_manager.go:205] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.389470   16209 server.go:1083] Started kubelet
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.389593   16209 kubelet.go:1293] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cach
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.390666   16209 server.go:144] Starting to listen on 0.0.0.0:10250
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.393274   16209 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.393290   16209 status_manager.go:152] Starting to sync pod status with apiserver
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.393303   16209 kubelet.go:1805] Starting kubelet main sync loop.
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.393317   16209 kubelet.go:1822] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.393411   16209 server.go:350] Adding debug handlers to kubelet server.
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.394376   16209 volume_manager.go:243] Starting Kubelet Volume Manager
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.399513   16209 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.402997   16209 kubelet.go:2169] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitia
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.416410   16209 remote_runtime.go:128] StopPodSandbox "a8611210c394c86a1a3849aa5b94b01dfdade8f16bebfc2d955e06b3c61e8d5a" from runtime service failed: rpc error: code = Unknown desc =
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.416425   16209 kuberuntime_gc.go:170] Failed to stop sandbox "a8611210c394c86a1a3849aa5b94b01dfdade8f16bebfc2d955e06b3c61e8d5a" before removing: rpc error: code = Unknown desc = Net
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.418234   16209 remote_runtime.go:128] StopPodSandbox "874f207f53255e1ea069e4dd2b84300a22fa5af698d51e76cf7479edd3bb4516" from runtime service failed: rpc error: code = Unknown desc =
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.418249   16209 kuberuntime_gc.go:170] Failed to stop sandbox "874f207f53255e1ea069e4dd2b84300a22fa5af698d51e76cf7479edd3bb4516" before removing: rpc error: code = Unknown desc = Net
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.419954   16209 remote_runtime.go:128] StopPodSandbox "cf5e0f97a72afd79007acd3481aac10a4c761a23708fbb520045046887f93909" from runtime service failed: rpc error: code = Unknown desc =
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.419964   16209 kuberuntime_gc.go:170] Failed to stop sandbox "cf5e0f97a72afd79007acd3481aac10a4c761a23708fbb520045046887f93909" before removing: rpc error: code = Unknown desc = Net
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.421707   16209 remote_runtime.go:128] StopPodSandbox "c832cbbe372b154def85df1d1657141e37c54cc6b8c2f1266f4bd332b1160475" from runtime service failed: rpc error: code = Unknown desc =
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.421719   16209 kuberuntime_gc.go:170] Failed to stop sandbox "c832cbbe372b154def85df1d1657141e37c54cc6b8c2f1266f4bd332b1160475" before removing: rpc error: code = Unknown desc = Net
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.433965   16209 container.go:409] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.437186   16209 container.go:409] Failed to create summary reader for "/system.slice/kubelet.service": none of the resources are being tracked.
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.482531   16209 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.482545   16209 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.482551   16209 policy_none.go:42] [cpumanager] none policy: Start
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.483290   16209 container_manager_linux.go:818] CPUAccounting not enabled for pid: 1387
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.483297   16209 container_manager_linux.go:821] MemoryAccounting not enabled for pid: 1387
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.483335   16209 container_manager_linux.go:818] CPUAccounting not enabled for pid: 16209
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.483339   16209 container_manager_linux.go:821] MemoryAccounting not enabled for pid: 16209
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.483358   16209 plugin_manager.go:116] Starting Kubelet Plugin Manager
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.499583   16209 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.501164   16209 kubelet_node_status.go:72] Attempting to register node guruson
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.511839   16209 pod_container_deletor.go:75] Container "cf5e0f97a72afd79007acd3481aac10a4c761a23708fbb520045046887f93909" not found in pod's containers
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.514734   16209 pod_container_deletor.go:75] Container "a8611210c394c86a1a3849aa5b94b01dfdade8f16bebfc2d955e06b3c61e8d5a" not found in pod's containers
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.514763   16209 pod_container_deletor.go:75] Container "874f207f53255e1ea069e4dd2b84300a22fa5af698d51e76cf7479edd3bb4516" not found in pod's containers
Jul 24 01:21:02 guruson kubelet[16209]: W0724 01:21:02.514801   16209 pod_container_deletor.go:75] Container "c832cbbe372b154def85df1d1657141e37c54cc6b8c2f1266f4bd332b1160475" not found in pod's containers
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.515059   16209 kubelet.go:1647] Failed creating a mirror pod for "kube-apiserver-guruson_kube-system(bd6e0b97d0024f5fa8301b484c3eab5d)": pods "kube-apiserver-guruson" already exists
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.516295   16209 kubelet.go:1647] Failed creating a mirror pod for "kube-controller-manager-guruson_kube-system(1eef4eca35083fec456d0af4bccd851c)": pods "kube-controller-manager-gurus
Jul 24 01:21:02 guruson kubelet[16209]: E0724 01:21:02.516375   16209 kubelet.go:1647] Failed creating a mirror pod for "etcd-guruson_kube-system(522b1b31c00f942225ae62427304b660)": pods "etcd-guruson" already exists
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.600935   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/1eef4eca35083fec456d0af4bccd85
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.601064   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/1eef4eca35083fec456d0af4
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.601142   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/1eef4eca35083fec456d0af4bccd8
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.601224   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7032b961-a4e4-4a38-b178-9512
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.601617   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/7032b961-a4e4-4a38-b178-95
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.601824   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/522b1b31c00f942225ae62427304b
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.601953   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/bd6e0b97d0024f5fa8301b484c3eab5
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.602042   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/bd6e0b97d0024f5fa8301b484c3ea
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.602113   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/7032b961-a4e4-4a38-b178-951
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.602193   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/522b1b31c00f942225ae62427304
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.602388   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/1eef4eca35083fec456d0af4bccd851
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.602460   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/bd6e0b97d0024f5fa8301b484c3eab
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.602558   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/1eef4eca35083fec456d0af4bccd
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.603058   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ecae9d12d3610192347be3d1aa5a
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.603113   16209 reconciler.go:150] Reconciler: start to sync state
Jul 24 01:21:02 guruson kubelet[16209]: I0724 01:21:02.708648   16209 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-2dn4r" (UniqueName: "kubernetes.io/secret/7032b961-a4e4-4a38- 

......


Once startup completes normally, it is also worth running docker images and docker ps -a to verify that Kubernetes actually pulled and ran its images in Docker.

If kubeadm init fails partway through for any reason — for example a timeout — kubeadm reset is a convenient way to wipe the images, files, and directories created during initialization before trying again.


[root@guruson ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0803 20:09:17.613157   12479 reset.go:98] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://222.234.124.110:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 222.234.124.110:6443: connect: connection refused
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0803 20:09:21.554823   12479 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@guruson ~]#
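After a reset, the master can simply be initialized again with kubeadm init. A minimal sketch (run as root; the advertise address is the master IP used in this post, and flags such as --pod-network-cidr should be added only if your network plugin requires them):

```shell
# Hedged sketch: wipe state and re-run initialization (run as root).
kubeadm reset -f   # -f skips the interactive y/N confirmation
kubeadm init --apiserver-advertise-address=222.234.124.18
```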


5. Create $HOME/.kube/config so that accounts other than root can also run kubectl, as follows.


[root@guruson ~]# mkdir -p $HOME/.kube 
[root@guruson ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
[root@guruson ~]# chown $(id -u):$(id -g) $HOME/.kube/config 
[root@guruson ~]# 
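For a regular (non-root) account, the same steps can be run with sudo from that user's shell; a sketch (assumes the user already exists and has sudo rights):

```shell
# Hedged sketch: let a regular user run kubectl (run as that user, not root).
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
kubectl get nodes   # should now work without root
```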


6. Check the pods in all namespaces.


[root@guruson ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-c54tw          0/1       Pending   0          16m
kube-system   coredns-5c98db65d4-km7x2          0/1       Pending   0          16m
kube-system   etcd-guruson                      1/1       Running   0          15m
kube-system   kube-apiserver-guruson            1/1       Running   0          15m
kube-system   kube-controller-manager-guruson   1/1       Running   0          15m
kube-system   kube-proxy-jr9tc                  1/1       Running   0          16m
kube-system   kube-scheduler-guruson            1/1       Running   0          16m
[root@guruson ~]#


As shown above, the system pods are up, but the coredns pods remain stuck in Pending.

7. Install Weave to provide pod-to-pod networking.


[root@guruson ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
[root@guruson ~]#


8. Check the pods in all namespaces once more.


[root@guruson ~]# kubectl get pods --all-namespaces 
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-c54tw          1/1       Running   0          17m
kube-system   coredns-5c98db65d4-km7x2          1/1       Running   0          17m
kube-system   etcd-guruson                      1/1       Running   0          16m
kube-system   kube-apiserver-guruson            1/1       Running   0          16m
kube-system   kube-controller-manager-guruson   1/1       Running   0          16m
kube-system   kube-proxy-jr9tc                  1/1       Running   0          17m
kube-system   kube-scheduler-guruson            1/1       Running   0          17m
kube-system   weave-net-mkbhk                   2/2       Running   0          25s
[root@guruson ~]# 


As shown above, once weave is installed the coredns pods transition to Running as well.

9. Finally, run kubectl taint to remove the master taint, so that pods can also be scheduled on the master node (by default the control plane refuses regular workloads).


[root@guruson ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/guruson untainted
[root@guruson ~]#
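To confirm the taint is actually gone, the node description can be inspected; a quick sketch:

```shell
# Hedged sketch: check the master node's taints after the untaint.
kubectl describe node guruson | grep -i taints
# "Taints: <none>" indicates regular pods may now be scheduled on this node.
```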


That completes the installation of the Kubernetes master node.

We looked at what kubelet, kubeadm, and kubectl do and how to use them, installed Weave for pod networking, and verified the current status through the kube-system namespace.

Building on this, the next post will walk through setting up the Kubernetes Dashboard.

# Reference

Since it had been a while since my last install, I was caught off guard by some error messages at startup, so I am recording them here. Googling turns up a huge number of posts but no real conclusion. The message is "Initial timeout of 40s passed."


[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

...


In my case the problem was with the firewall port openings; after disabling SELinux, the connection succeeded. When communication between the kubeadm-launched Kubernetes cluster and the kubelet fails and the 40s timeout is exceeded, initialization fails; in that case, disable SELinux or double-check that the required ports are open.
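When that timeout appears, the checks below (all mentioned in the error output and in this section) usually narrow the cause down quickly; a sketch:

```shell
# Hedged sketch: first-line diagnostics for "Initial timeout of 40s passed."
systemctl status kubelet                  # is the kubelet service actually running?
journalctl -xeu kubelet | tail -n 50      # recent kubelet log entries
getenforce                                # current SELinux mode
firewall-cmd --zone=public --list-ports   # are 6443, 2379-2380, 10250-10252 open?
```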

 

The main ports used by the master node are as follows.

| Protocol | Direction | Port Range | Purpose                 | Used By              |
|----------|-----------|------------|-------------------------|----------------------|
| TCP      | Inbound   | 6443*      | Kubernetes API Server   | All                  |
| TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd |
| TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane  |
| TCP      | Inbound   | 10251      | kube-scheduler          | Self                 |
| TCP      | Inbound   | 10252      | kube-controller-manager | Self                 |

Another workaround is to write a kubelet.conf by referring to

https://github.com/kubernetes/kubernetes/blob/master/build/rpms/10-kubeadm.conf

and apply it to the kubelet service.
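For reference, the kubeadm drop-in linked above looks roughly like the following (paraphrased; the exact contents differ between Kubernetes versions, so treat this as a sketch rather than a verbatim copy). It is typically installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

```ini
# Sketch of a kubeadm-style kubelet drop-in (contents vary by version).
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# Optional files holding extra flags; the leading "-" ignores missing files.
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

After editing a drop-in, apply it with systemctl daemon-reload followed by systemctl restart kubelet.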
