*Kubernetes provides a common API and a self-healing framework that automatically handles:
- machine failures
- streamlined application deployment, logging, and monitoring
It is a tool for gaining autonomy with minimal oversight!
The role of the operations team is gradually shrinking...
Businesses are focusing more and more on services and applications... infrastructure is taking up a smaller share of the attention.
*Kubernetes architecture:
1) API Node
The Kubernetes API server must hold all the information about the nodes.
The scheduler's very first job: deciding where to deploy each workload.
On the API node, the controller manager (e.g., the replication controller) tracks and reports node status.
-> Kubernetes uses etcd, a distributed key-value store, as its database.
2) Worker Node
CoreOS released a container runtime called rkt ("Rocket") because Docker felt too heavy!
The kubelet is in charge of controlling the container components on each node.
Pod-to-pod communication uses a VXLAN overlay.
kube-proxy exists as an app-level instance, while the actual packet-handling rules live in iptables on the host (at the kernel layer).
※ The kubelet is the only component installed as a system service! Everything else can be installed as containers.
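A quick way to see this split on a kubeadm-style node (a hedged check; container names can vary by version):

systemctl status kubelet    # the kubelet runs as a systemd service
# the control-plane pieces run as containers (static pods started by the kubelet)
docker ps --format '{{.Names}}' | grep -E 'apiserver|etcd|scheduler|controller'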
- Why use Swarm? It is easy to use.
- Why use Kubernetes? It can control far more resources.
1. Install Docker (all nodes)
yum install -y docker
systemctl enable docker && systemctl start docker

touch /etc/docker/daemon.json
cat <<EOF > /etc/docker/daemon.json
{
    "insecure-registries":["10.10.12.0/24"]
}
EOF
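After restarting Docker you can sanity-check that the insecure-registry setting was picked up (10.10.12.0/24 is this lab's registry network):

systemctl restart docker
docker info | grep -A 1 'Insecure Registries'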
2. Install Kubernetes (all nodes)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

systemctl stop firewalld.service
systemctl disable firewalld.service

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
*Master node setup (load all of the images):
> docker load -i <image name>
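Loading the archives one by one gets tedious; a small loop does the same job (a sketch assuming the tarballs sit in ~/hk, skipping the two kubeadm log files). The transcript below shows the manual version:

cd ~/hk
for img in dns-kube-dns dns-sidecar etcd-amd64 k8s-dns kube-apiserver \
           kube-controller kube-proxy kube-scheduler weave-kube weave-npc; do
  docker load -i "$img"
done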
[root@host01-2 ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@host01-2 ~]# ls
anaconda-ks.cfg  dns-kube-dns  dns-sidecar  etcd-amd64  hk
[root@host01-2 ~]# cd hk
[root@host01-2 hk]# ls
dns-kube-dns  etcd-amd64  kubeadm.host08-1.root.log.INFO.20180525-060140.2620  kube-apiserver   kube-proxy      weave-kube
dns-sidecar   k8s-dns     kubeadm.INFO                                         kube-controller  kube-scheduler  weave-npc
[root@host01-2 hk]# ls -al
total 974632
drwxr-xr-x. 2 root root       277 May 25 10:27 .
dr-xr-x---. 5 root root       226 May 25 10:27 ..
-rw-------. 1 root root  50727424 May 25 10:27 dns-kube-dns
-rw-------. 1 root root  42481152 May 25 10:27 dns-sidecar
-rw-------. 1 root root 193461760 May 25 10:27 etcd-amd64
-rw-------. 1 root root  41239040 May 25 10:27 k8s-dns
-rw-r--r--. 1 root root       343 May 25 10:27 kubeadm.host08-1.root.log.INFO.20180525-060140.2620
-rw-r--r--. 1 root root       343 May 25 10:27 kubeadm.INFO
-rw-------. 1 root root 225319936 May 25 10:27 kube-apiserver
-rw-------. 1 root root 148110336 May 25 10:27 kube-controller
-rw-------. 1 root root  98924032 May 25 10:27 kube-proxy
-rw-------. 1 root root  50635776 May 25 10:27 kube-scheduler
-rw-------. 1 root root  99517952 May 25 10:27 weave-kube
-rw-------. 1 root root  47575552 May 25 10:27 weave-npc
[root@host01-2 hk]# docker load -i kube-proxy
582b548209e1: Loading layer [==================================================>] 44.2 MB/44.2 MB
e20569a478ed: Loading layer [==================================================>] 3.358 MB/3.358 MB
6b4e4941a965: Loading layer [==================================================>] 51.35 MB/51.35 MB
Loaded image: k8s.gcr.io/kube-proxy-amd64:v1.10.3
[root@host01-2 hk]# docker load -i weave-kube
5bef08742407: Loading layer [==================================================>] 4.221 MB/4.221 MB
c3355c8b5c3e: Loading layer [==================================================>] 19.03 MB/19.03 MB
a83fa3df4138: Loading layer [==================================================>] 29.55 MB/29.55 MB
020fdc01af85: Loading layer [==================================================>] 11.6 MB/11.6 MB
2ea881a632b7: Loading layer [==================================================>] 2.048 kB/2.048 kB
396aa46bcbea: Loading layer [==================================================>] 35.09 MB/35.09 MB
Loaded image: docker.io/weaveworks/weave-kube:2.3.0
[root@host01-2 hk]# docker load -i weave-npc
8dccfe2dec8c: Loading layer [==================================================>] 2.811 MB/2.811 MB
3249ff6df12f: Loading layer [==================================================>] 40.52 MB/40.52 MB
3dc458d34b22: Loading layer [==================================================>] 2.56 kB/2.56 kB
Loaded image: docker.io/weaveworks/weave-npc:2.3.0
[root@host01-2 hk]#
*Worker node setup (load kube-proxy, weave-kube, and weave-npc):
docker load -i kube-proxy
docker load -i weave-kube
docker load -i weave-npc

(The load output is identical to the corresponding lines on the master above.)
*Run on the master node:
kubeadm init

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Example run:
[root@host01-4 ~]# docker images
REPOSITORY                                  TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64                 v1.10.3   4261d315109d   3 days ago     97.1 MB
k8s.gcr.io/kube-apiserver-amd64             v1.10.3   e03746fe22c3   3 days ago     225 MB
k8s.gcr.io/kube-controller-manager-amd64    v1.10.3   40c8d10b2d11   3 days ago     148 MB
k8s.gcr.io/kube-scheduler-amd64             v1.10.3   353b8f1d102e   3 days ago     50.4 MB
docker.io/weaveworks/weave-npc              2.3.0     21545eb3d6f9   6 weeks ago    47.2 MB
docker.io/weaveworks/weave-kube             2.3.0     f15514acce73   6 weeks ago    96.8 MB
k8s.gcr.io/etcd-amd64                       3.1.12    52920ad46f5b   2 months ago   193 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64      1.14.8    c2ce1ffb51ed   4 months ago   41 MB
k8s.gcr.io/k8s-dns-sidecar-amd64            1.14.8    6f7f2dc7fab5   4 months ago   42.2 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64           1.14.8    80cc5ea4b547   4 months ago   50.5 MB
[root@host01-4 ~]# kubeadm init
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [host01-4.cloud.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.12.14]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [host01-4.cloud.com] and IPs [10.10.12.14]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 25.003852 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node host01-4.cloud.com as master by adding a label and a taint
[markmaster] Master host01-4.cloud.com tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: hlz7wp.qjrgmsq2yn9f94wa
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.10.12.14:6443 --token hlz7wp.qjrgmsq2yn9f94wa --discovery-token-ca-cert-hash sha256:43e61417b20ede5ca530fe0638990bc1a805b5f2a9e25b5aa2f40023b392fb50

[root@host01-4 ~]# mkdir -p $HOME/.kube
[root@host01-4 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@host01-4 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@host01-4 ~]#
Copy the token from the output above:
kubeadm join 10.10.12.14:6443 --token hlz7wp.qjrgmsq2yn9f94wa --discovery-token-ca-cert-hash sha256:43e61417b20ede5ca530fe0638990bc1a805b5f2a9e25b5aa2f40023b392fb50
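If the token is lost or has expired (kubeadm bootstrap tokens are short-lived by default), recent kubeadm versions can issue a fresh one on the master; a hedged example:

kubeadm token list                           # inspect existing tokens
kubeadm token create --print-join-command    # mint a new token and print the full join command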
*Install the pod network (on the master)
sysctl net.bridge.bridge-nf-call-iptables=1

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

kubectl get pods --all-namespaces
Result:
[root@host01-4 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@host01-4 ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@host01-4 ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole.rbac.authorization.k8s.io "weave-net" created
clusterrolebinding.rbac.authorization.k8s.io "weave-net" created
role.rbac.authorization.k8s.io "weave-net" created
rolebinding.rbac.authorization.k8s.io "weave-net" created
daemonset.extensions "weave-net" created
[root@host01-4 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-host01-4.cloud.com                       1/1       Running   0          4m
kube-system   kube-apiserver-host01-4.cloud.com             1/1       Running   0          4m
kube-system   kube-controller-manager-host01-4.cloud.com    1/1       Running   0          5m
kube-system   kube-dns-86f4d74b45-t9df2                     0/3       Pending   0          5m
kube-system   kube-proxy-fs9d8                              1/1       Running   0          5m
kube-system   kube-scheduler-host01-4.cloud.com             1/1       Running   0          4m
kube-system   weave-net-zr5qr                               2/2       Running   0          11s
[root@host01-4 ~]#
*Run the join command copied from the master node on each of the two worker nodes:
[root@host01-3 ~]# kubeadm join 10.10.12.14:6443 --token hlz7wp.qjrgmsq2yn9f94wa --discovery-token-ca-cert-hash sha256:43e61417b20ede5ca530fe0638990bc1a805b5f2a9e25b5aa2f40023b392fb50
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.10.12.14:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.12.14:6443"
[discovery] Requesting info from "https://10.10.12.14:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.12.14:6443"
[discovery] Successfully established connection with API Server "10.10.12.14:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@host01-3 ~]#
*On the master node, confirm that the members have joined (STATUS must show Ready):
[root@host01-4 ~]# kubectl get no
NAME                 STATUS     ROLES     AGE       VERSION
host01-2.cloud.com   NotReady   <none>    6s        v1.10.3
host01-3.cloud.com   NotReady   <none>    11s       v1.10.3
host01-4.cloud.com   Ready      master    9m        v1.10.3

A couple of minutes later, every node reports Ready:

[root@host01-4 ~]# kubectl get no
NAME                 STATUS    ROLES     AGE       VERSION
host01-2.cloud.com   Ready     <none>    2m        v1.10.3
host01-3.cloud.com   Ready     <none>    2m        v1.10.3
host01-4.cloud.com   Ready     master    11m       v1.10.3
[root@host01-4 ~]#
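Instead of re-running the command by hand, kubectl can stream the status changes as each node's network pod comes up:

kubectl get no -w    # --watch; stop with Ctrl+C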
*Tab completion
On AWS and Azure, the kubectl command is often just 'k' (registered as an alias).
*Registering the alias:
[root@host01-4 ~]# alias k=kubectl
[root@host01-4 ~]# source <(kubectl completion bash | sed s/kubectl/k/g)
[root@host01-4 ~]# k get
You must specify the type of resource to get. Valid resource types include:

  * all
  * certificatesigningrequests (aka 'csr')
  * clusterrolebindings
  * clusterroles
  * componentstatuses (aka 'cs')
  * configmaps (aka 'cm')
  * controllerrevisions
  * cronjobs
  * customresourcedefinition (aka 'crd')
  * daemonsets (aka 'ds')
  * deployments (aka 'deploy')
  * endpoints (aka 'ep')
  * events (aka 'ev')
  * horizontalpodautoscalers (aka 'hpa')
  * ingresses (aka 'ing')
  * jobs
  * limitranges (aka 'limits')
  * namespaces (aka 'ns')
  * networkpolicies (aka 'netpol')
  * nodes (aka 'no')
  * persistentvolumeclaims (aka 'pvc')
  * persistentvolumes (aka 'pv')
  * poddisruptionbudgets (aka 'pdb')
  * podpreset
  * pods (aka 'po')
  * podsecuritypolicies (aka 'psp')
  * podtemplates
  * replicasets (aka 'rs')
  * replicationcontrollers (aka 'rc')
  * resourcequotas (aka 'quota')
  * rolebindings
  * roles
  * secrets
  * serviceaccounts (aka 'sa')
  * services (aka 'svc')
  * statefulsets (aka 'sts')
  * storageclasses (aka 'sc')
error: Required resource not specified.
Use "kubectl explain <resource>" for a detailed description of that resource (e.g. kubectl explain pods).
See 'kubectl get -h' for help and examples.
[root@host01-4 ~]# echo "alias k=kubectl" >> ~/.bashrc
[root@host01-4 ~]# echo "source <(kubectl completion bash | sed s/kubectl/k/g)" >> ~/.bashrc
*Pod concepts
The basis of microservices architecture: put exactly one application in one container!
- Network sharing: containers grouped into a pod share the network (a shared network namespace, so they talk to each other over localhost).
- Storage sharing: volumes are mounted at the pod level, so storage is shared as well!
- Containers in a single pod always land on the same host, and they cannot both serve on the same port.
(A badly designed pod: putting a DB container and a web container in the same pod causes problems when scaling out;
split them into a web pod, a WAS pod, and a DB pod instead.)
The pod is the smallest unit Kubernetes schedules and manages! (See the two-container sketch below.)
Pods on each worker node are controlled by the replication controller (RC) on the master node.
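Here is the promised sketch: a minimal two-container pod illustrating the sharing rules above. The pod name and the sidecar image are made up for illustration (reg.cloud.com mirrors the local registry used elsewhere in this post):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            # hypothetical name
spec:
  volumes:
  - name: shared-html               # one volume, visible to both containers
    emptyDir: {}
  containers:
  - name: web
    image: reg.cloud.com/nginx
    ports:
    - containerPort: 80             # only one container in the pod may claim port 80
    volumeMounts:
    - name: shared-html
      mountPath: /usr/share/nginx/html
  - name: content-writer
    image: reg.cloud.com/busybox    # hypothetical image in the same registry
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-html
      mountPath: /data

Both containers share one IP and one network namespace, so they could also reach each other via localhost; the emptyDir volume is what lets the writer hand files to the web server.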
- Reference/management model
The side doing the referencing is called a selector (key/value pairs); the side being referenced carries labels (key/value pairs).
An RC's selector picks out the pods whose labels match it; the same pattern appears when a pod's nodeSelector references node labels.
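The match is easy to see from the command line; a short sketch using the label this post's RC applies (run=nginx-app):

# pods whose labels satisfy the selector
kubectl get po -l run=nginx-app

# show each pod's labels alongside it
kubectl get po --show-labels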
- What a Kubernetes namespace is:
[root@host01-4 hk]# k get po -n kube-system -o wide
NAME                                          READY     STATUS    RESTARTS   AGE       IP            NODE
etcd-host01-4.cloud.com                       1/1       Running   0          58m       10.10.12.14   host01-4.cloud.com
kube-apiserver-host01-4.cloud.com             1/1       Running   0          58m       10.10.12.14   host01-4.cloud.com
kube-controller-manager-host01-4.cloud.com    1/1       Running   0          59m       10.10.12.14   host01-4.cloud.com
kube-dns-86f4d74b45-t9df2                     3/3       Running   0          1h        10.32.0.2     host01-4.cloud.com
kube-proxy-fs9d8                              1/1       Running   0          1h        10.10.12.14   host01-4.cloud.com
kube-proxy-r5bzj                              1/1       Running   0          51m       10.10.12.13   host01-3.cloud.com
kube-proxy-tvwnv                              1/1       Running   0          51m       10.10.12.12   host01-2.cloud.com
kube-scheduler-host01-4.cloud.com             1/1       Running   0          59m       10.10.12.14   host01-4.cloud.com
weave-net-hf9d5                               2/2       Running   1          51m       10.10.12.12   host01-2.cloud.com
weave-net-p5drv                               2/2       Running   1          51m       10.10.12.13   host01-3.cloud.com
weave-net-zr5qr                               2/2       Running   0          54m       10.10.12.14   host01-4.cloud.com
[root@host01-4 hk]# k cluster-info
Kubernetes master is running at https://10.10.12.14:6443
KubeDNS is running at https://10.10.12.14:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
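Everything above lives in the kube-system namespace, and the workloads below land in default; creating a namespace of your own takes one command (the `test` name is made up for illustration):

kubectl create namespace test
kubectl run nginx-test --image=reg.cloud.com/nginx --generator=run/v1 -n test
kubectl get po -n test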
[root@host01-4 hk]# k run --image=reg.cloud.com/nginx --port=80 --generator=run/v1
error: NAME is required for run
See 'kubectl run -h' for help and examples.
[root@host01-4 hk]# k run --image=reg.cloud.com/nginx nginx-app --port=80 --generator=run/v1
replicationcontroller "nginx-app" created
[root@host01-4 hk]# k get rc
NAME        DESIRED   CURRENT   READY     AGE
nginx-app   1         1         0         8s
[root@host01-4 hk]# k get po
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-gb6ch   1/1       Running   0          15s
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-gb6ch   1/1       Running   0          32s       10.36.0.1   host01-3.cloud.com
[root@host01-4 hk]# k logs nginx-app-gb6ch
[root@host01-4 hk]# k exec -it nginx-app-gb6ch bash
root@nginx-app-gb6ch:/#
- The pod name becomes the container's hostname.
*Even if you delete a pod by name, it comes right back: the RC recreates it.
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-gb6ch   1/1       Running   0          5m        10.36.0.1   host01-3.cloud.com
[root@host01-4 hk]# k delete po nginx-app-gb6ch
pod "nginx-app-gb6ch" deleted
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS              RESTARTS   AGE       IP        NODE
nginx-app-gnpsd   0/1       ContainerCreating   0          6s        <none>    host01-2.cloud.com
[root@host01-4 hk]# k get rc
NAME        DESIRED   CURRENT   READY     AGE
nginx-app   1         1         1         5m
[root@host01-4 hk]#
*Scale-in / scale-out
[root@host01-4 hk]# k describe rc nginx-app
Name:         nginx-app
Namespace:    default
Selector:     run=nginx-app
Labels:       run=nginx-app
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  run=nginx-app
  Containers:
   nginx-app:
    Image:        reg.cloud.com/nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  6m    replication-controller  Created pod: nginx-app-gb6ch
  Normal  SuccessfulCreate  1m    replication-controller  Created pod: nginx-app-gnpsd
[root@host01-4 hk]# k get po
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-gnpsd   1/1       Running   0          1m
[root@host01-4 hk]# k describe po nginx-app-gnpsd
Name:           nginx-app-gnpsd
Namespace:      default
Node:           host01-2.cloud.com/10.10.12.12
Start Time:     Fri, 25 May 2018 11:58:20 +0900
Labels:         run=nginx-app
Annotations:    <none>
Status:         Running
IP:             10.44.0.1
Controlled By:  ReplicationController/nginx-app
Containers:
  nginx-app:
    Container ID:   docker://6d0e9cb190b31334dee5dba4877ace52d8afd5a9956d7c50eae35d3107722a58
    Image:          reg.cloud.com/nginx
    Image ID:       docker-pullable://reg.cloud.com/nginx@sha256:a4fb15454c43237dbc6592c4f8e0b50160ceb03e852a10c9895cf2a6d16c7fe2
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 25 May 2018 11:58:29 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-85hdm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-85hdm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-85hdm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  SuccessfulMountVolume  1m    kubelet, host01-2.cloud.com  MountVolume.SetUp succeeded for volume "default-token-85hdm"
  Normal  Scheduled              1m    default-scheduler            Successfully assigned nginx-app-gnpsd to host01-2.cloud.com
  Normal  Pulling                1m    kubelet, host01-2.cloud.com  pulling image "reg.cloud.com/nginx"
  Normal  Pulled                 1m    kubelet, host01-2.cloud.com  Successfully pulled image "reg.cloud.com/nginx"
  Normal  Created                1m    kubelet, host01-2.cloud.com  Created container
  Normal  Started                1m    kubelet, host01-2.cloud.com  Started container
[root@host01-4 hk]# k get rc
NAME        DESIRED   CURRENT   READY     AGE
nginx-app   1         1         1         8m
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-gnpsd   1/1       Running   0          3m        10.44.0.1   host01-2.cloud.com
[root@host01-4 hk]# k scale rc nginx-app --replicas=3
replicationcontroller "nginx-app" scaled
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-gnpsd   1/1       Running   0          4m        10.44.0.1   host01-2.cloud.com
nginx-app-jfmkd   1/1       Running   0          10s       10.44.0.2   host01-2.cloud.com
nginx-app-ww6sn   1/1       Running   0          10s       10.36.0.1   host01-3.cloud.com
[root@host01-4 hk]# k scale rc nginx-app --replicas=0
replicationcontroller "nginx-app" scaled
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS        RESTARTS   AGE       IP          NODE
nginx-app-gnpsd   0/1       Terminating   0          5m        10.44.0.1   host01-2.cloud.com
nginx-app-jfmkd   0/1       Terminating   0          34s       10.44.0.2   host01-2.cloud.com
[root@host01-4 hk]# k get po -o wide
No resources found.
[root@host01-4 hk]# k scale rc nginx-app --replicas=1
replicationcontroller "nginx-app" scaled
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-7qpbv   1/1       Running   0          4s        10.36.0.1   host01-3.cloud.com
[root@host01-4 hk]#
*Viewing a pod as YAML
[root@host01-4 hk]# k get po nginx-app-7qpbv -o yaml
*Dumping to a YAML file, then creating a pod from it
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: web
  name: nginx-hk-app
spec:
  containers:
  - image: reg.cloud.com/nginx
    name: nginx-app
    ports:               # same effect as EXPOSE in Docker
    - containerPort: 80
      protocol: TCP

[root@host01-4 hk]# k get po nginx-app-7qpbv -o yaml > temp.yaml
[root@host01-4 hk]# vi temp.yaml    # trim it down to the spec above
[root@host01-4 hk]# k create -f temp.yaml
pod "nginx-hk-app" created
[root@host01-4 hk]# k get po -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-7qpbv   1/1       Running   0          6m        10.36.0.1   host01-3.cloud.com
nginx-hk-app      1/1       Running   0          13s       10.44.0.1   host01-2.cloud.com
[root@host01-4 hk]#
*Creating an RC from a YAML file and deploying it:
[root@host01-4 hk]# k get rc nginx-app -o yaml > hk.yaml
[root@host01-4 hk]# vi hk.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    run: nginx-app
  name: nginx-app2
spec:
  replicas: 1
  selector:
    type: test
  template:
    metadata:
      labels:
        type: test
    spec:
      containers:
      - image: reg.cloud.com/nginx
        name: nginx-app
        ports:
        - containerPort: 80
          protocol: TCP

Note that the metadata name is changed to nginx-app2 (it must not collide with the existing RC) and that spec.selector has to match the labels under spec.template.metadata.labels.

[root@host01-4 hk]# k create -f hk.yaml
replicationcontroller "nginx-app2" created
[root@host01-4 hk]# k get rc
NAME         DESIRED   CURRENT   READY     AGE
nginx-app    1         1         1         24m
nginx-app2   1         1         1         9s
[root@host01-4 hk]# k get po -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-app-7qpbv    1/1       Running   0          14m       10.36.0.1   host01-3.cloud.com
nginx-app2-tgqqf   1/1       Running   0          16s       10.36.0.2   host01-3.cloud.com
nginx-hk-app       1/1       Running   0          7m        10.44.0.1   host01-2.cloud.com
[root@host01-4 hk]#