Trying out Kubernetes The Hard Way
After reading the official documentation and Kubernetes完全ガイド 第2版 (Impress Books), I felt I more or less understood Kubernetes from a user's point of view, but not from an administrator's. So I decided to work through Kubernetes The Hard Way, which is said to give you hands-on practice with the internals.
I worked from WSL2 (Ubuntu 20.04) on a Windows machine.
01 Prerequisites
I don't have a powerful local machine, so I followed the original text and provisioned everything on GCP. It reportedly costs around $5.50 per day (possibly a bit more in the Tokyo region).
First, I installed the Google Cloud SDK following Installing Google Cloud SDK | Cloud SDK Documentation.
$ gcloud version
Google Cloud SDK 321.0.0
alpha 2020.12.11
beta 2020.12.11
bq 2.0.64
core 2020.12.11
gsutil 4.57
$ gcloud init
# (answered the various prompts)
$ gcloud auth login
# (answered the various prompts)
You are now logged in as [xxxx@xxx.xxx].
Your current project is [xxx-xxx-000000].  You can change this setting by running:
  $ gcloud config set project PROJECT_ID
$ gcloud config set compute/region asia-northeast1
Updated property [compute/region].
$ gcloud config set compute/zone asia-northeast1-a
Updated property [compute/zone].
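Not part of the guide, but the region/zone settings can be double-checked afterwards with gcloud config get-value (the same subcommand the later steps use):

$ gcloud config get-value compute/region
asia-northeast1
$ gcloud config get-value compute/zone
asia-northeast1-a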
The guide introduces tmux, but I use it daily, so I didn't pay particular attention to that part.
02 Installing the Client Tools
Install cfssl and cfssljson.
$ wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
$ chmod +x cfssl cfssljson
$ sudo mv cfssl cfssljson /opt/bin
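The upstream guide verifies the installation at this point; assuming /opt/bin is on the PATH in this WSL2 setup, something like:

$ cfssl version
$ cfssljson --version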
Install kubectl.
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /opt/bin
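A quick sanity check of the kubectl binary:

$ kubectl version --client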
03 Provisioning Compute Resources
Create the VPC.
$ gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
# This failed at first, so I opened the GCP console and enabled billing under "Billing".
# (snip)
NAME                     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
kubernetes-the-hard-way  CUSTOM       REGIONAL
Create a subnet in the VPC.
$ gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24
# Chose [3] asia-northeast1 at the prompt.
NAME        REGION           NETWORK                  RANGE
kubernetes  asia-northeast1  kubernetes-the-hard-way  10.240.0.0/24
Create the firewall rules.
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW         DENY  DISABLED
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp        False
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp       False
Create a public IP address.
$ gcloud compute addresses create kubernetes-the-hard-way --region $(gcloud config get-value compute/region)
$ gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
NAME                     ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION           SUBNET  STATUS
kubernetes-the-hard-way  34.84.85.xxx   EXTERNAL           asia-northeast1          RESERVED
Create three instances for the controllers.
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-2004-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done
Next, create three instances for the worker nodes.
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-2004-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
Check the instance list.
$ gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
NAME          ZONE               MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller-0  asia-northeast1-a  e2-standard-2               10.240.0.10  34.84.212.xxx   RUNNING
controller-1  asia-northeast1-a  e2-standard-2               10.240.0.11  34.84.218.xxx   RUNNING
controller-2  asia-northeast1-a  e2-standard-2               10.240.0.12  34.84.37.xxx    RUNNING
worker-0      asia-northeast1-a  e2-standard-2               10.240.0.20  35.190.225.xxx  RUNNING
worker-1      asia-northeast1-a  e2-standard-2               10.240.0.21  35.194.105.xxx  RUNNING
worker-2      asia-northeast1-a  e2-standard-2               10.240.0.22  35.200.46.xxx   RUNNING
Verify that SSH login works.
$ gcloud compute ssh controller-0
# Checked controller-1, controller-2, worker-0, worker-1, and worker-2 the same way.
04 Provisioning a CA and Generating TLS Certificates
Create the CA.
$ {
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
$ ls -ls
total 20
4 -rw-r--r-- 1 rkmathi rkmathi  232 Dec 31 14:21 ca-config.json
4 -rw-r--r-- 1 rkmathi rkmathi 1005 Dec 31 14:21 ca.csr
4 -rw-r--r-- 1 rkmathi rkmathi  211 Dec 31 14:21 ca-csr.json
4 -rw------- 1 rkmathi rkmathi 1679 Dec 31 14:21 ca-key.pem
4 -rw-r--r-- 1 rkmathi rkmathi 1318 Dec 31 14:21 ca.pem
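This isn't part of the guide, but since certificates were the unfamiliar part for me, inspecting the freshly generated CA with openssl (assuming it is installed) makes the output less of a black box:

$ openssl x509 -in ca.pem -noout -subject -dates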
Create the client and server certificates, starting with the admin client certificate.
$ {
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
}
Create the kubelet client certificates.
$ for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done
Create the Controller Manager client certificate.
$ {
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
Create the kube-proxy client certificate.
$ {
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
}
Create the Scheduler client certificate.
$ {
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
Create the Kubernetes API server certificate.
$ {
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
}
Create the service account key pair.
$ {
cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
}
Distribute the generated certificates to each instance.
$ for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
$ for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done
05 Generating Kubernetes Configuration Files for Authentication
Fetch the public IP address created earlier.
$ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
$ echo $KUBERNETES_PUBLIC_ADDRESS
34.84.85.XXX
Create the kubelet kubeconfig files.
$ for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
$ ls worker-*.kubeconfig
worker-0.kubeconfig  worker-1.kubeconfig  worker-2.kubeconfig
Create the kube-proxy kubeconfig file.
$ {
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
Create the kube-controller-manager kubeconfig file.
$ {
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
Create the kube-scheduler kubeconfig file.
$ {
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
Create the kubeconfig file for the admin user.
$ {
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}
Distribute the generated kubeconfig files.
$ for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
$ for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
06 Generating the Data Encryption Config and Key
Create an encryption key and generate the config file.
$ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
$ cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
$ for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done
07 Bootstrapping the etcd Cluster
Note: SSH into controller-0, controller-1, and controller-2, and run the same steps on each instance.
$ wget -q --show-progress --https-only --timestamping \
  "https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz"
$ {
  tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
  sudo mv etcd-v3.4.10-linux-amd64/etcd* /usr/local/bin/
}
$ {
  sudo mkdir -p /etc/etcd /var/lib/etcd
  sudo chmod 700 /var/lib/etcd
  sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
$ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
$ echo $INTERNAL_IP
$ ETCD_NAME=$(hostname -s)
$ cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
$ {
  sudo systemctl daemon-reload
  sudo systemctl enable etcd
  sudo systemctl start etcd
}
Verify that etcd is up.
$ sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
# OK if the output looks like this:
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
08 Bootstrapping the Kubernetes Control Plane
Note: SSH into controller-0, controller-1, and controller-2, and run the same steps on each instance.
Create the configuration directory.
$ sudo mkdir -p /etc/kubernetes/config
Download and install the controller binaries.
$ wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl"
$ {
  chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
Configure the API Server.
$ {
  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
$ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
$ cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Configure the Controller Manager.
$ sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
$ cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Configure the Scheduler.
$ sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
$ cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
$ cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the services the control plane needs.
$ {
  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
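Not in the guide, but before moving on it's easy to confirm that all three units actually came up (the componentstatuses check below is the real verification):

$ sudo systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
# prints "active" three times if everything started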
Enable the HTTP health check.
$ sudo apt-get update
$ sudo apt-get install -y nginx
$ cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
$ {
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
  sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
$ sudo systemctl restart nginx
$ sudo systemctl enable nginx
Verify.
$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 31 Dec 2020 06:29:54 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
Configure RBAC.
Note: run this step on controller-0 only.
$ cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
$ cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
Configure the frontend load balancer.
Note: run this step from the local machine.
$ {
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
}
Verify.
$ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
$ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.6",
  "gitCommit": "dff82dc0de47299ab66c83c626e08b245ab19037",
  "gitTreeState": "clean",
  "buildDate": "2020-07-15T16:51:04Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
09 Bootstrapping the Kubernetes Worker Nodes
Note: SSH into worker-0, worker-1, and worker-2, and run the same steps on each instance.
Install the dependency packages.
$ {
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset
}
Disable swap.
$ sudo swapon --show
# No output, so swap was already disabled.
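If swap had been enabled, the guide disables it with swapoff (note this does not persist across reboots; any swap entries in /etc/fstab would also need removing):

$ sudo swapoff -a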
Download and install the required binaries.
$ wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc91/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz \
  https://github.com/containerd/containerd/releases/download/v1.3.6/containerd-1.3.6-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubelet
$ sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
$ {
  mkdir containerd
  tar -xvf crictl-v1.18.0-linux-amd64.tar.gz
  tar -xvf containerd-1.3.6-linux-amd64.tar.gz -C containerd
  sudo tar -xvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
  sudo mv runc.amd64 runc
  chmod +x crictl kubectl kube-proxy kubelet runc
  sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
  sudo mv containerd/bin/* /bin/
}
Configure CNI networking.
$ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
$ cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
$ cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
EOF
Configure containerd.
$ sudo mkdir -p /etc/containerd
$ cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
$ cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
Configure the kubelet.
$ {
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
}
$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Configure kube-proxy.
$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
$ {
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
}
Verify.
Note: run from the local machine.
$ gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   37s   v1.18.6
worker-1   Ready    <none>   38s   v1.18.6
worker-2   Ready    <none>   38s   v1.18.6
10 Configuring kubectl for Remote Access
Set up the local kubeconfig.
$ {
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
Verify.
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
$ kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
worker-0   Ready    <none>   8m2s   v1.18.6
worker-1   Ready    <none>   8m3s   v1.18.6
worker-2   Ready    <none>   8m3s   v1.18.6
11 Provisioning Pod Network Routes
Check the information needed for the routing table (each worker's internal IP and pod CIDR).
$ for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
Set up the routes.
$ for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
Verify.
$ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-e5216601b60de276  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   0
default-route-fa62aeb3502b12f0  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
12 Deploying the DNS Cluster Add-on
Deploy CoreDNS.
$ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.7.0.yaml
Verify.
$ kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5677dc4cdb-4km4b   1/1     Running   0          31s
coredns-5677dc4cdb-jjjkj   1/1     Running   0          31s
Deploy a Pod to try out DNS resolution.
$ kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
$ kubectl get pods -l run=busybox
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          23s
$ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
$ kubectl exec -ti $POD_NAME -- nslookup kubernetes
Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
13 Smoke Test
Check that Secrets are encrypted.
$ kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
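The upstream guide then verifies the encryption by reading the raw entry out of etcd on controller-0; roughly like the following, where the hexdump should show a k8s:enc:aescbc:v1:key1 prefix instead of plaintext:

$ gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"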
Create a Deployment.
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl get pods -l app=nginx -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-f89759699-4p5cz   1/1     Running   0          18s   10.200.0.3   worker-0   <none>           <none>
Test port forwarding.
$ POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
$ echo $POD_NAME
nginx-f89759699-4p5cz
$ kubectl port-forward $POD_NAME 8080:80

# In another tab:
$ curl --head http://127.0.0.1:8080
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Thu, 31 Dec 2020 07:16:38 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 15 Dec 2020 13:59:38 GMT
Connection: keep-alive
ETag: "5fd8c14a-264"
Accept-Ranges: bytes
Check logs.
$ kubectl logs $POD_NAME
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
127.0.0.1 - - [31/Dec/2020:07:16:38 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.68.0" "-"
Test exec.
$ kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.19.6
Test a Service.
$ kubectl expose deployment nginx --port 80 --type NodePort
service/nginx exposed
$ NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way
NAME                                         NETWORK                  DIRECTION  PRIORITY  ALLOW      DENY  DISABLED
kubernetes-the-hard-way-allow-nginx-service  kubernetes-the-hard-way  INGRESS    1000      tcp:30410        False
$ EXTERNAL_IP=$(gcloud compute instances describe worker-0 --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
$ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Thu, 31 Dec 2020 07:25:16 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 15 Dec 2020 13:59:38 GMT
Connection: keep-alive
ETag: "5fd8c14a-264"
Accept-Ranges: bytes
14 Cleaning Up
This step just deletes the resources used above, so I'll omit it here.
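For reference, the upstream guide's cleanup is roughly the following sketch (resource names match what was created above):

$ gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2 \
  --zone $(gcloud config get-value compute/zone)
$ gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
  --region $(gcloud config get-value compute/region)
$ gcloud -q compute target-pools delete kubernetes-target-pool
$ gcloud -q compute http-health-checks delete kubernetes
$ gcloud -q compute addresses delete kubernetes-the-hard-way
$ gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
  kubernetes-the-hard-way-allow-health-check
$ gcloud -q compute routes delete \
  kubernetes-route-10-200-0-0-24 \
  kubernetes-route-10-200-1-0-24 \
  kubernetes-route-10-200-2-0-24
$ gcloud -q compute networks subnets delete kubernetes
$ gcloud -q compute networks delete kubernetes-the-hard-way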
Taking a look around before cleaning up (k below is an alias for kubectl).
# Besides default, the kube-node-lease, kube-public, and kube-system namespaces are created automatically.
$ k get ns
NAME              STATUS   AGE
default           Active   60m
kube-node-lease   Active   60m
kube-public       Active   60m
kube-system       Active   60m

# The node list shows worker-{0,1,2}, as expected.
$ k get no -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
worker-0   Ready    <none>   30m   v1.18.6   10.240.0.20   <none>        Ubuntu 20.04.1 LTS   5.4.0-1029-gcp   containerd://1.3.6
worker-1   Ready    <none>   31m   v1.18.6   10.240.0.21   <none>        Ubuntu 20.04.1 LTS   5.4.0-1029-gcp   containerd://1.3.6
worker-2   Ready    <none>   31m   v1.18.6   10.240.0.22   <none>        Ubuntu 20.04.1 LTS   5.4.0-1029-gcp   containerd://1.3.6

# Listing Pods across all namespaces shows the coredns Pods that were added near the end.
$ k get po -o wide --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
default       busybox                    1/1     Running   0          22m   10.200.0.2   worker-0   <none>           <none>
default       nginx-f89759699-4p5cz      1/1     Running   0          18m   10.200.0.3   worker-0   <none>           <none>
kube-system   coredns-5677dc4cdb-4km4b   1/1     Running   0          23m   10.200.2.2   worker-2   <none>           <none>
kube-system   coredns-5677dc4cdb-jjjkj   1/1     Running   0          23m   10.200.1.2   worker-1   <none>           <none>
Impressions
I didn't really understand server certificates and the like to begin with, so I hadn't known those steps were even necessary. One pass wasn't enough to fully grasp everything, so it seems worth going through this a few more times.
I learned that the components Kubernetes needs (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, and so on) are quite independent of each other. They're all required for the cluster to work, but there's no fixed order in which they have to be deployed.
Considering that this work would have to be repeated every time a control plane node or worker node is added, I came away convinced that using a managed service is the better option. You don't have to manage the host OS either.