In this example, the Master and Node components are deployed on the same hosts.
| Hostname | Role | IP |
|---|---|---|
| CFZX55-21.host.com | kubelet | 10.211.55.21 |
| CFZX55-22.host.com | kubelet | 10.211.55.22 |
Perform the following on host 21.
```bash
#!/bin/bash
KUBE_CONFIG="/opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig"
KUBE_APISERVER="https://10.211.55.10:7443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/bin/certs/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kubelet-bootstrap \
  --token=$(awk -F "," '{print $1}' /opt/kubernetes/bin/certs/kube-apiserver.token.csv) \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```

Notes:
About the two occurrences of "kubelet-bootstrap" in the `kubectl create clusterrolebinding` command:

- The first "kubelet-bootstrap" is the name of the ClusterRoleBinding resource created in the cluster; list it with `kubectl get clusterrolebinding`.
- The second, in `--user=kubelet-bootstrap`, sets `subjects.kind=User` and `subjects.name=kubelet-bootstrap` on that ClusterRoleBinding; inspect it with `kubectl get clusterrolebinding kubelet-bootstrap -o yaml`.

Once this binding exists, the user name "kubelet-bootstrap" from kube-apiserver's `kube-apiserver.token.csv` file actually carries meaning in the cluster.
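The `--token=$(awk -F "," '{print $1}' …)` substitution in the script pulls the first comma-separated field (the token itself) out of token.csv. A quick sketch with a sample row (the token value and `/tmp` path here are invented for illustration):

```shell
# token.csv rows look like: token,user,uid,"groups" -- the token is field 1
echo 'c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"' \
  > /tmp/kube-apiserver.token.csv

# Same awk invocation as in the script, pointed at the sample file
awk -F "," '{print $1}' /tmp/kube-apiserver.token.csv
# → c47ffb939f5ca36231d9e3121a252940
```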
Run the script:
```
[root@cfzx55-21 k8s-shell]# vim kubelet-config.sh
[root@cfzx55-21 k8s-shell]# chmod +x kubelet-config.sh
[root@cfzx55-21 k8s-shell]# ./kubelet-config.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
[root@cfzx55-21 k8s-shell]#
```

Copy the generated kubeconfig file to host 22:
```
[root@cfzx55-21 k8s-shell]# scp /opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig root@cfzx55-22:/opt/kubernetes/cfg/
root@cfzx55-22's password:
kubelet-bootstrap.kubeconfig                  100% 2102     1.2MB/s   00:00
[root@cfzx55-21 k8s-shell]#
```

Create the kubelet startup script.

/opt/kubernetes/bin/kubelet-startup.sh
```bash
#!/bin/sh
./kubelet \
  --v=2 \
  --log-dir=/data/logs/kubernetes/kubelet \
  --hostname-override=cfzx55-21.host.com \
  --network-plugin=cni \
  --cluster-domain=cluster.local \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig \
  --config=/opt/kubernetes/cfg/kubelet-config.yml \
  --cert-dir=/opt/kubernetes/bin/certs \
  --pod-infra-container-image=ibmcom/pause:3.1
```

Note: to simplify testing in this example, delete the `--network-plugin` flag first.
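One way to drop the flag non-interactively is a `sed` line-delete; sketched here against a throwaway copy in `/tmp`, so the file contents and path are illustrative only:

```shell
# Throwaway copy of a startup script carrying the flag
cat > /tmp/kubelet-startup.sh <<'EOF'
#!/bin/sh
./kubelet \
  --v=2 \
  --network-plugin=cni \
  --cluster-domain=cluster.local
EOF

# Delete the whole line holding the flag; the backslash continuations
# on the surrounding lines still join up after the removal
sed -i '/--network-plugin/d' /tmp/kubelet-startup.sh
cat /tmp/kubelet-startup.sh
```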
Create the configuration file.
/opt/kubernetes/cfg/kubelet-config.yml
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 192.168.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/bin/certs/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
```

Create the two files above:
```
[root@cfzx55-21 bin]# vim kubelet-startup.sh
[root@cfzx55-21 bin]# chmod +x kubelet-startup.sh
[root@cfzx55-21 bin]# mkdir -pv /data/logs/kubernetes/kubelet
[root@cfzx55-21 bin]# cd ../cfg/
[root@cfzx55-21 cfg]# vim kubelet-config.yml
[root@cfzx55-21 cfg]#
```

Create the supervisor startup file.
/etc/supervisord.d/kube-kubelet.ini
```ini
[program:kube-kubelet-55-21]
command=/opt/kubernetes/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
```

Start the service:
```
[root@cfzx55-21 cfg]# supervisorctl update
kube-kubelet-55-21: added process group
[root@cfzx55-21 cfg]# supervisorctl status
etcd-server-55-21               RUNNING   pid 1033, uptime 6:49:47
kube-apiserver-55-21            RUNNING   pid 1034, uptime 6:49:47
kube-controller-manager-55-21   RUNNING   pid 3558, uptime 1:16:30
kube-kubelet-55-21              RUNNING   pid 3762, uptime 0:00:31
kube-scheduler-55-21            RUNNING   pid 3486, uptime 1:33:53
[root@cfzx55-21 cfg]#
```

Copy the files above to host 22:
```
[root@cfzx55-21 ~]# scp /opt/kubernetes/bin/kubelet-startup.sh root@cfzx55-22:/opt/kubernetes/bin/
root@cfzx55-22's password:
kubelet-startup.sh                            100%  451   224.6KB/s   00:00
[root@cfzx55-21 ~]# scp /opt/kubernetes/cfg/kubelet-config.yml root@cfzx55-22:/opt/kubernetes/cfg/
root@cfzx55-22's password:
kubelet-config.yml                            100%  620   379.6KB/s   00:00
[root@cfzx55-21 ~]# scp /etc/supervisord.d/kube-kubelet.ini root@cfzx55-22:/etc/supervisord.d/
root@cfzx55-22's password:
kube-kubelet.ini                              100%  428   325.6KB/s   00:00
```

Perform the following on host 22.
```
# Create the log directory
[root@cfzx55-22 ~]# mkdir -pv /data/logs/kubernetes/kubelet
# Adjust the copied files for this host
[root@cfzx55-22 ~]# vim /opt/kubernetes/bin/kubelet-startup.sh
[root@cfzx55-22 ~]# vim /etc/supervisord.d/kube-kubelet.ini
```

Start the service:
```
[root@cfzx55-22 ~]# supervisorctl update
[root@cfzx55-22 ~]# supervisorctl status
etcd-server-55-22               RUNNING   pid 1013, uptime 6:57:00
kube-apiserver-55-22            RUNNING   pid 1012, uptime 6:57:00
kube-controller-manager-55-22   RUNNING   pid 3256, uptime 1:23:46
kube-kubelet-55-22              RUNNING   pid 3357, uptime 0:00:44
kube-scheduler-55-22            RUNNING   pid 3187, uptime 1:34:20
[root@cfzx55-22 ~]#
```

Check the kubelet certificate signing requests:
```
[root@cfzx55-22 ~]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-wDoAeuoFDj7XW1J4CeJqF9nZ7-uaWxi-kcQI55as66M   8m23s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-ytydzudqHyxrhrWO0MLxIs51gDxgGsxuwIts6C9r0dU   62s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
[root@cfzx55-22 ~]#
```

Approve the certificates:
```
[root@cfzx55-22 ~]# kubectl certificate approve node-csr-wDoAeuoFDj7XW1J4CeJqF9nZ7-uaWxi-kcQI55as66M
certificatesigningrequest.certificates.k8s.io/node-csr-wDoAeuoFDj7XW1J4CeJqF9nZ7-uaWxi-kcQI55as66M approved
[root@cfzx55-22 ~]# kubectl certificate approve node-csr-ytydzudqHyxrhrWO0MLxIs51gDxgGsxuwIts6C9r0dU
certificatesigningrequest.certificates.k8s.io/node-csr-ytydzudqHyxrhrWO0MLxIs51gDxgGsxuwIts6C9r0dU approved
```

Check the nodes:
```
[root@cfzx55-22 ~]# kubectl get no
NAME                 STATUS     ROLES    AGE     VERSION
cfzx55-21.host.com   NotReady   <none>   2m19s   v1.23.4
cfzx55-22.host.com   NotReady   <none>   2m9s    v1.23.4
[root@cfzx55-22 ~]#
```

Since no network plugin is installed yet, the nodes show NotReady.
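The approval above was done one CSR name at a time; with more nodes, the Pending names can be extracted and piped to approval in one go. The extraction step is sketched below on captured sample output (CSR names shortened for illustration); against a live cluster the commented line at the end would do the actual approval:

```shell
# Sample 'kubectl get csr' output: header row plus two pending requests
csr_output='NAME           AGE     SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-aaa   8m23s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-bbb   62s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending'

# Skip the header, keep rows whose last column is Pending, print the name
echo "$csr_output" | awk 'NR > 1 && $NF == "Pending" {print $1}'
# → node-csr-aaa
# → node-csr-bbb

# On a live cluster:
#   kubectl get csr | awk 'NR>1 && $NF=="Pending"{print $1}' | xargs -r kubectl certificate approve
```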
| Hostname | Role | IP |
|---|---|---|
| CFZX55-21.host.com | kube-proxy | 10.211.55.21 |
| CFZX55-22.host.com | kube-proxy | 10.211.55.22 |
Perform the following on ops host 200.
/opt/certs/kube-proxy-csr.json
```json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "beijing",
      "L": "beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
```

Generate the certificate:
```
[root@cfzx55-200 certs]# pwd
/opt/certs
[root@cfzx55-200 certs]# vim kube-proxy-csr.json
[root@cfzx55-200 certs]# cfssl gencert \
> -ca=ca.pem \
> -ca-key=ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> kube-proxy-csr.json | cfssl-json -bare kube-proxy
2022/03/13 14:58:59 [INFO] generate received request
2022/03/13 14:58:59 [INFO] received CSR
2022/03/13 14:58:59 [INFO] generating key: rsa-2048
2022/03/13 14:58:59 [INFO] encoded CSR
2022/03/13 14:58:59 [INFO] signed certificate with serial number 705933654696297683901130256446644781117492665095
2022/03/13 14:58:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@cfzx55-200 certs]# ll kube-proxy*.pem
-rw------- 1 root root 1679 Mar 13 14:58 kube-proxy-key.pem
-rw-r--r-- 1 root root 1415 Mar 13 14:58 kube-proxy.pem
[root@cfzx55-200 certs]#
```

Copy the generated certificates to nodes 21 and 22:
```
[root@cfzx55-200 certs]# scp kube-proxy*.pem root@cfzx55-21:/opt/kubernetes/bin/certs/
root@cfzx55-21's password:
kube-proxy-key.pem                            100% 1679   591.9KB/s   00:00
kube-proxy.pem                                100% 1415   895.5KB/s   00:00
[root@cfzx55-200 certs]# scp kube-proxy*.pem root@cfzx55-22:/opt/kubernetes/bin/certs/
root@cfzx55-22's password:
kube-proxy-key.pem                            100% 1679   587.6KB/s   00:00
kube-proxy.pem                                100% 1415   737.5KB/s   00:00
[root@cfzx55-200 certs]#
```

Create the kubeconfig script:
```bash
#!/bin/bash
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.211.55.10:7443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/bin/certs/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/bin/certs/kube-proxy.pem \
  --client-key=/opt/kubernetes/bin/certs/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
```

Run the script:
```
[root@cfzx55-21 k8s-shell]# vim kube-proxy-config.sh
[root@cfzx55-21 k8s-shell]# chmod +x kube-proxy-config.sh
[root@cfzx55-21 k8s-shell]# ./kube-proxy-config.sh
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@cfzx55-21 k8s-shell]#
```

Copy the generated kubeconfig file to host 22:
```
[root@cfzx55-21 cfg]# scp kube-proxy.kubeconfig root@cfzx55-22:/opt/kubernetes/cfg/
root@cfzx55-22's password:
kube-proxy.kubeconfig                         100% 6224     2.9MB/s   00:00
[root@cfzx55-21 cfg]#
```

Write the ipvs module-loading script:
```bash
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
```

On host 21:
```
[root@cfzx55-21 k8s-shell]# vim ipvs.sh
[root@cfzx55-21 k8s-shell]# chmod +x ipvs.sh
[root@cfzx55-21 k8s-shell]# ./ipvs.sh
[root@cfzx55-21 k8s-shell]# lsmod | grep ip_vs
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12740  0
nf_conntrack_sip       33780  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26583  4 ip_vs_ftp,nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          139264  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@cfzx55-21 k8s-shell]#
```

Do the same on host 22 (omitted here).
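A note on the `grep -o "^[^.]*"` in ipvs.sh: module files in that directory are named like `ip_vs_rr.ko.xz`, and the pattern keeps everything before the first dot, i.e. the bare module name that `modprobe` expects. A standalone illustration (sample filenames only):

```shell
# "^[^.]*" matches from the start of each line up to the first dot
printf 'ip_vs_rr.ko.xz\nip_vs_wrr.ko.xz\nip_vs.ko.xz\n' | grep -o "^[^.]*"
# → ip_vs_rr
# → ip_vs_wrr
# → ip_vs
```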
/opt/kubernetes/bin/kube-proxy-startup.sh
```bash
#!/bin/sh
./kube-proxy \
  --v=2 \
  --log-dir=/data/logs/kubernetes/kube-proxy \
  --config=/opt/kubernetes/cfg/kube-proxy-config.yml
```

Create the file, make it executable, and create the log directory:
```
[root@cfzx55-22 bin]# vim kube-proxy-startup.sh
[root@cfzx55-22 bin]# chmod +x kube-proxy-startup.sh
[root@cfzx55-22 bin]# mkdir -p /data/logs/kubernetes/kube-proxy
```

Create the configuration file.
/opt/kubernetes/cfg/kube-proxy-config.yml
```yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: cfzx55-22.host.com
clusterCIDR: 192.168.0.0/16
```

Copy the script and the configuration file to host 21:
```
[root@cfzx55-22 ~]# scp /opt/kubernetes/bin/kube-proxy-startup.sh root@cfzx55-21:/opt/kubernetes/bin/
root@cfzx55-21's password:
kube-proxy-startup.sh                         100%  135    79.4KB/s   00:00
[root@cfzx55-22 ~]# scp /opt/kubernetes/cfg/kube-proxy-config.yml root@cfzx55-21:/opt/kubernetes/cfg/
root@cfzx55-21's password:
kube-proxy-config.yml                         100%  268   162.6KB/s   00:00
[root@cfzx55-22 ~]#
```

Adjust on host 21:
```
# Change the hostname override
[root@cfzx55-21 ~]# vim /opt/kubernetes/cfg/kube-proxy-config.yml
# Create the log directory
[root@cfzx55-21 ~]# mkdir -p /data/logs/kubernetes/kube-proxy
```

Create the supervisor startup file.
/etc/supervisord.d/kube-proxy.ini
```ini
[program:kube-proxy-55-21]
command=/opt/kubernetes/bin/kube-proxy-startup.sh
numprocs=1
directory=/opt/kubernetes/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/kube-proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
```

Start the service:
```
[root@cfzx55-21 ~]# supervisorctl status
etcd-server-55-21               RUNNING   pid 1033, uptime 7:42:48
kube-apiserver-55-21            RUNNING   pid 1034, uptime 7:42:48
kube-controller-manager-55-21   RUNNING   pid 3558, uptime 2:09:31
kube-kubelet-55-21              RUNNING   pid 4143, uptime 0:37:23
kube-proxy-55-21                RUNNING   pid 8899, uptime 0:00:31
kube-scheduler-55-21            RUNNING   pid 3486, uptime 2:26:54
[root@cfzx55-21 ~]#
```

Copy the ini file to host 22:
```
[root@cfzx55-21 ~]# scp /etc/supervisord.d/kube-proxy.ini root@cfzx55-22:/etc/supervisord.d/
root@cfzx55-22's password:
kube-proxy.ini                                100%  435   245.6KB/s   00:00
[root@cfzx55-21 ~]#
```

Start the service on host 22:
```
# Change the program name
[root@cfzx55-22 ~]# vim /etc/supervisord.d/kube-proxy.ini
# Start the service
[root@cfzx55-22 ~]# supervisorctl update
kube-proxy-55-22: added process group
[root@cfzx55-22 ~]# supervisorctl status
etcd-server-55-22               RUNNING   pid 1013, uptime 7:44:52
kube-apiserver-55-22            RUNNING   pid 1012, uptime 7:44:52
kube-controller-manager-55-22   RUNNING   pid 3256, uptime 2:11:38
kube-kubelet-55-22              RUNNING   pid 3740, uptime 0:39:37
kube-proxy-55-22                RUNNING   pid 8714, uptime 0:00:32
kube-scheduler-55-22            RUNNING   pid 3187, uptime 2:22:12
[root@cfzx55-22 ~]#
```

With that, the Kubernetes cluster deployment is complete.