Prerequisite: make sure a single-master setup already works before attempting this, so that failures here are easier to diagnose.
Environment topology:
|master(221)|master(222)|master(223)|work(224)|work(225)|keepalive(226)|
Machine requirements: each of the three masters needs 2 CPU cores and 2 GB of RAM, otherwise kubeadm will report an error.
1. Operations on the master nodes
1.1. Install keepalived on all three master nodes
# yum install -y socat keepalived ipvsadm conntrack
1.1.1. Create the following keepalived configuration file
# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        10.21.213.226
    }
}
virtual_server 10.21.213.226 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 10.21.213.221 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.21.213.222 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.21.213.223 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
1.1.2. Copy the keepalived config to the other two masters
# ssh-keygen
# ssh-copy-id -p xxx root@10.21.213.222
# ssh-copy-id -p xxx root@10.21.213.223
# scp -P xxx /etc/keepalived/keepalived.conf 10.21.213.222:/etc/keepalived/
# scp -P xxx /etc/keepalived/keepalived.conf 10.21.213.223:/etc/keepalived/
1.1.3. Adjust the role and priority on the other two masters
# vim /etc/keepalived/keepalived.conf
On 222: change "state MASTER" to "state BACKUP" and lower "priority" (e.g. 90).
On 223: change "state MASTER" to "state BACKUP" and lower "priority" further (e.g. 80).
1.1.4. Start keepalived on all three masters
First confirm the chosen virtual IP is not already in use (this ping should get no reply):
# ping 10.21.213.226
# systemctl start keepalived
# systemctl enable keepalived
# systemctl status keepalived
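After keepalived is running, exactly one of the three masters should hold the VIP. The check above can be sketched as a tiny script; the helper name `holds_vip` is an assumption for illustration, not part of keepalived:

```shell
#!/bin/sh
# holds_vip: print "holder" if the given `ip -4 addr show` output contains
# the keepalived VIP, "standby" otherwise. (Helper name is hypothetical.)
VIP="10.21.213.226"

holds_vip() {
    if echo "$1" | grep -qw "$VIP"; then
        echo "holder"
    else
        echo "standby"
    fi
}

# Run against this node's real interface list; on the active master this
# prints "holder", on the other two it prints "standby".
holds_vip "$(ip -4 addr show 2>/dev/null)"
```

Run it on each master in turn; after `systemctl stop keepalived` on the holder, the VIP (and the "holder" answer) should move to the next-highest-priority node.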
1.2. Install Kubernetes
1.2.1. Run the following on any one of the masters
# vim /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "10.21.213.226:6443"
apiServer:
  certSANs:
  - 10.21.213.221
  - 10.21.213.222
  - 10.21.213.223
  - 10.21.213.226
networking:
  podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
1.2.2. Verify the environment is ready; if it is not, init will fail with all kinds of errors
# ping 10.21.213.226
# ping baidu.com
# systemctl status docker
# systemctl status kubelet
# If kubeadm has already been run on this machine, reset it first (see the previous post).
# Errors during init almost always mean one of the environment checks above failed.
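Because the kubeadm config above sets kube-proxy to ipvs mode, the ip_vs kernel modules must be loaded. A minimal pre-flight sketch, assuming the usual module list for a CentOS 7 / kernel < 4.19 host (on newer kernels `nf_conntrack_ipv4` is just `nf_conntrack`); the helper name `missing_mods` is illustrative:

```shell
#!/bin/sh
# missing_mods: given the output of `lsmod`, print each ip_vs-related module
# that is not loaded. (Helper name and module list are illustrative.)
missing_mods() {
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
        # lsmod lines start with the module name followed by whitespace
        echo "$1" | grep -q "^$m " || echo "$m"
    done
}

# Try to load the modules, then report anything still missing
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe "$m" 2>/dev/null
done
missing_mods "$(lsmod 2>/dev/null)"
```

If anything is printed, kube-proxy will silently fall back from ipvs to iptables mode.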
1.2.3. Initialize the first master
# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml | sed "s@192.168.0.0/16@10.244.0.0/16@g" | kubectl apply -f -
# ss -anlp | grep 6443
tcp LISTEN 0 128 [::]:6443 [::]:* users:(("kube-apiserver",pid=85654,fd=5))
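Beyond checking the listening socket, the apiserver's /healthz endpoint (the same path keepalived probes in SSL_GET above) returns the literal body "ok" when healthy. A minimal sketch of that check via the VIP; the helper name `check_healthz` is an assumption:

```shell
#!/bin/sh
# check_healthz: interpret the body returned by the apiserver's /healthz
# endpoint; a healthy apiserver returns exactly "ok".
check_healthz() {
    if [ "$1" = "ok" ]; then
        echo "apiserver healthy"
    else
        echo "apiserver NOT healthy: '$1'"
    fi
}

# -k because the serving cert is signed by the cluster CA, not a public one
check_healthz "$(curl -ks https://10.21.213.226:6443/healthz)"
```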
1.3.1. Join the other two masters to the control plane
Set up passwordless SSH access between the masters, then run the following:
# cat k8s-cluster-other-init.sh
#!/bin/bash
IPS=(10.21.213.222 10.21.213.223)
JOIN_CMD=`kubeadm token create --print-join-command 2> /dev/null`
for index in 0 1; do
    ip=${IPS[${index}]}
    ssh -p1022 $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
    scp -P1022 /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
    scp -P1022 /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
    scp -P1022 /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
    scp -P1022 /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
    scp -P1022 /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
    scp -P1022 /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
    scp -P1022 /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
    scp -P1022 /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
    scp -P1022 /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
    scp -P1022 /etc/kubernetes/admin.conf $ip:~/.kube/config
    ssh -p1022 ${ip} "${JOIN_CMD} --control-plane"
done

# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
10.21.213.221   Ready    master   39m     v1.17.0
10.21.213.222   Ready    master   17m     v1.17.0
10.21.213.223   Ready    master   4m48s   v1.17.0
1.3.2. Join the two worker nodes
Run the following on both worker nodes:
# kubeadm join 10.21.213.226:6443 --token iukrll.q642vyk9qw9b4xu4 \
    --discovery-token-ca-cert-hash sha256:df987565d46a6641613626e4581ff4a3d5af13863c3f60b31ccd935d265a551e
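Once the workers have joined, a quick smoke test is to count Ready nodes; with this topology the total should be 5 (3 masters + 2 workers). A minimal sketch, where the helper name `ready_count` is an assumption:

```shell
#!/bin/sh
# ready_count: count the lines whose STATUS column is "Ready" in the output
# of `kubectl get nodes --no-headers`.
ready_count() {
    echo "$1" | awk '$2 == "Ready"' | grep -c .
}

# For this cluster the expected value is 5
ready_count "$(kubectl get nodes --no-headers 2>/dev/null)"
```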