k8s Installation: Building a Single-Master Cluster

Published 2020-03-02 · 1120 reads


Not recommended for production use, because a single master has no redundancy.

Prerequisite: the Kubernetes packages are already installed; see the previous post on installing k8s.

Architecture:

Clone five machines from the first one and configure the environment on each.

1. On the master node (10.21.213.221), write the kubeadm config file

cat /etc/kubernetes/kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "10.21.213.221:6443"
apiServer:
   certSANs:
   - 10.21.213.221
networking:
   podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
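Before running init, it's worth a quick check that the fields that must match your environment (control-plane endpoint, pod subnet, image mirror) are actually present in the file. A minimal sketch -- it writes a temp copy of the relevant lines so it runs anywhere; on the real master, set CFG=/etc/kubernetes/kubeadm-config.yaml and drop the heredoc:

```shell
# Self-contained sketch: on the real master, point CFG at
# /etc/kubernetes/kubeadm-config.yaml instead of a temp file.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
controlPlaneEndpoint: "10.21.213.221:6443"
networking:
   podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
EOF

# Report whether each required key is present
result=$(for key in controlPlaneEndpoint podSubnet imageRepository; do
  grep -q "$key:" "$CFG" && echo "$key: ok" || echo "$key: MISSING"
done)
echo "$result"
rm -f "$CFG"
```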

Check that the environment is ready:

systemctl status docker
systemctl status kubelet
---kubelet will report errors at this point, but ignore them; as long as kubelet is installed, kubeadm will start it automatically later.

Initialize:

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml

Possible problems:

Problem: init reported an error, probably because I allocated too little memory; component startup was slow enough to exceed the 40s timeout.

Fix:

The only option is to reset and reinstall:

kubeadm reset --ignore-preflight-errors=Swap
rm -rf /var/lib/kubelet/
rm -rf /var/lib/cni/
rm -rf /var/lib/etcd/
rm -rf /etc/kubernetes/
rm -rf /root/.kube/config
# Without removing the old containers it kept erroring -- possibly the
# old config produced containers that can't be reused
docker rm -f $(docker ps -aq)

mkdir /etc/kubernetes
systemctl restart kubelet
vim /etc/kubernetes/kubeadm-config.yaml
kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
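One pitfall when clearing out old containers: plain `docker ps` output starts with a header row, so taking the first column with awk also yields the literal word "CONTAINER". A small illustration on simulated `docker ps -a` output (the container ID below is made up):

```shell
# Simulated `docker ps -a` output; line 1 is the header row
DOCKER_PS='CONTAINER ID   IMAGE                  COMMAND    STATUS
1a2b3c4d5e6f   k8s.gcr.io/pause:3.1   "/pause"   Exited (0) 2 days ago'

# Skip the header before taking the first column
ids=$(printf '%s\n' "$DOCKER_PS" | awk 'NR>1 {print $1}')
echo "$ids"

# On a real host the -q flag avoids the problem entirely:
#   docker rm -f $(docker ps -aq)
```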

 

On success you will see output like:

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 10.21.213.221:6443 --token raduip.by90yi3u3crjd2qp \
--discovery-token-ca-cert-hash sha256:05d7114b1b32d67e40d01d63b8f9a72cb9c290636ab198fdb20cb93cf05b6611 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.21.213.221:6443 --token raduip.by90yi3u3crjd2qp \
--discovery-token-ca-cert-hash sha256:05d7114b1b32d67e40d01d63b8f9a72cb9c290636ab198fdb20cb93cf05b6611

Continue as the output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
iptables -I INPUT -p tcp --dport=6443 -j ACCEPT
curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml | sed "s@192.168.0.0/16@10.244.0.0/16@g" | kubectl apply -f -
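The sed stage in that pipeline rewrites Calico's default pool (192.168.0.0/16) to the podSubnet declared in kubeadm-config.yaml; the substitution can be checked in isolation on a sample manifest line:

```shell
# One line of the Calico manifest (the pool CIDR value),
# rewritten to the cluster's pod subnet
echo 'value: "192.168.0.0/16"' | sed "s@192.168.0.0/16@10.244.0.0/16@g"
# prints: value: "10.244.0.0/16"
```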
Check whether anything is wrong:

# Watch the kubelet logs

journalctl -fu kubelet

 

Joining worker nodes to the master

Run on the worker node (222):

kubeadm join 10.21.213.221:6443 --token raduip.by90yi3u3crjd2qp \
--discovery-token-ca-cert-hash sha256:05d7114b1b32d67e40d01d63b8f9a72cb9c290636ab198fdb20cb93cf05b6611

Check the status on the master (221):

kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
10.21.213.221   Ready      master   23m     v1.17.0
10.21.213.223   NotReady   <none>   2m58s   v1.17.0

---It usually takes 2-5 minutes before the node becomes Ready.
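If you'd rather script the wait than watch it, the STATUS column is easy to test. A sketch where NODE_LIST simulates the `kubectl get nodes --no-headers` output; on the master, replace the variable with the real command:

```shell
# NODE_LIST stands in for: NODE_LIST=$(kubectl get nodes --no-headers)
NODE_LIST='10.21.213.221   Ready      master   23m     v1.17.0
10.21.213.223   NotReady   <none>   2m58s   v1.17.0'

# Print any node whose STATUS column is not exactly "Ready"
not_ready=$(printf '%s\n' "$NODE_LIST" | awk '$2 != "Ready" {print $1}')
if [ -n "$not_ready" ]; then
  echo "still waiting on: $not_ready"
else
  echo "all nodes Ready"
fi
```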

 

---If the join has problems

If the node still hasn't become Ready after a long time, something is wrong; you can inspect it with:

# kubectl get pods -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
coredns-9d85f5447-8b24d                 0/1     ContainerCreating   0          26m
coredns-9d85f5447-mp2w7                 0/1     ContainerCreating   0          26m
etcd-10.21.213.221                      1/1     Running             0          26m
kube-apiserver-10.21.213.221            1/1     Running             0          26m
kube-controller-manager-10.21.213.221   1/1     Running             0          26m
kube-proxy-qc2mk                        1/1     Running             0          26m
kube-proxy-t6bqj                        1/1     Running             0          21m
kube-proxy-wtpmb                        0/1     ImagePullBackOff    0          6m26s
kube-scheduler-10.21.213.221            1/1     Running             0          26m

View the details:

# kubectl describe pod kube-proxy-wtpmb -n kube-system
Name: kube-proxy-wtpmb
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: 10.21.213.223/10.21.213.223
Start Time: Mon, 02 Mar 2020 15:45:47 +0800
Labels: controller-revision-hash=bdc8bbd47
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m41s default-scheduler Successfully assigned kube-system/kube-proxy-wtpmb to 10.21.213.223
Warning Failed 7m16s kubelet, 10.21.213.223 Failed to pull image "registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0": rpc error: code = Unknown desc = dial tcp: lookup dockerauth.cn-hangzhou.aliyuncs.com on [::1]:53: read udp [::1]:33942->[::1]:53: read: connection refused
Warning Failed 7m4s kubelet, 10.21.213.223 Failed to pull image "registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.aliyuncs.com/v2/: dial tcp: lookup registry.aliyuncs.com on [::1]:53: read udp [::1]:49098->[::1]:53: read: connection refused
Warning Failed 6m36s kubelet, 10.21.213.223 Failed to pull image "registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.aliyuncs.com/v2/: dial tcp: lookup registry.aliyuncs.com on [::1]:53: read udp [::1]:55991->[::1]:53: read: connection refused
Warning Failed 5m48s (x4 over 7m16s) kubelet, 10.21.213.223 Error: ErrImagePull
Normal Pulling 5m48s (x4 over 8m17s) kubelet, 10.21.213.223 Pulling image "registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0"
Warning Failed 5m48s kubelet, 10.21.213.223 Failed to pull image "registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.aliyuncs.com/v2/: dial tcp: lookup registry.aliyuncs.com on [::1]:53: read udp [::1]:41242->[::1]:53: read: connection refused
Warning Failed 5m23s (x7 over 7m15s) kubelet, 10.21.213.223 Error: ImagePullBackOff
Normal BackOff 3m8s (x16 over 7m15s) kubelet, 10.21.213.223 Back-off pulling image "registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0"

The errors above (image pulls failing on DNS lookups) came down to a firewall problem.

If you don't want to troubleshoot rule by rule, just flush everything: iptables -F

The following shows how to remove and rejoin a node.

Run on the master node:

#kubectl get node
NAME               STATUS   ROLES    AGE    VERSION
master.hanli.com   Ready    master   3d7h   v1.13.0
slave1.hanli.com   Ready    <none>   3d7h   v1.13.0
slave2.hanli.com   Ready    <none>   3d7h   v1.13.0
slave3.hanli.com   Ready    <none>   3d7h   v1.13.0

#Check the pods
[root@master] ~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE               NOMINATED NODE   READINESS GATES
curl-66959f6557-r4crd    1/1     Running   1          6m32s   10.244.2.7   slave2.hanli.com   <none>           <none>
nginx-58db6fdb58-5wt7p   1/1     Running   0          3d6h    10.244.1.4   slave1.hanli.com   <none>           <none>
nginx-58db6fdb58-7qkfn   1/1     Running   0          3d6h    10.244.3.2   slave3.hanli.com   <none>           <none>

#Cordon the node and drain the pods on it
#kubectl drain slave3.hanli.com --delete-local-data --force --ignore-daemonsets
node/slave3.hanli.com cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-8hhsb, kube-proxy-6vjcb; Deleting pods with local storage: monitoring-grafana-8445c4b56d-j2wfl
pod/monitoring-grafana-8445c4b56d-j2wfl evicted
pod/nginx-58db6fdb58-7qkfn evicted
node/slave3.hanli.com evicted

#Delete the slave3 node
[root@master] ~$ kubectl delete node slave3.hanli.com
node "slave3.hanli.com" deleted

Run on the node:

On slave3:
kubeadm reset

ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2. Rejoin the node to the cluster
The join command has the format: kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
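As a sanity check, the full command can be assembled from its three pieces (the values below are the ones from this post's init output):

```shell
# The three pieces of the worker join command
MASTER=10.21.213.221:6443
TOKEN=raduip.by90yi3u3crjd2qp
HASH=sha256:05d7114b1b32d67e40d01d63b8f9a72cb9c290636ab198fdb20cb93cf05b6611

cmd="kubeadm join $MASTER --token $TOKEN --discovery-token-ca-cert-hash $HASH"
echo "$cmd"
```

On the master, `kubeadm token create --print-join-command` produces a fresh token and prints this whole command in one step.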

If we've forgotten the master's token and hash, we can look them up with the following commands:

kubeadm token list
TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS

By default a token is valid for 24 hours. If it has expired, generate a new one:

[root@master] ~$ kubeadm token create
sek6z6.knv9grhe9ggvtts0

If you can't find the value for --discovery-token-ca-cert-hash, you can generate it with:

[root@master] ~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
7845e6615fcae889eedd6fe55174d904ddd4d3ca5257f04c4438cc67cf06ba58
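What that pipeline computes is the SHA-256 digest of the CA certificate's DER-encoded public key. You can exercise it on a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway cert (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

# Same pipeline as above: extract the public key, DER-encode it, hash it
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"

rm -f /tmp/demo-ca.key /tmp/demo-ca.crt
```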

Now log in to the worker node server and run the following command as root to join the cluster:

[root@slave3] /var/lib/cni$ kubeadm join 192.168.255.130:6443 --token sek6z6.knv9grhe9ggvtts0 --discovery-token-ca-cert-hash sha256:7845e6615fcae889eedd6fe55174d904ddd4d3ca5257f04c4438cc67cf06ba58

# After a short wait, the node shows up as joined
[root@master] ~$ kubectl get nodes 
NAME               STATUS   ROLES    AGE     VERSION
master.hanli.com   Ready    master   3d10h   v1.13.2
slave1.hanli.com   Ready    <none>   3d10h   v1.13.2
slave2.hanli.com   Ready    <none>   3d10h   v1.13.2
slave3.hanli.com   Ready    <none>   85s     v1.13.2

Original post: https://blog.csdn.net/fanren224/article/details/86610799