Kubernetes/K8s Deployment with kubeadm

Installing Kubernetes

Environment

CentOS 7.6

Memory > 2 GB

Two or more servers

IP              Role         Installed components
192.168.10.39   k8s-master   docker, flannel, kubelet, kube-apiserver, kube-scheduler, kube-controller-manager
192.168.10.40   k8s-node01   docker, flannel, kubelet, kube-proxy

Mirror repositories

cd /etc/yum.repos.d
mkdir bak
mv *.repo bak/

CentOS-Base.repo

vi CentOS-Base.repo

Contents:

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

epel-7.repo

vi epel-7.repo

Contents:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0

docker-ce.repo

vi docker-ce.repo

Contents:

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

kubernetes.repo

vi kubernetes.repo

Contents:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Installing Docker

Run on all hosts.

Install Docker:

# Remove old versions (if previously installed)
yum remove docker docker-common docker-selinux docker-engine
yum makecache
yum list docker-ce --showduplicates
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker

# Check the version
docker --version
# or
docker version

The Docker version is displayed:

Docker version 18.06.1-ce, build e68fc7a

If you get an error such as:

Failed to retrieve GPG key: [Errno 14] curl#7 - "Failed to connect to 2600:9000:2003:ca00:3:db06:4200:93a1: Network is unreachable"

Solution:

Check the OS release:

cat /etc/redhat-release

Find the key matching this release on mirrors.163.com and import it:

rpm --import http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7

For Docker clients newer than 1.10.0, configure a registry mirror.

Create or edit the /etc/docker/daemon.json file:

vi /etc/docker/daemon.json

Add or modify:

{
"registry-mirrors": ["https://tiaudqrq.mirror.aliyuncs.com"]
}

Restart Docker:

systemctl daemon-reload
systemctl restart docker.service

Environment Initialization

Run on all hosts.

Load the IPVS kernel modules on all nodes:

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
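
The modprobe commands above only last until the next reboot. A minimal sketch to load the modules at boot as well (this assumes systemd's modules-load mechanism, standard on CentOS 7; the ipset/ipvsadm packages are only optional helpers for inspecting IPVS state later):

cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
# kernel modules required for kube-proxy IPVS mode
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# optional userspace tools for listing IPVS rules
yum install -y ipset ipvsadm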

Disable the firewall and SELinux:

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0

Disable the swap partition:

swapoff -a    # temporary (current boot only)
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent (comments out the swap entry)
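
A quick sanity check (a sketch; both commands ship with CentOS 7) that swap is really off:

swapon -s    # should print nothing
free -m      # the Swap line should show 0 total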

Set hostnames and configure /etc/hosts

On 192.168.10.39:

hostnamectl set-hostname k8s-master

On 192.168.10.40:

hostnamectl set-hostname k8s-node01

On all hosts, add the following entries:

cat >> /etc/hosts << EOF
192.168.10.39 k8s-master
192.168.10.40 k8s-node01
EOF

Kernel tuning: pass bridged IPv4 traffic to the iptables chains

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Set the system time zone and sync with a time server

yum install -y ntpdate && \
ntpdate time.windows.com
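
The heading also mentions setting the time zone, which the commands above do not do; a hedged one-liner for that (timedatectl is part of systemd on CentOS 7, and Asia/Shanghai is only an example zone):

timedatectl set-timezone Asia/Shanghai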

Check the required ports

Control plane node

Protocol   Direction   Port range    Purpose                    Used by
TCP        Inbound     6443          Kubernetes API server      All components
TCP        Inbound     2379-2380     etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API                kubelet itself, control plane components
TCP        Inbound     10251         kube-scheduler             kube-scheduler itself
TCP        Inbound     10252         kube-controller-manager    kube-controller-manager itself

Worker nodes

Protocol   Direction   Port range     Purpose             Used by
TCP        Inbound     10250          Kubelet API         kubelet itself, control plane components
TCP        Inbound     30000-32767    NodePort services   All components
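
Once the control plane is up, a quick reachability sketch (it assumes nc from the nmap-ncat package is installed) to confirm the API server port is open from a node:

nc -v 192.168.10.39 6443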

Installing the Kubernetes Components

Run on all hosts.

Ubuntu

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

CentOS

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
systemctl enable kubelet && systemctl start kubelet

If an older version was installed previously, remove it first:

kubeadm reset
yum remove -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

Verify:

kubectl version --client

You need the following packages on every machine:

  • kubeadm: the command that bootstraps the cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster.

Preparing the Images

List all the images kubeadm needs:

kubeadm config images list

Result:

k8s.gcr.io/kube-apiserver:v1.20.4
k8s.gcr.io/kube-controller-manager:v1.20.4
k8s.gcr.io/kube-scheduler:v1.20.4
k8s.gcr.io/kube-proxy:v1.20.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Write a script to pull them in bulk:

vi kubeadm_config_images_list.sh

Contents:

#!/bin/bash
images=(
    kube-apiserver:v1.20.4
    kube-controller-manager:v1.20.4
    kube-scheduler:v1.20.4
    kube-proxy:v1.20.4
    pause:3.2
    etcd:3.4.13-0
    coredns:1.7.0
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Here docker tag re-tags the locally pulled image under the k8s.gcr.io name that kubeadm expects (pulling from the Aliyun registry avoids needing direct access to the blocked k8s.gcr.io registry).

After saving kubeadm_config_images_list.sh, run:

sudo chmod +x kubeadm_config_images_list.sh

to make it executable, then run it from the current directory:

./kubeadm_config_images_list.sh

and the images will be pulled and re-tagged.
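
A quick check (a sketch based on the image list above) that the re-tagged images are now present locally:

docker images | grep 'k8s.gcr.io'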

Deploying the Kubernetes Master

Run only on the master node.

Change the apiserver advertise address below to your own master address.

Run:

# Remove any previous configuration
rm -rf /var/lib/etcd && \
rm -rf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && \
rm -rf /etc/kubernetes/manifests/* && \
kubeadm init \
--apiserver-advertise-address=192.168.10.39 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.4 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
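
The same initialization can also be driven by a config file instead of command-line flags. A hedged sketch of an equivalent kubeadm v1beta2 config (verify the field names against kubeadm config print init-defaults before relying on it):

cat <<EOF | tee kubeadm-config.yaml
# hypothetical config mirroring the flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.39
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml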

On success you should see:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.39:6443 --token am6y16.jfvsunhm28h7nn92 \
    --discovery-token-ca-cert-hash sha256:21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de

Follow the prompt and run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, as the root user:

export KUBECONFIG=/etc/kubernetes/admin.conf

Possible error:

The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error

Cause:

A previous minikube test deployment modified kubeadm's configuration files; deleting them fixes it.

Solution: delete the old configuration files:

rm -rf /var/lib/etcd
rm -rf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Inspect the errors:

systemctl status kubelet
systemctl list-units --failed

Regenerating the Token

Run only on the master node.

Run this when the token has expired.

The default token is valid for 24 hours; after it expires it can no longer be used.

If more nodes need to join later, do the following:

Generate a new token:

kubeadm token create

Output:

vyaztf.snjf2za5636nekcv

List the tokens:

kubeadm token list

Get the SHA-256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Output:

21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de

A node can then join like this:

kubeadm join 192.168.10.39:6443 --token vyaztf.snjf2za5636nekcv \
--discovery-token-ca-cert-hash sha256:21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de
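
kubeadm can also print the complete join command in one step, which saves computing the hash by hand:

kubeadm token create --print-join-command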

Joining a Kubernetes Node

Run on the node(s).

Use kubeadm join to register the node with the master.

The kubeadm join command was already generated by kubeadm init above.

kubeadm join 192.168.10.39:6443 --token am6y16.jfvsunhm28h7nn92 \
--discovery-token-ca-cert-hash sha256:21c1254f9da1bf4c46a6841b603ca1c21193eb32c27639ebf775f6fab8ba39de

Check that it succeeded.

On the master node:

kubectl get nodes

Installing the Network Plugin

Run only on the master node.

Run:

wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Change the image address (the default image may not be pullable; make sure you can reach the quay.io registry, otherwise modify it as follows):

vi kube-flannel.yml

In the editor, replace the image on lines 106 and 120 with the one below; after the replacement, verify as follows.

Change image to lizhenliang/flannel:v0.11.0-amd64

cat -n  kube-flannel.yml|grep lizhenliang/flannel:v0.11.0-amd64

Expected output:

106 image: lizhenliang/flannel:v0.11.0-amd64
120 image: lizhenliang/flannel:v0.11.0-amd64
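
If you prefer not to edit by hand, a sed one-liner works too (a sketch; it assumes the manifest still references quay.io/coreos/flannel:v0.11.0-amd64, which is what this revision of kube-flannel.yml uses):

sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml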

Apply it:

kubectl apply -f kube-flannel.yml
ps -ef|grep flannel

You should see something like:

root 2032 2013 0 21:00 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr

Check the node status of the cluster. After the network plugin is installed, continue with the following steps only once every node shows Ready as below:

kubectl get nodes

Result:

NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   37m     v1.15.0
k8s-node01   Ready    <none>   5m22s   v1.15.0

View all the pods:

kubectl get pod -n kube-system

Proceed only when everything shows 1/1. If flannel does not come up, check the network and repeat the following:

kubectl delete -f kube-flannel.yml

then re-download it with wget, modify the image address again, and re-apply:

kubectl apply -f kube-flannel.yml

Error Handling

If you installed the 1.20.x packages:

yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4

you will see the following error:

unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Cause: DaemonSet, Deployment, StatefulSet, and ReplicaSet are no longer served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 as of v1.16.

The fix:

Change the API versions in the yml file to apps/v1.

The root cause is that the manifest was written for Kubernetes 1.15.x, while 1.16.x and later dropped support for some of those APIs.

So there are two options:

  1. Use an older version of Kubernetes
  2. Modify the manifest

That is, in the manifest above, change

apiVersion: extensions/v1beta1

to

apiVersion: apps/v1

and change

apiVersion: rbac.authorization.k8s.io/v1beta1

to

apiVersion: rbac.authorization.k8s.io/v1

spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel

and, for every DaemonSet, add the following at the same level as template:

selector:
  matchLabels:
    app: flannel

The full manifest after the modifications:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Testing the Cluster

Create a pod in the cluster, expose a port, and verify that it can be reached:

kubectl create deployment nginx --image=nginx

Output:

deployment.apps/nginx created

Expose it as a service:

kubectl expose deployment nginx --port=80 --type=NodePort

Output:

service/nginx exposed

Check:

kubectl get pods,svc

Output:

(screenshot: output of kubectl get pods,svc showing the nginx pod and its NodePort service)

Access URL: http://NodeIP:Port

In this example: http://192.168.10.39:32338
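
The NodePort (32338 here) is assigned randomly, so here is a sketch for looking it up and testing from the command line:

# query the assigned NodePort of the nginx service, then request the page
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.10.39:${NODE_PORT}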

Deploying the Dashboard

Official repository: https://github.com/kubernetes/dashboard

Note:

Make sure the Dashboard version matches your Kubernetes version, otherwise it will not be able to fetch data.

Download:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

vi recommended.yaml

Modify the following:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001 # add this line
  selector:
    k8s-app: kubernetes-dashboard

Set up the certificates:

cd /etc/kubernetes/pki/
mkdir ui
cp apiserver.crt ui/
cp apiserver.key ui/
cd ui/
mv apiserver.crt dashboard.pem
mv apiserver.key dashboard-key.pem
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

Go back to the directory containing the yaml and edit it:

vi recommended.yaml

Edit the dashboard Deployment (named dashboard-controller.yaml in older releases; here it is part of recommended.yaml) and add the two certificate lines under args:

- --tls-key-file=dashboard-key.pem
- --tls-cert-file=dashboard.pem

Pull the required images:

docker pull kubernetesui/dashboard:v2.2.0
docker pull kubernetesui/metrics-scraper:v1.0.6

Apply it:

kubectl apply -f recommended.yaml

Open the dashboard in a browser: https://NodeIP:30001

In this example: https://192.168.10.39:30001

Create a user and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system

Output:

serviceaccount/dashboard-admin created

Then run:

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

Output:

clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

Get the token:

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Output:

Name:         dashboard-admin-token-qkhrg
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 058c8a26-ede4-47c1-90bf-9f1e46964555

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkpkNjNmczBPclBWenJVLS1ZaU5qc2ttYlFUVVotdTh4cDkxRF9FUHd6M3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcWtocmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDU4YzhhMjYtZWRlNC00N2MxLTkwYmYtOWYxZTQ2OTY0NTU1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.AO34RH4-Ll8oM7wNCqLcjjnfR2e3Tb5vqzd_sdL3ZbeI3NGgpW2p58IiuWFGq2VBeF1UUhPtHZktRAULO3nK34pyBGH7J98HMSU_BPp3S8dIsz6bZapl-HYrySoehuE5IJv9-BfUNQtwMV-E0Ruh1zQ7MKSZDuG35p_31Ou2dtaZ0eBPSA1XuO7z53kfw5qOBNEppjcy4njhOyPZXcOjhKXXBz1Dwf7Y9O92DtlfXem_UaaeI7nrFmN1vJL8hjMtCGpGrp5ree3jeAlpifFO8t1Ma2OdOISrj_WpZ6wDdsIcO8MStsuAjLht9R60VC37xG_9Nd5VcXqOQBYmgs6tfg
ca.crt: 1066 bytes
namespace: 11 bytes

You can now log in with the token above.
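
A hedged one-liner to print only the decoded token (it assumes the ServiceAccount's token secret is auto-created, which is still the case on Kubernetes 1.20):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d; echo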

View everything:

kubectl get svc --all-namespaces
kubectl get pods --all-namespaces

Remove the dashboard:

kubectl delete -f recommended.yaml

Uninstalling

Reset the cluster:

kubeadm reset

Stop all containers:

docker stop `docker ps -a -q`

Remove all containers:

docker rm `docker ps -a -q`

Remove all images:

docker rmi --force `docker images -q`
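
kubeadm reset leaves some state behind; a hedged cleanup sketch (the paths assume a default kubeadm install, so double-check before deleting anything):

rm -rf /etc/cni/net.d /etc/kubernetes /var/lib/etcd $HOME/.kube
# flush iptables rules created by kube-proxy / flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X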