Install Docker
K8S here uses Docker as its container runtime, so install Docker first:
https://www.psvmc.cn/article/2018-12-13-docker-install.html
Install K8S
- Edit the /etc/hosts file so that every machine can reach the others by hostname.
- If your environment uses a proxy, make sure the no_proxy environment variable is configured correctly.
- The master also needs to run the following commands.
k8s.conf

```shell
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```
k8s.conf

```shell
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```
Apply the configuration
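The original does not show the command that makes these settings take effect; a minimal sketch of the usual way to apply them (assumes sudo is available):

```shell
# Load the bridge module now (modules-load.d takes care of reboots)
sudo modprobe br_netfilter
# Re-read all sysctl configuration files, including /etc/sysctl.d/k8s.conf
sudo sysctl --system
```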
Ubuntu

```shell
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
CentOS

kubernetes.repo

```shell
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
selinux

```shell
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
Install

```shell
yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
systemctl enable kubelet && systemctl start kubelet
```
Check

```shell
kubectl version --client
```
You need to install the following packages on every machine:
- kubelet: runs on every node in the cluster; starts Pods and containers.
- kubectl: the command-line tool for talking to the cluster.
- kubeadm: the command used to bootstrap the cluster.
Minikube
Downloading and starting minikube
minikube is a tool that lets you run Kubernetes locally. It runs a single-node Kubernetes cluster on your personal computer (Windows, macOS, or Linux) so you can try out Kubernetes or use it for day-to-day development work.
Note
This creates a single-node cluster, where the node is a virtual machine; it relies on an external VirtualBox installation.
Github:https://github.com/kubernetes/minikube
Download

```shell
curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.17.1/minikube-linux-amd64
chmod +x minikube
cp minikube /usr/local/sbin
```
Start

```shell
minikube start --vm-driver=none --image-mirror-country='cn'
```
Note
The --image-mirror-country='cn' option exists specifically for users in China; it makes minikube pull from Alibaba Cloud's mirror registry.
Uninstall

```shell
minikube stop
minikube delete
```
Kind
Kind documentation: https://kind.sigs.k8s.io/docs/user/quick-start/
What makes Kind different?
Based on Docker rather than virtualization
Instead of packaging a virtual machine image, Kind runs the K8S components directly in Docker. What does that buy you?
- No guest OS to run, so resource usage is lower.
- Because it does not rely on virtualization, it can run inside a VM.
- Smaller files, which makes it easier to move around.
Multi-node K8S clusters and HA
Kind supports deploying nodes in multiple roles: a configuration file controls how many master (control-plane) nodes and how many worker nodes you get, so you can better simulate a real production environment.
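As a sketch, such a multi-node layout is described in a Kind cluster config like this (the node counts here are illustrative):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```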
However
Because Kind runs inside containers, a nodePort cannot be reached from outside the host even if you configure one.
So for testing, Minikube is still the recommended choice.
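If you do need a node port reachable from the host under Kind, its cluster config supports extraPortMappings; a sketch (the port numbers are illustrative):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30010   # the nodePort inside the Kind node container
        hostPort: 30010        # the port published on the host
        protocol: TCP
```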
Install
Check the environment variables
Download

```shell
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-amd64
chmod +x ./kind
cp ./kind /usr/local/sbin
```
Check the installation
In the simplest case, a single command creates a single-node K8S environment.
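The command itself is not shown above; it is Kind's standard cluster-creation command (requires a running Docker daemon):

```shell
kind create cluster
```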
Check the cluster info

```shell
kubectl cluster-info --context kind-kind
```
It prints:
Kubernetes control plane is running at https://127.0.0.1:35517
KubeDNS is running at https://127.0.0.1:35517/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Delete
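The delete command is not shown here; Kind's standard cluster deletion is:

```shell
kind delete cluster
```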
However, the default configuration has a few limitations that make it unsuitable for most real needs:
- The APIServer listens only on 127.0.0.1, so it cannot be reached from outside the machine running Kind.
- Because of network conditions in China, Docker Hub is often unreachable or times out, so image pulls fail or are very slow.
Here is a configuration file that removes the limitations above.
Create the configuration file
Contents:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "192.168.7.11"
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://tiaudqrq.mirror.aliyuncs.com"]
```
Note
apiServerAddress: set this to your LAN IP, or whichever IP you want the APIServer to listen on.
https://tiaudqrq.mirror.aliyuncs.com: a Docker Hub mirror endpoint.
Then run

```shell
kind delete cluster
kind create cluster --config kind.yaml
```
If creation hangs for a long time at the step Ensuring node image (kindest/node:v1.20.2), you can run

```shell
docker pull kindest/node:v1.20.2
```

to get a progress bar for the image pull.
Check the cluster info

```shell
kubectl cluster-info --context kind-kind
```
You should see:
Kubernetes control plane is running at https://192.168.7.11:34418
KubeDNS is running at https://192.168.7.11:34418/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Check the nodes
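The listing command is not shown; the standard kubectl call is (requires access to the cluster):

```shell
kubectl get nodes
```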
Check that the cluster is running

```shell
kubectl get po -n kube-system
```
Practice
Deployment practice
First, prepare the Deployment configuration file (the nginx image is used here).
Create a working directory

```shell
mkdir /root/k8s
cd /root/k8s
vi app.yaml
```
app.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
```
Create the Deployment with kubectl
Create

```shell
kubectl delete -f app.yaml
kubectl create -f app.yaml
```
After waiting a moment, check how the Pod was scheduled and whether it is running.
You can see the Pod's name, its status, its IP, the Node it runs on, and other details:

```shell
kubectl get pods -o wide
```

When the Pod has started successfully, the READY column shows 1/1.
If it does not, inspect why the Pod failed to start:

```shell
kubectl describe pod web
```
Delete

```shell
kubectl delete -f app.yaml
```
Service practice
When creating a service in k8s you must specify its type, which can be one of ClusterIP, NodePort, or LoadBalancer.
- ClusterIP is the default service type. It gives you a service inside the cluster that other applications in the cluster can reach; it is not reachable from outside the cluster.
- NodePort is the most primitive way to route external traffic to a service. As the name suggests, it opens a specific port on every node (VM), and any traffic sent to that port is forwarded to the service.
- LoadBalancer is the standard way to expose a service directly. All traffic on the port you specify is forwarded to the service, with no filtering and no routing. That means you can send almost any kind of traffic to it: HTTP, TCP, UDP, WebSocket, gRPC, and so on.
LoadBalancer and NodePort are really the same mechanism; the difference is that LoadBalancer goes one step further than NodePort and asks the cloud provider to create an LB that routes traffic to the nodes.
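The document only shows ClusterIP and NodePort manifests; for comparison, a hedged sketch of a LoadBalancer service for the same app (the name web-lb is illustrative, and an external LB provider, e.g. a cloud or MetalLB, must exist for an EXTERNAL-IP to be assigned):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  ports:
    - port: 8888
      protocol: TCP
      targetPort: 80
  selector:
    app: web
```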
The Deployment created above does not yet give us a sensible way to reach the application, so next we create a service as the entry point.
The service can be accessed in the following ways:
- From outside the K8s cluster: External-IP:Port
- From outside the K8s cluster: NodeIP:NodePort
- From inside the K8s cluster: Cluster-IP:Port
- From inside the K8s cluster: PodIP:TargetPort
First, create the service configuration
service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
    - port: 8888
      protocol: TCP
      targetPort: 80
  selector:
    app: web
```
Create the service

```shell
kubectl create -f service.yaml
```

On success it prints
service/web created
Check the service
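The command that produces the table below is not shown in the original; it is the standard service listing (requires cluster access):

```shell
kubectl get svc
```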
Result

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
web          ClusterIP   10.96.145.233   <none>        8888/TCP   2m31s
From any node you can now reach the backend application through the ClusterIP, load-balanced across its Pods.
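A minimal, self-contained sketch of extracting the ClusterIP from a table like the one above (the captured output is reused as sample input; on a real cluster you would pipe `kubectl get svc` instead):

```shell
# Sample 'kubectl get svc' output as captured above
out='NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
web          ClusterIP   10.96.145.233   <none>        8888/TCP   2m31s'

# Column 3 of the row whose first column is "web" is the ClusterIP
ip=$(printf '%s\n' "$out" | awk '$1 == "web" { print $3 }')
echo "$ip"   # 10.96.145.233
# On a node you would then run: curl "http://$ip:8888"
```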
Inspect

```shell
kubectl describe service web
```
External access
To make testing easier you can turn off the firewall first:

```shell
systemctl stop firewalld
```
Four approaches are covered here:
- Proxy: recommended for testing only.
- Port forwarding: recommended for testing only.
- NodePort: can be combined with Nginx; usable in production.
- ingress-nginx: effectively integrates Nginx into K8S itself; usable in production.
Proxy
Start the Kubernetes proxy:

```shell
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8080
```
Note
--address='0.0.0.0' allows remote access.
--accept-hosts='^*$' makes the API server accept requests from all hosts.
You can then reach the service through the Kubernetes API using this URL pattern:

```
http://ip:8080/api/v1/namespaces/[namespace-name]/services/[service-name]/proxy
```
To reach the service defined above, use:

```
http://192.168.7.11:8080/api/v1/namespaces/default/services/web/proxy/
```
Port forwarding
Access an application in K8S via kubectl port-forward (note the remote port must be the service's port, 8888 here, not the container port):

```shell
kubectl port-forward --address 0.0.0.0 service/web 30010:8888
```
Note
Be sure to add --address 0.0.0.0; otherwise the forward is only reachable from localhost.
Check the port

```shell
netstat -nltp | grep 30010
```
Access it

```shell
curl http://localhost:30010
curl http://192.168.7.11:30010
```
Or from the LAN:
http://192.168.7.11:30010
NodePort
Note that this approach does not work when using Kind, although the method itself is valid; with minikube it works fine.
service3.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: web
spec:
  ports:
    - port: 8888
      protocol: TCP
      targetPort: 80
      nodePort: 30010
  type: NodePort
  selector:
    app: web
```
Run

```shell
kubectl delete -f service3.yaml
kubectl create -f service3.yaml
kubectl describe service web
```
Check the service, then access it at:
http://192.168.7.11:30010
ingress-nginx
- An ingress is the request entry point of a k8s cluster; you can think of it as the service of the services.
- "Ingress" usually refers to two parts: the Ingress resource object and the ingress-controller.
- There are several ingress-controller implementations; ingress-nginx is the community's native one. Choose according to your needs.
- The ingress itself can be exposed in several ways; pick the one that suits your infrastructure and workload.
Official site
https://github.com/kubernetes/ingress-nginx
ingress-controller
Download

```shell
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
```
If you search the configuration file for kind: Deployment, you can see the ports it uses.
Note
The images in the official configuration file cannot be downloaded from China, so here is a copy with the image addresses replaced.
ingress-controller.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-beijing.aliyuncs.com/kole_chang/controller:v1.0.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --watch-ingress-without-class=true
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
```
Run

```shell
kubectl apply -f ingress-controller.yaml
```
Check the running status

```shell
kubectl get pods --namespace=ingress-nginx
```
Ingress configuration
With the ingress-controller deployed, the next step is to create the Ingress resource for our test, i.e. our forwarding rules.
ingresstest.yaml

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: www.abc.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```
Deploy the resource

```shell
kubectl apply -f ingresstest.yaml
```
If you get the error
Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.kube-system.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.0.0.5:8443: connect: connection refused
Cause
I first created nginx-ingress from a yaml file, then deleted the namespace it created along with the clusterrole and clusterrolebinding, but did not delete the ValidatingWebhookConfiguration ingress-nginx-admission, which had also been installed by that yaml file. When I later installed nginx-ingress again with helm, creating a custom ingress produced this error.
Solution
List the webhooks with

```shell
kubectl get validatingwebhookconfigurations
```
Delete ingress-nginx-admission

```shell
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
```
Domain mapping
On Windows, find the hosts file in

```
C:\Windows\System32\drivers\etc
```
Add the following line and save:

```
192.168.7.11 www.abc.com
```
Ping the hostname to confirm the mapping; then you can access
http://www.abc.com
WEB UI
https://github.com/kubernetes/dashboard/releases/tag/v2.4.0
Pay attention to which versions are supported: k8s 1.20.4 is used here, so Dashboard v2.4.0 is the right release.
Download

```shell
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
```
Find

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```
and change it to

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30020
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
```
That is, nodePort: 30020 and type: NodePort were added.
Run

```shell
kubectl apply -f recommended.yaml
```
Now you can access it at
https://192.168.7.11:30020
Create a user and bind it to the default cluster-admin cluster role:

```shell
kubectl create serviceaccount dashboard-admin -n kube-system
```
It prints
serviceaccount/dashboard-admin created
Then run

```shell
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
```
It prints
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
Get the token

```shell
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```
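A minimal, self-contained sketch of the awk step in the command above, run against a sample secret listing (the secret name suffix is illustrative; on a real cluster the input comes from `kubectl -n kube-system get secret`):

```shell
# Sample 'kubectl -n kube-system get secret' output (names are illustrative)
secrets='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-abcde   kubernetes.io/service-account-token   3      1m
default-token-fghij           kubernetes.io/service-account-token   3      30m'

# The awk filter picks column 1 of the row mentioning dashboard-admin
name=$(printf '%s\n' "$secrets" | awk '/dashboard-admin/{print $1}')
echo "$name"   # dashboard-admin-token-abcde
```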
You can then see the token in the output.