Update (date: Aug 3, 2019): I found some time today to turn yesterday's deployment into a one-click script:
"Kubernetes 1.15.1, one-click deployment via shell script, just finished testing, practical."

I recently used some spare time to write up the whole process of deploying and learning Kubernetes, together with the problems I ran into, and I'm sharing it for anyone who needs it. Writing it down also helps consolidate the knowledge for myself.
There are many ways to install Kubernetes, for example from binaries; this post uses the kubeadm method, pulling the latest images online. Without further ado, let's begin.

Component version list:

Image / software component              Version
Virtual Box                             6.x
Secure CRT                              X
Docker                                  19.03.1
OS                                      CentOS 7.x
k8s.gcr.io/kube-scheduler               v1.15.1
k8s.gcr.io/kube-proxy                   v1.15.1
k8s.gcr.io/kube-controller-manager      v1.15.1
k8s.gcr.io/kube-apiserver               v1.15.1
quay.io/calico/node                     v3.1.6
quay.io/calico/cni                      v3.1.6
quay.io/calico/kube-controllers         v3.1.6
k8s.gcr.io/coredns                      1.3.1
k8s.gcr.io/etcd                         3.3.10
quay.io/calico/node                     v3.1.0
quay.io/calico/cni                      v3.1.0
quay.io/calico/kube-controllers         v3.1.0
k8s.gcr.io/pause                        3.1
Part One: Preparation

Recommended configuration for each virtual machine:

Memory     CPUs
2048 MB    2

Node layout of the cluster:

hostname     IP address
k8s-node1    192.168.10.9
k8s-node2    192.168.10.10
k8s-node3    192.168.10.11

First, let's install the Linux guests.

Set the memory size: I use 4 GB here.
Choose dynamically allocated storage for the virtual disk.
Set the virtual hard disk size to 100 GB.
Select the Centos-7-X86_64-1511.iso installation image.
Set the number of processors to 2.

Installing the Linux operating system itself is fairly simple and is not the focus of this article, so I won't go through it step by step. If you run into problems during installation, see: https://blog.csdn.net/qq_28513801/article/details/90143552

Once the installation finishes, edit the NIC configuration file and restart the network. You can edit the file directly with vi, or do it faster with sed:

[root@localhost ~]# sed -i 's/^ONBOOT=no/ONBOOT=yes/g'  /etc/sysconfig/network-scripts/ifcfg-enp0s3
[root@localhost ~]# /etc/init.d/network restart

To make working with the Linux systems easier, we connect with the Secure CRT terminal client. Since installing Kubernetes requires Internet access, the VMs use NAT networking, so we add a port-forwarding rule in VirtualBox to reach the guests conveniently.
Click Port Forwarding and choose a port; avoid the commonly used ones.
Once the port is set, you can connect with Secure CRT. Add a rule that maps port 2222 on the physical host to port 22 inside the VM.
Because of the port forwarding, the address to connect to is the local address 127.0.0.1, and the port is no longer 22 but the 2222 we configured. Note that since the port is mapped on the host, the connection target is 127.0.0.1:2222.
Click Accept and save.
Enter the password.
Then a couple of small settings: to avoid garbled characters, set the session encoding to UTF-8.
The connection now works.
Next, change the hostname to k8s-node1:

[root@localhost ~]# hostnamectl set-hostname k8s-node1
[root@localhost ~]# bash
[root@k8s-node1 ~]# 

1 Getting the installation started

It is best not to use the yum repositories that ship with CentOS 7, because installing packages and their dependencies through them is very slow and often times out. Instead, we switch to the Aliyun mirror by replacing /etc/yum.repos.d/CentOS-Base.repo with the following command:

[root@k8s-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
bash: wget: command not found
Because I did a minimal install, wget is not included, so install it first:
[root@k8s-node1 ~]# yum search wget
Loaded plugins: fastestmirror
base                                                                                                                      | 3.6 kB  00:00:00     
extras                                                                                                                    | 3.4 kB  00:00:00     
updates                                                                                                                   | 3.4 kB  00:00:00     
(1/4): base/7/x86_64/group_gz                                                                                             | 166 kB  00:00:00     
(2/4): extras/7/x86_64/primary_db                                                                                         | 205 kB  00:00:01     
(3/4): base/7/x86_64/primary_db                                                                                           | 6.0 MB  00:00:01     
(4/4): updates/7/x86_64/primary_db                                                                                        | 7.4 MB  00:00:03     
Determining fastest mirrors
 * base: mirrors.163.com
 * extras: mirrors.neusoft.edu.cn
 * updates: mirrors.163.com
=============================================================== N/S matched: wget ===============================================================
wget.x86_64 : A utility for retrieving files using the HTTP or FTP protocols
  Name and summary matches only, use "search all" for everything.
[root@k8s-node1 ~]# yum install -y wget
After the installation finishes, run the wget command again:
[root@k8s-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
--2019-08-01 07:16:22--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 150.138.121.102, 150.138.121.100, 150.138.121.98, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|150.138.121.102|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’
100%[=======================================================================================================>] 2,523       --.-K/s   in 0s      
2019-08-01 07:16:22 (287 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]
[root@k8s-node1 ~]# 
[root@k8s-node1 ~]# yum makecache     # build the yum metadata cache
Loaded plugins: fastestmirror
base                                                                                                                      | 3.6 kB  00:00:00     
extras                                                                                                                    | 3.4 kB  00:00:00     
updates                                                                                                                   | 3.4 kB  00:00:00     
(1/8): extras/7/x86_64/prestodelta                                                                                        |  65 kB  00:00:00     
(2/8): extras/7/x86_64/other_db                                                                                           | 127 kB  00:00:00     
(3/8): extras/7/x86_64/filelists_db                                                                                       | 246 kB  00:00:00     
(4/8): base/7/x86_64/other_db                                                                                             | 2.6 MB  00:00:01     
(5/8): updates/7/x86_64/prestodelta                                                                                       | 945 kB  00:00:01     
(6/8): base/7/x86_64/filelists_db                                                                                         | 7.1 MB  00:00:03     
(7/8): updates/7/x86_64/other_db                                                                                          | 764 kB  00:00:01     
(8/8): updates/7/x86_64/filelists_db                                                                                      | 5.2 MB  00:00:03     
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Metadata Cache Created

1.2 Disable the firewall

The firewall must be shut down in advance, otherwise it will be a troublemaker when we set up the Kubernetes cluster later. Stop it and disable it at boot:

[root@k8s-node1 ~]#  systemctl stop firewalld & systemctl disable firewalld
[1] 17699
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k8s-node1 ~]# 

1.3 Disable swap

Just like an Elasticsearch cluster, a Kubernetes cluster requires the Linux swap mechanism to be disabled, otherwise swapping will hurt performance and stability. Set this up in advance.

Running swapoff -a disables swap temporarily; it comes back after a reboot.

[root@k8s-node1 ~]# swapoff -a
[1]+  Done                    systemctl stop firewalld
[root@k8s-node1 ~]# 

To disable it permanently across reboots, edit /etc/fstab and comment out the line containing swap, as shown below:

[root@k8s-node1 ~]# vi /etc/fstab 
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=dedcd30c-93a8-4e26-b111-d7c68a752bf9 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
Or simply run: sed -i '/ swap / s/^/#/' /etc/fstab

After disabling swap, verify with the top command: the swap line should show a total of 0.
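If you prefer a non-interactive check over top, the following works just as well (a minimal sketch):

[root@k8s-node1 ~]# free -m       # the Swap row should read 0 total / 0 used
[root@k8s-node1 ~]# swapon -s     # prints nothing when no swap device is active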

2 Install Docker

Naturally, Docker must be installed before Kubernetes. Here we install the latest Docker CE via yum. The official Docker documentation is the best reference:
https://docs.docker.com/install/linux/docker-ce/centos/#prerequisites
However, the documentation site is often unreachable from behind the Great Firewall, and installing from the official yum repository also tends to time out. We work around this as follows:

2.1 Add the repository

Add the Aliyun Docker repository:

[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
bash: yum-config-manager: command not found   
If you see the error above, install the command first:
[root@k8s-node1 ~]# yum search yum-config-manager
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
========================================================== Matched: yum-config-manager ==========================================================
yum-utils.noarch : Utilities based around the yum package manager
[root@k8s-node1 ~]# yum install -y yum-utils.noarch   # install the package, then re-run the previous command:
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-node1 ~]# yum makecache

2.2 Install Docker

Run the following command to install the latest Docker:

[root@k8s-node1 ~]# yum install docker-ce -y 

Output like the following means it is installed.
After installation, check the version by running docker --version; you can see it installed 19.03.1, the latest release at the time of writing:

[root@k8s-node1 ~]# docker --version
Docker version 19.03.1, build 74b1e89
2.3 Start Docker

Start the Docker service and enable it at boot:

[root@k8s-node1 ~]# systemctl start docker & systemctl enable docker
[1] 20629
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-node1 ~]# 

Run a quick container to verify:

[root@k8s-node1 ~]# docker run hello-world

Output like the hello-world welcome message means Docker started successfully.

The following sections walk through installing Kubernetes on Node1 in detail; once that is done, we simply clone the VM to produce Node2 and Node3.

We call the existing VM Node1 and use it as the master node. To save work, after installing Kubernetes on Node1 we use VirtualBox's clone feature to create two identical VMs as worker nodes. The three roles are:

k8s-node1: Master
k8s-node2: Worker
k8s-node3: Worker

3 Install Kubernetes

The official documentation is always the best reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ (for reference).

3.1 Configure the Kubernetes yum repository

The official repository is not reachable, so use the Aliyun mirror instead. Run the following to add kubernetes.repo:

[root@k8s-node1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-node1 ~]# 
3.2 Disable SELinux

Run setenforce 0:

[root@k8s-node1 ~]# setenforce 0
[root@k8s-node1 ~]# getenforce 
Permissive
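setenforce 0 only lasts until the next reboot. To keep SELinux in permissive mode permanently, also change the config file (a one-line sketch; it assumes the file currently contains SELINUX=enforcing):

[root@k8s-node1 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config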

A small suggestion

If you plan to build a highly available setup, enable the IPVS kernel modules.
Reason: Pod load balancing is implemented by kube-proxy, which supports two modes, the default iptables mode and ipvs mode; ipvs simply performs better than iptables.
Master high availability and cluster service load balancing later on will rely on ipvs, so load the following kernel modules.

The modules that need to be enabled are:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Check whether they are already loaded:
cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
If they are not, load them with the commands below (a boot-time persistence sketch follows right after):
[root@k8s-node1 ~]# modprobe -- ip_vs
[root@k8s-node1 ~]# modprobe -- ip_vs_rr
[root@k8s-node1 ~]# modprobe -- ip_vs_wrr
[root@k8s-node1 ~]# modprobe -- ip_vs_sh
[root@k8s-node1 ~]# modprobe -- nf_conntrack_ipv4
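To make these modules load automatically after a reboot as well, one option (a minimal sketch, assuming systemd-modules-load is in use, which is the default on CentOS 7) is to list them in /etc/modules-load.d/:

[root@k8s-node1 ~]# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF

If you do switch kube-proxy to ipvs mode later, installing the ipvsadm and ipset packages (yum install -y ipvsadm ipset) also makes debugging easier.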

Now let's continue with the installation.

3.3 Install the Kubernetes components

Run the following command to install kubelet, kubeadm, and kubectl:

[root@k8s-node1 ~]# yum install -y kubelet kubeadm kubectl


3.4 Configure the kubelet cgroup driver

Make sure Docker's cgroup driver and the kubelet's cgroup driver are the same:

[root@k8s-node1 ~]# docker info | grep -i cgroup
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Cgroup Driver: cgroupfs
[root@k8s-node1 ~]# 

Docker reports its cgroup driver as cgroupfs. Next, check the cgroup driver configured for the kubelet:

[root@k8s-node1 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: No such file or directory
If the file is not found there, check /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf instead:
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf 
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
[root@k8s-node1 ~]# 

If there is no cgroup-driver setting in it, add one:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

Then reload systemd so the change takes effect:

[root@k8s-node1 ~]# systemctl daemon-reload
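This guide keeps both Docker and the kubelet on cgroupfs. An alternative that many setups use, and which also silences the IsDockerSystemdCheck warning kubeadm prints later, is to switch Docker to the systemd cgroup driver instead. A sketch, assuming /etc/docker/daemon.json does not exist yet:

[root@k8s-node1 ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# docker info | grep -i "cgroup driver"   # should now report systemd

If you go this route, make sure the kubelet ends up using the systemd driver as well (instead of the cgroupfs value used in this guide), otherwise kubelet and Docker will disagree.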

3.5 Start kubelet

Note: according to the official documentation, after installing kubelet, kubeadm, and kubectl, you are expected to start kubelet:

[root@k8s-node1 ~]#  systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

In practice, however, it fails to start and reports the following:

[root@k8s-node1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Thu 2019-08-01 07:52:48 EDT; 6s ago
     Docs: https://kubernetes.io/docs/
  Process: 21245 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 21245 (code=exited, status=255)

Aug 01 07:52:48 k8s-node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 01 07:52:48 k8s-node1 systemd[1]: Unit kubelet.service entered failed state.
Aug 01 07:52:48 k8s-node1 systemd[1]: kubelet.service failed.
[root@k8s-node1 ~]# 

Checking the logs for this failure shows:

error: open /var/lib/kubelet/config.yaml: no such file or directory
In fact, we have not configured kubelet yet, so this is perfectly normal and can be ignored. In other words, the fact that it cannot start right now does not affect the following steps, so carry on.


4 Download the Kubernetes Docker images (important)

This guide uses the official kubeadm tool to initialize the cluster. By default, kubeadm init reaches out to Google's servers to download the Docker images the cluster depends on, so it will time out and fail.
However, if these images are imported in advance, kubeadm init sees that they already exist and does not try to reach Google.
There are other ways to obtain the images, such as building them through Docker Hub, but they are a bit cumbersome.

Method 1 (Method 2 is recommended)

I have packaged all the Docker images used during initialization; the image version is v1.15.1. You are welcome to use this archive.

链接:https://pan.baidu.com/s/1Pk5B6e2-14yZW11PYMdtbQ 
提取码:7wox 

For reference, the command used to pack the images into the archive:

[root@k8s-node1 mnt]# docker save $(docker images |  grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS="\n"}{print $1,$2}') -o k8s_images_v1.5.1.tar
After downloading the archive, the import command is:
[root@k8s-node1 mnt]# docker load < k8s_images_v1.5.1.tar
Then run
[root@k8s-node1 mnt]# docker images 
and you will see all the required base images. Alternatively, if you have a separate tar file for each base image, just write a small shell script, put it in the same directory as the tar files, and run it to load them, as shown below:
[root@k8s-node1 mnt]# vi docker_load.sh
docker load < quay.io#calico#node.tar
docker load < quay.io#calico#cni.tar
docker load < quay.io#calico#kube-controllers.tar
docker load < k8s.gcr.io#kube-proxy-amd64.tar
docker load < k8s.gcr.io#kube-scheduler-amd64.tar
docker load < k8s.gcr.io#kube-controller-manager-amd64.tar
docker load < k8s.gcr.io#kube-apiserver-amd64.tar
docker load < k8s.gcr.io#etcd-amd64.tar
docker load < k8s.gcr.io#k8s-dns-dnsmasq-nanny-amd64.tar
docker load < k8s.gcr.io#k8s-dns-sidecar-amd64.tar
docker load < k8s.gcr.io#k8s-dns-kube-dns-amd64.tar
docker load < k8s.gcr.io#pause-amd64.tar
docker load < quay.io#coreos#etcd.tar
docker load < quay.io#calico#node.tar
docker load < quay.io#calico#cni.tar
docker load < quay.io#calico#kube-policy-controller.tar
docker load < gcr.io#google_containers#etcd.tar
[root@k8s-node1 mnt]#  source docker_load.sh

Place the image tars and the script in the same directory and run the script to import the Docker images, then check with docker images.

Method 2 (recommended)

Download the images needed for initialization in advance.

Option A: pull from a domestic mirror and then retag (recommended)

First list the images that will be needed. Note: the versions of the four core component images you pull must match the versions of kubelet, kubeadm, and kubectl you installed.

[root@k8s-node1 ~]# kubeadm config images list
W0801 08:08:18.271449   21980 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 08:08:18.271760   21980 version.go:99] falling back to the local client version: v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Download the images. (Here a single combined command pulls them all; as the error at the end shows, the coredns/coredns:1.3.1 image fails to pull and has to be pulled manually afterwards.)

[root@k8s-node1 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' |sh -x
W0801 08:09:14.832272   22033 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 08:09:14.832330   22033 version.go:99] falling back to the local client version: v1.15.1
+ docker pull mirrorgooglecontainers/kube-apiserver:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-apiserver
6cf6a0b0da0d: Pull complete 
5899bcec7bbf: Pull complete 
Digest: sha256:db15b7caa01ebea2510605f391fabaed06674438315a7b6313e18e93affa15bb
Status: Downloaded newer image for mirrorgooglecontainers/kube-apiserver:v1.15.1
docker.io/mirrorgooglecontainers/kube-apiserver:v1.15.1
+ docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-controller-manager
6cf6a0b0da0d: Already exists 
5c943020ad72: Pull complete 
Digest: sha256:271de9f26d55628cc58e048308bef063273fe68352db70dca7bc38df509d1023
Status: Downloaded newer image for mirrorgooglecontainers/kube-controller-manager:v1.15.1
docker.io/mirrorgooglecontainers/kube-controller-manager:v1.15.1
+ docker pull mirrorgooglecontainers/kube-scheduler:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-scheduler
6cf6a0b0da0d: Already exists 
66ca8e0fb424: Pull complete 
Digest: sha256:ffac8b6f6b9fe21f03c92ceb0855a7fb65599b8a7e7f8090182a02470a7d2ea6
Status: Downloaded newer image for mirrorgooglecontainers/kube-scheduler:v1.15.1
docker.io/mirrorgooglecontainers/kube-scheduler:v1.15.1
+ docker pull mirrorgooglecontainers/kube-proxy:v1.15.1
v1.15.1: Pulling from mirrorgooglecontainers/kube-proxy
6cf6a0b0da0d: Already exists 
8e1ce322a1d9: Pull complete 
3a8a38f10886: Pull complete 
Digest: sha256:3d4e2f537c121bf6a824e564aaf406ead9466f04516a34f8089b4e4bb7abb33b
Status: Downloaded newer image for mirrorgooglecontainers/kube-proxy:v1.15.1
docker.io/mirrorgooglecontainers/kube-proxy:v1.15.1
+ docker pull mirrorgooglecontainers/pause:3.1
3.1: Pulling from mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete 
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for mirrorgooglecontainers/pause:3.1
docker.io/mirrorgooglecontainers/pause:3.1
+ docker pull mirrorgooglecontainers/etcd:3.3.10
3.3.10: Pulling from mirrorgooglecontainers/etcd
860b4e629066: Pull complete 
3de3fe131c22: Pull complete 
12ec62a49b1f: Pull complete 
Digest: sha256:8a82adeb3d0770bfd37dd56765c64d082b6e7c6ad6a6c1fd961dc6e719ea4183
Status: Downloaded newer image for mirrorgooglecontainers/etcd:3.3.10
docker.io/mirrorgooglecontainers/etcd:3.3.10
+ docker pull mirrorgooglecontainers/coredns:1.3.1
Error response from daemon: pull access denied for mirrorgooglecontainers/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
# the pull failed here
[root@k8s-node1 ~]# 
So pull the failing image manually:
[root@k8s-node1 ~]# docker pull coredns/coredns:1.3.1
1.3.1: Pulling from coredns/coredns
e0daa8927b68: Pull complete 
3928e47de029: Pull complete 
Digest: sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
Status: Downloaded newer image for coredns/coredns:1.3.1
docker.io/coredns/coredns:1.3.1
[root@k8s-node1 ~]# # retag the images so they carry the k8s.gcr.io names
[root@k8s-node1 ~]# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x
+ docker tag mirrorgooglecontainers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
+ docker tag mirrorgooglecontainers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
+ docker tag mirrorgooglecontainers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
+ docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
+ docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
+ docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
Retag coredns manually as well:
[root@k8s-node1 ~]# docker tag coredns/coredns:1.3.1  k8s.gcr.io/coredns:1.3.1
[root@k8s-node1 ~]# docker images
REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
mirrorgooglecontainers/kube-apiserver            v1.15.1             68c3eb07bfc3        2 weeks ago         207MB
k8s.gcr.io/kube-apiserver                        v1.15.1             68c3eb07bfc3        2 weeks ago         207MB
mirrorgooglecontainers/kube-controller-manager   v1.15.1             d75082f1d121        2 weeks ago         159MB
k8s.gcr.io/kube-controller-manager               v1.15.1             d75082f1d121        2 weeks ago         159MB
mirrorgooglecontainers/kube-scheduler            v1.15.1             b0b3c4c404da        2 weeks ago         81.1MB
k8s.gcr.io/kube-scheduler                        v1.15.1             b0b3c4c404da        2 weeks ago         81.1MB
mirrorgooglecontainers/kube-proxy                v1.15.1             89a062da739d        2 weeks ago         82.4MB
k8s.gcr.io/kube-proxy                            v1.15.1             89a062da739d        2 weeks ago         82.4MB
coredns/coredns                                  1.3.1               eb516548c180        6 months ago        40.3MB
k8s.gcr.io/coredns                               1.3.1               eb516548c180        6 months ago        40.3MB
hello-world                                      latest              fce289e99eb9        7 months ago        1.84kB
mirrorgooglecontainers/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
k8s.gcr.io/etcd                                  3.3.10              2c4adeb21b4f        8 months ago        258MB
mirrorgooglecontainers/pause                     3.1                 da86e6ba6ca1        19 months ago       742kB
k8s.gcr.io/pause                                 3.1                 da86e6ba6ca1        19 months ago       742kB
[root@k8s-node1 ~]# 
You can see that many images are now duplicated, so delete the ones that are no longer needed:
[root@k8s-node1 ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi "  $1":"$2}' | sh -x
+ docker rmi mirrorgooglecontainers/kube-scheduler:v1.15.1
Untagged: mirrorgooglecontainers/kube-scheduler:v1.15.1
Untagged: mirrorgooglecontainers/kube-scheduler@sha256:ffac8b6f6b9fe21f03c92ceb0855a7fb65599b8a7e7f8090182a02470a7d2ea6
+ docker rmi mirrorgooglecontainers/kube-proxy:v1.15.1
Untagged: mirrorgooglecontainers/kube-proxy:v1.15.1
Untagged: mirrorgooglecontainers/kube-proxy@sha256:3d4e2f537c121bf6a824e564aaf406ead9466f04516a34f8089b4e4bb7abb33b
+ docker rmi mirrorgooglecontainers/kube-apiserver:v1.15.1
Untagged: mirrorgooglecontainers/kube-apiserver:v1.15.1
Untagged: mirrorgooglecontainers/kube-apiserver@sha256:db15b7caa01ebea2510605f391fabaed06674438315a7b6313e18e93affa15bb
+ docker rmi mirrorgooglecontainers/kube-controller-manager:v1.15.1
Untagged: mirrorgooglecontainers/kube-controller-manager:v1.15.1
Untagged: mirrorgooglecontainers/kube-controller-manager@sha256:271de9f26d55628cc58e048308bef063273fe68352db70dca7bc38df509d1023
+ docker rmi mirrorgooglecontainers/etcd:3.3.10
Untagged: mirrorgooglecontainers/etcd:3.3.10
Untagged: mirrorgooglecontainers/etcd@sha256:8a82adeb3d0770bfd37dd56765c64d082b6e7c6ad6a6c1fd961dc6e719ea4183
+ docker rmi mirrorgooglecontainers/pause:3.1
Untagged: mirrorgooglecontainers/pause:3.1
Untagged: mirrorgooglecontainers/pause@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Delete the remaining unused image manually:
[root@k8s-node1 ~]# docker rmi coredns/coredns:1.3.1
Untagged: coredns/coredns:1.3.1
Untagged: coredns/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4

Finally, take another look at the prepared images:

[root@k8s-node1 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.15.1             89a062da739d        2 weeks ago         82.4MB
k8s.gcr.io/kube-scheduler            v1.15.1             b0b3c4c404da        2 weeks ago         81.1MB
k8s.gcr.io/kube-apiserver            v1.15.1             68c3eb07bfc3        2 weeks ago         207MB
k8s.gcr.io/kube-controller-manager   v1.15.1             d75082f1d121        2 weeks ago         159MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        6 months ago        40.3MB
hello-world                          latest              fce289e99eb9        7 months ago        1.84kB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        19 months ago       742kB
[root@k8s-node1 ~]# 
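For reuse later (for example on a machine where the images were not pulled before cloning), the pull/retag/cleanup steps above can be collapsed into one small script. This is only a sketch: pull_k8s_images.sh is just a name chosen here, and it assumes the same Docker Hub mirrors used above are still available.

[root@k8s-node1 ~]# vi pull_k8s_images.sh
#!/bin/bash
# Pull the v1.15.1 control-plane images from Docker Hub mirrors and retag them as k8s.gcr.io
set -e
images="kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 kube-scheduler:v1.15.1 kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10"
for img in $images; do
    docker pull "mirrorgooglecontainers/${img}"
    docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
    docker rmi  "mirrorgooglecontainers/${img}"
done
# coredns lives under its own namespace on Docker Hub
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1
[root@k8s-node1 ~]# sh pull_k8s_images.sh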


Option B: change the Docker repository address (imageRepository) in the kubeadm configuration file. Note: this only works on version 1.11(?) and above.

There is no configuration file at first, so generate one with the following command:

[root@k8s-node1 ~]# kubeadm config print init-defaults > kubeadm.conf

Change imageRepository: k8s.gcr.io in the configuration file to your own private Docker registry or mirror, for example:

Note that xxxxxx below stands for the ID of your own Aliyun registry accelerator.
If you can't find it, see: https://blog.csdn.net/qq_28513801/article/details/93381492
[root@k8s-node1 ~]# sed -i '/^imageRepository/ s/k8s\.gcr\.io/xxxxxxx\.mirror\.aliyuncs\.com\/google_containers/g' kubeadm.conf
The line then reads, for example:
imageRepository: xxxxxx.mirror.aliyuncs.com/google_containers

Then pull the images:

[root@k8s-node1 ~]#  kubeadm config images list --config kubeadm.conf
[root@k8s-node1 ~]#  kubeadm config images pull --config kubeadm.conf
[root@k8s-node1 ~]#  docker images   # check the images

5 Clone the virtual machine

As mentioned earlier, once Kubernetes is installed on k8s-node1, it is time to clone the virtual machine.

5.1 Shut the VM down cleanly before cloning (choose "normal shutdown"). Then right-click the VM and choose Clone.

You can also use the keyboard shortcut Ctrl+O to clone.

Choose Full clone.

Repeat the same operation to clone one more VM, so that you end up with three VMs in total.

6 Add a network adapter (important)

After cloning, if you start the three VMs right away you will find that every machine has the same IP address (on the enp0s3 NIC).
This is because cloning also copied the NIC settings, and with identical addresses the three nodes cannot reach each other.
Therefore I recommend not starting the VMs immediately after cloning; first add a second network adapter to each VM for node-to-node communication.
Set the new adapter's attachment mode to "Host-Only".

Do the same for the other two VMs, then start node1, node2, and node3 in turn,
and set the hostnames of the other two machines to k8s-node2 and k8s-node3.
Once the adapters are added and the three VMs are running, a few simple settings remain.

6.1 Set up port forwarding again

For convenience, set up port forwarding for the new VMs in the same way as for the first one.
node2: use port 3333.
node3: use port 4444.
All three sessions are now connected.

Now check each node's IP addresses:

k8s-node1:
[root@k8s-node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo  valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host  valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:87:4d:7b brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3  valid_lft 86026sec preferred_lft 86026sec
    inet6 fe80::a00:27ff:fe87:4d7b/64 scope link  valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:bb:7d:bb brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.9/24 brd 192.168.10.255 scope global dynamic enp0s8  valid_lft 826sec preferred_lft 826sec
    inet6 fe80::a00:27ff:febb:7dbb/64 scope link  valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:4d:d1:78:98 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0  valid_lft forever preferred_lft forever
[root@k8s-node1 ~]# 

k8s-node2:
[root@k8s-node2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo  valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host  valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:90:ef:0f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3  valid_lft 86053sec preferred_lft 86053sec
    inet6 fe80::a00:27ff:fe90:ef0f/64 scope link  valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:80:99:e9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.10/24 brd 192.168.10.255 scope global dynamic enp0s8  valid_lft 853sec preferred_lft 853sec
    inet6 fe80::a00:27ff:fe80:99e9/64 scope link  valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:74:cf:5e:b2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0  valid_lft forever preferred_lft forever
[root@k8s-node2 ~]# 

k8s-node3:
[root@k8s-node3 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo  valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host  valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:dd:a5:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3  valid_lft 86078sec preferred_lft 86078sec
    inet6 fe80::a00:27ff:fedd:a5a9/64 scope link  valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:fc:6b:97 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global dynamic enp0s8  valid_lft 878sec preferred_lft 878sec
    inet6 fe80::a00:27ff:fefc:6b97/64 scope link  valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:25:5a:24:2c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0  valid_lft forever preferred_lft forever
[root@k8s-node3 ~]# 

6.2 Set up passwordless SSH (do this on all three nodes)

To make the rest of the work easier, configure passwordless SSH login between the nodes.

[root@k8s-node1 ~]# ssh-keygen   # press Enter four times to accept the defaults
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
d5:0d:91:66:3c:1d:6d:25:c7:3e:a6:ff:49:75:41:bd root@k8s-node1
The key's randomart image is:
+--[ RSA 2048]----+
|           .o+o==|
|           .*ooo=|
|          .o...+.|
|         .     Eo|
|        S     o +|
|             .  o|
|              .. |
|              ...|
|               .o|
+-----------------+
[root@k8s-node1 ~]# 
[root@k8s-node1 ~]# ssh-copy-id k8s-node1
The authenticity of host 'k8s-node1 (192.168.10.9)' can't be established.
ECDSA key fingerprint is 80:4b:68:67:55:3a:b6:57:64:0a:98:e9:0e:df:c0:21.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-node1 ~]# ssh-copy-id k8s-node2
The authenticity of host 'k8s-node2 (192.168.10.10)' can't be established.
ECDSA key fingerprint is 80:4b:68:67:55:3a:b6:57:64:0a:98:e9:0e:df:c0:21.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-node1 ~]# ssh-copy-id k8s-node3
The authenticity of host 'k8s-node3 (192.168.10.11)' can't be established.
ECDSA key fingerprint is 80:4b:68:67:55:3a:b6:57:64:0a:98:e9:0e:df:c0:21.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node3's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node3'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-node1 ~]# 
Then repeat the same steps on k8s-node2 and k8s-node3.
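Since the three ssh-copy-id commands differ only in the hostname, they can also be run as a small loop on each node (a sketch; it still prompts once per target for the password):

for h in k8s-node1 k8s-node2 k8s-node3; do ssh-copy-id "$h"; done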

6.3 Set up /etc/hosts mappings

[root@k8s-node3 ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.9 k8s-node1
192.168.10.10 k8s-node2
192.168.10.11 k8s-node3
Since passwordless SSH was configured in the previous step, the file can be pushed to the other nodes directly:
[root@k8s-node1 ~]# scp /etc/hosts k8s-node2:/etc/hosts
hosts                                                                                                          100%  229     0.2KB/s   00:00    
[root@k8s-node1 ~]# scp /etc/hosts k8s-node3:/etc/hosts
hosts                  
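A quick check that both the hostname mappings and passwordless SSH work from k8s-node1 (a minimal sketch):

[root@k8s-node1 ~]# for h in k8s-node2 k8s-node3; do ssh "$h" hostname; done
# should print k8s-node2 and k8s-node3 without asking for a password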

7 Create the cluster

7.1 About kubeadm

With all the preparation done, we can now actually create the cluster. We use the official kubeadm tool, which makes creating a Kubernetes cluster fast and convenient. For details on kubeadm, see the official documentation: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/.

Before initializing, it is worth understanding the kubeadm init flags:


--apiserver-advertise-address string
    The IP address the API server will advertise it is listening on. If set to `0.0.0.0`, the default network interface address is used.
--apiserver-bind-port int32     Default: 6443
    The port the API server binds to.
--apiserver-cert-extra-sans stringSlice
    Optional extra Subject Alternative Names (SANs) for the API server serving certificate. Can be IP addresses or DNS names.
--cert-dir string     Default: "/etc/kubernetes/pki"
    The path where certificates are stored.
--config string
    Path to a kubeadm configuration file. Warning: the configuration file feature is experimental.
--cri-socket string     Default: "/var/run/dockershim.sock"
    The CRI socket to connect to.
--dry-run
    Do not apply any changes; only print what would be done.
--feature-gates string
    A set of key=value pairs that toggle various features. Options are:
    Auditing=true|false (ALPHA - default=false)
    CoreDNS=true|false (default=true)
    DynamicKubeletConfig=true|false (BETA - default=false)
-h, --help
    Help for the init command.
--ignore-preflight-errors stringSlice
    A list of checks whose errors are shown as warnings instead of errors, e.g. 'IsPrivilegedUser,Swap'. Use 'all' to ignore errors from every check.
--kubernetes-version string     Default: "stable-1"
    Choose a specific Kubernetes version for the control plane.
--node-name string
    Specify the node name.
--pod-network-cidr string
    The IP address range the pod network may use. If set, the control plane automatically allocates CIDRs to every node.
--service-cidr string     Default: "10.96.0.0/12"
    Use an alternative IP address range for service virtual IPs.
--service-dns-domain string     Default: "cluster.local"
    Use an alternative domain for services, e.g. "myorg.internal".
--skip-token-print
    Do not print the default bootstrap token generated by `kubeadm init`.
--token string
    The token used to establish mutual trust between nodes and the control plane. Format: [a-z0-9]{6}\.[a-z0-9]{16} - example: abcdef.0123456789abcdef
--token-ttl duration     Default: 24h0m0s
    How long the token remains valid before it is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token never expires.
7.2 Initialize the master

On the master node (k8s-node1), run:

[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9

What the options mean:
1. --pod-network-cidr=192.168.0.0/16 means the cluster will use the Calico network; Calico's subnet range must be specified up front.
2. --kubernetes-version=v1.15.1 pins the Kubernetes version; it must match the Docker images imported earlier, otherwise kubeadm goes to Google and downloads the latest images again.
3. --apiserver-advertise-address is the NIC IP to bind to; be sure to use the enp0s8 address mentioned earlier, otherwise the enp0s3 NIC is used by default.
4. If kubeadm init fails or is interrupted, run kubeadm reset before running it again.

If you hit the following error:

[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node1 ~]# 

How to fix it:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Set the value to 1, as the check requires, with the following command:

[root@k8s-node1 ~]#  echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

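The echo above only lasts until the next reboot. To make the bridge settings permanent (a minimal sketch; br_netfilter ships with the CentOS 7 kernel):

[root@k8s-node1 ~]# modprobe br_netfilter
[root@k8s-node1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@k8s-node1 ~]# sysctl --system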
Then run kubeadm init again:

[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node1 ~]#  echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.10.9
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.9]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [192.168.10.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.003530 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ts9i67.6sn3ylpxri4qimgr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 
[root@k8s-node1 ~]# 

The initialization completed successfully. For convenience, save the join command somewhere safe:

kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 
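If this token is lost or expires (the default TTL is 24 hours), there is no need to re-run kubeadm init; a fresh join command can be printed on the master at any time (a small aside; the command exists in kubeadm v1.15):

[root@k8s-node1 ~]# kubeadm token create --print-join-command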

As the output says, the cluster initialized successfully, and we need to run the following commands:

[root@k8s-node1 ~]# mkdir -p $HOME/.kube
[root@k8s-node1 ~]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-node1 ~]#  sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-node1 ~]# 
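If you are working as root and prefer not to copy the kubeconfig, exporting KUBECONFIG achieves the same thing for the current session (add it to ~/.bash_profile to make it stick):

[root@k8s-node1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf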

The output also reminds us to deploy a pod network, and to run the kubeadm join … command on the other nodes so they join the cluster.

7.3 Deploy the pod network

Without a pod network, the DNS (coredns) pods stay stuck in Pending and the cluster is not usable:

[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-8nftr            0/1     Pending   0          3m28s # stuck in Pending
coredns-5c98db65d4-n2zbj            0/1     Pending   0          3m28s # stuck in Pending
etcd-k8s-node1                      1/1     Running   0          2m44s
kube-apiserver-k8s-node1            1/1     Running   0          2m51s
kube-controller-manager-k8s-node1   1/1     Running   0          2m41s
kube-proxy-cdvhk                    1/1     Running   0          3m28s
kube-scheduler-k8s-node1            1/1     Running   0          2m35s

You can consult the official documentation and pick the network add-on that suits your needs; here we use Calico (already decided when we initialized the cluster).
Per the official docs, run the following on the master node:

[root@k8s-node1 ~]# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

After it completes successfully:

[root@k8s-node1 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-etcd-2hmhv                          1/1     Running   0          60s
calico-kube-controllers-6b6f4f7c64-c7v8p   1/1     Running   0          115s
calico-node-fzzmh                          2/2     Running   2          115s
coredns-5c98db65d4-8nftr                   1/1     Running   0          6m33s
coredns-5c98db65d4-n2zbj                   1/1     Running   0          6m33s
etcd-k8s-node1                             1/1     Running   0          5m49s
kube-apiserver-k8s-node1                   1/1     Running   0          5m56s
kube-controller-manager-k8s-node1          1/1     Running   0          5m46s
kube-proxy-cdvhk                           1/1     Running   0          6m33s
kube-scheduler-k8s-node1                   1/1     Running   0          5m40s
[root@k8s-node1 ~]# 

8 Cluster settings

Use the master as a worker node
By default, Kubernetes does not schedule Pods onto the master, so the master's resources would be wasted. On the master (k8s-node1), run the following command to let it also act as a worker node. (Using this trick you can build a single-node Kubernetes cluster without minikube.)

[root@k8s-node1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-node1 untainted
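If you later decide you do not want regular Pods on the master after all, the taint can be put back (a sketch):

[root@k8s-node1 ~]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule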

8.1 Join the other nodes to the cluster

On the other two nodes, k8s-node2 and k8s-node3, run the kubeadm join command generated by the master to join the cluster.
(It is best to run the following on each new node first:)

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables  

Then join the new node:

kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr \
    --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 

On success it prints:

[root@k8s-node2 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables  
[root@k8s-node2 ~]# kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr     --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]# 

On k8s-node3:

[root@k8s-node3 ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables  
[root@k8s-node3 ~]# kubeadm join 192.168.10.9:6443 --token ts9i67.6sn3ylpxri4qimgr     --discovery-token-ca-cert-hash sha256:32de69c3d3241cab71ef58afd09b9bf16a551b6e4b498d5134b1baae498ac8c0 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node3 ~]# 

If you hit other errors, check the logs. For example:

 [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...
An error like this means you probably forgot to disable swap:
[root@k8s-node1 ~]#  swapoff -a
or disable it permanently by editing /etc/fstab, then verify:
[root@k8s-node1 ~]#  free -m
              total        used        free      shared  buff/cache   available
Mem:            992         524          74           7         392         284
Swap:             0           0           0

After all nodes have joined, wait a moment and run kubectl get nodes on the master:

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    master   13m    v1.15.1
k8s-node2   Ready    <none>   2m2s   v1.15.1
k8s-node3   Ready    <none>   68s    v1.15.1
[root@k8s-node1 ~]# 
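The ROLES column shows <none> for the workers simply because the role is derived from a node label. If you would like it to read "worker", you can add the label yourself (a cosmetic sketch, not required for anything that follows):

[root@k8s-node1 ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=
[root@k8s-node1 ~]# kubectl label node k8s-node3 node-role.kubernetes.io/worker=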

As shown above. If a node reports NotReady, it is not ready yet and is probably still running some initialization; just wait until all nodes turn Ready.
It is also a good idea to check the status of all pods by running kubectl get pods -n kube-system:

[root@k8s-node1 ~]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
calico-etcd-2hmhv                          1/1     Running   0          8m49s   192.168.10.9    k8s-node1   <none>           <none>
calico-kube-controllers-6b6f4f7c64-c7v8p   1/1     Running   0          9m44s   192.168.10.9    k8s-node1   <none>           <none>
calico-node-fzzmh                          2/2     Running   2          9m44s   192.168.10.9    k8s-node1   <none>           <none>
calico-node-r9hh6                          2/2     Running   0          2m59s   192.168.10.10   k8s-node2   <none>           <none>
calico-node-rcqnp                          2/2     Running   0          2m5s    192.168.10.11   k8s-node3   <none>           <none>
coredns-5c98db65d4-8nftr                   1/1     Running   0          14m     192.168.36.65   k8s-node1   <none>           <none>
coredns-5c98db65d4-n2zbj                   1/1     Running   0          14m     192.168.36.66   k8s-node1   <none>           <none>
etcd-k8s-node1                             1/1     Running   0          13m     192.168.10.9    k8s-node1   <none>           <none>
kube-apiserver-k8s-node1                   1/1     Running   0          13m     192.168.10.9    k8s-node1   <none>           <none>
kube-controller-manager-k8s-node1          1/1     Running   0          13m     192.168.10.9    k8s-node1   <none>           <none>
kube-proxy-8g5sw                           1/1     Running   0          2m5s    192.168.10.11   k8s-node3   <none>           <none>
kube-proxy-9z62p                           1/1     Running   0          2m59s   192.168.10.10   k8s-node2   <none>           <none>
kube-proxy-cdvhk                           1/1     Running   0          14m     192.168.10.9    k8s-node1   <none>           <none>
kube-scheduler-k8s-node1                   1/1     Running   0          13m     192.168.10.9    k8s-node1   <none>           <none>
[root@k8s-node1 ~]# 

===========================================================================

That is the entire Kubernetes deployment process.
Next come some basic service checks.

Node status

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   17m     v1.15.1
k8s-node2   Ready    <none>   5m44s   v1.15.1
k8s-node3   Ready    <none>   4m50s   v1.15.1
[root@k8s-node1 ~]# 

Component status

[root@k8s-node1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-node1 ~]# 

Service accounts

[root@k8s-node1 ~]# kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         18m
[root@k8s-node1 ~]#

Cluster info

[root@k8s-node1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.10.9:6443
KubeDNS is running at https://192.168.10.9:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-node1 ~]# 

Verify the DNS function

[root@k8s-node1 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-8ldwk:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-6bf6db5c4f-8ldwk:/ ]$ 


9 Test that the cluster works

We create an nginx service to check whether the cluster is usable.

Create and run a deployment

[root@k8s-node1 ~]# kubectl run nginx1 --replicas=2 --labels="run=load-balancer-example0" --image=nginx  --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx1 created
[root@k8s-node1 ~]# 

Expose the deployment via a NodePort service

[root@k8s-node1 ~]#  kubectl expose deployment nginx1 --type=NodePort --name=example-service
service/example-service exposed
[root@k8s-node1 ~]# 

Look at the details of the service

[root@k8s-node1 ~]# kubectl describe service example-service
Name:                     example-service
Namespace:                default
Labels:                   run=load-balancer-example0
Annotations:              <none>
Selector:                 run=load-balancer-example0
Type:                     NodePort
IP:                       10.105.173.123
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30102/TCP
Endpoints:                192.168.107.196:80,192.168.169.133:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Service status

[root@k8s-node1 ~]# kubectl get service
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
example-service   NodePort    10.105.173.123   <none>        80:30102/TCP   70s
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP        30m
[root@k8s-node1 ~]# 

Check the pods

[root@k8s-node1 ~]# kubectl get pods 
NAME                      READY   STATUS    RESTARTS   AGE
curl-6bf6db5c4f-8ldwk     1/1     Running   1          12m
nginx-5c464d5cf5-b7xlh    1/1     Running   0          3m44s
nginx-5c464d5cf5-klqfd    1/1     Running   0          3m34s
nginx1-7c5744bf79-lc6sz   1/1     Running   0          2m18s
nginx1-7c5744bf79-pt7q4   1/1     Running   0          2m18s

Access the service IP

[root@k8s-node1 ~]# curl 10.105.173.123:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-node1 ~]# 

Accessing an endpoint IP gives the same result as accessing the service IP. These IPs are only reachable from containers and nodes inside the Kubernetes cluster. Endpoints are mapped to a service: the service load-balances across its backend endpoints, and under the hood this is implemented with iptables rules.
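The service-to-endpoint mapping can be inspected directly (a quick check):

[root@k8s-node1 ~]# kubectl get endpoints example-service
# lists the same pod IP:port pairs shown under Endpoints in the describe output above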

Accessing a node IP on the NodePort gives the same page as the cluster IP, and it also works from outside the cluster.

[root@k8s-node1 ~]# curl 192.168.10.9:30102
[root@k8s-node1 ~]# curl 192.168.10.10:30102
[root@k8s-node1 ~]# curl 192.168.10.11:30102

An overview of what happened during this deployment:
① kubectl sends the deployment request to the API Server.

② The API Server notifies the Controller Manager to create a deployment resource.

③ The Scheduler performs scheduling and places the two replica Pods onto node1 and node2.

④ The kubelets on node1 and node2 create and run the Pods on their respective nodes.

As an extra, here are batch import and export scripts for Docker images, implemented in Python:
myGithub_Docker_images_load.py

myGithub_Docker_images_save.py

The next post will cover deploying the web UI (kubernetes-dashboard).
The kubernetes-dashboard releases can be found here:
https://github.com/kubernetes/dashboard/releases

For now, here is the YAML file:

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

A rough walkthrough of what such a file contains (an annotated Pod example):

apiVersion: v1             # API version; must be one of the values listed by kubectl api-versions
kind: Pod                  # the kind/role of resource to create
metadata:                  # metadata/attributes of the resource
  name: web04-pod          # resource name, must be unique within the namespace
  labels:                  # labels for the resource, see http://blog.csdn.net/liyingke112/article/details/77482384
    k8s-app: apache
    version: v1
    kubernetes.io/cluster-service: "true"
  annotations:             # list of custom annotations
    - name: String         # custom annotation name
spec:                      # specification of the resource content
  restartPolicy: Always    # keep the container running at all times; the default policy: if the container exits, an identical one is created immediately
  nodeSelector:            # node selection; first label the host with: kubectl label nodes kube-node1 zone=node1
    zone: node1
  containers:
  - name: web04-pod        # container name
    image: web:apache      # container image address
    imagePullPolicy: Never # one of Always / Never / IfNotPresent: whether to check and pull the image from the registry on every start
                           #   Always: always check; Never: never check (regardless of local copies); IfNotPresent: pull only if not present locally
    command: ['sh']        # command used to start the container; overrides ENTRYPOINT in the Dockerfile
    args: ["$(str)"]       # arguments for the command; corresponds to CMD in the Dockerfile
    env:                   # environment variables inside the container
    - name: str            # variable name
      value: "/etc/run.sh" # variable value
    resources:             # resource management, see http://blog.csdn.net/liyingke112/article/details/77452630
      requests:            # minimum resources the container needs to run normally
        cpu: 0.1           # CPU (cores); either a float or an integer plus m, 0.1 = 100m, minimum 0.001 cores (1m)
        memory: 32Mi       # memory
      limits:              # resource limits
        cpu: 0.5
        memory: 32Mi
    ports:
    - containerPort: 80    # port the container exposes
      name: httpd          # port name
      protocol: TCP
    livenessProbe:         # health check for containers in the pod, see http://blog.csdn.net/liyingke112/article/details/77531584
      httpGet:             # check health via HTTP GET; a 200-399 response means the container is healthy
        path: /            # URI path
        port: 80
        #host: 127.0.0.1   # host address
        scheme: HTTP
      initialDelaySeconds: 180 # how long after container start the first check runs
      timeoutSeconds: 5    # check timeout
      periodSeconds: 15    # interval between checks
      # an exec check can be used instead: the container is unhealthy if the command's exit code is non-zero
      #exec:
      #  command:
      #    - cat
      #    - /tmp/health
      # or a TCP socket check:
      #tcpSocket:
      #  port: number
    lifecycle:             # lifecycle management
      postStart:           # task to run right after the container starts
        exec:
          command:
          - 'sh'
          - 'yum upgrade -y'
      preStop:             # task to run before the container stops
        exec:
          command: ['service httpd stop']
    volumeMounts:          # see http://blog.csdn.net/liyingke112/article/details/76577520
    - name: volume         # name of the volume to mount; must match volumes[*].name
      mountPath: /data     # mount path inside the container
      readOnly: True
  volumes:                 # define a set of volumes
  - name: volume           # volume name
    #emptyDir: {}
    hostPath:
      path: /opt           # hostPath volume pointing at /opt on the host; many other volume types are supported

When writing YAML files, errors frequently come from the strict formatting requirements, so a YAML format checker is a handy tool to keep around.
对于写yaml文件时,经常会因为严格的格式要求而出现error。这里有一个常用的yaml文件格式检查工具。