This article shares how to install and deploy k8s (Kubernetes) on Linux and the pitfalls encountered along the way, and also explains installing k8s on CentOS. In addition it covers: 2021 Kubernetes (k8s) cluster installation and deployment, 30. Kubernetes (k8s) notes: Prometheus (part 1) deployment and installation, CentOS deploying a Kubernetes 1.13 cluster - 1 (installing K8S with kubeadm), and centos7: installing and deploying kubernetes 1.14 with kubeadm. If this happens to solve the problem you are facing, don't forget to follow this site. Let's get started!
Contents of this article:
- linux install and deploy k8s (kubernetes) and the pitfalls encountered (centos k8s install)
- 2021 Kubernetes (k8s) cluster installation and deployment
- 30. Kubernetes (k8s) notes: Prometheus (part 1) deployment and installation
- CentOS deploying a Kubernetes 1.13 cluster - 1 (installing K8S with kubeadm)
- centos7: installing and deploying kubernetes 1.14 with kubeadm
linux install and deploy k8s (kubernetes) and the pitfalls encountered (centos k8s install)
Install Docker first
Offline installation of Docker on CentOS 7
Set the hostname
#Check the Linux kernel version
uname -r
3.10.0-957.el7.x86_64
#Or use uname -a
#Set the hostname to k8s-master; reconnect for the new name to show up
hostnamectl --static set-hostname k8s-master
#Check the hostname
hostname
Disable SELinux
#Permanently disable SELinux
vim /etc/sysconfig/selinux
SELINUX=disabled
#Temporarily disable SELinux so containers can access the host filesystem
setenforce 0
Turn off system swap
#Disable the swap partition (you can leave it on and pass --ignore-preflight-errors=swap instead)
#Temporarily disable
swapoff -a
vi /etc/fstab
#Comment out the swap partition
#/dev/mapper/centos-swap swap
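To double-check that swap really is off before running kubeadm (a small verification of my own, not part of the original steps):
#Both swap totals in free should be 0, and swapon should print nothing
free -m
swapon --show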
Configure a Docker registry mirror for faster pulls in China
#Edit the daemon.json file; create it if it does not exist
vim /etc/docker/daemon.json
{
"registry-mirrors" : ["https://q5bf287q.mirror.aliyuncs.com", "https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries":["192.168.1.5"]
}
#Reload the configuration
systemctl daemon-reload
#Restart docker
systemctl restart docker
192.168.1.5 is the address of a private registry.
Docker uses cgroupfs as the cgroup driver by default; native.cgroupdriver=systemd switches it to the systemd driver.
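A quick way to confirm the cgroup driver change took effect after restarting Docker (my own check, not from the original article):
#Should print: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"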
Configure the k8s yum repository (x86_64)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#Clear the yum cache
yum clean all
#Download the repo metadata from the servers and cache it locally; makecache builds the cache
yum makecache
#List the available kubectl versions
yum list kubectl --showduplicates | sort -r
#The output looks like this:
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Loaded plugins: fastestmirror, langpacks
Installed Packages
Available Packages
* updates: mirrors.ustc.edu.cn
kubectl.x86_64 1.9.9-0 kubernetes
kubectl.x86_64 1.9.8-0 kubernetes
kubectl.x86_64 1.9.7-0 kubernetes
kubectl.x86_64 1.9.6-0 kubernetes
kubectl.x86_64 1.9.5-0 kubernetes
kubectl.x86_64 1.9.4-0 kubernetes
kubectl.x86_64 1.9.3-0 kubernetes
kubectl.x86_64 1.9.2-0 kubernetes
kubectl.x86_64 1.9.11-0 kubernetes
kubectl.x86_64 1.9.1-0 kubernetes
kubectl.x86_64 1.9.10-0 kubernetes
kubectl.x86_64 1.9.0-0 kubernetes
kubectl.x86_64 1.8.9-0 kubernetes
kubectl.x86_64 1.8.8-0 kubernetes
kubectl.x86_64 1.8.7-0 kubernetes
kubectl.x86_64 1.8.6-0 kubernetes
kubectl.x86_64 1.8.5-0 kubernetes
kubectl.x86_64 1.8.4-0 kubernetes
kubectl.x86_64 1.8.3-0 kubernetes
kubectl.x86_64 1.8.2-0 kubernetes
kubectl.x86_64 1.8.15-0 kubernetes
kubectl.x86_64 1.8.14-0 kubernetes
kubectl.x86_64 1.8.13-0 kubernetes
kubectl.x86_64 1.8.12-0 kubernetes
kubectl.x86_64 1.8.11-0 kubernetes
kubectl.x86_64 1.8.1-0 kubernetes
kubectl.x86_64 1.8.10-0 kubernetes
kubectl.x86_64 1.8.0-0 kubernetes
kubectl.x86_64 1.7.9-0 kubernetes
kubectl.x86_64 1.7.8-1 kubernetes
kubectl.x86_64 1.7.7-1 kubernetes
kubectl.x86_64 1.7.6-1 kubernetes
kubectl.x86_64 1.7.5-0 kubernetes
kubectl.x86_64 1.7.4-0 kubernetes
kubectl.x86_64 1.7.3-1 kubernetes
kubectl.x86_64 1.7.2-0 kubernetes
kubectl.x86_64 1.7.16-0 kubernetes
kubectl.x86_64 1.7.15-0 kubernetes
kubectl.x86_64 1.7.14-0 kubernetes
kubectl.x86_64 1.7.11-0 kubernetes
kubectl.x86_64 1.7.1-0 kubernetes
kubectl.x86_64 1.7.10-0 kubernetes
kubectl.x86_64 1.7.0-0 kubernetes
kubectl.x86_64 1.6.9-0 kubernetes
kubectl.x86_64 1.6.8-0 kubernetes
kubectl.x86_64 1.6.7-0 kubernetes
kubectl.x86_64 1.6.6-0 kubernetes
kubectl.x86_64 1.6.5-0 kubernetes
kubectl.x86_64 1.6.4-0 kubernetes
kubectl.x86_64 1.6.3-0 kubernetes
kubectl.x86_64 1.6.2-0 kubernetes
kubectl.x86_64 1.6.13-0 kubernetes
Configure iptables bridge settings
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
#Apply the settings above
sysctl --system
#Or set them directly like this
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
echo "1" >/proc/sys/net/bridge/bridge-nf-call-ip6tables
#Make sure both of these print 1
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
Install kubelet, kubeadm, and kubectl
#Install the latest version, or install a specific version
yum install -y kubelet kubeadm kubectl
#Install a specific version of kubelet, kubeadm and kubectl
yum install -y kubelet-1.19.3-0 kubeadm-1.19.3-0 kubectl-1.19.3-0
#Check the kubelet version
kubelet --version
#Output:
Kubernetes v1.19.3
#Check the kubeadm version
kubeadm version
#Output:
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Start kubelet and enable it at boot
#Reload the systemd unit files
systemctl daemon-reload
#Start kubelet
systemctl start kubelet
#Check kubelet status
systemctl status kubelet
#It will not start successfully yet; ignore the error, kubeadm init will bring it up later
#Enable start on boot
systemctl enable kubelet
#Check whether kubelet is enabled at boot (enabled: on, disabled: off)
systemctl is-enabled kubelet
#View the kubelet logs
journalctl -xefu kubelet
Initialize the k8s cluster master
--apiserver-advertise-address=192.168.0.5 is the master's IP
--image-repository registry.aliyuncs.com/google_containers specifies the image registry; if not specified, the default is k8s.gcr.io, which cannot be reached from mainland China without a proxy
#Run the init command
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.0.5 --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
#Error 1: [ERROR Swap]: running with swap on is not supported. Please disable swap
#Output below: if swap was not turned off, either disable it or add --ignore-preflight-errors=swap
W0525 15:17:52.768575 19864 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
#Error 2: The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused
The error output:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
#Fix: add/edit the following file; the key setting is --cgroup-driver=systemd
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
#Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
#Success: output like the following indicates the init succeeded:
W0511 11:11:24.998096 15272 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.0.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rt0fpo.4axz6cd6eqpm1ihf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.5:6443 --token rt0fpo.4axz6.....m1ihf \
--discovery-token-ca-cert-hash sha256:ac20e89e8bf43b56......516a41305c1c1fd5c7
Be sure to save the last command in the output: kubeadm join...
###Save this command; it is needed later when adding nodes
###kubeadm join 192.168.0.5:6443 --token rt0fpo.4axz6....
#As instructed by the output, run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
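As a quick sanity check after copying the kubeconfig (my own addition, not from the original article), confirm the API server answers; if you are running as root you can instead point KUBECONFIG at the admin config:
#Verify the control plane is reachable
kubectl cluster-info
kubectl get pods -n kube-system
#Root-only alternative to copying the config
export KUBECONFIG=/etc/kubernetes/admin.conf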
View the k8s cluster nodes
#List the nodes
kubectl get node
#Output:
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 4m13s v1.19.3
#The status is NotReady because no network plugin has been installed yet
#Check the kubelet logs
journalctl -xef -u kubelet -n 20
#Output: it reports that the CNI network plugin is not installed
May 11 11:15:26 k8s-master kubelet[16678]: W0511 11:15:26.356793 16678 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
May 11 11:15:28 k8s-master kubelet[16678]: E0511 11:15:28.237122 16678 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Install the flannel network plugin (CNI)
#Create a directory
mkdir flannel && cd flannel
#Download the manifest
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#kube-flannel.yml pulls an image; pull it ahead of time here
docker pull quay.io/coreos/flannel:v0.14.0-rc1
#Create the flannel network plugin resources
kubectl apply -f kube-flannel.yml
#After a short while the cluster node shows Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9m39s v1.19.3
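To watch the flannel pods come up while the node transitions to Ready (a hedged check of mine; the app=flannel label and kube-system namespace match the upstream manifest used above, but may differ in newer releases):
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes -o wide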
Add worker nodes to the k8s cluster
Following the steps above, install docker, kubelet, kubectl and kubeadm on each node, then run the command printed at the end of the k8s initialization.
kubeadm join 192.168.0.5:6443 --token rt0fpo.4axz6....
#After a node joins successfully, check it from the master
kubectl get nodes
k8s-master Ready master 147d v1.19.3
Node-1 Ready <none> 146d v1.19.3
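If the original join command has been lost, or the bootstrap token (valid for 24 hours by default) has expired, a fresh one can be generated on the master; this is a standard kubeadm command rather than something shown in the original article:
#Creates a new token and prints a complete join command
kubeadm token create --print-join-command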
#List the images k8s needs to download
kubeadm config images list
#Output:
I0511 09:36:15.377901 9508 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.19
W0511 09:36:17.124062 9508 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.10
k8s.gcr.io/kube-controller-manager:v1.19.10
k8s.gcr.io/kube-scheduler:v1.19.10
k8s.gcr.io/kube-proxy:v1.19.10
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
If the init command was not given --image-repository registry.aliyuncs.com/google_containers to specify a registry, these images would have to be downloaded through a proxy, or pulled from another mirror and then re-tagged.
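One way to pre-pull all of the control-plane images through the Aliyun mirror before running init (a sketch using standard kubeadm flags; adjust the version to match your install):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.3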
Note: the IP for --apiserver-advertise-address=192.168.0.5 must be the internal (private) IP; using the public IP produces the following error:
W0511 09:58:49.950542 20273 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 116.65.37.123]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [116.65.37.123 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [116.65.37.123 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
The message suggests adding --v=5 to print verbose details.
#Run it again
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=116.73.117.123 --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16 --v=5
#The error output:
W0511 10:04:28.999779 24707 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
#Errors such as ports 10259 and 10257 already being in use
#Reset k8s
kubeadm reset
#Or use kubeadm reset -f
#Then initialize again
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=116.73.117.123 --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16 --v=5
#It still fails and hangs here; the cause was using the public IP, a self-inflicted pitfall:
[kubelet-check] Initial timeout of 40s passed.
2021 Kubernetes (k8s) cluster installation and deployment
I. Environment preparation
1. Server environment
Each node must have >= 2 CPU cores, otherwise k8s will not start. DNS: preferably use a DNS server reachable on the local network, otherwise the network will not work and some images cannot be downloaded. Linux kernel: the kernel must be version 4 or above, so it must be upgraded. Prepare 3 virtual machines, or 3 Alibaba Cloud servers:
k8s-master01: used to install the k8s master environment
k8s-node01: used to install a k8s node environment
k8s-node02: used to install a k8s node environment
2. Dependencies
(1) Set the hostname on each machine
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
#Check the hostname
hostname
#Configure IP-to-hostname mappings
vi /etc/hosts
192.168.140.128 k8s-master01
192.168.140.140 k8s-node01
192.168.140.139 k8s-node02
(2) Install dependency packages. Note: every machine needs these installed.
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc
(3) Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
#Flush iptables
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
(4) Upgrade the Linux kernel to version 4.4 or newer
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
#Install the kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
List all installed kernel versions:
cat /boot/grub2/grub.cfg | grep menuentry
Check the current default boot kernel:
grub2-editenv list
Change the default boot kernel so the machine boots into the new one:
grub2-set-default 'CentOS Linux (5.7.7-1.el7.elrepo.x86_64) 7 (Core)'
#Note: after setting the default kernel, the server must be rebooted for it to take effect.
#Check the running kernel
uname -r
(5) Tune kernel parameters for k8s
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
#Copy the tuning file into /etc/sysctl.d/ so it is applied at boot
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
#Reload it manually so it takes effect immediately
sysctl -p /etc/sysctl.d/kubernetes.conf
(6) Set the system timezone (skip if it is already configured)
#Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
#Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
#Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
(7) Stop services the system does not need
systemctl stop postfix && systemctl disable postfix
(8) Configure journald log storage
Create the directory for persistent logs:
mkdir /var/log/journal
Create the directory for the config drop-in:
mkdir /etc/systemd/journald.conf.d
Create the config file:
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
Restart systemd-journald to apply the configuration:
systemctl restart systemd-journald
(9) Raise the open-file limit (optional, can be skipped)
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
(10) Prerequisites for enabling ipvs in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Note: nf_conntrack_ipv4 applies to 4.x kernels; on the kernel installed above it must be changed to modprobe -- nf_conntrack, and the same change applies below.
##Use lsmod to check whether these modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
3. Docker deployment
(1) Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
#Next configure the stable repository; the repo config is saved to /etc/yum.repos.d/docker-ce.repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#Update the yum packages and install Docker CE
yum update -y && yum install docker-ce
(2) Configure the docker daemon file
#Create the /etc/docker directory
mkdir /etc/docker
#Write the daemon.json file
cat > /etc/docker/daemon.json <<EOF
{"exec-opts": ["native.cgroupdriver=systemd"],"log-driver": "json-file","log-opts": {"max-size": "100m"}}
EOF
#Note: watch out for file-encoding problems; if an error occurs, journalctl -amu docker will reveal it
#Create the directory for docker service drop-in configuration
mkdir -p /etc/systemd/system/docker.service.d
(3) Restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
4. Installing kubeadm
(1) Installing kubernetes requires the kubelet, kubeadm and related packages, but the official yum repository is packages.cloud.google.com, which is not reachable from mainland China, so use the Aliyun yum mirror instead.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum update -y
(2) Install kubeadm, kubelet and kubectl
yum install -y kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
(3) Start kubelet
systemctl enable kubelet && systemctl start kubelet
II. Cluster installation
1. Required images
Upload the image archive kubeadm-basic.images.tar.gz and import the images it contains into the local docker image store.
Baidu Netdisk link: https://pan.baidu.com/s/1SplT... extraction code: grcd
When kubeadm initializes the k8s cluster it pulls the required images from Google's registry; the images are fairly large and slow to download, so pre-downloaded images are used here instead.
Write a script to import the image archive into the local docker image store:
(1) Image import script (create a shell script named image-load.sh in any directory)
#!/bin/bash
#Note: this path is where the image archive was extracted
ls /root/kubeadm-basic.images > /tmp/images-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/images-list.txt)
do
docker load -i $i
done
rm -rf /tmp/images-list.txt
(2) Make the script executable
chmod 755 image-load.sh
(3) Run it to import the images
./image-load.sh
(4) Copy the script and images to the other node machines
#Copy to node01
scp -r image-load.sh kubeadm-basic.images root@k8s-node01:/root/
#Copy to node02
scp -r image-load.sh kubeadm-basic.images root@k8s-node02:/root/
#Run the script on each of the other nodes to import the images
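A quick way to confirm the import worked on each node (my own check, assuming the archive contains the usual k8s.gcr.io images):
#Should list kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns
docker images | grep -E 'k8s.gcr.io|coredns|etcd|pause'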
2. k8s deployment
#Initialize the master node (run only on the master)
(1) Generate the default yaml config file
kubeadm config print init-defaults > kubeadm-config.yaml
(2) Edit the yaml config file
localAPIEndpoint:
advertiseAddress: 192.168.66.10 # Note: change this to the master's IP address
kubernetesVersion: v1.15.1 # Note: set the version number; it must match the installed kubectl version
networking:
# Pod subnet used by flannel; this must match the flannel network
podSubnet: "10.244.0.0/16"
serviceSubnet: "10.96.0.0/12"
# Tell kube-proxy to use ipvs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
SupportIPVSProxyMode: true
mode: ipvs
(3) Initialize the master node and start the deployment
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
#Note: this command requires more than one CPU core, otherwise it will not succeed
If initialization fails, run: kubeadm reset
After the kubernetes master node initializes successfully, the output looks like this:
(Screenshot: successful master initialization output)
Following the instructions k8s prints, run the commands below.
(4) Run the following after a successful init
#Create the directory that holds the kubeconfig and credentials
mkdir -p $HOME/.kube
#Copy the cluster admin config
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#Set ownership of the config file
chown $(id -u):$(id -g) $HOME/.kube/config
After running these commands, query the nodes:
(Screenshot: kubectl get node output)
The node information can now be queried successfully, but the node status is NotReady rather than Ready. The reason is that we use ipvs + flannel for networking and the flannel network plugin has not been deployed yet, so the node stays NotReady.
3. The flannel plugin
#Deploy the flannel network plugin (run only on the master)
(1) Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(2) Deploy flannel
kubectl create -f kube-flannel.yml
#Or deploy it directly from the URL
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
When deploying the flannel plugin, watch out for network connectivity problems:
(Screenshot: flannel deployment output)
To add the remaining worker nodes, run the join command from the installation log:
#View the install log
cat kubeadm-init.log
#Copy the join command and run it on each of the other node machines
kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8
After that, check the result as shown below; some nodes may still show NotReady because their pod containers are still initializing, so wait a little while:
(Screenshot: kubectl get nodes after joining the workers)
Query detailed pod information in the namespace:
(Screenshot: kubectl get pod output with pod details)
30. Kubernetes (k8s) notes: Prometheus (part 1) deployment and installation
Prometheus official site:
Introduction to Prometheus
1. Introduction to Prometheus
Prometheus is an open-source system monitoring and alerting system. It has joined the CNCF, becoming the second project hosted there after Kubernetes. Kubernetes container platforms are commonly monitored with Prometheus; it supports many exporters for collecting data as well as a pushgateway for pushing data, and its performance is enough to support clusters of tens of thousands of machines.
2. Prometheus features
2.1 Features
1) Multi-dimensional data model
Every time series is uniquely identified by its metric name and a set of label key-value pairs. The metric name specifies the measured feature of the monitored system (e.g. http_requests_total, the total count of received HTTP requests). Labels enable Prometheus's multi-dimensional data model: for the same metric name, different combinations of labels form distinct dimensional instances of that metric (for example, all HTTP requests for /api/tracks labelled with method=POST form one concrete series). The query language filters and aggregates over these metrics and labels, and changing any label value on any metric creates a new time series.
2) A flexible query language (PromQL) that supports addition, multiplication, joins and other operations on the collected metrics;
3) Can run standalone on local storage without depending on external distributed storage;
4) Collects time series data over HTTP using a pull model;
5) Time series can also be pushed to the Prometheus server via an intermediary pushgateway;
6) Targets can be discovered via service discovery or static configuration;
7) Multiple visualization front-ends are available, such as Grafana;
8) Efficient storage: each sample takes roughly 3.5 bytes; 3 million time series at a 30s interval retained for 60 days consume about 200 GB of disk;
9) For high availability, data can be backed up off-site, federation can be used, multiple Prometheus instances can be deployed, and a pushgateway can be used for reporting.
2.2 What is a sample?
Each point in a time series is called a sample. A sample consists of three parts:
Metric: the metric name plus the labelset describing the sample;
Timestamp: a millisecond-precision timestamp;
Value: a float64 value representing the sample's value.
Notation: a time series with a given metric name and label set is written as <metric name>{<label name>=<label value>, ...}
For example, the time series with metric name api_http_requests_total and labels method="POST" and handler="/messages" is written as: api_http_requests_total{method="POST", handler="/messages"}
Metric types:
Counter: a cumulative metric, e.g. the number of requests, completed tasks, or errors.
Gauge: a regular metric such as a temperature; it is an instantaneous value that can go up and down arbitrarily over time.
Histogram: samples observations (e.g. request durations or response sizes), groups them into configurable buckets and keeps a total; quantiles are computed afterwards from the bucketed data.
Summary: similar to a histogram and also used for sampled observations over time, but it stores the quantiles directly instead of computing them from buckets.
PromQL basics
PromQL (Prometheus Query Language) is Prometheus's own data-query DSL. It is highly expressive with many built-in functions, and it is used in day-to-day visualization as well as in alerting rules. It is covered in later chapters.
See the documentation for more details.
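As a small illustration (my own sketch, not part of the original notes), a PromQL expression can be sent to the Prometheus HTTP API once the server deployed below is reachable; the host and port assume the ingress/NodePort configured later in this article:
#Instant query: how many scrape targets are currently up
curl -s 'http://prometheus.com:32601/api/v1/query' --data-urlencode 'query=sum(up)'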
3. Prometheus components
1) Prometheus Server: collects and stores time series data.
2) Client Library: instruments application code; when Prometheus scrapes an instance's HTTP endpoint, the client library returns the current state of all tracked metrics to the prometheus server.
3) Exporters: Prometheus supports many exporters; an exporter collects metrics and exposes them to the prometheus server, and any program that provides monitoring data to the server can be called an exporter.
4) Alertmanager: receives alerts from the Prometheus server, deduplicates and groups them, and routes them to the configured receivers; common receivers include email, WeChat, DingTalk and Slack.
5) Grafana: monitoring dashboards for visualizing the data.
6) Pushgateway: target hosts can push data to the pushgateway, and the prometheus server then pulls everything from it.
Deploying Prometheus with helm
Search for prometheus in the official chart hub, then add the repository and install as instructed:
https://artifacthub.io/
- Add the repository
[root@k8s-master prometheus]# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
[root@k8s-master prometheus]# helm repo update #Update the repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@k8s-master prometheus]# helm search repo prometheus
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/kube-prometheus 6.1.8 0.50.0 kube-prometheus collects Kubernetes manifests t...
bitnami/prometheus-operator 0.31.1 0.41.0 DEPRECATED The Prometheus Operator for Kubernet...
bitnami/wavefront-prometheus-storage-adapter 1.0.7 1.0.3 Wavefront Storage Adapter is a Prometheus integ...
prometheus-community/kube-prometheus-stack 18.0.10 0.50.0 kube-prometheus-stack collects Kubernetes manif...
prometheus-community/prometheus 14.6.1 2.26.0 Prometheus is a monitoring system and time seri...
prometheus-community/prometheus-adapter 2.17.0 v0.9.0 A Helm chart for k8s prometheus adapter
prometheus-community/prometheus-blackbox-exporter 5.0.3 0.19.0 Prometheus Blackbox Exporter
[root@k8s-master prometheus]# helm show readme prometheus-community/prometheus #Before installing, check whether the bundled components fit your needs; this chart includes a fairly complete set
[root@k8s-master prometheus]# kubectl create ns monitor
namespace/monitor created
- Download the chart package and edit values.yaml to suit your needs
[root@k8s-master prometheus]# helm pull prometheus-community/prometheus
[root@k8s-master prometheus]# ls
prometheus-14.6.1.tgz
[root@k8s-master prometheus]# tar -xf prometheus-14.6.1.tgz
[root@k8s-master prometheus]# cd prometheus/
[root@k8s-master prometheus]# ls
Chart.lock charts Chart.yaml README.md templates values.yaml
- Disable the PVC settings for the prometheus-server and prometheus-alertmanager sections; in a production environment you should provision the persistent volumes first
[root@k8s-master prometheus]# vim values.yaml
persistentVolume:
## If true, alertmanager will create/use a Persistent Volume Claim
## If false, use emptyDir
##
enabled: false #change this to false
#Configure the prometheus and alertmanager ingress hosts
ingress:
## If true, pushgateway Ingress will be created
##
enabled: true
ingressClassName: nginx #when multiple ingress controllers exist, explicitly use ingress-nginx
hosts:
- prometheus.com
...
hosts:
- alertmanager.com
- Install the chart
[root@k8s-master prometheus]# helm install prometheus prometheus -n monitor
NAME: prometheus
LAST DEPLOYED: Sat Sep 18 15:40:47 2021
NAMESPACE: monitor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.monitor.svc.cluster.local
[root@k8s-master prometheus]# kubectl get pod -n monitor
NAME READY STATUS RESTARTS AGE
prometheus-alertmanager-769488c787-h9s7z 2/2 Running 0 2d21h
prometheus-kube-state-metrics-68b6c8b5c5-fgqjg 1/1 Running 0 2d21h
prometheus-node-exporter-hfw4c 1/1 Running 0 2d21h
prometheus-node-exporter-rzjzj 1/1 Running 0 2d21h
prometheus-node-exporter-vhr9p 1/1 Running 0 2d21h
prometheus-pushgateway-8655bf87b9-xwzjx 1/1 Running 0 2d21h
prometheus-server-7df4f9b485-7pz8j 2/2 Running 0 2d21h
[root@k8s-master prometheus]# kubectl get svc -n monitor
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-alertmanager ClusterIP 10.96.135.142 <none> 80/TCP 2d21h
prometheus-kube-state-metrics ClusterIP 10.96.153.83 <none> 8080/TCP 2d21h
prometheus-node-exporter ClusterIP None <none> 9100/TCP 2d21h
prometheus-pushgateway ClusterIP 10.109.40.211 <none> 9091/TCP 2d21h
prometheus-server ClusterIP 10.104.231.248 <none> 80/TCP 2d21h
[root@k8s-master prometheus]# kubectl get ingress -n monitor
NAME CLASS HOSTS ADDRESS PORTS AGE
prometheus-alertmanager nginx alertmanager.com 192.168.103.211 80 2d21h
prometheus-server nginx prometheus.com 192.168.103.211 80 2d21h
[root@k8s-master prometheus]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.108.78.161 <none> 80:32601/TCP,443:31371/TCP 85d
ingress-nginx-controller-admission ClusterIP 10.101.146.205 <none> 443/TCP 85d
- On the machine used for access, add a hosts entry
192.168.103.211 prometheus.com
- Open a browser and visit prometheus
http://prometheus.com:32601
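If editing the hosts file or going through the NodePort is inconvenient, a port-forward to the chart's server Service works as well (my own alternative; the service name matches the one created above):
#Forward the prometheus-server service to localhost:9090, then browse to http://localhost:9090
kubectl -n monitor port-forward svc/prometheus-server 9090:80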
CentOS deploying a Kubernetes 1.13 cluster - 1 (installing K8S with kubeadm)
Reference: https://www.kubernetes.org.cn/4956.html
1. Preparation
Note: the preparation steps must be performed on every host in the cluster.
1.1 System configuration
Before installing, do the following preparation. The three CentOS hosts are:
cat /etc/hosts
192.168.0.19 tf-01
192.168.0.20 tf-02
192.168.0.21 tf-03
If the hosts have a firewall enabled, the ports required by the Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on each node here:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
1.2 Prerequisites for enabling ipvs in kube-proxy
Since ipvs has been merged into the mainline kernel, enabling ipvs for kube-proxy requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on all Kubernetes nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot.
Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules were loaded correctly.
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Next, make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect ipvs proxy rules, it is also a good idea to install the management tool ipvsadm (yum install ipvsadm).
yum install ipset
yum install ipvsadm
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
1.3 Install Docker
Since 1.6, Kubernetes uses the CRI (Container Runtime Interface). The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
Install the docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Check the available Docker versions:
yum list docker-ce.x86_64 --showduplicates |sort -r
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
Kubernetes 1.12 was validated against Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06; note that the minimum Docker version supported by Kubernetes 1.12 is 1.11.1. Kubernetes 1.13 has not changed its Docker version requirements. Here we install Docker 18.06.1 on each node.
yum makecache fast
yum install -y --setopt=obsoletes=0 \
docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Starting with 1.13, Docker changed its default firewall rules and disabled the FORWARD chain in the iptables filter table, which breaks cross-node Pod communication in a Kubernetes cluster. With Docker 18.06 installed here, the default policy is back to ACCEPT; it is unclear in which version this was changed back, because the 17.06 version we run in production still needs this policy adjusted manually.
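For older Docker versions that still set the FORWARD policy to DROP, a commonly used workaround (my own sketch, not from the original article) is to force it back to ACCEPT and persist that across docker restarts with a systemd drop-in:
#One-off fix
iptables -P FORWARD ACCEPT
#Persist it: re-apply after the docker service starts
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf <<EOF
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload && systemctl restart docker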
2. Deploying Kubernetes with kubeadm
2.1 Install kubeadm and kubelet
Install kubeadm and kubelet on each node:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note: this differs from the referenced original, which used Google's repository; since Google is unreachable, it is changed to Aliyun here, and GPG checking is disabled (gpgcheck=0).
yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
kubeadm.x86_64 0:1.13.0-0 kubectl.x86_64 0:1.13.0-0 kubelet.x86_64 0:1.13.0-0
Dependency Installed:
cri-tools.x86_64 0:1.12.0-0 kubernetes-cni.x86_64 0:0.6.0-0 socat.x86_64 0:1.7.3.2-2.el7
The install output shows three dependencies were also installed: cri-tools, kubernetes-cni and socat:
- Upstream bumped the cni dependency to 0.6.0 in Kubernetes 1.9, and it is still that version in the current 1.12
- socat is a dependency of the kubelet
- cri-tools is the command-line tool for the CRI (Container Runtime Interface)
Running kubelet --help shows that most of the kubelet's command-line flags are now DEPRECATED, for example:
......
--address 0.0.0.0 The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
Upstream instead recommends using --config to point at a configuration file and setting the former flags there; see "Set Kubelet parameters via a config file". Kubernetes did this to support Dynamic Kubelet Configuration; see "Reconfigure a Node's Kubelet in a Live Cluster".
The kubelet configuration file must be in JSON or YAML format; see the documentation for details.
Starting with 1.8, Kubernetes requires system swap to be turned off; with the default configuration the kubelet will not start otherwise. (If swap cannot be turned off, the kubelet configuration must be changed, see below.)
Disable system swap as follows:
swapoff -a
Edit /etc/fstab to comment out the automatic swap mount, then use free -m to confirm swap is off.
vim /etc/fstab
free -m
Adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
sysctl -p /etc/sysctl.d/k8s.conf
If the cluster hosts also run other services, turning off swap may affect them; in that case, change the kubelet configuration to drop this requirement:
Use the kubelet startup flag --fail-swap-on=false to remove the mandatory-swap-off restriction. Edit /etc/sysconfig/kubelet and add KUBELET_EXTRA_ARGS=--fail-swap-on=false
KUBELET_EXTRA_ARGS=--fail-swap-on=false
2.2 Initialize the cluster with kubeadm init
Enable the kubelet service to start on boot on each node:
systemctl enable kubelet.service
centos7: installing and deploying kubernetes 1.14 with kubeadm
Background:
As of this writing, the widely used kubernetes has reached version 1.14; this article records the installation steps and the troubleshooting done along the way.
There are generally two ways to deploy k8s: kubeadm (officially GA and usable in production) and a binary installation (rather tedious).
Here kubeadm is used for a test deployment.
Test environment:
System | Hostname | IP |
CentOS 7.6 | k8s-master | 138.138.82.14 |
CentOS 7.6 | k8s-node1 | 138.138.82.15 |
CentOS 7.6 | k8s-node2 | 138.138.82.16 |
Network plugin: calico
Steps:
1. Environment setup (on all hosts)
Stop firewalld:
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux:
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
Disable swap:
swapoff -a && sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
Use the Aliyun yum repository:
wget -O /etc/yum.repos.d/CentOS7-Aliyun.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Update /etc/hosts: add the IPs and hostnames of all k8s nodes to this file on every host, otherwise warnings or even errors will appear during initialization.
2. Install the docker engine (on all hosts)
Install the Aliyun docker repository:
wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install docker:
yum install docker-ce -y
Start docker:
systemctl enable docker && systemctl start docker
Adjust some docker parameters (use the Aliyun registry mirror, and switch the cgroup driver to systemd as k8s recommends; with the default cgroupfs, init prints a warning):
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
Check the docker Cgroup Driver:
[root@k8s-master ~]# docker info |grep Cgroup
Cgroup Driver: systemd
3. Install the kubernetes bootstrap tools (on all hosts)
Use the Aliyun kubernetes repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the tools: yum install -y kubelet kubeadm kubectl (the latest version at this time is 1.14.1)
Start kubelet: systemctl enable kubelet && systemctl start kubelet (it is normal for it to fail to start at this point; it will come up during initialization later)
4. Pre-download the required images (on the master node)
Check the images and versions required for cluster initialization:
[root@k8s-master ~]# kubeadm config images list
……
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
Because these important images are blocked, they must be downloaded separately in advance before the cluster can be initialized.
Download script:


#!/bin/bash
set -e
KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done
5. Initialize the cluster (on the master node)
kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16
Note: a network plugin will be installed after initialization; calico is chosen here, so --pod-network-cidr=192.168.0.0/16 is used to match it.
Sample initialization output:


[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 138.138.82.14]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002739 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 57iu95.6narx7y8peauts76
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
--discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4
The output above shows the initialization succeeded, and it lists the necessary next steps and the command for nodes to join the cluster; just follow them.
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the pods that are already running:
[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-6mgks 0/1 Pending 0 9m6s <none> <none> <none> <none>
coredns-fb8b8dccf-cbtlx 0/1 Pending 0 9m6s <none> <none> <none> <none>
etcd-k8s-master 1/1 Running 0 8m22s 138.138.82.14 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 8m19s 138.138.82.14 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 8m30s 138.138.82.14 k8s-master <none> <none>
kube-proxy-c9xd2 1/1 Running 0 9m7s 138.138.82.14 k8s-master <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 8m6s 138.138.82.14 k8s-master <none> <none>
At this point everything is running except coredns, which is not yet ready; this is normal because there is no network plugin yet. After installing calico below it changes to Running.
6. Install calico (on the master node)
Calico documentation: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/
kubectl apply -f \
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
After applying the official yaml, check again after a little while: all pods are in the Running state and have been assigned IPs:
[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-node-r5mlj 1/1 Running 0 72s 138.138.82.14 k8s-master <none> <none>
coredns-fb8b8dccf-6mgks 1/1 Running 0 15m 192.168.0.7 k8s-master <none> <none>
coredns-fb8b8dccf-cbtlx 1/1 Running 0 15m 192.168.0.6 k8s-master <none> <none>
etcd-k8s-master 1/1 Running 0 15m 138.138.82.14 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 15m 138.138.82.14 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 15m 138.138.82.14 k8s-master <none> <none>
kube-proxy-c9xd2 1/1 Running 0 15m 138.138.82.14 k8s-master <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 14m 138.138.82.14 k8s-master <none> <none>
Check the node status:
[root@k8s-master ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 22m v1.14.1 138.138.82.14 <none> CentOS Linux 7 (Core) 3.10.0-957.10.1.el7.x86_64 docker://18.9.5
At this point the cluster initialization and the master node are ready; next, join the other worker nodes to the cluster.
7. Join the cluster (on the non-master nodes)
First download the required images on the nodes that will join; the download script is as follows:


#!/bin/bash
set -e
KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy-amd64:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION})
for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done
Then take the join command from the master's initialization output and run it on each worker node:
[root@k8s-node1 ~]# kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
> --discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
8. Check the status of each node from the master node
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 26m v1.14.1
k8s-node1 Ready <none> 84s v1.14.1
k8s-node2 Ready <none> 74s v1.14.1
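As a final smoke test of the new cluster (my own addition, not in the original article), a throwaway nginx deployment can be created and exposed via a NodePort:
#Create and expose a test deployment
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
#Clean up afterwards
kubectl delete svc,deployment nginx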
At this point, a minimal cluster has been deployed.
Next comes deploying additional add-ons.
Next article: the calico client tool calicoctl
End.
This concludes the introduction to installing and deploying k8s (kubernetes) on Linux, the pitfalls encountered, and installing k8s on CentOS. Thank you for your patient reading. To learn more about the 2021 Kubernetes (k8s) cluster installation and deployment, the Kubernetes (k8s) Prometheus deployment notes, deploying a Kubernetes 1.13 cluster on CentOS with kubeadm, or installing and deploying kubernetes 1.14 on centos7 with kubeadm, please search this site.