
CentOS 7: Installing Kubernetes v1.11.2 from Binaries (running binaries on CentOS)



This article explains how to install Kubernetes v1.11.2 on CentOS 7 from binary files. It is followed by related write-ups: "CentOS: Installing Kubernetes from Binaries", "CentOS: Deploying a Kubernetes 1.13 Cluster from Binaries", "CentOS 7 (minimal): Installing a Kubernetes Cluster with kubeadm", and "CentOS 7: Installing Kubernetes 1.13 from Binaries".


CentOS 7: Installing Kubernetes v1.11.2 from Binaries

 

The deployment below uses a single-node master and does not cover certificates/TLS.

 

I. Environment Preparation

1. Hardware environment

192.168.56.10 k8s-m1
192.168.56.11 k8s-n1
192.168.56.12 k8s-n2
192.168.56.13 k8s-n3
The master node needs at least 1 GB of memory and worker nodes at least 768 MB.
The kernel version must be greater than 3.10.0.
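A quick way to confirm these requirements on each node (a minimal sketch; the thresholds are the ones stated above):

# run on every node
uname -r                                     # kernel must be newer than 3.10.0
free -m | awk '/^Mem:/{print $2" MB RAM"}'   # >= 1024 MB on the master, >= 768 MB on worker nodes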

2. System environment

2.1 Add firewall rules so all nodes can reach each other, and disable SELinux

Permanently add a firewalld rule on CentOS 7:
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -s 192.168.56.0/24  -j ACCEPT
firewall-cmd --reload

[root@k8s-n1 ~]# cat /etc/selinux/config |grep SELINUX
SELINUX=disabled

2.2 Configure hostname resolution

192.168.56.10 k8s-m1
192.168.56.11 k8s-n1
192.168.56.12 k8s-n2
192.168.56.13 k8s-n3
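These entries can simply be appended to /etc/hosts on every node, for example:

cat >> /etc/hosts <<EOF
192.168.56.10 k8s-m1
192.168.56.11 k8s-n1
192.168.56.12 k8s-n2
192.168.56.13 k8s-n3
EOF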

2.3 Set kernel parameters for Kubernetes on all nodes

[root@k8s-m1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@k8s-m1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

2.4 Disable swap (if you keep it enabled, you must adjust the kubelet flags). Also comment out the swap mount in /etc/fstab.

$ swapoff -a && sysctl -w vm.swappiness=0
Worker nodes may keep swap enabled (see the kubelet --fail-swap-on flag later).

 

3. Software versions

Kubernetes v1.11.2
etcd-v3.2.9
flannel-v0.10.0
docker://18.6.1 or docker://18.3.1

 

II. Installing Docker

On CentOS, two ways of installing Docker are described here; either one works.

1. Installing Docker via yum

Installation steps for the latest Docker CE (current version as of 2018-09-10: Docker version 18.06.1-ce, build e68fc7a)

# Configure the yum repository
yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo

# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --enable docker-ce-edge

# Install the latest Docker CE
yum makecache fast;  yum -y install docker-ce

# Enable and start
systemctl enable docker; systemctl start docker

# Test
docker run hello-world

2. Installing Docker from binaries

2.1 Download the binaries

wget  https://download.docker.com/linux/static/stable/x86_64/docker-18.06.1-ce.tgz
tar -xvf docker-18.06.1-ce.tgz
cp docker/docker* /usr/bin

2.2 systemd unit file

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

2.3 Start the service

systemctl daemon-reload
systemctl enable docker
systemctl start docker

 

III. Downloading the Kubernetes Binaries
Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md
Download link: wget https://dl.k8s.io/v1.11.2/kubernetes-server-linux-amd64.tar.gz
 
IV. Deploying Kubernetes
1. Components
Components running on the master node:
kube-apiserver
kube-scheduler
kube-controller-manager
etcd
flannel
docker

Components running on worker nodes:

flanneld
docker
kubelet
kube-proxy

 

2. Deploying the master node
2.1 Approach
When deploying from binaries on CentOS 7, every component follows the same 4 steps:
1) Copy the corresponding binary into /usr/bin/
2) Create a systemd service unit file for it
3) Create the configuration file referenced by that unit file
4) Enable the service to start at boot
2.2 Installing the etcd database
# Download the etcd release
wget https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
tar xf etcd-v3.2.9-linux-amd64.tar.gz
cd etcd-v3.2.9-linux-amd64/
cp etcd etcdctl /usr/bin/

# Create the etcd.service unit file
[root@k8s-m1 ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf  
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Create the configuration file /etc/etcd/etcd.conf
[root@k8s-m1 ~]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"


# Enable at boot and start
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service

# Check that the installation succeeded
[root@k8s-m1 ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
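Since flannel will later store its network configuration in etcd through the v2 API, a quick set/get round trip is another useful sanity check (a minimal sketch; the key name /test is arbitrary):

[root@k8s-m1 ~]# etcdctl set /test ok
ok
[root@k8s-m1 ~]# etcdctl get /test
ok
[root@k8s-m1 ~]# etcdctl rm /test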

2.3 Create a shared Kubernetes configuration file

[root@k8s-m1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=1"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.10:8080"

2.4 The kube-apiserver service

# Copy the binaries
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/bin/

# Unit file
[root@k8s-m1 ~]# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Configuration file
[root@k8s-m1 ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.10:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=100.0.100.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

2.5 The kube-controller-manager service

# Unit file
[root@k8s-m1 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Configuration file
[root@k8s-m1 ~]# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=" "

2.6 The kube-scheduler service

# Unit file
[root@k8s-m1 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Configuration file
[root@k8s-m1 ~]# cat /etc/kubernetes/scheduler
#KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"

2.7 Enable all components at boot and start them

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

2.8 Verify the master node

[root@k8s-m1 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
controller-manager   Healthy   ok
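Because the apiserver was started with the insecure port 8080 (see /etc/kubernetes/apiserver above), its health endpoint can also be probed directly; a minimal sketch:

[root@k8s-m1 ~]# curl http://192.168.56.10:8080/healthz
ok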

3. Installing binaries on the worker nodes

3.1 First create a shared configuration file

[root@k8s-m1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.10:8080"

3.2 The kubelet component

# kubeconfig file
[root@k8s-n1 ~]# cat /etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://192.168.56.10:8080
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local

# Configuration file
[root@k8s-n1 ~]# cat /etc/kubernetes/kubelet
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=0"
# Kubelet bind address (this node's IP)
NODE_ADDRESS="--address=192.168.56.11"
# Kubelet port
NODE_PORT="--port=10250"
# Node name override (this node's IP)
NODE_HOSTNAME="--hostname-override=192.168.56.11"
# Path of the kubeconfig used to connect to the API server
KUBELET_KUBECONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
# Allow containers to request privileged mode, default false
KUBE_ALLOW_PRIV="--allow-privileged=false"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"
# Cluster DNS settings
KUBELET_DNS_IP="--cluster-dns=192.168.12.19"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# Do not fail when swap is enabled
KUBELET_SWAP="--fail-swap-on=false"

# Unit file
[root@k8s-n1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
${KUBE_LOGTOSTDERR} \
${KUBE_LOG_LEVEL} \
${NODE_ADDRESS} \
${NODE_PORT} \
${NODE_HOSTNAME} \
${KUBELET_KUBECONFIG} \
${KUBE_ALLOW_PRIV} \
${KUBELET_DNS_IP} \
${KUBELET_DNS_DOMAIN} \
${KUBELET_SWAP}
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target

3.3 Start the kubelet

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service

3.4 The kube-proxy service

# Configuration file
[root@k8s-n1 ~]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""

# Unit file
[root@k8s-n1 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3.5 Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

3.6 Check node status

# Run the check on the master
[root@k8s-m1 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.56.11   Ready     <none>    1d        v1.11.2

3.7 Deploying the flannel network

Download the component, version flannel-v0.10.0

mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel
cp flannel/{flanneld,mk-docker-opts.sh} /usr/bin

3.8 Service configuration

# On the master node, write the pod network range into etcd
[root@k8s-m1 ~]# etcdctl set /k8s/network/config '{"Network": "100.0.0.0/16"}'
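To confirm the key landed, it can be read back from the same etcd endpoint (a minimal check):

[root@k8s-m1 ~]# etcdctl --endpoints=http://192.168.56.10:2379 get /k8s/network/config
{"Network": "100.0.0.0/16"}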

# Configuration file
[root@k8s-n1 flannel-v0.10.0]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.56.10:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

# Unit file
[root@k8s-n1 flannel-v0.10.0]# cat  /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/flanneld
#EnvironmentFile=-/etc/sysconfig/docker-network
#ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStart=/usr/bin/flanneld  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS}  -etcd-prefix=${FLANNEL_ETCD_PREFIX}
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service

# Start the flannel service
systemctl daemon-reload
systemctl enable flanneld.service
systemctl start flanneld.service


# Add the flannel-generated options to the docker unit file
[root@k8s-n1 flannel-v0.10.0]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS   # append $DOCKER_NETWORK_OPTIONS to ExecStart
EnvironmentFile=/run/flannel/docker                  # add this EnvironmentFile line

3.9 Restart the services

systemctl daemon-reload
systemctl restart docker

3.10 Testing

3.10.1 Flannel test

1) An additional flannel0 interface appears on the host.
2) Run a busybox container on both the master and a node and ping one from the other; if the ping succeeds, the deployment works.
[root@k8s-m1 ~]# docker run -it busybox sh
/ # ping 100.0.100.2
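On each node you can also check which subnet flannel was assigned and that docker picked it up (a minimal sketch; the file below is the one written by mk-docker-opts.sh in the unit above):

ip addr show flannel0        # the flannel tunnel interface and its subnet
cat /run/flannel/docker      # DOCKER_NETWORK_OPTIONS generated for dockerd
ip addr show docker0         # the docker bridge should sit inside the flannel subnet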

3.10.2 Kubernetes test

# Test on the master node
[root@k8s-m1 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.56.11   Ready     <none>    1d        v1.11.2
[root@k8s-m1 ~]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-phv8v   1/1       Running   0          36s
[root@k8s-m1 ~]# kubectl get deployment
No resources found.
# Create a deployment
[root@k8s-m1 ~]# kubectl run nginx --image=nginx --replicas=2
deployment.apps/nginx created
[root@k8s-m1 ~]# kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     2         2         2            1           4s
[root@k8s-m1 ~]# kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
nginx-64f497f8fd-75fd7   1/1       Running   0          1m
nginx-64f497f8fd-ldv7s   1/1       Running   0          1m

# On the node you can also see that the corresponding containers have started
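To exercise the service layer as well, the nginx deployment can be exposed and its cluster IP curled from any node (a sketch; the service IP is allocated from the 100.0.100.0/16 range configured in the apiserver):

kubectl expose deployment nginx --port=80
kubectl get svc nginx        # note the CLUSTER-IP, then from any node: curl http://<CLUSTER-IP>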

 

 

 

 

 


CentOS: Installing Kubernetes from Binaries

Preface

I recently deployed an automated operations platform on a private cloud;

Kubernetes is the centerpiece of it, and this article shares the binary installation approach.

Kubernetes architecture diagram

Installation

  • Base environment

    • CentOS version 7.9.2009
    • Etcd version 3.4.14
    • Docker
    • Kubernetes version 1.17.16
  • Installing the kube-apiserver service

    • Download and unpack
cd /soft
wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
  • Create the startup configuration file
mkdir /soft/kubernetes/server/conf/
vim /soft/kubernetes/server/conf/apiserver
  
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8886"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.9.0.46:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=169.169.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_LOG="--logtostderr=false --log-dir=/home/k8s-t/log/kubernets --v=2"
KUBE_API_ARGS=" "
  • Create the service unit file
vim /etc/systemd/system/kube-apiserver.service
  
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service
  
[Service]
EnvironmentFile=/soft/kubernetes/server/conf/apiserver
ExecStart=/soft/kubernetes/server/bin/kube-apiserver  \
          $KUBE_ETCD_SERVERS \
          $KUBE_API_ADDRESS \
          $KUBE_API_PORT \
          $KUBE_SERVICE_ADDRESSES \
          $KUBE_ADMISSION_CONTROL \
        $KUBE_API_LOG \
          $KUBE_API_ARGS 
Restart=on-failure
Type=notify
LimitNOFILE=65536
  
[Install]
WantedBy=multi-user.target
  • Installing the kube-controller-manager service
  • Create the startup configuration file
vim /soft/kubernetes/server/conf/controller-manager

KUBE_MASTER="--master=http://10.9.0.46:8886"
KUBE_CONTROLLER_MANAGER_ARGS=" "
  • Create the service unit file
vim /etc/systemd/system/kube-controller-manager.service
  
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service 
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/soft/kubernetes/server/conf/controller-manager
ExecStart=/soft/kubernetes/server/bin/kube-controller-manager \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Installing the kube-scheduler service
  • Create the startup configuration file
vim /soft/kubernetes/server/conf/scheduler

KUBE_MASTER="--master=http://10.9.0.46:8886"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"
  • Create the service unit file
vim /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service 
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/soft/kubernetes/server/conf/scheduler
ExecStart=/soft/kubernetes/server/bin/kube-scheduler \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target 
  • Start all the components
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service

systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service

systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
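After the three components are up, they can be checked against the insecure apiserver port configured above (a minimal sketch; 8886 and the /soft paths are the values used in this article):

/soft/kubernetes/server/bin/kubectl -s http://10.9.0.46:8886 get componentstatuses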

Summary

  • Kubernetes changes the way projects are operated: releases become much simpler while project risk drops;

  • Practice hands-on as much as you can.


CentOS: Deploying a Kubernetes 1.13 Cluster from Binaries

I. Overview

Kubernetes 1.13 has been released, the fourth and final release of 2018. It is one of the shortest release cycles to date (ten weeks after the previous version) and focuses on stability and extensibility, with three major features around storage and cluster lifecycle graduating to general availability.

The core features of Kubernetes 1.13 include: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS.

Simplified cluster management with kubeadm

Most people who work with Kubernetes have used kubeadm at some point. It is an essential tool for managing the cluster lifecycle, from creation through configuration to upgrade. With the 1.13 release, kubeadm graduates to GA and is generally available. kubeadm handles bootstrapping production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure yet simple join flow for new nodes and easy upgrades.

The most notable part of this GA release is the graduated advanced functionality, in particular pluggability and configurability. kubeadm aims to be a toolbox for both administrators and higher-level automation systems, and this is an important step in that direction.

Container Storage Interface (CSI)

The Container Storage Interface was first introduced as an alpha feature in 1.9 and went to beta in 1.10; it now reaches GA. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage providers can write code that interoperates with Kubernetes without touching any Kubernetes core code. The specification itself has also reached 1.0.

With CSI now stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.

CoreDNS becomes the default DNS server for Kubernetes

In 1.11 the team announced that CoreDNS had reached general availability for DNS-based service discovery. In 1.13, CoreDNS officially replaces kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides backward-compatible and extensible integration with Kubernetes. Because CoreDNS is a single executable running as a single process, it has fewer moving parts than the previous DNS server and supports flexible use cases through custom DNS entries. Written in Go, it also has strong memory safety.

CoreDNS is the recommended DNS solution for Kubernetes 1.13 and later. The common test infrastructure has been switched to use CoreDNS by default, and the team recommends users switch as well. kube-dns will still be supported for at least one more release, but now is the time to start planning the migration. Many OSS installers, including kubeadm since 1.11, have already made the switch.

1. Environment preparation

Deployment nodes

IP address      Hostname  CPU  Memory  Disk
192.168.4.100   master    1C   1G      40G
192.168.4.21    node      1C   1G      40G
192.168.4.56    node1     1C   1G      40G

Kubernetes installation package download

Link: https://pan.baidu.com/s/1wO6T7byhaJYBuu2JlhZvkQ
Extraction code: pm9u

Deployment network notes

2. Architecture diagrams

Kubernetes architecture diagram

Flannel network architecture diagram

  • After data leaves the source container, it is forwarded from the host's docker0 virtual interface to the flannel0 virtual interface. flannel0 is a point-to-point virtual device, with the flanneld service listening on the other end.
  • Flannel maintains a routing table between nodes in etcd; its contents are covered in the configuration section later (see the inspection sketch below).
  • The flanneld service on the source host encapsulates the original payload in UDP and, based on its routing table, delivers it to the flanneld service on the destination node. There it is unpacked and handed to the destination's flannel0 interface, then forwarded to the destination host's docker0 interface, and finally routed by docker0 to the target container just like local container traffic.
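Once flannel is running (section 3 below), the per-node subnet leases that make up this routing table can be listed straight from etcd; a minimal sketch using the certificates and endpoint configured later in this article:

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem \
  --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.100:2379" \
  ls /coreos.com/network/subnets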

3. Kubernetes workflow


Description of the cluster's functional modules:

Master node:
The master node consists mainly of four modules: APIServer, scheduler, controller-manager, and etcd.

APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands; any create, read, update, or delete of a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool provided by Kubernetes, which internally calls the Kubernetes API) talks directly to the APIServer.

scheduler: the scheduler places Pods onto suitable Nodes. Viewed as a black box, its input is a Pod plus a list of Nodes, and its output is a binding between the Pod and one Node. Kubernetes ships with a scheduling algorithm and also keeps the interface open so users can plug in their own scheduling algorithms.

controller manager: if the APIServer does the front-office work, the controller manager handles the back office. Every resource has a corresponding controller, and the controller manager is responsible for running them. For example, when we create a Pod through the APIServer, the APIServer's job is done once the object is created; the controllers then drive it toward the desired state.

etcd: etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource, which is what the RESTful API is built on.

Node nodes:
Each worker node consists mainly of the kubelet and kube-proxy modules (plus Docker as the container runtime).

kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. It supports TCP and UDP connection forwarding and by default distributes client traffic across a service's backend pods using a round-robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to service and endpoint objects and maintains a service-to-endpoint mapping, so that backend pod IP changes are invisible to clients; it also supports session affinity.

kubelet: the kubelet is the master's agent on each node and the most important module on the node. It maintains and manages all containers on that node (containers not created through Kubernetes are not managed). In essence, it makes the actual Pod state converge to the desired state.
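Every kubectl operation is ultimately just a call against the apiserver's REST API, which can be made visible directly (a minimal sketch; run wherever kubectl is already configured):

kubectl get --raw /api/v1/nodes | head -c 300   # the same REST call kubectl issues internally for 'kubectl get nodes'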

II. Kubernetes Installation and Configuration

1. Initialize the environment

1.1 Stop the firewall and disable SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

1.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0

1.3 Set the kernel parameters Docker needs

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

1.4 Install Docker

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

1.5 Create the installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

1.6 Install and configure CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

1.7 Create the certificates

Create the etcd certificates

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Create the etcd CA configuration file

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Create the etcd server certificate

cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.4.100",
    "192.168.4.21",
    "192.168.4.56"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Generate the etcd CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Create the Kubernetes CA certificate

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the API server certificate

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.4.100",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Create the Kubernetes proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
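Before distributing them, the generated certificates can be inspected to confirm the hosts and profiles took effect (a minimal sketch using the cfssl-certinfo binary installed in step 1.6):

cfssl-certinfo -cert server.pem
cfssl-certinfo -cert kube-proxy.pem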

 

1.8 SSH key authentication

# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FQjjiRDp8IKGT+UDM+GbQLBzF3DqDJ+pKnMIcHGyO/o root@qas-k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|o.==o o. ..      |
|ooB+o+ o.  .     |
|B++@o o   .      |
|=X**o    .       |
|o=O. .  S        |
|..+              |
|oo .             |
|* .              |
|o+E              |
+----[SHA256]-----+

# Copy the SSH key to the target hosts to enable passwordless SSH login
# ssh-copy-id 192.168.4.21
# ssh-copy-id 192.168.4.56

 

2. Deploy etcd

Unpack the installation files

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vim /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit file

vim /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificate files

cp ca*pem server*pem  /k8s/etcd/ssl

Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Copy the unit file and configuration files to node 1 and node 2

cd /k8s/ 
scp -r etcd 192.168.4.21:/k8s/
scp -r etcd 192.168.4.56:/k8s/
scp /usr/lib/systemd/system/etcd.service  192.168.4.21:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service  192.168.4.56:/usr/lib/systemd/system/etcd.service 

#-- Node 1
vim /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-- Node 2
vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.56:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.56:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.56:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.56:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Verify that the cluster is running properly

[root@master ~]# cd /k8s/etcd/bin/
[root@master bin]# ./etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.100:2379,\
> https://192.168.4.21:2379,\
> https://192.168.4.56:2379" cluster-health

member 2345cdd5020eb294 is healthy: got healthy result from https://192.168.4.100:2379
member 91d74712f79e544f is healthy: got healthy result from https://192.168.4.21:2379
member b313b7e8d0a528cc is healthy: got healthy result from https://192.168.4.56:2379
cluster is healthy


Note:
Start at least two nodes of the etcd cluster at the same time; a single node cannot bring the cluster up on its own (it will sit in an activating state).

 

3. Deploy the Flannel network

Write the cluster Pod network range into etcd

cd /k8s/etcd/ssl/

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379,\
https://192.168.4.21:2379,https://192.168.4.56:2379" \
set /coreos.com/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
  • flanneld at this version (v0.10.0) does not support etcd v3, so the configuration key and network range are written with the etcd v2 API (see the read-back sketch below);
  • the Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager;
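The written key can be read back with the same TLS options to confirm it is in place (a minimal check, run from /k8s/etcd/ssl/ as above):

/k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.4.100:2379" get /coreos.com/network/config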

Unpack and install

tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

Configure Flannel

vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the flanneld systemd unit file

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; when Docker starts later, it uses the environment variables in this file to configure the docker0 bridge (see the example below);
  • flanneld uses the interface of the system's default route to talk to other nodes; on nodes with multiple interfaces (e.g. internal and public), the -iface parameter can be used to select the interface, such as eth0;
  • flanneld needs to run as root;
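For reference, after flanneld starts, /run/flannel/subnet.env typically contains variables along these lines (illustrative values only; the subnet is assigned per node, cf. the ip a output further down):

# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.18.0.0/16
FLANNEL_SUBNET=172.18.58.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true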

Configure Docker to start with the assigned subnet

vim /usr/lib/systemd/system/docker.service 

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Copy the flanneld systemd unit file and configuration to all nodes

cd /k8s/
scp -r kubernetes 192.168.4.21:/k8s/
scp -r kubernetes 192.168.4.56:/k8s/
scp /k8s/kubernetes/cfg/flanneld 192.168.4.21:/k8s/kubernetes/cfg/flanneld
scp /k8s/kubernetes/cfg/flanneld 192.168.4.56:/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service  192.168.4.21:/usr/lib/systemd/system/docker.service 
scp /usr/lib/systemd/system/docker.service  192.168.4.56:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service  192.168.4.21:/usr/lib/systemd/system/flanneld.service 
scp /usr/lib/systemd/system/flanneld.service  192.168.4.56:/usr/lib/systemd/system/flanneld.service 

# Start the services
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Check that it took effect

[root@node ssl]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:99:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.21/16 brd 192.168.255.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::93dc:dfaf:2ddf:1aa9/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:5a:29:34:85 brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.1/24 brd 172.18.58.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 16:6e:22:47:d0:cd brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever

4. Deploy the master node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in clustered mode: one working process is chosen through leader election while the other processes stay blocked.

Unpack the binaries and copy them onto the master node

tar -xvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

Copy the certificates

cp *pem /k8s/kubernetes/ssl/

Deploy the kube-apiserver component

Create the TLS bootstrapping token

[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
91af09d8720f467def95b65704862025

[root@master ~]# cat /k8s/kubernetes/cfg/token.csv 
91af09d8720f467def95b65704862025,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Create the apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 \
--bind-address=192.168.4.100 \
--secure-port=6443 \
--advertise-address=192.168.4.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the kube-apiserver systemd unit file

vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Check that the apiserver is running

[root@master ~]# ps -ef |grep kube-apiserver
root      90572 118543  0 10:27 pts/0    00:00:00 grep --color=auto kube-apiserver
root     119804      1  1 Feb26 ?        00:22:45 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 --bind-address=192.168.4.100 --secure-port=6443 --advertise-address=192.168.4.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem

 

Deploy kube-scheduler

Create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • --address: receive http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support https;
  • --kubeconfig: path of the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: cluster mode with leader election enabled; the node elected leader does the work while the others stay blocked;

Create the kube-scheduler systemd unit file

vim /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl restart kube-scheduler.service

Check that kube-scheduler is running

[root@master ~]# ps -ef |grep kube-scheduler 
root       3591      1  0 Feb25 ?        00:16:17 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      90724 118543  0 10:28 pts/0    00:00:00 grep --color=auto kube-scheduler
[root@master ~]# 
[root@master ~]# systemctl status kube-scheduler 
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 14:58:31 CST; 1 day 19h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3591 (kube-scheduler)
   Memory: 36.9M
   CGroup: /system.slice/kube-scheduler.service
           └─3591 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 27 10:22:54 master kube-scheduler[3591]: I0227 10:22:54.611139    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:01 master kube-scheduler[3591]: I0227 10:23:01.496338    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:02 master kube-scheduler[3591]: I0227 10:23:02.346595    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:19 master kube-scheduler[3591]: I0227 10:23:19.677905    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:26:36 master kube-scheduler[3591]: I0227 10:26:36.850715    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:21 master kube-scheduler[3591]: I0227 10:27:21.523891    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:22 master kube-scheduler[3591]: I0227 10:27:22.520733    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:12 master kube-scheduler[3591]: I0227 10:28:12.498729    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:33 master kube-scheduler[3591]: I0227 10:28:33.519011    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:50 master kube-scheduler[3591]: I0227 10:28:50.573353    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Hint: Some lines were ellipsized, use -l to show in full.

 

Deploy kube-controller-manager

Create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

 

Create the kube-controller-manager systemd unit file

vim /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Check that kube-controller-manager is running

[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:14:18 CST; 20h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 120023 (kube-controller)
   Memory: 76.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─120023 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elec...

Feb 27 10:31:30 master kube-controller-manager[120023]: I0227 10:31:30.722696  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.088697  120023 gc_controller.go:144] GC'ing orphaned
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.094678  120023 gc_controller.go:173] GC'ing unsche...ting.
Feb 27 10:31:34 master kube-controller-manager[120023]: I0227 10:31:34.271634  120023 attach_detach_controller.go:634] pr...4.21"
Feb 27 10:31:35 master kube-controller-manager[120023]: I0227 10:31:35.723490  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.377876  120023 attach_detach_controller.go:634] pr....100"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.498005  120023 attach_detach_controller.go:634] pr...4.56"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.500915  120023 cronjob_controller.go:111] Found 0 jobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505005  120023 cronjob_controller.go:119] Found 0 cronjobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505021  120023 cronjob_controller.go:122] Found 0 groups
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# 
[root@master ~]# ps -ef|grep  kube-controller-manager
root      90967 118543  0 10:31 pts/0    00:00:00 grep --color=auto kube-controller-manager
root     120023      1  0 Feb26 ?        00:08:42 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem

 

Add the executable directory /k8s/kubernetes/bin to the PATH variable

vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin

# Reload the profile
source /etc/profile

Check the master cluster status

[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}   

 

5. Deploy the worker nodes

A Kubernetes worker node runs the following components:

  • docker (already deployed above)
  • kubelet
  • kube-proxy

Deploy the kubelet component

  • The kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs;
  • On startup the kubelet automatically registers the node with kube-apiserver, and its built-in cadvisor collects and reports the node's resource usage;
  • For security, this document only opens the https port, authenticates and authorizes requests, and rejects unauthorized access (e.g. from apiserver or heapster).

Copy the kubelet binaries to the worker nodes

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.21:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.56:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file (on the master)

# On the master node
cd /k8s/kubernetes/ssl/

# Edit and then run this script
vim  environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=91af09d8720f467def95b65704862025
KUBE_APISERVER="https://192.168.4.100:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to all nodes (run on the master)

cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.21:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.56:/k8s/kubernetes/cfg/

Create the kubelet parameter configuration file and copy it to all nodes

Create the kubelet parameter configuration template

# Node 1
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true


# Node 2
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.56
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet configuration file

# Node 1
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.21 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Node 2
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.56 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit file (all nodes)

vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system cluster role (run once on the master)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Start the service (all nodes)

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Approve the kubelet CSR requests

CSR requests can be approved manually or automatically. The automatic approach is recommended, because starting with v1.8 the certificates issued after approving a CSR can be rotated automatically.
Manually approving CSR requests
List the CSRs:

# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   39m    kubelet-bootstrap   Pending
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   5m5s   kubelet-bootstrap   Pending

# kubectl certificate approve node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs
certificatesigningrequest.certificates.k8s.io/node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs 

# kubectl certificate approve node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s  
certificatesigningrequest.certificates.k8s.io/node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s approved
# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   41m     kubelet-bootstrap   Approved,Issued
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   7m32s   kubelet-bootstrap   Approved,Issued
  • Requesting User: the user that submitted the CSR, which kube-apiserver authenticates and authorizes (see the describe sketch below);
  • Subject: the certificate information being requested;
  • the certificate CN is system:node:kube-node2 and the Organization is system:nodes; the kube-apiserver Node authorization mode grants the corresponding permissions to this certificate;
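A single request can also be inspected before approval; the fields described above appear in its description (a minimal sketch using one of the CSR names listed earlier):

kubectl describe csr node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs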

Check the cluster status

[root@master ssl]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.4.100   Ready             43h   v1.13.0
192.168.4.21    Ready             20h   v1.13.0
192.168.4.56    Ready             20h   v1.13.0

Deploy the kube-proxy component

kube-proxy runs on every worker node; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.

Create the kube-proxy configuration file

vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.100 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • bindAddress: listen address;
  • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to service IPs;
  • hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find the node after starting and will not create any ipvs rules;
  • mode: use ipvs mode (see the config-file sketch below);
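The bullet points above describe the config-file form of these settings. This article passes plain command-line flags instead, but an equivalent KubeProxyConfiguration would look roughly like the following sketch (an assumption for illustration, not part of the original setup):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /k8s/kubernetes/cfg/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/24
hostnameOverride: 192.168.4.100
mode: ipvs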

Create the kube-proxy systemd unit file

vim /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@node ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 15:38:16 CST; 1 day 19h ago
 Main PID: 2887 (kube-proxy)
   Memory: 8.2M
   CGroup: /system.slice/kube-proxy.service
           ‣ 2887 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.4.100 --cluster-cidr=10....

Feb 27 11:06:44 node kube-proxy[2887]: I0227 11:06:44.625875    2887 config.go:141] Calling handler.OnEndpointsUpdate

Cluster status

Label the master and worker nodes

kubectl label node 192.168.4.100  node-role.kubernetes.io/master='master'
kubectl label node 192.168.4.21  node-role.kubernetes.io/node='node'
kubectl label node 192.168.4.56  node-role.kubernetes.io/node='node'
[root@master ~]# kubectl get node,cs
NAME                 STATUS   ROLES    AGE   VERSION
node/192.168.4.100   Ready    master   43h   v1.13.0
node/192.168.4.21    Ready    node     20h   v1.13.0
node/192.168.4.56    Ready    node     20h   v1.13.0

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   

 

CentOS 7 (minimal): Installing a Kubernetes Cluster with kubeadm

Set up CentOS

  1. 安装net-tools
[root@localhost ~]# yum install -y net-tools
  1. 关闭 firewalld
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Install Docker

Docker now ships in two editions, Docker-CE and Docker-EE: CE is the free community edition, EE is the commercial enterprise edition. We use the CE edition here.

  1. Install the yum repository tool packages
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Download the official docker-ce yum repository configuration
[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  3. Disable the docker-ce-edge repository; edge is the development channel and is unstable, so install from the stable channel
yum-config-manager --disable docker-ce-edge
  4. Refresh the local yum cache
yum makecache fast
  5. Install the corresponding Docker-CE version
yum -y install docker-ce
  6. Run hello-world
[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Install the kubelet and kubeadm packages

When the cluster is initialized with kubeadm init, the required dependency images are pulled to every host and etcd, kube-dns and kube-proxy are installed. Because of the GFW we cannot pull these images directly, so first download the images in the list below by other means and import them into the system, then run kubeadm init to initialize the cluster.

  1. Use the DaoCloud registry mirror (this step can be skipped)
[root@localhost ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
docker version >= 1.12
{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
Success.
You need to restart docker to take effect: sudo systemctl restart docker
[root@localhost ~]# systemctl restart docker
  2. Download the images; you can build your own on Docker Hub with a Dockerfile, or reuse mine
images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)
for imageName in ${images[@]} ; do
  docker pull champly/$imageName
  docker tag champly/$imageName gcr.io/google_containers/$imageName
  docker rmi champly/$imageName
done
  3. Retag the images with the expected versions
docker tag gcr.io/google_containers/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 && \
docker rmi gcr.io/google_containers/etcd-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 && \
docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64 && \
docker tag gcr.io/google_containers/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-apiserver-amd64 && \
docker tag gcr.io/google_containers/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-controller-manager-amd64 && \
docker tag gcr.io/google_containers/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.0 && \
docker rmi gcr.io/google_containers/kube-proxy-amd64 && \
docker tag gcr.io/google_containers/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-scheduler-amd64 && \
docker tag gcr.io/google_containers/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 && \
docker rmi gcr.io/google_containers/pause-amd64
  4. Add the Aliyun Kubernetes repository
[root@localhost ~]#  cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
  5. List the kubectl, kubelet, kubeadm and kubernetes-cni packages
[root@localhost ~]# yum list kubectl kubelet kubeadm kubernetes-cni
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.sohu.com
 * updates: mirrors.sohu.com
Available Packages
kubeadm.x86_64                                                     1.7.5-0                                              kubernetes
kubectl.x86_64                                                     1.7.5-0                                              kubernetes
kubelet.x86_64                                                     1.7.5-0                                              kubernetes
kubernetes-cni.x86_64                                              0.5.1-0                                              kubernetes
[root@localhost ~]#
  6. Install kubectl, kubelet, kubeadm and kubernetes-cni
[root@localhost ~]# yum install -y kubectl kubelet kubeadm kubernetes-cni

Adjust the cgroup driver

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs so that kubelet uses the same cgroup driver as Docker. A one-line way to make this edit is sketched below.
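A minimal sketch of that edit, assuming the drop-in file lives at the path used in this article; check the file before restarting kubelet.

# replace the cgroup driver in the kubeadm drop-in (keeps a .bak backup of the original)
sed -i.bak 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# confirm the change, then reload systemd so the new arguments take effect
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload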

Change the cAdvisor monitoring port in kubelet from the default of 0 to 4194, so kubelet's built-in cAdvisor web page can be viewed in a browser.

[root@kub-master ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194"

Start the kubelet service on all hosts

[root@master ~]# systemctl enable kubelet && systemctl start kubelet

Initialize the master (run on the master node)

[root@master ~]# kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.0.100 --kubernetes-version=v1.7.5 --pod-network-cidr=10.200.0.0/16
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 34.002949 seconds
[token] Using token: 0696ed.7cd261f787453bd9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443

[root@master ~]#

Make sure to record the line "kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443": it is not shown again and is needed to add nodes later. (If it is lost, a token can be recovered as sketched below.)
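A hedged recovery sketch, not from the original text: newer kubeadm releases can list existing bootstrap tokens and print a complete join command; with the kubeadm 1.7 used here these subcommands may be limited, so saving the original join line remains the safest option.

# list bootstrap tokens that are still valid (on the master)
kubeadm token list

# create a fresh token; on kubeadm >= 1.9 add --print-join-command to get the full join line
kubeadm token create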

Add nodes

[root@node1 ~]# kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.100:6443"
[discovery] Successfully established connection with API Server "192.168.0.100:6443"
[bootstrap] Detected server version: v1.7.10
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Configure the kubeconfig file for kubectl on the master

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Install flannel on the master

docker pull quay.io/coreos/flannel:v0.8.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

Check the cluster

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]# kubectl get nodes
NAME      STATUS     AGE       VERSION
master    Ready      24m       v1.7.5
node1     NotReady   45s       v1.7.5
node2     NotReady   7s        v1.7.5
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE
kube-system   etcd-master                      1/1       Running             0          24m
kube-system   kube-apiserver-master            1/1       Running             0          24m
kube-system   kube-controller-manager-master   1/1       Running             0          24m
kube-system   kube-dns-2425271678-h48rw        0/3       ImagePullBackOff    0          25m
kube-system   kube-flannel-ds-28n3w            1/2       CrashLoopBackOff    13         24m
kube-system   kube-flannel-ds-ndspr            0/2       ContainerCreating   0          41s
kube-system   kube-flannel-ds-zvx9j            0/2       ContainerCreating   0          1m
kube-system   kube-proxy-qxxzr                 0/1       ImagePullBackOff    0          41s
kube-system   kube-proxy-shkmx                 0/1       ImagePullBackOff    0          25m
kube-system   kube-proxy-vtk52                 0/1       ContainerCreating   0          1m
kube-system   kube-scheduler-master            1/1       Running             0          24m
[root@master ~]#

If you see: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix: so that kubectl can reach the apiserver, append the following environment variable to ~/.bash_profile, reload it, and run kubectl again:
export KUBECONFIG=/etc/kubernetes/admin.conf
source ~/.bash_profile

Installing Kubernetes 1.13 on CentOS7 from Binaries


1. Contents

[TOC]

1.1 What is Kubernetes?

  Kubernetes, abbreviated k8s (k, 8 letters, s) or kube, is an open-source Linux container orchestration platform that removes much of the manual work involved in deploying and scaling containerized applications. Kubernetes was originally designed and developed by engineers at Google. Google was an early contributor to Linux container technology and has spoken publicly about how it runs everything in containers (this is the technology behind Google's cloud services). Google deploys more than two billion containers a week, all managed by its internal platform Borg. Borg was the predecessor of Kubernetes, and the lessons learned from years of developing Borg shaped much of Kubernetes' design.

1.2 What are the advantages of Kubernetes?

  With Kubernetes you can quickly and efficiently meet the following user needs:

  • Deploy applications quickly and precisely
  • Scale applications on demand
  • Roll out new features seamlessly
  • Limit hardware usage to only the resources you need

     Advantages of Kubernetes

  • Portable: public cloud, private cloud, hybrid cloud, multi-cloud
  • Extensible: modular, pluggable, mountable, composable
  • Self-healing: automatic deployment, automatic restart, automatic replication, automatic scaling

     Google started the Kubernetes project in 2014. Kubernetes builds on Google's fifteen years of experience running production workloads at scale, combined with the best ideas and practices from the community.

2. Environment Preparation

The examples in this article use four machines; their hostnames and IP addresses are as follows

IP address   Hostname
10.0.0.100 c0(master)
10.0.0.101 c1(master)
10.0.0.102 c2
10.0.0.103 c3

   The /etc/hosts file on the four machines, with c0 as an example:

[root@c0 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.100 c0
10.0.0.101 c1
10.0.0.102 c2
10.0.0.103 c3

  

2.1 Network Configuration

  The following uses c0 as an example

[root@c0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6d8d9ad6-37b5-431a-ab16-47d0aa00d01f
DEVICE=eth0
ONBOOT=yes
IPADDR0=10.0.0.100
PREFIXO0=24
GATEWAY0=10.0.0.1
DNS1=10.0.0.1
DNS2=8.8.8.8

     Restart the network:

[root@c0 ~]# service network restart

     Switch the yum repositories to the Aliyun mirror

[root@c0 ~]# yum install -y wget
[root@c0 ~]# cd /etc/yum.repos.d/
[root@c0 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@c0 yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@c0 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@c0 yum.repos.d]# yum clean all
[root@c0 yum.repos.d]# yum makecache

     Install the network tools and basic utility packages

[root@c0 ~]# yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y

  

2.2 Change the HOSTNAME

  Set the hostname on each of the four machines in turn; c0 is shown as an example

[root@c0 ~]# hostnamectl --static set-hostname c0
[root@c0 ~]# hostnamectl status
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: ba02919abe4245aba673aaf5f778ad10
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64

  

2.3 Configure passwordless SSH login

  Generate a key pair on every machine

[root@c0 ~]# ssh-keygen
# press Enter at every prompt

     Copy the key generated by ssh-keygen to the other three machines; c0 is shown as an example

[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:
[root@c0 ~]# rm -rf ~/.ssh/known_hosts
[root@c0 ~]# clear
[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c0'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c1 (10.0.0.101)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.0.0.102)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c3 (10.0.0.103)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.

     Test that the keys are configured correctly

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N hostname; done;
c0
c1
c2
c3

  

2.4 Disable the Firewall

  Run the following command on every machine; c0 is shown as an example:

[root@c0 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

  

2.5 Disable Swap

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N swapoff -a; done;

Before and after disabling swap, you can check its state with free -h; after disabling, the swap total should be 0.

     On every machine edit /etc/fstab and comment out the /dev/mapper/centos-swap swap entry; c0 is shown as an example

[root@c0 ~]# sed -i "s/\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/" /etc/fstab
[root@c1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jan 28 11:49:11 2019
#
# Accessible filesystems, by reference, are maintained under ''/dev/disk''
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=93572ab6-90da-4cfe-83a4-93be7ad8597c /boot                   xfs     defaults        0 0
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

  

2.6 Disable SELinux

  Disable SELinux on every machine; c0 is shown as an example

[root@c0 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
[root@c0 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

SELinux is Security-Enhanced Linux.

  

2.7 Install NTP

  Install the NTP time-synchronization tool and start it

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N yum install ntp -y; done;

     Enable NTP to start on boot on every machine

[root@c0 ~]# systemctl enable ntpd && systemctl start ntpd

     Check the time on each machine in turn:

[root@c0 ~]# for N in $(seq 0 3); do ssh c$N date; done;
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:49 CST 2019
Sat Feb  9 18:11:49 CST 2019

  

2.8 Install and Configure CFSSL

  CFSSL is used to build a local CA and generate the certificates needed later.

[root@c0 ~]# mkdir -p /home/work/_src
[root@c0 ~]# cd /home/work/_src
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@c0 _src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@c0 _src]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@c0 _src]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@c0 _src]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@c0 _src]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

  

2.9 Create the Installation Directories

  Create the directories that etcd and Kubernetes will use later

[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/etcd/{bin,cfg,ssl} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_app/k8s/kubernetes/{bin,cfg,ssl,ssl_cert} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/etcd -p; done;

  

2.10 Upgrade the Kernel

  The stock 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling ipvs mode. ip_vs_fo.ko first appeared in kernel 3.19, and that kernel version is not available from the usual RPM repositories of RedHat-family distributions, so the kernel is upgraded here.

[root@c0 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@c0 ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

     After rebooting and manually selecting the new kernel, run the following command to confirm the new kernel is active (a quick check of the ipvs-related modules on the new kernel is sketched after the output below):

[root@c0 ~]# hostnamectl
   Static hostname: c0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 04c3f6d56e788345859875d9f49bd4bd
           Boot ID: 40a19388698f4907bd233a8cff76f36e
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 4.20.7-1.el7.elrepo.x86_64
      Architecture: x86-64
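A hedged verification sketch, not in the original text, assuming the elrepo kernel from the previous step is now running: load and list the ip_vs modules that kube-proxy's ipvs mode relies on.

# load the ipvs-related modules (ip_vs_fo only exists on kernels >= 3.19)
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_fo nf_conntrack; do
  modprobe "$m" || echo "module $m not available"
done

# confirm they are loaded
lsmod | grep -E '^ip_vs'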

  

3. Install Docker 18.06.1-ce

3.1 Remove Old Versions of Docker

  The removal method from the official documentation

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

     Alternatively, first query which Docker packages are already installed

[root@c0 ~]# yum list installed | grep docker
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
containerd.io.x86_64            1.2.2-3.el7                    @docker-ce-stable
docker-ce.x86_64                3:18.09.1-3.el7                @docker-ce-stable
docker-ce-cli.x86_64            1:18.09.1-3.el7                @docker-ce-stable

     Remove the installed Docker packages

[root@c0 ~]# yum -y remove docker-ce.x86_64 docker-ce-cli.x86_64 containerd.io.x86_64

     Remove Docker images and containers

[root@c0 ~]# rm -rf /var/lib/docker

  

3.2 Set Up the Repository

  Install the required packages: yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are needed by the devicemapper storage driver. Do this on every machine; c0 is shown as an example

[root@c0 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@c0 ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  

3.3 Install Docker

[root@c0 ~]# sudo yum install docker-ce-18.06.1.ce-3.el7 -y

  

3.4 Start Docker

[root@c0 ~]# systemctl enable docker && systemctl start docker

  

4. Install etcd 3.3.10

4.1 Create the etcd Certificates

4.1.1 Generate the JSON Request File Used for the etcd Server Certificate

[root@c0 ~]# mkdir -p /home/work/_src/ssl_etcd
[root@c0 ~]# cd /home/work/_src/ssl_etcd
[root@c0 ssl_etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

The default policy sets the certificate validity to 10 years (87600h). The etcd profile sets the certificate usages: signing means the certificate can be used to sign other certificates (the generated ca.pem has CA=TRUE); server auth means a client can use this CA to verify certificates presented by a server; client auth means a server can use this CA to verify certificates presented by a client. A way to inspect the generated certificates is sketched below.
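A hedged inspection sketch, not part of the original steps, using the cfssl-certinfo binary installed in section 2.8; run it once the certificates generated below exist to confirm their validity period and usages.

# show the decoded CA certificate (issuer, validity, CA=TRUE)
cfssl-certinfo -cert ca.pem

# the same information via openssl, if preferred
openssl x509 -in ca.pem -noout -text | grep -A2 'Validity'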

  

4.1.2 Create the etcd CA Certificate Request File

[root@c0 ssl_etcd]# cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.3 Create the etcd Server Certificate Request File

[root@c0 ssl_etcd]# cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.100",
    "10.0.0.101",
    "10.0.0.102",
    "10.0.0.103"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

  

4.1.4 Generate the etcd CA Certificate and Private Key

[root@c0 ssl_etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/14 18:44:37 [INFO] generating a new CA key and certificate from CSR
2019/02/14 18:44:37 [INFO] generate received request
2019/02/14 18:44:37 [INFO] received CSR
2019/02/14 18:44:37 [INFO] generating key: rsa-2048
2019/02/14 18:44:38 [INFO] encoded CSR
2019/02/14 18:44:38 [INFO] signed certificate with serial number 384346866475232855604658229421854651219342845660
[root@c0 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json

  

4.1.5 Generate the etcd Server Certificate and Private Key

[root@c0 ssl_etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/02/09 20:52:57 [INFO] generate received request
2019/02/09 20:52:57 [INFO] received CSR
2019/02/09 20:52:57 [INFO] generating key: rsa-2048
2019/02/09 20:52:57 [INFO] encoded CSR
2019/02/09 20:52:57 [INFO] signed certificate with serial number 373071566605311458179949133441319838683720611466
2019/02/09 20:52:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl_etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
[root@c0 _src]# cp server.pem server-key.pem /home/work/_app/k8s/etcd/ssl/

     Copy the generated certificates to the directory etcd uses

[root@c0 ssl_etcd]# cp *.pem /home/work/_app/k8s/etcd/ssl/

  

4.2 Install etcd

4.2.1 Download etcd

[root@c0 ssl_etcd]# cd /home/work/_src/
[root@c0 _src]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@c0 _src]# cd etcd-v3.3.10-linux-amd64
[root@c0 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /home/work/_app/k8s/etcd/bin/

  

4.2.2 Create the etcd systemd Unit File

  Create and save the /usr/lib/systemd/system/etcd.service file with the following content:

[root@c0 etcd-v3.3.10-linux-amd64]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/etcd/cfg/etcd.conf
ExecStart=/home/work/_app/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
--key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
--peer-key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  

4.2.3 Copy the etcd Binaries, Certificates and systemd Unit File to the Other Nodes

[root@c0 ~]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/etcd c$N:/home/work/_app/k8s/; done;
[root@c0 ~]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/etcd.service c$N:/usr/lib/systemd/system/etcd.service; done;

  

4.2.4 etcd Main Configuration File

  On c0 create the /home/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@c0 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
# etcd member name
ETCD_NAME="etcd00"
# etcd data directory
ETCD_DATA_DIR="/home/work/_data/etcd"
# URLs this member listens on for peer traffic, comma separated, in the form scheme://IP:PORT (scheme may be http or https)
ETCD_LISTEN_PEER_URLS="https://10.0.0.100:2380"
# URLs this member listens on for client traffic
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.100:2379"
 
#[Clustering]
# peer URLs this member advertises to the rest of the cluster; they carry cluster data, so every other member must be able to reach them
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.100:2380"
# client URLs this member advertises to clients
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.100:2379"
# all members of the initial cluster, in the form ETCD_NAME=ETCD_INITIAL_ADVERTISE_PEER_URLS, comma separated
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
# token for the initial cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# initial cluster state; new means a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

     On c1 create the /home/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@c1 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.101:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.101:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

     On c2 create the /home/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@c2 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.102:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.102:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

     On c3 create the /home/work/_app/k8s/etcd/cfg/etcd.conf file with the following content:

[root@c3 _src]# cat << EOF | tee /home/work/_app/k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/home/work/_data/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.103:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.103:2379"
ETCD_INITIAL_CLUSTER="etcd00=https://10.0.0.100:2380,etcd01=https://10.0.0.101:2380,etcd02=https://10.0.0.102:2380,etcd03=https://10.0.0.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/home/work/_app/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/home/work/_app/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/home/work/_app/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"

EOF

  

4.2.5 Start the etcd Service

  Run this on each node individually. Note that on a brand-new cluster the first member waits for its peers, so start etcd on all nodes (see the loop sketch below).

[root@c0 _src]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
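A hedged convenience sketch using the same ssh loop pattern as the rest of this article: start etcd on all four nodes in the background so the first member does not appear to hang while waiting for its peers.

# start etcd on c0..c3 in parallel; each ssh is backgrounded so the members can find each other
for N in $(seq 0 3); do
  ssh c$N "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd" &
done
wait

# check that the units are active everywhere
for N in $(seq 0 3); do ssh c$N systemctl is-active etcd; done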

  

4.2.6 Check the etcd Cluster Health

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem cluster-health
member 2cba54b8e3ba988a is healthy: got healthy result from https://10.0.0.103:2379
member 7c12135a398849e3 is healthy: got healthy result from https://10.0.0.102:2379
member 99c2fd4fe11e28d9 is healthy: got healthy result from https://10.0.0.100:2379
member f2fd0c12369e0d75 is healthy: got healthy result from https://10.0.0.101:2379
cluster is healthy

  

4.2.7 List the etcd Cluster Members

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem  member list
2cba54b8e3ba988a: name=etcd03 peerURLs=https://10.0.0.103:2380 clientURLs=https://10.0.0.103:2379 isLeader=false
7c12135a398849e3: name=etcd02 peerURLs=https://10.0.0.102:2380 clientURLs=https://10.0.0.102:2379 isLeader=false
99c2fd4fe11e28d9: name=etcd00 peerURLs=https://10.0.0.100:2380 clientURLs=https://10.0.0.100:2379 isLeader=true
f2fd0c12369e0d75: name=etcd01 peerURLs=https://10.0.0.101:2380 clientURLs=https://10.0.0.101:2379 isLeader=false

  

5. Install Flannel v0.11.0

5.1 Flanneld Network Installation

  Flannel is essentially an overlay network: it wraps TCP packets inside another network packet for routing, forwarding and communication, and currently supports UDP, VxLAN, AWS VPC, GCE routing and other forwarding back ends. In Kubernetes, Flannel is used to build the layer-3 (network layer) fabric. Flannel provides a layer-3 IPv4 network between the nodes of the cluster; it does not control how containers attach to the host network, only how traffic is carried between hosts. It does, however, ship a CNI plugin for Kubernetes and guidance for integrating with Docker.

Without the flanneld network, pods on different Nodes cannot communicate; only pods on the same Node can. When the flanneld service starts it mainly does the following: it fetches the network configuration from etcd, carves out a subnet and registers it in etcd, and writes the subnet information to /run/flannel/subnet.env.

  

5.2 Write the Network Range into the etcd Cluster

[root@c0 _src]# /home/work/_app/k8s/etcd/bin/etcdctl --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379"  set /coreos.com/network/config  '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}

The current Flanneld release (v0.11.0) does not support the etcd v3 API, so the configuration key and network range are written with the etcd v2 API. The pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager. The stored value can be read back as sketched below.
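A hedged read-back sketch using the same etcdctl binary and certificates as above, to confirm what Flanneld will see (etcdctl 3.3 uses the v2 API by default when ETCDCTL_API=3 is not set):

# read the network config back via the etcd v2 API
/home/work/_app/k8s/etcd/bin/etcdctl \
  --ca-file=/home/work/_app/k8s/etcd/ssl/ca.pem \
  --cert-file=/home/work/_app/k8s/etcd/ssl/server.pem \
  --key-file=/home/work/_app/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://10.0.0.100:2379" \
  get /coreos.com/network/config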

  

5.3 Install Flannel

[root@c0 _src]# pwd
/home/work/_src
[root@c0 _src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@c0 _src]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@c0 _src]# mv flanneld mk-docker-opts.sh /home/work/_app/k8s/kubernetes/bin/

  

5.4 Configure Flannel

  Create and save the /home/work/_app/k8s/kubernetes/cfg/flanneld file with the following content:

[root@c0 _src]# cat /home/work/_app/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 -etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem -etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem -etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"

  

5.5 Create the Flannel systemd Unit File

  Create and save the /usr/lib/systemd/system/flanneld.service file with the following content:

[root@c0 _src]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/flanneld
ExecStart=/home/work/_app/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/home/work/_app/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

The mk-docker-opts.sh script writes the pod subnet assigned to Flanneld into /run/flannel/docker (and /run/flannel/subnet.env); Docker later reads the environment variables in that file to configure the docker0 bridge. Flanneld uses the interface of the system default route to talk to other nodes; on hosts with several interfaces (for example an internal and a public one), the -iface option selects which interface to use, as sketched below.
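A hedged example of pinning the interface, assuming the internal NIC is named eth0 (adjust to your environment); it only changes the flanneld options file shown in section 5.4.

# /home/work/_app/k8s/kubernetes/cfg/flanneld with an explicit interface (eth0 is an assumption)
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 \
 -etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem \
 -etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem \
 -etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem \
 -iface=eth0"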

  

5.6 Configure Docker to Start with the Assigned Subnet

  Edit the /usr/lib/systemd/system/docker.service file so that it reads as follows:

[root@c0 _src]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# load the environment file written by flannel and append its variables to ExecStart
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

  

5.7 Copy the Flannel Files to the Other Machines

  This mainly copies the Flannel binaries, the Flannel configuration file, the Flannel systemd unit and the Docker systemd unit

[root@c0 _src]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/kubernetes/* c$N:/home/work/_app/k8s/kubernetes/; done;
[root@c0 _src]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/docker.service c$N:/usr/lib/systemd/system/docker.service; done;
[root@c0 _src]# for N in $(seq 0 3); do scp -r /usr/lib/systemd/system/flanneld.service c$N:/usr/lib/systemd/system/flanneld.service; done;

  

5.8 Start the Services

  Run this on every machine individually; c0 is shown as an example:

[root@c0 _src]# systemctl daemon-reload && systemctl stop docker && systemctl enable flanneld && systemctl start flanneld && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

Stop Docker (and any kubelet that depends on it) before starting Flannel, so that Flannel can reconfigure the docker0 bridge.

  

5.9 Check the docker0 Bridge Configured by Flannel

[root@c0 _src]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:1c:42:50:8c:6a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/8 brd 10.255.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::49d:e3e6:c623:9582/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 3e:80:5d:97:53:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3c80:5dff:fe97:53c4/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9e:df:b9:87 brd ff:ff:ff:ff:ff:ff
    inet 10.172.46.1/24 brd 10.172.46.255 scope global docker0
       valid_lft forever preferred_lft forever

  

5.10 Verify the Flannel Service

[root@c0 _src]# for N in $(seq 0 3); do ssh c$N cat /run/flannel/subnet.env ; done;
DOCKER_OPT_BIP="--bip=10.172.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.46.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.90.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.90.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.5.1/24 --ip-masq=false --mtu=1450"
DOCKER_OPT_BIP="--bip=10.172.72.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.172.72.1/24 --ip-masq=false --mtu=1450"

  

6. Install Kubernetes

6.1 Create the Certificates Kubernetes Needs

6.1.1 Generate the JSON Request File for the Kubernetes Certificates

[root@c0 ~]# cd /home/work/_app/k8s/kubernetes/ssl/
[root@c0 ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "server": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ],
        "expiry": "8760h"
      },
      "client": {
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

  

6.1.2 Generate the Kubernetes CA Configuration and Certificate

[root@c0 ssl]# cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

     Initialize a Kubernetes CA certificate

[root@c0 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/02/15 00:33:49 [INFO] generating a new CA key and certificate from CSR
2019/02/15 00:33:49 [INFO] generate received request
2019/02/15 00:33:49 [INFO] received CSR
2019/02/15 00:33:49 [INFO] generating key: rsa-2048
2019/02/15 00:33:49 [INFO] encoded CSR
2019/02/15 00:33:49 [INFO] signed certificate with serial number 19178419085322799829088564182237651657158569707
[root@c0 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

  

6.1.3 Generate the kube-apiserver Configuration and Certificate

  Create the certificate request file

[root@c0 ssl]# cat << EOF | tee kube-apiserver-server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.0.0.1",
      "10.0.0.100",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "API Server"
        }
    ]
}
EOF

     Generate the kube-apiserver certificate

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kube-apiserver-server-csr.json | cfssljson -bare kube-apiserver-server
2019/02/15 00:40:17 [INFO] generate received request
2019/02/15 00:40:17 [INFO] received CSR
2019/02/15 00:40:17 [INFO] generating key: rsa-2048
2019/02/15 00:40:17 [INFO] encoded CSR
2019/02/15 00:40:17 [INFO] signed certificate with serial number 73791614256163825800646464302566039201359288928
2019/02/15 00:40:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  kube-apiserver-server.csr  kube-apiserver-server-csr.json  kube-apiserver-server-key.pem  kube-apiserver-server.pem

  

6.1.4 Generate the kubelet Client Configuration and Certificate

  Create the certificate request file

[root@c0 ssl]# cat << EOF | tee kubelet-client-csr.json
{
  "CN": "kubelet",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Kubelet",
      "ST": "Beijing"
    }
  ]
}
EOF

     Generate the kubelet client certificate

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubelet-client-csr.json | cfssljson -bare kubelet-client
2019/02/15 00:44:43 [INFO] generate received request
2019/02/15 00:44:43 [INFO] received CSR
2019/02/15 00:44:43 [INFO] generating key: rsa-2048
2019/02/15 00:44:43 [INFO] encoded CSR
2019/02/15 00:44:43 [INFO] signed certificate with serial number 285651868701760571162897366975202301612567414209
2019/02/15 00:44:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem

  

6.1.5 Generate the kube-proxy Configuration and Certificate

  Create the certificate request file

[root@c0 ssl]# cat << EOF | tee kube-proxy-client-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "System",
      "ST": "Beijing"
    }
  ]
}
EOF

     Generate the kube-proxy certificate

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-client-csr.json | cfssljson -bare kube-proxy-client
2019/02/15 01:14:39 [INFO] generate received request
2019/02/15 01:14:39 [INFO] received CSR
2019/02/15 01:14:39 [INFO] generating key: rsa-2048
2019/02/15 01:14:39 [INFO] encoded CSR
2019/02/15 01:14:39 [INFO] signed certificate with serial number 535503934939407075396917222976858989138817338004
2019/02/15 01:14:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca-csr.json  ca.pem                     kube-apiserver-server-csr.json  kube-apiserver-server.pem  kubelet-client-csr.json  kubelet-client.pem     kube-proxy-client-csr.json  kube-proxy-client.pem
ca.csr          ca-key.pem   kube-apiserver-server.csr  kube-apiserver-server-key.pem   kubelet-client.csr         kubelet-client-key.pem   kube-proxy-client.csr  kube-proxy-client-key.pem

  

6.1.6 Generate the kubectl Administrator Configuration and Certificate

  Create the kubectl administrator certificate request file

[root@c0 ssl]# cat << EOF | tee kubernetes-admin-user.csr.json
{
  "CN": "admin",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "k8s",
      "OU": "Cluster Admins",
      "ST": "Beijing"
    }
  ]
}
EOF

     Generate the kubectl administrator certificate

[root@c0 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kubernetes-admin-user.csr.json | cfssljson -bare kubernetes-admin-user
2019/02/15 01:23:22 [INFO] generate received request
2019/02/15 01:23:22 [INFO] received CSR
2019/02/15 01:23:22 [INFO] generating key: rsa-2048
2019/02/15 01:23:22 [INFO] encoded CSR
2019/02/15 01:23:22 [INFO] signed certificate with serial number 724413523889121871668676123719532667068182658276
2019/02/15 01:23:22 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c0 ssl]# ls
ca-config.json  ca-key.pem                 kube-apiserver-server-csr.json  kubelet-client.csr       kubelet-client.pem          kube-proxy-client-key.pem  kubernetes-admin-user.csr.json
ca.csr          ca.pem                     kube-apiserver-server-key.pem   kubelet-client-csr.json  kube-proxy-client.csr       kube-proxy-client.pem      kubernetes-admin-user-key.pem
ca-csr.json     kube-apiserver-server.csr  kube-apiserver-server.pem       kubelet-client-key.pem   kube-proxy-client-csr.json  kubernetes-admin-user.csr  kubernetes-admin-user.pem

  

6.1.7 Copy the Certificates to the Kubernetes Nodes

[root@c0 ~]# for N in $(seq 0 3); do scp -r /home/work/_app/k8s/kubernetes/ssl/*.pem c$N:/home/work/_app/k8s/kubernetes/ssl/; done;

  

6.2 Deploy the Kubernetes Master Node and Join It to the Cluster

  The Kubernetes master node runs the following components:

  • APIServer   The apiserver exposes the RESTful Kubernetes API and is the single entry point for management commands. Every create, update, delete or query of a resource goes through the APIServer before being persisted to etcd. kubectl, the client tool that ships with Kubernetes, is a wrapper around this API and talks to the APIServer directly.
  • Scheduler   The scheduler places Pods onto suitable Nodes. Treated as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes provides scheduling algorithms and also exposes an interface so users can plug in their own.
  • Controller manager   If the APIServer handles the front office, the controller manager handles the back office. Each resource has a corresponding controller, and the controller manager runs them all. For example, once a Pod is created through the APIServer, the APIServer's job is done; the controllers then drive it toward the desired state.
  • etcd   etcd is a highly available key-value store; Kubernetes uses it to persist the state of every resource, which is what makes the RESTful API possible.
  • Flannel   Without the flanneld network, pods on different Nodes cannot communicate, only pods on the same Node. Flannel fetches the network configuration from etcd, carves out a subnet, registers it in etcd and records the subnet information.

kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one active process while the others stay on standby. The current leader can be inspected as sketched below.
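A hedged inspection sketch, not part of the original steps: in Kubernetes 1.13 the leader lease is recorded as an annotation on an Endpoints object in kube-system, which can be read once kubectl is configured against this cluster.

# show which scheduler instance currently holds the leader lease
kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

# same for the controller manager
kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'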

  

6.2.1 Download and Install the Kubernetes Server Binaries

[root@c0 ~]# cd /home/work/_src/
[root@c0 _src]# wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
[root@c0 _src]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@c0 _src]# cd kubernetes/server/bin/
[root@c0 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kubelet kube-proxy /home/work/_app/k8s/kubernetes/bin/

     Copy kubelet, kubectl and kube-proxy from c0 to the other nodes

[root@c0 ~]# cd /home/work/_src/kubernetes/server/bin/
[root@c0 bin]# for N in $(seq 1 3); do scp -r kubelet kubectl kube-proxy c$N:/home/work/_app/k8s/kubernetes/bin/; done;
kubelet                       100%  108MB 120.3MB/s   00:00
kubectl                       100%   37MB 120.0MB/s   00:00
kube-proxy                    100%   33MB 113.7MB/s   00:00
kubelet                       100%  108MB 108.0MB/s   00:00
kubectl                       100%   37MB 108.6MB/s   00:00
kube-proxy                    100%   33MB 106.1MB/s   00:00
kubelet                       100%  108MB 117.8MB/s   00:00
kubectl                       100%   37MB 116.6MB/s   00:00
kube-proxy                    100%   33MB 119.0MB/s   00:00

  

6.2.2 Deploy the Apiserver

  Create the TLS bootstrapping token

[root@c0 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
4470210dbf9d9c57f8543bce4683c3ce

The random token generated here is 4470210dbf9d9c57f8543bce4683c3ce; write it down, it is needed below.

  Create and save the /home/work/_app/k8s/kubernetes/cfg/token-auth-file file with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/token-auth-file
4470210dbf9d9c57f8543bce4683c3ce,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

  

6.2.2.1 Create the Apiserver Configuration File

  Create and save the /home/work/_app/k8s/kubernetes/cfg/kube-apiserver file with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 \
--bind-address=10.0.0.100 \
--secure-port=6443 \
--advertise-address=10.0.0.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.244.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/home/work/_app/k8s/kubernetes/cfg/token-auth-file \
--service-node-port-range=30000-50000 \
--tls-cert-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server.pem  \
--tls-private-key-file=/home/work/_app/k8s/kubernetes/ssl/kube-apiserver-server-key.pem \
--client-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/home/work/_app/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/home/work/_app/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/home/work/_app/k8s/etcd/ssl/server-key.pem"

  

6.2.2.2 Create the Apiserver systemd Unit File

  Create and save the /usr/lib/systemd/system/kube-apiserver.service file with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.2.3 Start the kube-apiserver Service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

  

6.2.2.4 Check That the Apiserver Is Running

[root@c0 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:28:03 CST; 19s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4708 (kube-apiserver)
    Tasks: 10
   Memory: 370.9M
   CGroup: /system.slice/kube-apiserver.service
           └─4708 /home/work/_app/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.0.0.100:2379,https://10.0.0.101:2379,https://10.0.0.102:2379,https://10.0.0.103:2379 --bind-address=10.0.0.100 ...

Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.510271    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.032168ms) 200 [kube-api...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.513149    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.1516...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.515603    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.88011ms) 200 ...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.518209    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.980109ms) 200 [k...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.520474    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.890751ms) 200 [kub...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.522918    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.80026ms) 200 [kube-...10.0.0.100:59408]
Feb 19 22:28:11 c0 kube-apiserver[4708]: I0219 22:28:11.525952    4708 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.148966ms) 200 [k...10.0.0.100:59408]
Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.403713    4708 wrap.go:47] GET /api/v1/namespaces/default: (2.463889ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]
Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.406610    4708 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.080766ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]
Feb 19 22:28:20 c0 kube-apiserver[4708]: I0219 22:28:20.417019    4708 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.134397ms) 200 [kube-apiserver/v1.13.0 (linux/amd64) kubernetes/ddf47ac 10.0.0.100:59408]
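
As an additional, optional check you can confirm that the expected ports are listening: 6443 is the secure port set in the configuration, and 127.0.0.1:8080 is the default insecure port that the scheduler and controller-manager connect to later.

# kube-apiserver should be listening on the secure port 6443 and on 127.0.0.1:8080
ss -tlnp | grep -E '6443|8080'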

  

6.2.3、Deploy the Scheduler

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/kube-scheduler with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

  

6.2.3.1、Create the Kube-scheduler systemd startup file

  Create and save the file /usr/lib/systemd/system/kube-scheduler.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.3.2、Start the Kube-scheduler service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

  

6.2.3.3、Check whether the Kube-scheduler service is running

[root@c0 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:29:07 CST; 7s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4839 (kube-scheduler)
    Tasks: 9
   Memory: 47.0M
   CGroup: /system.slice/kube-scheduler.service
           └─4839 /home/work/_app/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.679756    4839 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779894    4839 shared_informer.go:123] caches populated
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779928    4839 controller_utils.go:1034] Caches are synced for scheduler controller
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.779990    4839 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-scheduler...
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.784100    4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:09 c0 kube-scheduler[4839]: I0219 22:29:09.784135    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
Feb 19 22:29:12 c0 kube-scheduler[4839]: I0219 22:29:12.829896    4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:12 c0 kube-scheduler[4839]: I0219 22:29:12.829921    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
Feb 19 22:29:14 c0 kube-scheduler[4839]: I0219 22:29:14.941554    4839 leaderelection.go:289] lock is held by c0_ca2f3489-3316-11e9-87c6-001c42508c6a and has not yet expired
Feb 19 22:29:14 c0 kube-scheduler[4839]: I0219 22:29:14.941573    4839 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
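
The repeated "failed to acquire lease" lines above are harmless: they only mean the kube-scheduler leader lease is still held by another (usually an earlier) instance and will be taken over once it expires. If you want to see the current lease holder, one possible way, a sketch assuming the default endpoints-based leader election of this Kubernetes version, is to look at the leader annotation on the election record:

# Inspect the kube-scheduler leader-election record (look for the control-plane leader annotation)
/home/work/_app/k8s/kubernetes/bin/kubectl -n kube-system get endpoints kube-scheduler -o yaml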

  

6.2.4、Deploy the Kube-Controller-Manager component

6.2.4.1、Create the kube-controller-manager configuration file

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.244.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/home/work/_app/k8s/kubernetes/ssl/ca-key.pem"

  

6.2.4.2、Create the kube-controller-manager systemd startup file

  Create and save the file /usr/lib/systemd/system/kube-controller-manager.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

  

6.2.4.3、Start the kube-controller-manager service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

  

6.2.4.4、Check whether the kube-controller-manager service is running

[root@c0 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:29:40 CST; 12s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 4933 (kube-controller)
    Tasks: 7
   Memory: 106.7M
   CGroup: /system.slice/kube-controller-manager.service
           └─4933 /home/work/_app/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.244.0.0/16 --cluster-name=kubernet...

Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.276841    4933 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252
Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.278183    4933 leaderelection.go:205] attempting to acquire leader lease  kube-system/kube-controller-manager...
Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.301326    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:41 c0 kube-controller-manager[4933]: I0219 22:29:41.301451    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:44 c0 kube-controller-manager[4933]: I0219 22:29:44.679518    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:44 c0 kube-controller-manager[4933]: I0219 22:29:44.679550    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:47 c0 kube-controller-manager[4933]: I0219 22:29:47.078743    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:47 c0 kube-controller-manager[4933]: I0219 22:29:47.078762    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Feb 19 22:29:49 c0 kube-controller-manager[4933]: I0219 22:29:49.529247    4933 leaderelection.go:289] lock is held by c0_cae65875-3316-11e9-9071-001c42508c6a and has not yet expired
Feb 19 22:29:49 c0 kube-controller-manager[4933]: I0219 22:29:49.529266    4933 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager

  

6.2.5、Verify the API Server service

  Add kubectl to the $PATH variable:

[root@c0 ~]# echo "PATH=/home/work/_app/k8s/kubernetes/bin:$PATH:$HOME/bin" >> /etc/profile
[root@c0 ~]# source /etc/profile

     Check the node and component status:

[root@c0 ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
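
Besides the component status above, the API server's health endpoint can also be queried directly through kubectl as an optional check; it should simply return ok:

# Query the apiserver /healthz endpoint
kubectl get --raw='/healthz'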

  

6.2.6、Deploy the Kubelet

6.2.6.1、Create the bootstrap.kubeconfig and kube-proxy.kubeconfig files

  Create and save the file /home/work/_app/k8s/kubernetes/cfg/env.sh with the following content:

[root@c0 cfg]# pwd
/home/work/_app/k8s/kubernetes/cfg
[root@c0 cfg]# cat env.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=4470210dbf9d9c57f8543bce4683c3ce
KUBE_APISERVER="https://10.0.0.100:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
 
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
#----------------------
 
# Create the kube-proxy kubeconfig file
 
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/work/_app/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client.pem \
  --client-key=/home/work/_app/k8s/kubernetes/ssl/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

BOOTSTRAP_TOKEN is the value 4470210dbf9d9c57f8543bce4683c3ce generated earlier when creating the TLS Bootstrapping Token.
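
If you ever need to regenerate a token of the same form (for example when rebuilding the cluster), a commonly used one-liner is shown below; the new value must then be written both to the token file used by kube-apiserver and to BOOTSTRAP_TOKEN in this script. This is only a sketch and is not needed if you keep the existing token:

# Generate a random 16-byte hex token, e.g. 4470210dbf9d9c57f8543bce4683c3ce
head -c 16 /dev/urandom | od -An -t x | tr -d ' '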

     Run the script:

[root@c0 cfg]# pwd
/home/work/_app/k8s/kubernetes/cfg
[root@c0 cfg]# sh env.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" modified.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@c0 cfg]# ls
bootstrap.kubeconfig  env.sh  flanneld  kube-apiserver  kube-controller-manager  kube-proxy.kubeconfig  kube-scheduler  token.csv

     Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes:

[root@c0 cfg]# for N in $(seq 1 3); do scp -r kube-proxy.kubeconfig bootstrap.kubeconfig c$N:/home/work/_app/k8s/kubernetes/cfg/; done;
kube-proxy.kubeconfig                                100% 6294    10.2MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     4.2MB/s   00:00
kube-proxy.kubeconfig                                100% 6294    10.8MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     3.3MB/s   00:00
kube-proxy.kubeconfig                                100% 6294     9.6MB/s   00:00
bootstrap.kubeconfig                                 100% 2176     2.3MB/s   00:00

  

6.2.6.2、Create the kubelet configuration files

  Create and save the parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet.config with the following content:

[root@c0 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.100
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

     Create and save the startup parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet with the following content:

[root@c0 cfg]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.100 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

When the kubelet starts, if the file given by --kubeconfig does not yet exist, the bootstrap kubeconfig given by --bootstrap-kubeconfig is used to request a client certificate from the API server. Once the kubelet's certificate request is approved, the issued key and certificate are written to the directory given by --cert-dir.

  

6.2.6.3、Bind the kubelet-bootstrap user to the system cluster role

[root@c0 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

  

6.2.6.4、Create the kubelet systemd startup file

  Create and save /usr/lib/systemd/system/kubelet.service with the following content:

[root@c0 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

  

6.2.6.5、Start the kubelet service

[root@c0 cfg]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

  

6.2.6.6、Check the kubelet service status

[root@c0 cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-19 22:31:23 CST; 14s ago
 Main PID: 5137 (kubelet)
    Tasks: 13
   Memory: 128.7M
   CGroup: /system.slice/kubelet.service
           └─5137 /home/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.0.0.100 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/home/work/_app/k8s/kub...

Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.488086    5137 eviction_manager.go:226] eviction manager: synchronize housekeeping
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502001    5137 helpers.go:836] eviction manager: observations: signal=imagefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.48876...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502103    5137 helpers.go:836] eviction manager: observations: signal=pid.available, available: 32554, capacity: 32Ki, time: 2019-02-19 22:31:34.50073593 +0800 CST m=+10.750931769
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502132    5137 helpers.go:836] eviction manager: observations: signal=memory.available, available: 2179016Ki, capacity: 2819280Ki, time: 2019-02-19 22:31:34.4887683...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502143    5137 helpers.go:836] eviction manager: observations: signal=allocatableMemory.available, available: 2819280Ki, capacity: 2819280Ki, time: 2019-02-19 22:31...T m=+10.751961068
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502151    5137 helpers.go:836] eviction manager: observations: signal=nodefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.4887...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502161    5137 helpers.go:836] eviction manager: observations: signal=nodefs.inodesFree, available: 107287687, capacity: 107374144, time: 2019-02-19 22:31:34.488768...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502170    5137 helpers.go:836] eviction manager: observations: signal=imagefs.available, available: 1068393320Ki, capacity: 1048064Mi, time: 2019-02-19 22:31:34.488...T m=+10.738964114
Feb 19 22:31:34 c0 kubelet[5137]: I0219 22:31:34.502191    5137 eviction_manager.go:317] eviction manager: no resources are starved
Feb 19 22:31:36 c0 kubelet[5137]: I0219 22:31:36.104200    5137 kubelet.go:1995] SyncLoop (housekeeping)

  

6.2.7、Approve the Master joining the cluster

  A CSR can be approved manually, outside the built-in approval flow, to let a node join the cluster. An administrator can handle certificate requests with kubectl: use kubectl get csr to list the CSR requests and kubectl describe csr <name> to show the details of a request, then use kubectl certificate approve <name> or kubectl certificate deny <name> to approve or reject it.
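
If several nodes bootstrap at the same time, the pending requests can also be approved in a single pass instead of one by one. A convenience sketch (check the list first, because this approves everything that is still Pending):

# Approve every CSR that is still in the Pending state
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve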

6.2.7.1、View the CSR list

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   14m   kubelet-bootstrap   Pending

  

6.2.7.2、Approve joining the cluster

[root@c0 cfg]# kubectl certificate approve node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k
certificatesigningrequest.certificates.k8s.io/node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k approved

  

6.2.7.3、Verify that the Master has joined the cluster

  Check the CSR list again:

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   15m   kubelet-bootstrap   Approved,Issued
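
After approval, the kubelet on c0 writes the issued client certificate into the --cert-dir configured earlier. As an optional check you can inspect it; this is a sketch that assumes the same file names shown later for c1, and the timestamped name will differ on your machine:

# List the issued kubelet client certificate and show its subject and validity period
ls /home/work/_app/k8s/kubernetes/ssl_cert/
openssl x509 -in /home/work/_app/k8s/kubernetes/ssl_cert/kubelet-client-current.pem -noout -subject -dates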

  

6.3、Deploy the kube-proxy component

  kube-proxy runs on every Node. It watches the apiserver for changes to service and Endpoint objects and creates routing rules to load-balance service traffic. The steps below use c0 as the example.

6.3.1、Create the kube-proxy parameter file

  Create and save the configuration file /home/work/_app/k8s/kubernetes/cfg/kube-proxy with the following content:

[root@c0 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.100 \
--cluster-cidr=10.244.0.0/16 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

--hostname-override must be set to each node's own IP.
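
When this file is copied to another node, only the IP has to change, so each node's copy can be derived from the master's. A minimal sketch, assuming c1's address is 10.0.0.101 as elsewhere in this article:

# Derive c1's kube-proxy configuration from the master's copy and ship it over
sed 's/--hostname-override=10.0.0.100/--hostname-override=10.0.0.101/' \
  /home/work/_app/k8s/kubernetes/cfg/kube-proxy > /tmp/kube-proxy.c1
scp /tmp/kube-proxy.c1 c1:/home/work/_app/k8s/kubernetes/cfg/kube-proxy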

  

6.3.2、Create the kube-proxy systemd startup file

  Create and save the file /usr/lib/systemd/system/kube-proxy.service with the following content:

[root@c0 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/home/work/_app/k8s/kubernetes/cfg/kube-proxy
ExecStart=/home/work/_app/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target 

  

6.3.3、Start the kube-proxy service

[root@c0 ~]# systemctl daemon-reload && systemctl enable kube-proxy &&  systemctl start kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

  

6.3.4、Check the kube-proxy service status

[root@c0 cfg]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:08:51 CST; 3h 49min ago
 Main PID: 12660 (kube-proxy)
    Tasks: 0
   Memory: 1.9M
   CGroup: /system.slice/kube-proxy.service
           ‣ 12660 /home/work/_app/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.0.0.100 --cluster-cidr=10.244.0.0/16 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/...

Feb 18 09:58:38 c0 kube-proxy[12660]: I0218 09:58:38.205387   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:38 c0 kube-proxy[12660]: I0218 09:58:38.250931   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:40 c0 kube-proxy[12660]: I0218 09:58:40.249487   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:40 c0 kube-proxy[12660]: I0218 09:58:40.290336   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:42 c0 kube-proxy[12660]: I0218 09:58:42.264320   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:42 c0 kube-proxy[12660]: I0218 09:58:42.318954   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:44 c0 kube-proxy[12660]: I0218 09:58:44.273290   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:44 c0 kube-proxy[12660]: I0218 09:58:44.359236   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:46 c0 kube-proxy[12660]: I0218 09:58:46.287980   12660 config.go:141] Calling handler.OnEndpointsUpdate
Feb 18 09:58:46 c0 kube-proxy[12660]: I0218 09:58:46.377475   12660 config.go:141] Calling handler.OnEndpointsUpdate

  

6.4、Verify the Server service

  Check the Master status:

[root@c0 cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

NAME              STATUS   ROLES    AGE   VERSION
node/10.0.0.100   Ready    <none>   51m   v1.13.0

  

6.5、Join Kubernetes Nodes to the cluster

  A Kubernetes Node runs the following components:

  • Proxy: implements service discovery and reverse proxying inside Kubernetes. kube-proxy supports TCP and UDP connection forwarding and, by default, distributes client traffic across the backend pods of a service using a Round Robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to monitor changes to service and endpoint objects in the cluster and maintains a service-to-endpoint mapping, so changes to backend pod IPs are transparent to clients; it also supports session affinity.
  • Kubelet: the Master's agent on each Node and the most important component on a Node. It maintains and manages all containers on that Node (containers not created through Kubernetes are not managed). In essence, it keeps the actual state of Pods in line with their desired state. On startup the kubelet automatically registers the node with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage. For security it only opens the HTTPS port, authenticates and authorizes requests, and rejects unauthorized access (e.g. from apiserver or heapster).
  • Flannel: without the flanneld network, pods on different Nodes cannot communicate with each other, only pods within the same Node. Flannel reads the network configuration from etcd, allocates a subnet, and registers the subnet information back into etcd.
  • ETCD: a highly available key-value store that Kubernetes uses to persist the state of every resource and expose it through the RESTful API.

  

6.5.1、Create the kubelet configuration files

  This has to be done on every node; c1 is used as the example. Create and save the parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet.config with the following content:

[root@c1 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.101
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.244.0.1"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

address must be set to each node's own IP.
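
As with the other per-node files, only the node IP differs, so a node's kubelet.config can be derived from the master's copy. A minimal sketch, assuming c1's address is 10.0.0.101:

# Derive c1's kubelet.config from the master's copy and ship it over
sed 's/^address: 10.0.0.100/address: 10.0.0.101/' \
  /home/work/_app/k8s/kubernetes/cfg/kubelet.config > /tmp/kubelet.config.c1
scp /tmp/kubelet.config.c1 c1:/home/work/_app/k8s/kubernetes/cfg/kubelet.config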

     Create and save the startup parameter file /home/work/_app/k8s/kubernetes/cfg/kubelet with the following content:

[root@c1 ~]# cat /home/work/_app/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.101 \
--kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/home/work/_app/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/work/_app/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/home/work/_app/k8s/kubernetes/ssl_cert \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

  

6.5.2、Create the kubelet systemd startup file

  Create and save /usr/lib/systemd/system/kubelet.service with the following content:

[root@c1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/home/work/_app/k8s/kubernetes/cfg/kubelet
ExecStart=/home/work/_app/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

  

6.5.3、Start the kubelet service

[root@c1 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

  

6.5.4、Check the kubelet service status

[root@c1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-18 06:27:54 CST; 6s ago
 Main PID: 19123 (kubelet)
    Tasks: 12
   Memory: 18.3M
   CGroup: /system.slice/kubelet.service
           └─19123 /home/work/_app/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.0.0.101 --kubeconfig=/home/work/_app/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-k...

Feb 18 06:27:54 c1 kubelet[19123]: I0218 06:27:54.784286   19123 mount_linux.go:179] Detected OS with systemd
Feb 18 06:27:54 c1 kubelet[19123]: I0218 06:27:54.784416   19123 server.go:407] Version: v1.13.0

  

6.5.5、Approve the Node joining the cluster

  Check the CSR list; the new node's request shows as Pending:

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   84m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   2m45s   kubelet-bootstrap   Pending

     View the request details with the following command; you can see it was sent from c1's IP address, 10.0.0.101:

[root@c0 cfg]# kubectl describe csr node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
Name:               node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Mon, 18 Feb 2019 06:26:08 +0800
Requesting User:    kubelet-bootstrap
Status:             Pending
Subject:
         Common Name:    system:node:10.0.0.101
         Serial Number:
         Organization:   system:nodes
Events:  <none>

     Approve the node to join the cluster:

[root@c0 cfg]# kubectl certificate approve node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA
certificatesigningrequest.certificates.k8s.io/node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA approved

     Check the CSR list again; the node's join request has now been approved:

[root@c0 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-9TZm3EM65hqCTbfYMA46NVtQL4tW06loq01YfKQsm8k   88m     kubelet-bootstrap   Approved,Issued
node-csr-W7DXhrhm4Xhrpr5Qz2Br1dW-3t4VuYORSajbTFrNiqA   6m57s   kubelet-bootstrap   Approved,Issued

  

6.5.6、Remove a Node from the cluster

  Before deleting a node, first drain the pods running on it, then run the following commands to delete the node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

     For the node to be removed cleanly, so that it sends a new CSR to the cluster when it starts again, you also need to delete the cached CSR certificate data on the deleted node:

[root@c1 ~]# ls /home/work/_app/k8s/kubernetes/ssl_cert/
kubelet-client-2019-02-19-23-20-05.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@c1 ~]# rm -rf //home/work/_app/k8s/kubernetes/ssl_cert/*

     After the cached certificate data has been removed, restarting the kubelet will cause a new CSR request to show up on the Master.
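
For example, a minimal sketch run on the node being re-added and then on the Master:

# On the node whose cached certificates were removed (e.g. c1)
systemctl restart kubelet
# Back on the Master, a new Pending request should appear shortly
kubectl get csr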

6.5.7、Label the Nodes

     Check the status of all nodes:

[root@c0 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
10.0.0.100   Ready    <none>   96m   v1.13.0
10.0.0.101   Ready    <none>   24m   v1.13.0

     c0Master 打标签

[root@c0 ~]# kubectl label node 10.0.0.100 node-role.kubernetes.io/master=''master''
node/10.0.0.100 labeled

     c1Node 打标签

[root@c0 ~]# kubectl label node 10.0.0.101 node-role.kubernetes.io/master=''node-c1''
node/10.0.0.101 labeled
[root@c0 ~]# kubectl label node 10.0.0.101 node-role.kubernetes.io/node=''node-c1''
node/10.0.0.101 labeled
[root@c0 ~]# kubectl get node
NAME         STATUS   ROLES         AGE    VERSION
10.0.0.100   Ready    master        106m   v1.13.0
10.0.0.101   Ready    master,node   33m    v1.13.0

     Remove the master label from c1:

[root@c0 ~]# kubectl label node 10.0.0.101 node-role.kubernetes.io/master-
node/10.0.0.101 labeled
[root@c0 cfg]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
10.0.0.100   Ready    master   108m   v1.13.0
10.0.0.101   Ready    node     35m    v1.13.0
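
To double-check how the role labels ended up on each node, the full label set can also be listed as an optional step:

# Show every label attached to the nodes
kubectl get nodes --show-labels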

  

7、References

  • Introduction to SELinux on Linux 7/CentOS 7
  • Kubernetes networking principles and solutions
  • Installing a Kubernetes Cluster on CentOS 7
  • How to install Kubernetes(k8) in RHEL or Centos in just 7 steps
  • docker-kubernetes-tls-guide
  • kubernetes1.13.1+etcd3.3.10+flanneld0.10 cluster deployment

8、Frequently asked questions

How do I generate a new NIC UUID for a virtual machine?

  For example, I installed c1 on Parallels and then cloned it to create c2. The IP can be changed as described earlier in this article; if the UUID also needs to change, the following command can be used to generate a new UUID for the NIC:

[root@c2 ~]# uuidgen eth0
6ea1a665-0126-456c-80c7-1f69f32e83b7

  



