For readers who want to understand highly available web load balancing on CentOS 7 with Keepalived + Nginx, this article walks through the keepalived/nginx load-balancing configuration in detail. It also covers: deploying LVS (NAT) + keepalived on CentOS 6.3 for high-performance, highly available load balancing; LVS + keepalived on CentOS 7 for nginx high availability and load balancing; CentOS 7 Keepalived + LVS for high availability; and CentOS 7 + LVS + keepalived for highly available Apache load balancing.
Contents:
- CentOS 7: Keepalived + Nginx for highly available web load balancing (keepalived/nginx load-balancing configuration)
- CentOS 6.3: LVS (NAT) + keepalived for high-performance, highly available load balancing
- CentOS 7: LVS + keepalived for nginx high availability and load balancing
- CentOS 7: Keepalived + LVS for high availability
- CentOS 7 + LVS + keepalived: highly available load balancing for Apache
CentOS 7: Keepalived + Nginx for highly available web load balancing (keepalived/nginx load-balancing configuration)
1. Base environment
OS | nginx version | keepalived version | IP | Role |
---|---|---|---|---|
CentOS Linux release 7.6 | nginx/1.18.0 | keepalived-2.1.5 | 192.168.86.135 | master |
CentOS Linux release 7.6 | nginx/1.18.0 | keepalived-2.1.5 | 192.168.86.136 | slave |
VIP: 192.168.86.137
2. Install the build environment
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel pcre pcre-devel
Mount the installation ISO as a local yum repository:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
mv /etc/yum.repos.d/CentOS-Debuginfo.repo /etc/yum.repos.d/CentOS-Debuginfo.repo.bak
mv /etc/yum.repos.d/CentOS-Vault.repo /etc/yum.repos.d/CentOS-Vault.repo.bak
cp /etc/yum.repos.d/CentOS-Media.repo /etc/yum.repos.d/CentOS-Media.repo.bak
sed -i 's#file:///media/cdrom/##g' /etc/yum.repos.d/CentOS-Media.repo
sed -i 's#file:///media/cdrecorder/##g' /etc/yum.repos.d/CentOS-Media.repo
sed -i 's/^ *enabled *=.*$/enabled=1/g' /etc/yum.repos.d/CentOS-Media.repo
sed -i 's#^ *baseurl *=.*$#baseurl=file:///mnt/centos/#g' /etc/yum.repos.d/CentOS-Media.repo
mount -t iso9660 -o loop /tools/CentOS-7-x86_64-DVD-1908.iso /mnt/centos
3. Install nginx

Build and install nginx:
# enter the tools directory
cd /tools
# upload the source tarball, then unpack it
tar -zxvf nginx-1.18.0.tar.gz
# enter the source directory
cd nginx-1.18.0
# configure (--with-stream enables the TCP proxy module)
./configure --prefix=/usr/local/nginx --with-stream
make && make install
Edit the nginx configuration

Change the nginx welcome page so the two nodes can be told apart during the tests later:

- master
echo 'this is master 135' > /usr/local/nginx/html/index.html
- slave
echo 'this is slave 136' > /usr/local/nginx/html/index.html
Edit nginx.conf:

# vi /usr/local/nginx/conf/nginx.conf

HTTP configuration (test page on port 88):

user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;

    server {
        listen 88;
        server_name localhost;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        location / {
            root html;
            index index.html index.htm;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Stream (TCP) load-balancing configuration:

#user nobody;
worker_processes 2;
error_log logs/error.log;
error_log logs/error.log debug;
error_log logs/error.log info;
#pid logs/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    multi_accept on;
    accept_mutex on;
    worker_connections 65535;
}

stream {
    upstream tcp21001 {
        # backend IP:port; weight = share of traffic,
        # max_fails=2 removes the server after 2 failures,
        # fail_timeout=30s keeps it out of rotation for 30 seconds
        server 192.168.10.130:11001 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        #listen [::1]:7777;   # IPv6
        listen 7777;          # local IPv4 listen port
        proxy_pass tcp21001;
        proxy_connect_timeout 3s;   # timeout when the backend is unreachable
        proxy_timeout 3600s;        # idle (no data) timeout
    }

    upstream tcp21002 {
        server 192.178.17.130:11002 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        #listen [::1]:8888;   # IPv6
        listen 8888;          # local IPv4 listen port
        proxy_pass tcp21002;
        proxy_connect_timeout 3s;   # timeout when the backend is unreachable
        proxy_timeout 3600s;        # idle (no data) timeout
    }
}
Start nginx:

systemctl start nginx

Verify nginx is running:

curl localhost
this is master 135

4. Configure nginx to start on boot (CentOS 7)

Step 1: create nginx.service under /lib/systemd/system.
Step 2: edit it:

vi /lib/systemd/system/nginx.service
[Unit]
Description=nginx
After=network.target
[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx reload
ExecStop=/usr/local/nginx/sbin/nginx quit
PrivateTmp=true
[Install]
WantedBy=multi-user.target

Notes:
[Unit] describes the service: Description is the description, After states what it starts after.
[Service] holds the runtime parameters: Type=forking means the process runs in the background; ExecStart, ExecReload, and ExecStop are the start, reload, and stop commands; PrivateTmp=true gives the service its own temporary directory. All commands under [Service] must use absolute paths.
[Install] holds the install settings for the run level; multi-user.target corresponds to runlevel 3.

# grant the unit file execute permission
chmod +x /usr/lib/systemd/system/nginx.service

Step 3: enable it at boot:
# systemctl enable nginx.service
Disable at boot:
# systemctl disable nginx.service
Start the nginx service:
# systemctl start nginx.service
Stop the nginx service:
# systemctl stop nginx.service
Restart the nginx service:
# systemctl restart nginx.service
List all started services:
# systemctl list-units --type=service
Check the current status of the service:
# systemctl status nginx.service

A warning you may hit:
Warning: nginx.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Run systemctl daemon-reload as suggested.
Generate the systemd service file:
vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx-The High-performance HTTP Server
After=network.target
[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop
PrivateTmp=true
[Install]
WantedBy=multi-user.target
6. Firewall rules
Add a port:
firewall-cmd --zone=public --add-port=88/tcp --permanent
Remove a port:
firewall-cmd --zone=public --remove-port=80/tcp --permanent
Reload the firewall rules:
firewall-cmd --reload
List all open ports:
firewall-cmd --zone=public --list-ports
4. Install keepalived
1) Install the build dependencies
yum -y install libnl libnl-devel gcc openssl-devel libnl3-devel pcre-devel net-snmp-agent-libs libnfnetlink-devel curl wget
2) Build and install keepalived

tar -zxvf keepalived-2.1.5.tar.gz
cd keepalived-2.1.5
./configure --prefix=/usr/local/keepalived-2.1.5
make && make install

3) Create the service files

ln -s /usr/local/keepalived-2.1.5 /usr/local/keepalived
mkdir /etc/keepalived/
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived-2.1.5/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/keepalived-2.1.5/sbin/keepalived /usr/sbin/
# the two files below must be copied from the keepalived source tree; the install prefix does not contain them
cp /tools/keepalived-2.1.5/keepalived/keepalived.service /etc/systemd/system/
cp /tools/keepalived-2.1.5/keepalived/etc/init.d/keepalived /etc/init.d/
4) Create the configuration files

- master

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    # any name that is unique in the network
    router_id xxoo_master
}

# check whether nginx is running
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    # deliberately not MASTER; the master is elected by priority
    state BACKUP
    # NIC name; see the end of the article for how to look it up
    interface enp0s3
    # must be identical across the keepalived cluster
    virtual_router_id 51
    # priority; the master's must be higher than the slave's
    priority 100
    # advertisement interval between master and backup
    advert_int 1
    # if the upstream switch blocks multicast, use unicast VRRP advertisements
    # local IP
    unicast_src_ip 192.168.0.182
    unicast_peer {
        # the other node's IP
        192.168.0.189
    }
    # nopreempt stops a recovered node from grabbing the VIP back
    nopreempt
    # must match on master and backup
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # ties in the nginx health check defined above
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        # virtual IP (the VIP; any unused address on the internal network)
        192.168.0.180
    }
}
EOF
- slave

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    # any name that is unique in the network
    router_id xxoo_slave
}

# check whether nginx is running
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    # deliberately not MASTER; the master is elected by priority
    state BACKUP
    # NIC name; see the end of the article for how to look it up
    interface enp0s3
    # must be identical across the keepalived cluster
    virtual_router_id 51
    # priority; the master's must be higher than the slave's
    priority 90
    # advertisement interval between master and backup
    advert_int 1
    # if the upstream switch blocks multicast, use unicast VRRP advertisements
    # local IP
    unicast_src_ip 192.168.0.189
    unicast_peer {
        # the other node's IP
        192.168.0.182
    }
    # nopreempt stops a recovered node from grabbing the VIP back
    nopreempt
    # must match on master and backup
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # ties in the nginx health check defined above
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        # virtual IP (the VIP; any unused address on the internal network)
        192.168.0.180
    }
}
EOF
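The failover arithmetic behind these two files is worth spelling out: when chk_nginx fails on the master, vrrp_script's weight (-20) is added to the master's base priority, dropping it below the slave's, and a new election moves the VIP. A minimal sketch using the numbers from the configs above (`effective_priority` is a hypothetical helper for illustration, not part of keepalived):

```shell
# Effective VRRP priority = base priority, plus the vrrp_script weight when the check fails.
script_weight=-20

effective_priority() {  # $1 = base priority, $2 = 1 if chk_nginx passed, 0 if it failed
    if [ "$2" -eq 1 ]; then
        echo "$1"
    else
        echo $(( $1 + script_weight ))
    fi
}

effective_priority 100 1   # healthy master: 100 > slave's 90, master keeps the VIP
effective_priority 100 0   # nginx down on master: 100 - 20 = 80 < 90, slave takes over
```

Note this only works because the magnitude of the weight (20) exceeds the priority gap (10); with, say, weight -5 the master would keep the VIP even with nginx dead.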
5) Write the nginx health-check script

Create /etc/keepalived/nginx_check.sh (already referenced in keepalived.conf above). The script's job: if nginx has stopped, try to start it; if it cannot be started, kill the local keepalived process so keepalived binds the virtual IP to the BACKUP machine. Contents:

cat > /etc/keepalived/nginx_check.sh << 'EOF'
#!/bin/bash
A=$(ps -C nginx --no-header | wc -l)
if [ $A -eq 0 ]; then
    # try to start nginx again
    systemctl start nginx    # or: /usr/local/nginx/sbin/nginx
    # give it two seconds
    sleep 2
    if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then
        killall keepalived
    fi
fi
EOF

(The heredoc delimiter is quoted so the $(...) substitutions are written to the file verbatim instead of being expanded by the current shell.)
After saving, make the script executable:
# chmod +x /etc/keepalived/nginx_check.sh
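The script's control flow can be factored into a pure function and sanity-checked without touching nginx or keepalived (a sketch; the echoed strings are labels for the three outcomes, not commands):

```shell
# Decide what nginx_check.sh does, given the nginx process count before and
# after the restart attempt.
decide_action() {  # $1 = count before restart, $2 = count after restart attempt
    if [ "$1" -gt 0 ]; then
        echo "ok"                # nginx is running, nothing to do
    elif [ "$2" -gt 0 ]; then
        echo "restarted"         # restart succeeded, keep the VIP here
    else
        echo "kill-keepalived"   # restart failed, hand the VIP to the backup
    fi
}

decide_action 2 2   # -> ok
decide_action 0 1   # -> restarted
decide_action 0 0   # -> kill-keepalived
```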
Run the script every 5 seconds via cron:
crontab -e
* * * * * /etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 5;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 10;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 15;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 20;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 25;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 30;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 35;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 40;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 45;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 50;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
* * * * * sleep 55;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log
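cron's finest granularity is one minute, so the twelve entries above each fire once per minute with a different sleep offset, which together approximates a 5-second interval. The table can be generated rather than typed by hand:

```shell
# Print the twelve staggered crontab entries (offsets 0, 5, ..., 55 seconds).
for offset in $(seq 0 5 55); do
    if [ "$offset" -eq 0 ]; then
        echo "* * * * * /etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log"
    else
        echo "* * * * * sleep $offset;/etc/keepalived/nginx_check.sh >>/etc/keepalived/check_nginx.log"
    fi
done
```

Piping this loop's output into `crontab -` would install the same table, but review the output first.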
systemctl reload crond
systemctl restart crond
6) Start keepalived

systemctl start keepalived
systemctl enable keepalived
7) Logging

1. Edit /etc/sysconfig/keepalived and change line 14 from KEEPALIVED_OPTIONS="-D" to KEEPALIVED_OPTIONS="-D -d -S 0"
# Meaning:
# --dump-conf    -d   dump the configuration data
# --log-detail   -D   detailed log messages
# --log-facility -S   local syslog facility, 0-7; -S 0 selects the local0 facility
2. Edit the rsyslog configuration (vi /etc/rsyslog.conf) and append the line below so everything from the local0 facility is written to /var/log/keepalived.log:
echo "local0.* /var/log/keepalived.log" >> /etc/rsyslog.conf
Also append ";local0.none" to the selector in the first column of line 42, so local0 messages are no longer written to /var/log/messages.
3. Restart rsyslog:
systemctl restart rsyslog
4. Verify the logging: restart keepalived and watch the log.
tail -f /var/log/keepalived.log
systemctl start keepalived.service

Configure log rotation:

/var/log/keepalived/*.log {
    daily                          # rotate daily
    rotate 7                       # keep 7 rotations
    create 0644 haproxy haproxy    # mode, owner, group of the new file
    compress                       # compress old logs
    delaycompress                  # delay compression by one cycle
    missingok                      # ignore missing files
    dateext                        # add a date suffix to rotated logs
    sharedscripts                  # run the postrotate script once for all matched files
    postrotate                     # reload rsyslog so it writes to the new file
        /bin/kill -HUP $(/bin/cat /var/run/syslogd.pid 2>/dev/null) &>/dev/null
    endscript
}
Preventing split-brain (both nodes holding the VIP at once)

# allow VRRP traffic (fixed multicast address 224.0.0.18) on the keepalived interface (ens192 here)
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface ens192 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 --out-interface ens192 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
firewall-cmd --reload
# inspect the configured rules
firewall-cmd --direct --get-rules ipv4 filter INPUT
firewall-cmd --direct --get-rules ipv4 filter OUTPUT
# remove the rules
firewall-cmd --direct --permanent --remove-rule ipv4 filter INPUT 0 --in-interface ens192 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
firewall-cmd --direct --permanent --remove-rule ipv4 filter OUTPUT 0 --out-interface ens192 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
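A quick way to reason about split-brain: gather, from each node, a 1/0 flag for "this node currently holds the VIP" (e.g. by grepping the VIP out of `ip addr show`), then count the flags; more than one holder means split-brain. A sketch with the counting factored into a testable function (`count_vip_holders` is a hypothetical helper):

```shell
# Count how many nodes report holding the VIP; pass one 1/0 flag per node.
count_vip_holders() {
    local n=0
    for flag in "$@"; do
        [ "$flag" = "1" ] && n=$((n + 1))
    done
    echo "$n"
}

# Healthy cluster: exactly one holder.
count_vip_holders 1 0    # -> 1
# Split-brain: both nodes answer for the VIP.
count_vip_holders 1 1    # -> 2
```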
5. Testing
1) Test on both servers

- master
$ curl localhost
this is master
root@centos7[14:46:07]:~
$ curl 192.168.86.135
this is master
root@centos7[15:03:29]:~

- slave
$ curl localhost
this is slave
root@centos7[15:03:59]:/etc/keepalived
$ curl 192.168.86.136
this is master
2) Stop keepalived on the master to simulate a crash

- on the master, stop keepalived:
$ systemctl stop keepalived
- then test on the slave:
$ curl localhost
this is slave
root@centos7[15:10:29]:/etc/keepalived
$ curl 192.168.86.136
this is slave

The keepalived setup is now complete.
4. Managing the network with NetworkManager/nmcli
nmcli general status                            # overall NetworkManager status
nmcli connection show                           # all connections
nmcli connection show -a                        # active connections
nmcli device status                             # devices NetworkManager recognizes and their state
nmcli device disconnect/connect eno16777736     # stop/start a NIC, equivalent to ifdown/ifup
4) Disabling the NetworkManager service
# stop the service
service NetworkManager stop
# disable it so it does not start on boot
chkconfig NetworkManager off
Troubleshooting:
[keepalived] On mcast_src_ip and unicast_src_ip in the keepalived configuration:
https://blog.csdn.net/mofiu/article/details/76644012
Detailed keepalived configuration parameters
Find the firewalld zone of the interface carrying the VIP and allow the VRRP protocol:
sudo firewall-cmd --zone=public --add-protocol=vrrp --permanent
# reload the configuration
sudo firewall-cmd --reload
CentOS 6.3: LVS (NAT) + keepalived for high-performance, highly available load balancing
1. Overview
VS/NAT schematic: (figure not included in this copy)
2. System environment
Test topology: (figure not included in this copy)
OS: CentOS 6.3
Kernel: 2.6.32-279.el6.i686
LVS version: ipvsadm-1.26
keepalived version: keepalived-1.2.4
3. Installation
0) Before installing LVS, install popt-static, kernel-devel, make, gcc, openssl-devel, lftp, libnl*, and popt*.
1) Configure LVS + Keepalived on both Director Servers
LVS install -------------
[root@CentOS-LVS_MASTER~]# wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
[root@CentOS-LVS_MASTER~]# ln -s /usr/src/kernels/2.6.32-279.el6.i686/ /usr/src/linux/
[root@CentOS-LVS_MASTER~]# tar zxvf ipvsadm-1.26.tar.gz
[root@CentOS-LVS_MASTER~]# cd ipvsadm-1.26
[root@CentOS-LVS_MASTERipvsadm-1.26]# make && make install
Keepalived install-------------
[root@CentOS-LVS_MASTER~]# wget http://www.keepalived.org/software/keepalived-1.2.4.tar.gz
[root@CentOS-LVS_MASTER~]# tar zxvf keepalived-1.2.4.tar.gz
[root@CentOS-LVS_MASTER~]# cd keepalived-1.2.4
[root@CentOS-LVS_MASTERkeepalived-1.2.4]# ./configure && make && make install
######### register keepalived as a system service for easier management #########
[root@CentOS-LVS_MASTER~]# cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
[root@CentOS-LVS_MASTER~]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@CentOS-LVS_MASTER~]# mkdir /etc/keepalived/
[root@CentOS-LVS_MASTER~]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@CentOS-LVS_MASTER~]# cp /usr/local/sbin/keepalived /usr/sbin/
[root@CentOS-LVS_MASTER~]# service keepalived start | stop
2) Enable IP forwarding
[root@CentOS-LVS_MASTER~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@CentOS-LVS_MASTER~]# sysctl -p
3) Configure Keepalived
[root@CentOS-LVS_MASTER~]# less /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_MASTER    # change to LVS_BACKUP on the backup
}

vrrp_instance VI_1 {
    state MASTER            # change to BACKUP on the backup
    interface eth0
    virtual_router_id 51
    priority 100            # change to 80 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.227
    }
}

vrrp_instance LAN_GATEWAY {
    state MASTER            # change to BACKUP on the backup
    interface eth1
    virtual_router_id 52
    priority 100            # change to 80 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

virtual_server 10.0.0.227 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    #persistence_timeout 5
    protocol TCP
    real_server 192.168.10.4 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.10.5 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Configure the BACKUP server the same way: install LVS first, then keepalived, then edit /etc/keepalived/keepalived.conf, changing only the values marked in the comments.
4) Set the gateway on both Real Servers
Point the default gateway of both Real Servers at 192.168.10.10.
5) Configure HTTP on both Real Servers
[root@WEB1~]# yum -y install httpd
[root@WEB1~]# cd /var/www/html/
[root@WEB1html]# cat index.html
<h1>WEB1/192.168.10.4</h1>
[root@WEB1html]# /etc/init.d/httpd start
The second machine is configured the same way; details omitted.
6) Run `service keepalived start` on both CentOS-LVS_MASTER and CentOS-LVS_BACKUP to bring up the load-balancing, high-availability cluster:
[root@CentOS-LVS_MASTER keepalived]# service keepalived start
4. Testing
#### High-availability test ####
Simulate a failure by stopping the keepalived service on CentOS-LVS_MASTER, then watch the log on CentOS-LVS_BACKUP.
The log shows that as soon as the master fails, the backup detects it, switches to the MASTER role, takes over the master's virtual IP, and binds it to the eth0 device.
After the keepalived service is started again on CentOS-LVS_MASTER, the CentOS-LVS_BACKUP log shows that the backup, on detecting the recovered master, releases the virtual IP and returns to the BACKUP role.
#### Failover test ####
Failover testing checks that when a node fails, the Keepalived health-check module notices promptly, removes the failed node, and shifts its work to the healthy nodes.
Stop the service on the WEB2 node to simulate a failure, then look at the master and backup logs.
They show that Keepalived detected the failure of host 192.168.10.5 and removed WEB2 from the cluster; at that point http://10.0.0.227 serves only WEB1.
Restart the service on the WEB2 node; the logs then show Keepalived detecting that 192.168.10.5 has recovered and adding it back to the cluster, after which WEB2 pages are served again.
See also: LVS (DR) + keepalived on RHEL 5.4 for high-performance, highly available load balancing:
http://www.cnblogs.com/mchina/archive/2012/05/23/2514728.html
centos 7: LVS + keepalived for nginx high availability and load balancing
1. Preparation: disable the firewall and SELinux so they do not skew the results; prepare the VMs and set IP addresses and hostnames.
hostname: Nginx01
IP: 192.168.1.87
Role: nginx server
hostname: Nginx02
IP: 192.168.1.88
Role: nginx server
hostname: LVS01
IP: 192.168.1.89
Role: LVS + Keepalived
hostname: LVS02
IP: 192.168.1.90
Role: LVS + Keepalived
VIP: 192.168.1.253
2. Install the software and edit the configuration files (LVS, keepalived, nginx). All software in this document is installed with yum.
1) On both Nginx01 and Nginx02:

yum install -y nginx
# start nginx after installing
systemctl start nginx.service

Change the test page so the nodes can be told apart; edit index.html according to the hostname:

vim /usr/share/nginx/html/index.html

The pages look like this after editing: (screenshot not included in this copy)
With both nginx servers ready, install LVS and keepalived on LVS01 and LVS02:

yum install -y ipvsadm
# dependencies for keepalived
yum install -y gcc openssl openssl-devel
yum install -y keepalived
After installation, empty the keepalived configuration on LVS01:

:> /etc/keepalived/keepalived.conf

Then put the following into /etc/keepalived/keepalived.conf:

! Configuration File for keepalived

global_defs {
    router_id lvs_clu_1
}

vrrp_sync_group Prox {
    group {
        mail
    }
}

vrrp_instance mail {
    state MASTER
    interface eno16777736
    lvs_sync_daemon_interface eno16777736
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.253
    }
}

virtual_server 192.168.1.253 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    nat_mask 255.255.255.0
    real_server 192.168.1.87 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.88 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Empty the keepalived configuration on LVS02 as well:

:> /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id lvs_clu_1
}

vrrp_sync_group Prox {
    group {
        mail
    }
}

vrrp_instance mail {
    state BACKUP
    interface ens33
    lvs_sync_daemon_interface ens33
    virtual_router_id 50
    priority 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.253
    }
}

virtual_server 192.168.1.253 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.1.87 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.88 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
With the configuration in place, start keepalived on both machines.
Note: `interface` must be changed to match the NIC on your own machine.

systemctl start keepalived
3. Test results
1) Normal operation: (screenshot not included in this copy)
2) After Nginx01 goes down (run the command on Nginx01), the result is as shown:
systemctl stop nginx.service
Once Nginx01 is back up, stopping nginx on Nginx02 behaves the same way; screenshots omitted.
3) When keepalived dies on LVS01, keepalived on LVS02 brings the VIP up on its own machine:
systemctl stop keepalived
The result is as expected; once keepalived on LVS01 starts again, it takes the VIP back onto its own NIC.
4. Notes
Keepalived configuration parameters explained:
persistence_timeout 0          # keep a client's requests on the same real server for this many seconds; important for dynamic sites
router_id LVS_DEVEL            # lvs id; should be unique within the network

vrrp_instance VI_1 {           # vrrp instance definition
    state MASTER               # MASTER or BACKUP, must be uppercase
    interface eno16777736      # interface serving external traffic
    lvs_sync_daemon_interface  # sync interface between the load balancers, similar to an HA heartbeat link, but it avoids split-brain by arbitrating on priority; in DR mode it is the same interface as `interface`
    virtual_router_id 60       # virtual router id, a number, unique per vrrp instance
    priority 80                # priority; higher number wins, and within one vrrp_instance the master's must exceed the backup's
    advert_int 1               # interval, in seconds, between master/backup synchronization checks
    authentication {           # authentication type and password
        auth_type PASS         # PASS or AH
        auth_pass 1111         # must be identical on MASTER and BACKUP within one vrrp_instance
    }
    virtual_ipaddress {        # virtual IPs; several may be listed, one per line
        192.168.1.253
    }
}

virtual_server 192.168.1.253 80 {   # virtual server: VIP plus service port
    delay_loop 3               # health-check interval
    lb_algo rr                 # load-balancing scheduling algorithm
    lb_kind DR                 # forwarding method
    persistence_timeout 50     # session persistence time; very useful for dynamic pages
    protocol TCP               # forwarding protocol, TCP or UDP
    real_server 192.168.1.87 80 {   # real server node 1: real IP and port
        weight 1               # weight; higher numbers receive more traffic
        TCP_CHECK {            # real-server health check; values in seconds
            connect_timeout 3      # timeout
            nb_get_retry 3         # retries
            delay_before_retry 3   # delay between retries
            connect_port 80        # port to probe
        }
    }
}
CentOS 7: Keepalived + LVS for high availability
System environment:
OS: CentOS 7.2
Dependencies: net-tools
Network environment:
Keepalived Master: 192.168.5.251
Keepalived Backup: 192.168.5.252
VIP: 192.168.5.100
RIP: 192.168.5.254
Keepalived Master
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_mantaince_down {
script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface eno16777736
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.5.100/24 label eno16777736:0
}
track_script {
chk_mantaince_down
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 192.168.5.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.255.255
persistence_timeout 50
protocol TCP
#
real_server 192.168.5.254 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
}
}
}
Keepalived Backup
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_mantaince_down {
script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface eno16777736
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.5.100/24 label eno16777736:0
}
track_script {
chk_mantaince_down
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 192.168.5.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.255.255
persistence_timeout 50
protocol TCP
#
real_server 192.168.5.254 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
}
}
}
Real server script
#!/bin/bash
#chkconfig: 2345 79 20
#description: LVS realserver
SNS_VIP=192.168.5.100
case "$1" in
start)
/sbin/ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
/sbin/route add -host $SNS_VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p >/dev/null 2>&1
echo "LVS RealServer Start OK"
;;
stop)
ifconfig lo:0 down
route del $SNS_VIP >/dev/null 2>&1
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
echo "LVS RealServer Stoped"
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
Notify script
#!/bin/bash
# Author: MageEdu <linuxedu@foxmail.com>
# description: An example of notify script
#
vip=172.16.100.1
contact='root@localhost'
notify() {
mailsubject="`hostname` to be $1: $vip floating"
mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
echo $mailbody | mailx -s "$mailsubject" $contact
}
case "$1" in
master)
notify master
exit 0
;;
backup)
notify backup
exit 0
;;
fault)
notify fault
exit 0
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
CentOS 7 + LVS + keepalived: highly available load balancing for Apache

Lately I have been practicing Linux service deployment, and earlier articles covered some everyday Linux services. Today we set up CentOS 7 + LVS + keepalived for highly available Apache load balancing. This is a skill every enterprise operations engineer needs, along with familiarity with the relevant load-balancing and high-availability parameters, so that the deployment can be tuned for a particular environment. LVS, nginx, and haproxy can all load-balance applications, but by themselves they remain a single point of failure, which is why we add keepalived; the combination seen most often is lvs + keepalived. The keepalived master and backup communicate over the VRRP protocol using multicast: the master accepts external requests and forwards them to the real servers behind it, while the backup receives VRRP traffic but forwards nothing. When the backup stops hearing from the master, it sends VRRP advertisements and broadcasts ARP messages claiming to be the master; if it then receives advertisements with a higher priority than its own, it drops back to the backup role, and the machine with the higher priority becomes the new master and takes over the old master's work.

Each keepalived machine monitors the real servers behind it, but only the master forwards external requests to them. As for LVS, it hooks its control program, ipvs, into the INPUT chain of the transport-layer data path: ipvs intercepts traffic addressed to the director host on the INPUT chain, analyzes the packets, and forwards the flows to the hosts that actually provide the service (the real servers) according to its scheduling algorithm, balancing the workload across the back end by capacity. More LVS parameters will be covered in the next article.
Environment:
VIP: 192.168.7.50
Hostname: AA-S
IP: 192.168.7.51
Role: Apache
Hostname: BB-S
IP: 192.168.7.52
Role: Apache
Hostname: CC-S
IP: 192.168.7.53
Role: LVS + Keepalived
Hostname: DD-S
IP: 192.168.7.54
Role: LVS + Keepalived
We again use Apache as the web server.
First, prepare the Apache service:
yum install httpd

Check the httpd version. (screenshot not included in this copy)
Next, give Apache a default page so the nodes can be told apart:

vim /var/www/html/index.html

<!DOCTYPE html>
<html>
<head>
<title>Welcome to Apache</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
<style type="text/css">
h1 {color: red}
h2 {color: blue}
h3 {color: green}
h4 {color: yellow}
</style>
</head>
<body bgcolor='#46A3FF'>
<h1>Welcome to AA-S Apache</h1>
<h2>HostName: AA-S</h2>
<h3>IP: 192.168.7.51</h3>
<h4>Service: Apache</h4>
<input type=button value="Refresh" onclick="window.location.href('http://192.168.7.51')">
</body>
</html>
Start the service:

systemctl start httpd

Then open port 80 in the firewall:

firewall-cmd --zone=public --add-port=80/tcp --permanent

or edit /etc/firewalld/zones/public.xml and add an entry of the form:

<port protocol="tcp" port="80"/>

Then test access locally.
Next, deploy the second Apache server exactly the same way:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to Apache</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, sans-serif;
}
</style>
<style type="text/css">
h1 {color: red}
h2 {color: blue}
h3 {color: green}
h4 {color: yellow}
</style>
</head>
<body bgcolor='#FF7F50'>
<h1>Welcome to BB-S Apache</h1>
<h2>HostName: BB-S</h2>
<h3>IP: 192.168.7.52</h3>
<h4>Service: Apache</h4>
<input type=button value="Refresh" onclick="window.location.href('http://192.168.7.52')">
</body>
</html>
Test access. (screenshot not included in this copy)
Next, install LVS. On 192.168.7.53:

yum install ipvsadm

Once that completes, install LVS on the second server as well, 192.168.7.54:

yum install -y ipvsadm

Next install keepalived; the keepalived service runs on the same machines as LVS.
Install on the first server, 192.168.7.53:

yum install keepalived
vim /etc/keepalived/keepalived.conf

The default configuration looks like this:
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}

virtual_server 192.168.200.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.201.100 443 {
        weight 1
        SSL_GET {
            url {
                path /
                digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
                path /mrtg/
                digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.2 1358 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    sorry_server 192.168.200.200 1358
    real_server 192.168.200.2 1358 {
        weight 1
        HTTP_GET {
            url {
                path /testurl/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
                path /testurl2/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
                path /testurl3/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.200.3 1358 {
        weight 1
        HTTP_GET {
            url {
                path /testurl/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334c
            }
            url {
                path /testurl2/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334c
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.10.10.3 1358 {
    delay_loop 3
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.200.4 1358 {
        weight 1
        HTTP_GET {
            url {
                path /testurl/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
                path /testurl2/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
                path /testurl3/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.200.5 1358 {
        weight 1
        HTTP_GET {
            url {
                path /testurl/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
                path /testurl2/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            url {
                path /testurl3/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
To be safe, back up keepalived.conf before editing:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

Then empty the configuration:

echo > /etc/keepalived/keepalived.conf

Paste the following. Note that `interface` must match your system's NIC, otherwise keepalived will not listen on the port; check with `ip a sh` (the default is often eth0, but on my system it is ens160).

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {                 # a vrrp group; the name must be unique
    state MASTER                     # this host is the keepalived master
    interface ens160                 # interface to monitor
    virtual_router_id 58             # virtual router id, must be unique; it determines the multicast MAC address
    priority 150                     # node priority; the master's must be numerically higher than the backup's
    advert_int 1                     # check interval, default 1 second
    authentication {
        auth_type PASS               # password authentication
        auth_pass 1111               # must match the backup
    }
    virtual_ipaddress {              # the virtual IP that will serve external traffic
        192.168.7.50
    }
}

virtual_server 192.168.7.50 80 {     # virtual server; same IP as above
    delay_loop 2                     # server polling interval
    lb_algo rr                       # LVS scheduling algorithm
    lb_kind DR                       # LVS cluster mode
    nat_mask 255.255.255.0
    persistence_timeout 50           # session persistence, 50 s
    protocol TCP                     # health check over tcp or udp
    real_server 192.168.7.51 80 {    # real backend 1
        weight 100                   # weight; 0 means no requests are forwarded to the node until it recovers
        TCP_CHECK {                  # health-check parameters
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.7.52 80 {    # real backend 2
        weight 100                   # weight; 0 means no requests are forwarded to the node until it recovers
        TCP_CHECK {                  # health-check parameters
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Install keepalived on the second server the same way.
Back up keepalived.conf again:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
echo > /etc/keepalived/keepalived.conf

The backup's keepalived configuration differs from the master's in only two places:

state MASTER  ->  state BACKUP   # this host is the keepalived backup, watching the master
priority 150  ->  priority 100   # the backup's priority must be numerically lower than the master's

vim /etc/keepalived/keepalived.conf

Paste the following; as before, `interface` must match your NIC (check with `ip a sh`; mine is ens160):

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {                 # a vrrp group; the name must be unique
    state BACKUP                     # this host is the keepalived backup
    interface ens160                 # interface to monitor
    virtual_router_id 58             # virtual router id, must be unique; it determines the multicast MAC address
    priority 100                     # node priority; lower than the master's
    advert_int 1                     # check interval, default 1 second
    authentication {
        auth_type PASS               # password authentication
        auth_pass 1111               # must match the master
    }
    virtual_ipaddress {              # the virtual IP that will serve external traffic
        192.168.7.50
    }
}

virtual_server 192.168.7.50 80 {     # virtual server; same IP as above
    delay_loop 2                     # server polling interval
    lb_algo rr                       # LVS scheduling algorithm
    lb_kind DR                       # LVS cluster mode
    nat_mask 255.255.255.0
    persistence_timeout 50           # session persistence, 50 s
    protocol TCP                     # health check over tcp or udp
    real_server 192.168.7.51 80 {    # real backend 1
        weight 100                   # weight; 0 means no requests are forwarded to the node until it recovers
        TCP_CHECK {                  # health-check parameters
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.7.52 80 {    # real backend 2
        weight 100                   # weight; 0 means no requests are forwarded to the node until it recovers
        TCP_CHECK {                  # health-check parameters
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Now test the keepalived configuration.
Start keepalived on both servers:

systemctl start keepalived

Check the network listening state on the master host. (screenshot not included in this copy)
Then inspect ipvsadm on the master host.
The output shows that the virtual IP is currently bound to 192.168.7.53 (the master). Stop keepalived on 192.168.7.53 and check the VIP on 192.168.7.54 (the backup): it is now listening on the backup host's network interface, and the original master no longer holds the virtual IP.
Also check the ipvsadm state, first on the master, then on the backup. (output not included in this copy)
The switchover can be seen in the logs; on the second server:

cat /var/log/messages

With this, keepalived high availability is working.
You can also check the keepalived status:

systemctl status keepalived
Load balancing across the real servers

Next, configure the real servers. The two web (httpd) hosts, 192.168.7.51 and 192.168.7.52, must both be configured with the virtual VIP, so run the following script on each of them:

#!/bin/bash
#chkconfig: 2345 85 35
#Description: Start realserver on host boot
VIP=192.168.7.50

function start() {
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer $(uname -n) started"
}

function stop() {
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer $(uname -n) stopped"
}

case $1 in
start)
    start
    ;;
stop)
    stop
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

After saving, make the script executable:

chmod a+x realserver

Copy the script to the second web server and repeat the same steps there:

scp realserver root@192.168.7.52:/DATA

Then run the script on both servers:

./realserver start
Then check ipvsadm again, running it on both servers. (output not included in this copy)
Now try accessing the service through the virtual IP:

192.168.7.50

Looking back at the LVS policy:

vim /etc/keepalived/keepalived.conf   # persistence_timeout 50: session persistence, 50 s

Finally, to start the realserver script with the system, manage it with chkconfig:

cp realserver /etc/init.d/      # copy the script into place
chkconfig --add realserver      # register the realserver script for automatic start
chkconfig --list                # list services started automatically

After that the script can be driven as a service:

/etc/init.d/realserver stop
/etc/init.d/realserver start
chkconfig realserver on         # enable at boot

That concludes our walk-through of CentOS 7 Keepalived + Nginx for highly available web load balancing and the keepalived/nginx load-balancing configuration. Thanks for reading; for more on LVS (NAT) + keepalived on CentOS 6.3, LVS + keepalived for nginx on CentOS 7, CentOS 7 Keepalived + LVS, and CentOS 7 + LVS + keepalived for Apache, search this site.