This article explains Redis cluster configuration in detail, with a focus on the practical setup steps. It also covers the related topics listed below: installing redis:5.0.3 with Docker (standalone and cluster configuration), installing and configuring a Redis cluster on CentOS, setting up a multi-machine, multi-node Redis cluster on CentOS 6.5, and operating a Redis cluster from Java (password configurable) together with a handy utility class.
Contents:
- Redis cluster configuration
- 009-docker-install-redis:5.0.3: standalone and cluster configuration
- Installing and configuring a Redis cluster on CentOS
- Redis cluster configuration on CentOS 6.5 (multiple machines, multiple nodes)
- Operating a Redis cluster from Java (password configurable) with a handy utility class
Redis cluster configuration
Cluster configuration
Preparation
cd /am/usr/redis
mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005
cd /am/usr/redis/redis-3.0.7
cp src/redis-server /am/usr/redis/cluster-test
cp redis.conf /am/usr/redis/cluster-test
cp src/redis-trib.rb /am/usr/redis/cluster-test
cd /am/usr/redis/cluster-test
cp redis.conf /am/usr/redis/cluster-test/redis-7000.conf
//-- repeat the copy for 7001 7002 7003 7004 7005
Enable the cluster configuration
cd /am/usr/redis/cluster-test
vim redis-7000.conf
//-- edit each node's file as follows (values shown for node 7000; change the port and paths for the other nodes, or generate all six files with the loop sketched after these settings):
daemonize yes
port 7000
logfile "/am/usr/redis/cluster-test/7000/redis.log"
dir /am/usr/redis/cluster-test/7000
appendonly yes
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 15000
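//-- The six per-node files only differ in the port-derived values, so they can also be generated from the template in one pass. A rough sketch (paths assume the layout above; it relies on Redis taking the last occurrence of a directive as its value):
#!/bin/sh
# generate redis-7000.conf ... redis-7005.conf from the template redis.conf
cd /am/usr/redis/cluster-test
for port in 7000 7001 7002 7003 7004 7005; do
  cp redis.conf redis-${port}.conf
  cat >> redis-${port}.conf <<EOF
daemonize yes
port ${port}
logfile "/am/usr/redis/cluster-test/${port}/redis.log"
dir /am/usr/redis/cluster-test/${port}
appendonly yes
cluster-enabled yes
cluster-config-file nodes-${port}.conf
cluster-node-timeout 15000
EOF
done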
vim redis-cluster-start
//-- contents:
#!/bin/sh
./redis-server redis-7000.conf
./redis-server redis-7001.conf
./redis-server redis-7002.conf
./redis-server redis-7003.conf
./redis-server redis-7004.conf
./redis-server redis-7005.conf
chmod +x redis-cluster-start
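//-- After running ./redis-cluster-start, it is worth checking that all six nodes are up; a minimal sketch, assuming redis-cli is on the PATH (e.g. copied from the src directory of the build):
for port in 7000 7001 7002 7003 7004 7005; do
  redis-cli -p ${port} ping    # each node should answer PONG
done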
Create the cluster
//-- the cluster creation command depends on Ruby
yum install ruby
yum install rubygems
gem install redis
//-- once Ruby is installed, create the cluster with redis-trib.rb create --replicas X ...
//-- Redis Cluster requires at least 3 master nodes
./redis-trib.rb create --replicas 1 192.168.197.128:7000 192.168.197.128:7001 192.168.197.128:7002 192.168.197.128:7003 192.168.197.128:7004 192.168.197.128:7005
//-- Adding replica 192.168.197.128:7003 to 192.168.197.128:7000
//-- Adding replica 192.168.197.128:7004 to 192.168.197.128:7001
//-- Adding replica 192.168.197.128:7005 to 192.168.197.128:7002
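//-- Once redis-trib.rb reports that all 16384 slots are covered, the cluster state can be confirmed from any node, for example:
redis-cli -p 7000 cluster info     //-- expect cluster_state:ok and cluster_slots_assigned:16384
redis-cli -p 7000 cluster nodes    //-- lists the three masters and their replicas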
Access the cluster
//-- connect in cluster mode with the -c flag
redis-cli -c -p 7000 --raw
//-- the SELECT command cannot be used in cluster mode
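//-- A sketch of what a cluster-mode session might look like (the key is illustrative); with -c the client follows MOVED redirections automatically:
127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 192.168.197.128:7002
OK
192.168.197.128:7002> get foo
"bar"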
009-docker-install-redis:5.0.3: standalone and cluster configuration
I. Basic usage
1. Search for the image
docker search redis
2. Pull the appropriate image
docker pull redis:5.0.3
docker images
3. Run the image
docker run -p 6379:6379 -v $PWD/data:/data -d redis:5.0.3 redis-server --appendonly yes
Command options:
-p 6379:6379 : map container port 6379 to host port 6379 [host port : container port]
-v $PWD/data:/data : mount the data directory under the current host directory to /data inside the container
redis-server --appendonly yes : run redis-server inside the container with AOF persistence enabled
--requirepass "xxx" : set an authentication password (see the combined example below)
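For instance, the options above can be combined into a single start command (the password value is only a placeholder):
docker run -p 6379:6379 -v $PWD/data:/data -d redis:5.0.3 redis-server --appendonly yes --requirepass "yourpassword"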
II. Personal setup (recommended)
2.1 Start the container first (to inspect the defaults)
docker run -p 6379:6379 --name myredis -d redis:5.0.3 redis-server --appendonly yes
Enter the container:
docker exec -it myredis bash
Check the persistence data directory:
/data
2.2 Remove this instance
docker rm -f myredis
2.3 Start a customized container
Configure the shared directory: Preferences → File Sharing, add the corresponding mapping directory, and you are done.
docker run -p 6379:6379 --name myredis -v /Users/lihongxu6/docker/redis/data:/data -d redis:5.0.3 redis-server --appendonly yes
Then simply use it:
docker exec -it myredis redis-cli
redis-cli usage: if omitted, host defaults to the current machine, port to 6379, and no password
redis-cli -h host -p port -a password
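For instance, if a password was set when the container was started, connecting from inside the container might look like this (the password is a placeholder):
docker exec -it myredis redis-cli -a yourpassword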
A macOS GUI client can be downloaded from: https://github.com/onewe/RedisDesktopManager-Mac/releases
For Windows, find and download the corresponding version of the tool.
III. Starting with a configuration file
The commands above start Redis with the default configuration.
1. About the configuration file
Download the redis.conf for the matching version from the Redis website.
Since version 5.0.3 is used above, the corresponding redis.conf is reproduced in full below (a sketch of how to mount it into the container comes first).
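//-- sketch: start the container with a custom redis.conf mounted in; the host conf path is an assumption (add it under File Sharing), the data path reuses the one from section II
docker run -p 6379:6379 --name myredis \
  -v /Users/lihongxu6/docker/redis/conf/redis.conf:/etc/redis/redis.conf \
  -v /Users/lihongxu6/docker/redis/data:/data \
  -d redis:5.0.3 redis-server /etc/redis/redis.conf --appendonly yes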


# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won''t be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you''d better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## MODULES #####################################
# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# +------------------+ +---------------+
# | Master | ---> | Replica |
# | (receive writes) | | (exact copy) |
# +------------------+ +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition replicas automatically try to reconnect to masters
# and resynchronize with them.
#
# replicaof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
# COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60
# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the replica to connect with the master.
#
# Port: The port is communicated by the replica during the replication
# handshake, and is normally the port that the replica is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
################################### CLIENTS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
############################## MEMORY MANAGEMENT ################################
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5
# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes
############################# LAZY FREEING ####################################
# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
# in order to make room for new data, without going over the specified
# memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
# EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
# already exist. For example the RENAME command may delete the old key
# content when it is replaced with another one. Similarly SUNIONSTORE
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
# its master, the content of the whole database is removed in order to
# load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that''s usually the right compromise between
# speed and data safety. It''s up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that''s snapshotting),
# or on the contrary, use "always" that''s very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it''s possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
# [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A replica of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
# in order to try to give an advantage to the replica with the best
# replication offset (more data from the master processed).
# Replicas will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the replica will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10
# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
#
# cluster-replica-no-failover no
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
########################## CLUSTER DOCKER/NAT support ########################
# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don''t start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don''t compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entires limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don''t have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such us huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb
# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes
# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
# +--------+------------+------------+------------+------------+------------+
# | 0 | 104 | 255 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 1 | 18 | 49 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 10 | 10 | 18 | 142 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 100 | 8 | 11 | 49 | 143 | 255 |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less than or equal to 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
# to use the copy of Jemalloc we ship with the source code of Redis.
# This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
# issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# Enabled active defragmentation
# activedefrag yes
# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100
# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5
# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75
# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000
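Most of the directives above can also be inspected and adjusted at runtime without editing the file. A minimal sketch, assuming a running redis:5.0.3 container named myredis (any running Redis container works; CONFIG REWRITE only succeeds if the server was started from a config file):
# read a tunable at runtime
docker exec -it myredis redis-cli CONFIG GET hz
# the defrag section above shows the same pattern: enable active defrag on the fly
docker exec -it myredis redis-cli CONFIG SET activedefrag yes
# persist the running configuration back to redis.conf (needs a config file at startup)
docker exec -it myredis redis-cli CONFIG REWRITE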
Configuration file changes
bind 127.0.0.1    # comment this out; it restricts Redis to local access only
protected-mode no # defaults to yes, which enables protected mode and limits access to localhost
daemonize no      # defaults to no; yes runs Redis as a background daemon until the process is killed (optional), but yes breaks starting Redis from a config file inside the container
dir ./            # directory for the local Redis database files (optional)
appendonly yes    # Redis persistence (optional)
The resulting config file:
# bind 127.0.0.1  (commented out so Redis is not limited to local access)
# defaults to yes; protected mode limits access to localhost
protected-mode no
# defaults to no; yes would daemonize Redis, which breaks config-file startup in the container
# daemonize no
# Redis persistence (optional)
appendonly yes
2. Start the container with the following command
The full version:
docker run -d --privileged=true -p 6379:6379 \
-v /Users/lihongxu6/docker/redis/default/redis.conf:/etc/redis/redis.conf \
-v /Users/lihongxu6/docker/redis/default/data:/data \
--name redisconf redis:5.0.3 redis-server /etc/redis/redis.conf --appendonly yes
Since part of this is already set in the config file, it can be shortened to:
docker run -d -p 6379:6379 \
-v /Users/lihongxu6/docker/redis/default/redis.conf:/etc/redis/redis.conf \
-v /Users/lihongxu6/docker/redis/default/data:/data \
--name redisconf redis:5.0.3 redis-server /etc/redis/redis.conf
-p 6379:6379: map the container's port 6379 to port 6379 on the host [host:container]
-v /Users/lihongxu6/docker/redis/default/redis.conf:/etc/redis/redis.conf: mount the redis.conf prepared on the host into this path inside the container
-v /Users/lihongxu6/docker/redis/default/data:/data: mount the container's persisted data into the host directory, for data backup
redis-server /etc/redis/redis.conf: the key part: start Redis from this redis.conf instead of with no configuration, with the data ending up in the mounted host directory
--appendonly yes: persist data after Redis starts
-d: run the container in the background
--privileged=true: set --privileged to true if the container needs extended administrative rights
Startup logs:
docker logs containerId
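To confirm the container really started from the mounted redis.conf rather than the built-in defaults, a quick check (assuming the container name redisconf from the command above):
# these values should reflect the mounted config file
docker exec -it redisconf redis-cli CONFIG GET appendonly
docker exec -it redisconf redis-cli CONFIG GET protected-mode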
4. Cluster
Notes
Install envsubst:
brew install gettext
brew link --force gettext
How to get a Docker container's IP from the terminal: see 014-docker-终端获取 docker 容器(container)的 ip 地址
4.1 Basics
1. Create the conf template needed to build the cluster,
e.g. redis-cluster.tmpl
2. Base configuration file (must be edited by hand):
env-conf.sh


#!/bin/bash
echo "Reading configuration..."
# internal (LAN) IP of the server
ip="192.170.25.170"
# redis image version
redis_version="5.0.3"
# directory where the cluster files are placed
redis_dir="/Users/lihongxu6/docker/redis/redis-cluster"
# port range; at least 6 ports are required
redis_port_range="7000 7005"
echo "IP:${ip}"
echo "redis_version:${redis_version}"
echo "redis_dir:${redis_dir}"
3. Script overview
initup.sh: initialize the environment (it can rebuild from scratch regardless of the current state, but doing so destroys the data in the cluster)
Just run ./initup.sh; it mainly sets up the network and generates the per-node configuration files (a sketch of that rendering step follows this list)
At the end it drops you into the first container, where you continue with:
./exe.sh: builds the cluster by wiring the nodes to each other
create.sh: create a Redis Docker cluster; "create" here means creating the containers with the existing configuration, so no data is destroyed
destroy.sh: destroy the Redis Docker cluster; "destroy" means removing the containers but keeping the configuration, so the cluster can be recreated later with create.sh
start.sh: start; the cluster can stay stopped most of the time and be started only when needed (or destroyed and recreated, since the data persists)
stop.sh: stop
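For reference, the template rendering that initup.sh performs looks roughly like the sketch below. The variables ip, redis_dir and the 7000-7005 range come from env-conf.sh above; the ${ip} and ${port} placeholders inside redis-cluster.tmpl are an assumption about that repository's template, not a confirmed detail:
#!/bin/bash
. ./env-conf.sh                                   # load ip, redis_version, redis_dir, redis_port_range
for port in $(seq 7000 7005); do
  mkdir -p "${redis_dir}/${port}/conf" "${redis_dir}/${port}/data"
  # envsubst replaces ${ip} and ${port} in the template with the exported values
  export ip port
  envsubst < redis-cluster.tmpl > "${redis_dir}/${port}/conf/redis.conf"
done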
4.2 Quick start
// pull the scripts
git clone https://github.com/bjlhx15/docker.git
cd docker/redis/mac/
// run the initialization script
./initup.sh
// during execution the script drops you into the redis-${port} container; its data directory contains an exe.sh script that builds the cluster
./exe.sh
// you will be prompted with yes or no; answer yes
Once configured, the cluster is ready to use (to enter a container manually: docker exec -it redis-7000 bash)
redis-cli -p 7000 -c
set akey avalue
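With the -c flag redis-cli follows cluster redirections automatically, so a write against a key whose slot lives on another node looks roughly like this (the actual slot number and target node depend on the key):
127.0.0.1:7000> set akey avalue
-> Redirected to slot [<slot>] located at <node-ip>:<port>
OK
127.0.0.1:7000> get akey
"avalue"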
4.3 Setting a cluster password
- If you set a password on the cluster, both requirepass and masterauth must be set; otherwise you will run into authorization errors when a master/slave failover happens.
- The password must be identical on every node, otherwise redirections will fail.
Steps
# set masterauth
config set masterauth <password>
# set requirepass
config set requirepass <password>
# authenticate so the following commands can run
auth LinShen
# write the change back to the config file to make it permanent (a Permission denied error here means the Dockerfile is missing RUN chmod 777 /usr/local/etc/redis/redis.conf)
config rewrite
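To apply the password to every node in one pass, a loop such as the following can be used. It assumes the six containers are named redis-7000 through redis-7005 as in section 4.2 and that each node listens on its matching port; replace <password> with your own value:
for port in $(seq 7000 7005); do
  # set masterauth first so replication keeps working once requirepass takes effect
  docker exec redis-${port} redis-cli -p ${port} config set masterauth "<password>"
  docker exec redis-${port} redis-cli -p ${port} config set requirepass "<password>"
  # from here on every command must authenticate; persist the change to redis.conf
  docker exec redis-${port} redis-cli -p ${port} -a "<password>" config rewrite
done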
Installing and configuring a Redis cluster on CentOS
1. Overview
Redis has supported Cluster since version 3.0.
1.1 Current state of Redis Cluster
Cluster features currently supported by Redis:
1): automatic node discovery
2): slave->master election and cluster fault tolerance
3): hot resharding: online resharding
4): cluster administration via the cluster xxx commands
5): configuration-based cluster management (nodes-port.conf)
6): ASK / MOVED redirection mechanisms.
1.2 Redis Cluster architecture
1) redis-cluster architecture diagram
Architecture details:
(1) All Redis nodes are connected to each other (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth.
(2) A node is only considered failed when more than half of the nodes in the cluster detect the failure.
(3) Clients connect directly to Redis nodes; there is no intermediate proxy layer. A client does not need to connect to every node, any reachable node is enough.
(4) redis-cluster maps all physical nodes onto slots [0-16383]; the cluster maintains the node<->slot<->value mapping.
2) redis-cluster elections and fault tolerance
(1) The leader election involves all masters in the cluster; if more than half of the masters cannot reach a given master for longer than cluster-node-timeout, that master is considered down.
(2) When does the whole cluster become unavailable (cluster_state:fail)? Once it is unavailable, every operation against the cluster fails with (error) CLUSTERDOWN The cluster is down (a quick check follows this list):
a: if any master goes down and it has no slave, the cluster enters the fail state; in other words, the cluster fails whenever the slot mapping [0-16383] is incomplete.
b: if more than half of the masters go down, the cluster enters the fail state whether or not they have slaves.
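The cluster state described above can be checked from any node; for example, against one of the nodes configured later in this article:
redis-cli -h 10.0.80.199 -p 6379 cluster info | grep -E 'cluster_state|cluster_slots_ok'
# cluster_state:ok   -> all 16384 slots are served
# cluster_state:fail -> some slots are uncovered or too many masters are down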
2. Installing Redis Cluster
Six nodes are simulated on three virtual machines (two nodes per machine), giving a 3-master, 3-slave environment.
Redis version redis-3.2.4 is used. The three VMs all run CentOS 6.5 (IPs: 10.0.80.199, 10.0.80.200, 10.0.80.201).
Installation steps
yum install patch gcc-c++ make bzip2 autoconf automake libtool bison iconv-devel readline readline-devel zlib zlib-devel libyaml-devel libffi-devel openssl* openssl-devel curl-devel expat-devel gettext-devel
1. Download and unpack
cd /usr/software
wget http://download.redis.io/releases/redis-3.2.4.tar.gz
tar -zxvf redis-3.2.4.tar.gz
2. Build and install
cd redis-3.2.4
make && make install
3. Create the Redis nodes
First, on the first of the three machines (10.0.80.199), create a redis-cluster directory under /usr/software;
mkdir redis-cluster
Under redis-cluster, create directories named 6379 and 6380, and copy redis.conf into both of them
mkdir 6379 6380
cp redis.conf redis-cluster/6379
cp redis.conf redis-cluster/6380
Edit each configuration file; the main changes are:
port 6379                                                               # 6379 / 6380 respectively
bind 10.0.80.199                                                        # default is 127.0.0.1; change it to an IP the other machines can reach, otherwise the cluster cannot be created
daemonize yes                                                           # run Redis in the background
pidfile /usr/software/redis-cluster/6379/redis-6379.pid                 # pidfile matching 6379 / 6380
cluster-enabled yes                                                     # enable cluster mode (remove the leading #)
cluster-config-file /usr/software/redis-cluster/6379/nodes-6379.conf    # cluster config file, generated automatically on first start (6379 / 6380)
cluster-node-timeout 15000                                              # request timeout, default 15 seconds, adjustable
appendonly yes                                                          # enable the AOF log if needed; it records every write operation
logfile /usr/software/redis-cluster/6379/redis.log                      # log file
Then repeat the three steps above on the other two machines.
4. Start all the nodes
/usr/local/bin/redis-server redis-cluster/6379/redis.conf
... ...
ps -ef | grep redis
5. Create the cluster
Redis ships with the redis-trib.rb tool in the src directory of the unpacked source. The following command creates the cluster:
./redis-trib.rb create --replicas 1 10.0.80.199:6379 10.0.80.200:6379 10.0.80.201:6379 10.0.80.201:6380 10.0.80.199:6380 10.0.80.200:6380
If this fails, note that the tool is written in Ruby, so Ruby must be installed first.
--replicas 1 means one slave is automatically assigned to each master; with the 6 nodes above, the tool will produce 3 masters and 3 slaves according to its own rules.
6. Installing Ruby
The system ships with Ruby 1.8, but version 2.0 or higher is required. Download and install it:
wget https://cache.ruby-lang.org/pub/ruby/2.4/ruby-2.4.1.tar.gz
tar -zxvf ruby-2.4.1.tar.gz
cd ruby-2.4.1
./configure --enable-shared --enable-pthread --prefix=/usr/local/ruby
make && make install
Add the Ruby commands to the system environment:
#export RUBY_HOME=/usr/local/ruby
#export PATH=$PATH:$RUBY_HOME/bin
echo "PATH=$PATH:/usr/local/ruby/bin;export PATH" >> /etc/profile
source /etc/profile
7. Check the ruby and gem versions
ruby -v
ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]
gem -v
8. Install the Redis gem (the Ruby client used by redis-trib.rb)
gem install redis --version 3.3.3
If the installation worked:
find / -name "redis"
/usr/local/ruby/lib/ruby/gems/2.4.0/gems/redis-3.3.3/lib/redis
If that path does not appear, the installation went wrong and redis-trib.rb still will not run.
In that case, first remove the system Ruby:
yum remove ruby
then go back into the Ruby source directory used earlier:
make uninstall
make clean
and redo step 6.
Note: getting this gem installed took a lot of fiddling (never run the install with the system gem before the new Ruby is installed and configured; that installs the gem under the system Ruby 1.8, and gem install redis keeps failing even after the new Ruby is set up). Once the gem installs correctly, re-run step 5 and the cluster deployment succeeds:
./redis-trib.rb create --replicas 1 10.0.80.199:6379 10.0.80.200:6379 10.0.80.201:6379 10.0.80.201:6380 10.0.80.199:6380 10.0.80.200:6380
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.0.80.199:6379
10.0.80.200:6379
10.0.80.201:6379
Adding replica 10.0.80.200:6380 to 10.0.80.199:6379
Adding replica 10.0.80.199:6380 to 10.0.80.200:6379
Adding replica 10.0.80.201:6380 to 10.0.80.201:6379
M: 7c516370de41dfa88a67c65bda150027eacf58a5 10.0.80.199:6379
   slots:0-5460 (5461 slots) master
M: ce517da519e9d48b4b2ccd5d9c59ce9272be889d 10.0.80.200:6379
   slots:5461-10922 (5462 slots) master
M: 69fa96dcc1035d0339cb38042a473819514257e6 10.0.80.201:6379
   slots:10923-16383 (5461 slots) master
S: 9713ecac4a54542e1f7889bb7cca9697b48197ff 10.0.80.201:6380
   replicates 69fa96dcc1035d0339cb38042a473819514257e6
S: 84502278160d571489a838a0318483a898fcbdf6 10.0.80.199:6380
   replicates ce517da519e9d48b4b2ccd5d9c59ce9272be889d
S: 926abd06fd5207a4b6970d5bdd5ab417b4fcbb12 10.0.80.200:6380
   replicates 7c516370de41dfa88a67c65bda150027eacf58a5
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 10.0.80.199:6379)
M: 7c516370de41dfa88a67c65bda150027eacf58a5 10.0.80.199:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 926abd06fd5207a4b6970d5bdd5ab417b4fcbb12 10.0.80.200:6380
   slots: (0 slots) slave
   replicates 7c516370de41dfa88a67c65bda150027eacf58a5
S: 84502278160d571489a838a0318483a898fcbdf6 10.0.80.199:6380
   slots: (0 slots) slave
   replicates ce517da519e9d48b4b2ccd5d9c59ce9272be889d
S: 9713ecac4a54542e1f7889bb7cca9697b48197ff 10.0.80.201:6380
   slots: (0 slots) slave
   replicates 69fa96dcc1035d0339cb38042a473819514257e6
M: ce517da519e9d48b4b2ccd5d9c59ce9272be889d 10.0.80.200:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 69fa96dcc1035d0339cb38042a473819514257e6 10.0.80.201:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
3. Cluster node operations
1. Add a new master node
1) Start the new instance
2) Add the instance as a master
redis-trib.rb add-node 10.0.80.202:6379 10.0.80.200:6379
Notes:
the first ip:port is the new node
the second ip:port is any node already in the cluster
The new node contains no data and no slots.
If the cluster needs to promote a slave to a new master, this node will not be chosen; since it owns no slots it does not take part in elections or failover.
3) Reshard for the newly added master, i.e. move some slots over from the other masters
redis-trib.rb reshard 10.0.80.202:6379
# choose how many slots to migrate when prompted (here: 4000)
How many slots do you want to move (from 1 to 16384)? 4000
# choose the node-id that will receive these slots
What is the receiving node ID? 035e1b5cc4ddd115546f024305c282b79020f1e3
# choose where the slots come from:
# "all" takes slots from every master,
# or list the node ids of the masters to take slots from and finish with "done"
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:all
# after the slots to be moved are printed, type yes to start moving the slots and their data
#Do you want to proceed with the proposed reshard plan (yes/no)? yes
# finished
// alternatively, non-interactively: move 500 slots from the master on 201 to the master on 202
redis-trib.rb reshard --from 0a8166334d595340ee86909a570944f7c14e5f73 --to 035e1b5cc4ddd115546f024305c282b79020f1e3 --slots 500 --yes --timeout 5000 10.0.80.201:6379
redis-trib.rb check 10.0.80.201:6380
>>> Performing Cluster Check (using node 10.0.80.201:6380)
S: 5654178703c5ffe4ec91c63d2837908a12f13a9c 10.0.80.201:6380
slots: (0 slots) slave
replicates 0a8166334d595340ee86909a570944f7c14e5f73
S: 79f20aeb23d61d0b580df3aca104260c5457aa6c 10.0.80.200:6380
slots: (0 slots) slave
replicates 7f5dda96b4c63f170c59b6cb767792ae0a4ffebb
M: 035e1b5cc4ddd115546f024305c282b79020f1e3 10.0.80.202:6379
slots:0-1332,5461-6794,10923-12255 (4000 slots) master
0 additional replica(s)
M: 0a8166334d595340ee86909a570944f7c14e5f73 10.0.80.201:6379
slots:12256-16383 (4128 slots) master
1 additional replica(s)
S: ec818b8579d630abbaf02f4c9157728f3b0237b6 10.0.80.199:6380
slots: (0 slots) slave
replicates 6efd3629d4b94ed6d91986cd3709cf0aa5a49892
M: 6efd3629d4b94ed6d91986cd3709cf0aa5a49892 10.0.80.200:6379
slots:6795-10922 (4128 slots) master
1 additional replica(s)
M: 7f5dda96b4c63f170c59b6cb767792ae0a4ffebb 10.0.80.199:6379
slots:1333-5460 (4128 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
2. Add a new slave node
1) Start the new instance
2) Add the instance as a slave
Method 1: redis-trib.rb add-node --slave 10.0.80.200:6380 10.0.80.200:6379
Notes:
the first ip:port is the new node
the second ip:port is any node already in the cluster
The new node becomes a slave of one of the cluster's masters, normally the master with the fewest slaves.
Method 2: redis-trib.rb add-node --slave --master-id 035e1b5cc4ddd115546f024305c282b79020f1e3 10.0.80.200:6380 10.0.80.200:6379
Notes:
--master-id xxxx is the ID of the master the new slave should replicate
the first ip:port is the new node
the second ip:port is any node already in the cluster
Note: adding a slave online forces a bgsave of the full master dataset, which is then transferred to the slave and loaded from the RDB file into memory. Generating and transferring the RDB costs the master a lot of memory and network IO, so avoid very large single instances and be careful with this in production.
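While the new slave is syncing, the RDB generation and transfer on the master can be watched to gauge this impact (the field names come from INFO; the host and port here are just the example master used above):
redis-cli -h 10.0.80.200 -p 6379 info persistence | grep -E 'rdb_bgsave_in_progress|rdb_last_bgsave_status'
redis-cli -h 10.0.80.200 -p 6379 info replication | grep -E 'connected_slaves|state='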
3. Remove a node
redis-trib.rb del-node ip:port '<node-id>'
Notes:
ip:port: any node already in the cluster (not the node being removed)
node-id: the ID of the node being removed (see below for how to list node ids)
Before deleting a master you must first reshard all of its slots away, and only then delete the node. (At present redis-trib.rb can only migrate the removed master's slot data onto a single node.)
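The <node-id> values used throughout these commands are the 40-character ids assigned by the cluster; they can be listed from any node:
redis-cli -h 10.0.80.199 -p 6379 cluster nodes
# each line starts with the node id, followed by ip:port and the node's role (master/slave)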
1) Migrate the slots
# move the slots of master 10.0.80.202:6379 over to 10.0.80.199:6379
redis-trib.rb reshard 10.0.80.199:6379
# choose how many slots to migrate when prompted (here 4000)
How many slots do you want to move (from 1 to 16384)? 4000 (the total number of slots owned by the master being removed)
# choose the receiving node-id (10.0.80.199:6379)
What is the receiving node ID? 7f5dda96b4c63f170c59b6cb767792ae0a4ffebb (the node-id of 10.0.80.199:6379)
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:035e1b5cc4ddd115546f024305c282b79020f1e3 (the node-id of the master being removed)
Source node #2:done
# after the slots to be moved are printed, type yes to start moving the slots and their data
#Do you want to proceed with the proposed reshard plan (yes/no)? yes
2) Delete the now-empty master node
redis-trib.rb del-node 10.0.80.201:6379 '035e1b5cc4ddd115546f024305c282b79020f1e3'
Notes:
ip:port: any node already in the cluster (not the node being removed)
node-id: the ID of the node being removed
If load or data is distributed unevenly, slots can be resharded online; the procedure is the same as the reshard done for a new master, except that the receiving master is an existing node.
redis-trib.rb reshard 10.0.80.199:6379
redis-cli -p 6380
10.0.80.201:6380> cluster replicate 7f5dda96b4c63f170c59b6cb767792ae0a4ffebb
Note: 7f5dda96b4c63f170c59b6cb767792ae0a4ffebb is the ID of the new master.
1) Manually fail over a master
redis-cli -c -p 6380
10.0.80.200:6380> cluster failover
This command must be run on a slave node; it makes that slave fail over its master.
2)del-node
redis-trib.rb del-node 10.0.80.201:6379 '0a8166334d595340ee86909a570944f7c14e5f73'
8. Check node status
redis-trib.rb check 10.0.80.201:6380
Redis cluster configuration on CentOS 6.5 (multiple machines, multiple nodes)
See also the official documentation on Redis cluster configuration.
Note that every node in the cluster involves two ports: one for client operations (the 6379/6380 ports introduced below) and one at 10000 + {listening port}, used for communication between the cluster nodes.
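If a firewall is active on these machines, both ports of every node must be reachable from the other servers. An illustrative iptables snippet for the 6379/6380 nodes used below (adapt to your own firewall policy):
iptables -A INPUT -p tcp --dport 6379 -j ACCEPT
iptables -A INPUT -p tcp --dport 6380 -j ACCEPT
# cluster bus ports: 10000 + listening port
iptables -A INPUT -p tcp --dport 16379 -j ACCEPT
iptables -A INPUT -p tcp --dport 16380 -j ACCEPT
service iptables save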
Step 1: prepare the environment
Three servers (or three virtual machines) are used here, with internal IPs 192.168.103.54, 192.168.103.56 and 192.168.103.57; 192.168.103.54 is designated the cluster control host.
Step 2: install Redis
See the separate Redis installation and configuration guide.
Step 3: modify the configuration and create the nodes (two Redis nodes per machine here)
The Redis install directory here is /usr/data/redis-4.0.8.
1. Go to the Redis install directory
cd /usr/data/redis-4.0.8
2. Create a new directory named cluster (name and location are arbitrary; here it sits under the install directory)
mkdir cluster
3. Under the cluster directory, create two directories, 6379 and 6380, to hold the Redis configuration files
Note: these directory names must match the ports of the corresponding Redis nodes
cd cluster
mkdir 6379 6380
4. Add a configuration file for each node; here the redis.conf from the install directory is copied and then modified
cp redis.conf cluster/6379
cp redis.conf cluster/6380
5. Edit the redis.conf files under 6379 and 6380; the main changes are:
# bind to the LAN IP so the three servers can reach each other
bind 192.168.103.54
protected-mode yes
# port this Redis node listens on
port 6380
# run as a daemon
daemonize yes
pidfile /var/run/redis_6380.pid
dbfilename dump_6380.rdb
# cluster settings below
cluster-enabled yes                  # enable cluster mode
cluster-config-file nodes-6380.conf  # cluster config file, updated by Redis automatically, no manual editing needed
cluster-node-timeout 5000            # node timeout: if a master is unreachable for longer than this, a slave may be promoted
cluster-slave-validity-factor 10     # during failover every slave asks to become master, but a slave that has been
                                     # disconnected from its master for too long holds data that is too stale and should
                                     # not be promoted; the cutoff is cluster-node-timeout * cluster-slave-validity-factor
                                     # (here 5000 * 10 milliseconds)
# cluster-migration-barrier 1
cluster-require-full-coverage yes    # the cluster only serves requests when all 16384 slots are covered
Step 4: start the 6 nodes on the three machines
cd /usr/data/redis-4.0.8/src
./redis-server ../cluster/6379/redis.conf
./redis-server ../cluster/6380/redis.conf
Step 5: create the cluster
Since 192.168.103.54 is the cluster control host, all of the following is done on that machine.
gem install redis
./redis-trib.rb create --replicas 1 192.168.103.54:6379 192.168.103.54:6380 192.168.103.56:6379 192.168.103.56:6380 192.168.103.57:6379 192.168.103.57:6380
Note: before running gem install redis you need a Ruby environment; install Ruby via yum as follows:
yum -y install ruby rubygems
Running gem afterwards still fails with: "redis requires Ruby version >= 2.2.2"
The fix is to install a newer Ruby; here this is done by switching the Ruby yum source and reinstalling:
yum install centos-release-scl-rh   // adds a CentOS-SCLo-scl-rh.repo source under /etc/yum.repos.d/
yum install rh-ruby23 -y            // install straight from yum
scl enable rh-ruby23 bash           // a required step
ruby -v                             // check the installed version
Running gem install redis again now succeeds.
Step 6: check the cluster status
Connect to any node in the cluster; here the node 192.168.103.54:6379 is used.
Step 7: summary
A Redis cluster balances load through hash slots (16384 in total). It does not use consistent hashing, which allows nodes to be added or removed dynamically: the slot is computed from the key, and the value is stored on the machine that owns that slot.
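The slot for a given key is CRC16(key) mod 16384, and the cluster can report it directly, which is handy when reasoning about where data lands:
redis-cli -h 192.168.103.54 -p 6379 cluster keyslot user:1000
# (integer) <slot>  -- the master that owns this slot stores the key
redis-cli -h 192.168.103.54 -p 6379 cluster keyslot {user:1000}.followers
# keys that share the same {hash tag} map to the same slot, enabling multi-key operations on them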
Extensions:
Adding a new master
Adding a new node is basically the process of adding an empty node and then moving some data into it, in case it is a new master, or telling it to setup as a replica of a known node, in case it is a slave.
./redis-trib.rb add-node 192.168.103.58:6379 192.168.103.54:6379
After adding a new master you must reshard, otherwise the new master owns no slots and cannot serve requests.
./redis-trib.rb reshard 192.168.103.54:6379
Adding a new slave
./redis-trib.rb add-node --slave 192.168.103.58:6380 192.168.103.54:6379
Removing a node from the cluster
./redis-trib.rb del-node 192.168.103.54:6379 `<node-id>`
Operating a Redis cluster from Java (with optional password) and a utility class
Reposted from: java操作redis集群配置[可配置密码]和工具类
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.4.2</version>
</dependency>
Note:
Versions: jedis 2.9.0 (this version accepts a password in the JedisCluster constructor) + commons-pool2 2.4.2
Configuration:
<context:property-placeholder ignore-unresolvable="true" location="classpath*:cache.properties"/>
<!-- connection pool configuration -->
<bean id="jedisConfig" class="redis.clients.jedis.JedisPoolConfig">
    <!-- maximum number of connections -->
    <property name="maxTotal" value="150" />
    <!-- maximum number of idle connections -->
    <property name="maxIdle" value="50" />
    <!-- minimum number of idle connections -->
    <property name="minIdle" value="10" />
    <!-- maximum wait in milliseconds when borrowing a connection; less than zero blocks indefinitely, default -1 -->
    <property name="maxWaitMillis" value="3000" />
    <!-- maximum number of connections examined per eviction run -->
    <property name="numTestsPerEvictionRun" value="100" />
    <!-- interval between eviction scans (milliseconds) -->
    <property name="timeBetweenEvictionRunsMillis" value="3000" />
    <!-- minimum idle time before a connection becomes evictable -->
    <property name="minEvictableIdleTimeMillis" value="1800000" />
    <!-- release a connection once its idle time exceeds this value and idle connections exceed maxIdle -->
    <property name="softMinEvictableIdleTimeMillis" value="10000" />
    <!-- validate connections on borrow, default false -->
    <property name="testOnBorrow" value="true" />
    <!-- validate idle connections, default false -->
    <property name="testWhileIdle" value="true" />
    <!-- validate connections before returning them to the pool -->
    <property name="testOnReturn" value="true" />
    <!-- whether to block when the pool is exhausted; false throws an exception, true blocks until timeout, default true -->
    <property name="blockWhenExhausted" value="false" />
</bean>
<!-- Jedis cluster configuration -->
<bean id="hostport1" class="redis.clients.jedis.HostAndPort">
    <constructor-arg name="host" value="${redis.host}"/>
    <constructor-arg name="port" value="${redis.port1}"/>
</bean>
<bean id="hostport2" class="redis.clients.jedis.HostAndPort">
    <constructor-arg name="host" value="${redis.host}"/>
    <constructor-arg name="port" value="${redis.port2}"/>
</bean>
<bean id="hostport3" class="redis.clients.jedis.HostAndPort">
    <constructor-arg name="host" value="${redis.host}"/>
    <constructor-arg name="port" value="${redis.port3}"/>
</bean>
<bean id="hostport4" class="redis.clients.jedis.HostAndPort">
    <constructor-arg name="host" value="${redis.host}"/>
    <constructor-arg name="port" value="${redis.port4}"/>
</bean>
<bean id="hostport5" class="redis.clients.jedis.HostAndPort">
    <constructor-arg name="host" value="${redis.host}"/>
    <constructor-arg name="port" value="${redis.port5}"/>
</bean>
<bean id="hostport6" class="redis.clients.jedis.HostAndPort">
    <constructor-arg name="host" value="${redis.host}"/>
    <constructor-arg name="port" value="${redis.port6}"/>
</bean>

<bean id="jedisCluster" class="redis.clients.jedis.JedisCluster">
    <constructor-arg name="jedisClusterNode">
        <set>
            <ref bean="hostport1"/>
            <ref bean="hostport2"/>
            <ref bean="hostport3"/>
            <ref bean="hostport4"/>
            <ref bean="hostport5"/>
            <ref bean="hostport6"/>
        </set>
    </constructor-arg>
    <constructor-arg name="connectionTimeout" value="2000"/>
    <constructor-arg name="soTimeout" value="2000"/>
    <constructor-arg name="maxAttempts" value="3"/>
    <constructor-arg name="password" value="${redis.clusterpassword}"/>
    <constructor-arg name="poolConfig">
        <ref bean="jedisConfig"/>
    </constructor-arg>
</bean>

<bean id="jedisClientCluster" class="xx.xxx.xxxxx.xxxx.xxxx.JedisClientCluster"></bean>
Create cache.properties:
redis.host =192.168.xx.xxx
redis.port1=7001
redis.port2=7002
redis.port3=7003
redis.port4=7004
redis.port5=7005
redis.port6=7006
redis.clusterpassword=xxxxxxx
Create the JedisClient interface:
import org.codehaus.jackson.type.TypeReference;

import java.util.List;

/**
 * Created by gzy on 2017/11/17 17:16.
 */
public interface JedisClient {

    String get(String key);

    <T> T get(String key, TypeReference<T> clazz);

    <T> T get(String key, Class<T> clazz);

    String get(String key, int select);

    void setAndExpire(String key, Object o, int expire);

    Long rpush(String key, String string);

    // Long del(String... keys);

    Long lpush(String key, String string);

    void set(String key, Object o);

    String set(String key, String value);

    String hget(String hkey, String key);

    long hset(String hkey, String key, String value);

    long incr(String key);

    long expire(String key, int second);

    long ttl(String key);

    long del(String key);

    long hdel(String hkey, String key);

    Boolean exists(String key);

    Long decr(String key);

    List<String> brpop(int timeout, String key);
}
The JedisClient implementation class JedisClientCluster:
/**
 * Created by gzy on 2017/11/17 17:17.
 */
import org.codehaus.jackson.type.TypeReference;
import com.thinkgem.jeesite.common.utils.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;

import redis.clients.jedis.JedisCluster;

import java.util.List;

public class JedisClientCluster implements JedisClient {

    @Autowired
    private JedisCluster jedisCluster;

    public <T> T get(String key, TypeReference<T> clazz) {
        String json = jedisCluster.get(key);
        if (StringUtils.isNotEmpty(json)) {
            return JsonUtil.Json2Object(json, clazz);
        } else {
            return null;
        }
    }

    public <T> T get(String key, Class<T> clazz) {
        String json = jedisCluster.get(key);
        if (StringUtils.isNotEmpty(json)) {
            return JsonUtil.Json2Object(json, clazz);
        } else {
            return null;
        }
    }

    public void set(String key, Object o) {
        String json = JsonUtil.Object2Json(o);
        jedisCluster.set(key, json);
    }

    public void setAndExpire(String key, Object o, int expire) {
        String json = JsonUtil.Object2Json(o);
        jedisCluster.set(key, json);
        jedisCluster.expire(key, expire);
    }

    // public long del(String key) {
    //     return jedisCluster.del(key);
    // }

    public String get(String key) {
        return jedisCluster.get(key);
    }

    public String get(String key, int select) {
        jedisCluster.select(select);
        return jedisCluster.get(key);
    }

    @Override
    public String set(String key, String value) {
        return jedisCluster.set(key, value);
    }

    @Override
    public String hget(String hkey, String key) {
        return jedisCluster.hget(hkey, key);
    }

    @Override
    public long hset(String hkey, String key, String value) {
        return jedisCluster.hset(hkey, key, value);
    }

    @Override
    public long incr(String key) {
        return jedisCluster.incr(key);
    }

    public Long decr(String key) {
        return jedisCluster.decr(key);
    }

    @Override
    public long expire(String key, int second) {
        return jedisCluster.expire(key, second);
    }

    @Override
    public long ttl(String key) {
        return jedisCluster.ttl(key);
    }

    @Override
    public long del(String key) {
        return jedisCluster.del(key);
    }

    @Override
    public long hdel(String hkey, String key) {
        return jedisCluster.hdel(hkey, key);
    }

    public Long rpush(String key, String string) {
        return jedisCluster.rpush(key, string);
    }

    public Long lpush(String key, String string) {
        return jedisCluster.lpush(key, string);
    }

    public Boolean exists(String key) {
        return jedisCluster.exists(key);
    }

    public List<String> brpop(int timeout, String key) {
        return jedisCluster.brpop(timeout, key);
    }
}
Create the test class ClusterTest:
import xx.xx.xxx.xxxx.JedisClient;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

/**
 * Created by gzy on 2017/11/20 15:22.
 */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:spring-context-cache.xml")
public class ClusterTest {

    // private static JedisClientCluster redisCluster = SpringContextHolder.getBean("jedisClientCluster");
    @Autowired
    private JedisClient jedisClient;

    @Test
    public void testJCluster() {
        jedisClient.set("test:phone:" + "11111111", "hhha");
        String result = jedisClient.get("test:phone:" + "11111111");
        System.out.println("result===" + result);
    }
}
Original article:
This concludes the introduction to Redis cluster configuration. Thank you for your patience. For more on Docker installation of redis:5.0.3 (standalone and cluster configuration), installing a Redis cluster on CentOS, Redis cluster configuration on CentOS 6.5 (multiple machines, multiple nodes), and operating a Redis cluster from Java (with optional password) and a utility class, please search this site.