Dockerfile for Redis in-memory database

The image is available directly from:

  • PROD:
  • DEV:


Create the templates:
$ oc create -f ose-artefacts/redis-cluster-template.json
$ oc create -f ose-artefacts/redis-sentinel-template.json

Start a cluster:

To start a Redis cluster, perform the following steps:
1. Deploy the redis-cluster template: this starts a redis-master.
2. Deploy the redis-sentinel template: you have to provide the redis-master pod IP. This is only required once.
3. Scale your cluster: use the replication controller to scale the cluster. Scaling up the redis-node controller automatically adds Redis slaves to the cluster.
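The three steps above can be sketched as a shell session. The template names, the parameter name REDIS_MASTER_IP, the pod label, and the replica count are assumptions based on the file names in this document, not confirmed values; adjust them to your environment:

```shell
# Sketch of the cluster start-up flow; requires an OpenShift login.
# Template, label, and parameter names below are assumptions.
if command -v oc >/dev/null 2>&1; then
  # 1. Deploy the cluster template (starts the redis-master pod).
  oc new-app --template=redis-cluster

  # 2. Look up the redis-master pod IP and deploy the sentinel template once.
  MASTER_IP=$(oc get pod -l name=redis-master -o jsonpath='{.items[0].status.podIP}')
  oc new-app --template=redis-sentinel -p REDIS_MASTER_IP="$MASTER_IP"

  # 3. Scale the redis-node replication controller; new pods join as slaves.
  oc scale rc redis-node --replicas=3
  DEPLOYED=yes
else
  echo "oc not available; commands shown for reference only"
  DEPLOYED=no
fi
```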

Test application:

Create the test application:
oc new-app
This creates a Ruby test client that connects to the Sentinel service and requests the current master/slave nodes.

Cluster test:

To test the cluster, delete the master pod and verify that the Sentinel pods become aware of the change and elect a new master node.
The election depends on a quorum, which can be configured in the Sentinel config. Currently the quorum is 2, meaning that two Sentinel nodes need to detect the failure of the master node before an election process is triggered.
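A minimal sketch of this failover check, assuming the master pod carries a name=redis-master label (an assumption; the master name mymaster and the Sentinel service variables follow the commands in this document):

```shell
# Failover check: kill the master pod, then ask Sentinel who the master is.
# The pod label is an assumption; adjust it to your deployment.
if command -v oc >/dev/null 2>&1; then
  oc delete pod -l name=redis-master
  # Give the Sentinels time to reach the quorum of 2 and elect a new master.
  sleep 30
  redis-cli -h "${REDIS_SENTINEL_SERVICE_HOST}" -p "${REDIS_SENTINEL_SERVICE_PORT}" \
    SENTINEL get-master-addr-by-name mymaster
  CHECKED=yes
else
  echo "oc not available; commands shown for reference only"
  CHECKED=skipped
fi
```

The address printed at the end should differ from the IP of the deleted pod.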

Redis commands:

Get master:
redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} --csv SENTINEL get-master-addr-by-name mymaster | tr ',' ' '

Only IP:
redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} --csv SENTINEL get-master-addr-by-name mymaster | tr ',' ' ' | cut -d' ' -f1 | xargs redis-cli -h
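The tr/cut parsing used above can be checked without a live cluster by feeding it a sample --csv reply (the address below is made up for illustration):

```shell
# Sample of what `redis-cli --csv SENTINEL get-master-addr-by-name` returns:
REPLY='"10.1.2.3","6379"'

# Same parsing as above, plus tr -d '"' to strip the CSV quoting.
MASTER_IP=$(printf '%s\n' "$REPLY" | tr -d '"' | tr ',' ' ' | cut -d' ' -f1)
echo "$MASTER_IP"   # -> 10.1.2.3
```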

Get slave nodes:
redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} SENTINEL slaves mymaster

Reset Sentinel nodes in order to remove old/down nodes (needs to be sent to every Sentinel node individually):
redis-cli -h <sentinel-pod-ip> -p ${REDIS_SENTINEL_SERVICE_PORT} SENTINEL reset mymaster

If a node fails, Redis keeps the failed node in the list of available nodes and flags it with s_down:
1) 1) "name"
2) ""
3) "ip"
4) ""
5) "port"
6) "6379"
7) "runid"
8) ""
9) "flags"
10) "s_down,slave"

To clean and reset the cluster, run a reset of the master config as described in the section above.