redishappy


One method of providing a highly available Redis service is to deploy using Redis Sentinel.

Redis Sentinel monitors your Redis cluster and, on detecting failure, promotes a slave to become the new master. RedisHappy provides a daemon that monitors for this promotion and tells the outside world that it has happened.

Currently we support HAProxy with redishappy-haproxy and Consul with redishappy-consul.

Features

  • Automatic discovery and healthchecking of Redis Sentinels.
  • Extensible to support various service discovery mechanisms.
  • Developed in Golang, clean deployment with no additional dependencies.
  • Read-only RestAPI.
  • Syslog integration.
  • RPM and Deb packages available.
  • Puppet module available.

Deployment

RedisHappy ships in two forms: redishappy-haproxy and redishappy-consul.

redishappy-haproxy

redishappy-haproxy updates HAProxy's configuration file on Redis master promotion and then reloads the HAProxy configuration file. The reload maintains current connections.


The redishappy daemon is installed on the same machine as HAProxy and runs with the user rights needed to interact with HAProxy. Multiple instances of HAProxy/redishappy-haproxy can be deployed and operate separately.

redishappy-consul

redishappy-consul updates entries in a Consul instance on Redis master promotion.


FAQ

Q. Why? I thought that in the modern age Redis clients should be Sentinel aware and connect to the correct Redis instance on failover.

A. Some do, some don't, and for some it seems to be an eternal 'work in progress'. Rather than fixing every client we needed to work correctly with Sentinel, RedisHappy was built on the fact that all of the clients I have tested are great at connecting to a single address.

Operations teams also need to support legacy applications and libraries; adding redishappy, Sentinels and HAProxy can help provide an HA environment for Redis-backed applications.

Q. Why? This article suggests that HAProxy can healthcheck Redis instances perfectly well by itself.

A. Yes, it can, but not reliably. I'll explain.

Suppose we have this setup: R1 and R2 are Redis instances, S1, S2 and S3 are Sentinel instances, and H1 and H2 are HAProxy instances.

    R1,R2
    S1, S2, S3
    H1, H2
  • Life is good - R1 and R2 are in a master slave configuration, H1 and H2 correctly identify R1 as the master
    R1      R2
    M  ---- S
    ^
    |
    ---------
    |       |
    H1      H2
  • Disaster! R1 dies or is partitioned, but don't fear: R2 is now the "master". Day saved!
    *       R2
            M
            ^
            |
    ---------
    |       |
    H1      H2
  • Disaster! R1 comes back online and announces itself as a "master". Both R1 and R2 now accept writes, as HAProxy's healthcheck identifies both as online.
    R1      R2
    M       M
    ^       ^
    |       |
    ---------
    |       |
    H1      H2
  • R1 is made the "slave" of R2. Everything is ok now, except for the writes that R1 accepted, which are lost forever.
    R1      R2
    S ----- M
            ^
            |
    ---------
    |       |
    H1      H2

When a Redis instance is stopped and restarted, it initially announces itself as a "master". Some time later it will be made a "slave", but in the meantime it accepts writes, which are lost when it is correctly demoted.

RedisHappy attempts to avoid this failure mode by only presenting the correct server to HAProxy or any other service once it is confirmed as a "master". We assume clients will either block or fail until the master is online again.

Building

Using Vagrant

The provided vagrant file creates a virtual machine with all of the dependencies to build redishappy, smoke test it, and build the deb and rpm packages.

vagrant up

The packages are automatically built to $GOPATH/src/github.com/mdevilliers/redishappy/

The vagrant box also installs HAProxy, Docker and https://github.com/mdevilliers/docker-rediscluster for manual testing.

Download and build.

Install Go 1.4+

go get github.com/mdevilliers/redishappy

cd $GOPATH/src/github.com/mdevilliers/redishappy

go get github.com/tools/godep
go get github.com/axw/gocov/gocov
go get github.com/mattn/goveralls
go get golang.org/x/tools/cmd/cover
go get golang.org/x/tools/cmd/vet
go get golang.org/x/tools/cmd/goimports

godep restore

build/ci.sh

Defaults

Installing via the deb and rpm packages sets the following defaults:

  • Installs to /usr/bin/redishappy-haproxy
  • Configuration to /etc/redishappy-haproxy
  • Logs to file in /var/log/redishappy-haproxy
  • Warnings and errors go to syslog

Configuration

Example configurations can be found in the main folders.

Definitions for the elements:

{
  // REQUIRED - needs to contain at least one logical cluster
  "Clusters" :[
  {
    "Name" : "testing", // logical name of Redis cluster
    "ExternalPort" : 6379 // port to expose for the cluster via HAProxy
  }],
  // REQUIRED - needs to contain the details of at least one cluster
  // redishappy will discover additional sentinels as they come online
  "Sentinels" : [
      {"Host" : "172.17.42.1", "Port" : 26377}
  ],
  // OPTIONAL for running redishappy-haproxy
  "HAProxy" :
    {
      // REQUIRED - absolute path to the template file
      "TemplatePath": "/var/redishappy/haproxy_template.cfg",
      // REQUIRED - absolute path to HAProxy's config file
      "OutputPath": "/etc/haproxy/haproxy.cfg",
      // REQUIRED - command to run to reload the config file on changes
      "ReloadCommand": "haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)"
    },
    // OPTIONAL for running redishappy-consul
   "Consul" : {
       // REQUIRED - path to Consul instance
    "Address" : "127.0.0.1:8500",
    // REQUIRED - for each cluster in the main config there should be a defined service
      "Services" : [
        {
            // REQUIRED - should match a name of a Cluster in the main config
            "Cluster" : "testing",
            // REQUIRED - logical name for the node
            "Node" : "redis-1",
            // REQUIRED - logical name for the data centre
            "Datacenter": "dc1",
            // REQUIRED - tags for the service
            "tags" : [ "redis", "master", "anothertag"]
          }
      ]
  }
}

Alternatively, you can configure redishappy with the following environment variables:

Environment Variable               Example            Notes
REDISHAPPY_CLUSTERS                clustername:6379   multiple values can be ;-separated
REDISHAPPY_SENTINELS               ip_name:26377      multiple values can be ;-separated
REDISHAPPY_HAPROXY_TEMPLATE_PATH                      string, see config file for example
REDISHAPPY_HAPROXY_OUTPUT_PATH                        string, see config file for example
REDISHAPPY_HAPROXY_RELOAD_CMD                         string, see config file for example

API

RedisHappy provides a read-only API on port 8000. You can change the port by setting a PORT environment variable; set a BIND environment variable if you wish to bind to other interfaces.

  • GET /api/ping - will reply "pong" if running
  • GET /api/configuration - displays the start up configuration
  • GET /api/sentinels - displays the sentinels being currently monitored and their current states
  • GET /api/topology - displays the current view of the Redis clusters, their master and their host/ip addresses

redishappy-haproxy provides the following additional read-only APIs:

  • GET /api/template - displays the current template file
  • GET /api/haproxy - displays the rendered HAProxy file

Hacking

Running the following script will run gofmt and go vet, run the tests, and build all of the executables.

build/ci.sh

Testing with Docker

https://github.com/mdevilliers/docker-rediscluster will start a master/slave pair of Redis instances with three Sentinels for testing.

Thanks

Big thanks to

Copyright and license

Code and documentation copyright 2015 Mark deVilliers. Code released under the Apache 2.0 license.
