When your application's data grows beyond the capacity of a single Redis instance, or when you need to distribute data across multiple instances to increase throughput, you may want to consider scaling out.
Scaling out in Redis can be achieved through two primary methods: sharding, which partitions the keyspace across multiple instances, and replication, which copies data from a master instance to one or more replicas.
Here is a basic Python example of sharding:
```python
import zlib

import redis

# List of Redis shard servers
servers = [
    redis.StrictRedis(host='localhost', port=6379),
    redis.StrictRedis(host='localhost', port=6380),
]

def get_redis_server(key):
    # Pick a server with simple hash-modulo sharding.
    # zlib.crc32 is used because Python's built-in hash() is randomized
    # per process and would route the same key inconsistently.
    server_index = zlib.crc32(key.encode()) % len(servers)
    return servers[server_index]

key = 'user:1234'
r = get_redis_server(key)
r.set(key, 'value')
```
This code creates connections to two Redis instances (on ports 6379 and 6380). The get_redis_server() function uses simple hash-modulo sharding to pick which server should store a particular key. Note that this is not true consistent hashing: adding or removing a server changes the modulus and remaps most keys.
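If you want the stability that true consistent hashing provides, the sketch below shows one way it could look. It is an illustration only; the HashRing class, the node names, and the choice of CRC32 are assumptions for this example, not part of any Redis client API.

```python
import bisect
import zlib

import redis

class HashRing:
    """Minimal consistent-hash ring (illustrative sketch): each node is
    placed at several points on a 32-bit ring, and a key is served by
    the first node found clockwise from the key's own hash."""

    def __init__(self, nodes, vnodes=100):
        # nodes: dict mapping a node name to its Redis client
        self._nodes = nodes
        self._points = []
        for name in nodes:
            for i in range(vnodes):
                self._points.append((zlib.crc32(f'{name}:{i}'.encode()), name))
        self._points.sort()
        self._hashes = [h for h, _ in self._points]

    def get_client(self, key):
        h = zlib.crc32(key.encode())
        idx = bisect.bisect(self._hashes, h) % len(self._points)
        return self._nodes[self._points[idx][1]]

# Node names and ports are assumptions for this sketch.
nodes = {
    'shard-a': redis.StrictRedis(host='localhost', port=6379),
    'shard-b': redis.StrictRedis(host='localhost', port=6380),
}
ring = HashRing(nodes)
ring.get_client('user:1234').set('user:1234', 'value')
```

Because each node occupies many virtual points on the ring, removing a node only remaps the keys that hashed to that node; the rest stay where they are.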
Here is an example of how you can set up a Redis slave:
First, configure the Redis master instance by editing its configuration file (redis.conf):
```
# The IP address where Redis listens
bind 127.0.0.1
# The TCP port where Redis listens
port 6379
```
Then, configure the Redis slave instance by editing its configuration file:
```
# The IP address and port of the master Redis instance
slaveof 127.0.0.1 6379
```

On Redis 5 and newer, the equivalent directive is `replicaof`.
After this setup, all data written to the master Redis instance will be asynchronously replicated to the slave, which can then serve read queries.
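To take advantage of the replica for reads, an application can send writes to the master and reads to the replica. The sketch below assumes the replica listens on port 6380 and uses the redis-py client; because replication is asynchronous, a read issued immediately after a write may briefly return stale data.

```python
import redis

# The master accepts writes; the replica (configured with "slaveof 127.0.0.1 6379")
# serves reads. Port 6380 for the replica is an assumed value for this sketch.
master = redis.StrictRedis(host='127.0.0.1', port=6379, decode_responses=True)
replica = redis.StrictRedis(host='127.0.0.1', port=6380, decode_responses=True)

master.set('user:1234', 'value')            # writes always go to the master
print(replica.get('user:1234'))             # reads can be served by the replica
print(replica.info('replication')['role'])  # prints 'slave' on the replica
```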
Remember: sharding increases your write throughput and storage capacity by spreading data across nodes, while replication improves read throughput and provides high availability.
The choice between these or a combination depends on the characteristics of your specific workload, your tolerance for complexity, and your needs in terms of data consistency.
Dragonfly is fully compatible with the Redis ecosystem and requires no code changes to adopt.