Azure Redis Cache is Microsoft's managed implementation of the popular Redis in-memory key-value store. One crucial part of managing a Redis instance is handling how data is evicted when memory becomes full.
In Azure Redis Cache, this is controlled by the eviction policy, which specifies how the cache should evict data when it runs out of memory.
The following are the available eviction policies:
volatile-lru
: Evict the least recently used keys out of all the keys with an "expire set".

allkeys-lru
: Evict any key using the least recently used (LRU) algorithm.

volatile-random
: Evict a random key among the ones with an "expire set".

allkeys-random
: Evict any key randomly.

volatile-ttl
: Evict the key with an "expire set" whose TTL value is nearest to expiration.

noeviction
: Return an error on write operations when the memory limit has been reached.

To set or modify the eviction policy for your Azure Redis Cache:
Go to the Azure portal, then:

1. Open your Azure Redis Cache instance.
2. Select Advanced settings from the Settings menu.
3. Choose the desired value for the maxmemory-policy setting and save your changes.
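Note that the CONFIG command is generally not available on Azure Redis Cache, so you can't read or change the policy with CONFIG GET/SET from a client. As a rough sketch (assuming redis-py and placeholder hostname and access key values), you can still confirm the active policy and watch eviction counters with the INFO command:

```python
import redis

# Placeholder connection details for an Azure Redis Cache instance
# (the hostname and access key below are assumptions, not real values).
r = redis.StrictRedis(
    host='your-cache-name.redis.cache.windows.net',
    port=6380,
    ssl=True,
    password='your-access-key',
)

# The memory section of INFO reports the active eviction policy and usage.
memory_info = r.info('memory')
print(memory_info['maxmemory_policy'])   # e.g. 'volatile-lru'
print(memory_info['used_memory_human'])  # current memory usage

# The stats section tracks how many keys have been evicted so far.
stats = r.info('stats')
print(stats['evicted_keys'])
```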
Here's a Python example of working with expiring keys using redis-py:
```python
import redis

# Connect to Redis.
r = redis.StrictRedis(host='localhost', port=6379, db=0)

# Set a key with an expire time (in seconds).
r.set('mykey', 'myvalue', ex=3600)

# Try to get the key before it expires.
value = r.get('mykey')
print(value)  # Output: b'myvalue'

# After one hour (when the key has expired), trying to get the key returns None.
value = r.get('mykey')
print(value)  # Output: None
```
In the above example, if you've set an eviction policy like volatile-lru or volatile-random, the keys with an "expire set" ('mykey' in this case) would be considered for removal when necessary.
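As a small follow-up sketch (assuming a local Redis instance and a hypothetical key name), you can check whether a key currently has an "expire set", and therefore whether the volatile-* policies would treat it as an eviction candidate:

```python
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# A key written without an expiry has no "expire set", so the volatile-*
# policies ignore it; only the allkeys-* policies may evict it.
r.set('session:42', 'data')
print(r.ttl('session:42'))   # -1: no expire set

# Adding an expiry makes the key a candidate for volatile-* eviction.
r.expire('session:42', 3600)
print(r.ttl('session:42'))   # roughly 3600: expire set

# Removing the expiry takes it back out of the volatile-* candidate pool.
r.persist('session:42')
print(r.ttl('session:42'))   # -1 again
```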
Remember that choosing the right eviction policy is vital to maintaining the performance and reliability of your Azure Redis Cache.
Dragonfly is fully compatible with the Redis ecosystem and requires no code changes to implement.