The eviction policy of Amazon ElastiCache for Memcached determines how the system behaves when memory is full and a new item needs to be stored. When there is not enough memory for a new item, Memcached discards, or "evicts", older items to make space according to the eviction policy.
Amazon ElastiCache for Memcached primarily uses the LRU (Least Recently Used) eviction policy. When cache memory is exhausted, LRU evicts the least recently used items first.
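If you want to observe this behavior on a running node, Memcached exposes eviction-related counters through its text protocol. Here is a rough sketch using netcat from a client inside the cluster's VPC; the endpoint is a placeholder, and the stat names shown are standard Memcached stats:

# Sketch: query eviction-related counters over the Memcached text protocol.
# Run from a client (e.g. an EC2 instance) that can reach the cluster, and
# replace <your-node-endpoint> with your node's endpoint.
printf 'stats\r\nquit\r\n' | nc <your-node-endpoint> 11211 \
    | grep -E 'evictions|curr_items|limit_maxbytes'

A rising evictions counter while curr_items stays near its ceiling is the usual sign that LRU is actively reclaiming space.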
However, you cannot directly configure the eviction policy; it is inherently tied to how Memcached works. You can, however, control memory allocation and usage, which indirectly influences eviction.
To control memory usage, you use Memcached parameters such as maxbytes. This parameter sets the maximum amount of memory Memcached can use for storing items. When this limit is reached, evictions start happening according to the LRU policy.
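If you want to manage these settings through a custom parameter group, a rough sketch with the AWS CLI looks like the following. The group name is illustrative, and which memory-related parameters are actually exposed or modifiable depends on the parameter group family, so listing them first is the safer move:

# Sketch: create a custom parameter group (name is illustrative) and list its
# parameters to see which memory-related settings your family exposes.
aws elasticache create-cache-parameter-group \
    --cache-parameter-group-name my-memcached-params \
    --cache-parameter-group-family memcached1.6 \
    --description "Custom Memcached parameters"

aws elasticache describe-cache-parameters \
    --cache-parameter-group-name my-memcached-params \
    --query "Parameters[?contains(ParameterName, 'max')].[ParameterName,ParameterValue,IsModifiable]" \
    --output table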
You can set these parameters when creating your cluster through the AWS Management Console, the AWS CLI, or the ElastiCache API. Here is an example using the AWS CLI:
aws elasticache create-cache-cluster \
    --cache-cluster-id my-memcached-cluster \
    --engine memcached \
    --cache-node-type cache.r6g.large \
    --num-cache-nodes 1 \
    --region us-west-2 \
    --parameter-group-name default.memcached1.6 \
    --security-group-ids sg-0abcd1234efgh5678 \
    --tags Key=Name,Value=my-memcached-cluster
In this command, --parameter-group-name is where you would specify a custom parameter group if you had one with a specific maxbytes value.
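After the cluster is created, you can confirm which parameter group it is actually using. A minimal sketch, reusing the cluster ID from the example above:

# Sketch: check the parameter group attached to the example cluster and
# whether its parameters have finished applying.
aws elasticache describe-cache-clusters \
    --cache-cluster-id my-memcached-cluster \
    --query "CacheClusters[0].CacheParameterGroup" \
    --output json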
Remember to monitor metrics such as evictions and curr_items to ensure your cache is performing optimally and is not evicting more often than expected, which could indicate that you need to add more nodes or increase your node size.
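ElastiCache publishes these counters to CloudWatch as the Evictions and CurrItems metrics, so you can also pull them with the AWS CLI. A rough sketch for the eviction count of one node, where the cluster ID, node ID, and time window are placeholders and the date commands assume GNU date:

# Sketch: sum of evictions over the last hour for node 0001 of the example
# cluster; adjust the dimensions and time range for your environment.
aws cloudwatch get-metric-statistics \
    --namespace AWS/ElastiCache \
    --metric-name Evictions \
    --dimensions Name=CacheClusterId,Value=my-memcached-cluster Name=CacheNodeId,Value=0001 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Sum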