
Question: Is ElastiCache serverless?

Answer

Amazon ElastiCache is a fully managed in-memory data store service that improves application performance by serving data from fast, scalable in-memory caches instead of slower disk-based databases.
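The usual pattern this enables is cache-aside: check the cache first and fall back to the primary database on a miss. Below is a minimal sketch using redis-cli; the endpoint, key, and `fetch_user_from_database` fallback are hypothetical placeholders, not part of any real deployment:

```shell
#!/bin/sh
# Cache-aside lookup (sketch): endpoint, key, and fallback command are hypothetical.
CACHE_ENDPOINT="myrediscluster.xxxxxx.clustercfg.use1.cache.amazonaws.com"
KEY="user:42"

# 1. Try the fast in-memory cache first.
VALUE=$(redis-cli -h "$CACHE_ENDPOINT" GET "$KEY")

if [ -z "$VALUE" ]; then
  # 2. Cache miss: fetch from the slower disk-based database (placeholder).
  VALUE=$(fetch_user_from_database 42)
  # 3. Populate the cache with a 5-minute TTL so subsequent reads are fast.
  redis-cli -h "$CACHE_ENDPOINT" SET "$KEY" "$VALUE" EX 300
fi
echo "$VALUE"
```

On a hit, the database is never touched; on a miss, the value is written back so the next read is served from memory.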

Amazon ElastiCache has historically not been a serverless offering by default, although AWS now also provides ElastiCache Serverless as a dedicated deployment option. Even with a provisioned cluster, you can approximate serverless behavior through the AWS Application Auto Scaling service. With Application Auto Scaling, you register the cluster as a scalable target, specify the minimum and maximum capacity (shards or replicas) your application requires, and the service automatically scales out or in to meet demand while maintaining high availability.

To set up auto scaling for an ElastiCache cluster using the AWS CLI, you need to perform the following steps:

1. Create the ElastiCache cluster. ElastiCache auto scaling works with Redis (cluster mode enabled) replication groups on supported node families (for example, the M and R families), so create a replication group rather than a single standalone node:

aws elasticache create-replication-group --engine redis --replication-group-id myrediscluster --replication-group-description "Auto-scaled Redis cluster" --cache-node-type cache.m6g.large --cache-parameter-group-name default.redis7.cluster.on --num-node-groups 1 --replicas-per-node-group 1 --automatic-failover-enabled --tags Key=Name,Value=myrediscluster

2. Optionally, create an Amazon CloudWatch alarm so you are notified when the cluster is under load on a metric you care about (e.g., CPU utilization, network throughput). Note that the target tracking policy configured in step 3 creates and manages its own scaling alarms, so a hand-made alarm like this is for visibility rather than for triggering scaling. You can use the put-metric-alarm command to create it:

aws cloudwatch put-metric-alarm --alarm-name myrediscluster-cpu-alarm --alarm-description "Alarm for CPU utilization" --namespace AWS/ElastiCache --dimensions Name=CacheClusterId,Value=myrediscluster --metric-name CPUUtilization --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --period 60 --threshold 80 --statistic Average --alarm-actions <sns-topic-arn>

Replace <sns-topic-arn> with the ARN of an Amazon SNS topic that should receive the notification.
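To confirm the alarm is wired up before real load arrives, you can temporarily force it into the ALARM state and then inspect it. This sketch uses the standard set-alarm-state and describe-alarms commands and assumes the alarm name from step 2; it requires configured AWS credentials:

```shell
# Temporarily force the alarm into ALARM state to verify notifications fire.
aws cloudwatch set-alarm-state \
  --alarm-name myrediscluster-cpu-alarm \
  --state-value ALARM \
  --state-reason "Manual test of alarm wiring"

# Inspect the alarm's current state afterwards; it reverts on the next evaluation.
aws cloudwatch describe-alarms \
  --alarm-names myrediscluster-cpu-alarm \
  --query "MetricAlarms[0].StateValue"
```

The forced state is overwritten the next time CloudWatch evaluates the metric, so this is safe for a quick end-to-end check.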

3. Configure auto scaling for the cluster. First register the replication group as a scalable target, setting the minimum and maximum number of shards, then attach a target tracking scaling policy:

aws application-autoscaling register-scalable-target --service-namespace elasticache --resource-id replication-group/myrediscluster --scalable-dimension elasticache:replication-group:NodeGroups --min-capacity 1 --max-capacity 5

aws application-autoscaling put-scaling-policy --policy-name myrediscluster-cpu-scaling-policy --service-namespace elasticache --resource-id replication-group/myrediscluster --scalable-dimension elasticache:replication-group:NodeGroups --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration '{ "PredefinedMetricSpecification": { "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization" }, "TargetValue": <target-value>, "ScaleOutCooldown": 60, "ScaleInCooldown": 60 }'

Replace <target-value> with the target utilization percentage (typically between 10 and 90); Application Auto Scaling adds or removes shards to keep the metric near this value.

Note: Replace the placeholders above with values appropriate for your environment and adjust the commands accordingly.
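Once everything is in place, you can confirm what Application Auto Scaling has registered. This sketch uses the standard describe commands for the elasticache namespace and requires configured AWS credentials:

```shell
# List the scalable targets registered in the elasticache namespace.
aws application-autoscaling describe-scalable-targets \
  --service-namespace elasticache

# List the scaling policies attached to those targets.
aws application-autoscaling describe-scaling-policies \
  --service-namespace elasticache
```

The output should show your replication group's resource ID, its min/max capacity, and the target tracking policy you attached.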

Overall, while ElastiCache is not exclusively designed for serverless usage, combining it with the AWS Application Auto Scaling service provides a practical, serverless-like option that scales ElastiCache clusters with demand.
