Question: Does scaling ElastiCache involve downtime?

Scaling AWS ElastiCache can be achieved with minimal or no downtime, depending on the engine (Redis or Memcached) and the type of scaling involved.

There are two primary ways to scale ElastiCache: vertical scaling (moving to a larger or smaller node type) and horizontal scaling (adding or removing nodes or shards).

  1. Vertical Scaling: This involves changing the node type to one with more (or fewer) compute resources such as CPU and RAM. For Redis, vertical scaling can be performed without downtime because ElastiCache supports online resizing. For Memcached, changing the node type requires replacing the nodes, which results in downtime and loss of the cached data.
```python
import boto3

elasticache = boto3.client('elasticache')

# Scale a single-node Redis cluster vertically by changing the node type.
# (For Memcached, the node type cannot be changed in place; a new cluster
# must be created instead.)
response = elasticache.modify_cache_cluster(
    CacheClusterId='my-redis-cluster',
    CacheNodeType='cache.r6g.large',  # the larger target node type
    ApplyImmediately=True,
)
```
  2. Horizontal Scaling: This involves adding or removing nodes in a cluster. For Redis, this can be achieved without downtime using the Redis Cluster Mode Enabled option. For Memcached, adding new nodes also doesn't cause downtime, but because the key-to-node mapping changes, existing data may not be evenly distributed among the nodes until new items are written to the cache.
```python
import boto3

elasticache = boto3.client('elasticache')

# Reshard a Redis (cluster mode enabled) replication group online.
response = elasticache.modify_replication_group_shard_configuration(
    ReplicationGroupId='my-redis-cluster',
    NodeGroupCount=6,  # target number of shards
    ApplyImmediately=True,
)
```
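The Memcached caveat above can be illustrated with a small, self-contained sketch (plain Python, not the ElastiCache API): with naive modulo key placement, changing the node count remaps most keys to different nodes, so they behave like cache misses until rewritten. Real Memcached clients typically use consistent hashing to reduce this effect.

```python
import hashlib

def node_for(key: str, node_count: int) -> int:
    # Deterministic hash so results are reproducible across runs.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % node_count

def fraction_remapped(keys, old_count: int, new_count: int) -> float:
    """Fraction of keys that land on a different node after resizing."""
    moved = sum(
        1 for k in keys if node_for(k, old_count) != node_for(k, new_count)
    )
    return moved / len(keys)

keys = [f"user:{i}" for i in range(10_000)]
# Growing from 3 to 4 nodes remaps roughly three quarters of the keys
# under naive modulo placement.
print(fraction_remapped(keys, 3, 4))
```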

Please note that while these operations are designed to minimize impact, they are resource-intensive and can impact the performance of your ElastiCache cluster during the operation. Always monitor your applications' performance closely during such changes.
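As one way to track such an operation, here is a minimal polling sketch; the cluster ID and poll intervals are illustrative assumptions, while `describe_cache_clusters` is the standard boto3 call for inspecting cluster status:

```python
import time

def wait_until_available(client, cluster_id, poll_seconds=30, timeout=1800):
    """Poll an ElastiCache cluster until its status returns to 'available'.

    `client` is assumed to be a boto3 'elasticache' client; while a scaling
    operation is in progress the status is typically 'modifying'.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = client.describe_cache_clusters(CacheClusterId=cluster_id)
        status = resp['CacheClusters'][0]['CacheClusterStatus']
        if status == 'available':
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"{cluster_id} not available after {timeout} seconds")
```

Pairing a wait loop like this with CloudWatch metrics (CPU, evictions, latency) gives a fuller picture of the impact while the operation runs.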
