
ElastiCache Supports Valkey: What’s In the Box & Getting Started


January 16, 2026


Does Amazon ElastiCache Support Valkey?

Amazon ElastiCache now supports Valkey, offering a fully managed, high-performance in-memory key-value store as a drop-in replacement for Redis open-source software (Redis OSS). Amazon Web Services (AWS) added Valkey support in the wake of Redis license changes, which limit its use in managed cloud services.

Background about ElastiCache and Valkey

Amazon ElastiCache is a fully managed in-memory data store and cache service provided by AWS. It originally supported two open-source in-memory engines (Redis and Memcached) and has now added support for Valkey. Valkey is an open-source, in-memory key-value data store, created as a community-driven fork of Redis after its license changes. It is compatible with the Redis API.

Important things to know about ElastiCache Valkey support:

  • Support is available in both serverless and node-based deployment models.
  • According to the initial Amazon announcement, Valkey pricing is lower than Redis OSS equivalents, with serverless pricing up to 33% lower and node-based pricing up to 20% lower.
  • ElastiCache serverless for Valkey allows customers to provision a cache in under a minute and start with as little as 100MB of storage (90% less than the Redis OSS minimum) at a starting price of $6 per month.
  • Valkey support in ElastiCache includes the same capabilities as the other engines: 99.99% availability, built-in monitoring, automatic failover, and security features.
  • Existing ElastiCache reserved node customers can switch to Valkey while retaining their current discount rates across compatible node sizes.
  • Customers can migrate from Redis OSS to Valkey with minimal downtime through the AWS console, SDK, or CLI.

Note: The details above are based on the initial AWS announcement. Pricing and other service details may change over time. Consult the official product page for the latest info.


Benefits of Amazon ElastiCache for Valkey

Amazon ElastiCache for Valkey offers a fully managed experience for developers who need fast, reliable caching and in-memory data storage. By combining the performance of Valkey with the operational benefits of ElastiCache, it provides a scalable and cost-effective alternative to self-managed or Redis OSS-based solutions.

Key benefits:

  • Lower costs: Up to 33% lower pricing in serverless mode and up to 20% lower for node-based deployments compared to Redis OSS.
  • Drop-in compatibility: Full API compatibility with Redis allows easy migration of existing applications without code changes.
  • Technical maturity: Valkey has been in active development since it was forked from Redis in early 2024 and exhibits a high level of maturity.
  • Flexible deployment models: Choose between serverless for dynamic, unpredictable workloads or node-based clusters for consistent, high-throughput applications.
  • Quick startup: Serverless caches can be provisioned in under a minute with a minimum size of just 100MB.
  • Minimal-downtime migration: Seamless migration from Redis OSS using the AWS console, CLI, or SDK without service interruption.
  • Operational simplicity: AWS handles patching, scaling, backups, and recovery, reducing operational overhead.
  • High availability and resilience: Includes automatic failover, multi-AZ replication, and 99.99% availability.
  • Secure by default: Supports encryption at rest and in transit, VPC isolation, IAM integration, and compliance with major security standards.

Related content: Read our guide to ElastiCache costs.


Quick Tutorial: Getting Started with Amazon ElastiCache for Valkey

You can get started with Amazon ElastiCache for Valkey in just a few steps. The process involves creating a Valkey cache, configuring access through an EC2 instance, installing the Valkey CLI, and connecting to the cache for read/write operations. These instructions are adapted from the AWS blog.

1. Create an ElastiCache Serverless for Valkey Cache

You can provision a Valkey cache using the AWS management console, AWS CLI, or ElastiCache API. For example, using the AWS CLI:

aws elasticache create-serverless-cache \
  --serverless-cache-name ec-valkey-serverless \
  --engine valkey \
  --region us-west-1

This creates a cache in your default VPC with default security settings. To confirm the cache is ready, use the describe-serverless-caches command and check that the status is available.
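For example, the status can be polled with a query filter; the cache name below matches the create command above, and this is a sketch to run against your own AWS account:

```shell
# Print only the Status field; the cache is ready when this returns "available".
aws elasticache describe-serverless-caches \
  --serverless-cache-name ec-valkey-serverless \
  --query "ServerlessCaches[0].Status" \
  --output text
```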

2. Set up an EC2 Instance

To connect to ElastiCache, launch an EC2 instance in the same VPC or in a peered VPC. Make sure your security groups allow access to port 6379, the default port used by Valkey. Follow AWS’s standard EC2 setup documentation if you need guidance on creating the instance.
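The security group rule can also be added from the CLI. The group IDs below are placeholders for your cache's and instance's security groups:

```shell
# Allow inbound Valkey traffic (port 6379) from the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789cache \
  --protocol tcp \
  --port 6379 \
  --source-group sg-0123456789ec2
```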

3. Install the Valkey CLI

Once your EC2 instance is running, SSH into it and install the Valkey CLI:

sudo yum install gcc jemalloc-devel openssl-devel tcl tcl-devel -y
wget https://github.com/valkey-io/valkey/archive/refs/tags/7.2.7.tar.gz
tar xvzf 7.2.7.tar.gz
cd valkey-7.2.7/
sudo make BUILD_TLS=yes install

Building with TLS support is required because ElastiCache for Valkey only allows connections over TLS.

4. Connect and run commands

After installation, use the valkey-cli to connect to your cache. First, retrieve the endpoint address using the describe-serverless-caches command. Then connect using:

valkey-cli -h ec-valkey-serverless-xxx.cache.amazonaws.com -p 6379 -c --tls

Once connected, you can run commands. For example, to store airline hash objects:

HSET airline:1 name "Emirates" on_time_percentage 88 country "United Arab Emirates"
HSET airline:2 name "Qantas" on_time_percentage 91 country "Australia"
HSET airline:3 name "Singapore Airlines" on_time_percentage 89 country "Singapore"
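The hashes can then be read back over the same TLS connection, either a whole object or a single field. For example (reusing the endpoint placeholder from the connect step):

```shell
# Fetch all fields of a hash, then one specific field.
valkey-cli -h ec-valkey-serverless-xxx.cache.amazonaws.com -p 6379 --tls HGETALL airline:1
valkey-cli -h ec-valkey-serverless-xxx.cache.amazonaws.com -p 6379 --tls HGET airline:2 country
```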

5. Upgrade from Redis OSS to Valkey

If you’re currently using ElastiCache for Redis OSS, you can upgrade to Valkey without downtime:

  • In the AWS console, navigate to Redis OSS caches and choose Modify.
  • Under Cluster settings, change the engine to Valkey.
  • Preview changes, confirm with Apply Immediately, and click Modify.

The cache will be upgraded in place. After completion, it will appear under the Valkey cache listings.

6. Cleanup

To avoid unwanted charges, delete the Valkey cache and EC2 instance after testing. Use the AWS console or CLI to remove these resources when you’re finished.
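From the CLI, cleanup looks roughly like the following; the instance ID is a placeholder for the EC2 instance you launched earlier:

```shell
# Delete the serverless Valkey cache created in step 1.
aws elasticache delete-serverless-cache \
  --serverless-cache-name ec-valkey-serverless

# Terminate the test EC2 instance (replace with your instance ID).
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```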


Best Practices for Using ElastiCache with Valkey

1. Choose the Right Instance Type

Selecting the appropriate instance type is critical to balancing performance and cost. ElastiCache supports a variety of instance families optimized for different workloads. For memory-intensive applications, such as large caching layers or analytics workloads, use memory-optimized instances like the r7g or r6g series, which offer high memory per vCPU and use AWS Graviton processors for better price-performance.

For workloads that require high CPU throughput, such as heavy computation on data or large volumes of client connections, compute-optimized instances like c7g are more suitable. In contrast, for development, testing, or low-traffic applications, general-purpose instances like t4g or m6g offer cost-effective entry points.

For smaller or unpredictable workloads, consider using ElastiCache serverless with Valkey. The serverless model is suitable for development, testing, or applications with intermittent or bursty traffic. You can start with as little as 100MB of memory, which lowers costs significantly compared to node-based deployments.
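For serverless caches, usage limits can be set at creation time to keep costs bounded; the values below are illustrative, and the cache name is hypothetical:

```shell
# Create a dev cache capped at 1 GB of data storage.
aws elasticache create-serverless-cache \
  --serverless-cache-name ec-valkey-dev \
  --engine valkey \
  --cache-usage-limits 'DataStorage={Maximum=1,Unit=GB}'
```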

2. Optimize Data Structures and Key Expiry

Efficient use of Valkey’s built-in data structures can significantly reduce memory footprint and improve processing speed. For example, instead of storing a JSON object as a single string, use a hash to map individual fields to keys. This allows partial updates and lookups without parsing the entire object.

Avoid using large string blobs or deeply nested structures. Split large datasets into manageable chunks or use sorted sets for ranked data like leaderboards. For collections that grow unbounded, such as lists of user actions or events, use trimming techniques (LTRIM, ZREMRANGEBYRANK) to keep memory usage predictable.

Use key expiration (EXPIRE, PEXPIRE, SETEX) to automatically evict stale data. This is especially important for session storage, caching, and ephemeral state. When using TTLs, be mindful of expiration storms—avoid setting large numbers of keys to expire at the same time to reduce load spikes.
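One simple way to stagger expirations is to add random jitter to each TTL so that a batch of keys written together does not expire together. A minimal bash sketch, with an arbitrary one-hour base TTL and a five-minute jitter window:

```shell
# Base TTL of 1 hour plus up to 5 minutes of random jitter per key.
base_ttl=3600
jitter=$((RANDOM % 300))
ttl=$((base_ttl + jitter))
echo "$ttl"
# The jittered TTL would then be applied per key, e.g.:
#   valkey-cli ... EXPIRE session:42 "$ttl"
```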

3. Use Long-Lived Connections and Connection Pooling

Creating a new TCP connection for every request introduces latency and consumes resources on both the client and server. Instead, use long-lived connections to maintain persistent sessions with the ElastiCache deployment. This minimizes connection overhead and improves throughput, especially for applications with frequent cache access.

Most Valkey client libraries support connection pooling, which allows multiple operations to share a fixed number of open connections. Pooling improves performance by reducing connection churn and helps manage concurrent access to the cache efficiently. Tune the pool size based on expected concurrency and latency requirements.

Avoid excessive connection creation, particularly in serverless environments like AWS Lambda, where functions may open new connections on each invocation. Reuse connections within the execution context when possible. Monitor ElastiCache metrics like CurrConnections and NewConnections to identify patterns of inefficient connection usage and adjust client behavior accordingly.

4. Secure Connections with IAM Auth and TLS

Security should be a default part of every ElastiCache configuration. ElastiCache for Valkey requires all connections to use TLS, which encrypts data in transit and prevents eavesdropping or tampering. Use the latest supported TLS version and validate certificates where applicable.

Instead of static passwords or hardcoded credentials, use IAM authentication to enforce identity-based access. IAM roles can be assigned to EC2 instances or Lambda functions that need to access the cache. This simplifies credential management and supports fine-grained access control through AWS policies.

Network access should be restricted using VPC security groups. Only trusted resources, such as application servers or internal APIs, should be allowed to connect to the cache. Regularly audit your security group rules and eliminate overly broad access.
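As a starting point for such an audit, you can list every security group that opens the Valkey port and then review any rule with a broad CIDR range; this is a sketch, not an exhaustive audit:

```shell
# Find security groups with a rule covering port 6379.
aws ec2 describe-security-groups \
  --filters Name=ip-permission.from-port,Values=6379 \
  --query "SecurityGroups[].{Id:GroupId,Name:GroupName}"
```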

5. Monitor and Scale Proactively

Effective monitoring helps detect performance issues before they impact users. Use Amazon CloudWatch to track key metrics such as:

  • EngineCPUUtilization: High values may indicate underprovisioned compute or inefficient commands.
  • DatabaseMemoryUsagePercentage: Indicates how much of the deployment’s memory is in use. Sustained high values suggest it is time to scale up or to reclaim memory via TTL-based expiry and an eviction policy.
  • CurrConnections: Sudden spikes might point to connection leaks or traffic surges.

Set up CloudWatch alarms for thresholds that signal degradation, and enable SNS notifications for alerting. Use slow log features to identify expensive operations, and review command latency metrics for potential bottlenecks.
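A memory alarm can be created from the CLI roughly as follows; the thresholds, cluster ID, and SNS topic ARN are all placeholders to adapt to your deployment:

```shell
# Alarm when average memory usage stays above 80% for 15 minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name valkey-memory-high \
  --namespace AWS/ElastiCache \
  --metric-name DatabaseMemoryUsagePercentage \
  --dimensions Name=CacheClusterId,Value=my-valkey-cluster \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:valkey-alerts
```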

For serverless caches, ElastiCache auto-scales capacity based on demand, but node-based clusters require manual intervention. Scale up by increasing node sizes or adding shards. Use data partitioning (sharding) to distribute load and avoid hot keys. Always test scale-in and scale-out procedures in staging environments before applying to production.


Dragonfly Cloud: Ultimate ElastiCache Valkey Alternative

Dragonfly is a modern, source-available, multi-threaded, Redis-compatible in-memory data store that stands out by delivering unmatched performance and efficiency. Designed from the ground up to disrupt legacy technologies, Dragonfly redefines what an in-memory data store can achieve.

Dragonfly Cloud is a fully managed service from the creators of Dragonfly, handling all operations and delivering effortless scaling so you can focus on what matters without worrying about in-memory data infrastructure anymore. Built on top of the Dragonfly project, Dragonfly Cloud offers:

  • Redis API Compatibility: Seamless integration with existing applications and frameworks running on Redis while overcoming its limitations.
  • Unmatched Performance: Achieves 25x better performance than Redis, ensuring your applications run with extremely high throughput and consistent latency.
  • Cost Efficiency: Reduces hardware and operational costs without sacrificing performance, making it an ideal choice for budget-conscious enterprises.
  • Unlimited Scalability: Built to scale vertically and horizontally (via Dragonfly Swarm), providing a robust solution for rapidly growing data needs.
  • Minimal DevOps: Dragonfly Cloud handles deployment, monitoring, version upgrades, automatic failover, data sharding, backups, auto-scaling, and everything else you need to run Dragonfly in the most resource-optimized and secure way.


Switch & save up to 80%

Dragonfly is fully compatible with the Redis ecosystem and requires no code changes to implement. Instantly experience up to a 25x boost in performance and an 80% reduction in cost.