
Migrating from Redis to Valkey: Process, Cost & Performance

Explore the benefits of migrating from Redis to Valkey. We break down the cost savings, performance gains, and what the migration process truly involves.

October 23, 2025


Introduction

The world of in-memory infrastructure has seen significant shifts recently. License changes for Redis have pushed Valkey into the spotlight as the new open-source successor under the Linux Foundation. Backed by major cloud providers and community contributors, this fork of Redis has emerged as the designated path forward for those seeking to preserve the ecosystem’s open-source future.

But Valkey represents more than just a license change: it’s an opportunity to re-evaluate your data layer architecture and performance requirements. While Valkey maintains strong compatibility with Redis, it also introduces its own enhancements and roadmap that differ from its predecessor.

This post will help technical decision-makers navigate this new landscape. We’ll move beyond the headlines to examine Valkey’s practical value proposition, migration considerations, and what the future holds for the project, providing a clear framework for your migration strategy from Redis.


The Core Benefits of Migrating from Redis to Valkey

Migrating from Redis to Valkey offers compelling advantages: immediate cost savings (especially with the cloud providers backing the project), considerable performance enhancements, and the assurance that your infrastructure is built on a sustainable, open-source foundation. Let’s break it down.

Direct Cost Savings with Cloud Providers

One of the most immediate benefits of switching to Valkey is the direct reduction in cloud service bills. As a strategic move to encourage adoption, leading cloud providers like AWS have priced their Valkey offerings lower than Redis. For instance, on AWS ElastiCache, choosing Valkey over Redis for node-based deployments comes with an automatic 20% discount. For the serverless offering, the savings can be even greater, with prices being 33% lower for Valkey. These savings apply directly to your hourly instance or capacity costs, as shown below.

US East (Ohio) | October 2025    ElastiCache for Valkey    ElastiCache for Redis
On-Demand cache.r6g.8xlarge      $2.6272 / hour            $3.284 / hour
Serverless                       $0.084 / GB-hour          $0.125 / GB-hour
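As a quick sanity check, the discounts implied by the pricing above can be computed directly (the prices are the October 2025 US East (Ohio) figures quoted in this post):

```python
# Hourly prices quoted above (US East (Ohio), October 2025).
valkey_node, redis_node = 2.6272, 3.284             # cache.r6g.8xlarge, $/hour
valkey_serverless, redis_serverless = 0.084, 0.125  # serverless, $/GB-hour

node_discount = 1 - valkey_node / redis_node
serverless_discount = 1 - valkey_serverless / redis_serverless

print(f"Node-based discount: {node_discount:.1%}")       # 20.0%
print(f"Serverless discount: {serverless_discount:.1%}")  # 32.8%
```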

The cost benefits can compound. Valkey’s improved memory efficiency, discussed in the next section, might allow you to run your workload on a smaller node type or with fewer servers (whether in the cloud or on-premises). According to this blog post by AWS, combining such downsizing with the Valkey discount can lead to total cost reductions of up to 60% for certain workloads. Another case study reported a 40% cost reduction after migrating from a managed Redis provider to ElastiCache for Valkey, a figure that already includes the 20% saving over ElastiCache for Redis.
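The "up to 60%" figure is easy to reproduce: the 20% engine discount multiplies with whatever capacity reduction the memory efficiency enables. A minimal sketch (the 50% downsizing is an illustrative assumption, not an AWS number):

```python
def combined_savings(discount: float, capacity_reduction: float) -> float:
    """Total cost reduction when a price discount and a capacity
    reduction apply multiplicatively to the same bill."""
    return 1 - (1 - discount) * (1 - capacity_reduction)

# 20% Valkey discount plus a hypothetical 50% capacity reduction.
print(f"{combined_savings(0.20, 0.50):.0%}")  # 60%
```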

Measurable Performance and Efficiency Gains

Although it started from the same codebase as Redis, Valkey has quickly added targeted optimizations to its I/O threading, data structures, and other internals that unlock considerably better performance on contemporary hardware.

A key differentiator is the enhanced multi-threaded I/O introduced in Valkey 8.0. This architecture better utilizes multi-core processors, leading to substantially higher throughput under concurrent load. Benchmarks have shown Valkey 8.0 delivering over three times the throughput of previous versions, handling up to 1.19 million requests per second in tested scenarios.

Valkey 8.0 vs. Valkey 7.2 | Throughput & Latency Comparison

This performance is complemented by greater memory efficiency. Starting with Valkey 8.1, optimizations to its hash table can reduce memory usage by approximately 10-20 bytes per key-value pair (or per composite data type element). For large datasets, this translates into meaningful overall memory savings, which not only lowers costs but also improves cache hit rates and application performance.
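To put the per-key figure in perspective, here is a rough back-of-the-envelope estimate (the key count and the 15-byte midpoint are illustrative assumptions, not measured values):

```python
keys = 100_000_000   # hypothetical dataset size
saved_per_key = 15   # midpoint of the 10-20 byte range cited above

total_gib = keys * saved_per_key / 2**30
print(f"~{total_gib:.1f} GiB saved")  # ~1.4 GiB
```

On a node sized for a few tens of GiB, savings of this order can be the difference between fitting the working set in memory and evicting hot keys.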

The innovation continues with the latest Valkey 9.0 release, which introduces powerful new capabilities without compromising on performance. Features like field-level expirations for hashes and the new DELIFEQ (delete if equal) command provide finer data control, while ongoing cluster-related improvements ensure that this performance, efficiency, and robustness can be scaled across a distributed system.
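To illustrate the semantics (not the implementation) of the new compare-and-delete command, the sketch below models DELIFEQ against a plain Python dict; with a real server you would issue the command through your client library instead:

```python
def delifeq(store: dict, key: str, expected: bytes) -> int:
    """Model of DELIFEQ semantics: delete `key` only if its current
    value equals `expected`. Returns 1 if deleted, 0 otherwise."""
    if store.get(key) == expected:
        del store[key]
        return 1
    return 0

store = {"lock:job42": b"owner-a"}
print(delifeq(store, "lock:job42", b"owner-b"))  # 0: value differs, key kept
print(delifeq(store, "lock:job42", b"owner-a"))  # 1: value matches, key deleted
```

This atomic check-and-delete pattern is commonly used for safe distributed-lock release, where it replaces the classic Lua-script workaround.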

Open Source Assurance and Future-Proofing

The initial motivation for the Valkey fork was a licensing change by Redis. Valkey directly addresses this by keeping the permissive BSD 3-Clause License and establishing governance under the Linux Foundation. This community-driven model ensures the project remains truly open source, protecting your investment from future licensing shifts. The commitment to this principle is validated by strong backing from a broad coalition of major industry players, including AWS, Google Cloud, Oracle, and Ericsson. This widespread support not only encourages rapid innovation but also guarantees the project’s long-term viability, making Valkey a safe and future-proof choice for in-memory data infrastructure.


The Reality of Migration

The effort required to migrate from Redis to Valkey is not one-size-fits-all. In practice, the experience tends to fall into one of two extremes: a nearly seamless upgrade, or a complex, lengthy project requiring meticulous planning.

The Minimal Effort Promise

For many users, particularly those on managed services like Amazon ElastiCache, migrating to Valkey is designed to be straightforward. AWS promotes it as a seamless, in-place replacement, engineered for short migration durations and minimal or even close-to-zero downtime. This promise is rooted in Valkey’s full compatibility with the Redis protocol. Since your applications use the same commands and clients, the migration can be as simple as a cross-engine upgrade, often requiring no code changes in the application, as shown below:

# For uncomplicated ElastiCache deployments, the migration can be as easy as
# an engine version upgrade. In this case, by specifying the engine as 'valkey'
# as well as a desired version, the cross-engine upgrade can be performed.
#
# For more details about the parameter requirements and constraints,
# please refer to the AWS documentation:
# https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/VersionManagement.HowTo.html
$> aws elasticache modify-replication-group \
       --replication-group-id myReplGroup \
       --engine valkey \
       --engine-version 8.0

Once the update is finished, which can take a while, you will see that the engine has changed from Redis to Valkey, and the version has updated as well (which can be confirmed by running the INFO SERVER command). The key point here is that your data is preserved during this change. After the update, you can still access all your existing keys, and your application should experience minimal disruption.
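One way to verify the engine programmatically after the upgrade is to parse the INFO SERVER reply; Valkey identifies itself with a server_name field alongside its version fields. A sketch (the sample payload below is abbreviated and illustrative, not a full INFO reply):

```python
def parse_info(raw: str) -> dict:
    """Parse the colon-separated key:value lines of an INFO reply,
    skipping section headers (lines starting with '#')."""
    fields = {}
    for line in raw.splitlines():
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

# Abbreviated, illustrative INFO SERVER payload from a Valkey node.
sample = """# Server
server_name:valkey
valkey_version:8.0.0
redis_version:7.2.4
"""

info = parse_info(sample)
print(info.get("server_name", "redis"))  # valkey
```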

The On-the-Ground Reality for Complex Deployments

While the promise holds true for standard deployments, the on-the-ground reality for large-scale, complex systems is often different. In these scenarios, migration is consistently a non-trivial project that demands significant planning and resources.

Feedback from real-world migrations indicates that moving large deployments can be a lengthy process, with one source citing a timeline of over 8 weeks. This extended timeline accounts for several critical phases:

  • Version and Engine Compatibility Verification: Teams often operate on older, stable Redis versions and may use specialized Redis Stack modules. A successful migration requires a meticulous audit of the engine version, commands in use, and dependencies on features not yet fully available in Valkey’s early releases.
  • Data Migration and Cache Warming Strategy: For many workloads, a simple data sync is insufficient. A robust strategy must be developed to migrate large datasets efficiently while minimizing downtime. Even for caching use cases, keeping the cache warm and up-to-date during the cutover can be crucial, which may require specialized tooling and monitoring for large-scale deployments.
  • Performance and Load Testing: While Valkey promises similar or better performance, this cannot be assumed. Comprehensive load testing against an application’s production-like traffic patterns is mandatory to validate throughput, latency, and memory usage post-migration.
  • Orchestrating a Careful Cut-Over and Fallback Plan: A live cut-over of a critical data store is a high-stakes operation. It requires a detailed, step-by-step execution plan and a well-rehearsed rollback strategy to quickly revert to the original system in case of unforeseen issues, ensuring business continuity and minimizing risk.
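One common warming strategy during the cutover window is dual-writing: the application writes to both the old and the new cluster while reads stay on the old one, so the target cache fills with live traffic before the switch. A minimal sketch with dicts standing in for the two client connections (all names here are illustrative):

```python
class DualWriteCache:
    """During migration, writes go to both stores; reads are served
    from the old (primary) store until cutover."""

    def __init__(self, old_store, new_store):
        self.old, self.new = old_store, new_store

    def set(self, key, value):
        self.old[key] = value
        self.new[key] = value  # warm the target cluster with live writes

    def get(self, key):
        return self.old.get(key)  # reads stay on the old cluster pre-cutover

old, new = {}, {}
cache = DualWriteCache(old, new)
cache.set("user:1", "alice")
print(cache.get("user:1"), new["user:1"])  # alice alice
```

In production the two dicts would be separate client connections, and you would also need to handle partial-write failures and TTL propagation; the sketch only shows the traffic-shaping idea.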

This level of effort is well-known to engineers who have managed significant infrastructure changes. The process, complexity, and duration are comparable to migrating a core database to a different cloud or provider. It involves similar challenges: ensuring data integrity across the move, reconfiguring applications, and managing extended testing cycles to guarantee performance and stability post-migration.


Making the Strategic Choice

As we’ve established, migrating any large, production-grade data store is a non-trivial undertaking. The key question, then, is whether that effort is worth the long-term return.

The benefits we’ve discussed (direct cost savings on cloud platforms, measurable performance gains, and the open-source nature) make Valkey a compelling option. For new projects or existing deployments that primarily use core data structures, migrating to Valkey is a strategic move to build on a more performant, cost-effective, and future-proof platform.

However, it would be an oversimplification to view this as a one-sided decision. Redis itself continues to evolve. In my opinion, the two projects haven’t diverged drastically in their core offerings as of today: Redis is actively closing the gap, for example with the revamped multi-threaded I/O that replaced the older implementation in Redis 8.0. More importantly, Redis maintains a feature advantage through Redis Stack and its Enterprise offerings. If your application critically depends on features like time series data or tiered storage, which are not yet fully replicated in Valkey, it may be advisable to stay with Redis for the time being.


Final Thoughts: When Evolution Isn’t Enough

For most, Valkey represents a safe evolutionary step: a compatible successor with solid improvements. The game changes, however, once a workload crosses into the realm of terabytes of in-memory data and tens of millions of requests per second. At this scale, the fundamental architectural limits of both Valkey and Redis begin to surface: data operations are still bound to a single main thread, creating a scalability bottleneck for heavy operations. This is where a revolutionary solution like Dragonfly becomes compelling. Its true shared-nothing, multi-threaded architecture isn’t just an enhancement but a re-imagination for modern hardware. Dragonfly scales both vertically and horizontally, eliminating operational complexity while delivering extremely high throughput and up to 80% lower total cost of ownership (TCO). If you are already facing the inevitable project of migrating a heavy workload, targeting Dragonfly from the beginning can maximize the return on that migration investment in both performance and cost efficiency.


                        Valkey                                   Dragonfly
Primary Value           Open-source successor to Redis           Next-generation, ultra high-performance engine
Architecture            Multi-threaded I/O, single-threaded      True multi-threaded, shared-nothing core for
                        data operations                          both vertical and horizontal scalability
Performance             Solid improvements                       4.5x higher throughput (29x for sorted sets) on
                                                                 GCP C4 with 48 vCPUs compared with Valkey
Memory Efficiency       Solid improvements                       30% less total memory, 45% less for sorted sets
Cost Saving Potential   20-33% cloud price discount,             Up to 80% lower TCO via superior
                        less capacity needed                     hardware utilization

The ideal migration path really comes down to your team’s priorities. If you’re looking for a sensible upgrade with immediate savings on your current cloud bill, Valkey is a solid and safe choice. But if you’re ready to fundamentally transform your in-memory data layer, Dragonfly is the definitive long-term destination. Its architectural advantages deliver not just incremental improvements but transformative performance gains and cost efficiency that justify the migration investment.
