
Choosing the Right Path: Migration Patterns for Redis to Dragonfly

Explore proven migration patterns for moving from Redis to Dragonfly, and choose the right approach for reliability and low-disruption cache modernization.

December 11, 2025


Why Migration Patterns Matter

Teams, community users, and enterprises consider migrating from Redis/Valkey to Dragonfly for many reasons: much higher performance, better memory efficiency, lower cost, and simpler operations. To me, Dragonfly is very often the obvious choice once you compare it with legacy solutions. The benefits tend to make themselves clear early on.

But the real challenge is not deciding whether to migrate. Instead, the challenge often lies in choosing the most effective method for migration. Moving a live cache or data layer is rarely trivial, and the wrong approach can create unnecessary risk, downtime, or operational pain.

This is where migration patterns become useful.

Just as software engineers rely on design patterns to solve familiar architectural problems, in-memory data store migrations also fall into a small set of repeatable approaches. Over time, we have seen the same techniques succeed across many teams, from small deployments to large, high-volume systems. These patterns provide a shared vocabulary, help clarify trade-offs, and make the migration process more predictable.

In the sections ahead, we will walk through the most common migration patterns and explain when each one is the right tool for the job.


Pattern 1: Cold-Start Cutover

Some teams just want the easiest way to migrate. There isn’t much point in relocating the current dataset if the data saved in Redis or Valkey doesn’t last long or can be easily recreated. In these situations, a cold-start cutover makes it easy and clean to switch to Dragonfly.

In this approach, we can keep both Redis and Dragonfly running in parallel while we update the application’s connection configuration. Once the application is redeployed and traffic begins flowing into Dragonfly, Redis can be safely shut down. The system starts fresh, with Dragonfly acting as an empty but instantly available caching layer or data store.

# Using the `redis-py` library.
import redis

# pool = redis.ConnectionPool(host='redis-host', port=6379, db=0)   # before
pool = redis.ConnectionPool(host='dragonfly-host', port=6379, db=0) # after

client = redis.Redis(connection_pool=pool)

A great real-world example of this pattern comes from one of our community users, Sharp App. Their services aggregate data from many external sources and cache it in Redis, but the data itself becomes stale very quickly. Persisting the existing dataset during migration was unnecessary, so a cold-start cutover was the natural choice. After switching their application to Dragonfly, they immediately benefited from lower latency and more stable performance.

Latency after Switching to Dragonfly | One of Sharp App’s Services

This pattern is appealing for its simplicity, but it does come with trade-offs:

  • Since the old Redis instance is discarded, any existing keys are lost.
  • It is also possible for a few in-flight writes to land in Redis if older application instances are still running during the switchover.

For workloads that depend on strong consistency or durable state, another pattern will be more appropriate. But for ephemeral, fast-expiring data, this cutover is both efficient and reliable.


Pattern 2: Snapshot Porting (RDB Import)

When we need to preserve the existing dataset but can tolerate a short, planned maintenance window, snapshot porting offers a clean and predictable migration path. Both Redis and Valkey expose a portable binary snapshot format through RDB files. Although Dragonfly uses its own snapshot format for faster snapshotting and loading, it also supports loading RDB files directly. This makes the process both familiar and low-risk for teams that already rely on Redis persistence.

The idea is straightforward. First, we generate an RDB snapshot from the Redis/Valkey instance. Once the snapshot is created, we start Dragonfly with the RDB file specified in its configuration. Dragonfly loads the snapshot at startup and immediately makes the data available to our application. After verifying that the new instance is healthy, we point our application at Dragonfly and bring the system back online.
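As a rough sketch of the first half of that process, the snapshot can be triggered and awaited with `redis-py`; the host name below is a placeholder, and the Dragonfly flag mentioned in the comment is only a pointer to its configuration docs:

# Trigger a fresh RDB snapshot on the source instance and wait for it to complete.
import time
import redis

source = redis.Redis(host='redis-host', port=6379)

previous = source.lastsave()   # timestamp of the last completed snapshot
source.bgsave()                # ask Redis/Valkey to write a new RDB file in the background
while source.lastsave() == previous:
    time.sleep(1)              # poll until the new snapshot has landed on disk

# Next, outside this script: copy the resulting RDB file into Dragonfly's data
# directory, start Dragonfly pointing at it (see the `--dbfilename` flag in the
# Dragonfly docs), verify the data, and then redirect the application.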

Of course, there are trade-offs.

  • Generating an RDB file can be resource intensive, especially for large datasets, and may require careful timing.
  • The system will experience some downtime during the switch: Dragonfly must load the snapshot (which is fast, but not instantaneous), and the application needs to be redeployed before it can accept traffic.

But for many small- to medium-sized teams or applications that exhibit strong daily usage patterns, this controlled pause is a reasonable cost for the simplicity and reliability the pattern provides. Plus, this pattern can be combined with the previous one, essentially achieving a "warm-start" cutover if the loss of a few in-flight writes is acceptable. Snapshot porting remains one of the most trusted and widely used approaches in Redis to Dragonfly migrations, and it gives teams confidence that their data arrives in Dragonfly as expected.


Pattern 3: Replica Promotion with Sentinel

Replica promotion gives teams a smooth, controlled transition with minimal downtime, using tools they may already operate. Redis Sentinel is a distributed system designed to manage and monitor primary-replica Redis deployments to ensure high availability. Dragonfly is fully compatible with Redis Sentinel, which means it can join an existing Sentinel deployment as a replica, participate in health checks, and take over as the primary when the Redis instances are shut down.

The process begins by adding Dragonfly as a replica of the existing Redis primary. Sentinel immediately recognizes Dragonfly as part of the topology, monitors it alongside the other replicas, and treats it as a valid failover candidate. As Redis streams updates to Dragonfly through the standard replication protocol, Dragonfly maintains a near-live copy of the dataset.

Once the system has fully synchronized and we are ready to migrate, we can use the SENTINEL FAILOVER command to force a failover as if the Redis primary were not reachable. After that, we can gradually shut down the old Redis primary (now a replica) and all other remaining Redis replicas. From the perspective of the application, nothing changes. It continues using the same Sentinel-aware client library and automatically connects to Dragonfly as the new primary.
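Put together, the moving parts look roughly like this with `redis-py`; the host names and the `mymaster` service name are placeholders, and in practice you would make sure Dragonfly is the replica Sentinel actually promotes (for example, by detaching the other replicas or lowering their promotion priority first):

import redis
from redis.sentinel import Sentinel

# 1. Attach Dragonfly to the current Redis primary as a replica.
dragonfly = redis.Redis(host='dragonfly-host', port=6379)
dragonfly.execute_command('REPLICAOF', 'redis-primary-host', '6379')

# 2. Once replication has caught up, trigger a manual failover via Sentinel.
sentinel_node = redis.Redis(host='sentinel-host', port=26379)
sentinel_node.execute_command('SENTINEL', 'FAILOVER', 'mymaster')

# 3. The application keeps using its Sentinel-aware client and automatically
#    discovers the new primary, which is now Dragonfly.
sentinel = Sentinel([('sentinel-host', 26379)], socket_timeout=0.5)
primary = sentinel.master_for('mymaster', socket_timeout=0.5)
print(primary.ping())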

Migration with Redis Sentinel | Redis Instance(s) Shut Down After Migration

Main considerations of this approach are:

  • If a team is not already using Sentinel, introducing it solely for the migration can be cumbersome.
  • Redis Sentinel provides only a best-effort guarantee for data consistency on the replica that becomes primary, since replication lag can occur for various reasons.

Other than that, this pattern is reliable, as the entire migration can be performed with minimal interruption to traffic and without any change to how the application discovers or connects to the data store. It also extends gracefully when more replicas are required: additional Dragonfly instances can join the Sentinel deployment as replicas, providing a more resilient failover structure from the start. For many production environments, especially those already relying on Sentinel, replica promotion is one of the cleanest and most operationally friendly migration strategies.


Pattern 4: Live Clone with RedisShake

RedisShake is a powerful tool for Redis data transformation and migration. When used for migration, it can create a real-time pipeline from Redis to Dragonfly, copying the existing dataset and continuously streaming updates until both systems are in sync.

The workflow is similar to the Sentinel approach. RedisShake performs an initial full copy of the data, then keeps Dragonfly updated by reading Redis replication traffic or by using SCAN with keyspace notifications. During this period, the application continues operating normally. Once the replication lag approaches zero, we switch the application to Dragonfly and complete the migration with no downtime. A major difference is that we can keep RedisShake running for a while even after the cutover, so that any last writes landing on the old Redis instance (potentially from old application instances during the rollout) are also synced over.
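Before flipping traffic, it is worth confirming that the two sides have actually converged. Below is a small, hypothetical sanity check with `redis-py`; the host names are placeholders, and it only compares key counts and a sample of keys rather than performing a full diff:

import redis

old = redis.Redis(host='redis-host', port=6379)
new = redis.Redis(host='dragonfly-host', port=6379)

# Compare overall key counts between the source and the target.
print('key counts:', old.dbsize(), new.dbsize())

# Spot-check a sample of keys: each should exist on both sides with the same type.
sample = [key for key, _ in zip(old.scan_iter(count=1000), range(100))]
mismatches = [k for k in sample if not new.exists(k) or old.type(k) != new.type(k)]
print('mismatched sample keys:', mismatches)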

Live Migration with RedisShake | ElastiCache to Dragonfly Cloud

RedisShake also shines in complex topologies or deployments, such as Redis Cluster or Dragonfly Swarm, where simple replication or Sentinel may not be feasible. It works independently of the application.

As robust and sophisticated as RedisShake is, there are still a few trade-offs to keep in mind:

  • It requires deploying and operating an additional service during the migration process.
  • RedisShake is versatile, but we need to make sure the correct configuration is used for the source and target at hand.

Despite these considerations, RedisShake remains one of the most reliable ways to achieve a seamless migration to Dragonfly, especially in production systems where continuity is essential.


Pattern 5: Dual Write and Gradual Cutover

Some teams prefer to migrate with absolute safety and complete visibility into how the new system behaves under real production traffic. In these cases, the dual write pattern offers a cautious and controlled path. Instead of switching from Redis to Dragonfly all at once, we can introduce Dragonfly alongside Redis and gradually shift responsibility to it over time.

In this setup, the application writes updates to both Redis and Dragonfly but continues reading from Redis initially. As Dragonfly warms up and accumulates state, we begin redirecting a small portion of reads to Dragonfly. Over time, as confidence grows and metrics remain stable, we increase the proportion of reads served by Dragonfly until it becomes the primary cache. Redis can then be decommissioned once it is no longer needed.

Migration with Dual Write

This approach allows teams to validate performance, correctness, and operational stability using live traffic, without risking a sudden cutover. It also provides clear rollback options at every step. If something appears off, we simply route reads back to Redis until the issue is resolved.
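A minimal sketch of that application-side logic might look like the following; the class, the ratio parameter, and the host names are hypothetical illustrations rather than an existing library API:

import random
import redis

class DualWriteCache:
    """Writes go to both stores; reads shift gradually to Dragonfly via a ratio."""

    def __init__(self, read_from_dragonfly_ratio=0.0):
        self.redis = redis.Redis(host='redis-host', port=6379)
        self.dragonfly = redis.Redis(host='dragonfly-host', port=6379)
        self.ratio = read_from_dragonfly_ratio  # 0.0 = all reads from Redis, 1.0 = all from Dragonfly

    def set(self, key, value, ttl=None):
        # Dual write: keep Dragonfly warm with live production data.
        self.redis.set(key, value, ex=ttl)
        self.dragonfly.set(key, value, ex=ttl)

    def get(self, key):
        # Route a growing share of reads to Dragonfly; lower the ratio to roll back.
        store = self.dragonfly if random.random() < self.ratio else self.redis
        return store.get(key)

# Start small (e.g. 5% of reads) and raise the ratio as metrics stay healthy.
cache = DualWriteCache(read_from_dragonfly_ratio=0.05)

In practice, the ratio would likely come from a feature flag or runtime configuration so it can be adjusted (or rolled back) without a redeploy.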

The dual write pattern is especially suitable for organizations with strict SLAs or workloads where even brief inconsistency is unacceptable. It offers the strongest guarantees around safety and predictability, at the cost of added application logic and a longer migration timeline. In fact, quite a few large Dragonfly Cloud customers performed their migration this way. The Dragonfly team fully supports and is happy to help with this route, as these migrations often involve TB-scale data stores serving more than 100M RPS in production.

Key considerations include:

  • Requires updates to the application to dual write and manage gradual read shifts.
  • Increases temporary operational overhead while both systems run in parallel.
  • Works best when the team has robust observability and clear validation criteria.

For teams that value caution for large-scale data stores running heavy loads, this pattern provides a high-confidence path to adopting Dragonfly with zero surprises.


Choosing the Right Pattern

Selecting a migration approach often comes down to how much downtime you can allow, whether your data must be preserved, and how much complexity you are willing to introduce. The table below summarizes the strengths and trade-offs of each pattern to help you identify which one aligns best with your workload and operational constraints:

| Pattern | Keeps Existing Data | Downtime Requirement | Complexity | Ideal Use Case |
| --- | --- | --- | --- | --- |
| Cold-Start Cutover | No | Very low | Very low | Ephemeral data, simplest migration path |
| Snapshot Porting | Yes | Short, planned downtime | Low | Durable data that can tolerate a maintenance window |
| Replica Promotion | Yes | Minimal | Medium | Teams already using Redis Sentinel, live migration without new tooling |
| Live Clone with RedisShake | Yes | Zero | Medium | Large datasets or high-throughput systems needing seamless cutover |
| Dual Write | Yes | Zero | Medium to high | Strict SLAs, safety-first adoption, gradual validation under production traffic |


Final Thoughts

Modern workloads, like heavy caching or AI inference with large contexts, keep pushing teams toward better performance, easier operations, and more efficient data infrastructure like Dragonfly. With Dragonfly’s high compatibility with Redis (the protocol, commands, and snapshot file format), switching from Redis to Dragonfly does not have to be daunting. The process is becoming more predictable and accessible to teams of all sizes now that there are clear migration patterns and a growing ecosystem of tools.

Each pattern we explored reflects a different set of priorities. If you understand these patterns, you can choose an approach that fits your system instead of forcing a one-size-fits-all procedure on every migration. Whether you need speed, safety, or scale, there is a migration pattern that can help you make a smooth and safe move to Dragonfly.

We also have detailed step-by-step migration guides for the patterns discussed above if you want to learn more. These guides will help you turn the right approach into a confident rollout that is ready for production.
