Redis and Dragonfly Architecture Comparison

This blog post provides a comparison of Redis and Dragonfly architectures, highlighting Dragonfly's advanced solutions for overcoming Redis's limitations in performance, scalability, and memory efficiency.

March 26, 2024


Redis, the in-memory data store synonymous with speed, has earned its reputation for blazing-fast performance. But like any good sprinter, it shines in short bursts. When the race gets long and the data mountains pile up, Redis starts to struggle.

Horizontal scaling through clusters adds a layer of abstraction and imposes restrictions on key distribution, demanding intricate management. In addition, inefficient memory management can lead to sudden memory spikes, potentially causing out-of-memory errors. These limitations translate into higher infrastructure costs and maintenance overhead, turning that initial speed boost into a long-term burden.

Are you frustrated by these growing pains? We were too. That's why we created Dragonfly, the next-generation in-memory data store built to overcome Redis's limitations. Dragonfly retains the speed you love but adds the vertical scalability you need to handle massive datasets with ease. Plus, its memory-efficient architecture minimizes spikes and keeps your infrastructure costs under control.

Join us as we take a deep dive into the challenges of Redis and unveil the innovative solutions that Dragonfly brings to the table. We'll compare and contrast the differences between Redis and Dragonfly architectures. Speed is important, but scalability and efficiency are the keys to winning the long-distance race.

Dragonfly Shared-Nothing Architecture

Redis is a single-threaded, event-driven system by default. With such an approach, the architecture is quite simple, but this simplicity comes at a cost. As workloads grow, the single thread becomes a bottleneck, limiting performance and scalability. Dragonfly takes a different flight path, leveraging a multi-threaded architecture and a shared-nothing approach to deliver impressive performance and scalability. So, how does Dragonfly achieve this?

  • Sharding and Parallel Processing: Dragonfly divides the entire dataset into smaller, independent sections called shards. Each shard is assigned to a dedicated thread, enabling parallel processing of requests. This eliminates the single-thread bottleneck that plagues Redis, allowing Dragonfly to handle significantly higher loads.

  • Minimal Locking and Synchronization: Since a single key in a shard is managed exclusively by one dedicated thread, the shared-nothing architecture minimizes the need for complex locking and synchronization mechanisms. This further boosts performance and reduces potential bottlenecks.

  • Asynchronous Operations and Responsiveness: Dragonfly doesn't stop there. It employs asynchronous operations (leveraging io_uring under the hood) for tasks like disk I/O. This interleaved execution ensures that long-running tasks like snapshotting don't affect the responsiveness of other operations, keeping the system snappy even under heavy load.

  • Abstract I/O and Efficient Fibers: Dragonfly leverages a custom framework that abstracts I/O operations, utilizing stackful fibers for efficient task management within each thread. This combination optimizes resource utilization and ensures the smooth handling of concurrent requests.
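The sharding idea in the first bullet can be sketched in a few lines of Python. This is purely an illustration, not Dragonfly's actual code (which is C++ and uses its own hash function); the shard count and helper names are assumptions. Each shard is a plain dictionary owned by exactly one worker, and a stable hash routes every key to its owning shard, so the data itself needs no locks.

```python
# Illustrative shared-nothing sharding: each shard is a plain dict owned
# by exactly one worker thread. NUM_SHARDS and the hash function are
# assumptions for this sketch, not Dragonfly's implementation.
import hashlib

NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> int:
    # Stable hash so a given key always routes to the same shard/thread.
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def set_key(key: str, value: str) -> None:
    shards[shard_for(key)][key] = value

def get_key(key: str):
    return shards[shard_for(key)].get(key)

set_key("user:1", "alice")
set_key("user:2", "bob")
```

Because a key's shard is fixed by its hash, a request for that key can be dispatched to the owning thread without coordinating with any other thread.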


In the graph above, imagine our Dragonfly server process spawning 4 threads, where threads 1 through 3 handle I/O (i.e., manage client connections) and threads 2 through 4 manage data shards. Thread 2, for example, divides its CPU time between handling incoming requests and processing data operations on the shard it owns. In general, any thread can have many responsibilities that require CPU time. Data management and connection handling are only two examples of such responsibilities.

In essence, Dragonfly's multi-threaded architecture with a shared-nothing approach and focus on asynchronous operations unlock significant performance and scalability advantages compared to Redis, making it a compelling choice for demanding workloads.

Request Processing

When a client submits a request to Redis, the system parses the request and builds an object (using the command pattern) that encapsulates all relevant information. The lifespan of this object depends on the nature of the command: non-blocking or blocking. Non-blocking commands are processed immediately, whereas blocking commands are retried by Redis until their specific requirements are fulfilled.

Dragonfly adopts a transactional model that aligns with its architecture, facilitating asynchronous request processing. While this transactional approach may marginally raise average latency, the combination of asynchronous operations and multi-threading significantly boosts overall throughput and reduces tail latency. Unlike Redis, which processes commands sequentially, Dragonfly can execute multiple commands simultaneously. Dragonfly also optimizes CPU usage by suspending a fiber when a data access or modification must wait for required data, rather than burning cycles on repeated unsuccessful attempts.
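The suspend-instead-of-retry behavior can be illustrated with Python's asyncio standing in for Dragonfly's stackful fibers (the `BlockingList` name and API here are ours, purely illustrative): a blocking pop parks its task on a condition variable until data arrives, so no CPU is spent on failed polls, and the scheduler is free to run other tasks in the meantime.

```python
# Sketch of suspending a task until its data is available, rather than
# polling. asyncio tasks stand in for fibers; this is not Dragonfly code.
import asyncio

class BlockingList:
    def __init__(self):
        self.items = []
        self.cond = asyncio.Condition()

    async def blpop(self):
        async with self.cond:
            # Suspend here; other tasks run while this one waits.
            await self.cond.wait_for(lambda: self.items)
            return self.items.pop(0)

    async def push(self, item):
        async with self.cond:
            self.items.append(item)
            self.cond.notify_all()

async def demo():
    q = BlockingList()
    consumer = asyncio.create_task(q.blpop())
    await asyncio.sleep(0)  # let the consumer start and suspend
    await q.push("hello")
    return await consumer

result = asyncio.run(demo())
```

The consumer consumes no CPU between its suspension and the `push` that wakes it, which is the point of the fiber-based design: waiting is cheap.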

For commands involving multiple keys that span different shards, Dragonfly splits them into separate subcommands. Each subcommand contains the keys belonging to a single shard and executes on that shard's thread, and the results are aggregated and sent back to the client. To ensure atomicity in multi-key operations, Dragonfly leverages the very lightweight locking (VLL) algorithm from recent academic research. It locks every key involved in a transaction to block concurrent access. Transactions attempting to access locked keys are queued until all required keys become available, ensuring orderly execution without conflicts.
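A rough sketch of the two mechanisms just described: splitting a multi-key command by shard, and a VLL-style per-key FIFO lock queue where a transaction may run only once it is at the front of every queue it joined. The shard count, hash, and function names are illustrative assumptions, not Dragonfly's implementation.

```python
# Illustrative only: per-shard splitting plus a simplified VLL-style
# lock table (per-key FIFO queues of transaction ids).
import zlib
from collections import defaultdict, deque

NUM_SHARDS = 4

def shard_for(key: str) -> int:
    return zlib.crc32(key.encode()) % NUM_SHARDS

def split_by_shard(keys):
    # Group keys by owning shard; each group becomes a subcommand
    # dispatched to that shard's thread.
    groups = defaultdict(list)
    for k in keys:
        groups[shard_for(k)].append(k)
    return dict(groups)

lock_queues = defaultdict(deque)  # key -> FIFO of transaction ids

def acquire(txn_id, keys):
    for k in keys:
        lock_queues[k].append(txn_id)
    # Runnable only if this txn is first in line for every key it needs.
    return all(lock_queues[k][0] == txn_id for k in keys)

def release(txn_id, keys):
    for k in keys:
        assert lock_queues[k].popleft() == txn_id
```

A transaction that fails the `acquire` check simply stays parked in the queues; releasing the blocking transaction moves it to the head, at which point it can run, giving the orderly, conflict-free execution described above.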

Persistence and Replication

Safely taking snapshots or creating replicas of in-memory data while handling concurrent writes is tricky. This is especially true for Redis, which can experience memory spikes and performance degradation during snapshots.

Redis employs a conventional snapshotting method based on copy-on-write memory management, using a fork system call. This approach can result in up to a doubling of memory usage during the process, because any change to data triggers the copying of the memory page containing that data. Consequently, under a high volume of write requests, nearly all in-use memory could be duplicated. Such a scenario can lead to system instability and degraded performance, particularly in environments with intense write activity. To avoid this, Dragonfly takes a different approach with several key features:

  • Versioning: Each data entry has a version number that increases with modifications. Snapshotting only includes entries with versions older than the snapshot's version, ensuring consistency and avoiding duplicates.

  • Asynchronous Serialization: Data is serialized and written to disk asynchronously, preventing blocking and allowing concurrent writes.

  • Pre-Update Hook: Updates trigger sending the entry to the serialization sink if the snapshot is active and the entry hasn't been serialized yet, ensuring all entries are captured once.

  • No Forking: Dragonfly avoids memory-intensive process forks, resulting in stable memory usage during snapshots.
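The interplay of versioning, the pre-update hook, and the background scan can be shown with a toy model. This is a single-threaded simplification with an in-memory "sink" standing in for the serialized snapshot file; the class and method names are ours, not Dragonfly's.

```python
# Toy model of version-based snapshotting: entries older than the
# snapshot version are serialized once, either by the background scan
# or by the pre-update hook when they are about to be overwritten.
class Store:
    def __init__(self):
        self.data = {}            # key -> (value, version)
        self.clock = 0            # global version counter
        self.snapshot_version = None
        self.sink = {}            # stand-in for the serialization sink

    def set(self, key, value):
        # Pre-update hook: if a snapshot is active and this entry's
        # version predates it, capture the old value before overwriting.
        if self.snapshot_version is not None and key in self.data:
            old_value, version = self.data[key]
            if version < self.snapshot_version and key not in self.sink:
                self.sink[key] = old_value
        self.clock += 1
        self.data[key] = (value, self.clock)

    def start_snapshot(self):
        self.clock += 1
        self.snapshot_version = self.clock
        self.sink = {}

    def snapshot_scan(self):
        # Background pass: serialize every entry whose version predates
        # the snapshot and that the hook hasn't already captured.
        for key, (value, version) in self.data.items():
            if version < self.snapshot_version and key not in self.sink:
                self.sink[key] = value
        self.snapshot_version = None
        return self.sink
```

Note how each entry lands in the sink exactly once, and no memory pages are duplicated wholesale: only values overwritten mid-snapshot are copied, which is why memory usage stays stable compared to fork-based copy-on-write.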

Cluster Mode

Redis and Dragonfly operate similarly in cluster mode: both data stores use a sharding technique to distribute keys across 16,384 hash slots. To determine a key's hash slot, its CRC16 value is calculated and then taken modulo 16,384. Each node in the cluster is responsible for managing a certain range of these hash slots. For communication, Redis nodes use a gossip protocol to keep up to date with the cluster's status and the allocation of hash slots.

As of now, Dragonfly only supports an emulated cluster mode or static cluster nodes. However, a comprehensive cluster management and migration system for Dragonfly is under development. The envisioned Dragonfly full cluster mode will feature a control plane that manages the entire cluster. While nodes will be aware of the cluster configuration, they will primarily communicate with the control plane, except during slot migration, when node-to-node communication may occur.
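The slot computation itself is small enough to write out. The Redis cluster specification uses the XMODEM variant of CRC16 (polynomial 0x1021, initial value 0), and the slot is the CRC16 value modulo 16,384:

```python
# CRC16-XMODEM as used for Redis cluster hash slots.
def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    return crc16(key.encode()) % 16384

slot = hash_slot("foo")  # matches CLUSTER KEYSLOT foo, i.e. 12182
```

(This sketch omits hash tags, the `{...}` syntax Redis uses to force multiple keys into the same slot.)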

Additionally, it's worth noting that using cluster mode is less common for Dragonfly than for Redis, since Dragonfly has much greater capabilities for vertical scaling.


I've outlined a fundamental comparison between Dragonfly's and Redis's architectural approaches, avoiding overly complex details. Specifically, I didn't delve into Dragonfly's hash table implementation and its more efficient memory management, as discussing our data structures in depth is beyond this article's scope. However, it's worth noting that Dragonfly can reduce memory usage by over 40% in certain cases.

The key takeaway is that Redis and Dragonfly follow distinctly different approaches. Dragonfly aims not only to boost performance but also to simplify operations, enhance memory efficiency, and reduce costs for users. Don't hesitate to explore Dragonfly's capabilities further, as it might just be the solution you've been searching for.
