The Definitive In-Memory Data Store Benchmark Report
As engineering teams face increasing pressure to deliver sub-millisecond performance at scale, choosing the right in-memory data store is no longer a minor infrastructure decision. This benchmark report provides a rigorous, side-by-side analysis of Dragonfly, Redis, and Valkey across throughput, latency, memory efficiency, and scalability, tested on AWS and Google Cloud under real-world, high-concurrency conditions.
Inside this report, you'll find:
- How Dragonfly achieves 25x the throughput of Redis on identical hardware
- Why Valkey's single-threaded bottleneck limits scaling, and how Dragonfly delivers 2.4x to 4.6x higher throughput as core counts grow
- How Dragonfly's memory efficiency reduces storage requirements by up to 45% versus Valkey for sorted set workloads
- Why Dragonfly's multi-threaded, shared-nothing architecture unlocks 61% more throughput when moving to newer cloud hardware, with no retuning required
- The hidden operational costs of Redis snapshotting and how Dragonfly eliminates dangerous memory spikes entirely
Whether you're evaluating a migration from Redis, weighing Valkey as an alternative, or looking to right-size your infrastructure spend, this report gives you the data to make a confident decision.
Featured In-Memory Data Resources

Why Multi-Terabyte Redis Deployments Are Due for a Rethink
At a certain scale, Redis starts costing you more than just money.

Building a Self-Fuzzing CI Pipeline for Dragonfly
This post covers how we went from ad-hoc manual fuzzing to a fully automated CI pipeline where an LLM generates targeted attack vectors for every pull request.

Vector Search Just Got Faster
Dragonfly v1.37 delivers up to 7x throughput gains and 65x lower latency.
