The Value Benchmark: Dragonfly Cloud Beats ElastiCache on Every Dollar
Dragonfly outperforms ElastiCache in throughput, latency, and cost-efficiency, proving much higher value per dollar.
June 26, 2025

In-Memory Data: Why Performance-Per-Dollar Matters
In today’s data-driven world, performance is currency. A fraction of a second can cost you a huge number of customers. Whether you’re powering real-time analytics, a high-traffic e-commerce platform, or a low-latency gaming backend, the choice of an in-memory data store can make or break your application’s performance and budget.
For years, ElastiCache has been the go-to managed in-memory data store solution for AWS users, offering Redis and now Valkey as a reliable, scalable service. But newer contenders like Dragonfly Cloud are challenging the status quo with fundamental architectural improvements, promising not just better performance but much higher per-dollar value.
So, how do they stack up? We ran rigorous benchmarks comparing Dragonfly Cloud against ElastiCache under identical conditions, and the results were striking. Dragonfly Cloud delivers:
- 81% higher value per dollar for storage. Yes, that’s nearly twice as much capacity/dollar.
- 193% higher write throughput. Again, yes, that’s nearly triple the ops/sec for writes.
- 40% higher read throughput and 43% lower P99 latency.
If you’re optimizing for raw efficiency (more throughput with consistent latency and fewer dollars wasted), you should keep reading, as this post might just change your infrastructure strategy.
The Contestants: Dragonfly Cloud vs. AWS ElastiCache
Before diving into benchmarks, let’s meet the competitors. Both Dragonfly Cloud and AWS ElastiCache promise high-performance, fully managed in-memory data storage, but they take very different approaches under the hood.
Dragonfly Cloud: The Performance-Optimized Upstart
Dragonfly represents a modern rethinking of in-memory data stores, designed from the ground up to leverage multi-core processors. Its multi-threaded shared-nothing architecture breaks free from the single-threaded limitations that constrain traditional Redis-like solutions, enabling true parallel processing of operations. The system employs innovative data structures like Dashtable to dramatically improve memory efficiency, packing more data into each GB of RAM. As a fully Redis-compatible solution, it offers a frictionless migration path for existing applications. Dragonfly particularly shines in the most demanding scenarios, such as extremely high-traffic caching layers where consistent low latency is also non-negotiable. Dragonfly Cloud is the managed offering for Dragonfly, which further reduces operational complexity and provides features like seamless scaling, automated backups, and managed Dragonfly Swarm distributed clusters.
ElastiCache for Valkey: The Established Leader with the New Fork
ElastiCache for Valkey builds on Redis’s proven technology while improving the open-source fork, Valkey, and adding AWS’s managed service reliability. As the default in-memory solution for many AWS users, it maintains full Redis compatibility while benefiting from AWS’s cloud infrastructure. Since forking from Redis, Valkey has been incrementally improving its underlying design to achieve better throughput and stability while continuing to use a single thread for data operations. ElastiCache for Valkey works well for many applications but can still limit performance for heavy workloads. Where it truly excels is in its native AWS integration, offering automatic compatibility within the AWS ecosystem. This makes ElastiCache a natural choice for teams that run their entire stack on AWS and have common in-memory data use cases.
Key Differentiator: Performance vs. Convenience?
Dragonfly leverages architectural innovations and data structure advancements to deliver significantly higher ops/sec on modern multi-core servers, while ElastiCache prioritizes ecosystem familiarity and AWS’s operational polish. The question isn’t just which is more performant; it’s which delivers better per-dollar value for heavy workloads.
Benchmark Setup
To understand how these solutions perform in real-world conditions, we designed the benchmark around a familiar scenario: a high-traffic, fast-growing application requiring fast and reliable caching, just like the one you are building right now.
Testing Methodology
To evaluate performance per dollar, we conducted the test as follows. First, we loaded both data stores to their absolute memory limits until used_memory reached maxmemory (both values can be obtained by issuing the INFO MEMORY command). This reveals how much actual data each could store.
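As a quick sketch of this check, the fill ratio can be computed directly from INFO MEMORY output. The sample values below are the Dragonfly numbers from the appendix; in practice, the input would come from a live connection instead:

```shell
# Compute the fill ratio from INFO MEMORY-style output.
# In practice the input would come from: redis-cli -h "$HOST" INFO MEMORY
printf 'used_memory:99824836672\nmaxmemory:100000000000\n' |
  awk -F: '/^used_memory:/ {u=$2} /^maxmemory:/ {m=$2} END { printf "%.1f%% full\n", u / m * 100 }'
# Prints: 99.8% full
```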
Note that memory can often be measured in gibibytes (GiB, base-2) or gigabytes (GB, base-10). 1GiB is roughly 7.4% higher than 1GB. For consistency, we’ll use GB throughout this post.
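For example, the GiB-to-GB conversion used below for the ElastiCache instance can be sketched as a small shell helper:

```shell
# Convert base-2 GiB to base-10 GB: multiply by 2^30, divide by 10^9.
gib_to_gb() { awk -v g="$1" 'BEGIN { printf "%.2f\n", g * 1073741824 / 1e9 }'; }

gib_to_gb 105.81   # the cache.r7g.4xlarge spec -> 113.61 (GB)
gib_to_gb 1        # -> 1.07, i.e. ~7.4% more
```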
# Command for data ingestion.
# For ElastiCache, -n 14000000
export cmd="setex __key__ 86000 __data__"
./dfly_bench -h $HOST -c 10 --command "$cmd" --qps=0 -n 20000000 \
--pipeline=25 --key_maximum=1600000000 -d 16 --proactor_threads=8 \
--probe_cluster=false
After that, using identical benchmark configurations, we measured read performance at scale with the command below.
# Command for benchmarking read performance.
export cmd="mget __key__ __key__ __key__ __key__ __key__"
./dfly_bench -h $HOST -c 10 --command "$cmd" --qps=0 -n 1000000 \
--pipeline=30 --key_maximum=1600000000 --proactor_threads=8 \
--probe_cluster=false
We also captured throughput and P99 latencies for writes and reads throughout the process. Let’s break down the numbers and see which solution delivers the best performance-per-dollar value.
Scenario: 100GB Cache for a Fast-Growing Application
A 100GB cache size offers an ideal balance for many growth-stage companies. This capacity provides sufficient headroom to handle significant traffic volumes while effectively caching both application data and user sessions. On the other hand, it’s small enough to remain cost-efficient before considering sharding or complex scaling solutions—at least in the case of Dragonfly.
To ensure an apples-to-apples comparison, we matched both services as closely as possible:
- Dragonfly Cloud: a 100GB data store with the Standard Compute Tier.
- AWS ElastiCache for Valkey: a cache.r7g.4xlarge instance with Valkey v8.0.1 installed, which has 113.61GB of memory (converted from 105.81GiB in the specification). It is the closest available instance type in terms of memory capacity.
- Both data stores are deployed in the AWS us-east-1 region to eliminate any variability.
Cost Efficiency: Breaking Down the Dollar
Let’s set up the price baseline first. In Dragonfly Cloud, a 100GB data store with the Standard Compute Tier is priced at $800/month, which undercuts ElastiCache’s $1,020/month for the cache.r7g.4xlarge instance with a similar memory capacity. Remember, we are comparing on-demand rates within the AWS us-east-1 region, and that’s already a straightforward savings of more than 20%.
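The headline savings is easy to verify with one line of shell arithmetic:

```shell
# Savings relative to ElastiCache: (1020 - 800) / 1020
awk 'BEGIN { printf "%.1f%% cheaper\n", (1020 - 800) / 1020 * 100 }'
# Prints: 21.6% cheaper
```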
“Only 20% cheaper?” you might think. The real value emerges when we factor in what each dollar actually buys.
Capacity Efficiency: What Your Dollar Actually Stores
While the 20% price difference is compelling, the real value becomes clear when we examine how much data each dollar actually stores. When pushed to their limits, the Dragonfly Cloud data store held 920 million keys, which is a staggering 42% more than ElastiCache’s 645 million, despite similar nominal memory capacity. This efficiency advantage comes from fundamental architectural differences in how each service utilizes memory.
dragonfly$> DBSIZE
920775662
elasticache-valkey$> DBSIZE
645367293
First, there’s a “well-known hidden fee” in the ElastiCache fine print. ElastiCache instances reserve 25% of memory by default for background tasks like snapshots, leaving only 85.21GB available out of 113.61GB. Dragonfly Cloud, on the other hand, dedicates all 100GB you pay for directly to your data. This means you’re already getting 17% more usable memory with Dragonfly before even considering data structure advancements.
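The reserved-memory math is straightforward to reproduce, and it matches the maxmemory value shown in the appendix:

```shell
# 25% of the instance memory is reserved by default: usable = 113.61 * 0.75
awk 'BEGIN { printf "%.2f GB usable\n", 113.61 * 0.75 }'
# Prints: 85.21 GB usable
```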
Additionally, Dragonfly’s modern storage engine provides compounding gains. Its Dashtables reduce hash table overhead, while B+ Tree-backed sorted sets store data more compactly than the traditional skiplist-based implementation. Innovations like these, combined, explain how Dragonfly achieves such dramatic density.
| Metric | Dragonfly Cloud | ElastiCache for Valkey | Advantage |
|---|---|---|---|
| Instance Memory | 100GB | 113.61GB | |
| Usable Memory | 100GB | 85.21GB | 17% More Memory |
| Max Keys Stored | 920,775,662 | 645,367,293 | 42% More Keys |
| Monthly Cost (On-Demand) | $800 | $1,020 | 20% Cheaper |
| $ / Million Keys Stored / Month | $0.87 | $1.58 | 81% Higher Value per Dollar |
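The per-million-keys figures fall out directly from the monthly cost and the measured key counts:

```shell
# $/million keys/month = monthly cost / (max keys stored / 1e6)
awk 'BEGIN {
  printf "Dragonfly: $%.2f  ElastiCache: $%.2f\n",
         800 / (920775662 / 1e6), 1020 / (645367293 / 1e6)
}'
# Prints: Dragonfly: $0.87  ElastiCache: $1.58
```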
Put simply, Dragonfly stores nearly twice as much data per dollar, transforming what initially appears as a 20% price advantage into a far more substantial value proposition for data-intensive workloads.
Throughput Efficiency: Getting More Ops Per Dollar
In terms of throughput, Dragonfly Cloud also demonstrated advantages that amplify its price savings. For write operations, Dragonfly achieved 1,147,446 ops/sec, which is a 193% increase over ElastiCache’s 391,618 ops/sec. The read performance showed similar dominance, with Dragonfly delivering 350,877 ops/sec compared to 250,783 ops/sec (40% higher). What makes these results remarkable is the underlying resource allocation.

AWS cache.r7g.4xlarge Instance Specification
ElastiCache’s cache.r7g.4xlarge instance manages 7.1GB of memory per CPU core, while Dragonfly’s Standard Compute Tier manages 100GB with fewer CPU cores. Despite this heavier memory burden, Dragonfly still delivered substantially higher throughput across both read and write operations.
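Assuming the 16 vCPUs listed for cache.r7g.4xlarge, the memory-per-core figure works out as:

```shell
# Memory managed per core on cache.r7g.4xlarge: 113.61GB / 16 vCPUs
awk 'BEGIN { printf "%.1f GB per core\n", 113.61 / 16 }'
# Prints: 7.1 GB per core
```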
| Metric | Dragonfly Cloud | ElastiCache for Valkey | Advantage |
|---|---|---|---|
| Writes / Sec | 1,147,446 | 391,618 | 193% Higher |
| Reads / Sec | 350,877 | 250,783 | 40% Higher |
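The throughput advantages can be reproduced from the raw ops/sec numbers:

```shell
# Relative throughput advantage: (Dragonfly / ElastiCache - 1) * 100
awk 'BEGIN {
  printf "writes: +%.0f%%  reads: +%.0f%%\n",
         (1147446 / 391618 - 1) * 100, (350877 / 250783 - 1) * 100
}'
# Prints: writes: +193%  reads: +40%
```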
The numbers reveal an undeniable pattern: Dragonfly doesn’t just cost less—it delivers more operations per CPU cycle and more throughput per dollar spent. Whether handling sudden write spikes or sustaining high read volumes, this efficiency compounds to create substantially better value at scale.
Latency Efficiency: Consistent Speed Under Pressure
Throughput tells only half the story, as latency matters equally too. At the critical P99 percentile, Dragonfly maintained a 43% latency advantage, delivering reads in 8.46ms compared to ElastiCache’s 14.79ms. Remarkably, this advantage persists despite Dragonfly’s cores handling more memory per CPU while keeping higher throughput, as discussed above.
| Metric | Dragonfly Cloud | ElastiCache for Valkey |
|---|---|---|
| Read Latency Min | 211μs | 155μs |
| Read Latency Avg | 5,672μs | 8,532μs |
| Read Latency P99 | 8,460μs | 14,791μs |
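The 43% P99 claim follows from the raw latency summaries in the appendix:

```shell
# P99 improvement: (14791 - 8460.42) / 14791
awk 'BEGIN { printf "%.0f%% lower P99\n", (14791 - 8460.42) / 14791 * 100 }'
# Prints: 43% lower P99
```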
Conclusion: The Price-Performance Revolution
Let’s be blunt: in the world of cloud databases, you’re used to trading performance for cost, or cost for convenience. Dragonfly Cloud just broke that rule.
With nearly twice the per-dollar value for storage capacity, triple the write throughput, 40% higher read throughput, and 43% lower P99 latency, this isn’t just an incremental upgrade; it’s a rethink of what in-memory data stores should deliver. The math doesn’t lie: whether you’re scaling a viral app or optimizing enterprise workloads, Dragonfly gives you more ops/sec, more data capacity, and more savings with no compromises.
AWS ElastiCache makes sense if you prioritize AWS-native integrations above all else. But here’s the reality: with VPC peering connections, Dragonfly Cloud slots seamlessly into your AWS infrastructure, delivering identical network latency and security, just better performance at a lower cost. This isn’t about choosing between ecosystems. It’s about choosing better engineering.
Ready to see the difference? Your budget (and your users) will thank you.
Appendix | Benchmark Details
We’ve covered the key benchmarks and cost comparisons in this analysis, but for those who want to explore further, here are additional details.
After data ingestion, here are the outputs of running the INFO MEMORY command on Dragonfly Cloud and ElastiCache, respectively:
# Dragonfly Cloud
used_memory: 99824836672
used_memory_human: 92.97GiB
maxmemory: 100000000000
maxmemory_human: 93.13GiB
# ...
# ElastiCache for Valkey
used_memory: 85207157632
used_memory_human: 79.36GiB
maxmemory: 85207398912
maxmemory_human: 79.36GiB
# ...
Below are the summaries of running the read test on Dragonfly Cloud and ElastiCache, respectively:
# Dragonfly Cloud
Total time: 3m48.788175872s.
Overall number of requests: 80000000, QPS: 350877, P99 lat: 8460.42us
Latency summary, all times are in usec:
Count: 80000000 Average: 5671.8667 StdDev: 5160529.48
Min: 211.0000 Median: 5623.1774 Max: 26645.0000
# ElastiCache for Valkey
Total time: 5m19.287117138s.
Overall number of requests: 80000000, QPS: 250783, P99 lat: 14791us
Latency summary, all times are in usec:
Count: 80000000 Average: 8532.1179 StdDev: 12355929.20
Min: 155.0000 Median: 7381.6984 Max: 41444.0000