Dragonfly

Why Redbus Skipped Valkey and Bet on a New Cache Architecture

Redbus, one of the world's largest travel ticketing platforms, migrated a large distributed cache from ElastiCache to Dragonfly for lower costs and better efficiency.

April 14, 2026


At a Glance

Company

Redbus

Use Case

Distributed caching across a wide range of production services

Previous Stack

AWS ElastiCache (Redis)

Results

Targeting at least a 40% reduction in caching spend versus ElastiCache, with roughly 90% of the cache fleet migrated to Dragonfly Cloud as of early 2026.



The Challenge: A Forced Move, and a Chance to Rethink Caching

Redbus runs the world's largest online bus ticketing platform, with caching infrastructure that sits in the hot path of nearly every user-facing service. The team had standardized on AWS ElastiCache for Redis over the years, and it had served them well.

The trigger to look elsewhere came from the Redis license changes in 2024. With Valkey emerging as the successor fork of Redis and the default path forward on ElastiCache, Redbus made a deliberate choice not to simply follow it. Instead, the team used the moment to evaluate what else was in the market, specifically looking for options that could deliver meaningful performance and scalability improvements over their existing setup. Valkey, being a like-for-like fork, wasn't going to change the fundamental architecture they were already running. If they were going to migrate anyway, it was worth seeing whether a different architecture could unlock gains that Valkey alone couldn't.

There was also a long-standing efficiency concern. Redis is single-threaded at its core, so even on multi-core instances, most of the available CPU sat idle. Scaling meant adding nodes rather than making better use of the ones they already had, which drove up cost without improving utilization. For a team focused on performance-per-dollar across a global footprint, the license change was the push to finally address it.

Why Dragonfly

Dragonfly's architecture was what made the team want to evaluate it seriously. The thread-per-core, shared-nothing design sidesteps the single-threaded bottleneck that constrains Redis. Each shard is managed by a dedicated thread, locks and synchronization are minimized, and inter-thread communication happens through message passing rather than shared memory contention. On paper, this was exactly the kind of architecture Redbus needed to get more out of the hardware it was already paying for.
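To make the idea concrete, here is a minimal sketch (not Dragonfly's actual implementation, which is written in C++) of a thread-per-core, shared-nothing cache: the keyspace is hash-partitioned across shards, each shard's data is touched only by its own worker thread, and requests arrive via a message queue instead of taking locks on shared state.

```python
# Toy sketch of a shared-nothing, thread-per-shard cache.
# Each Shard owns its dict exclusively; other threads never touch it
# directly -- they send messages to the shard's inbox instead.
import queue
import threading

NUM_SHARDS = 4  # stand-in for one shard per CPU core

class Shard:
    def __init__(self):
        self.data = {}             # owned exclusively by this shard's thread
        self.inbox = queue.Queue() # message passing replaces shared-memory locks
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op, key, value, reply = self.inbox.get()
            if op == "set":
                self.data[key] = value
                reply.put(True)
            elif op == "get":
                reply.put(self.data.get(key))

shards = [Shard() for _ in range(NUM_SHARDS)]

def shard_for(key):
    # Hash-partition the keyspace so every key has exactly one owner shard.
    return shards[hash(key) % NUM_SHARDS]

def cache_set(key, value):
    reply = queue.Queue()
    shard_for(key).inbox.put(("set", key, value, reply))
    return reply.get()

def cache_get(key):
    reply = queue.Queue()
    shard_for(key).inbox.put(("get", key, None, reply))
    return reply.get()

cache_set("route:blr-hyd", "cached-search-results")
print(cache_get("route:blr-hyd"))  # -> cached-search-results
```

Because only one thread ever mutates a given shard, there is nothing to lock on the hot path; scaling out means adding shards (cores), which is the property that lets Dragonfly use all the cores Redis leaves idle.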

A few other design choices reinforced the decision. Dragonfly uses io_uring for snapshotting instead of the traditional fork plus copy-on-write approach, which improves I/O performance during persistence. It replaces Redis hashtables with DashTables for better memory efficiency, uses B+ trees for sorted sets, and supports SSD-based tiering for string values so datasets can grow beyond available RAM. The team also valued that Dragonfly's source code is open and available on GitHub, giving them full visibility into the engine internals before committing to a production migration.

Evaluation and Benchmarking

Before migrating anything, the Redbus team ran a thorough head-to-head benchmark of Dragonfly against Redis and Valkey. The setup used an 8-core, 64 GB RAM cache server with a separate client machine running memtier_benchmark. To keep the comparison fair, Valkey was configured with 6 I/O threads while Dragonfly was allowed to use all 8 cores.
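A comparable run can be reproduced with memtier_benchmark from a separate client machine; the exact flags Redbus used were not published, so the values below (thread, client, and payload counts) are illustrative assumptions, not their configuration:

```shell
# Illustrative memtier_benchmark invocation against the cache under test.
# Replace cache-host with the server's address; repeat per engine and
# per data structure (strings, sorted sets, hashes) for a fair comparison.
memtier_benchmark -s cache-host -p 6379 \
  --threads=4 --clients=50 \
  --ratio=1:1 --data-size=256 \
  --key-pattern=R:R --test-time=120
```

Running the client on a separate machine, as Redbus did, keeps the benchmark from competing with the cache for CPU and skewing the results.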

The results were consistent across data structures. On the operations that matter most to Redbus (strings, sorted sets, and hashes), Dragonfly delivered substantially higher throughput, lower average latency, and meaningfully lower memory usage than both Redis and Valkey. The table below summarizes the headline numbers from the Redbus benchmarks:

| Operation | Dragonfly vs. Redis | Dragonfly vs. Valkey |
| --- | --- | --- |
| String `SET` | +239% ops/sec, -70% avg latency, -29% RAM | +64% ops/sec, -38% avg latency, -29% RAM |
| String `GET` | +200% ops/sec, -67% avg latency, -29% RAM | +44% ops/sec, -30% avg latency, -29% RAM |
| Sorted Set `ZADD` | +220% ops/sec, -67% avg latency, -43% RAM | +61% ops/sec, -35% avg latency, -50% RAM |
| Sorted Set `ZRANGEBYSCORE` | +183% ops/sec, -64% avg latency, -40% RAM | +30% ops/sec, -22% avg latency, -45% RAM |
| Hash `HSET` | +258% ops/sec, -72% avg latency, -5% RAM | +95% ops/sec, -49% avg latency, -8% RAM |
| Hash `HMGET` | +102% ops/sec, -50% avg latency, -5% RAM | +5% ops/sec, -4% avg latency, -8% RAM |

Snapshot performance was another standout. On hash workloads, Dragonfly produced snapshots that were 84% smaller and took 94% less time to write than Redis, with similar gains over Valkey. This matters for any team running large caches where snapshot duration directly affects backup windows and recovery time.

Why Dragonfly Cloud Over Self-Hosted

The benchmarks were run on self-managed Dragonfly, but when it came time to roll out in production, Redbus chose Dragonfly Cloud. The reasoning was pragmatic. The team was new to the Dragonfly tech stack, and standing up a dedicated group to manage the infrastructure would have pushed the total cost of ownership higher than the managed option. More importantly, it would have pulled engineering focus away from the domain problems that actually differentiate the Redbus product. Dragonfly Cloud let them capture the architectural benefits without taking on the operational burden.

Migration and Adoption

Setting up Dragonfly was straightforward. It's a lightweight binary with a configuration process similar to Redis, and the team could continue using redis-cli to interact with it. Dragonfly doesn't yet support every Redis command, but the feature set already covered the vast majority of Redbus's use cases.
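That drop-in compatibility is easy to verify locally. A minimal sketch, using Dragonfly's published container image and the stock redis-cli (the key names here are made up for illustration):

```shell
# Start a local Dragonfly instance on the standard Redis port.
docker run -d -p 6379:6379 docker.dragonflydb.io/dragonflydb/dragonfly

# Existing Redis tooling works unchanged against it.
redis-cli SET route:blr-hyd cached-search-results
redis-cli GET route:blr-hyd
```

Because Dragonfly speaks the Redis wire protocol, application code using standard Redis client libraries typically only needs its connection string changed.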

The migration has touched almost every corner of the Redbus platform. Services now running on Dragonfly Cloud include Bus Search, Bus Tracking, the Personalization Service, the Ticket Transaction Service, the Dynamic Price Engine, the Offer Engine, Fraud Detection, the Bus Operators Service, Seat Seller GDS, the Inventory Service, the Data Science Platform, the Bus Rating Service, the Accounting Service, and the Central Notification Service, among others. These span the full range of cache access patterns, from high-read search and pricing workloads to transactional and real-time tracking use cases.

As of early 2026, approximately 90% of the total cache fleet has been migrated to Dragonfly Cloud.

Results

The headline outcome is cost. Redbus is targeting at least a 40% reduction in caching spend compared to its previous AWS ElastiCache costs under a savings plan. Compared to on-demand ElastiCache pricing, the savings are significantly higher.

The cost story goes beyond the raw per-instance comparison. As part of the migration, the team identified additional optimization opportunities including right-sizing instances and reducing cross-zone network data transfer costs, both of which compound on top of the architectural efficiency gains from Dragonfly itself.

Beyond cost, the team is getting meaningfully better hardware utilization out of every node. The benchmark gains in throughput, latency, and memory efficiency translate directly into needing fewer resources to serve the same workload, which is exactly what Redbus set out to achieve when the project began.

Advice for Engineering Teams Evaluating Redis Alternatives

If there's one takeaway from the Redbus experience, it's that a forced migration is an opportunity, not a tax. When the Redis license change made it clear that some kind of migration was coming, the easy path would have been to move to Valkey and call it done. Instead, the team used the moment to benchmark the broader market against their own workloads, and found an architecture that delivered meaningfully better performance-per-dollar than a like-for-like Valkey swap would have. The lesson isn't that Dragonfly is right for everyone. It's that every team should benchmark against their own workload, on infrastructure that resembles production, and let the data make the decision.

