This post explores the limitations of horizontal scaling in terms of cluster reliability, load distribution, and cloud over-commitment. It also outlines design decisions that were made to allow Dragonfly, a drop-in Redis replacement, to scale vertically in order to handle heavy workloads and large data volumes on a single instance. By adopting Dragonfly's vertical scaling capabilities, organizations can achieve improved performance, cost savings, and operational efficiency in their distributed systems.
We are thrilled to announce the latest addition to our in-memory data store - the Kubernetes operator for Dragonfly!
A thorough benchmark comparison of throughput, latency, and memory utilization between Redis and Dragonfly.
2022 saw the emergence of a new technology and database project, Dragonfly, as well as the founding of a new company (DragonflyDB) to shepherd and evolve it.
Balance is essential in life. When our focus is limited to improving a single aspect of our life, we weaken the whole system.
Infrastructure should be boring. Boring is good. Boring means that it just works, and you don’t have to worry about it. A year ago, we went on a quest to build a boring in-memory store.
Dragonfly crossed the 10K GitHub stars milestone in just 75 days. What an incredible start for our journey!
I talked in my previous post about Redis eviction policies. In this post, I would like to describe the design behind Dragonfly's cache.
Let’s talk about the simplicity of Redis. Redis was initially designed as a simple store, and it seems that its APIs achieved this goal.
Following my previous post, we are going to start with the "hottest potato" - the single-threaded vs. multi-threaded argument.
Over the last 13 years, Redis has become a truly ubiquitous in-memory store that has won the hearts of numerous DevOps practitioners and software engineers.