Redis 8.0 vs. Valkey 8.1: A Technical Comparison

A deep technical comparison of Redis 8.0 vs. Valkey 8.1: threading models, performance benchmarks, feature sets, and licensing.

May 27, 2025


Beyond Licensing: A Deep Dive into Redis and Valkey

Redis has been the go-to in-memory data store for years—fast, flexible, and packed with features. Recently, however, the situation has become more complex. Two major licensing changes (first in 2024, then again in 2025) have split the ecosystem, sparking projects like Valkey, the Linux Foundation’s Redis fork. If you’re building latency-sensitive applications, these shifts matter: they’re forcing developers and architects to rethink sustainability, control, and where to bet their stack.

Now, the big question: Redis or Valkey? It’s not solely about licensing anymore—performance, scalability, and future-proofing all play a part. To help you decide, we’re breaking down Redis 8.0 and Valkey 8.1, the latest stable versions of each platform at the time of writing. We’ll compare architectures, features, licensing, and community backing—so you can pick the right tool without the hype.


Architecture: Core Data Processing & Network I/O

Threading architecture critically impacts latency, throughput, scalability, and resource utilization of in-memory data stores. While single-threading simplifies concurrency control, multi-threading unlocks modern hardware potential, making the threading model a key differentiator for high-throughput applications. Both Redis and Valkey balance these tradeoffs while evolving their approaches.

Redis’ New I/O Threading Implementation

Redis maintains simplicity through its famous single-threaded command execution and data manipulation, ensuring atomicity while avoiding contention, locks, and context switches. However, since version 6.0, Redis has incorporated multi-threaded I/O to enhance performance:

By delegating the time spent reading and writing to I/O sockets over to other threads, the Redis process can devote more cycles to manipulating, storing, and retrieving data.

While this was a significant step forward, the initial implementation didn’t fully realize the performance potential. Redis 8.0 addresses this with an improved new I/O threading model that delivers substantially better throughput:

When the io-threads parameter is set to 8 on a multi-core Intel CPU, we’ve measured up to 112% improvement in throughput.
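For reference, I/O threading is opt-in and controlled through the configuration file. A minimal redis.conf sketch (the thread count here is illustrative; a common rule of thumb is a few threads fewer than available cores, and older 6.x/7.x versions additionally gated reads behind io-threads-do-reads):

```
# redis.conf — enable multi-threaded I/O (illustrative value)
io-threads 8
```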

Valkey’s Asynchronous I/O Threading

As a fork of Redis, Valkey retains the same single-threaded core for command processing to preserve atomicity. However, Valkey 8.0 introduced enhanced asynchronous I/O threading, allowing concurrent handling of network reads/writes while keeping data manipulation single-threaded. Additionally, Valkey is able to intelligently distribute I/O tasks across multiple cores based on real-time usage, improving overall hardware utilization.

Performance Characteristics

The latest I/O threading implementations in both Valkey and Redis have delivered significant throughput improvements, though their performance characteristics differ based on hardware and workload. Valkey 8.0 demonstrated particularly impressive gains when benchmarked on an AWS c7g.4xlarge instance (Graviton3, 16 vCPUs), achieving 1.19 million requests per second (RPS)—a 230% increase over Valkey 7.2’s 360K RPS.

Valkey 8.0 vs. Valkey 7.2 | Throughput & Latency Comparison

More recent comprehensive testing by Momento on an AWS c8g.2xlarge instance (Graviton4, 8 vCPUs) revealed that neither Valkey 8.1 nor Redis 8.0 could sustain 1M RPS. However, on this smaller but newer-generation configuration, both came remarkably close, landing in the 730K-1M RPS range. These benchmarks collectively demonstrate that while both data stores benefit substantially from modern threading approaches, Valkey’s implementation currently shows greater headroom for scaling on multi-core systems. The table below summarizes the benchmark results discussed above; for detailed benchmark configurations, please refer to the original articles.

| Metric            | Valkey 8.0 (c7g.4xl) | Valkey 8.1 (c8g.2xl)    | Redis 8.0 (c8g.2xl)     |
|-------------------|----------------------|-------------------------|-------------------------|
| Throughput (RPS)  | 1.19M                | 947.1K GET, 999.8K SET  | 821.4K GET, 729.4K SET  |
| Latency Avg (ms)  | 0.542                | 0.21 GET, 0.352 SET     | 0.44 GET, 0.51 SET      |
| Latency P99 (ms)  | 0.927                | 0.28 GET, 0.80 SET      | 0.95 GET, 0.99 SET      |

Key Limitations

Despite their I/O threading improvements, both Redis and Valkey remain fundamentally single-threaded for data processing, the core workload that consumes most CPU cycles. This architectural choice preserves atomicity but creates inherent bottlenecks:

  • Adding cores improves I/O throughput but doesn’t accelerate actual command execution, which limits vertical scaling.
  • CPU-bound workloads (e.g., complex Lua scripts, sorted set operations) still block the main thread, causing latency spikes or even server unresponsiveness.
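This bottleneck is a property of any single-threaded event loop, not something unique to Redis or Valkey. A minimal Python asyncio sketch (a conceptual analogy, not Redis code) shows how one CPU-bound task delays everything queued behind it:

```python
import asyncio
import time

async def slow_command() -> None:
    # Simulates a CPU-bound command (e.g., a heavy Lua script): it never
    # awaits, so nothing else on the single-threaded loop can run.
    t0 = time.monotonic()
    while time.monotonic() - t0 < 0.5:
        pass  # busy loop, no await

async def fast_command(started: list) -> None:
    # Simulates a cheap GET stuck in line behind the slow command.
    started.append(time.monotonic())

async def main() -> float:
    started: list = []
    t0 = time.monotonic()
    await asyncio.gather(slow_command(), fast_command(started))
    return started[0] - t0  # how long the "fast" command waited

delay = asyncio.run(main())
print(f"fast command was delayed ~{delay:.2f}s by the slow one")
```

The "fast" command cannot start until the busy loop finishes, mirroring how a long-running command stalls every other client on the main thread.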

Persistence & Background Operations

In addition to network I/O and command processing, both Redis and Valkey utilize forked processes for essential background operations, particularly those related to persistence, maintenance, and high availability, although their implementations differ in some aspects.

Redis’ Fork-Based Operations

Redis employs fork()-based background processing for critical functions. RDB (Redis Database) snapshotting provides point-in-time backups for disaster recovery. AOF (Append Only File) logs every write operation received by the server; these operations can be replayed at server startup to reconstruct the original dataset. AOF rewriting compacts the log by discarding writes that later writes to the same key have superseded. Replication underpins high-availability setups. Many of these mechanisms strategically use fork() to keep the Redis server from blocking, prioritizing service continuity during maintenance operations.

  • RDB Snapshotting: When creating snapshots in the background (via BGSAVE or automated triggers), Redis forks a child process to write the RDB file to disk. While this keeps the main process responsive, it can trigger significant copy-on-write (COW) memory overhead on write-heavy instances, as the kernel duplicates modified memory pages.
  • AOF Rewriting: For log compaction (via BGREWRITEAOF or automated triggers), a child process generates an optimized AOF file. Like RDB, this avoids blocking but suffers the same COW penalties during high write activity.
  • Replication: During the initial full synchronization with replicas, the primary forks to create an RDB snapshot for transfer. Once the snapshot has been created and transferred to the replica, the primary streams the write commands that accumulated during the full sync.
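These background behaviors are driven by a handful of well-known configuration directives shared by Redis and Valkey. A sketch with illustrative values (tune them to your workload):

```
# redis.conf / valkey.conf — persistence triggers (illustrative values)
save 900 1                         # BGSAVE if >=1 write in 900s
save 300 100                       # ...or >=100 writes in 300s
appendonly yes                     # enable the AOF log
appendfsync everysec               # fsync the AOF once per second
auto-aof-rewrite-percentage 100    # BGREWRITEAOF when the AOF doubles in size
auto-aof-rewrite-min-size 64mb     # ...but only once it passes this size
```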

Recent Redis versions have enhanced these mechanisms. Redis 7.0 introduced multi-part AOF to reduce AOF rewrite overhead, while Redis 8.0 improved replication by using two parallel streams: one for transferring the RDB snapshot and another for live changes.

Redis 8.0 | Dual-Stream Replication

Valkey’s Enhancements

While retaining the same core fork()-based mechanism for RDB snapshot and AOF rewrite operations, Valkey 8.0 introduced a dual-channel replication scheme that significantly improves synchronization. Notably, Valkey shipped this mechanism well before Redis 8.0’s similar dual-stream approach. The enhancement allows Valkey to simultaneously stream both the RDB snapshot and the replication backlog during full sync, using a dedicated connection for the RDB transfer so the primary process can focus on client query handling.
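In Valkey 8.x this behavior is opt-in via a configuration flag (directive name as documented in the Valkey 8.0 release notes; verify it against your version’s valkey.conf before relying on it):

```
# valkey.conf — opt in to dual-channel full synchronization
dual-channel-replication-enabled yes
```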

Performance Characteristics

Both Redis and Valkey have enhanced their replication and persistence mechanisms in recent versions. In tests with a 10GB dataset under heavy write loads (yielding 25GB of changes), Redis 8.0’s dual-stream approach handled writes 7.5% faster during synchronization while completing the process 18% faster and reducing peak buffer memory by 35%. Valkey 8.0’s dual-channel replication achieves even more dramatic sync time reductions of up to 50% in read-heavy scenarios while maintaining better write latency during synchronization compared with previous versions. As always, exact gains vary with hardware setup and workload.

Key Limitations

Despite optimizations, both systems still share some inherent challenges:

  • Fork Overhead: Write-heavy workloads still frequently trigger costly copy-on-write memory duplication during RDB snapshots or AOF rewrites, risking temporary performance hits.
  • Memory Scaling: Write-heavy workloads on large datasets require significant memory headroom (25-50% extra RAM) to avoid OOM (out-of-memory) kills.
  • Latency Sensitivity: The main process blocks briefly during fork(), which can matter for sub-millisecond use cases.
  • Tuning Dependency: Optimal behavior requires careful persistence and replication settings.

For memory-bound or latency-critical deployments, these constraints remain non-trivial.
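As an illustration of that tuning surface, here are a few of the settings that commonly matter for fork overhead and replication memory. Values are workload-dependent starting points, not recommendations:

```
# redis.conf / valkey.conf — illustrative tuning knobs
maxmemory 8gb                   # cap the dataset well below physical RAM
maxmemory-policy allkeys-lru    # evict under pressure instead of OOM-ing
repl-backlog-size 256mb         # larger backlog avoids costly full resyncs
repl-diskless-sync yes          # stream RDB to replicas without disk I/O
```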


Feature Sets & Data Structures

Both Redis and Valkey support a vast array of commands for data manipulation on rich data types, including string, bitmap, list, hash, set, sorted set, stream, geospatial indexes, HyperLogLog, and Pub/Sub. They both support programmability via Lua scripting as well.

Redis 8.0 leverages its mature API and packs the original Redis Stack modules into a single distribution to target advanced use cases:

  • It bundles probabilistic data structures (e.g., Bloom filter, Cuckoo filter), JSON, time series, and search capabilities directly into the core.
  • Vector set, a new native type, enables high-dimensional similarity search—critical for AI workloads like recommendations and semantic search.
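As a sketch of the new vector set API (command shapes follow the Redis 8.0 vector set documentation; check VADD/VSIM syntax against your build before relying on it):

```
# Add 3-dimensional vectors to a vector set, then query by similarity
VADD products VALUES 3 0.1 0.9 0.2 item:1
VADD products VALUES 3 0.15 0.85 0.3 item:2
VSIM products VALUES 3 0.1 0.9 0.25 COUNT 2   # top-2 nearest elements
```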

Valkey, on the other hand, prioritizes core optimizations and is also gradually adding new data types and commands:

  • As of Valkey 8.1, it already supports JSON and Bloom filter data types and commands.
  • It also implemented a new CPU-cache-friendly hash table that reduces memory usage by roughly 20 bytes per key-value pair. Nested data types (hash, set, sorted set) can save an additional 10-20 bytes per element when they contain many elements.

Valkey Memory Overhead for Different Value Sizes | Lower is Better

Trade-offs

Redis 8.0’s integrated modules and native vector sets make it a stronger choice for AI-driven and advanced analytics workloads, offering out-of-the-box solutions for semantic search, time-series data, and JSON document storage—all tightly coupled with its core engine.

Valkey, while currently lacking some data types and search capabilities, compensates with fundamental optimizations like its redesigned hash tables, which deliver measurable memory savings. This makes Valkey particularly compelling for traditional caching, queueing, and high-throughput scenarios where raw efficiency trumps specialized features.


Licensing

Redis adopted a tri-license model starting with Redis 8.0, offering three options: the Redis Source Available License v2 (RSALv2), Server Side Public License v1 (SSPLv1), and GNU Affero General Public License v3 (AGPLv3). While the inclusion of AGPLv3 restores its open-source classification, the move has faced skepticism. Organizations often hesitate to adopt AGPLv3 due to its stringent requirements, and the 2024 licensing shift eroded some community trust. The complexity of three licenses also creates friction for developers who prioritize simplicity over legal fine print.

Redis Licensing Overview | OSS, Enterprise, and Cloud

Valkey, in contrast, uses the straightforward BSD 3-Clause License—a permissive, well-understood option that aligns with mainstream open-source expectations. This simplicity lowers adoption barriers and reassures users wary of licensing risks.


Final Thoughts

The choice between Redis and Valkey ultimately depends on your priorities. If you need advanced features like vector search, JSON support, and AI readiness, Redis 8.0 remains the stronger option—despite its complex licensing model. However, if you value simplicity, memory efficiency, and a permissive open-source license, Valkey’s streamlined architecture and BSD licensing make it an appealing alternative, especially for traditional caching and high-throughput workloads.

At Dragonfly, we’re building an ultra-high-performance Redis-compatible data store from the ground up because we believe competition drives innovation. When Redis, Valkey, and new approaches push each other forward, the entire ecosystem wins—and developers get better solutions for modern data infrastructure.
