Complete Guide to Redis in 2025: Components, Uses, and Alternatives
Redis is an open-source, in-memory data structure store often used as a database, cache, and message broker.
September 14, 2025

What Is Redis?
Redis is an open-source, in-memory data structure store often used as a cache, database, and message broker. It supports a variety of data structures such as strings, hashes, lists, sets, and more. By keeping data in memory, Redis enables sub-millisecond response times, making it suitable for applications requiring real-time performance or rapid data access patterns.
Its popularity in web application architectures stems from its simplicity, reliability, and versatility. Redis offers features like persistence, replication, and high availability, while maintaining ease of use with straightforward APIs for various programming languages.
In this article:
- What Happened to Redis Licensing?
- Understanding Redis Architecture and Key Features
- Essential Redis Use Cases
- Main Redis Deployment Options
- Redis vs. Other Solutions
- Best Practices and Strategies for Successful Redis Management
What Happened to Redis Licensing?
History of Redis License Changes
For many years, Redis was distributed under the permissive BSD license, allowing anyone—including cloud providers and managed service vendors—to freely use, modify, and redistribute it. This openness contributed to its widespread adoption and integration into countless products and platforms.
However, in 2024, the main Redis codebase switched from the open-source BSD-3 license to being dual-licensed under the Redis Source Available License v2 (RSALv2) and the Server Side Public License v1 (SSPLv1).
Adding an Open-Source License in 2025
In 2025, Redis Labs transitioned Redis back to open source. Redis is now tri-licensed and available under the Affero General Public License v3 (AGPLv3), an open-source license that allows developers to freely run and modify Redis, provided they comply with its copyleft conditions, such as making modified source code available to users who access it over a network.
However, as part of the recent licensing changes, Redis now prevents third-party providers from packaging Redis into cloud services without Redis Labs’ involvement or contribution. This means services like Amazon ElastiCache will no longer be able to directly offer newer versions of Redis, from v7.4 onwards.
How This Impacts Redis Managed Service Providers
Managed service providers are most affected by this licensing update. AWS ElastiCache and similar offerings can only provide Redis up to version 7.3. To address this limitation, AWS and GCP now support Valkey, a Redis-compatible fork that remains under a permissive license.
For developers running Redis themselves (on-premises or on their own cloud infrastructure), the license change has no significant impact. They retain access to the latest Redis versions and features, provided they comply with the new license terms for non-commercial, non-managed use cases. However, due to frequent licensing changes, there is uncertainty about the future open-source status of the Redis product.
Understanding Redis Architecture and Key Features
Single-Threaded Event-Driven Model
Redis operates on a single-threaded event-driven model, meaning that it processes all commands within a single thread. Although data manipulation is single-threaded, multi-threaded network I/O (available since Redis 6.0) and the simplicity of this model enable Redis to handle a high volume of requests: each command is processed sequentially, without the need for thread synchronization or context switching.
This single-threaded architecture is effective because Redis is optimized for in-memory operations, allowing it to focus on serving requests quickly. The event-driven nature ensures that Redis remains responsive as long as long-running blocking commands are avoided.
Data Structures
Redis supports a variety of data structures, which is one of its key features. The primary data structures in Redis are:
- Strings: The most basic type, which can store simple values such as text or numbers.
- Lists: Ordered collections of elements that allow for push and pop operations from both ends.
- Sets: Unordered collections of unique elements, with operations for adding, removing, and testing membership.
- Hashes: A collection of key-value pairs, useful for representing objects with multiple fields.
- Sorted Sets: Similar to sets but with an associated score, enabling elements to be ordered.
- Bitmaps: Bit-level operations on strings, used for memory-efficient storage of boolean data.
- HyperLogLogs: A probabilistic data structure used for approximating the cardinality of a set.
- Geospatial Indexes: Used for storing and querying geospatial data, such as coordinates.
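A minimal sketch of several of these structures using the redis-py client (assuming a Redis server on localhost; key names and values are illustrative):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Strings: simple values, optionally with a TTL
r.set("page:home:title", "Welcome", ex=3600)

# Lists: push/pop from both ends (queue or stack semantics)
r.lpush("jobs", "job-1", "job-2")
next_job = r.rpop("jobs")

# Sets: unique members with fast membership tests
r.sadd("tags:article:42", "redis", "caching")
has_tag = r.sismember("tags:article:42", "redis")

# Hashes: field-value pairs for object-like records
r.hset("user:1001", mapping={"name": "Ada", "plan": "pro"})

# Sorted sets: members ordered by score (e.g., leaderboards)
r.zadd("leaderboard", {"ada": 4200, "linus": 3100})
top = r.zrange("leaderboard", 0, 2, desc=True, withscores=True)

# HyperLogLog: approximate distinct counts in a few kilobytes
r.pfadd("visitors:today", "ip-1", "ip-2", "ip-1")
approx_unique = r.pfcount("visitors:today")
```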
Persistence Mechanisms
Redis provides multiple persistence mechanisms to enhance data durability while maintaining high performance. The primary persistence methods are:
- RDB (Redis Database Backup): This approach creates snapshots of the dataset at specified intervals. RDB is a great mechanism for point-in-time recovery, but it can result in data loss if Redis crashes between snapshots.
- AOF (Append-Only File): With AOF, every write operation is logged to a file, ensuring that all commands are persisted. This method provides more durability than RDB, as it logs each command as it happens. However, AOF can be slower than RDB due to the overhead of writing every operation to disk. The actual impact depends on how AOF is configured, in particular the appendfsync policy (always or everysec).
- Hybrid Mode: Combines RDB snapshots with AOF logging to balance durability and performance. Redis loads the RDB file on restart, then replays the AOF for recent changes. This approach reduces data loss while keeping recovery time efficient.
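As a rough sketch of how these modes map to configuration, the relevant redis.conf directives can also be set at runtime via redis-py's config_set (the snapshot thresholds below are illustrative, not recommendations):

```python
import redis

r = redis.Redis()

# RDB: snapshot if at least 100 keys changed within 300 seconds
# (equivalent to "save 300 100" in redis.conf)
r.config_set("save", "300 100")

# AOF: log every write; fsync once per second as a durability/
# performance compromise ("always" is safest, "no" is fastest)
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")

# Hybrid mode: embed an RDB preamble in rewritten AOF files so
# restarts load the snapshot, then replay only recent commands
r.config_set("aof-use-rdb-preamble", "yes")
```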
Replication and High Availability
Redis supports replication for high availability, allowing data to be replicated across multiple Redis nodes. In a typical Redis deployment, a primary node is used for write operations, while one or more replica nodes handle read queries and can act as failover targets in case the primary node goes down. Note that the failover process is not automatic unless Redis Sentinel is used. You can learn more about Redis high availability options in this blog post.
Clustering and Sharding
Redis supports clustering to horizontally scale and distribute data across multiple nodes. Clustering uses sharding, a technique that partitions the dataset into smaller, more manageable chunks called shards. Each shard is responsible for a subset of the entire dataset, which is distributed across the Redis instances in the cluster.
Sharding allows Redis to scale beyond the memory limits of a single machine by distributing data across multiple nodes. Redis Enterprise simplifies this process by handling automatic sharding and re-sharding, enabling elastic scaling with minimal manual intervention.
Security
Security in Redis is handled through multiple layers of protection, including:
- Access Control List (ACL): Redis can restrict access to sensitive operations based on roles assigned to different users.
- Encrypted Communication: Data transmitted between clients and Redis can be encrypted using TLS/SSL.
Essential Redis Use Cases
Caching
Redis can store frequently accessed data, such as user profiles, database query results, or dynamically rendered HTML fragments, significantly reducing backend load and application response time. The ability to set cache expiration and eviction policies ensures optimized memory usage.
Its atomic operations and persistence options also make Redis a preferred choice for reliable caching. These capabilities protect against data loss and provide rapid recovery, supporting highly available systems where cache misses would lead to unacceptable delays.
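A common way to apply this is the cache-aside pattern, sketched below with redis-py. The key name and TTL are illustrative, and load_user_from_db is a hypothetical stand-in for a real database query:

```python
import json
import redis

r = redis.Redis(decode_responses=True)
CACHE_TTL = 300  # seconds

def load_user_from_db(user_id: int) -> dict:
    # Hypothetical stand-in for a real database query
    return {"id": user_id, "name": "Ada"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    user = load_user_from_db(user_id)           # cache miss: query the database
    r.setex(key, CACHE_TTL, json.dumps(user))   # populate with expiration
    return user
```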
Session Management
Session management requires temporary, fast, and reliable storage of user state across requests, making Redis well-suited for this use case. Its in-memory design guarantees near-instant access to session data. Expiration features are used to automatically remove stale sessions. Redis’ support for replication and persistence ensures that session data is not easily lost, even during node failures.
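A minimal session-store sketch, assuming sessions are kept as hashes with a sliding expiration (field names and TTL are illustrative):

```python
import secrets
import redis

r = redis.Redis(decode_responses=True)
SESSION_TTL = 1800  # expire after 30 minutes of inactivity

def create_session(user_id: int) -> str:
    sid = secrets.token_urlsafe(32)
    key = f"session:{sid}"
    r.hset(key, mapping={"user_id": user_id, "theme": "dark"})
    r.expire(key, SESSION_TTL)  # stale sessions vanish automatically
    return sid

def touch_session(sid: str) -> dict:
    key = f"session:{sid}"
    r.expire(key, SESSION_TTL)  # sliding expiration on each request
    return r.hgetall(key)
```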
Real-Time Analytics
Modern analytics platforms often require real-time aggregation and streaming of large volumes of data. Redis meets these needs through fast operations on in-memory data and data structures like counters, HyperLogLogs, sorted sets, and streams. It is commonly used for real-time analytics tasks such as counting events, generating trending lists, or monitoring metrics over defined time windows.
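A brief sketch combining these structures for a page-view pipeline (key naming scheme and time windows are illustrative assumptions):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def record_page_view(page: str, visitor_id: str) -> None:
    minute = int(time.time() // 60)
    pipe = r.pipeline()
    # Per-page, per-minute counter, kept for one hour
    key = f"views:{page}:{minute}"
    pipe.incr(key)
    pipe.expire(key, 3600)
    # Approximate unique visitors for the day (HyperLogLog)
    pipe.pfadd(f"uniques:{time.strftime('%Y-%m-%d')}", visitor_id)
    # Trending list: bump the page's score in a sorted set
    pipe.zincrby("trending", 1, page)
    pipe.execute()

# Top five trending pages with their counts
top_pages = r.zrange("trending", 0, 4, desc=True, withscores=True)
```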
Event Streaming & Messaging
Redis provides an efficient publish/subscribe (pub/sub) messaging system that supports event-driven communication between different services or microservices. Publishers send messages to channels, and subscribers that listen on those channels receive them in real time.
While Redis pub/sub delivers low latency, simplicity, and ease of use compared to full-featured message brokers, it does not persist messages or guarantee delivery if subscribers are offline. It’s best suited for intra-service messaging, ephemeral event notifications, or scenarios where at-most-once delivery is acceptable. If you need features like consumer groups, message acknowledgments, message trimming, and persistence, Redis Streams is a much more viable option.
- Learn more in our detailed guide to Redis Pub/Sub.
- Learn more about Redis Streams.
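A minimal pub/sub sketch with redis-py (channel name and payload are illustrative; in practice the subscriber runs in a separate process):

```python
import redis

r = redis.Redis(decode_responses=True)

# Subscriber: must be listening before the message is published,
# since pub/sub messages are not persisted for offline subscribers
p = r.pubsub()
p.subscribe("orders.created")

# Publisher: fire-and-forget broadcast to all current subscribers
r.publish("orders.created", '{"order_id": 123, "total": 42.5}')

for message in p.listen():
    if message["type"] == "message":
        print("received:", message["data"])
        break  # exit after one message for this demo
```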
Geospatial Indexing
Redis provides native geospatial support through commands that allow storage, retrieval, and querying of location-based data. Using the GEO* command family, developers can add latitude/longitude coordinates to keys, calculate distances, and query members within specified radii.
The performance advantages of in-memory geospatial indexing are compelling, especially for real-time mobile or logistics applications where map-based queries must return results instantly.
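A short sketch of the GEO* commands via redis-py (coordinates and member names are illustrative; GEOSEARCH requires Redis 6.2 or later):

```python
import redis

r = redis.Redis(decode_responses=True)

# Store driver locations as (longitude, latitude, member) triples
r.geoadd("drivers", (13.361389, 38.115556, "driver:1"))
r.geoadd("drivers", (13.583333, 37.316667, "driver:2"))

# Distance between two members, in kilometers
km = r.geodist("drivers", "driver:1", "driver:2", unit="km")

# All drivers within 100 km of a pickup point
nearby = r.geosearch(
    "drivers", longitude=15.0, latitude=37.0, radius=100, unit="km"
)
```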
Rate Limiting
Rate limiting is critical for protecting APIs and applications from abuse or accidental overload, and Redis is frequently used to implement effective rate limiting due to its speed and atomicity. Patterns like counters, token buckets, or sliding windows can be implemented directly using Redis commands, ensuring accurate enforcement even under high concurrency.
Redis’s atomic increment and expiration guarantees prevent race conditions, making it especially suitable for distributed systems that require consistent enforcement across multiple nodes.
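A minimal fixed-window limiter built on those guarantees (limits, window, and key scheme are illustrative assumptions):

```python
import redis

r = redis.Redis()

def allow_request(client_id: str, limit: int = 100, window: int = 60) -> bool:
    """Allow at most `limit` requests per `window` seconds per client."""
    key = f"ratelimit:{client_id}"
    count = r.incr(key)          # atomic even under high concurrency
    if count == 1:
        r.expire(key, window)    # start the window on the first request
    return count <= limit

if allow_request("api-key-123"):
    pass  # serve the request
else:
    pass  # reject, e.g., with HTTP 429
```

A production limiter would set the expiration atomically with the increment (for example, via a short Lua script) so a crash between the two calls cannot leave a counter without a TTL.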
Main Redis Deployment Options
Self-Managed Deployments
Self-managed Redis deployments involve setting up and managing Redis on in-house infrastructure. This offers full control over configuration, scaling, and maintenance but requires a higher level of operational expertise. Organizations can choose their preferred environment, whether it’s on-premise hardware or virtual machines in a cloud provider like AWS, GCP, or Azure.
In a self-managed setup, organizations are responsible for ensuring Redis availability, persistence, and scaling. This typically includes configuring replication, high availability, and failover mechanisms, either manually or through tools like Redis Sentinel. Scaling often involves managing Redis Clusters and sharding, as well as monitoring resource usage to ensure the system can handle growing loads.
Redis on Kubernetes
Kubernetes simplifies the management of Redis clusters by automating deployment, scaling, and orchestration of containers across multiple nodes. In this setup, Redis instances run in containers that are managed through Kubernetes pods, and resources like CPU and memory are allocated dynamically based on demand.
Kubernetes provides built-in features such as automatic scaling, health checks, and rolling updates, which are useful for maintaining Redis clusters in a distributed environment. Tools like Helm can also be used to simplify Redis deployment on Kubernetes, making it easy to deploy complex Redis architectures such as clusters or replicated setups.
While Redis on Kubernetes can improve flexibility and scalability, it does require familiarity with Kubernetes and container orchestration. Additionally, managing persistence in Kubernetes requires careful planning, often involving Kubernetes’ StatefulSets and persistent volumes to ensure data durability and availability in case of pod restarts or failures. Generally, it’s recommended to use a Kubernetes operator developed for Redis for this scenario.
Managed Cloud Services
Managed cloud services provide a convenient way to use Redis without the burden of managing infrastructure. Providers such as Redis Cloud, AWS ElastiCache, Azure Managed Redis, and Google Cloud Memorystore have offered Redis standalone instances or clusters with built-in monitoring, scaling, backups, and security features.
However, the recent licensing changes in Redis now restrict third-party providers from offering newer Redis versions (starting with Redis 7.4) as managed services, unless the services are provided with approval from Redis. They can continue supporting older versions (up to 7.3), but users seeking access to the latest Redis features will not find them in managed cloud offerings.
To address this gap, some cloud providers have begun supporting Redis-compatible alternatives like Valkey, which remains fully open source and can be offered as a managed service. Services like our own Dragonfly Cloud provide a fully managed experience with a Redis-compatible API. Dragonfly uses its own multi-threaded high-performance in-memory architecture designed for the most demanding workloads.
Redis vs. Other Solutions
Redis vs. Memcached
Both Redis and Memcached are popular in-memory data stores that can be used for caching. However, Redis offers more features compared to Memcached, making it more versatile for a wider range of use cases.
Memcached is simple and effective for caching key-value pairs, supporting basic data types such as strings. It’s often preferred for use cases where speed and simplicity are the top priorities. It’s highly efficient when handling large volumes of simple data and can scale horizontally by adding more nodes.
Redis supports a richer set of data types, including strings, lists, sets, sorted sets, and hashes, enabling more complex caching strategies and use cases such as session management and real-time analytics. Redis also provides persistence options (RDB and AOF) and features like event streaming and messaging, geospatial indexing, and Lua scripting, which Memcached lacks.
Redis vs. Valkey
Redis and Valkey, both originating from the same codebase, offer similar functionalities as in-memory data stores and caches. The primary differences between Redis and Valkey lie in performance and feature offerings. Valkey 8.0 introduced performance improvements, such as asynchronous I/O threading and experimental support for Remote Direct Memory Access (RDMA). This gives Valkey an edge in throughput compared to previous versions.
However, Redis continues to lead in terms of features, offering advanced capabilities like vector sets and time series operations, all of which Valkey lacks at the time of writing. Redis also benefits from a well-established ecosystem and extensive documentation, backed by a large, active developer community and enterprise support.
You can read our detailed comparison of Redis v8.0 vs. Valkey v8.1.
Redis vs. MongoDB
Redis and MongoDB serve different purposes and excel in distinct areas of application architecture. Redis is an in-memory data structure store, often used for fast, transient data storage. Its primary strength lies in its ability to handle high-throughput, low-latency operations.
MongoDB is a distributed document-based NoSQL database that stores data on disk, offering greater flexibility with complex, persistent data models. MongoDB is typically used for storing large volumes of data, including documents with nested structures, and supports querying and indexing. It’s suitable for use cases requiring complex queries, transactions, and durability.
While Redis is often used for caching or as a temporary store, MongoDB is intended for more permanent, structured data storage with richer querying capabilities. Redis can complement MongoDB in architectures by serving as a cache to speed up frequently accessed data, while MongoDB handles the long-term storage of more complex data.
Redis vs. Kafka
Redis and Kafka are both popular tools for managing data streams, but they are designed with different goals in mind, making them suited for distinct use cases.
Redis provides basic message queuing through its Pub/Sub API and a more advanced streaming model with the streams API. The Redis Pub/Sub model is simpler to implement for real-time messaging, and it’s commonly used in applications that need to broadcast messages to multiple subscribers. The Redis Streams API, introduced in Redis 5.0, offers more advanced features such as message persistence, message acknowledgment, and consumer groups, which allow for more complex event-driven architectures.
However, using Redis Streams directly can be more complicated and requires careful management of consumers and stream data, which is why many developers prefer to use frameworks built on top of Redis, such as BullMQ and Sidekiq. These frameworks simplify the use of Redis for job queues, task processing, and background job management.
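A brief sketch of the Streams primitives mentioned above, using redis-py (stream, group, and consumer names are illustrative):

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer: append an entry; Redis assigns an auto-incrementing ID
r.xadd("orders", {"order_id": "123", "total": "42.5"})

# One-time setup: a consumer group reading from the beginning
try:
    r.xgroup_create("orders", "billing", id="0", mkstream=True)
except redis.ResponseError:
    pass  # group already exists

# Consumer: read new entries for this group, then acknowledge them
entries = r.xreadgroup("billing", "worker-1", {"orders": ">"}, count=10, block=1000)
for stream, messages in entries:
    for msg_id, fields in messages:
        print(stream, msg_id, fields)
        r.xack("orders", "billing", msg_id)
```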
Kafka is a distributed event streaming platform for handling high-throughput, fault-tolerant, and scalable real-time data streams. Kafka is better suited for use cases where large volumes of events or logs need to be processed in a reliable and scalable manner, with features like built-in replication, message retention, and stream processing. Kafka excels in scenarios where durability and message ordering are critical and where there is a need to handle data streams across many systems or services in a distributed environment.
While Redis is typically used for short-lived messaging or caching, Kafka is built for handling large-scale, long-lived streams of data. Kafka’s distributed nature and message storage capabilities make it suitable for event-driven architectures or data pipelines that require durable, persistent message logs. Redis is often preferred for smaller, more transient data stores.
Redis vs. DynamoDB
Redis and DynamoDB are both used for data storage but differ in their architecture and use cases. Redis is an in-memory key-value store designed for ultra-fast data access with low-latency operations.
DynamoDB is a managed NoSQL database offered by AWS that provides durable, scalable, and highly available storage for structured data. It is suitable for applications that require a fully managed database with configurable read consistency, scalability, and durability.
Redis is better suited for real-time, in-memory use cases where performance is critical, whereas DynamoDB excels in persistent data storage with a focus on scalability and availability for large-scale applications. In some architectures, Redis may be used alongside DynamoDB to provide caching and real-time data processing, while DynamoDB handles long-term storage.
Best Practices and Strategies for Successful Redis Management
Organizations should consider the following practices when using Redis.
1. Memory Management
To optimize memory usage and ensure Redis operates efficiently, regularly monitor the memory consumption of datasets. Use Redis’ memory eviction policies, such as volatile-lru, allkeys-lru, or allkeys-random, to ensure that Redis can handle high memory utilization by automatically removing less important data when the memory limit is reached.
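A minimal sketch of enabling an eviction policy at runtime (the memory limit below is illustrative; the same settings can live in redis.conf):

```python
import redis

r = redis.Redis()

# Cap memory and evict least-recently-used keys once the cap is hit
# (equivalent to "maxmemory 512mb" / "maxmemory-policy allkeys-lru")
r.config_set("maxmemory", "512mb")
r.config_set("maxmemory-policy", "allkeys-lru")
```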
Additionally, it’s important to use appropriate data structures based on the use case to minimize memory overhead. For example, using hashes instead of multiple strings can save memory when storing related pieces of data.
Always evaluate the size and structure of the dataset to avoid unnecessary memory bloat, and consider using Redis modules, like RedisBloom for probabilistic data structures. Commands such as MEMORY USAGE and INFO MEMORY provide insights into memory utilization, helping to track down any unexpected memory spikes.
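For example, both commands are exposed directly by redis-py (the key name here is illustrative):

```python
import redis

r = redis.Redis(decode_responses=True)

# Bytes attributed to a single key, including internal overhead
print(r.memory_usage("user:1001"))

# High-level memory statistics (used memory, fragmentation ratio, etc.)
mem = r.info("memory")
print(mem["used_memory_human"], mem["mem_fragmentation_ratio"])
```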
2. Command Optimization
Optimizing Redis commands can improve performance and reduce load. Use pipelining to send multiple commands in one request, reducing the number of round trips between the client and server. For operations that involve frequent data access, try to minimize the number of commands needed by using more efficient Redis commands.
For example, instead of issuing separate commands to get and then update a value, use atomic operations like INCRBY or HINCRBY to modify values in one step. Also, be mindful of blocking commands, such as BLPOP or BRPOP. Use these commands only when necessary, and ensure that they are not called too frequently, as this can impact Redis’ responsiveness.
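A short sketch contrasting individual calls with a pipeline (key names are illustrative):

```python
import redis

r = redis.Redis(decode_responses=True)

# Without pipelining: three network round trips
r.hincrby("user:1001", "logins", 1)
r.incrby("stats:logins", 1)
r.expire("user:1001", 86400)

# With pipelining: the same commands in a single round trip
pipe = r.pipeline()
pipe.hincrby("user:1001", "logins", 1)
pipe.incrby("stats:logins", 1)
pipe.expire("user:1001", 86400)
results = pipe.execute()
```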
3. Use Redis Sentinel for High Availability
Redis Sentinel provides high availability and automated failover for Redis instances. It continuously monitors the health of Redis primaries and replicas, automatically promoting a replica to primary if the current primary fails. This ensures that the Redis infrastructure remains available even in the event of server failures.
To set up Redis Sentinel, there must be at least three Sentinel nodes for quorum-based decision-making. Each Sentinel node monitors a set of Redis servers and communicates with other Sentinels to determine when a failover is necessary. During a failover, Redis Sentinel also updates clients with the new primary’s address.
For optimal high availability, consider placing Sentinels on different physical or virtual machines. This ensures that a single point of failure does not impact the monitoring of Redis servers. Sentinel also allows for configuration of alerting mechanisms, ensuring that organizations are informed of issues before they affect their systems.
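On the client side, redis-py ships a Sentinel helper that discovers the current primary automatically. A minimal sketch (host names are illustrative; "mymaster" must match the monitored name in sentinel.conf):

```python
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)

primary = sentinel.master_for("mymaster", socket_timeout=0.5)
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)

primary.set("greeting", "hello")   # writes go to the current primary
print(replica.get("greeting"))     # reads can be served by a replica
```

Because the client asks the Sentinels for the primary's address on each connection, a failover is transparent to application code.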
4. Implement Data Sharding for Horizontal Scaling
Data sharding in Redis involves splitting the dataset across multiple Redis instances to scale horizontally. This is particularly useful for handling very large datasets that may not fit into the memory of a single server. Sharding allows Redis to distribute data evenly across multiple nodes, increasing the system’s overall capacity and performance.
There are two main approaches to sharding: client-side sharding and server-side sharding. Client-side sharding requires developers to manage the distribution of data across different Redis instances, often by employing consistent hashing algorithms. This approach is flexible but can introduce complexity in application logic, as teams need to decide which instance holds which part of the dataset.
Server-side sharding is offered through Redis Cluster, where Redis automatically handles the distribution of data across multiple nodes. Redis Cluster provides built-in support for partitioning data and managing the communication between shards, reducing the complexity of sharding. Key distribution strategies still need to be planned in order to avoid hotspots, where some nodes receive more traffic.
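To make the client-side approach concrete, here is a deliberately simplified sketch that routes keys by a stable hash modulo the node count (node addresses are illustrative; a real deployment would use consistent hashing to avoid large-scale remapping when nodes are added, and Redis Cluster performs this routing server-side across 16384 hash slots):

```python
import binascii
import redis

NODES = [
    redis.Redis(host="redis-a", port=6379),
    redis.Redis(host="redis-b", port=6379),
    redis.Redis(host="redis-c", port=6379),
]

def node_for(key: str) -> redis.Redis:
    # Stable hash of the key decides which shard owns it
    slot = binascii.crc32(key.encode()) % len(NODES)
    return NODES[slot]

node_for("user:1001").set("user:1001", "...")
```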
5. Analyze Performance Metrics
Regularly analyzing Redis performance metrics is key to maintaining optimal operation. Use built-in Redis commands like INFO STATS and MONITOR to track key performance indicators such as the number of commands processed, memory usage, and the frequency of key eviction events. Redis also provides SLOWLOG to identify slow-running requests and potential bottlenecks.
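A brief sketch of pulling these indicators with redis-py:

```python
import redis

r = redis.Redis(decode_responses=True)

stats = r.info("stats")
print("commands processed:", stats["total_commands_processed"])
print("keyspace hits/misses:", stats["keyspace_hits"], stats["keyspace_misses"])
print("evicted keys:", stats["evicted_keys"])

# Ten slowest recent commands (threshold set by slowlog-log-slower-than)
for entry in r.slowlog_get(10):
    print(entry["id"], entry["duration"], entry["command"])
```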
To monitor Redis in a production environment, integrate it with observability tools like Prometheus or Datadog, which provide real-time insights into metrics such as CPU usage, keyspace hits/misses, and network throughput.
Setting up alerts for abnormal behavior—such as high memory usage, slow queries, or replication lag—helps ensure that organizations can take proactive measures before performance degrades.
6. Schedule Regular Backups
Even though Redis is primarily used for transient data, regular backups are critical to ensure data persistence in case of unexpected failures. Backups can be performed using Redis’ RDB snapshots or AOF logs. Schedule RDB snapshots at intervals that balance performance with backup frequency, and enable AOF persistence for point-in-time recovery.
Additionally, ensure that backups are stored securely and are easy to recover. Automate the backup process to ensure that it runs consistently and test the restoration process periodically to verify that backups are functional. Regularly review backup strategies and adjust them as Redis usage scales or as business requirements evolve.
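As a small sketch, an automated backup job might trigger and verify snapshots through redis-py before copying the resulting files off-host:

```python
import redis

r = redis.Redis()

# Trigger a background RDB snapshot without blocking clients
r.bgsave()

# Time of the last successful save (returned as a datetime),
# useful for verifying that backups are recent
print(r.lastsave())

# Compact the append-only file via a background rewrite
r.bgrewriteaof()
```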
Learn more in our detailed guide to Redis best practices.
Dragonfly: The Next-Generation In-Memory Data Store
Dragonfly is a modern, source-available, multi-threaded, Redis-compatible in-memory data store that stands out by delivering unmatched performance and efficiency. Designed from the ground up to disrupt existing legacy technologies, Dragonfly redefines what an in-memory data store can achieve. With Dragonfly, you get the familiar API of Redis (including Streams) without the performance bottlenecks, making it an essential tool for modern cloud architectures aiming for peak performance and cost savings. Migrating from Redis to Dragonfly requires zero or minimal code changes.
Key Advancements of Dragonfly
- Multi-Threaded Architecture: Efficiently leverages modern multi-core processors to maximize throughput and minimize latency.
- Unmatched Performance: Achieves 25x better performance than Redis, ensuring your applications run with extremely high throughput and consistent latency.
- Cost Efficiency: Reduces hardware and operational costs without sacrificing performance, making it an ideal choice for budget-conscious enterprises.
- Redis API Compatibility: Offers seamless integration with existing applications and frameworks running on Redis while overcoming its limitations.
- Innovative Design: Built to scale vertically and horizontally, providing a robust solution for rapidly growing data needs.
Dragonfly Cloud is a fully managed service from the creators of Dragonfly, handling all operations and delivering effortless scaling so you can focus on what matters without worrying about in-memory data infrastructure anymore.