Using Redis as a Message Queue: A Quick Tutorial to Get Started
September 26, 2025

Why Use Redis as a Message Queue?
Redis offers low-latency performance and in-memory data structures that make it suitable for lightweight queuing tasks. For instance, it supports atomic operations on lists, allowing fast enqueue and dequeue actions. These capabilities make Redis a good fit for use cases requiring near real-time processing, such as job scheduling, event pipelines, or task distribution in microservices.
Unlike full-fledged message brokers like RabbitMQ or Kafka, Redis is simpler to set up and manage. It doesn't require distributed infrastructure (e.g., multiple servers for Kafka) for basic queuing, making it a good choice for small to medium workloads or systems that already use Redis for caching or session storage. For many developers, this reduces the need to introduce new dependencies.
Redis also supports multiple queuing patterns—from simple lists to advanced stream-based queues with consumer groups. This flexibility allows teams to start with a minimal setup and evolve toward more robust messaging models as requirements grow. Redis can thus serve both prototyping needs and production-grade systems with moderate complexity.
In this article:
- 3 Options for Using Redis as a Message Queue
- Built-In Simple Data Types
- Using Redis Streams
- Using Dedicated Frameworks
- Quick Tutorial: Setting Up Redis as a Message Queue Using Built-In Data Types
- Data Types We'll Use in This Tutorial
- Step 1: Install Redis
- Step 2: Example Commands for Using Redis as a Message Queue
- Step 3: Implement Producers and Consumers in Code
- Step 4: Monitor the Message Queue
3 Options for Using Redis as a Message Queue
Built-In Simple Data Types
Redis Lists offer a straightforward, low-latency mechanism for message queuing using commands like LPUSH (enqueue) and RPOP or blocking BRPOP (dequeue). This method is easy to implement and works well for lightweight task queues or job processing in real-time systems. A minimal redis-py sketch follows the pros and cons below.
- Pros: Simple, fast, minimal configuration, supports blocking consumers without polling.
- Cons: No built-in message acknowledgment, higher risk of message loss if a consumer fails after pop, and limited support for coordinating multiple consumers.
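To illustrate the pattern, here is a minimal sketch using the redis-py client (the queue key and payload are made up for this example):
import redis
# Connect to Redis server.
r = redis.Redis(host='localhost', port=6379, db=0)
# Producer: push a job onto the head of the list.
r.lpush('jobs:list', 'job-1')
# Consumer: block for up to 5 seconds, popping from the tail (FIFO).
item = r.brpop(['jobs:list'], timeout=5)
if item is not None:
    _, payload = item
    print(f"Processing {payload.decode('utf-8')}")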
Pub/Sub provides event-driven messaging where subscribers receive live messages as they're published—but messages are ephemeral and not stored if no subscriber is active.
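As a rough sketch with redis-py (the channel name and payload are illustrative), a subscriber and publisher might look like this:
import redis
# Connect to Redis server.
r = redis.Redis(host='localhost', port=6379, db=0)
# Subscriber: listen on a channel; skip subscribe confirmations.
p = r.pubsub(ignore_subscribe_messages=True)
p.subscribe('events')
# Publisher (normally a separate process): fire-and-forget delivery.
# If no subscriber is connected at publish time, the message is lost.
r.publish('events', 'user-signed-up')
# Poll until the published message arrives (or give up).
for _ in range(5):
    message = p.get_message(timeout=1.0)
    if message:
        print(message['data'])  #=> b'user-signed-up'
        break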
Learn more in our detailed guide to Redis Pub/Sub.
Using Redis Streams
Introduced in Redis 5.0, Streams are an append-only data structure supporting message durability, unique IDs, and complex consumption models. A short redis-py sketch appears after the pros and cons below.
- Message IDs & Ordering: Each entry has a timestamp-based ID to preserve order and enable range-based reads.
- Consumer Groups: Allow multiple consumers to process different messages in parallel, with features like pending lists (for retries), acknowledgments (XACK), and recovery.
- Pros: Good durability, at-least-once delivery, scalable consumer group support.
- Cons: More complex to set up, requires memory management (stream trimming), and still less feature-rich than purpose-built MQs like Kafka.
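As a rough illustration with redis-py (the stream, group, and consumer names are assumptions for this example):
import redis
# Connect to Redis server.
r = redis.Redis(host='localhost', port=6379, db=0)
stream, group = 'jobs:stream', 'workers'
# Create the consumer group at the start of the stream
# (mkstream=True creates the stream if it doesn't exist yet).
try:
    r.xgroup_create(stream, group, id='0', mkstream=True)
except redis.ResponseError:
    pass  # Group already exists.
# Producer: append an entry; Redis assigns a timestamp-based ID.
r.xadd(stream, {'task': 'resize-image', 'file_id': 'f123'})
# Consumer: read one new entry ('>') as a member of the group.
entries = r.xreadgroup(group, 'consumer-1', {stream: '>'}, count=1, block=5000)
if entries:
    _, messages = entries[0]
    msg_id, fields = messages[0]
    print(f"Processing {fields}")
    # Acknowledge so the entry leaves the group's pending list.
    r.xack(stream, group, msg_id)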
Learn more in our detailed guide to Redis Streams.
Using Dedicated Frameworks
Rather than working directly with Redis data types, many developers prefer frameworks that offer rich, production-ready queuing systems built on top of Redis. These provide features such as priorities, advanced scheduling, and retry handling for messages or jobs.
Here are common frameworks used for message queues in Redis (a minimal RQ example follows this list):
- BullMQ: A robust queue library built on Redis data types and Lua scripting for Node.js and Python. It supports delayed jobs, retries, concurrency control, event hooks, and job lifecycle management. Ideal for handling background jobs in large applications.
- Sidekiq: A background job processor for Ruby. It supports retries, failure tracking, scheduling, and middleware extensibility. Widely used in the Ruby ecosystem.
- Celery: A distributed task queue for Python applications. It supports scheduling, retries, result tracking, and multiple broker backends, with Redis being one of the most commonly used.
- RQ: A simple Python library for job queuing that uses Redis. It supports workers, retries, and monitoring, making it suitable for Python applications that need lightweight background jobs.
- Resque: Another Redis-backed Ruby queuing library, focused on reliability and process isolation. You can create background jobs, place them on multiple queues, and process them later.
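To give a feel for the developer experience these frameworks provide, here is a minimal RQ sketch (the send_report function and its module are hypothetical):
# tasks.py (the job function must live in an importable module)
def send_report(user_id):
    print(f"Generating report for user {user_id}")

# enqueue.py
from redis import Redis
from rq import Queue
from tasks import send_report
# Enqueue a background job; a separate `rq worker` process executes it.
q = Queue(connection=Redis(host='localhost', port=6379))
job = q.enqueue(send_report, 42)
print(f"Enqueued job {job.id}")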
Quick Tutorial: Setting Up Redis as a Message Queue Using Built-In Data Types
This tutorial shows the simplest way to set up Redis as a message queue, using built-in data types and commands like LPUSH and RPOP.
Data Types We'll Use in This Tutorial
Before setting up Redis as a message queue, it's important to understand the core data types we will be using:
- Lists: Ordered collections that allow fast push and pop operations from both ends. Used here to maintain the queue order (FIFO with LPUSH and RPOP).
- Hashes: Key-value maps where each field is a string. Used to store detailed message data independently of the queue.
- Keys with TTL: Redis supports setting time-to-live (TTL) on keys, allowing automatic deletion after a set period, which is useful for message expiration.
This combination enables separation of metadata and payloads, scalable queue management, and automatic cleanup of expired data.
Note: The example below shows a simple design that uses multiple Redis data types as a message queue. The frameworks mentioned above generally have much more robust implementations (e.g., atomicity guarantees with transactions or Lua scripting), which should be preferred for production applications.
Step 1: Install Redis
Begin by installing Redis on your system.
For Ubuntu:
$> sudo apt-get update
$> sudo apt-get install redis-server
For macOS:
$> brew install redis
Once installed, start the Redis server:
$> redis-server
You can interact with Redis through the command-line interface:
# assuming localhost and default port:
$> redis-cli -h 127.0.0.1 -p 6379
Step 2: Example Commands for Using Redis as a Message Queue
Redis lists can be used to implement a FIFO queue, while hashes can store the full message content separately. This structure supports quick queue access and allows expiration control on message data. To enqueue a message, push its metadata to the list:
redis$> LPUSH messages:file_processing:list '{"id":"1"}'
#=> (integer) 1
Then store the full message details in a Redis hash and set an expiration:
redis$> HSET messages:file_processing:hash:1 file_id f123 state INITIALIZED action UPLOAD
#=> (integer) 3
redis$> EXPIRE messages:file_processing:hash:1 3600
#=> (integer) 1
This setup ensures message metadata is available in the queue and the complete message data expires automatically after the specified TTL.
To consume a message, use the RPOP command, which returns the metadata. The result can then be used to retrieve the full message from the corresponding hash:
redis$> RPOP messages:file_processing:list
#=> "{\"id\":\"1\"}"
redis$> HGETALL messages:file_processing:hash:1
#=> 1) "file_id"
#=> 2) "f123"
#=> 3) "state"
#=> 4) "INITIALIZED"
#=> 5) "action"
#=> 6) "UPLOAD"
Step 3: Implement Producers and Consumers in Code
A producer inserts metadata into a list and saves the full message details in a hash:
# producer.py
import json
import redis
# Connect to Redis server.
r = redis.Redis(host='localhost', port=6379, db=0)
print("[DEBUG] Connected to Redis.")
# Push file metadata into the file processing queue.
file_metadata = {
"id": "1",
}
list_key = 'messages:file_processing:list'
r.lpush(list_key, json.dumps(file_metadata))
print(f"[DEBUG] Pushed to queue '{list_key}': {file_metadata}.")
# Store the file processing task in a Redis hash with TTL.
ttl = 3600
hash_key = f"messages:file_processing:hash:{file_metadata['id']}"
r.hset(name=hash_key, mapping={
"file_id": "f123",
"state": "INITIALIZED",
"action": "UPLOAD",
})
print(f"[DEBUG] Created hash '{hash_key}' with initial state.")
r.expire(hash_key, ttl)
print(f"[DEBUG] Set TTL of {ttl} seconds for hash '{hash_key}'.")
Let's store the above code in the producer.py file. We can execute it using the following command:
$> python3 producer.py
#=> [DEBUG] Connected to Redis.
#=> [DEBUG] Pushed to queue 'messages:file_processing:list': {'id': '1'}.
#=> [DEBUG] Created hash 'messages:file_processing:hash:1' with initial state.
#=> [DEBUG] Set TTL of 3600 seconds for hash 'messages:file_processing:hash:1'.
A consumer waits for messages and processes them:
# consumer.py
import json
import redis
# Connect to Redis server.
r = redis.Redis(host='localhost', port=6379, db=0)
print("[DEBUG] Connected to Redis.")
# Blocking right-pop from the file processing queue.
list_key = 'messages:file_processing:list'
print(f"[DEBUG] Waiting for files in queue '{list_key}'...")
metadata = r.brpop(keys=[list_key], timeout=60)
if metadata is None:
    # BRPOP returns None when the timeout expires with no message.
    raise SystemExit("[DEBUG] Timed out waiting for messages.")
file_metadata = json.loads(metadata[1].decode('utf-8'))
hash_key = f"messages:file_processing:hash:{file_metadata['id']}"
file_details = r.hgetall(hash_key)
print(f"[DEBUG] Retrieved file metadata: {file_metadata}")
print(f"[DEBUG] Retrieved file details: {file_details}")
print(f"[DEBUG] Processing file ID: {file_details[b'file_id']} | Action: {file_details[b'action']}")
Save this code in the consumer.py file and run it:
$> python3 consumer.py
#=> [DEBUG] Connected to Redis.
#=> [DEBUG] Waiting for files in queue 'messages:file_processing:list'...
#=> [DEBUG] Retrieved file metadata: {'id': '1'}
#=> [DEBUG] Retrieved file details: {b'file_id': b'f123', b'state': b'INITIALIZED', b'action': b'UPLOAD'}
#=> [DEBUG] Processing file ID: b'f123' | Action: b'UPLOAD'
Step 4: Monitor the Message Queue
To monitor the message queue, you can use Redis commands to inspect the queue length, peek at items, and check the status of stored messages. This helps verify that producers and consumers are working correctly and lets you troubleshoot or audit message flow.
To check how many messages are currently in the queue:
redis$> LLEN messages:file_processing:list
To view the latest message added (head of the list):
redis$> LINDEX messages:file_processing:list 0
To view the oldest message (tail of the list, next to be processed):
redis$> LINDEX messages:file_processing:list -1
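If you prefer to monitor from code, here is a minimal redis-py sketch (key names match the tutorial above):
import redis
# Connect to Redis server.
r = redis.Redis(host='localhost', port=6379, db=0)
list_key = 'messages:file_processing:list'
# Queue depth and the next message due for processing.
print(f"Queue length: {r.llen(list_key)}")
print(f"Next to be processed (tail): {r.lindex(list_key, -1)}")
# Remaining TTL of a stored message hash (-2 means the key does not exist, e.g., it expired).
print(f"TTL for message 1: {r.ttl('messages:file_processing:hash:1')}")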
Again, the previous example demonstrates a basic approach to building a message queue with Redis list and hash data types. For production-grade applications, however, we strongly recommend using the established frameworks mentioned above, as they provide robust features like atomicity guarantees through transactions or Lua scripting, message priorities, scheduling, and automatic retries.
Dragonfly: The Next-Generation In-Memory Data Store
Dragonfly is a modern, source-available, multi-threaded, Redis-compatible in-memory data store that stands out by delivering unmatched performance and efficiency. Designed from the ground up to disrupt legacy technologies, Dragonfly redefines what an in-memory data store can achieve.
Dragonfly and Message Queue Frameworks
Leveraging its full Redis protocol compatibility, Dragonfly delivers enhanced performance and throughput for the popular message queue and job processing frameworks mentioned above. The Dragonfly team further ensures reliability by sponsoring and working directly with framework maintainers to validate compatibility and refine configurations. For detailed case studies and technical insights, please refer to our blog posts:
- Scaling Heavy BullMQ Workloads with Dragonfly Cloud
- How We Optimized Dragonfly to Get 30x Throughput with BullMQ
- Running and Optimizing Sidekiq Workloads with Dragonfly
- Dragonfly and Celery: Powering Financial Transactions
- Integrating Apache Airflow with Celery and Dragonfly
Key Advancements of Dragonfly
- Multi-Threaded Architecture: Efficiently leverages modern multi-core processors to maximize throughput and minimize latency.
- Unmatched Performance: Achieves 25x better performance than Redis, ensuring your applications run with extremely high throughput and consistent latency.
- Cost Efficiency: Reduces hardware and operational costs without sacrificing performance, making it an ideal choice for budget-conscious enterprises.
- Redis API Compatibility: Offers seamless integration with existing applications and frameworks running on Redis while overcoming its limitations.
- Innovative Design: Built to scale vertically and horizontally, providing a robust solution for rapidly growing data needs.
Dragonfly Cloud
Dragonfly Cloud is a fully managed service from the creators of Dragonfly, handling all operations and delivering effortless scaling so you can focus on what matters without worrying about in-memory data infrastructure anymore.