Running BullMQ with Dragonfly

In this post, we explore the seamless integration of Dragonfly as a drop-in replacement for Redis as the backing in-memory store of BullMQ, a robust background job processing library for Node.js.

October 16, 2023

BullMQ is a lightweight, robust, and fast Node.js library for creating and processing background jobs by sending messages using queues. BullMQ is easy to use, but it is also highly configurable and comes with powerful advanced features. As a trusted message queue system with a powerful feature set, BullMQ is adept at managing tasks like video transcoding, image processing, email sending, data ETL (extract, transform, load) tasks, and many more. BullMQ is used by developers in many industries, such as e-commerce, social media, advertising, and online gaming.

Announcing Dragonfly's Full Compatibility with BullMQ

BullMQ was originally developed to use Redis as its primary data store. It also makes very heavy use of server-side Lua scripts. As many of our readers and community members already know, Dragonfly is a drop-in Redis replacement that is optimized for high-traffic, low-latency applications.

After working closely with the BullMQ team and the community, we are excited to announce that Dragonfly is now fully compatible with BullMQ.


Choosing Dragonfly over Redis for BullMQ

Redis, while well-known for its speed and efficiency, does have some limitations while being used as the backing store for BullMQ under heavy load, most notably due to its single-threaded nature. This design choice restricts performance capabilities when dealing with a single Redis instance. While Redis Cluster can indeed scale its performance, it increases the complexity in infrastructure design, deployment, and maintenance.

By running BullMQ with Dragonfly, developers get:

  • Infrastructure Simplification: Avoid the complexity of managing a cluster setup. With Dragonfly, you get the power to handle heavy workloads on a single instance.
  • Ultra-Performance & High-Throughput: Leveraging its advanced multi-threaded architecture, Dragonfly delivers superior performance and high throughput.
  • Memory Efficiency: Dragonfly's design ensures up to 30% less memory consumption, which can translate to tangible savings, especially for larger deployments.
  • Hardware Cost Reduction: By maximizing the utility of each server, Dragonfly can lead to dramatic reductions in operational costs, with potential savings ranging from 50% up to 80%.
  • Latest Lua Engine: Dragonfly comes with the latest Lua 5.4 engine, which is up to 2x faster than the Lua 5.1 engine used by Redis.

In essence, by transitioning the backing store of BullMQ from Redis to Dragonfly, developers and organizations can reap the benefits of enhanced performance, simpler infrastructure, and significant cost reductions, all while maintaining the familiar functionalities of BullMQ.

Benchmark Results

Without further ado, let's take a look at the benchmark results of BullMQ with Dragonfly as the backing store. We've conducted comprehensive benchmarks to show the performance differences between Dragonfly and Redis in the context of BullMQ. Note that in the benchmark illustrations below, Dragonfly 1T means running Dragonfly with a single thread, Dragonfly 4T means running Dragonfly with four threads, and so on. Here's a concise breakdown:


In the benchmark above, we evaluated the simplest scenario where only a single queue is used. Under this setup, Dragonfly performs either on par with or marginally surpasses Redis. However, increasing the number of threads for Dragonfly in this case actually decreases the performance.

Real-world applications rarely use a single queue. A more representative setup involves producing to and consuming from multiple queues concurrently. When we tested such a scenario, Dragonfly's advantages became more pronounced, as shown below when deploying 16 queues.


When operating on a multi-core machine, the Dragonfly instance showcases great improvements in performance, which demonstrates the benefits of Dragonfly's advanced multi-threaded shared-nothing architecture: multiple queues can be distributed across multiple Dragonfly threads, each BullMQ queue is exclusively owned by a single thread, and multiple queues can be accessed in parallel.

Venturing further into our benchmarks, we also scaled up to a scenario with 64 queues. While not many applications need 64 queues in practice, this experiment is instructive and provides an illustrative result of what Dragonfly is currently capable of achieving.


Running BullMQ with Dragonfly

Now that we've seen the benchmark results, let's dive into the details of how to run BullMQ with Dragonfly. For more details and the most up-to-date information, you can always find the latest instructions in our newly released integrations documentation. Since Dragonfly distributes queues across multiple Dragonfly threads, there are a few steps we need to follow in order to achieve the best performance.

1. Emulated Cluster Mode & Hashtag Locking

Run Dragonfly with the following flags:

./dragonfly --cluster_mode=emulated --lock_on_hashtags
  • --cluster_mode=emulated lets Dragonfly emulate a Redis Cluster on a single instance.
  • --lock_on_hashtags enables hashtag locking.

A hashtag is a substring in a key name. If the key contains a {...} pattern, only the substring between { and } is hashed in order to determine which Dragonfly thread owns the key. Thus, keys with the same hashtag will be assigned to the same Dragonfly thread. And keys with different hashtags will very likely be assigned to different Dragonfly threads.
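As a rough illustration (this is a sketch of the rule described above, not Dragonfly's actual implementation), the hashtag extraction logic can be expressed as:

```javascript
// Sketch of the hashtag rule: if a key contains a {...} pattern, only the
// substring between the first "{" and the next "}" determines which thread
// owns the key. Otherwise, the whole key is hashed.
function hashtagOf(key) {
  const open = key.indexOf("{");
  if (open === -1) return key;               // No hashtag: hash the whole key.
  const close = key.indexOf("}", open + 1);
  if (close === -1 || close === open + 1) {
    return key;                              // Unterminated or empty {}: hash the whole key.
  }
  return key.slice(open + 1, close);         // Hash only the tag.
}

// Keys sharing the tag "myqueue" are owned by the same thread:
console.log(hashtagOf("bull:{myqueue}:wait"));   // "myqueue"
console.log(hashtagOf("bull:{myqueue}:active")); // "myqueue"
```

This is why all the keys BullMQ creates for a single queue stay on one Dragonfly thread, as long as the queue name or prefix carries the hashtag.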

2. Install BullMQ & Choose Queue Names

In your Node.js application, install BullMQ with the following commands, based on your package manager:

# npm
npm install bullmq

# Yarn
yarn add bullmq

# pnpm
pnpm add bullmq

To use a hashtag in a queue name, you can initialize a queue using one of the following methods:

import { Queue } from 'bullmq';

// Option 1: Use a hashtag directly in the queue name.
const queue1 = new Queue("{myqueue}");

// Option 2: Use a hashtag in the queue prefix.
const queue2 = new Queue("myqueue", {
    prefix: "{myprefix}",
});
Whether you use a hashtag directly in the queue name or specify a prefix that contains a hashtag, the queue will be assigned to a Dragonfly thread based on the hashtag substring. Note that hashtags should not be confused with JavaScript template literals. The curly braces need to be present in the queue name in order to be recognized as a hashtag by Dragonfly.

To achieve superior performance for your application, consider using a larger number of queues with different hashtags. By distributing the queues across distinct Dragonfly threads, you can optimize the utilization of multiple threads of Dragonfly. This is also known as thread balancing in Dragonfly.
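For example, here is one way to generate several queue names with distinct hashtags and spread jobs across them; the names and the bucketing scheme are hypothetical, and any stable mapping works:

```javascript
// Generate N queue names, each with its own hashtag, so Dragonfly can
// distribute them across its threads (names are illustrative).
const NUM_QUEUES = 8;
const queueNames = Array.from(
  { length: NUM_QUEUES },
  (_, i) => `{emails-${i}}`
);

// A simple, deterministic way to pick a queue for a job, e.g. by user ID.
function queueFor(userId) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return queueNames[h % NUM_QUEUES];
}
```

Each generated name would then be passed to the queue initialization shown earlier, so that jobs for different users land on different Dragonfly threads.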

However, if you have queue dependencies, especially in a parent-child relationship, it's important to use the same hashtag for them. This ensures that both queues are processed within the same Dragonfly thread and maintains the integrity of the dependencies.
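As a sketch, a parent-child flow definition (of the shape passed to BullMQ's FlowProducer via flowProducer.add(...)) could keep both queues on one thread by sharing a hashtag; the queue names and job data here are hypothetical:

```javascript
// Both parent and child queues share the hashtag "{videos}", so Dragonfly
// assigns them to the same thread and the dependency stays intact.
const flow = {
  name: "transcode-video",
  queueName: "{videos}-parent",
  children: [
    {
      name: "extract-audio",
      queueName: "{videos}-child",
      data: { videoId: "vid-001" },
    },
  ],
};

// Sanity check: the parent and child hashtags match.
const tagOf = (name) => name.match(/\{(.+?)\}/)?.[1];
console.log(tagOf(flow.queueName) === tagOf(flow.children[0].queueName)); // true
```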

3. Start Your Dragonfly/BullMQ Journey

With the above steps, you are now ready to start your Dragonfly/BullMQ journey. For instance, you can start sending messages (or jobs) to a queue and start processing them with a worker.

// client_connection.js

import Redis from "ioredis";

export const connection = new Redis({
    host: "dragonfly-host",     // Your Dragonfly host.
    port: 6379,                 // Your Dragonfly port number.
    maxRetriesPerRequest: null, // Required by BullMQ workers for blocking commands.
});

// producer.js

import { Queue } from "bullmq";
import { connection } from "./client_connection.js";

const queue = new Queue(
    "{myqueue}",    // Queue name containing a hashtag.
    { connection }, // Reuse the connection instance.
);

await queue.add("my_email_job", { userId: "user-123", emailId: "weekly-newsletter" });

// worker.js

import { Worker } from "bullmq";
import { connection } from "./client_connection.js";

const worker = new Worker(
    "{myqueue}",    // Same queue name as the producer.
    async (job) => {
        if (job.name === "my_email_job") {
            await sendEmail(job.data.userId, job.data.emailId); // Your email-sending logic.
        }
    },
    { connection }, // Reuse the connection instance.
);

Above is a basic setup for running BullMQ with Dragonfly. As long as the connection is established with a Dragonfly server instance, you can use BullMQ as usual. Keep in mind that it is crucial to plan your queue names and hashtags carefully to fully utilize the performance gains of Dragonfly. For more details around BullMQ Queues, Workers, Jobs, and Flows, please refer to the BullMQ documentation.

Note that we now have a Dragonfly instance running for BullMQ. Depending on how heavy the application workload is, we may still use this Dragonfly instance with its ordinary API as a general-purpose in-memory data store, such as a caching layer or a session store. Mixed usage of Dragonfly is possible, but we should also plan the hardware resources carefully.


The integration between Dragonfly and BullMQ allows Node developers to run their BullMQ jobs using the most powerful in-memory data store on the market. In a future blog post, we will share the journey of how we worked with the BullMQ team to achieve this integration, as well as our continuous optimization efforts to further improve the performance of BullMQ with Dragonfly.

Dragonfly is committed to embracing the open-source community and broadening the ecosystem. More SDKs and integrations will be tested with Dragonfly and released in the future. As always, start trying Dragonfly in minutes, and happy coding!

Appendix - Useful Resources

  • Our Dragonfly/BullMQ integration documentation can be found here.

  • The announcement from BullMQ can be found here.

  • Read the comprehensive documentation of BullMQ here.

  • Benchmark results were obtained using this tool with Dragonfly/Redis running on AWS c7i.2xlarge and BullMQ running on AWS c7i.16xlarge. Note that we used a smaller instance for Dragonfly/Redis and a larger instance for BullMQ to ensure that the bottleneck is not on the BullMQ side as we are benchmarking Dragonfly/Redis.

    node bullmq-concurrent-bench/index.js -h $SERVER_IP -c 100 -d 10 -r 8 -w 8 -q $NUM_QUEUES
