Relay with Dragonfly: Towards the Next-Gen Caching Infrastructure

Integrating Dragonfly with Relay significantly boosts caching performance for PHP applications, delivering the scalability, high throughput, and low latency needed for an efficient next-gen caching infrastructure.

February 27, 2024



Caching is a critical technique for improving the scalability and performance of web applications. A caching framework reduces the expensive disk and network I/O generated by application data requests by storing frequently accessed data in memory. Different kinds of caching schemes have been developed: client-side caching stores hot data on client devices to minimize data requests to the web server, while server-side caching stores accessed data on the web server, either in memory or with the help of an in-memory data store such as Redis or Memcached. This article focuses on server-side caching solutions, as they have proven critical for accelerating most popular web applications. Such applications rely heavily on querying disk-based remote databases, so using in-memory data stores to cache data is a natural solution.

Dragonfly is a next-generation in-memory data store with a modern design focused on high scalability, concurrency, and performance. Dragonfly has outperformed existing in-memory data stores such as Redis and Memcached, so it has excellent potential to deliver even better performance in caching infrastructure where those data stores are already deployed.

Specifically, we discuss our recent integration with Relay, a next-generation server-side caching solution for PHP. Relay is a PHP extension often used as a modern drop-in replacement for phpredis and shared in-memory cache solutions such as APCu. Relay stands out from the existing solutions thanks to its excellent performance. A recent benchmark report shows Relay can be two orders of magnitude faster than phpredis and predis.

The rest of the article is structured as follows: we briefly overview Relay. We then discuss recent developments that allow Dragonfly to serve as a backend caching data store for Relay. We conclude this article with some benchmarking results. We show that the performance of Relay with Dragonfly compares favorably with Relay backed by Redis, showing the great potential for Dragonfly to be an ideal candidate for next-generation caching solutions.

Relay: A Next-Generation Caching Solution for PHP

Relay currently uses Redis to store application-accessed data. The secret ingredient of Relay's excellent performance is maintaining a highly efficient, partial replica of Redis' data in the memory of the local PHP master process. Relay uses server-assisted client-side caching to actively invalidate the cache upon data updates to prevent the local cache from becoming stale. The invalidation is achieved through Redis' client tracking feature, which notifies Relay once the data inside Redis is modified so that it will further invalidate its copy in the local PHP cache. The code snippet from Relay's documentation below shows an example workflow.

$relay = new Relay(host: '', port: 6379);

// Fetch the user count from Relay's memory,
// or from Redis if the key has not been cached, yet.
$users = $relay->get('users:count');

// Listen to all invalidation events.
$relay->onInvalidated(function (Relay\Event $event) use (&$users) {
    if ($event->key === 'users:count') {
        $users = null;
    }
});

Although Dragonfly is primarily compatible with Redis and is often deployed as a drop-in Redis replacement in many use cases, Relay previously did not work with Dragonfly due to the lack of client-tracking API support. In this work, we added such support for Dragonfly.

The Need for Client Tracking

Redis released client tracking in 6.0 as part of its client-side caching functionality. The minimum client tracking API that satisfies Relay is defined as:

CLIENT TRACKING <ON | OFF>


Enabling client tracking will track the updates to all the keys accessed by the enabler client: upon receiving the command CLIENT TRACKING ON, the Redis server starts memorizing the keys read by that client. If any tracked keys are modified (by any client), an invalidation notification is sent to each tracking client to announce the staleness of the modified keys.

Let's walk through a simple example. We will be using two clients, client-1 and client-2, to demonstrate the client tracking feature. First, we have client-1, which enables tracking and creates a key user_count, setting its value to 100. Note that simply setting a key that never existed before will not make Redis track the key. Then, client-1 reads this key back, which makes user_count tracked by the Redis server for client-1.

### client-1 ###

# Switch protocol to RESP3, output omitted.
redis> HELLO 3

# Switch on client tracking.
redis> CLIENT TRACKING ON
OK

# Create a key and set its value to 100.
redis> SET user_count 100
OK

# Read the key so that the server starts tracking its update.
redis> GET user_count
"100"

Now, client-2 updates the value of user_count, incrementing it to 101.

### client-2 ###

# Now client-2 updates the value.
redis> INCR user_count
(integer) 101

Upon receiving the update, the Redis server sends an invalidation message to client-1. When client-1 rereads the key, an invalidation message and the new value of the key are displayed.

### client-1 ###

# After client-2 updates the value,
# client-1 reads the key again and receives an invalidation message.
redis> GET user_count
-> invalidate: 'user_count'
"101"

To recap, a key is tracked once a read command is issued on it. When the RESP3 protocol is used (e.g., via HELLO 3 or redis-cli -3), updating a tracked key makes Redis send a RESP3 PUSH message to the tracking clients for invalidation. Note that Redis also supports client tracking with the RESP2 protocol, but its invalidation mechanism differs significantly from RESP3's. After discussing this closely with the Relay developers, we determined that the minimum requirement to support Relay is to implement the client tracking API shown above for the RESP3 protocol only.

Implementing Client Tracking for Dragonfly

We published the basic client tracking API in our recent 1.14 release. The primary implementation can be found in two pull requests: PR1 and PR2. We briefly describe it in this section.

The implementation consists of two parts: tracking and invalidation. Dragonfly divides in-memory data into multiple shards, each managed by a separate thread, so keys belonging to different shards must be tracked independently by their managing threads. When Dragonfly executes a command, we check whether it is a read command and whether the issuing client has turned on client tracking. If both conditions are met, the key read by the command is handed to its owning shard to initiate tracking.
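As an illustration, the shard-routing and tracking decision can be sketched in Python. This is a simplified model only: the command set, the hash function, and the function names are stand-ins, not Dragonfly's actual internals.

```python
import zlib

NUM_SHARDS = 4  # Dragonfly partitions the keyspace across shard threads
READ_COMMANDS = {"GET", "MGET", "HGET", "SMEMBERS"}  # illustrative subset

def shard_of(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Route a key to its owning shard via a deterministic hash."""
    return zlib.crc32(key.encode()) % num_shards

def should_track(command: str, client_tracking_on: bool) -> bool:
    """Track a key only when a tracking-enabled client issues a read command."""
    return command.upper() in READ_COMMANDS and client_tracking_on
```

Under this model, a read from a tracking client is forwarded to the key's shard to start tracking, while writes and reads from non-tracking clients are not.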

To track key updates, we maintain a client tracking map in each database shard, where each tracked key is mapped to a set of tracking client IDs. Internally, the map is implemented with the flat hash map from Google's Abseil Common Libraries, and the set holding client IDs with the flat hash set from the same library. We selected these data structures because both offer strong insertion and lookup performance compared to other popular implementations, and the hash map can efficiently track a large number of keys. When we determine that a key needs to be tracked, the key and the ID of the requesting client are stored in the map.
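In Python terms, the per-shard structure behaves like a dictionary mapping each tracked key to the set of client IDs that requested tracking. The class and method names below are hypothetical; they are a rough stand-in for the absl::flat_hash_map/flat_hash_set pair in the C++ implementation.

```python
from collections import defaultdict

class ShardTrackingMap:
    """Per-shard map: tracked key -> set of tracking client IDs."""

    def __init__(self) -> None:
        self._map: dict[str, set[int]] = defaultdict(set)

    def track(self, key: str, client_id: int) -> None:
        # Called when a tracking-enabled client reads `key` on this shard.
        self._map[key].add(client_id)

    def clients_for(self, key: str) -> set[int]:
        # Clients to notify when `key` is modified; empty if untracked.
        return self._map.get(key, set())
```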

When a write command updates a key, we must send invalidation messages to all the clients tracking that key. Dragonfly always invokes the function PostUpdate() at the end of each write command's implementation, which provides a convenient place to perform internal background work accompanying data updates. We leverage this function to look up the list of clients in the client tracking map above and send an asynchronous RESP3 PUSH invalidation message to each of them. Note that once a tracked key is updated, we remove it from the tracking map, meaning it is no longer tracked; Redis implements the same behavior. For commands such as FLUSHDB or FLUSHALL, the corresponding shards' tracking maps are cleared completely, and a single invalidation message is sent to each tracking client instead of one message per flushed key. Once a Redis-compatible client receives an invalidation message, it understands the invalidation intent and handles the message accordingly, following Redis's specifications.
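The invalidation path can be sketched the same way. In this simplified model, send_invalidation stands in for the asynchronous RESP3 PUSH delivery, the tracking map is a plain dict of key to client-ID set, and the function names are hypothetical rather than Dragonfly's real ones.

```python
def post_update(key, tracking_map, send_invalidation):
    """Mirror of the per-write PostUpdate() hook.

    Notifies every client tracking `key`, then drops the key from the
    map, since a key stops being tracked once it has been invalidated.
    """
    for client_id in tracking_map.pop(key, set()):
        send_invalidation(client_id, [key])

def flush_db(tracking_map, send_invalidation):
    """FLUSHDB/FLUSHALL: one 'invalidate everything' message per client."""
    clients = set().union(*tracking_map.values()) if tracking_map else set()
    tracking_map.clear()
    for client_id in clients:
        send_invalidation(client_id, None)  # None signals "all keys invalid"
```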


Besides testing the client tracking API for compatibility with Redis, we worked closely with the Relay maintainers to verify the correctness of our integration. Relay's core developers, Michael Grunder and Till Krüss, were kind enough to take our prototypes and perform multiple rounds of testing using Relay's functional tests, significantly improving the quality of our implementation. Following their suggestions, we further leveraged the Relay performance benchmark. While verifying that Dragonfly passed all the benchmark tests, we also took the opportunity to evaluate the end-to-end performance of Dragonfly-backed Relay and compare the results with those of Relay using Redis.

We conducted our evaluation on AWS. We deployed Relay 0.7.0 and ran the Relay benchmark on a c6a.8xlarge instance (32 cores and 64GB DRAM). We deployed Dragonfly and Redis 7.0.12 on a separate c6a.16xlarge instance (64 cores and 128GB DRAM). Both instances use Ubuntu 23 images (Linux kernel 6.5.0-1012-aws) and reside in the same availability zone. Dragonfly is launched with the command below, with snapshots disabled. The value of THREADS was set to 4, 8, 16, and 32 to explore the performance achieved with different numbers of Dragonfly threads.

./dragonfly --logtostderr --dbfilename="" --proactor_threads=THREADS

Redis is launched with the following command, where data persistence through both AOF and RDB is disabled.

./redis-server --save "" --appendonly no

We ran the Relay benchmark with the following command, where WORKERS was set to 16, 32, 64, 128, 256, and 512 to analyze performance scaling with different numbers of PHP workers.

./relay/benchmarks/run -h DF-Redis-Host-IP --duration=3 --workers=WORKERS

Note that we chose to deploy a single instance for both Dragonfly and Redis, as this is the most common usage according to the Relay community.

Below, we present the Relay benchmark performance for both Dragonfly and Redis.


Note that the Relay benchmark evaluates multiple scenarios. We highlight results from the RelayNoCache scenario, which mimics Relay's performance when its local PHP in-memory cache is cold; this captures Relay's behavior while a PHP application is warming up its local cache. When the number of benchmark workers is below 64, Dragonfly and Redis perform similarly, with good linear scalability. As the number of workers increases further, Redis's performance starts to saturate due to its limited concurrency.

In contrast, Dragonfly's performance continues to scale even at 512 workers, significantly outperforming the single Redis instance. This excellent scaling comes from Dragonfly's modern design, in which a single instance runs multiple threads, so the data store scales with the number of CPUs available on the machine. We also observed that the best performance is achieved with eight Dragonfly threads, which follows from the structure of the Relay benchmarks: they issue MGET commands with eight keys.

We also studied the scenario in which Relay's local in-memory cache is fully warmed. In that case, Dragonfly and Redis perform very similarly, as most data accesses are served from Relay's memory rather than the backend data store.


We have discussed the integration of Dragonfly with Relay, one of the most promising next-gen caching frameworks for PHP web applications. Relay achieves orders-of-magnitude higher throughput than traditional frameworks thanks to its local in-memory cache running inside PHP, allowing web applications to execute with low data access latency. This high performance implies that when a considerable number of cache misses occur, an extremely well-performing in-memory data store is needed so that the backend does not quickly become the bottleneck and web applications continue to run smoothly. Our evaluation shows that Dragonfly's great scalability, high throughput, and low latency make it an ideal data store for Relay to achieve a highly efficient next-gen caching infrastructure.
