Memcached to Dragonfly: Stop Serializing, Start Simplifying
Upgrade from simple strings to rich data types. Our guide shows you how to migrate from Memcached to Dragonfly with dual mode and keyspace sharing.
September 11, 2025

Dragonfly Also Speaks Memcached
We’ve extensively discussed Dragonfly’s Redis compatibility in previous posts, but its capabilities extend beyond that. Dragonfly also offers support for the Memcached API, making it a versatile replacement for existing Memcached deployments. This means you can seamlessly replace your Memcached instances with Dragonfly without drastically modifying application code (except for CAS operations at the time of writing), while simultaneously gaining access to Dragonfly’s rich features for future enhancements.
To me, this is a big deal because Memcached’s design as a simple, robust "dumb" cache has made it a popular choice for decades. Just basic get/set APIs, no persistence by default, no fancy data types—this is exactly what made it successful. Memcached is a workhorse for reducing database load and speeding up applications, and it remains widely deployed today in production environments.
# Using a Telnet client to issue a set command to Memcached.
memcached$> set user:123 0 3600 92
{"id":123,"name":"Joe","email":"joe@test.com","age":30,"prefs":{"theme":"dark","lang":"en"}}
#=> STORED
# Using a Telnet client to issue a get command to Memcached.
memcached$> get user:123
#=> VALUE user:123 0 92
#=> {"id":123,"name":"Joe","email":"joe@test.com","age":30,"prefs":{"theme":"dark","lang":"en"}}
#=> END
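The numbers in the set line above are not arbitrary: the final argument (92) tells the server how many bytes of payload follow. A small Python sketch (illustrative, not a real client) of how that command is framed on the wire:

```python
# Sketch of how a Memcached client frames the 'set' command shown above.
# The final argument is the payload size in bytes; the server then reads
# exactly that many bytes of data, followed by a trailing CRLF.
payload = b'{"id":123,"name":"Joe","email":"joe@test.com","age":30,"prefs":{"theme":"dark","lang":"en"}}'
key, flags, ttl = "user:123", 0, 3600

command = f"set {key} {flags} {ttl} {len(payload)}\r\n".encode() + payload + b"\r\n"

print(len(payload))  # 92, matching the byte count in the transcript above
```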
What makes Dragonfly particularly powerful in this case is its Redis/Memcached dual-mode and keyspace sharing capabilities. By starting Dragonfly with the --memcached_port flag, you enable simultaneous support for both Redis and Memcached protocols. This creates a seamless migration path for existing Memcached applications, which we will explore more in the rest of this post.
When Simplicity Becomes Limiting
Memcached’s simplicity is its greatest strength, but in modern applications, this minimalism can become a constraint. While its straightforward design works well for basic caching, several limitations emerge as applications grow in complexity and scale.
The Scalability Challenge
Traditionally, Memcached’s approach to horizontal scaling requires client-side sharding, where applications distribute data across multiple nodes on the client-side using consistent hashing algorithms. From the Memcached servers’ perspective, they are not aware of each other and operate independently. While this method is conceptually simple, it creates operational overhead:
- No Automatic Rebalancing: Adding or removing nodes requires manual intervention or library support.
- Data Redistribution: New nodes remain under-utilized until keys are redistributed.
- Client Complexity: Application clients or client libraries may implement sharding logic differently.
from pymemcache.client.hash import HashClient

# The pymemcache client library (https://github.com/pinterest/pymemcache) uses
# a consistent hashing algorithm to choose which server to set/get values from.
# It can also automatically rebalance when a server goes down.
client = HashClient([
    'localhost:11211',
    'localhost:11212',
])

client.set('user:123', '{"id":123,"name":"Joe","email":"joe@test.com","age":30,"prefs":{"theme":"dark","lang":"en"}}')
result = client.get('user:123')
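What HashClient does under the hood can be sketched with a minimal consistent-hash ring. This is an illustration of the idea, not the library's actual implementation:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: each server gets several points on the
    ring, and a key maps to the first server point at or after its hash."""

    def __init__(self, servers, points_per_server=100):
        self._ring = []  # sorted list of (hash, server) pairs
        for server in servers:
            for i in range(points_per_server):
                self._ring.append((self._hash(f"{server}-{i}"), server))
        self._ring.sort()

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_server(self, key):
        # Walk clockwise on the ring to the first server point after the key's hash.
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing(["localhost:11211", "localhost:11212"])
server = ring.get_server("user:123")  # deterministic for a given set of servers
```

Because each server owns many small arcs of the ring, adding or removing a node remaps only the keys on that node's arcs, which is exactly why client libraries favor this scheme over naive modulo hashing.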
Starting with version 1.6.23, released in January 2024, Memcached ships with a built-in proxy feature. This allows you to run a Memcached instance as a "frontend proxy" that routes requests to one or more Memcached "backend servers" over TCP/IP. Instead of connecting directly to individual backends, clients talk to the proxy, which forwards requests to backend pools you configure. This setup simplifies client connections and adds benefits like better load balancing and fault tolerance, at the cost of an extra network hop and the latency that comes with it.
The Data Manipulation Problem
The fact that Memcached stores values only as strings (opaque blobs of bytes) becomes particularly limiting when working with complex objects. Consider the user profile above stored as a serialized JSON object:
{
  "id": 123,
  "name": "Joe",
  "email": "joe@test.com",
  "age": 30,
  "prefs": {
    "theme": "dark",
    "lang": "en"
  }
}
Updating a single field requires a complete read-modify-write cycle:
# Read, modify, and write back using Python.
import json
from pymemcache.client.base import Client
client = Client('localhost')
# Get the entire user profile.
user_data = client.get('user:123')
user = json.loads(user_data)
# Modify the specific nested field.
user['prefs']['theme'] = 'light'
# Serialize and store the entire user profile back.
new_user_data = json.dumps(user)
client.set('user:123', new_user_data)
For consistency in highly concurrent environments, you might need to implement compare-and-swap (CAS) operations as well. However, this approach introduces network overhead, computational waste, and complexity for what should be simple operations.
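To make that cycle concrete, here is a sketch of the read-modify-write loop with CAS-style retries. The InMemoryCasClient below is a tiny stand-in written purely for illustration; real Memcached clients such as pymemcache expose equivalent gets/cas operations against an actual server:

```python
import json

class InMemoryCasClient:
    """Tiny in-memory stand-in mimicking Memcached's gets/cas semantics.
    For illustration only; not a real network client."""

    def __init__(self):
        self._data = {}   # key -> (value, cas_token)
        self._token = 0

    def set(self, key, value):
        self._token += 1
        self._data[key] = (value, self._token)

    def gets(self, key):
        # Return the value together with its CAS token.
        return self._data.get(key, (None, None))

    def cas(self, key, value, token):
        # Store only if the value has not changed since we read it.
        if key in self._data and self._data[key][1] == token:
            self.set(key, value)
            return True
        return False

def update_theme(client, key, theme, max_retries=5):
    for _ in range(max_retries):
        raw, token = client.gets(key)
        user = json.loads(raw)
        user["prefs"]["theme"] = theme
        if client.cas(key, json.dumps(user), token):
            return True  # no concurrent writer got in between
    return False  # contention exceeded the retry budget

client = InMemoryCasClient()
client.set("user:123", '{"id":123,"prefs":{"theme":"dark","lang":"en"}}')
update_theme(client, "user:123", "light")
```

Note how even this small change still ships the entire document over the wire twice, plus a retry loop; that overhead is what the next section addresses.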
Seeking the Balance: Simplicity with Power
While Memcached’s API is elegantly simple, modern applications often need more sophisticated operations without sacrificing performance. Dragonfly’s compatibility with the Redis APIs and Redis Stack modules strikes this balance, maintaining simplicity while providing rich functionality out of the box:
- Atomic Operations: Increment, decrement, append, and bit operations.
- Partial Updates: Modify part of a key-value pair without full serialization.
- Rich Data Structures & Operations: Native support for collections like hashes, lists, sets, sorted sets, and even JSON. (Note that Redis supports JSON via the RedisJSON module, while Dragonfly does so natively.)
- Batch Operations: Lua scripting and transactions for complex atomic operations.
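For example, storing the profile's flat fields in a Redis hash instead of one serialized blob turns a single-field update into a one-command operation. This is an illustrative session (key names are hypothetical), assuming a running Dragonfly instance:

```
# === Redis API === #
# Store flat profile fields in a hash instead of one serialized blob.
dragonfly-redis$> HSET user:456 name "Joe" age 30 theme "dark"
#=> (integer) 3
# Update a single field atomically, with no read-modify-write cycle.
dragonfly-redis$> HSET user:456 theme "light"
#=> (integer) 0
dragonfly-redis$> HGET user:456 theme
#=> "light"
```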
The challenge isn’t abandoning Memcached’s simplicity but rather enhancing it with additional capabilities when needed. In the next section, we’ll see how Dragonfly bridges this gap.
Bridge of Migration: Redis/Memcached Dual-Mode & Keyspace Sharing
As mentioned earlier, Dragonfly can run in a dual mode, handling Redis and Memcached requests at the same time. This dual-protocol support creates a smooth migration path that eliminates the typical “all-or-nothing” or “go/no-go” migration dilemma.
Enabling Memcached API for Dragonfly
Dragonfly’s Memcached API can be activated with a simple --memcached_port server flag:
$> ./dragonfly --logtostderr --memcached_port 11211
That’s it! Dragonfly now speaks both the Redis (port 6379 by default) and Memcached (port 11211 as configured) protocols simultaneously.
Simple Yet Magical: Keyspace Sharing in Action
Now we have Dragonfly speaking both Memcached and Redis APIs. Another powerful feature that dramatically eases the migration process is Dragonfly’s keyspace sharing between protocols. String values written via the Memcached API become immediately accessible through the Redis API in database 0, and vice versa. Let’s see it in action.
First, let’s set the value using the Memcached protocol:
# === Memcached API === #
dragonfly-memcached$> set mykey 0 0 5
hello
#=> STORED
The same key can be read via the Redis protocol immediately. Note that keyspace sharing is only available for Redis API logical database 0, which is the default database.
# === Redis API === #
dragonfly-redis$> SELECT 0 # Switch to database '0' just in case.
#=> OK
dragonfly-redis$> GET mykey
#=> "hello"
Next, we can try to modify the value using the Redis protocol and read back from the Memcached side:
# === Redis API === #
dragonfly-redis$> SET mykey "world"
#=> OK
# === Memcached API === #
dragonfly-memcached$> get mykey
#=> VALUE mykey 0 5
#=> world
#=> END
This bidirectional compatibility means you can gradually migrate different parts of your application at your pace or even maintain hybrid access patterns for a long while.
Important Considerations for Keyspace Sharing
While keyspace sharing is powerful, it’s essential to understand the boundaries between protocols. Memcached only understands strings, or opaque blobs of bytes. If you create complex data types using Redis commands, they cannot be manipulated from the Memcached side. Moreover, any value set via the Memcached API will overwrite an existing complex value, replacing it with a string.
# === Redis API === #
# Create a set with 5 members.
dragonfly-redis$> SADD myset 1 2 3 4 5
#=> (integer) 5
# === Memcached API === #
# Attempt to write via Memcached. This will overwrite the set and change it to the string 'hello'.
dragonfly-memcached$> set myset 0 0 5
hello
#=> STORED
# === Redis API === #
dragonfly-redis$> TYPE myset
#=> string
dragonfly-redis$> GET myset
#=> "hello"
Similarly, the Memcached flush_all command affects the entire database 0, including all data types stored, not only strings.
Migration Best Practices
With the dual-mode and keyspace sharing features discussed above, keep the following in mind while migrating from your existing Memcached setup to Dragonfly:
- Start with Strings: Initially use the shared keyspace only for string values accessible from both protocols.
- Progressive Migration: Moving services from Memcached to Redis API one by one can be a safe bet.
- Type Awareness: When using Redis-specific data types, manage those key names carefully so that Memcached writes don’t overwrite them.
The dual-mode approach largely eliminates migration risk while providing immediate benefits: no client changes needed, immediate Dragonfly performance and cost efficiency, and the freedom to adopt Redis-compatible features when you’re ready.
Beyond Strings: Start Using Rich Data Types
Migration to Dragonfly isn’t just about maintaining existing functionality; it’s an opportunity to enhance your application with better performance, cost efficiency, and powerful data modeling capabilities. Let’s revisit the user profile JSON object example from earlier.
Before, updating a single field was a multi-step process: read, deserialize, modify, serialize, and write back. Now, we can achieve the same by running a single JSON command:
# === Redis API === #
# Store the complete user profile as JSON.
dragonfly-redis$> JSON.SET user:123 $ '{"id":123,"name":"Joe","email":"joe@test.com","age":30,"prefs":{"theme":"dark","lang":"en"}}'
#=> OK
# Update only the theme preference with a single command.
dragonfly-redis$> JSON.SET user:123 $.prefs.theme '"light"'
#=> OK
# Verify the modification by reading only the 'prefs' nested object.
dragonfly-redis$> JSON.GET user:123 $.prefs
#=> "[{\"lang\":\"en\",\"theme\":\"light\"}]"
It’s important to note that we cannot magically convert a string set from the Memcached side into a JSON object. To use the powerful JSON.SET and other related commands, the original data must be stored as a JSON type from the beginning.
For any data that has meaningful structure (whether lists, queues, relationships, or rankings), using the proper native data type (lists, hashes, sets, sorted sets, etc.) unlocks type safety and much more convenient operations. By embracing Dragonfly’s rich data types, you’re upgrading your application’s data layer to handle modern requirements with elegance.
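As one illustration of the point above, a ranking fits naturally into a sorted set rather than a serialized list. This is an illustrative session (key and member names are hypothetical), again assuming a running Dragonfly instance:

```
# === Redis API === #
# Scores and members live in a sorted set; no client-side sorting needed.
dragonfly-redis$> ZADD leaderboard 150 "joe" 90 "amy" 210 "sam"
#=> (integer) 3
# Read the top two players, highest score first, with no deserialization.
dragonfly-redis$> ZRANGE leaderboard 0 1 REV WITHSCORES
#=> 1) "sam"
#=> 2) "210"
#=> 3) "joe"
#=> 4) "150"
```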
Final Thoughts
Dragonfly offers the perfect migration path for your Memcached workloads, preserving your existing caching investments while gaining rich data types and superior performance. Start with dual-mode operation, gradually introduce JSON and other data structures, and transform your caching layer into a powerful data platform—all without risky rewrites. Ready to begin? Run Dragonfly with the server flag --memcached_port enabled and experience the best of both worlds.