
Redis Interview Questions for DevOps and Platform Engineers

Prepare for your next DevOps or Platform engineering interview. Let's learn and discuss best practices for managing high-performance, reliable Redis systems.

May 28, 2024


Introduction

Redis (or Valkey, as its most recent fork is known) is a popular in-memory data structure store used as a cache, real-time statistics store, and message broker. Its performance, simplicity, and support for various data structures make it a go-to choice for many organizations. Since Redis generally sits on the critical path, an outage can have a significant impact on the business. For example, if Redis manages the inventory for your Black Friday flash sales, keeping those instances healthy is crucial to avoid revenue loss during peak times, as discussed in this blog post.

Understanding Redis is essential due to its extensive adoption and presence in many existing systems. If you're preparing for a DevOps or Platform engineer interview, it's crucial to have a solid understanding of Redis. In this blog, we'll cover some Redis operational questions that you might encounter. It's also worth noting that Dragonfly, our modern multi-threaded, high-performance drop-in replacement for Redis, can address some of the limitations and challenges associated with Redis.


Eviction Policies

Let's start with the classical question: What are the eviction policies for Redis?

Redis uses eviction policies to handle situations when the dataset size reaches the maximum memory limit. These policies determine how Redis will free up memory. Redis provides eight eviction policies—that is a lot! A good strategy is to group them into three categories based on their behavior.

No Eviction

  • noeviction: Rejects write commands with an error once the memory limit is reached; no new data is saved. It's the safest policy for existing data, but it can cause application writes to fail when memory is exhausted.

Eviction on All Keys

  • allkeys-lru: Evicts the least recently used (LRU) keys from the entire dataset.
  • allkeys-lfu: Evicts the least frequently used (LFU) keys from the entire dataset.
  • allkeys-random: Randomly evicts keys from the entire dataset.

Eviction on Expiring Keys

  • volatile-lru: Evicts the least recently used (LRU) keys that have an expiration.
  • volatile-lfu: Evicts the least frequently used (LFU) keys that have an expiration.
  • volatile-random: Randomly evicts keys that have an expiration.
  • volatile-ttl: Evicts keys that have an expiration, prioritizing those with the shortest remaining time-to-live (TTL).

Choosing the right eviction policy depends on the application's requirements. For instance, if we need to ensure certain data is always available, volatile strategies might be suitable, as they only evict keys with expiration.

LRU and LFU are two common algorithms used in eviction policies. They are based on simple but logical assumptions about the keys: if a key was accessed recently or frequently, it's likely to be accessed again soon. Despite their simplicity, they often perform well in practice. However, we still have to choose one of these policies, which can be tricky when access patterns are less predictable; sometimes we just don't have enough information to make an informed decision.
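To make this concrete, here is a minimal configuration sketch (the values are illustrative, not recommendations) showing how a memory limit and an eviction policy are typically set together:

# In the redis.conf file.

# Cap memory usage; eviction kicks in at this limit.
maxmemory 2gb

# Evict least recently used keys across the entire dataset.
maxmemory-policy allkeys-lru

# Redis approximates LRU/LFU by sampling keys; a larger sample
# size gives more accurate eviction at a small CPU cost.
maxmemory-samples 10

The same settings can also be changed at runtime, for example with redis-cli CONFIG SET maxmemory-policy allkeys-lfu, which makes it easy to experiment with different policies on a staging instance.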

Dragonfly, on the other hand, provides only one eviction policy. Once configured with --cache_mode=true, Dragonfly uses a more sophisticated algorithm that combines LRU and LFU to make eviction decisions. As you can see, if the same interview question were asked in the context of Dragonfly, the answer would be much simpler. If you are interested in learning more about Dragonfly's cache design, check out this blog post.


Persistence & Snapshots

While the choice of eviction policy is a shared concern between development and operations, taking snapshots for persistence and backup sits more on the operational side. Let's dive into another frequently asked question: How does snapshotting work, and what are the major risks?

Redis uses a snapshotting mechanism called RDB (Redis Database) to persist data on disk. This process involves creating a point-in-time snapshot of the entire dataset and saving it to a binary format file.

How Snapshotting Works

  • The SAVE Command: Manually triggers a synchronous snapshotting process on the main thread. This command should almost never be used in production because it blocks the main thread, causing Redis to stop serving client requests.
  • The BGSAVE Command: Forks a child process to create a snapshot while the main process continues to handle client requests. This avoids blocking the main thread, so Redis can continue handling incoming operations, but it consumes additional resources, which we will discuss later.
  • Automatic Snapshots: Can be configured using the save directive in the Redis configuration file, specifying intervals and conditions for creating snapshots. The following examples show how to configure Redis to automatically create snapshots based on time elapsed and the number of changes:
# In the redis.conf file.

# General format: save <seconds> <changes> [<seconds> <changes> ...]

# After 3600 seconds, if at least 1 change occurred, save a snapshot.
save 3600 1

# Save a snapshot if either condition is met:
# - 600 seconds have elapsed and at least 100 changes occurred.
# - 60 seconds have elapsed and at least 5000 changes occurred.
# (Multiple conditions on a single save line require Redis 7+.)
save 600 100 60 5000
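In day-to-day operations, snapshots are often triggered and inspected from the command line. A quick sketch using redis-cli:

# Trigger a background snapshot (forks a child process).
redis-cli BGSAVE

# UNIX timestamp of the last successful snapshot.
redis-cli LASTSAVE

# Inspect snapshot status via fields such as rdb_bgsave_in_progress,
# rdb_last_bgsave_status, and rdb_last_save_time.
redis-cli INFO persistence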

Comparison with AOF (Append-Only File) Persistence

Redis also supports another persistence mechanism called AOF (Append-Only File).

  • RDB Snapshots: Capture the entire dataset, recording the actual data at a particular moment. This method is faster for recovery since it involves loading the snapshot directly into memory without re-executing commands.
  • AOF Files: Log every write operation to the dataset, recording the sequence of commands. Recovery involves replaying all the commands, which can be slower if the log file is extensive.

RDB and AOF can be used together, with RDB acting as a backup mechanism and AOF providing continuous durability, which combines the benefits of both methods.
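For reference, a typical hybrid setup in redis.conf might look like the following sketch (the directives are standard; the values are illustrative):

# In the redis.conf file.

# Enable the append-only file.
appendonly yes

# Fsync the AOF once per second: a common balance between
# durability and performance (other options: always, no).
appendfsync everysec

# Write the AOF with an RDB preamble (Redis 4.0+), making
# rewrites more compact and recovery faster.
aof-use-rdb-preamble yes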

Risks and Considerations

Focusing solely on snapshots, there are several risks and considerations to keep in mind. Since snapshots are taken at intervals, any data changes made after the last snapshot and before a crash will be lost.

For large datasets, snapshotting is resource-intensive. The process may involve copying a significant amount of data, which can temporarily consume considerable CPU and memory resources. Specifically, the BGSAVE command creates a child process using the fork() system call. Initially, the child process shares the same memory pages as the parent process (the main Redis instance). When data is modified, the copy-on-write mechanism ensures that modified pages are duplicated, increasing memory usage. As shown in the diagram below, the Redis main process accepts a new write operation that happens to modify a key in memory page-3. Page-3 is then duplicated, and the child process continues to use the original page-3 for snapshotting.

Diagram: Copy-on-Write During Snapshotting

In extreme conditions, if the instance is under heavy write load and almost all keys are updated during the snapshotting process, the memory usage can double, potentially causing the system to run out of resources. As a consequence, over-provisioning Redis instances is a common practice to ensure sufficient resources are available during snapshotting.
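A practical way to keep an eye on this overhead is the INFO command; the fields below are reported by recent Redis versions:

# Extra memory consumed by the last snapshot due to copy-on-write.
redis-cli INFO persistence | grep rdb_last_cow_size

# Duration of the last fork() in microseconds; long forks stall
# the main thread on large instances.
redis-cli INFO stats | grep latest_fork_usec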

Beyond regular snapshot usage, the replication process also involves the primary node creating a snapshot and sending it to the replica node for full synchronization. Having multiple replicas connect and trigger snapshots at the same time should be avoided, as it can cause performance degradation or even crashes on the primary node.
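If disk I/O during full synchronization is a concern, Redis also supports diskless replication, where the primary streams the RDB snapshot directly to replicas over the network instead of writing it to disk first. Note that this avoids the disk write, not the fork and its copy-on-write cost. A sketch:

# In the redis.conf file.

# Stream the RDB directly to replicas without touching disk.
repl-diskless-sync yes

# Wait a few seconds before starting the transfer, so multiple
# replicas can attach to the same snapshot.
repl-diskless-sync-delay 5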

Copy-on-write is a common and established technique used in many systems. However, it's not the best choice for every scenario, especially for in-memory data stores with ultra-heavy write operations. This issue actually became such a troublemaker for system reliability that Dragonfly decided to use a brand new approach for snapshotting that doesn't rely on copy-on-write. More details about Dragonfly's snapshotting algorithm can be found in this blog post. Simply put, Dragonfly requires very minimal and steady memory overhead during snapshotting, which is a significant advancement in terms of maintaining system stability.


Cluster Split-Brain Scenario

Last but not least, let's discuss a more advanced topic: What is a split-brain scenario in Redis Cluster, and how can you avoid it?

A split-brain scenario occurs in a distributed Redis setup when the cluster becomes partitioned due to network issues, leading to multiple primaries being elected in different partitions. This can result in data inconsistencies and conflicts. Imagine a network partition in a six-node cluster (three primaries, three replicas). If the partition divides the cluster into two groups of three, each group might think the other is offline and trigger failovers, leaving both sides with primary shards. When the network heals, reconciling these conflicts becomes challenging, as each side has accepted different write operations.

Diagram: Split-Brain in a Redis Cluster
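For reference, a six-node topology like the one above is typically bootstrapped with redis-cli, as sketched below (the hostnames are placeholders):

# Create a cluster with 3 primaries, each with 1 replica.
redis-cli --cluster create \
  node1:6379 node2:6379 node3:6379 \
  node4:6379 node5:6379 node6:6379 \
  --cluster-replicas 1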

How to Avoid Split-Brain

  • Reliable Network Infrastructure: A stable and reliable network is always valuable. The DevOps or Platform team should continuously monitor the network and the health of the Redis Cluster to minimize the risk of partitions.
  • Cluster Configuration: In Redis Cluster, ensure node slots and replicas are properly configured to maintain data consistency and availability. As a rule of thumb, maintaining an odd number of primary shards and two replicas per primary shard is a good practice to prevent split-brain scenarios. This way, during a partition, the smaller group (the minority) will not trigger a failover or accept writes, reducing the risk of conflicting data changes (see the configuration sketch after this list).
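One concrete safeguard is to make each primary stop accepting writes when it loses contact with its replicas; combined with a sensible node timeout, this narrows the window in which an isolated primary can accept conflicting writes. The directives below are standard Redis configuration; the values are illustrative:

# In the redis.conf file.

# A primary stops accepting writes if fewer than 1 replica
# is connected and responsive...
min-replicas-to-write 1

# ...or if all connected replicas lag more than 10 seconds behind.
min-replicas-max-lag 10

# How long (in milliseconds) a node can be unreachable before it
# is considered failed and a failover may be triggered.
cluster-node-timeout 15000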

By carefully designing your Redis infrastructure and employing these strategies, you can minimize the risk of split-brain scenarios and maintain a robust and reliable Redis Cluster topology. It's worth noting that such a topology is complex to manage. It is also costly, since more Redis instances (and thus more resources) are required to maintain two replicas per primary shard. Whether you use Redis or Dragonfly, if your in-memory dataset is tremendous, a cluster setup is inevitable. However, if your data fits into a single multi-core server with up to 1TB of memory, Dragonfly can be a more cost-effective and simpler solution, as Dragonfly is designed to fully utilize all CPU cores and memory resources on a single server.


Conclusion

Understanding the intricacies of Redis, such as eviction policies, snapshot mechanisms, and handling split-brain scenarios, is crucial for DevOps and Platform engineers. These concepts not only prepare you for interviews but also enable you to build and maintain high-performance, reliable Redis-based systems.

Obviously, we didn't cover all the aspects of managing Redis. If you have encountered other scenarios or have more interesting techniques that you would like to share, please feel free to engage with our growing Discord community.
