Running Redis in Docker: A Practical Guide

July 6, 2025

What is Redis?

Redis is an open-source, in-memory key-value data store known for its high performance and low latency. It is commonly used as a database, cache, and message broker. Redis stores data mainly in memory, which makes read and write operations extremely fast. It also provides on-disk persistence options for recovery and data safety purposes.

It supports various types of data structures, such as strings, hashes, lists, sets, and sorted sets. Redis also provides features like pub/sub messaging, transactions, and Lua scripting. Because of its speed and versatility, Redis is often used to cache frequent queries, manage sessions, or queue tasks in web applications.
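
As a quick illustration, here are a few of these data types in action from Redis's command-line client, redis-cli (the key names below are arbitrary examples):

127.0.0.1:6379> SET user:1:name "Alice"
OK
127.0.0.1:6379> LPUSH tasks "send-email" "resize-image"
(integer) 2
127.0.0.1:6379> HSET user:1 email "alice@example.com"
(integer) 1
127.0.0.1:6379> ZADD leaderboard 100 "alice"
(integer) 1
127.0.0.1:6379> GET user:1:name
"Alice"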

What is Docker?

Docker is a platform used to develop, ship, and run applications inside containers. A container is a lightweight, standalone, executable package that includes everything needed to run a piece of software: code, runtime, system tools, libraries, and settings.

Docker containers are isolated from each other and the host system, which makes them portable and consistent across development, testing, and production environments. Docker images are the blueprints of containers, built using a Dockerfile that defines the application environment and dependencies.

Benefits of Running Redis in a Docker Container

Running Redis in a Docker container provides several advantages that enhance both development and production workflows:

  • Simplified Deployment: Docker allows Redis to be deployed quickly and consistently across different environments by encapsulating its configuration, dependencies, and runtime into a container.
  • Portability: Redis containers can run on any system that supports Docker, ensuring uniform behavior regardless of the underlying infrastructure or operating system.
  • Isolation: Redis runs in its own isolated container, reducing the risk of conflicts with other services or applications on the host machine.
  • Scalability: Docker makes it easier to spin up multiple Redis containers to support horizontal scaling, which is useful for applications with high throughput or distributed architecture.
  • Version Control: Docker images can be tagged with Redis versions, making it easy to test new versions or roll back to stable ones without complex installations.
  • Infrastructure as Code: Redis configurations and setup can be defined in Docker Compose files or Kubernetes manifests, supporting repeatable and automated deployments.
  • Resource Management: Docker provides control over CPU and memory usage, which helps in allocating the right resources to Redis and avoiding overconsumption (see the example after this list).
  • Rapid Testing and Development: Developers can quickly start a Redis container locally for development or testing, eliminating the need for installing Redis manually.
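
For example, the command below pins a specific Redis version tag and caps the container's memory and CPU usage. The tag and limits shown are arbitrary illustrations, not recommendations:

# Run a pinned Redis version with resource limits (values are illustrative)
docker run --name my-redis -d --memory 512m --cpus 0.5 redis:7.2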

How to Run Redis in Docker

To run Redis in Docker, start by installing Docker Desktop. This gives you access to essential tools like the Docker CLI and Docker Compose and also provides a graphical interface for managing containers and images.

1. Pull the Redis Image

Fetch the official Redis image from Docker Hub using the following command:

docker pull redis
#=> Using default tag: latest
#=> latest: Pulling from library/redis
#=> 37259e733066: Pull complete 
#=> 929c063e7c67: Pull complete 
#=> 6487d14aef1c: Pull complete 
#=> 1951cc36241a: Pull complete 
#=> 210bcecec106: Pull complete 
#=> 4f4fb700ef54: Pull complete 
#=> 6cdbd38be072: Pull complete 
#=> Digest: sha256:b43d2dcbbdb1f9e1582e3a0f37e53bf79038522ccffb56a25858969d7a9b6c11
#=> Status: Downloaded newer image for redis:latest
#=> docker.io/library/redis:latest

This downloads the latest version of Redis. If you want a smaller image, consider pulling a lightweight variant like redis:alpine3.16.
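
For example, you can pull a pinned release or the Alpine-based variant; available tags change over time, so check Docker Hub for the current list:

docker pull redis:7.2        # a specific release
docker pull redis:alpine     # smaller Alpine-based image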

2. Start a Redis Container

You can launch Redis as a background service using:

docker run --name my-redis -d redis
#=> df3788fe7e8ffc1363a818608688d496e78907981b5a7ac95cd6b7bd650742e7

docker ps
#=> CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS      NAMES
#=> df3788fe7e8f   redis     "docker-entrypoint.s…"   4 seconds ago   Up 3 seconds   6379/tcp   my-redis

This creates a container named my-redis and runs it in detached mode. Redis starts automatically and is reachable by other containers on the same Docker network; to connect from applications running on the host, publish the port as shown below.
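
For example, the following maps container port 6379 to the same port on the host so local applications can connect to localhost:6379:

# Publish the Redis port to the host
# (remove or rename the existing my-redis container first if it is already running)
docker run --name my-redis -p 6379:6379 -d redis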

3. Enable Persistent Storage

To ensure Redis can recover its data after a restart, start the server with snapshotting enabled (if the my-redis container from the previous step is still running, remove it first with docker rm -f my-redis):

docker run --name my-redis -d redis redis-server --save 60 1 --loglevel warning

This enables RDB snapshot persistence: Redis writes a snapshot to disk if at least one key has changed within the last 60 seconds. The snapshots are stored in the /data directory inside the container, which can be mounted as a volume and shared between containers.
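
If you also want those snapshots to survive container removal, you can mount a volume to /data at the same time. The volume name redis-data below is an arbitrary example; persistence and volumes are covered in more detail later in this article:

# Named volume keeps the RDB snapshots outside the container's writable layer
# (remove any existing my-redis container first: docker rm -f my-redis)
docker run --name my-redis -d -v redis-data:/data redis redis-server --save 60 1 --loglevel warning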

4. Access Redis via CLI

To interact with the Redis server, you can use Redis CLI, Redis’s built-in command-line interface. Using Docker, you can simulate a real-world setup where a client communicates with a Redis server, both in isolated containers.

First, create a dedicated network to allow communication between containers:

docker network create my-network

Start a Redis server container named my-redis within the same network:

docker run -d --network my-network --name my-redis redis

Next, run a temporary client container to interact with the server. Note that the redis image contains both the Redis server and the Redis CLI client utility. As shown below, use the redis-cli command to start the container as a client:

docker run -it --network my-network --rm redis redis-cli -h my-redis


my-redis:6379> SET key1 value1
OK
my-redis:6379> GET key1
"value1"
  • -it enables interactive mode for entering Redis commands.
  • --rm removes the container after exiting.
  • redis-cli -h my-redis connects to the server (my-redis) over the shared network.

Once connected, you can execute Redis commands (e.g., SET, GET, PING) directly in the terminal. The client container exits cleanly when done, while the server continues running.
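
You can also run a single command non-interactively. For example, assuming the my-redis server container from above is still running on my-network:

docker run --rm --network my-network redis redis-cli -h my-redis PING
#=> PONG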

5. Use Custom Configuration

For production setups, you can use a custom redis.conf file to override default settings. Create a Dockerfile like this:

FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]

Build the Docker image using the following command:

docker build -t custom-redis .

You can then run a container from this custom image using the following command:

docker run -d --name my-redis -p 6379:6379 custom-redis

Alternatively, mount a local directory containing the config:

docker run -v /myredis/conf:/usr/local/etc/redis --name myredis redis redis-server /usr/local/etc/redis/redis.conf

Before doing this, ensure Docker Desktop has permission to access /myredis/conf via its file sharing settings.
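
For reference, a minimal redis.conf for a setup like this might look as follows; the directive values are illustrative assumptions, not production recommendations:

# redis.conf (illustrative values)
bind 0.0.0.0                    # listen on all interfaces inside the container
maxmemory 256mb                 # cap memory usage
maxmemory-policy allkeys-lru    # evict least-recently-used keys when the cap is reached
appendonly yes                  # enable AOF persistence in addition to RDB snapshots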

Persisting Redis Data in Docker Containers

When running inside a Docker container, Redis stores its data in the same default location as it would outside the container: the /data directory inside the container. By default, that data lives in the container’s writable layer, which survives a restart but is destroyed when the container is removed. To persist Redis data beyond the life of the container, mount a named volume or a host directory to the /data directory inside the container. Here’s an example using the docker run command:

docker run -d --name redis-container -v /path/to/your/host/directory:/data redis:latest

In this example, replace /path/to/your/host/directory with the actual path to the desired directory on your host machine. This will ensure that the Redis data inside the container persists even after the container is stopped or removed.

Additionally, if you’re using Docker Compose, here is an example of how to define a Redis service with a named volume:

version: '3'
services:
  redis:
    image: "redis:latest"
    volumes:
      - redis-data:/data

volumes:
  redis-data:
    driver: local

By using this docker-compose.yml file and running docker compose up, Redis data will be stored in a named volume called redis-data.
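
A quick way to confirm that the data actually survives is to write a key, recreate the containers, and read it back (the service name redis matches the Compose file above):

docker compose up -d
docker compose exec redis redis-cli SET greeting hello
docker compose down                                  # removes the container but keeps the named volume
docker compose up -d
docker compose exec redis redis-cli GET greeting     # should print the stored value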


Dragonfly: The Next-Generation In-Memory Data Store

Dragonfly is a modern, source-available, multi-threaded, Redis-compatible in-memory data store that stands out by delivering unmatched performance and efficiency. Designed from the ground up to disrupt existing legacy technologies, Dragonfly redefines what an in-memory data store can achieve. With Dragonfly, you get the familiar API of Redis without the performance bottlenecks, making it an essential tool for modern cloud architectures aiming for peak performance and cost savings. Migrating from Redis to Dragonfly requires zero or minimal code changes.
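
Since this article is about Docker, it is worth noting that Dragonfly itself also ships as a container image. The command below is a rough sketch; check the Dragonfly documentation for the current image name and recommended flags for your platform:

# Start Dragonfly and expose its Redis-compatible port (flags may vary by platform)
docker run -d -p 6379:6379 --ulimit memlock=-1 docker.dragonflydb.io/dragonflydb/dragonfly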

Key Advancements of Dragonfly

  • Multi-Threaded Architecture: Efficiently leverages modern multi-core processors to maximize throughput and minimize latency.
  • Unmatched Performance: Achieves 25x better performance than Redis, ensuring your applications run with extremely high throughput and consistent latency.
  • Cost Efficiency: Reduces hardware and operational costs without sacrificing performance, making it an ideal choice for budget-conscious enterprises.
  • Redis API Compatibility: Offers seamless integration with existing Redis applications and frameworks while overcoming its limitations.
  • Innovative Design: Built to scale vertically and horizontally, providing a robust solution for rapidly growing data needs.

Dragonfly Cloud is a fully managed service from the creators of Dragonfly, handling all operations and delivering effortless scaling so you can focus on what matters without worrying about in-memory data infrastructure anymore.
