
Caching with Dragonfly and TypeScript in 5 Minutes

Add Redis-compatible caching to your TypeScript/JavaScript app with Dragonfly in 5 minutes. A full code walkthrough is included.

July 8, 2025


Introduction: The Beauty of Standard APIs

In modern software development, standards are the unsung heroes. They ensure compatibility, reduce vendor lock-in, and let developers focus on building features rather than reinventing the wheel. In this guide, we’ll explore how a carefully chosen stack leverages standardized (or de facto standard) APIs to build a fast, portable, and maintainable application.

The Tooling Stack & Why Standards Matter

Our project relies on a few key technologies.

Hono is a minimal, fast web framework that adheres to standard web APIs (i.e., the Fetch API and the Request and Response objects), making it work across runtimes. There’s no reason for frameworks to invent proprietary request/response models in 2025. Hono embraces this philosophy by building on the Web Standards, ensuring your code runs identically across every major TypeScript/JavaScript environment.
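To make that concrete, here is a framework-free handler written against the same standard Request/Response contract that Hono builds on. This is an illustrative sketch, not code from the example repository:

```typescript
// A plain fetch-style handler: a standard Request in, a standard Response out.
// This is the same contract Hono route handlers are built on.
const handler = (req: Request): Response => {
  const url = new URL(req.url);
  return new Response(`Hello from ${url.pathname}`, {
    headers: { "Content-Type": "text/plain" },
  });
};

// Works the same way in Bun, Node.js 18+, Deno, and browsers.
const res = handler(new Request("http://localhost/greet"));
```

Because nothing here is framework-specific, code written this way moves between runtimes without modification.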

Zod is a TypeScript-first schema validation library. Over recent years, it has become one of the most popular solutions for schema validation, offering developer-friendly syntax, type safety, and runtime validation. More recently, the creator of Zod collaborated with the authors of other validation libraries (Valibot, ArkType) to establish Standard Schema, a shared specification for validation patterns across the TypeScript/JavaScript ecosystem. While this isn’t an official standard (yet), it represents library authors working together to reduce fragmentation.

Dragonfly is a fully Redis-compatible, multi-threaded, high-performance in-memory data store built for the most demanding workloads. The Redis serialization protocol (RESP) represents one of those cases where a popular API becomes the de facto standard. Dragonfly honors this reality by maintaining full Redis API compatibility while completely rearchitecting the underlying engine. The result is a drop-in replacement that requires zero code changes but delivers dramatically better performance through its modern multi-threaded, shared-nothing architecture.
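To make the “de facto standard” point concrete, here is a tiny sketch of how a RESP client encodes a command on the wire; both Redis and Dragonfly accept exactly these bytes. This is illustrative only, since clients like ioredis handle the encoding for you:

```typescript
// Encode a command as a RESP array of bulk strings:
// "*<count>\r\n" followed by "$<length>\r\n<data>\r\n" per argument.
const encodeCommand = (...args: string[]): string =>
  `*${args.length}\r\n` +
  args.map((a) => `$${Buffer.byteLength(a)}\r\n${a}\r\n`).join("");

const wire = encodeCommand("SET", "greeting", "hello");
// Any RESP-speaking server, Redis or Dragonfly, understands this payload.
```

Because the protocol is this simple and this widely implemented, a server that honors it can swap in underneath any existing client.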

Last but not least, we will pair one of the most advanced open-source relational databases, PostgreSQL, with a lightweight, type-safe ORM and query builder, DrizzleORM, to build the storage layer. We will also utilize development tools like Docker and Bun (one of the fastest runtimes) and other libraries within the TypeScript/JavaScript ecosystem.


Building a URL Shortener with Caching

URL shorteners are a great real-world example of the power of a caching layer. While simple in concept, their traffic patterns create unique challenges. For instance, a single short URL can generate thousands of redirects in minutes when shared virally. Moreover, new links experience immediate bursts of traffic upon sharing, then often decay to near-zero usage over time. Even in this seemingly simple system, a proper caching layer can reduce database load by orders of magnitude while improving response times and system stability. Now let’s walk through the URL shortener project we are about to build. If you want to follow along, all the code snippets in this tutorial are available in our example repository.

Prerequisites

Before we begin, ensure you have the following tools installed:

  • We use the Bun toolchain to run our TypeScript code. Bun is an all-in-one TypeScript/JavaScript runtime & toolkit designed for speed.
  • We also use Docker, which is a platform used to develop, ship, and run applications inside containers. A container is a lightweight, standalone executable that includes everything needed to run a piece of software—code, runtime, system tools, libraries, and settings.
  • Finally, we use npx (as used by Drizzle Kit) to make the database migration process easier. However, if you want to skip npx, the migration script within this repository can be applied directly to your local database as well.

Running PostgreSQL, Redis, and Dragonfly

First, let’s make sure we have PostgreSQL, Redis, and Dragonfly server instances running locally in Docker. Note that we won’t be using Redis and Dragonfly at the same time. Having them both running will be helpful to showcase how easy it is to switch from Redis to Dragonfly.

services:
  dragonfly:
    image: "docker.dragonflydb.io/dragonflydb/dragonfly"
    container_name: "cache-with-hono-dragonfly"
    ports:
      - "6380:6379"
  redis:
    image: "redis:latest"
    container_name: "cache-with-hono-redis"
    ports:
      - "6379:6379"
  postgres:
    image: "postgres:17"
    container_name: "cache-with-hono-postgres"
    ports:
      - "5432:5432"
# Some details are omitted. You can find the full example in our example repository.

The Docker Compose configuration above sets up the three services for local development. Nothing special here, except that Redis and Dragonfly are mapped to different local ports. From within the example project directory, dragonfly-examples/cache-in-5mins-hono, run the following command to spin up the database and in-memory data store servers. (Note that all shell commands listed below should be run within the current project directory.)

$> docker compose up -d
#=> [+] Running 4/4
#=>  ✔ Network cache-in-5mins-hono_default  Created     0.0s
#=>  ✔ Container cache-with-hono-redis      Started     0.2s
#=>  ✔ Container cache-with-hono-dragonfly  Started     0.2s
#=>  ✔ Container cache-with-hono-postgres   Started     0.2s

Service Connections Setup

Before actually running our application server, let’s take a look at a few code snippets to have a better understanding. The code below initializes the database and cache connections for our URL shortener service.

import { serve } from "@hono/node-server";
import { Hono } from "hono";
import { drizzle } from "drizzle-orm/node-postgres";
import { Redis as Cache } from "ioredis";
import * as schema from "./schema";

// For simplicity, we are using local Dragonfly/Redis and PostgreSQL instances.
// Please ensure they are running locally and adjust the connection details as needed.
const cache = new Cache({
  host: "localhost",
  port: 6379, // Redis running locally.
});

const db = drizzle(
  "postgresql://local_user_dev:local_pwd_dev@localhost:5432/appdb",
  { schema: schema },
);

const app = new Hono();

// Handlers will be shown later.

// Run the server.
const PORT = 3000;
serve({ fetch: app.fetch, port: PORT });
console.log(`Server running on http://localhost:${PORT}`);

First, we create a Redis-compatible cache client using ioredis, configured to connect to localhost:6379. Later on, we can simply change the port to 6380 and use Dragonfly instead, with no other modifications required. Next, we set up a type-safe PostgreSQL connection through Drizzle, using the connection string and automatically integrating with our application’s schema definitions. Finally, we prepare a fresh Hono application instance that will later contain the route handlers. For production deployments, you would replace these localhost URLs with your actual service endpoints, ideally fetched from a secure location, while keeping the same connection logic.
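One common way to handle that is to read connection details from environment variables, falling back to local defaults in development. The variable names below (CACHE_HOST, CACHE_PORT, DATABASE_URL) are illustrative assumptions, not part of the example repository:

```typescript
// Hypothetical environment-driven configuration sketch.
// The same connection logic works locally and in production;
// only the values change.
const cacheHost = process.env.CACHE_HOST ?? "localhost";
const cachePort = Number(process.env.CACHE_PORT ?? 6379);
const databaseUrl =
  process.env.DATABASE_URL ??
  "postgresql://local_user_dev:local_pwd_dev@localhost:5432/appdb";
```

In production you would typically source these values from a secret manager rather than plain environment files.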

Database Schema

The code below defines our short_links table schema using Drizzle’s declarative syntax. Each URL shortening record contains a UUIDv7 primary key (id), the original URL, and a unique short code string. The timestamps track creation time and expiration, both with timezone awareness. The notNull() constraints ensure data integrity by preventing null values in all fields. What makes this particularly powerful is how Drizzle and Zod combined can transform this static schema definition into both runtime validation and static type inference so that the database schema’s information is automatically blended into the application’s type system.

import { pgTable, uuid, varchar, timestamp } from "drizzle-orm/pg-core";

// Table schema for 'short_links'.
export const shortLinksTable = pgTable("short_links", {
  id: uuid().primaryKey(),
  originalUrl: varchar("original_url", { length: 4096 }).notNull(),
  shortCode: varchar("short_code", { length: 30 }).notNull(),
  createdAt: timestamp("created_at", { withTimezone: true }).notNull(),
  expiresAt: timestamp("expires_at", { withTimezone: true }).notNull(),
});

A few deliberate design choices warrant explanation. The original_url field uses a 4096-character limit, which should cover most use cases, and modern systems like NGINX also default to 4096 bytes for HTTP URLs. For the short_code, we store the base64-encoded UUID (always 22 characters without padding) as a derived field, despite it being technically derivable from the id. This denormalization provides flexibility if encoding strategies change later. While the core shortening logic relies on base64-encoded UUIDs for their fixed length and collision resistance, we acknowledge that real-world URL shorteners prioritize a short domain (e.g., bit.ly); without one, effort spent compressing the code itself yields little benefit. With that said, our implementation favors simplicity and correctness over extreme string compression.
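The 22-character claim is easy to verify: 16 UUID bytes encode to ceil(16 / 3) × 4 = 24 base64 characters, of which the last 2 are padding that base64url omits. A quick round-trip sketch using only Node/Bun built-ins (the byte values are stand-ins for a real UUID):

```typescript
// Stand-in for the 16 bytes of a UUID.
const idBytes = new Uint8Array(16).map((_, i) => i);

// Encode to a URL-safe short code (no '+', '/', or '=' padding).
const shortCode = Buffer.from(idBytes).toString("base64url");

// Decode back to the original bytes.
const decoded = new Uint8Array(Buffer.from(shortCode, "base64url"));
```

The encoding is lossless and fixed-length, which is exactly why it makes a convenient derived short code.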

Validation

Next, we can derive a request validator from our table schema as shown below. This code demonstrates a powerful integration between Drizzle and Zod, creating a validated input pipeline for new short link creations. The createInsertSchema automatically generates a base Zod schema from our Drizzle table definition, which we then customize to enforce strict URL validation for the originalUrl field while omitting other fields that will be auto-generated.

import { createInsertSchema } from "drizzle-zod";
import { v7 as uuidv7, stringify as uuidStringify } from "uuid";
import { z } from "zod/v4";

import { shortLinksTable } from "./schema";

// Validator and transformer for creating a new 'short_links' entry.
// Only the original URL is validated.
// All other fields are transformed/generated by our predefined rules.
export const shortLinkInsertSchema = createInsertSchema(shortLinksTable, {
  originalUrl: () => z.url(),
})
  .strict()
  .omit({
    id: true,
    shortCode: true,
    createdAt: true,
    expiresAt: true,
  })
  .transform((data) => {
    const idBytes = new Uint8Array(16);
    uuidv7(undefined, idBytes);
    const id = uuidStringify(idBytes);
    const shortCode = Buffer.from(idBytes).toString("base64url");
    const createdAt = new Date();
    const expiresAt = new Date(createdAt);
    expiresAt.setDate(expiresAt.getDate() + 30); // Expire in 30 days.
    return {
      ...data,
      id,
      shortCode,
      createdAt,
      expiresAt,
    };
  });

export type ShortLinkInsert = z.infer<typeof shortLinkInsertSchema>;

// Row type for reads, inferred from the same table definition
// (used by the redirect handler later).
export type ShortLinkSelect = typeof shortLinksTable.$inferSelect;

The real magic happens in the .transform() phase, where we generate a UUIDv7 identifier, set creation/expiration timestamps, and derive the 22-character base64-encoded short code from the UUID. The result is a type-safe pipeline that both validates user input and enriches it with system-generated values. This pattern eliminates validation drift between data and request layers while keeping business logic centralized. While this example primarily generates field values (with only originalUrl as user input), the same approach scales elegantly to complex user inputs, maintaining clear boundaries between validated inputs, transformed data, and system-generated values regardless of the number of fields.

Backend Server API

Now we have both the database and request schema defined. Let’s build two API endpoints for our Hono server. The first endpoint below takes in a long URL passed by a user and returns a short URL. It showcases a complete validation-to-persistence flow for creating short links. The zValidator middleware first validates and transforms the incoming JSON payload using our shortLinkInsertSchema defined above. Once validated, the request data, now fully typed as ShortLinkInsert, is inserted into PostgreSQL via Drizzle’s type-safe query builder. The handler operation concludes with proactive caching, where we set the key with precise expiration timing using the EXAT option and the Unix timestamp in seconds. By immediately caching after database insertion, we guarantee subsequent redirects will bypass the database entirely.

import { zValidator } from "@hono/zod-validator";
import { shortLinksTable } from "./schema";
import { shortLinkInsertSchema, ShortLinkInsert } from "./validator";

app.post(
  "/short-links",
  zValidator("json", shortLinkInsertSchema),
  async (c) => {
    // Validate and transform the request.
    const req: ShortLinkInsert = c.req.valid("json");

    // Save the new record in the database.
    await db.insert(shortLinksTable).values(req).execute();

    // Cache the new record in Redis/Dragonfly.
    const expiresAt = Math.trunc(req.expiresAt.getTime() / 1000);
    await cache.set(req.id, req.originalUrl, "EXAT", expiresAt);
    return c.json(req);
  },
);

The second handler below implements the redirect logic with cache-first optimization. When a request hits our short domain (e.g., sh.ort/AZfn75PIc6OSM6nk0WKb6Q), it first checks the caching layer using the decoded UUID as the key. If cached (hot path), it redirects immediately. On cache misses (cold path), it falls back to PostgreSQL, verifying the link exists and hasn’t expired. Crucially, it repopulates the cache on misses using the original expiresAt timestamp, making evicted links available again in the cache. This common two-tiered storage approach is used in many backend services, combining sub-millisecond reads of the caching layer and the durability of a reliable on-disk database, ensuring reliability and maintaining low latency for popular links.

import { eq, and, gt } from "drizzle-orm";
import { stringify as uuidStringify } from "uuid";
import { shortLinksTable } from "./schema";
import { ShortLinkSelect } from "./validator";

app.get("/:shortCode", async (c) => {
  // Parse the short code as a UUIDv7.
  const shortCode = c.req.param("shortCode");
  const idBytes = new Uint8Array(Buffer.from(shortCode, "base64url"));
  const id = uuidStringify(idBytes);

  // Read from cache.
  const originalUrl = await cache.get(id);

  // Cache hit: redirect.
  if (originalUrl) {
    return c.redirect(originalUrl);
  }

  // Cache miss: read from database, cache the record again if it exists.
  const result: ShortLinkSelect | undefined =
    await db.query.shortLinksTable.findFirst({
      where: and(
        eq(shortLinksTable.id, id),
        gt(shortLinksTable.expiresAt, new Date()),
      ),
    });
  if (!result) {
    return c.notFound();
  }
  const expiresAt = Math.trunc(result.expiresAt.getTime() / 1000);
  await cache.set(result.id, result.originalUrl, "EXAT", expiresAt);
  return c.redirect(result.originalUrl);
});

Running the Backend Server

Finally, it’s time to launch! Run bun install to grab dependencies, then use bun run dev to start the server. Test it live by curling:

$> bun install
#=> bun install v1.2.17 (282dda62)
#=> + @types/bun@1.2.17
#=> + @types/pg@8.15.4
#=> + @hono/zod-validator@0.7.0
#=> ...

$> bun run dev
#=> $ bun run --hot src/index.ts
#=> Server running on http://localhost:3000

$> curl --request POST \
  --url http://localhost:3000/short-links \
  --header 'Content-Type: application/json' \
  --data '{
	"originalUrl": "https://www.google.com/"
}'

$> curl --request GET --url http://localhost:3000/{SHORT_CODE}

And that’s it: you have set up a TypeScript/JavaScript project with storage and caching in 5 minutes!


Conclusion: Same API, More Performance

Before we wrap up, let’s try two more things. First, imagine your service goes viral and Redis becomes a scaling bottleneck. You can, and should, switch the cache connection to Dragonfly, and the application continues to work without a hiccup. Thanks to the cache-miss code path, which passively re-caches recently accessed items, Dragonfly automatically warms up with frequently accessed URLs, maintaining peak performance even under the most demanding load.

const cache = new Cache({
  host: "localhost",
  port: 6380, // Dragonfly running locally.
});

How about switching runtimes? Since Hono uses standard Web APIs and Bun aims for 100% Node.js compatibility, our application runs seamlessly across Bun, Node.js, and other TypeScript/JavaScript runtime environments with zero or minimal code and configuration changes.

$> bun run dev-node
#=> $ tsx watch src/index.ts
#=> Server running on http://localhost:3000

(Note that tsx, used by the dev-node script, is a Node.js tool for running TypeScript directly.)

At Dragonfly, the team embraces the same philosophy as Bun, Drizzle, Hono, and Zod: we don’t break compatibility for the sake of change. By preserving the Redis API while rebuilding the underlying architecture for modern multi-core hardware, Dragonfly delivers drop-in compatibility with both vertical (multi-core) and horizontal (multi-node) scalability, without forcing developers to rewrite their apps.

Ready to experience the difference? Get started with Dragonfly locally, or deploy a fully managed data store on Dragonfly Cloud, which will provision and be ready to use in under 5 minutes as well!
