Dragonfly

Google Memorystore: Architecture, Pros/Cons and Best Practices

Memorystore is a Google Cloud service providing fully managed, high-performance in-memory data storage.

September 16, 2025


What Is Memorystore? 

Memorystore is a Google Cloud service providing fully managed, high-performance in-memory data storage. It supports popular caching engines such as Valkey, Redis, and Memcached, enabling low-latency and high-throughput data access. This service is commonly used to optimize web applications, gaming platforms, analytics, and more. With features like built-in replication, automated failover, and integration with other Google Cloud services, Memorystore minimizes operational overhead and ensures uninterrupted performance.

In this article:

  • Memorystore Supported Engines
  • Memorystore Architecture
  • Key Features of Memorystore
  • Key Limitations of Google Memorystore
  • Memorystore Pricing
  • Best Practices for Effective Use of Memorystore

Memorystore Supported Engines 

Redis

Memorystore for Redis provides a fully managed, in-memory key-value store based on the open-source Redis engine. It supports most core Redis data structures, including strings, hashes, lists, sets, and sorted sets, allowing developers to implement complex caching and data manipulation logic directly within the cache layer.

The service offers features such as automatic failover, replica synchronization, monitoring, and IAM-based access control. Memorystore also manages patching and maintenance, eliminating operational overhead. However, some Redis modules and commands are not supported to ensure stability and security in the managed environment.

Valkey

Valkey is an open-source fork of Redis, initiated as a community-driven alternative following Redis’ licensing changes. Memorystore’s support for Valkey enables users to adopt this emerging engine while maintaining compatibility with Redis APIs and data structures.

Valkey support is designed to future-proof caching architectures and align with open governance principles. It aims to deliver performance and feature parity with Redis while fostering faster innovation through a more open development model. Memorystore ensures that Valkey deployments receive the same operational benefits as Redis, including monitoring, scalability, and security integration.

Memcached

Memorystore for Memcached offers a lightweight, volatile caching solution suitable for simple key-value use cases where data persistence and complex structures are not required. It supports fast, in-memory storage for frequently accessed data such as dynamically rendered HTML, session data, or API responses. Memcached instances in Memorystore are managed by Google Cloud, with built-in monitoring to reduce maintenance effort.

Clustered Data Stores

For both Redis and Valkey, Memorystore offers clustering, which adds horizontal scalability. It partitions data across multiple shards, each with its own memory and compute resources, allowing the cache to scale beyond the limitations of a single node.

This setup supports high throughput and availability by enabling parallel processing and distributing load across nodes. Applications must use a cluster-aware client to interact with the sharded dataset, as the cluster mode changes how keys are mapped and accessed. Failover is handled per shard, further improving resilience in large deployments.
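To make the key-mapping concrete: cluster-aware clients decide which shard owns a key by hashing it to one of 16,384 slots. Below is a minimal, illustrative sketch of the slot calculation described by the open-source Redis Cluster specification (CRC16 modulo 16384, with hash-tag support) — production clients such as redis-py's `RedisCluster` implement this for you:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum named by the Redis Cluster spec."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def hash_slot(key: str) -> int:
    """Map a key to one of 16384 cluster slots, honoring {hash tag} substrings."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384


# Keys sharing a hash tag map to the same slot (and therefore the same shard),
# which is what makes multi-key operations on them possible in cluster mode.
print(hash_slot("{user:42}:profile") == hash_slot("{user:42}:sessions"))  # True
```

Hash tags give applications control over co-location: everything inside `{...}` determines the slot, so related keys can be kept on one shard while the rest of the keyspace spreads evenly.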

For Memcached, Memorystore follows the engine's traditional scaling model: client-side sharding, with an auto-discovery service that keeps clients informed of node membership.


Memorystore Architecture

Memorystore is designed to simplify in-memory data storage by offering a fully managed service that handles the operational complexity of Redis, Valkey, and Memcached. Its architecture supports high availability, scalability, and performance with minimal manual intervention.

Memorystore automates core operations such as provisioning, replication, patching, and failover. Redis and Valkey clusters are designed for zero-downtime scaling and support up to 250 shards per cluster instance. These instances can distribute shards across multiple zones to maximize resilience and ensure sub-millisecond latency even at large scales.

Connectivity is provided through private networking options such as Private Service Connect, depending on the engine, to improve security and manageability. Monitoring is integrated via Cloud Monitoring.

Security is enforced through VPC isolation, IAM roles, in-transit encryption, and authentication controls. Memorystore also supports persistence options like RDB snapshots.


Key Features of Memorystore

Here are the primary features offered by the Memorystore service:

  1. Fully Managed Infrastructure: Handles provisioning, patching, replication, and failover automatically.
  2. Zero-Downtime Scaling: Scale Memorystore for Redis and Valkey clusters up to 250 nodes and terabytes of capacity.
  3. High Availability:
    • Cluster instances: 99.99% SLA with zonal redundancy and automatic failover.
    • Standalone instances: 99.9% SLA with automatic failover.
  4. Multiple Caching Engines: Supports Redis, Valkey, and Memcached with full protocol compatibility.
  5. Security: Supports VPC networks and private IP and comes with IAM integration and in-transit encryption.
  6. Monitoring: Integrated with Cloud Monitoring and supports OpenCensus for client-side insights.
  7. Persistence: Append-Only File (AOF) logging or RDB snapshots for durable recovery in Redis and Valkey deployments.
  8. Migration-Ready: Compatible with open-source protocols. Migrate existing setups without code changes.
  9. Vector Search Support (Preview): Includes approximate (ANN) and exact (KNN) nearest neighbor search for generative AI use cases.

Key Limitations of Google Memorystore 

While Google Memorystore is a respected solution, it has some limitations to be aware of. The following limitations were reported by users on G2.

  1. High Cost Compared to Alternatives: A common concern among users is the higher cost of Memorystore relative to self-hosted or third-party solutions. Pricing can become especially burdensome for high-memory instances or workloads with large data volumes. Several users noted that while the service is convenient, costs can accumulate quickly as usage scales, making it less attractive for budget-conscious teams or smaller organizations.
  2. Limited Customization and Feature Gaps: Because Memorystore is a managed service, it offers limited flexibility for users needing advanced configurations. Some Redis commands and modules are not supported, restricting use cases that rely on extended Redis functionality. Additionally, support for caching engines other than Redis and Memcached is lacking, with some users requesting broader engine compatibility.
  3. Monitoring and Alerting Constraints: While basic monitoring is integrated, users have expressed a need for more detailed observability tools. There are limitations in the granularity of built-in alerts and metrics, which can make proactive incident management more difficult for complex or high-throughput deployments.
  4. Risk of Vendor Lock-In: Several users mentioned concerns about becoming too reliant on the Google Cloud ecosystem. While Memorystore integrates seamlessly within GCP, this tight coupling can pose challenges for organizations planning to adopt a multi-cloud strategy or migrate workloads in the future.
  5. Documentation and Usability Issues: Some reviews mentioned that documentation could be more complete, particularly around IAM configurations and performance tuning. The user interface was also reported to have occasional performance issues, and learning the platform can be challenging for teams unfamiliar with Google Cloud conventions.
  6. Data Durability and Persistence: Memorystore is primarily designed for ephemeral caching use cases. While Redis/Valkey-based instances support persistence options, some users noted concerns about data loss risks during node failures or restarts, especially when persistence is not configured.

Memorystore Pricing

Google’s pricing for the Memorystore service varies depending on the caching engine used. Below we show the main pricing options for Redis Cluster, Redis, and Memcached as of September 2025. Google Cloud pricing is subject to change; for up-to-date pricing and additional options, see the official pricing pages for Redis and Memcached.

Memorystore for Redis Cluster Pricing

Memorystore for Redis Cluster pricing is determined by several key components: node type, provisioned capacity, region, replica count, and optional features like AOF persistence and backups.

Node-Based Pricing

The cost is primarily based on the number and type of nodes provisioned. Node types vary in memory capacity—from 1.4 GB nano nodes to 58 GB xlarge nodes—and prices differ accordingly. For example, in the Taiwan region, a redis-shared-core-nano node costs $0.0368 per hour, while a redis-highmem-xlarge node costs $0.9936 per hour. Charges are incurred in one-second increments from the time an instance is created.

AOF Persistence

For instances with AOF (Append Only File) persistence enabled, an additional per-GB hourly charge applies based on the instance’s size and region. For example, in Johannesburg, the AOF storage rate is $0.00071671 per GB per hour. This charge also starts accruing in one-second increments once AOF is enabled.

Backups

Backup storage is billed separately. Each backup incurs a per-GB hourly cost depending on region, with a minimum charge of 24 hours. In Johannesburg, the backup rate is $0.00013889 per GB per hour. Backups are not automatically deleted when a cluster is removed.

Networking Costs

Memorystore uses Private Service Connect for secure access. There is no charge for intra-zone traffic or deploying PSC endpoints. However, inter-zone traffic within a region incurs data processing charges, and inter-region replication results in egress costs. For example, traffic from a cluster in North America to a secondary cluster in Europe costs $0.05 per GB.

Example Calculation

A Redis Cluster instance with 5 shards and one replica per shard (10 total nodes), using the redis-highmem-medium type in Iowa (at $0.1923 per node per hour), would cost approximately $1.92 per hour. Additional AOF or backup charges and network egress fees may apply depending on configuration.

Memorystore for Redis Pricing

Pricing for Memorystore for Redis is determined by multiple factors: service tier (Basic or Standard), provisioned capacity, regional deployment, and optional configurations like read replicas. Google Cloud charges in 1-second increments, beginning from the moment a Redis instance is created.

Service Tiers

  • Basic Tier provides a standalone Redis instance suitable for simple caching use cases.
  • Standard Tier offers high availability through cross-zone replication and automatic failover. It also supports read replicas to scale read throughput and improve redundancy.

Each tier has distinct pricing structures based on capacity and performance requirements.

Provisioned Capacity and Tiered Pricing

Pricing is based on the instance’s capacity tier, which is determined by the amount of memory provisioned. Larger capacity tiers offer better network throughput and lower per-GB rates. Example pricing in the Johannesburg region:

| Tier | Capacity Range | Basic ($/GB/hr) | Standard ($/GB/hr) |
|------|----------------|-----------------|--------------------|
| M1   | 1–4 GB         | $0.06409        | $0.08371           |
| M2   | 5–10 GB        | $0.03532        | $0.07063           |
| M3   | 11–35 GB       | $0.03008        | $0.06017           |
| M4   | 36–100 GB      | $0.02485        | $0.04578           |
| M5   | >100 GB        | $0.02093        | $0.03924           |

If an administrator changes an instance’s capacity and moves it to a different tier, billing switches to the new rate once scaling is complete.
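Because the per-GB rate depends on which capacity tier the provisioned size falls into, estimating an instance's hourly cost is a lookup-and-multiply exercise. A small sketch using the Johannesburg rates from the table above (rates are region-specific and subject to change, so treat the numbers as illustrative):

```python
# (tier name, max GB in tier, Basic $/GB/hr, Standard $/GB/hr) - Johannesburg rates.
TIERS = [
    ("M1", 4, 0.06409, 0.08371),
    ("M2", 10, 0.03532, 0.07063),
    ("M3", 35, 0.03008, 0.06017),
    ("M4", 100, 0.02485, 0.04578),
    ("M5", float("inf"), 0.02093, 0.03924),
]


def hourly_cost(capacity_gb: int, standard: bool = False) -> float:
    """Hourly price for a Memorystore for Redis instance of the given size."""
    for name, max_gb, basic_rate, standard_rate in TIERS:
        if capacity_gb <= max_gb:
            rate = standard_rate if standard else basic_rate
            return capacity_gb * rate
    raise ValueError("capacity out of range")


# An 8 GB Basic Tier instance falls in M2: 8 * $0.03532 = ~$0.28/hour.
print(round(hourly_cost(8), 4))  # 0.2826
```

Note that the whole provisioned capacity is billed at the single rate of the tier it lands in — there is no blended per-tier billing, which is why crossing a tier boundary changes the effective per-GB price.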

Read Replica Pricing

Standard Tier instances support up to five read replicas (M2 and above). Each node, including replicas, is billed individually at a per-GB rate. Example replica pricing in Johannesburg:

| Capacity Tier | Price per GB/hr per Node |
|---------------|--------------------------|
| M2            | $0.03532                 |
| M3            | $0.03008                 |
| M4            | $0.02485                 |
| M5            | $0.02093                 |

Replicas can improve read performance and availability but add to total instance costs.

Network Pricing

  • Intra-region traffic (within the same region) is free from the Memorystore side, though clients may incur cross-zone egress charges.
  • Inter-region traffic incurs network egress fees, e.g., $0.01/GB in North America, $0.02/GB in Europe, and up to $0.15/GB for Indonesia and Oceania.

These charges apply when clients access Redis instances across different Google Cloud regions.

Pricing Examples

  • A Basic Tier 8 GB M2 instance in Iowa, at $0.027/GB/hr, would cost $0.22 per hour or about $160.60 per month.
  • A Standard Tier 20 GB M3 instance running for 90 minutes would cost $1.38 (20 GB × $0.046 × 1.5 hours). Scaling it to 50 GB (M4) would then switch billing to $0.035/GB/hr from the moment scaling finishes.

Memorystore for Memcached Pricing

Memorystore for Memcached pricing is determined by the number of nodes, the vCPUs and memory provisioned per node, and the region where the instance is deployed. Charges accrue in 1-second increments from the time an instance is created.

Cost Components

Memcached pricing includes two main elements:

  • vCPUs per Node: Each node is billed based on the number of virtual CPUs provisioned.
  • Memory per Node: Memory pricing varies depending on whether the node has more than 4 GB of RAM.

The total hourly cost for an instance is calculated by multiplying the per-unit price of vCPUs and memory by the number of nodes and their individual configurations.

For example, in the Taiwan (asia-east1) region:

| Item       | Price (per hour) |
|------------|------------------|
| vCPU       | $0.058 per vCPU  |
| ≤ 4 GB RAM | $0.0051 per GB   |
| > 4 GB RAM | $0.0103 per GB   |

Network Charges

Memcached traffic is required to stay within the same region as the instance. Memorystore itself does not charge for ingress or egress. However, data movement to and from other Google Cloud services (e.g., Compute Engine) may incur egress charges under those services’ pricing policies.

Pricing Examples

  • Small Instance:
    • 1 node in Iowa (us-central1)
    • 1 vCPU and 1 GB RAM
    • Hourly cost: (1 × $0.050) + (1 × $0.0044) = $0.0544/hour
  • Larger Instance:
    • 4 nodes in Iowa (us-central1), each with 4 vCPUs and 25 GB RAM
    • Total: 16 vCPUs and 100 GB memory
    • Hourly cost: (16 × $0.050) + (100 × $0.0089) = $1.69/hour

Best Practices for Effective Use of Memorystore

Organizations should implement the following practices when using Memorystore.

1. Memory Management

Efficient memory management in Memorystore starts with choosing the right data structures. For example, use strings for simple values and hashes for grouping related fields. Avoid large lists or sets that grow indefinitely, as they can quickly exhaust memory. Set expiration (TTL) for cache entries to automatically evict stale data and free up space.

Use eviction policies like volatile-lru or allkeys-lru to handle memory pressure gracefully. Monitor key eviction rates and memory usage through Google Cloud metrics. Over-provisioning memory can reduce eviction risks, but it increases cost—aim for a balance based on workload patterns. Avoid storing large blobs; use object stores like Cloud Storage for such data and cache only metadata or frequently accessed parts.
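To make TTLs and LRU eviction concrete, here is a toy, purely illustrative in-process model of what a volatile-lru-style policy does. Memorystore enforces eviction server-side via the engine's maxmemory policy — client code never implements this — so the class below exists only to show the mechanics:

```python
import time
from collections import OrderedDict


class ToyLRUCache:
    """Illustrative LRU cache with per-entry TTLs (not production code)."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self.entries = OrderedDict()  # key -> (value, expires_at or None)

    def set(self, key, value, ttl_seconds=None):
        expires_at = time.monotonic() + ttl_seconds if ttl_seconds else None
        self.entries[key] = (value, expires_at)
        self.entries.move_to_end(key)  # mark as most recently used
        while len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict the least recently used

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.monotonic() > expires_at:
            del self.entries[key]  # lazily expire stale entries on read
            return None
        self.entries.move_to_end(key)  # reads also refresh recency
        return value
```

The two mechanisms complement each other: TTLs bound how long stale data can live, while LRU eviction bounds total memory when the working set outgrows capacity.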

2. Maintenance and Availability

Memorystore is managed by Google, but users still need to plan for scheduled maintenance and failover scenarios. Use the Standard Tier for Redis to benefit from automatic replication and zone-level high availability. Memorystore automatically applies patches and updates, minimizing manual effort, but applications should be resilient to brief instance restarts or reconfigurations.

For maintenance resilience, implement retry logic and exponential backoff in clients. For Redis Cluster, monitor shard health and redistribute load proactively if a node fails. Schedule maintenance during off-peak hours when possible and subscribe to maintenance notifications in the Cloud Console.
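The retry-with-backoff pattern above can be sketched in a few lines. This is a hypothetical helper, not a Memorystore API — many clients (redis-py included) also offer built-in retry configuration:

```python
import random
import time


def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry a cache operation, doubling the wait (with jitter) each attempt."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids herding
```

During a failover or maintenance event, early attempts fail while the replica is promoted; the growing delays give the instance time to recover instead of hammering it, and the jitter keeps a fleet of clients from retrying in lockstep.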

3. Security and Access Control

Memorystore integrates with Google Cloud IAM for administrative access control. Use IAM roles to grant only the minimum required permissions to users and service accounts. Avoid granting overly broad roles like Editor unless necessary.

For network-level security, deploy instances in private VPCs and use Private Service Connect (PSC) to control access. Restrict access to specific IP ranges or subnets when needed. Note that Memorystore for Memcached does not support in-transit encryption, and Redis instances ship with it disabled by default, so secure network design and firewall rules are essential to protect sensitive data.

4. Monitoring and Alerts

Use Google Cloud Monitoring to track key performance indicators such as memory usage, command rates, eviction counts, and replication lag. Set up custom dashboards to visualize trends and identify bottlenecks early.

Configure alerts for critical thresholds, such as memory utilization exceeding 80%, high command latency, or frequent failovers. For Redis, monitor metrics like used_memory_rss, connected_clients, and instantaneous_ops_per_sec.
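These counters surface in Cloud Monitoring, and the same values can be read from the engine's INFO output. A small sketch that parses INFO-style text and flags a memory-utilization threshold — the sample text and the 1 GB capacity are made-up values for illustration:

```python
def parse_info(info_text: str) -> dict:
    """Parse Redis/Valkey INFO output into a key -> value dict."""
    metrics = {}
    for line in info_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            metrics[key] = value
    return metrics


def memory_alert(metrics: dict, capacity_bytes: int, threshold: float = 0.8) -> bool:
    """True when resident memory exceeds the alerting threshold."""
    used = int(metrics["used_memory_rss"])
    return used / capacity_bytes > threshold


SAMPLE_INFO = """\
# Memory
used_memory_rss:900000000
# Clients
connected_clients:12
# Stats
instantaneous_ops_per_sec:3400
"""

metrics = parse_info(SAMPLE_INFO)
print(memory_alert(metrics, capacity_bytes=1_000_000_000))  # 90% used -> True
```

In practice you would alert on the managed metrics rather than polling INFO yourself, but the thresholding logic is the same either way.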

5. Scaling and Performance

For performance, distribute load across multiple clients within a client pool and use pipelining to reduce latency. Choose the appropriate instance size and adjust based on metrics like CPU load and memory usage. In Redis Cluster, design keys to ensure even shard distribution and avoid hotkeys that overload a single node.

Memcached scales horizontally by default; add nodes to handle increased load. Redis supports vertical and horizontal scaling via Standard Tier and Redis Cluster. Plan for scaling events during low-traffic windows and validate client compatibility before switching to cluster mode. Always benchmark changes in staging environments to ensure they meet performance goals.
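One common way to defuse a hotkey in a clustered deployment is to split a hot counter into several subkeys that hash to different slots, incrementing one at random on write and summing them all on read. A conceptual sketch, with a plain dict standing in for a real cluster client:

```python
import random

NUM_SUBKEYS = 8  # spread one hot counter across several keys (and shards)


def subkeys(base: str):
    return [f"{base}:{i}" for i in range(NUM_SUBKEYS)]


def increment(store: dict, base: str, amount: int = 1) -> None:
    """Write path: bump one randomly chosen subkey, spreading write load."""
    key = random.choice(subkeys(base))
    store[key] = store.get(key, 0) + amount


def total(store: dict, base: str) -> int:
    """Read path: sum all subkeys to recover the full counter value."""
    return sum(store.get(key, 0) for key in subkeys(base))


store = {}
for _ in range(1000):
    increment(store, "page_views")
print(total(store, "page_views"))  # 1000
```

The trade-off is a fan-out read (one fetch per subkey, or an `MGET`), so this pattern suits counters that are written far more often than they are read.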


Dragonfly: The Next-Generation In-Memory Data Store

Dragonfly is a modern, source-available, multi-threaded, Redis-compatible in-memory data store that stands out by delivering unmatched performance and efficiency. Designed from the ground up to disrupt existing legacy technologies, Dragonfly redefines what an in-memory data store can achieve. With Dragonfly, you get the familiar API of Redis without the performance bottlenecks, making it an essential tool for modern cloud architectures aiming for peak performance and cost savings. Migrating from Redis to Dragonfly requires zero or minimal code changes.

Key Advancements of Dragonfly

  • Multi-Threaded Architecture: Efficiently leverages modern multi-core processors to maximize throughput and minimize latency.
  • Unmatched Performance: Achieves 25x better performance than Redis, ensuring your applications run with extremely high throughput and consistent latency.
  • Cost Efficiency: Reduces hardware and operational costs without sacrificing performance, making it an ideal choice for budget-conscious enterprises.
  • Redis API Compatibility: Offers seamless integration with existing applications and frameworks running on Redis while overcoming its limitations.
  • Innovative Design: Built to scale vertically and horizontally, providing a robust solution for rapidly growing data needs.

Dragonfly Cloud is a fully managed service from the creators of Dragonfly, handling all operations and delivering effortless scaling so you can focus on what matters without worrying about in-memory data infrastructure anymore.

