
Why is Your In-Memory Data Store Bill so High?

Discover the most common causes of rising Redis/Valkey costs, including over-provisioning, pricing model misalignment, and cluster inefficiencies, along with strategies to address these issues.

November 12, 2024


A few weeks ago, a friend of mine who manages a financial services application reached out with questions that may sound familiar to anyone grappling with high infrastructure costs. They had recently looked into Dragonfly Cloud and noticed how significantly it could cut their expenses compared to their current in-memory data store setup. (And they are now in the process of migrating to Dragonfly Cloud, by the way!) But they couldn't shake their suspicion about the savings they were about to realize: "It's too good to be true. What am I missing? What's the catch?"

Their questions highlighted a phenomenon I've observed among Redis and Valkey users: unexpected costs that quietly accumulate over time. This post breaks down some of the common reasons your Redis bill might be higher than anticipated. Whether you're struggling with excessive data retention, facing high operational overhead, or misjudging your usage needs, understanding and avoiding these pitfalls can lead to real savings and help you better manage your in-memory data stores at scale.


Over-Provisioning Cost

Over-Provisioning for Snapshotting & Replication

One of the most common reasons for high Redis or Valkey bills is over-provisioning, a familiar pattern for anyone managing data-intensive applications. For most databases, a certain level of over-provisioning is expected: it gives you enough headroom to handle peak traffic and a growing user base while avoiding performance issues. With Redis or Valkey, however, this challenge is amplified by their snapshot and backup processes. Because of the copy-on-write mechanism Redis/Valkey uses to take snapshots, memory pages modified while a snapshot is in progress must be duplicated, so a write-heavy instance may require up to twice the memory of its live dataset just to stay stable during snapshotting. In a managed cloud service, this setup may feel more seamless, since you only need to choose your persistence settings, such as how often to take snapshots and whether to use AOF or RDB. However, the extra capacity and operational overhead needed to support snapshotting are already factored into your bill, often leading to unexpected expenses.

You May Need Twice the Memory to Handle Both Live Data & Snapshotting

This over-provisioning challenge is further compounded by replication, which relies on snapshotting for initial synchronization. As a result, the cost of keeping a Redis/Valkey setup reliable and resilient can grow quickly, whether you're running a single instance or a highly available setup with replicas.
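To make this concrete, here's a minimal back-of-the-envelope sketch in Python. It is not an official sizing formula: the copy-on-write factor and replica count are assumptions you would tune for your own workload.

```python
# Rough memory-provisioning estimate for a Redis/Valkey setup.
# Assumptions (illustrative, not vendor guidance):
#   - cow_factor: headroom for copy-on-write during snapshotting.
#     A write-heavy workload can approach 2.0 (double the live dataset);
#     a mostly-read workload may need far less.
#   - Each replica is provisioned like the primary, since initial
#     sync is driven by a snapshot as well.

def provisioned_memory_gb(live_data_gb: float,
                          cow_factor: float = 2.0,
                          replicas: int = 1) -> float:
    """Total memory to provision across the primary and its replicas."""
    per_node = live_data_gb * cow_factor
    return per_node * (1 + replicas)

# Example: 20GB of live data, write-heavy, one replica.
print(provisioned_memory_gb(20))  # 80.0 -> 80GB provisioned for 20GB of data
```

In other words, a modest 20GB dataset can translate into 80GB of provisioned memory once snapshot headroom and a replica are accounted for.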

Over-Provisioning All Cluster Nodes

Now, let's also briefly consider cluster size for horizontal scaling. A single Redis instance has its limits in terms of queries per second (QPS) and memory, so scaling out with Redis Cluster often becomes necessary to meet performance demands. We discuss the cost of clustering in more detail in the next section, but it's worth noting that over-provisioning is a significant driver of in-memory data store bills when you're running a cluster, since the same formula (i.e., 50% to 100% more memory) applies to every node in the cluster.


Clustering Cost

Another factor driving up in-memory data store bills is the cost of clustering, a necessity many users turn to when they outgrow the capacity of a single Redis/Valkey instance. Since Redis and Valkey are mostly single-threaded, scaling horizontally with clustering becomes an attractive option to distribute data and handle more queries per second. However, this approach often comes with its own significant costs.

Clustering Resource Cost

Redis Cluster allows you to split your data across multiple nodes, each managing a segment of the dataset and handling requests independently. While this distribution helps meet higher performance requirements, it also demands more resources, which directly impacts your infrastructure expenses. Each node requires its own allocation of memory and CPU, with over-provisioning, to manage its portion of the data, and the costs don't stop there. Clustering brings additional networking overhead as nodes communicate with each other to maintain consistency and handle replication. These extra resources can escalate rapidly as you add nodes, leading to a steeper increase in resource and network costs than anticipated.

Over-Provisioning Applied to All Cluster Nodes
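Extending the sketch from the previous section, the hypothetical snippet below shows how that per-node headroom multiplies across a cluster. The shard count, replica count, and headroom factor are all assumed values, not recommendations.

```python
# Illustrative cluster footprint: the over-provisioning factor applies
# to every shard and every replica, so total memory grows
# multiplicatively with cluster size.

def cluster_memory_gb(total_data_gb: float,
                      shards: int,
                      replicas_per_shard: int = 1,
                      headroom_factor: float = 1.5) -> float:
    """Total provisioned memory across all nodes in the cluster.

    A headroom_factor of 1.5 to 2.0 mirrors the "50% to 100% more
    memory" rule of thumb mentioned earlier.
    """
    data_per_shard = total_data_gb / shards
    per_node = data_per_shard * headroom_factor
    nodes = shards * (1 + replicas_per_shard)
    return per_node * nodes

# Example: 120GB of data spread over 6 shards, each with one replica.
print(cluster_memory_gb(120, shards=6))  # 360.0 -> 360GB provisioned
```

And this only counts memory; inter-node networking and per-node CPU come on top of it.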

Clustering Complexity Cost

With clustering, there's also the operational complexity that accompanies any distributed system. Even if your managed cloud service handles the infrastructure, the inherent complexity of clustering means cloud providers must invest in more engineering resources or automation tools to ensure stability and performance. This operational overhead is built into the cost structure, indirectly contributing to the price you pay. Additionally, implementing clustering prematurely compounds these expenses. Scaling horizontally with clustering before it's necessary may seem like a precaution, but the increased complexity and resource requirements often lead to costs that might otherwise be avoidable.


Network Cost

Network costs are another significant factor that can drive up your cloud bill, particularly if you're frequently accessing data from outside the same cloud environment. Most cloud providers charge for network usage, especially for egress (outbound data) or inter-region transfers. These charges are standard across all services, but they can quickly add up if your application relies heavily on data exchange.

To minimize these costs, it's crucial to ensure that your backend services are communicating directly with your in-memory or on-disk data stores within the same cloud environment. By keeping data traffic internal to the cloud network, you can avoid unnecessary outbound data transfer fees. This is especially important in high-traffic applications, where frequent or large data transactions between the application and the data store could significantly impact your total cloud bill. Taking steps to optimize network usage and consolidate services within the same cloud and the same region can make a noticeable difference in managing network-related expenses.

It is always recommended to connect your application to the data store over VPC peering, which not only reduces network costs but also improves stability, latency, and security.
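As a rough illustration of why traffic placement matters, the sketch below compares monthly transfer fees under a few hypothetical per-GB rates. The rates are placeholders loosely modeled on common cloud pricing, not quotes from any specific provider; always check your provider's actual price sheet.

```python
# Illustrative monthly data transfer cost by network path.
# Rates are placeholder assumptions, not any provider's actual pricing.
RATES_PER_GB = {
    "same VPC / peered, same region": 0.00,  # typically free or near-free
    "inter-region transfer": 0.02,           # assumed rate
    "internet egress": 0.09,                 # assumed rate
}

monthly_transfer_gb = 5_000  # e.g., a chatty, high-traffic application

for path, rate in RATES_PER_GB.items():
    print(f"{path}: ${monthly_transfer_gb * rate:,.2f}/month")
# same VPC: $0.00 | inter-region: $100.00 | internet egress: $450.00
```

The same 5TB of monthly traffic can cost nothing or hundreds of dollars depending purely on where the bytes travel.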


Pricing Model Misalignment & Confusion

The pricing model offered by your provider can have a major impact on your costs, particularly if it doesn't align well with your actual usage patterns. A common pitfall is selecting a plan that isn't suited to your needs, such as opting for a pay-as-you-go model when a committed-use plan would better match a steady or predictable workload. The opposite can also happen: if your traffic is highly variable, a fixed-cost plan might leave you paying for unused capacity.

Moreover, the complexity of some service providers' pricing structures can make it difficult to fully understand the costs you're incurring. Many factors are interwoven into the pricing, such as the number of commands executed, the instance types, and the topology setup. If you develop your application based solely on its functional and performance requirements, you may run into unanticipated charges as you exceed certain limits. Conversely, building your application around the pricing model instead of your actual business needs is equally problematic, as it can lead to compromises in service quality or operational flexibility. For example, how can your team accurately predict the precise number of read/write operations needed as traffic grows?

Additionally, while serverless pricing might seem appealing, it isn't always a practical solution. For applications with small, highly variable workloads, serverless pricing models can offer flexibility and cost efficiency. However, for workloads with consistent traffic or predictable growth, especially in-memory workloads where low-latency responses are crucial, serverless pricing often becomes less viable and can result in much higher overall costs. For these cases, a more predictable, instance-based model often aligns better with the requirements, as it supports stable performance without fluctuating charges.

You can easily perform a quick comparison between Dragonfly Cloud and ElastiCache Serverless for Valkey below, just to get a sense of how the pricing models differ. Obviously, instance-based solutions are not as flexible as serverless offerings in terms of elasticity, but they can be more cost-effective in most scenarios. Before choosing a serverless model, it's essential to ask yourself: "Is scaling down to zero what my application really needs?"

| Workload | Dragonfly Cloud Standard | ElastiCache Serverless for Valkey |
|----------|--------------------------|-----------------------------------|
| 3GB | $36/month | $181.44/month (excluding ECPU cost) |
| 100GB | $800/month | $6,048/month (excluding ECPU cost) |
| 200GB | $1,600/month | $12,096/month (excluding ECPU cost) |
| 400GB | $3,200/month | $24,192/month (excluding ECPU cost) |
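For reference, the ElastiCache Serverless column above can be reproduced from the data-stored rate listed in the appendix. The short sketch below does exactly that; the Dragonfly figures come straight from its plan pricing, so they are hard-coded rather than derived from a formula.

```python
# Recompute the storage-only comparison above from the appendix rate.
VALKEY_SERVERLESS_PER_GB_MONTH = 60.48  # $0.084/GB-hour * 720 hours

# Dragonfly Cloud Standard monthly prices (taken from the table above).
DRAGONFLY_MONTHLY = {3: 36, 100: 800, 200: 1_600, 400: 3_200}

for gb, dragonfly_cost in DRAGONFLY_MONTHLY.items():
    serverless_cost = gb * VALKEY_SERVERLESS_PER_GB_MONTH
    print(f"{gb:>3}GB: Dragonfly ${dragonfly_cost:,}/month vs. "
          f"serverless ${serverless_cost:,.2f}/month (before ECPU charges)")
```

Note that the serverless figures exclude ECPU charges entirely, so the actual gap widens with request volume.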

Conclusion

As my friend discovered, cost savings in infrastructure can feel almost "too good to be true," but the right choices can make a real difference. Some expenses, like network costs, are inevitable, while many others can be managed with efficient software. When your infrastructure software (databases, in-memory data stores, etc.) is designed to be both robust and resource-efficient, it naturally requires fewer servers and less operational overhead; the same is true for your application code as well.

Choosing solutions that support vertical scaling first, like Dragonfly, allows you to handle larger workloads without prematurely turning to complex and costly clustering. So, if high Redis/Valkey costs have you wondering what you might be missing, give Dragonfly Cloud a try. It could be the straightforward, cost-effective solution your current and future projects need.


Appendix

  • Dragonfly Cloud pricing is based on the AWS US East (N. Virginia) region as of November 2024, using the standard compute instance type.
  • ElastiCache Serverless for Valkey pricing is also based on the US East (N. Virginia) region as of November 2024, as shown below:
| ElastiCache Serverless for Valkey Pricing Dimension | Price |
|------------------------------------------------------|-------|
| Data Stored | $0.084/GB-hour = $60.48/GB-month |
| ElastiCache Processing Units (ECPUs) | $0.0023/million ECPUs |
