Redis Hashes: Examples and Pro Tips
Learn Redis hashes and related commands through our concise guide covering use cases, code snippets, and best practices.
August 25, 2025

What are Redis Hashes?
Redis Hashes are a data type in Redis that allows you to store a collection of field-value pairs. Each field in a Redis hash can store a string value, which can be a human-readable string or arbitrary serialized data, making it similar to a map or dictionary. Hashes are highly efficient for storing and managing related data that is often accessed together.
You can perform various operations on hashes, such as setting fields, getting fields, deleting fields, and more, all with very low latency. Redis Hashes are often used to represent objects or entities in a way that allows fast, granular updates and access.
What Is Redis HSET?
The `HSET` command sets one or more field-value pairs within a Redis hash. If a field already exists, its value is overwritten with the new one; if it does not exist, the field is created. This makes it useful for adding or modifying data within a hash without affecting other fields. Here’s an example:
$> HSET user:123 name "John Doe" email "john@example.com" age 30
This command sets the `name`, `email`, and `age` fields for the hash stored under the key `user:123`.
Other Common Redis Hash Commands with Examples
In addition to `HSET`, which we explained above, there are several other common Redis hash commands.
HGET (Hash Get)
The `HGET` command retrieves the value associated with a specific field in a Redis hash. If the field exists, its value is returned; if it does not, `nil` is returned.
Example:
$> HGET user:123 name
This command retrieves the value of the `name` field from the `user:123` hash.
HMGET (Hash Multiple Get)
The `HMGET` command retrieves multiple fields from a Redis hash at once. You provide the hash key followed by the field names you want, and Redis returns the values in the order requested, with `nil` for any field that does not exist.
Example:
$> HMGET user:123 name email age
This command retrieves the `name`, `email`, and `age` fields from the `user:123` hash.
HDEL (Hash Delete)
The `HDEL` command deletes one or more fields from a Redis hash. Any specified fields that exist are removed, and Redis returns the number of fields actually deleted.
Example:
$> HDEL user:123 email
This command deletes the `email` field from the `user:123` hash.
HGETALL (Hash Get All)
The `HGETALL` command retrieves all fields and values from a Redis hash, returned as a flat list of alternating field names and values.
Example:
$> HGETALL user:123
This command retrieves all the fields and their associated values from the `user:123` hash. The result might look like:
1) "name"
2) "John Doe"
3) "email"
4) "john@example.com"
5) "age"
6) "30"
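Depending on the client, you may receive this reply as a flat list of alternating fields and values and need to pair it back into a map yourself. As a small illustration in plain Python (not tied to any particular client library):

```python
# Flat HGETALL-style reply: alternating field names and values,
# as some low-level Redis clients return it.
reply = ["name", "John Doe", "email", "john@example.com", "age", "30"]

# Pair each field (even index) with its value (odd index).
user = dict(zip(reply[0::2], reply[1::2]))

print(user["email"])  # john@example.com
```

Note that every value, including `age`, comes back as a string; Redis hash values are strings regardless of what they represent.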
HEXISTS (Hash Exists)
The `HEXISTS` command checks whether a field exists within a Redis hash. It returns `1` if the field exists and `0` if it does not.
Example:
$> HEXISTS user:123 email
This command checks whether the `email` field exists in the `user:123` hash.
HINCRBY (Hash Increment by Integer)
The `HINCRBY` command increments the value of a field by a specified integer. The field must hold a value that parses as an integer; the operation adds the given number to the current value. A missing field is treated as `0` before the increment.
Example:
$> HINCRBY video:123 views 1
This command increments the `views` field of the `video:123` hash by `1`.
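To make the semantics concrete, here is a minimal Python sketch of how `HINCRBY` behaves, using a plain dict as a stand-in for the hash. This models the command's rules (missing field counts as 0, non-integer values are an error, the result is stored back as a string); it is not a Redis client.

```python
def hincrby(h: dict, field: str, amount: int) -> int:
    """Model HINCRBY against a plain dict standing in for a Redis hash."""
    current = h.get(field, "0")   # a missing field is treated as 0
    try:
        value = int(current)
    except ValueError:
        raise ValueError("hash value is not an integer")
    value += amount
    h[field] = str(value)         # Redis stores hash values as strings
    return value

video = {}
print(hincrby(video, "views", 1))  # 1
print(hincrby(video, "views", 1))  # 2
```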
HKEYS (Hash Keys)
The `HKEYS` command retrieves all the field names (keys) of a Redis hash, returned as a list.
Example:
$> HKEYS user:123
This command returns all the field names (e.g., `name`, `email`, `age`) in the `user:123` hash.
HLEN (Hash Length)
The `HLEN` command returns the number of fields in a Redis hash. It is useful for checking a hash's size; a missing key returns `0`.
Example:
$> HLEN user:123
This command returns the number of fields in the `user:123` hash. If there are three fields (`name`, `email`, and `age`), it returns `3`.
HVALS (Hash Values)
The `HVALS` command retrieves all the values of the fields in a Redis hash, returned as a list. Note that the ordering is not guaranteed: small hashes happen to preserve insertion order because of their internal encoding, but you should not rely on it.
Example:
$> HVALS user:123
This command retrieves all the values (e.g., `"John Doe"`, `"john@example.com"`, `"30"`) from the `user:123` hash.
HSETNX (Set a Field If Not Exist)
The `HSETNX` command sets a field in a Redis hash only if the field does not already exist; if it does, the hash is left unchanged. It returns `1` if the field was set and `0` otherwise. This is useful for initializing fields with default values.
Example:
$> HSETNX user:123 email "john.doe@example.com"
This command sets the `email` field only if it does not already exist in the `user:123` hash.
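The set-if-absent logic is easy to model. Here is a minimal Python sketch with a plain dict standing in for the hash (a model of the command's semantics, not a Redis client):

```python
def hsetnx(h: dict, field: str, value: str) -> int:
    """Model HSETNX: set the field only if absent; return 1 if set, 0 if not."""
    if field in h:
        return 0
    h[field] = value
    return 1

user = {"name": "John Doe"}
print(hsetnx(user, "email", "john.doe@example.com"))  # 1 (field was created)
print(hsetnx(user, "email", "other@example.com"))     # 0 (left unchanged)
print(user["email"])                                  # john.doe@example.com
```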
HSETEX (Set a Field with Expiry)
The `HSETEX` command (available in Redis 8.0 and later) sets one or more fields in a Redis hash and optionally sets their expiry at the same time. This is useful for setting hash field values while controlling their TTLs (instead of the TTL of the whole hash key). The command applies the data and TTL changes atomically.
Example:
$> HSETEX user:123 EX 3600 FIELDS 1 token "usertokenXXX"
This command sets the `token` field in the `user:123` hash and expires it after 3600 seconds.
HSCAN (Incrementally Iterate over Fields and Values)
The `HSCAN` command allows you to incrementally iterate through the fields and values of a Redis hash. This is useful when dealing with large hashes, as it avoids memory overhead by returning a subset of the data at a time.
Example:
$> HSCAN user:123 0 MATCH * COUNT 10
This command iterates over the `user:123` hash, returning fields and values a page at a time. The `0` is the initial cursor, `MATCH *` matches all field names, and `COUNT 10` is a hint asking Redis to return roughly 10 fields per call (it is not a strict limit). Each reply includes a new cursor; pass it to the next call and repeat until the cursor comes back as `0`.
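The client-side loop matters more than any single call: keep feeding the returned cursor back into `HSCAN` until it returns to 0. Here is a minimal Python sketch of that loop, with a dict-backed stand-in for the server side. (Real `HSCAN` uses an opaque cursor rather than a simple offset, and `COUNT` is only a hint; the client loop, however, is identical.)

```python
def hscan_stub(h: dict, cursor: int, count: int = 10):
    """Dict-backed stand-in for HSCAN: returns (next_cursor, page of items).

    Real Redis uses an opaque cursor; here we use a plain offset so the
    example is self-contained.
    """
    items = list(h.items())
    page = items[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(items):
        next_cursor = 0               # 0 signals that iteration is done
    return next_cursor, page

big_hash = {f"field:{i}": str(i) for i in range(25)}

# The canonical HSCAN loop: start at cursor 0, stop when it returns to 0.
cursor, seen = 0, {}
while True:
    cursor, page = hscan_stub(big_hash, cursor, count=10)
    seen.update(page)
    if cursor == 0:
        break

print(len(seen))  # 25 -- every field visited, 10 at a time
```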
HEXPIRE (Set the Expiration Time)
The `HEXPIRE` command (available in Redis 7.4 and later) sets an expiration time, in seconds, on one or more fields of a given hash key. You must specify at least one field; fields are automatically deleted from the hash when their TTLs expire. This is useful for data that should only exist temporarily. The similar `HEXPIREAT` command sets the expiration as an absolute Unix timestamp.
Example:
$> HEXPIRE user:123 3600 FIELDS 1 token
This command sets the expiration time of the `token` field of the `user:123` hash to 3600 seconds (1 hour). After 1 hour, the field is automatically deleted from Redis. Additional options (`NX`/`XX`/`GT`/`LT`) control whether the TTL is applied depending on whether the field already has one.
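Per-field expiry can be pictured as a deadline stored next to each field. The following is a toy Python model of that idea, with lazy deletion on read (real Redis also expires fields actively, and `HEXPIRE`'s actual reply is a per-field status array; this sketch simplifies both):

```python
import time

class HashWithFieldTTL:
    """Toy model of per-field expiry (HEXPIRE-style semantics), not a client."""

    def __init__(self):
        self.data = {}      # field -> value
        self.expiry = {}    # field -> absolute deadline (unix seconds)

    def hexpire(self, field: str, seconds: int) -> int:
        if field not in self.data:
            return 0                          # missing fields are reported
        self.expiry[field] = time.time() + seconds
        return 1

    def hget(self, field: str):
        deadline = self.expiry.get(field)
        if deadline is not None and time.time() >= deadline:
            # Lazily drop the field once its TTL has passed.
            del self.data[field]
            del self.expiry[field]
        return self.data.get(field)

h = HashWithFieldTTL()
h.data["token"] = "usertokenXXX"
h.hexpire("token", 3600)
print(h.hget("token"))  # usertokenXXX (still within its TTL)
```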
Use Cases for Redis Hashes
User Profiles
Redis hashes are ideal for storing user profile data, where each user has a set of attributes (fields) like name, email, address, etc. Instead of storing all the user data as a single string or object, you can store each attribute in a separate field within the hash. This allows fast access and modification of specific user attributes without needing to reload the entire user profile.
For example, a hash for a user profile could look like this:
$> HSET user:123 name "John Doe" email "john@example.com" age 30
This structure allows you to efficiently query or update individual fields, such as changing the user’s email or retrieving the age.
Product Catalogs
Redis hashes can be used to manage product catalogs, where each product has several attributes, such as price, availability, category, and description. Storing these attributes in hashes allows quick updates and access to individual product details. This is particularly useful in e-commerce applications where real-time updates to product information are essential.
For instance, a product catalog could be stored like this:
$> HSET product:1001 name "Smartphone" price 599.99 stock 150
This method allows you to efficiently update specific attributes like stock levels or pricing without needing to update the entire product entry.
Counters and Metrics
Redis hashes are also useful for tracking counters and metrics across different dimensions. Each field in a hash can represent a different metric or counter, making it easy to track multiple values for the same key. For example, tracking views, likes, or other activity counts can be managed using Redis hashes.
A common use case could be:
$> HINCRBY video:123 views 1
This allows for quick incremental updates and retrievals of metrics for different activities, such as user engagement on a video.
Session Management
Hashes are great for managing user sessions. Each session can be stored as a hash, where fields represent different pieces of session data such as user ID, login timestamp, and session timeout. This allows fast access to session information and efficient session updates without needing to store large objects.
An example of a session hash could be:
$> HSET session:d4ecb476-1c48-48b0-9586-cd7dd4a921c7 user_id 42 login_time "2025-07-10T08:00:00" expires_in 3600
This makes it easy to track the status of user sessions in a scalable way.
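On the application side, validating such a session typically means reading the hash and checking the expiry window. A hedged sketch in plain Python, with a dict standing in for the `HGETALL` result above (`session_valid` is a hypothetical helper, not part of any Redis client):

```python
from datetime import datetime, timedelta

# What an HGETALL on the session key might return (all values are strings).
session = {
    "user_id": "42",
    "login_time": "2025-07-10T08:00:00",
    "expires_in": "3600",
}

def session_valid(sess: dict, now: datetime) -> bool:
    """Check whether a session hash is still within its expiry window."""
    login = datetime.fromisoformat(sess["login_time"])
    return now < login + timedelta(seconds=int(sess["expires_in"]))

login = datetime.fromisoformat(session["login_time"])
print(session_valid(session, login + timedelta(minutes=30)))  # True
print(session_valid(session, login + timedelta(hours=2)))     # False
```

In practice you would also put a TTL on the session key itself (or its fields) so Redis discards expired sessions without any application logic.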
Limitations of Redis Hashes
While Redis hashes offer many advantages, they also have limitations that should be considered when deciding whether to use them in your application.
- Memory Usage: Redis hashes are memory-efficient, but each field-value pair still consumes memory. A hash with a large number of fields can grow significantly, consuming more memory than expected. Although Redis uses a highly optimized memory model, very large hashes can lead to performance issues, especially when commands like `HGETALL`, `HKEYS`, or `HVALS` are used.
- Atomicity: Operations on fields within a single hash can be atomic when you pick the proper command (e.g., `HSET` to create or update multiple fields at once), but Redis does not support atomic operations across multiple hashes. If you need to update multiple hashes and ensure the changes are applied together, you’ll need Redis transactions (`MULTI`/`EXEC`) or Lua scripting, which may introduce additional complexity.
- No Nested Hashes: Redis does not support nested hashes directly, meaning you cannot store one hash inside another. While you can store JSON strings or serialized objects within a hash, you lose the ability to manipulate the inner structure without deserializing it first. This limitation can complicate use cases where nested data structures are required.
- Field Size: The maximum size for a field in a Redis hash is limited to 512 MB. While this is typically sufficient for most use cases, very large individual fields might still cause issues, especially if the hash contains numerous fields.
- Limited Querying Capabilities: Redis hashes support basic operations like field retrieval, setting, and deletion, but they lack advanced querying features like filtering, sorting, or complex searches. If you need to perform complex queries on hash data, you may need to combine Redis with a search module like RediSearch or use another database for more advanced querying capabilities.
- Overhead with Frequent Updates: When hashes are frequently updated, Redis may need to reallocate memory and adjust its internal data structures, which can introduce some overhead. For applications with high write frequencies, this might impact performance, especially if the hashes grow large over time.
- No Support for Data Versioning: Redis hashes do not provide built-in support for versioning of data. If you need to maintain historical versions of hash data, you must implement this yourself, which could require managing multiple versions of the hash manually.
Pro Tips for Successfully Managing Redis Hashes
1. Use Hashes to Group Related Data
Redis hashes are designed to store related data together, which can greatly improve the organization of your Redis data structure. By grouping fields that belong to the same entity under one hash, you make the data more manageable. For instance, in a social media app, a user’s data such as username, email, and preferences could all be stored under one key (e.g., `user:123`). This approach eliminates the need for separate keys for each attribute, making data retrieval and updates more efficient.
Additionally, since Redis hashes provide atomic operations for fields, you can independently modify individual attributes without affecting others, which is an advantage when dealing with frequently updated data, like user preferences or settings. The key takeaway is that grouping data logically in a hash improves performance, reduces the number of keys, and simplifies data access.
2. Optimize Memory Usage
Even though Redis hashes are memory-efficient, they still require careful monitoring when handling large datasets. Each field-value pair in a hash consumes memory, and excessive fields can lead to increased memory usage. To optimize memory, consider limiting the number of fields stored in a single hash to avoid unnecessary overhead. You can also reduce memory usage by storing smaller data types in the fields, such as using integers or compressed strings when possible.
Additionally, Redis’ internal storage mechanism uses a more compact encoding for small hashes, but as the hash grows, Redis may switch to a more memory-intensive encoding. By keeping the hash size manageable and trimming outdated or unnecessary data, you can mitigate the impact of memory consumption. Implementing regular data cleanup or expiration strategies can also help keep memory usage in check.
3. Avoid Using HGETALL on Large Hashes
The `HGETALL` command retrieves all fields and values from a Redis hash, which can be inefficient when the hash is large: Redis must load and serialize every field into the reply, consuming significant memory and potentially degrading performance. (The same applies to `HKEYS` and `HVALS`.) If you’re only interested in specific fields, it’s better to use commands like `HGET` or `HMGET`, which fetch only the required data.
For large hashes where you need to iterate over all fields, the `HSCAN` command is a better alternative. `HSCAN` returns the data incrementally, reducing memory load by fetching a subset of the fields per call, which is especially useful for paging through large datasets. This incremental approach ensures you don’t overload Redis, or your client, with a large amount of data at once.
4. Leverage Atomic Operations
Atomic operations in Redis ensure that when you modify a field within a hash, the operation completes without interference from other Redis commands. This is crucial when you need consistency, such as when incrementing a counter or updating a user profile. Commands like `HINCRBY`, `HSET`, `HSETNX`, and `HSETEX` update individual fields atomically, meaning no other command can interleave with them during execution.
For example, if you’re updating a product’s stock count, using `HINCRBY` ensures that even when multiple clients make updates at the same time, each increment is applied in full and no update is lost. If you need to perform multiple operations on a hash at once while maintaining atomicity, Redis transactions (`MULTI`/`EXEC`) or Lua scripting can be used; both execute a group of commands as a single atomic unit, preventing partial updates.
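To see why atomicity matters, here is a small Python illustration of the lost-update problem, with dicts standing in for the hash. A non-atomic read-modify-write (a `HGET` followed by a `HSET`) can silently drop an update; a single atomic increment, which is what `HINCRBY` performs server-side, cannot:

```python
stock = {"count": "10"}

# Non-atomic read-modify-write: two clients read the same value...
client_a = int(stock["count"])
client_b = int(stock["count"])
# ...then both write back their result, and one decrement is silently lost.
stock["count"] = str(client_a - 1)
stock["count"] = str(client_b - 1)
print(stock["count"])  # 9 -- should be 8: a lost update

# Atomic decrement (modeling HINCRBY with a negative amount): each update
# reads and writes in one indivisible step, so nothing is lost.
stock = {"count": "10"}
for _ in range(2):
    stock["count"] = str(int(stock["count"]) - 1)
print(stock["count"])  # 8
```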
5. Implement Proper Expiration Strategies
Redis supports setting expiration times on keys, so you can expire an entire hash by putting a TTL on the key that holds it (and, as of Redis 7.4, on individual fields as well). This is particularly useful when dealing with temporary data, like sessions or cache entries. For example, a session hash can be configured to expire after a certain amount of time, ensuring that outdated sessions are automatically removed without manual intervention.
Redis provides the `EXPIRE` command for key-based expiration and the `HEXPIRE` command for field-level expiration. Implementing expiration ensures that your Redis database doesn’t grow excessively over time, consuming unnecessary memory. By setting expiration times based on each piece of data’s lifecycle, you keep only relevant data in memory, improving performance and preventing stale or obsolete data from lingering.
6. Avoid Storing Large Objects in Hashes
Redis is optimized for handling smaller, discrete pieces of data, so storing large objects (such as large JSON payloads or complex serialized objects) within hashes can introduce memory inefficiencies and slow down performance. While Redis hashes can store large values, doing so may lead to performance issues, particularly when you need to retrieve many fields at once.
Dragonfly: The Next-Generation In-Memory Data Store
Dragonfly is a modern, source-available, multi-threaded, Redis-compatible in-memory data store that stands out by delivering unmatched performance and efficiency. Designed from the ground up to disrupt existing legacy technologies, Dragonfly redefines what an in-memory data store can achieve. With Dragonfly, you get the familiar API of Redis without the performance bottlenecks, making it an essential tool for modern cloud architectures aiming for peak performance and cost savings. Migrating from Redis to Dragonfly requires zero or minimal code changes.
Key Advancements of Dragonfly
- Multi-Threaded Architecture: Efficiently leverages modern multi-core processors to maximize throughput and minimize latency.
- Unmatched Performance: Achieves 25x better performance than Redis, ensuring your applications run with extremely high throughput and consistent latency.
- Cost Efficiency: Reduces hardware and operational costs without sacrificing performance, making it an ideal choice for budget-conscious enterprises.
- Redis API Compatibility: Offers seamless integration with existing applications and frameworks running on Redis while overcoming its limitations.
- Innovative Design: Built to scale vertically and horizontally, providing a robust solution for rapidly growing data needs.
Dragonfly Cloud is a fully managed service from the creators of Dragonfly, handling all operations and delivering effortless scaling so you can focus on what matters without worrying about in-memory data infrastructure anymore.