Hash Object Modeling

Redis Hash Object Storage

Model complex business objects with Redis Hashes. Learn efficient data modeling for user profiles, product catalogs, and configuration management.

Real-World Business Scenario

A social platform stores user profiles with 15+ fields (name, bio, avatar, settings, stats). Using a single JSON string means every read/write deserializes the entire object. With Redis Hashes, each field is independently accessible — update a user's online status without touching their profile data. This reduces bandwidth, simplifies partial updates, and improves memory efficiency through Redis's ziplist encoding for small hashes.
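The difference between the two approaches can be sketched in a few lines. This is an illustrative Python comparison (the field values are made up; the equivalent Redis command is shown in a comment), contrasting a JSON-blob update with a field-level hash update:

```python
import json

# JSON-blob approach: updating one field means deserializing and
# re-serializing the entire profile object.
blob = json.dumps({"name": "Alice", "bio": "Software Engineer", "status": "offline"})
obj = json.loads(blob)          # full read of every field
obj["status"] = "online"
blob = json.dumps(obj)          # full write of every field

# Hash approach: the same update touches exactly one field.
# Equivalent Redis command: HSET user:42 status online
profile = {"name": "Alice", "bio": "Software Engineer", "status": "offline"}
profile["status"] = "online"    # field-level write, nothing else moves

print(json.loads(blob)["status"], profile["status"])  # online online
```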

Architecture Diagram

Application Layer
↓ HSET / HGET / HINCRBY
┌──────────────────────────────────────┐
│ Redis Hash: user:{id} │
│ ┌──────────────────────────────────┐ │
│ │ name → "Alice" │ │
│ │ email → "alice@example.com" │ │
│ │ avatar → "https://cdn/a.jpg" │ │
│ │ bio → "Software Engineer" │ │
│ │ followers → 1024 │ │
│ │ following → 256 │ │
│ │ status → "online" │ │
│ │ lastLogin → 1737000000 │ │
│ └──────────────────────────────────┘ │
└──────────────────────────────────────┘
↓ Sync periodically
PostgreSQL (Persistent Storage)

Key Commands Explained
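The core commands are HSET (write one or more fields), HGET (read one field), HINCRBY (atomically increment a numeric field), and HMGET (read several fields in one call). The sketch below is an in-memory Python model of their semantics, not the real Redis implementation; the class and the user data are illustrative:

```python
# In-memory sketch of Redis hash command semantics: all fields live
# under one key, and each field is readable and writable independently.
class MiniHash:
    def __init__(self):
        self.fields = {}          # field name -> string value

    def hset(self, mapping):      # HSET key field value [field value ...]
        self.fields.update({k: str(v) for k, v in mapping.items()})
        return len(mapping)

    def hget(self, field):        # HGET key field -> value or None
        return self.fields.get(field)

    def hincrby(self, field, amount=1):  # HINCRBY key field increment
        value = int(self.fields.get(field, "0")) + amount
        self.fields[field] = str(value)
        return value

    def hmget(self, *fields):     # HMGET key field [field ...]
        return [self.fields.get(f) for f in fields]

user = MiniHash()                 # stands in for the key user:42
user.hset({"name": "Alice", "status": "online", "followers": 1024})
user.hset({"status": "away"})     # partial update: only one field touched
print(user.hget("status"))        # away
print(user.hincrby("followers"))  # 1025
print(user.hmget("name", "followers"))  # ['Alice', '1025']
```

Note that HINCRBY returns the post-increment value, so counters never need a separate read.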

Performance Analysis

HSET/HGET: O(1) per field. Constant time regardless of hash size.
Ziplist encoding: Hashes with ≤128 fields and values ≤64 bytes (the defaults for hash-max-ziplist-entries and hash-max-ziplist-value; Redis 7.0 renamed the encoding to listpack) are stored in a single compact allocation — up to 10x more memory efficient than individual keys.
Memory comparison: 1M users × 8 fields as Hash ≈ 1.2GB vs 1M users × 8 separate STRING keys ≈ 4.8GB. 4x savings.
HINCRBY: Atomic O(1) increment. No read-modify-write cycle is needed, so concurrent updates never lose writes; a single instance can typically sustain on the order of 100K counter increments per second.
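The memory comparison above works out as follows. The per-entry byte costs here are rough illustrative assumptions consistent with the figures quoted, not measured values:

```python
users = 1_000_000
fields = 8

# Assumed per-entry costs for illustration: a ziplist-encoded hash
# packs all 8 fields into one compact allocation, while separate
# STRING keys each pay full key-object and dict-entry overhead.
bytes_per_hash = 1200                 # ~1.2 KB per user as one small hash
bytes_per_string_key = 600            # ~600 B per individual STRING key

hash_total = users * bytes_per_hash                    # one hash per user
string_total = users * fields * bytes_per_string_key   # 8 keys per user

print(hash_total / 1e9, string_total / 1e9)  # 1.2 4.8  (GB)
print(string_total / hash_total)             # 4.0  (savings factor)
```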

Common Pitfalls

Exceeding ziplist thresholds: Hashes with >128 fields or values >64 bytes switch to hashtable encoding, using significantly more memory. Keep fields compact.
Using HGETALL on large hashes: HGETALL is O(N) in the number of fields, so on a hash with 1000+ fields it blocks the single-threaded server while the full reply is serialized. Use HSCAN to iterate large hashes incrementally instead.
Nested objects: Redis Hashes are flat (string → string). Don't try to nest hashes. Serialize nested data as JSON in a single field, or use separate keys.
No TTL on individual fields: Redis TTL applies to the entire key, not individual fields. If you need per-field expiration, use separate keys or manage cleanup in application code.
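The nested-objects pitfall is usually handled by serializing the nested part to JSON inside one field, as the list suggests. A minimal sketch (the profile data is illustrative; the flat dict stands in for the hash's field/value pairs):

```python
import json

# Hash fields are flat string -> string pairs, so a nested settings
# object is serialized to JSON and stored under a single field.
profile = {
    "name": "Alice",
    "settings": {"theme": "dark", "notifications": {"email": True}},
}

flat = {
    "name": profile["name"],
    "settings": json.dumps(profile["settings"]),  # one JSON-blob field
}

# Reading it back: deserialize only when the nested part is needed;
# flat fields like "name" stay directly accessible via HGET.
settings = json.loads(flat["settings"])
print(settings["notifications"]["email"])  # True
```

The trade-off: the JSON field loses per-subfield access, so reserve it for data that is always read and written as a unit.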

Best Practices

Keep hash fields under 128 and values under 64 bytes to leverage ziplist encoding.
Use HINCRBY for counters instead of GET → increment → SET to avoid race conditions.
Prefer HMGET over multiple HGET calls to reduce network round trips.
Use consistent naming: user:{id}, product:{sku}, config:{service}.
Separate hot fields (status, lastSeen) from cold fields (bio, settings) into different hashes if access patterns differ significantly.
Use HSCAN instead of HGETALL for hashes that might grow beyond 100 fields.
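One way to see the HMGET recommendation above is to count round trips: each command sent to the server costs one network exchange regardless of how many fields it carries. A toy sketch with an illustrative round-trip counter (not a real client):

```python
# Toy client that counts network round trips; each command costs
# exactly one exchange no matter how many fields it asks for.
class CountingClient:
    def __init__(self, data):
        self.data = data
        self.round_trips = 0

    def hget(self, field):             # one field, one round trip
        self.round_trips += 1
        return self.data.get(field)

    def hmget(self, *fields):          # many fields, still one round trip
        self.round_trips += 1
        return [self.data.get(f) for f in fields]

client = CountingClient({"name": "Alice", "status": "online", "followers": "1024"})

values = [client.hget(f) for f in ("name", "status", "followers")]
print(client.round_trips)              # 3  (one per HGET)

client.round_trips = 0
values = client.hmget("name", "status", "followers")
print(client.round_trips)              # 1  (single HMGET)
```

On a network with even 1 ms latency, collapsing N fetches into one HMGET saves roughly (N-1) ms per object read.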

Runnable Demo


Try these commands in our online Redis editor