Redis Hot Key & Big Key: Detection, Risks, and Solutions

Hot keys and big keys are the two most common performance killers in production Redis. They cause uneven load distribution, memory spikes, and can even crash your Redis instance. This guide shows you how to find them and fix them.

What Is a Hot Key?

A hot key is a key that receives a disproportionately high number of requests. In a Redis Cluster, all requests for a key go to the same shard. If one key gets 100x more traffic than others, that shard becomes a bottleneck.

Common Hot Key Scenarios

  • A viral product page cached in Redis.
  • A global configuration key read by every request.
  • A rate limiter key for a popular API endpoint.
  • A session key for a bot making thousands of requests.

What Is a Big Key?

A big key is a key whose value is unusually large — either a huge string or a collection (Hash, List, Set, ZSET) with millions of elements.

Common Big Key Scenarios

  • A Hash with 1 million fields (e.g., storing all users in one Hash).
  • A List with 10 million elements (e.g., an unbounded log).
  • A Set with 500K members (e.g., all followers of a celebrity).
  • A String value of 100MB (e.g., a serialized report).

Detecting Hot Keys

Method 1: redis-cli --hotkeys

redis-cli --hotkeys

This uses the access frequency counter maintained by the LFU (Least Frequently Used) eviction policy, so it requires maxmemory-policy to be set to an LFU variant (allkeys-lfu or volatile-lfu).

Method 2: MONITOR Command

MONITOR

Captures all commands in real-time. Parse the output to find frequently accessed keys.

Warning: MONITOR carries significant performance overhead and can sharply reduce Redis throughput. If you must run it in production, keep the session brief.
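In practice you pipe MONITOR's output through a small aggregator. The sketch below is plain Python (the rest of this guide uses raw Redis commands); the regex assumes MONITOR's documented line format, and count_keys only handles the first argument of each command, which is the key for most single-key commands:

```python
import re
from collections import Counter

# MONITOR lines look like:
#   1700000000.123456 [0 127.0.0.1:51234] "GET" "config:global"
# This regex pulls out the command name and its first argument (the key
# for most single-key commands); multi-key commands need extra handling.
MONITOR_LINE = re.compile(r'^\d+\.\d+ \[\d+ \S+\] "(\w+)"(?: "([^"]*)")?')

def count_keys(lines):
    """Tally how often each key appears in captured MONITOR output."""
    counts = Counter()
    for line in lines:
        m = MONITOR_LINE.match(line)
        if m and m.group(2) is not None:
            counts[m.group(2)] += 1
    return counts

sample = [
    '1700000000.000001 [0 127.0.0.1:51234] "GET" "config:global"',
    '1700000000.000002 [0 127.0.0.1:51234] "GET" "config:global"',
    '1700000000.000003 [0 127.0.0.1:51235] "GET" "user:42"',
]
print(count_keys(sample).most_common(1))  # → [('config:global', 2)]
```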

Method 3: OBJECT FREQ

If LFU policy is enabled:

SET popular:item "data"
OBJECT FREQ popular:item

Returns the access frequency counter for the key.

Method 4: Application-Level Tracking

Log key access patterns in your application and aggregate them. This is the most reliable method for identifying hot keys.
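As a sketch of this approach, the Python class below (hypothetical names, not tied to any Redis client library) counts key accesses in-process; a background job would periodically flush the top offenders to your metrics system:

```python
import threading
from collections import Counter

class KeyAccessTracker:
    """Thread-safe counter for key accesses. flush() returns the current
    top keys and resets the tallies for the next reporting interval."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counts = Counter()

    def record(self, key):
        with self._lock:
            self._counts[key] += 1

    def flush(self, top_n=10):
        with self._lock:
            top = self._counts.most_common(top_n)
            self._counts.clear()
        return top

tracker = KeyAccessTracker()
for _ in range(100):
    tracker.record("product:2001")   # call record() next to every Redis read
tracker.record("user:7")
print(tracker.flush(1))  # → [('product:2001', 100)]
```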

Detecting Big Keys

Method 1: redis-cli --bigkeys

redis-cli --bigkeys

Scans the entire keyspace with SCAN, so it is safe to run in production, and reports the biggest key of each type (strings by byte size, collections by element count).

Method 2: MEMORY USAGE

SET small:key "hello"
MEMORY USAGE small:key

HSET big:hash f1 "v1" f2 "v2" f3 "v3" f4 "v4" f5 "v5"
MEMORY USAGE big:hash

Returns an estimate of the key's memory consumption in bytes. For collections it samples a subset of elements by default; append SAMPLES 0 to count every element for an exact figure.

Method 3: DEBUG OBJECT

SET mykey "hello world"
DEBUG OBJECT mykey

Shows the encoding, serialized length, and other internal details. Note that serializedlength is the RDB-serialized size, not the in-memory footprint, and DEBUG subcommands are intended for debugging only; prefer MEMORY USAGE in production.

Method 4: Collection Size Commands

HSET user:hash f1 "v1" f2 "v2" f3 "v3"
HLEN user:hash

RPUSH mylist "a" "b" "c" "d" "e"
LLEN mylist

SADD myset "a" "b" "c"
SCARD myset

ZADD myzset 1 "a" 2 "b" 3 "c"
ZCARD myzset

Regularly check collection sizes. Alert when they exceed thresholds.
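The periodic check can be a small script. This Python sketch assumes you have already gathered (key, type, element_count) tuples, e.g. by running the commands above over a SCAN of the keyspace; the thresholds are placeholders to tune for your workload:

```python
# Hypothetical per-type element-count thresholds; tune for your workload.
THRESHOLDS = {"hash": 10_000, "list": 10_000, "set": 10_000, "zset": 10_000}

def check_sizes(sizes):
    """sizes: iterable of (key, type, element_count) tuples, e.g. gathered
    by running HLEN/LLEN/SCARD/ZCARD over a SCAN of the keyspace.
    Returns the keys that exceed their type's threshold."""
    return [
        (key, count)
        for key, kind, count in sizes
        if count > THRESHOLDS.get(kind, float("inf"))
    ]

observed = [
    ("user:bucket:0", "hash", 950),
    ("event:log", "list", 2_500_000),   # unbounded log: a big key
]
print(check_sizes(observed))  # → [('event:log', 2500000)]
```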

Risks

Hot Key Risks

  • Single shard overload: one node's CPU runs at 100% while the others sit idle.
  • Increased latency: every request for the hot key queues up on the same node.
  • Cluster imbalance: memory and CPU are distributed unevenly across the cluster.
  • Cascading failure: the overloaded shard degrades every other key it hosts.

Big Key Risks

  • Slow operations: HGETALL on a Hash with 1M fields blocks Redis for seconds.
  • Memory spikes: sudden allocation of large memory blocks.
  • Slow DEL: deleting a big key blocks Redis (use UNLINK instead).
  • Replication lag: changes to a big key generate large replication payloads.
  • Backup issues: RDB snapshots and AOF rewrites slow down.

Solutions for Hot Keys

Solution 1: Local Cache (L1)

Cache the hot key in application memory. Reduces Redis load dramatically:

SET config:global '{"feature_flags":{"dark_mode":true}}' EX 60
GET config:global

Each application instance then caches the value locally for about 5 seconds (while the key itself keeps a 60-second TTL in Redis), so 1,000 requests per second become one Redis read every 5 seconds per instance.
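A minimal L1 cache is a dict with expiry timestamps. The Python sketch below is illustrative; the fetch callable stands in for a real Redis GET:

```python
import time

class LocalCache:
    """Minimal per-process TTL cache (L1) in front of Redis.
    Assumes a fetch() callable that performs the actual Redis GET."""
    def __init__(self, ttl_seconds=5.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key, fetch):
        value, expires_at = self._store.get(key, (None, 0.0))
        if time.monotonic() < expires_at:
            return value          # served from local memory, no Redis hit
        value = fetch(key)        # e.g. redis_client.get(key)
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def fake_fetch(key):              # stand-in for the real Redis read
    calls.append(key)
    return '{"feature_flags":{"dark_mode":true}}'

cache = LocalCache(ttl_seconds=5.0)
for _ in range(1000):
    cache.get("config:global", fake_fetch)
print(len(calls))  # → 1   (the other 999 reads were served locally)
```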

Solution 2: Read Replicas

Route read traffic to replicas. The primary handles writes only.

Solution 3: Key Splitting

Split one hot key into N sub-keys. Distribute reads across them:

SET hotkey:product:2001:0 '{"name":"Widget","price":9.99}'
SET hotkey:product:2001:1 '{"name":"Widget","price":9.99}'
SET hotkey:product:2001:2 '{"name":"Widget","price":9.99}'

The application reads from hotkey:product:2001:{hash(request_id) % 3}; writes must update all three copies. In a cluster, these sub-keys hash to different slots and can land on different shards.
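The sub-key choice can be sketched in Python. zlib.crc32 is used here instead of the built-in hash() so the mapping is stable across processes (Python randomizes string hashing per process):

```python
import zlib

N_REPLICAS = 3  # number of sub-key copies written at cache-fill time

def sub_key(base_key, request_id, n=N_REPLICAS):
    """Deterministically pick one of the n copies for this request.
    crc32 gives the same answer in every process and after restarts."""
    return f"{base_key}:{zlib.crc32(request_id.encode()) % n}"

# Every request lands on one of hotkey:product:2001:0/1/2.
print(sub_key("hotkey:product:2001", "req-8821"))
```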

Solutions for Big Keys

Solution 1: Split into Smaller Keys

Instead of one Hash with 1M fields, split by range:

HSET user:bucket:0 "user:0" "data" "user:1" "data"
HSET user:bucket:1 "user:1000" "data" "user:1001" "data"
HLEN user:bucket:0

Bucket by user_id // 1000 (integer division): users 0-999 land in bucket 0, users 1000-1999 in bucket 1, and so on.
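The bucket computation is one line; this Python sketch shows the mapping the example above assumes:

```python
BUCKET_SIZE = 1000  # fields per Hash; keeps each bucket comfortably small

def bucket_key(user_id):
    """Map a user id to its Hash bucket: ids 0-999 -> bucket 0, etc."""
    return f"user:bucket:{user_id // BUCKET_SIZE}"

print(bucket_key(0))     # → user:bucket:0
print(bucket_key(1001))  # → user:bucket:1
```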

Solution 2: Use UNLINK Instead of DEL

SET bigkey "some large value"
UNLINK bigkey

UNLINK is non-blocking — it removes the key from the keyspace immediately and frees memory in a background thread. DEL blocks until memory is freed.

Solution 3: Lazy Deletion with SCAN

For big collections, delete in batches:

HSET bighash f1 "v1" f2 "v2" f3 "v3" f4 "v4" f5 "v5"
HSCAN bighash 0 COUNT 2
HDEL bighash f1 f2
HLEN bighash

Scan and delete in small batches to avoid blocking.
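The batching loop generalizes to any big collection. This Python sketch keeps the Redis calls abstract (delete_batch stands in for a pipelined HDEL) and just shows the chunking logic:

```python
from itertools import islice

def delete_in_batches(hash_fields, delete_batch, batch_size=100):
    """Drain a big Hash's fields in small HDEL-sized batches.
    hash_fields: iterable of field names (in practice, yielded by HSCAN);
    delete_batch: callable that issues one HDEL per batch (hypothetical)."""
    it = iter(hash_fields)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        delete_batch(batch)   # e.g. r.hdel("bighash", *batch)

deleted = []
delete_in_batches((f"f{i}" for i in range(250)), deleted.append, batch_size=100)
print([len(b) for b in deleted])  # → [100, 100, 50]
```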

Solution 4: Set Limits

Prevent big keys from forming in the first place:

  • Cap List length with LTRIM after each push.
  • Cap ZSET size with ZREMRANGEBYRANK.
  • Cap Stream length with the MAXLEN option of XADD (or trim with XTRIM).
RPUSH bounded:list "a" "b" "c" "d" "e" "f" "g" "h"
LTRIM bounded:list -5 -1
LRANGE bounded:list 0 -1
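As a mental model for the RPUSH + LTRIM pattern above, Python's collections.deque with maxlen behaves the same way: once the cap is reached, the oldest entries are dropped automatically:

```python
from collections import deque

# Pure-Python analogue of RPUSH followed by LTRIM -5 -1: a deque with
# maxlen=5 silently evicts the oldest entries as new ones arrive.
bounded = deque(maxlen=5)
for item in "abcdefgh":
    bounded.append(item)
print(list(bounded))  # → ['d', 'e', 'f', 'g', 'h']
```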

Monitoring Checklist

  1. Run redis-cli --bigkeys weekly.
  2. Set up alerts for collection sizes exceeding thresholds.
  3. Monitor per-key memory with MEMORY USAGE for suspected big keys.
  4. Track command latency with SLOWLOG GET.
  5. Use INFO commandstats to identify hot command patterns.
SLOWLOG GET 5
INFO commandstats

Try It in the Editor

Head to the Redis Online Editor and practice detection commands:

HSET user:profile f1 "v1" f2 "v2" f3 "v3" f4 "v4" f5 "v5"
HLEN user:profile
MEMORY USAGE user:profile
OBJECT ENCODING user:profile

RPUSH mylist "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"
LLEN mylist
LTRIM mylist -5 -1
LRANGE mylist 0 -1

SET bigstring "hello world this is a test"
UNLINK bigstring
EXISTS bigstring

Practice the detection and cleanup patterns. In production, these commands are your first line of defense against performance degradation.