Why Understanding Redis Data Structures Matters
Redis’s Core Role in Modern Application Architecture
Redis (Remote Dictionary Server) is not just a caching system but a multi-purpose in-memory database. A deep understanding of its data structures is crucial for three key reasons:
1. The Key to Performance Optimization
Choosing the right data structure can bring orders of magnitude performance improvements:
- Time Complexity Differences: using a Hash instead of multiple Strings turns n separate GETs into a single HGETALL, reducing a per-object read from n operations to one
- Memory Usage: a Sorted Set can use 40-60% less memory than storing the same data as JSON strings
- Network Round Trips: pipelining and atomic server-side operations cut RTTs, boosting throughput by 10-50x (see the sketch below)
Real-world case: an e-commerce platform switched its shopping cart from multiple GET operations to a single Hash, cutting response time from 50ms to 5ms and memory usage by 35%.
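To make the round-trip claim concrete, here is a minimal sketch using the redis-py client (key names are illustrative, and a local Redis instance is assumed):

```python
import redis

r = redis.Redis(decode_responses=True)

# Without a pipeline: 100 commands, 100 network round trips
for i in range(100):
    r.set(f"key:{i}", f"value{i}")

# With a pipeline: commands are buffered client-side and flushed
# to the server in a single round trip
pipe = r.pipeline(transaction=False)  # plain batching, no MULTI/EXEC
for i in range(100):
    pipe.set(f"key:{i}", f"value{i}")
pipe.execute()
```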
2. Best Match for Business Scenarios
Different business needs correspond to different data structures:
- Social Applications: Set for mutual friends, Sorted Set for leaderboards
- Real-time Systems: List for message queues, Pub/Sub for real-time notifications
- Counting & Statistics: HyperLogLog for billion-scale unique counts with only 12KB memory
- Geolocation: Geo for nearby users, delivery range calculation
3. Avoiding Common Pitfalls
- ❌ Storing everything in Strings (serialized JSON) → partial fields cannot be updated atomically (illustrated in the sketch below)
- ❌ Using a List for sorting → O(n log n) on every query; a Sorted Set maintains order with O(log n) inserts
- ❌ Using a Set for time-series data → cannot query by time range; use a Sorted Set scored by timestamp
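A minimal redis-py sketch of the first pitfall (keys hypothetical): updating one counter inside a JSON String requires a non-atomic read-modify-write, while a Hash does it in one server-side command:

```python
import json

import redis

r = redis.Redis(decode_responses=True)

# String + JSON: two clients running this concurrently can lose updates
raw = r.get("user:1000:json") or "{}"
profile = json.loads(raw)
profile["login_count"] = profile.get("login_count", 0) + 1
r.set("user:1000:json", json.dumps(profile))

# Hash: a single atomic command, no deserialization of the whole object
r.hincrby("user:1000", "login_count", 1)
```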
Five Core Redis Data Structures Explained
1. String
Characteristics & Use Cases
Basic Characteristics:
- Maximum 512MB
- Binary safe (can store any data)
- Supports integer and float operations
- Auto-expiration (TTL)
Typical Applications:
- Session storage
- Distributed locks
- Counters (views, likes)
- Caching JSON/HTML fragments
Complete CRUD Operations Guide
Create & Update (Setting Values)
# Basic set
SET user:1000:name "Alice"
# Set with expiration time (seconds)
SETEX session:abc123 3600 "user_data"
# Set with expiration time (milliseconds)
PSETEX cache:product:999 60000 '{"name":"iPhone","price":999}'
# Set only if key doesn't exist (distributed lock)
SET lock:resource:1 "locked" NX EX 30
# Set only if key exists (update existing value)
SET user:1000:email "alice@example.com" XX
# Batch set (atomic operation)
MSET user:1:name "Alice" user:1:age "30" user:1:city "Taipei"
# Batch set only if all keys don't exist
MSETNX user:2:name "Bob" user:2:age "25"
Read (Getting Values)
# Basic read
GET user:1000:name
# Batch read (reduce network round trips)
MGET user:1:name user:1:age user:1:city
# Get old value and set new value (atomic; Redis 6.2+ prefers SET key value GET)
GETSET counter:visits 0
# Get substring (0-based index)
GETRANGE message:welcome 0 9
# Get value length
STRLEN user:1000:name
Update (Updating Values)
# Integer increment
INCR page:home:views # Increment by 1
INCRBY page:home:views 10 # Increment by 10
# Float increment
INCRBYFLOAT product:price 0.5 # Increment by 0.5
# Integer decrement
DECR stock:product:123 # Decrement by 1
DECRBY stock:product:123 5 # Decrement by 5
# Append string to end
APPEND log:error:2024 "New error message\n"
# Update partial string (overwrite from offset)
SETRANGE message:welcome 0 "Hi"
Delete
# Delete single key
DEL user:1000:name
# Delete multiple keys (atomic)
DEL user:1:name user:1:age user:1:city
# Check if key exists
EXISTS user:1000:name # Returns 1 (exists) or 0 (not exists)
# Set expiration time (seconds)
EXPIRE session:abc123 300
# Set expiration time (milliseconds)
PEXPIRE cache:data 60000
# Set absolute expiration time (Unix timestamp)
EXPIREAT session:abc123 1735689600
# Check remaining expiration time (seconds, -1 = permanent, -2 = not exists)
TTL session:abc123
# Remove expiration time
PERSIST session:abc123
Advanced Techniques & Best Practices
1. Distributed Lock Implementation
# Correct distributed lock (atomic operation + auto-expiration)
SET lock:order:123 "server-1-uuid" NX EX 10
# Release lock (use Lua to ensure atomicity)
redis-cli --eval release_lock.lua lock:order:123 , server-1-uuid
Lua script release_lock.lua:
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
else
    return 0
end
2. Rate Limiter Implementation (Fixed Window)
# Maximum 10 requests per minute (one-minute fixed window)
INCR rate:user:1000:20241018:1430 # First request creates the counter at 1
EXPIRE rate:user:1000:20241018:1430 60 # Start the window TTL when INCR returns 1
# Reject the request once the counter exceeds 10; a sliding-window variant follows
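For a true sliding window, a Sorted Set scored by request timestamp works well. A redis-py sketch (key layout hypothetical); note this simple version also records rejected requests, so a strict check-then-add would need a Lua script:

```python
import time
import uuid

import redis

r = redis.Redis(decode_responses=True)

def allow_request(user_id: str, limit: int = 10, window: int = 60) -> bool:
    key = f"rate:sliding:{user_id}"
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - window)         # drop entries outside the window
    pipe.zadd(key, {f"{now}:{uuid.uuid4().hex}": now})  # record this request (unique member)
    pipe.zcard(key)                                     # count requests in the window
    pipe.expire(key, window)                            # idle keys clean themselves up
    _, _, count, _ = pipe.execute()
    return count <= limit
```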
3. Session Storage Best Practice
# A Hash is more flexible when individual session fields change often,
# but when the session must expire and be replaced as one unit, use String + JSON
SET session:abc123 '{"user_id":1000,"role":"admin","login_at":1735689600}' EX 3600
2. Hash
Characteristics & Use Cases
Basic Characteristics:
- Similar to Map<String, String>, field-value pairs
- Suitable for storing objects (avoid serialization)
- Each Hash can have up to 2^32 – 1 fields
- Memory optimization: small Hashes use ziplist encoding
Typical Applications:
- User data (profile)
- Product details
- Configuration information
- Shopping cart
Complete CRUD Operations Guide
Create & Update (Setting Fields)
# Set single field
HSET user:1000 name "Alice"
# Set multiple fields (Redis 4.0+)
HSET user:1000 name "Alice" age "30" city "Taipei" email "alice@example.com"
# Batch set (older Redis)
HMSET user:1000 name "Alice" age "30" city "Taipei"
# Set only if field doesn't exist
HSETNX user:1000 created_at "2024-10-18"
Read (Getting Fields)
# Get single field
HGET user:1000 name
# Get multiple fields
HMGET user:1000 name age city
# Get all fields and values
HGETALL user:1000
# Get all field names only
HKEYS user:1000
# Get all values only
HVALS user:1000
# Get number of fields
HLEN user:1000
# Check if field exists
HEXISTS user:1000 email
Update (Updating Fields)
# Integer increment
HINCRBY user:1000 login_count 1
HINCRBY user:1000 points 100
# Float increment
HINCRBYFLOAT product:999 rating 0.5
Delete (Deleting Fields)
# Delete single field
HDEL user:1000 temp_token
# Delete multiple fields
HDEL user:1000 old_field1 old_field2
# Delete entire Hash
DEL user:1000
Advanced Techniques & Best Practices
1. Hash vs String (JSON) Selection
| Scenario | Recommended | Reason |
|---|---|---|
| Need to update single field | Hash | Avoid deserializing entire object |
| Need atomic read-modify-write across fields | String (JSON) + Lua | Multi-step conditional updates need a script; a single multi-field HSET is atomic but unconditional |
| Field count < 10 | Hash | ziplist encoding saves memory |
| Need to expire entire object | String (JSON) | Hash cannot set TTL on individual fields |
2. Shopping Cart Implementation
# Add product to cart
HSET cart:user:1000 product:123 "2" # Product ID: Quantity
# Update product quantity
HINCRBY cart:user:1000 product:123 1 # Quantity +1
# Remove product
HDEL cart:user:1000 product:123
# Get cart item count
HLEN cart:user:1000
# Get all products
HGETALL cart:user:1000
3. Memory Optimization (ziplist encoding conditions)
# Check encoding method
OBJECT ENCODING user:1000
# ziplist encoding conditions (configurable in redis.conf)
# hash-max-ziplist-entries 128 (field count <= 128 by default)
# hash-max-ziplist-value 64 (value length <= 64 bytes)
3. List
Characteristics & Use Cases
Basic Characteristics:
- Ordered, allows duplicates
- Underlying structure: quicklist (a doubly linked list of compact listpack/ziplist nodes)
- Head/tail operations O(1), middle operations O(n)
- Maximum 2^32 – 1 elements
Typical Applications:
- Message queue
- Latest activity feed
- Task queue
- Undo/Redo functionality
Complete CRUD Operations Guide
Create & Update (Inserting Elements)
# Insert from left (head)
LPUSH timeline:user:1000 "post:999"
LPUSH timeline:user:1000 "post:998" "post:997"
# Insert from right (tail)
RPUSH queue:email "email:1" "email:2"
# Insert only if List exists
LPUSHX timeline:user:1000 "post:996"
RPUSHX queue:email "email:3"
# Insert before/after specified element
LINSERT timeline:user:1000 BEFORE "post:999" "post:1000"
LINSERT timeline:user:1000 AFTER "post:999" "post:998"
# Set value at index (0-based, negative counts from tail)
LSET timeline:user:1000 0 "updated_post:999"
Read (Getting Elements)
# Get range of elements (0 is first, -1 is last)
LRANGE timeline:user:1000 0 9 # Latest 10 posts
LRANGE timeline:user:1000 0 -1 # All elements
# Get element at index
LINDEX timeline:user:1000 0 # First element
LINDEX timeline:user:1000 -1 # Last element
# Get List length
LLEN timeline:user:1000
Update (Updating Elements)
# Trim List (keep specified range, delete others)
LTRIM timeline:user:1000 0 99 # Keep only latest 100
# Update value at index
LSET timeline:user:1000 5 "new_value"
Delete (Deleting Elements)
# Pop from left (head)
LPOP queue:email # Pop one
LPOP queue:email 3 # Pop three (Redis 6.2+)
# Pop from right (tail)
RPOP queue:email
# Blocking pop (for message queues, timeout in seconds)
BLPOP queue:email 30 # Wait 30 seconds
BRPOP queue:email 0 # Wait forever
# Remove specified value (count > 0 from head, < 0 from tail, = 0 all)
LREM timeline:user:1000 1 "post:999" # Remove first "post:999" from head
LREM timeline:user:1000 -2 "spam" # Remove two "spam" from tail
LREM timeline:user:1000 0 "ad" # Remove all "ad"
# Delete entire List
DEL timeline:user:1000
Advanced Techniques & Best Practices
1. Reliable Message Queue Implementation
# Producer: send message
LPUSH queue:tasks "task:process_order:123"
# Consumer: atomically move the task to a processing queue (no loss on crash)
BRPOPLPUSH queue:tasks queue:tasks:processing 30
# Note: BRPOPLPUSH is deprecated since Redis 6.2; prefer
# BLMOVE queue:tasks queue:tasks:processing RIGHT LEFT 30
# After processing, delete
LREM queue:tasks:processing 1 "task:process_order:123"
# If processing fails, can retry from processing queue
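A sketch of the consumer loop in redis-py (Redis 6.2+ for BLMOVE, the non-deprecated equivalent of BRPOPLPUSH; handle is a stand-in for real business logic):

```python
import redis

r = redis.Redis(decode_responses=True)

def handle(task: str) -> None:
    print("processing", task)  # stand-in for real work

while True:
    # Atomically move one task from pending (tail) to processing (head)
    task = r.blmove("queue:tasks", "queue:tasks:processing", 30, "RIGHT", "LEFT")
    if task is None:
        continue  # timed out after 30s; wait again
    try:
        handle(task)
        r.lrem("queue:tasks:processing", 1, task)  # acknowledge on success
    except Exception:
        pass  # leave the task in processing for a retry/reaper process
```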
2. Latest Activity Timeline (Fixed Length)
# Publish new post
LPUSH timeline:user:1000 "post:1001"
# Auto-trim (keep only latest 100)
LTRIM timeline:user:1000 0 99
# Get latest 20
LRANGE timeline:user:1000 0 19
3. Pagination
# Page 1 (10 items per page)
LRANGE messages:chat:room1 0 9
# Page 2
LRANGE messages:chat:room1 10 19
# Page n (0-indexed)
# start = n * page_size
# end = start + page_size - 1
4. Set
Characteristics & Use Cases
Basic Characteristics:
- Unordered, no duplicates
- Underlying structure: hashtable or intset
- Add/remove/lookup O(1)
- Supports set operations (intersection, union, difference)
Typical Applications:
- Tag system
- Mutual friends
- Deduplication (UV statistics)
- Lottery system
Complete CRUD Operations Guide
Create & Update (Adding Elements)
# Add single element
SADD tags:post:1000 "Redis"
# Add multiple elements
SADD tags:post:1000 "Database" "NoSQL" "Cache"
# Add and return count of successfully added
SADD followers:user:1000 "user:2" "user:3" "user:3" # Returns 2 (user:3 duplicate)
Read (Getting Elements)
# Get all elements (unordered)
SMEMBERS tags:post:1000
# Get element count
SCARD tags:post:1000
# Check if element exists
SISMEMBER tags:post:1000 "Redis" # Returns 1 (exists) or 0 (not exists)
# Randomly get n elements (without removal)
SRANDMEMBER tags:post:1000 2
# Randomly pop n elements (with removal)
SPOP tags:post:1000 1
Set Operations (Intersection, Union, Difference)
# Intersection (common elements)
SINTER followers:user:A followers:user:B # Mutual friends of A and B
# Union (all elements)
SUNION tags:post:1 tags:post:2 # All tags
# Difference (A has but B doesn't)
SDIFF followers:user:A followers:user:B # Friends of A but not B
# Intersection and store result
SINTERSTORE result:common followers:user:A followers:user:B
# Union and store result
SUNIONSTORE result:all tags:post:1 tags:post:2
# Difference and store result
SDIFFSTORE result:diff followers:user:A followers:user:B
Delete (Deleting Elements)
# Delete specified element
SREM tags:post:1000 "OldTag"
# Delete multiple elements
SREM tags:post:1000 "Tag1" "Tag2" "Tag3"
# Randomly pop elements (delete and return)
SPOP lottery:users 5 # Lottery: randomly pick 5 winners
# Delete entire Set
DEL tags:post:1000
Advanced Techniques & Best Practices
1. Mutual Friends Feature
# User A's friends
SADD friends:userA "user1" "user2" "user3" "user4"
# User B's friends
SADD friends:userB "user2" "user3" "user5" "user6"
# Calculate mutual friends
SINTER friends:userA friends:userB
# Result: user2, user3
# Recommend friends (B's friends but not A's)
SDIFF friends:userB friends:userA
# Result: user5, user6
2. Tag System
# Tag article
SADD tags:post:100 "Redis" "Database" "NoSQL"
# Build reverse index for tags (find articles with this tag)
SADD tag:Redis:posts "post:100" "post:101" "post:102"
SADD tag:Database:posts "post:100" "post:103"
# Find articles with both Redis and Database tags
SINTER tag:Redis:posts tag:Database:posts
3. UV (Unique Visitors) Statistics
# Record visitors
SADD uv:page:home:20241018 "user:1" "user:2" "user:1"
# Get UV count
SCARD uv:page:home:20241018
# Note: Set memory grows with the number of unique visitors; for very large UV counts, consider HyperLogLog
4. Lottery System
# Join lottery
SADD lottery:event:2024 "user:1" "user:2" "user:3" ... "user:10000"
# Draw 10 winners (with removal)
SPOP lottery:event:2024 10
# If you want to keep participant list, use SRANDMEMBER
SRANDMEMBER lottery:event:2024 10
5. Sorted Set
Characteristics & Use Cases
Basic Characteristics:
- Ordered, no duplicates
- Each element associated with a score (sorting basis)
- Underlying structure: skiplist + hashtable
- Add/remove/lookup O(log n)
- Range query O(log n + m)
Typical Applications:
- Leaderboards (game scores, trending articles)
- Delayed queue (score = execution timestamp)
- Time-series data
- Range queries (geolocation, price range)
Complete CRUD Operations Guide
Create & Update (Adding/Updating Elements)
# Add single element (score member)
ZADD leaderboard:game1 1000 "player:Alice"
# Add multiple elements
ZADD leaderboard:game1 950 "player:Bob" 1050 "player:Charlie" 800 "player:David"
# Update options (NX/XX need Redis 3.0.2+; GT/LT need Redis 6.2+)
ZADD leaderboard:game1 NX 1100 "player:Eve" # Only if member doesn't exist
ZADD leaderboard:game1 XX 1200 "player:Alice" # Only if member exists
ZADD leaderboard:game1 GT 1150 "player:Alice" # Only if new score > old score
ZADD leaderboard:game1 LT 900 "player:Bob" # Only if new score < old score
# Return count of changed elements
ZADD leaderboard:game1 CH 1300 "player:Alice" # CH counts updates as well as additions
# Increment score (also accepts floats)
ZINCRBY leaderboard:game1 50 "player:Bob" # Bob's score +50
Read (Getting Elements)
# Get by rank range (0-based, low to high)
ZRANGE leaderboard:game1 0 9 # Lowest 10 scores
ZRANGE leaderboard:game1 0 -1 # All elements
# Get by rank range (high to low)
ZREVRANGE leaderboard:game1 0 9 # Highest 10 scores (leaderboard)
# Get by rank range (with scores)
ZRANGE leaderboard:game1 0 9 WITHSCORES
# Get by score range (-inf = negative infinity, +inf = positive infinity)
ZRANGEBYSCORE leaderboard:game1 900 1100 # Score between 900-1100
ZRANGEBYSCORE leaderboard:game1 (900 1100 # Score in (900, 1100] (900 exclusive)
ZRANGEBYSCORE leaderboard:game1 -inf +inf WITHSCORES LIMIT 0 10 # Pagination
# Get by score range (high to low)
ZREVRANGEBYSCORE leaderboard:game1 1100 900
# Get element count
ZCARD leaderboard:game1
# Get count of elements in score range
ZCOUNT leaderboard:game1 900 1100
# Get element's score
ZSCORE leaderboard:game1 "player:Alice"
# Get element's rank (0-indexed, low to high)
ZRANK leaderboard:game1 "player:Alice"
# Get element's rank (high to low)
ZREVRANK leaderboard:game1 "player:Alice" # For leaderboards
Delete (Deleting Elements)
# Delete specified element
ZREM leaderboard:game1 "player:David"
# Delete multiple elements
ZREM leaderboard:game1 "player:A" "player:B"
# Delete by rank range
ZREMRANGEBYRANK leaderboard:game1 0 4 # Delete ranks 0-4 (lowest 5)
# Delete by score range
ZREMRANGEBYSCORE leaderboard:game1 0 500 # Delete elements with score 0-500
# Delete entire Sorted Set
DEL leaderboard:game1
Advanced Techniques & Best Practices
1. Game Leaderboard
# Update player score
ZADD leaderboard:weekly 1580 "player:Alice"
# Get Top 10
ZREVRANGE leaderboard:weekly 0 9 WITHSCORES
# Get player rank (need +1 to start from 1)
rank=$(redis-cli ZREVRANK leaderboard:weekly "player:Alice")
echo $((rank + 1))
# Get players around rank (5 before and after)
rank=$(redis-cli ZREVRANK leaderboard:weekly "player:Alice")
redis-cli ZREVRANGE leaderboard:weekly $((rank - 5)) $((rank + 5)) WITHSCORES
2. Delayed Queue (Scheduled Tasks)
# Add delayed task (score = execution time Unix timestamp)
ZADD delayed_queue $(date -d "+5 minutes" +%s) "task:send_email:123"
ZADD delayed_queue $(date -d "+1 hour" +%s) "task:cleanup:old_data"
# Consumer: get expired tasks
current_time=$(date +%s)
redis-cli ZRANGEBYSCORE delayed_queue -inf $current_time LIMIT 0 100
# Get and delete (atomic operation, use Lua)
redis-cli --eval pop_delayed_tasks.lua delayed_queue , $current_time
Lua script pop_delayed_tasks.lua:
local tasks = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', 0, 100)
if #tasks > 0 then
    redis.call('ZREM', KEYS[1], unpack(tasks))
end
return tasks
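A sketch of a poller around this script using redis-py's register_script (the one-second interval and batch size of 100 are arbitrary choices):

```python
import time

import redis

r = redis.Redis(decode_responses=True)

POP_DUE = r.register_script("""
local tasks = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', 0, 100)
if #tasks > 0 then
    redis.call('ZREM', KEYS[1], unpack(tasks))
end
return tasks
""")

while True:
    due = POP_DUE(keys=["delayed_queue"], args=[time.time()])
    for task in due:
        print("executing", task)  # hand off to the real handler here
    time.sleep(1)                 # poll interval; tune for latency needs
```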
3. Time-Series Data (Price History)
# Record price (score = timestamp); members must be unique, so if the same price can repeat, encode the timestamp into the member
ZADD price:BTC:USD 1697635200 "45000"
ZADD price:BTC:USD 1697638800 "45200"
ZADD price:BTC:USD 1697642400 "44800"
# Query price within time range
ZRANGEBYSCORE price:BTC:USD 1697635200 1697642400 WITHSCORES
# Get latest price
ZREVRANGE price:BTC:USD 0 0 WITHSCORES
4. Trending Articles (Sorted by Views)
# Article viewed, increment score
ZINCRBY trending:posts:20241018 1 "post:1234"
# Get 24h trending articles (Top 10)
ZREVRANGE trending:posts:20241018 0 9 WITHSCORES
# Auto-expire (reset daily)
EXPIRE trending:posts:20241018 86400
5. Set Operations
# Calculate intersection (take minimum score)
ZINTERSTORE result:inter 2 set1 set2 WEIGHTS 1 1 AGGREGATE MIN
# Calculate union (take maximum score)
ZUNIONSTORE result:union 2 set1 set2 WEIGHTS 1 1 AGGREGATE MAX
# Calculate weighted union (sum scores)
ZUNIONSTORE result:weighted 2 set1 set2 WEIGHTS 0.7 0.3 AGGREGATE SUM
Frequently Asked Questions (FAQ)
Q1: When to use Hash vs String (JSON)?
Answer: Depends on operation patterns and expiration needs
| Scenario | Recommended | Reason |
|---|---|---|
| Frequently update single field | Hash | Avoid deserializing entire JSON |
| Need field-level increment operations | Hash | HINCRBY atomic operation |
| Need to expire entire object | String (JSON) | Hash cannot set TTL on individual fields |
| Need atomic read-modify-write of multiple fields | String (JSON) + Lua | Multi-step conditional updates need a script; a single multi-field HSET is itself atomic |
| Field count < 100 and small values | Hash | ziplist encoding saves memory |
| Need complex queries at application layer | String (JSON) | Deserialize and use programming language capabilities |
Practical Example:
# Use Hash (suitable for frequently updating single field)
HSET user:1000 login_count "0"
HINCRBY user:1000 login_count 1 # +1 per login
# Use String (suitable for session needing overall expiration)
SET session:abc123 '{"user_id":1000,"role":"admin"}' EX 3600
Q2: List vs Sorted Set, when to use which?
Answer: Depends on whether you need sorting and range queries
| Feature | List | Sorted Set |
|---|---|---|
| Ordering | Insertion order | Sorted by score |
| Duplicates | Allowed | Not allowed |
| Head/Tail Ops | O(1) | O(log n) |
| Range Query | By index O(n) | By score O(log n + m) |
| Rank Query | Not supported | O(log n) |
| Use Cases | Message queue, latest feed | Leaderboard, delayed queue |
Selection Guide:
- List: Message queue (FIFO), latest N records, Undo/Redo
- Sorted Set: Leaderboard, scheduled tasks, time-series, range queries
Q3: How to implement pagination?
Answer: Choose method based on data structure
List Pagination:
# Page 1 (20 items per page)
LRANGE messages:chat 0 19
# Page 2
LRANGE messages:chat 20 39
# Page n (n starts from 1)
start=$((($n - 1) * $page_size))
end=$(($start + $page_size - 1))
LRANGE messages:chat $start $end
Sorted Set Pagination:
# Pagination by rank (Page 1)
ZREVRANGE leaderboard 0 19 WITHSCORES
# Pagination by score range
ZRANGEBYSCORE timeline -inf +inf WITHSCORES LIMIT 0 20 # Page 1
ZRANGEBYSCORE timeline -inf +inf WITHSCORES LIMIT 20 20 # Page 2
Set/Hash Pagination:
# Set uses SSCAN (cursor pagination, avoid blocking)
SSCAN tags:post:1000 0 COUNT 20
# Hash uses HSCAN
HSCAN user:1000 0 COUNT 20
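Note that SCAN-family cursors must be looped until the server returns cursor 0, and COUNT is only a hint, so "pages" are neither fixed-size nor stable. A redis-py sketch of a full iteration:

```python
import redis

r = redis.Redis(decode_responses=True)

# redis-py hides the cursor loop behind the *scan_iter helpers
for member in r.sscan_iter("tags:post:1000", count=20):
    print(member)

# The raw loop, for clarity: repeat until the returned cursor is 0
cursor = 0
while True:
    cursor, fields = r.hscan("user:1000", cursor=cursor, count=20)
    for field, value in fields.items():
        print(field, value)
    if cursor == 0:
        break
```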
Q4: How to implement distributed locks in Redis?
Answer: Use SET NX EX + Lua release
Correct Implementation:
# 1. Acquire lock (atomic operation, with expiration, avoid deadlock)
SET lock:resource:123 "server-1-uuid-12345" NX EX 10
# 2. Execute business logic
# ...
# 3. Release lock (use Lua to ensure atomicity, avoid deleting other process's lock)
redis-cli --eval release_lock.lua lock:resource:123 , "server-1-uuid-12345"
release_lock.lua:
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
else
    return 0
end
Common Mistakes:
- ❌ Not setting expiration → deadlock
- ❌ SETNX then EXPIRE → not atomic, may deadlock
- ❌ Not checking value when releasing → may delete other process’s lock
Advanced Solutions:
- Redlock algorithm (multi-node distributed lock)
- RedissonLock (Java client, supports reentrant locks)
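Wrapping the pattern above in application code, a redis-py sketch (function and key names hypothetical); the random token guarantees only the holder can release:

```python
import uuid

import redis

r = redis.Redis(decode_responses=True)

RELEASE = r.register_script("""
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
else
    return 0
end
""")

def acquire(resource: str, ttl: int = 10) -> str | None:
    token = uuid.uuid4().hex
    # NX: only set if absent; EX: auto-expire so a crashed holder can't deadlock
    if r.set(f"lock:{resource}", token, nx=True, ex=ttl):
        return token
    return None

def release(resource: str, token: str) -> bool:
    return RELEASE(keys=[f"lock:{resource}"], args=[token]) == 1
```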
Q5: How to avoid big key problems?
Answer: Split large keys or use appropriate data structures
Big Key Hazards:
- Blocks main thread (Redis is single-threaded)
- Network transmission delays
- Memory fragmentation
- Master-slave replication delays
Detecting Big Keys:
# Use redis-cli scan
redis-cli --bigkeys
# Use MEMORY USAGE (Redis 4.0+)
MEMORY USAGE user:1000
# Use DEBUG OBJECT
DEBUG OBJECT large_hash
Solutions:
1. Hash Splitting
# Bad: Single large Hash
HSET user:1000 field1 value1
HSET user:1000 field2 value2
... (100,000 fields)
# Good: Split into multiple small Hashes
HSET user:1000:0 field1 value1
HSET user:1000:1 field2 value2
... (max 1000 fields per Hash)
# Use hash function for sharding
shard=$(echo -n "field_name" | md5sum | cut -c1-2) # 00-ff (256 shards)
HSET user:1000:$shard field_name value
2. List Splitting
# Split by time
LPUSH timeline:user:1000:202410 "post:1"
LPUSH timeline:user:1000:202411 "post:2"
3. String Compression
# Application-layer compression (gzip, snappy)
compressed_data=$(gzip -c data.json)
SET cache:large_data "$compressed_data"
4. Use HyperLogLog (Large-scale Unique Counting)
# Bad: Using Set (memory O(n))
SADD uv:20241018 "user:1" "user:2" ... "user:1000000"
# Good: Using HyperLogLog (fixed 12KB memory, 0.81% error)
PFADD uv:20241018 "user:1" "user:2" ... "user:1000000"
PFCOUNT uv:20241018
Q6: How to choose Redis memory eviction policy?
Answer: Choose appropriate maxmemory-policy based on business needs
8 Eviction Policies:
| Policy | Description | Use Case |
|---|---|---|
| noeviction | Reject writes when memory is full (default) | Cannot afford data loss |
| allkeys-lru | Evict least recently used among all keys | General cache |
| allkeys-lfu | Evict least frequently used among all keys | Hotspot data cache |
| allkeys-random | Randomly evict among all keys | Test environment |
| volatile-lru | Evict LRU among keys with TTL | Mixed scenario (some permanent keys) |
| volatile-lfu | Evict LFU among keys with TTL | Hotspot data + permanent keys |
| volatile-random | Randomly evict among keys with TTL | Rarely used |
| volatile-ttl | Evict keys with earliest expiration | Time-sensitive data |
Configuration:
# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru
# Or use CONFIG SET
CONFIG SET maxmemory-policy allkeys-lfu
Selection Guide:
- Pure cache: allkeys-lru or allkeys-lfu
- Cache + persistent data: volatile-lru or volatile-lfu
- Time-sensitive data: volatile-ttl
Q7: How to monitor Redis performance?
Answer: Use built-in commands and external monitoring tools
Built-in Monitoring Commands:
# 1. Real-time command monitoring
redis-cli MONITOR
# 2. View server information
redis-cli INFO
redis-cli INFO stats # Statistics
redis-cli INFO memory # Memory info
redis-cli INFO replication # Master-slave replication
# 3. View slow queries
redis-cli SLOWLOG GET 10
# 4. View client connections
redis-cli CLIENT LIST
# 5. View memory usage
redis-cli MEMORY STATS
# 6. View command statistics
redis-cli INFO commandstats
Key Metrics:
| Metric | Description | Normal Range |
|---|---|---|
| used_memory | Memory used | < 80% of maxmemory |
| mem_fragmentation_ratio | Memory fragmentation ratio | 1.0 – 1.5 |
| instantaneous_ops_per_sec | Operations per second (QPS) | Depends on hardware |
| keyspace_hits / keyspace_misses | Cache hit rate | > 80% |
| connected_clients | Connection count | < maxclients |
| evicted_keys | Number of evicted keys | Depends on policy |
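As a small sketch, the hit rate in the table can be derived from INFO stats with redis-py:

```python
import redis

r = redis.Redis(decode_responses=True)

stats = r.info("stats")
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
total = hits + misses
hit_rate = hits / total if total else 0.0
print(f"hit rate: {hit_rate:.2%}")  # aim for > 80% on a cache workload
```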
External Monitoring Tools:
- Redis Exporter + Prometheus + Grafana (open-source)
- RedisInsight (official GUI tool)
- AWS CloudWatch (ElastiCache)
- Datadog / New Relic (commercial APM)
Best Practices Summary
Performance Optimization
- Use Pipeline for Batch Operations
# Bad: one network round trip per command
for i in {1..100}; do
  redis-cli SET key:$i value$i
done
# Good: use a pipeline
redis-cli --pipe < commands.txt
- Use Lua Scripts for Atomic Operations
# Ensure atomicity of multiple commands
redis-cli --eval complex_operation.lua keys , args
- Set Reasonable Expiration Times
- Use random expiration to avoid cache avalanche
- Use SCAN instead of KEYS for scanning (avoid blocking)
Data Modeling
- Key Naming Convention
business:object_type:ID:field
user:profile:1000:name
order:detail:20241018:123456
cache:product:999
- Choose Appropriate Data Structure
- Need sorting → Sorted Set
- Need deduplication → Set
- Need frequent single field updates → Hash
- Need overall expiration → String
- Avoid Big Keys
- Hash/Set/Sorted Set: single key should not exceed 10,000 elements
- String: should not exceed 10MB
Security & Reliability
- Enable Persistence
- RDB: periodic snapshots (suitable for backups)
- AOF: record every write operation (suitable for disaster recovery)
- Hybrid persistence: RDB + AOF (Redis 4.0+)
- Set Password & Permissions
# redis.conf
requirepass your_strong_password
# ACL (Redis 6.0+)
ACL SETUSER alice on >password ~cache:* +get +set
- Use Master-Slave Replication & Sentinel/Cluster
- Master-slave replication: read-write separation
- Sentinel: automatic failover
- Cluster: horizontal scaling
Conclusion
Deep understanding of Redis’s five core data structures and CRUD operations is fundamental to using Redis efficiently. Key takeaways:
- 📌 String: Simplest yet most flexible, suitable for cache, counters, distributed locks
- 📌 Hash: Suitable for object storage, but cannot set TTL on individual fields
- 📌 List: Suitable for message queues, latest feed, but doesn’t support sorted queries
- 📌 Set: Suitable for tags, deduplication, set operations
- 📌 Sorted Set: Suitable for leaderboards, delayed queues, time-series
Choosing the right data structure can bring orders of magnitude performance improvements and memory savings.
Practical Recommendations:
- ✅ Choose data structures based on business scenarios (not just String for everything)
- ✅ Use Pipeline and Lua to reduce network round trips
- ✅ Avoid Big Keys, split data reasonably
- ✅ Set reasonable expiration times and eviction policies
- ✅ Monitor performance metrics, optimize promptly
Through this guide, you should be able to flexibly use various Redis data structures to build high-performance, scalable system architectures.
Related Articles
- Data Storage Technologies Comparison: Redis, SQLite, and IndexedDB
- Optimize Performance with AWS Cache Solutions: Memcached vs Redis Comparison
- Quartz Data Persistence Complete Guide: Configuration, Advantages & Best Practices
- Azure SQL Post-Migration Performance Optimization: Query Statistics, Top SQL Analysis, and Index Tuning Guide