Memory usage had been creeping up for months. We kept throwing more RAM at it. Then came the invoice: $4,800 per month for something that stored 50MB of actual data.
That morning changed everything.
The Breaking Point
We were using Redis for one thing: caching API responses and session tokens. Nothing fancy. No pub/sub. No sorted sets. Just GET and SET with TTL.
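For scale: the entire Redis surface we touched was two calls. A minimal sketch of that integration, assuming the go-redis client (the post does not show the original client code):
import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// The whole integration, more or less: one SET with a TTL, one GET.
func cacheSession(rdb *redis.Client, token []byte) ([]byte, error) {
    ctx := context.Background()
    if err := rdb.Set(ctx, "session:abc", token, 15*time.Minute).Err(); err != nil {
        return nil, err
    }
    return rdb.Get(ctx, "session:abc").Bytes()
}
That is the whole dependency. Everything that follows exists to replace those two calls.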
Looking at our usage patterns made me angry. We were paying enterprise prices for what amounted to a glorified HashMap with expiration. The overhead was insane. The complexity was unnecessary. The operational burden was killing our small team.
So I did what any sleep-deprived engineer would do at 3 AM.
I opened a new Go file and started typing.
The Realization
Here is what we actually needed:
- Store key-value pairs in memory
- Expire keys after a certain time
- Handle concurrent reads and writes
- Maybe 10,000 keys max
That is it. No clustering. No persistence. No Lua scripts.
Redis gives you a Ferrari when you need a bicycle. And we were paying Ferrari prices.
Building Our Own
The core structure took 30 minutes to write:
import (
    "sync"
    "time"
)

// Cache is a minimal in-memory key-value store with per-key expiration.
type Cache struct {
    mu    sync.RWMutex
    items map[string]*Item
}

// Item holds a value and its expiration time as a Unix timestamp.
type Item struct {
    Value  []byte
    Expiry int64
}

// Set stores a value under key with a time-to-live in seconds.
func (c *Cache) Set(key string, val []byte, ttl int) {
    c.mu.Lock()
    defer c.mu.Unlock()

    expiry := time.Now().Add(time.Duration(ttl) * time.Second).Unix()
    c.items[key] = &Item{
        Value:  val,
        Expiry: expiry,
    }
}
// Get returns the value for key, or false if the key is missing or expired.
func (c *Cache) Get(key string) ([]byte, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()

    item, ok := c.items[key]
    if !ok {
        return nil, false
    }
    if time.Now().Unix() > item.Expiry {
        // Expired entries are treated as missing; the cleanup
        // goroutine removes them from the map later.
        return nil, false
    }
    return item.Value, true
}
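The benchmarks later in this post call NewCache, which the snippet above omits. A minimal sketch, assuming the constructor only needs to initialize the map and start the cleanup loop shown next:
// NewCache initializes the item map and starts the background
// cleanup loop. (Implied by the benchmarks; the original
// constructor is not shown here.)
func NewCache() *Cache {
    c := &Cache{items: make(map[string]*Item)}
    go c.cleanup()
    return c
}
Without that initialization, the first Set would panic on a nil map.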
The cleanup goroutine handles expiration:
// cleanup sweeps the map every 30 seconds and deletes expired entries.
func (c *Cache) cleanup() {
    for {
        time.Sleep(30 * time.Second)
        now := time.Now().Unix()

        c.mu.Lock()
        for k, v := range c.items {
            if now > v.Expiry {
                delete(c.items, k)
            }
        }
        c.mu.Unlock()
    }
}
The Architecture
Our setup before:
      Load Balancer
            |
       _____|_____
      |           |
  Service A   Service B
      |           |
      |___________|
            |
      Redis Cluster
  (3 nodes, 16GB each)
Our setup now:
      Load Balancer
            |
       _____|_____
      |           |
  Service A   Service B
   (cache)     (cache)
Each service has its own embedded cache. No network calls. No serialization. No connection pools.
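To make "embedded" concrete, here is a hypothetical handler wiring (userHandler and fetchUser are illustrative names, not from our codebase):
import "net/http"

var cache = NewCache()

// fetchUser stands in for the expensive upstream call we are caching.
func fetchUser(id string) []byte { return []byte("user-" + id) }

func userHandler(w http.ResponseWriter, r *http.Request) {
    id := r.URL.Query().Get("id")
    key := "user:" + id
    if data, ok := cache.Get(key); ok {
        w.Write(data) // hit: served straight from process memory
        return
    }
    data := fetchUser(id) // miss: hit the origin once
    cache.Set(key, data, 60)
    w.Write(data)
}
A cache hit never leaves the process: no socket, no serialization, no connection pool checkout.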
Performance Numbers
I ran benchmarks on my laptop (M1 MacBook):
func BenchmarkSet(b *testing.B) {
    cache := NewCache()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        cache.Set("key", []byte("value"), 60)
    }
}
// Result: 142 ns/op

func BenchmarkGet(b *testing.B) {
    cache := NewCache()
    cache.Set("key", []byte("value"), 60)
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        cache.Get("key")
    }
}
// Result: 38 ns/op
Redis over localhost for comparison:
- SET: 45,000 ns/op
- GET: 38,000 ns/op
That is roughly a 300x improvement on SET and 1000x on GET. Not because our code is magic. Because we removed the network.
The Trade-offs
This approach is not for everyone.
You lose persistence. If your service restarts, the cache is gone. For session tokens and API responses, we do not care. The data regenerates naturally.
You lose shared state. Each service instance has its own cache. For our use case, this actually improved performance. No cache stampedes. No lock contention across services.
You lose Redis features. No transactions. No pub/sub. No complex data types. If you need those, keep Redis.
Six Months Later
Our AWS bill dropped by $4,800 per month. Response times improved by 20ms on average. No more 3 AM pages about Redis memory issues.
The entire cache implementation is 200 lines of Go. I can hold the whole thing in my head. When something breaks, I fix it in minutes, not hours.
Sometimes the best solution is not the most sophisticated one. Sometimes you do not need a distributed system. Sometimes a HashMap with a mutex is enough.
Redis is amazing technology. But for basic caching, it might be overkill.
Your Ferrari is beautiful.
But maybe you just need a bicycle.