I learned that the hard way when one table with a shiny UUID primary key quietly cut our throughput in half.
Queries were fine. Migrations were fine. Then write traffic spiked and the database started to sweat. That was the week I stopped using UUIDs as primary keys for anything that cared about performance.
The day UUIDs hit our throughput ceiling
We launched a feature, write traffic went up, API nodes had headroom, connection pools looked healthy, and yet Postgres sat near 90 percent CPU while insert latency climbed into dangerous territory. Simple insert statements sat at the top of the flame graph.
insert into orders (id, user_id, amount, created_at)
values ('6e5ac3c0-8e3e-4ea0-9a63-5bfa0bf4f422', '5b0a9e7e-8d3f-4d7e-9321-9dc01e9e1234', 1499, now());

No joins. No heavy triggers. Random looking primary keys.
We copied the table to staging, changed the primary key to a bigserial integer, replayed the workload, and watched CPU drop by more than 40 percent. p95 insert latency fell under a millisecond.
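The swap itself was mechanical. A rough sketch of what we ran, with illustrative names:

-- Copy the columns, then trade the uuid key for a sequence backed one
create table orders_replay (like orders);
alter table orders_replay drop column id;
alter table orders_replay add column id bigserial primary key;

-- Carry existing rows across; the sequence numbers them as they arrive
insert into orders_replay (user_id, amount, created_at)
select user_id, amount, created_at from orders;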
The data model stayed the same. Only the primary key strategy changed.
Why random keys punish your index
Relational databases love order. A B-tree index works best when new entries arrive roughly in sequence.
With an auto incrementing bigint, new rows land at the right edge of the index.
The same few pages at that edge stay hot in memory, so nearly every insert is a cheap append to a page that is already cached.
With a random UUID, each insert can hit a different page of the index. The database jumps around, touches cold pages, and keeps a much larger part of the tree alive in memory.
That means more page splits, more cache misses, fatter indexes, and higher write amplification.
You pay this cost on every insert and on every update of secondary indexes that include that UUID.
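You can see the effect without any production traffic. A throwaway experiment along these lines shows it directly (t_seq and t_rand are made up names for the demo; gen_random_uuid() is built into Postgres 13 and later):

create table t_seq (id bigint primary key);
create table t_rand (id uuid primary key);

-- One million sequential keys vs one million random keys
insert into t_seq select g from generate_series(1, 1000000) g;
insert into t_rand select gen_random_uuid() from generate_series(1, 1000000);

-- Compare the size of the two primary key indexes
select pg_size_pretty(pg_relation_size('t_seq_pkey'));
select pg_size_pretty(pg_relation_size('t_rand_pkey'));

Expect the random index to come out noticeably larger. Part of that is the wider 16 byte key, and part is the half empty pages left behind by random page splits.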
What the numbers looked like
We ran a benchmark on a test cluster. Only the primary key type changed.
-- Version A: UUID primary key
create table orders_uuid (
  id uuid primary key,
  user_id uuid not null,
  amount integer not null,
  created_at timestamptz not null
);

-- Version B: bigint primary key
create table orders_seq (
  id bigserial primary key,
  user_id uuid not null,
  amount integer not null,
  created_at timestamptz not null
);

Results on our hardware looked like this:
| Key type | Writes per second | Index size after 10M rows | p95 insert latency |
| --------- | ----------------- | ------------------------- | ------------------ |
| UUID | 42k | 3.2 GB | 6.3 ms |
| Bigserial | 118k | 1.1 GB | 1.1 ms |

UUIDs were cutting peak throughput by more than half.
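The workload itself was nothing exotic: single row inserts from many connections. We did not publish our harness, but a pgbench custom script shaped like this one gets you close (the file name and client counts below are illustrative):

\set amount random(100, 10000)
insert into orders_uuid (id, user_id, amount, created_at) values (gen_random_uuid(), gen_random_uuid(), :amount, now());

Run it with something like pgbench -n -f insert_uuid.sql -c 32 -T 60, then point the same script at orders_seq with the id column dropped from the statement and compare.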
How frameworks quietly push you into UUIDs
Here is a typical JPA or Spring Data entity.
@Entity
class Order {
    @Id
    UUID id;
    UUID userId;
    int amount;
    Instant createdAt;
}

It feels safe and modern. IDs are globally unique and easy to generate in the application before you hit the database.
The problem is that your database does not care how the type looks in Java. It cares about how predictable the next key will be. If every insert arrives with a random primary key, the storage engine never gets locality for free.
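The cure is not to abandon the ORM. Let the database assign the key and read it back on insert, which is roughly what @GeneratedValue(strategy = GenerationType.IDENTITY) ends up doing for you. Against the orders_seq table from the benchmark, the raw SQL pattern looks like this:

-- Let the sequence assign the key, then read it back in one round trip
insert into orders_seq (user_id, amount, created_at)
values (gen_random_uuid(), 1499, now())
returning id;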
Better options that still feel modern
Sequential numeric primary key plus public UUID
Keep your primary key as a bigint generated by a sequence, and add a separate UUID for external use.
create table orders (
  id bigserial primary key,
  public_id uuid not null,
  user_id uuid not null,
  amount integer not null,
  created_at timestamptz not null
);

Expose public_id to the outside world. Use id as the internal primary key that keeps indexes lean and fast.
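One practical note on this pattern: external callers will look rows up by public_id, so it needs its own unique index. A sketch, reusing the UUID from earlier as a stand-in value:

create unique index orders_public_id_idx on orders (public_id);

-- External callers resolve their UUID to the lean internal key
select id, amount, created_at
from orders
where public_id = '6e5ac3c0-8e3e-4ea0-9a63-5bfa0bf4f422';

That secondary index still absorbs random inserts, but it is one narrow index, and every foreign key that points at orders stays a cheap bigint.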
If you truly need client generated keys, use time ordered identifiers such as ULID or UUIDv7. Recent keys then cluster together and your index behaves more like a sequence than a random number generator.
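How that looks depends on your stack. As a sketch, assuming a uuidv7() function is available server side (Postgres 18 ships one built in; extensions provide equivalents on older versions):

create table orders_v7 (
  id uuid primary key default uuidv7(),
  user_id uuid not null,
  amount integer not null,
  created_at timestamptz not null
);

Because the high bits are a timestamp, consecutive inserts land on neighboring index pages, which is exactly the locality the B-tree wants.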
A simple mental picture
When you choose a primary key type, imagine the write path through your system.
Client
|
v
API node
|
v
Database
|
v
B-tree index on primary key

A sequential key walks down a mostly warm path in that B-tree. A random key keeps kicking the tree in random places and forcing it to grow wider than it needs to.
UUIDs stop looking harmless. They look like a constant tax you pay on every write.
If your database is struggling under write load, start by looking at your primary keys.
Dropping random UUIDs from the hot path is one of the simplest ways to win back throughput without touching business logic at all.