Recently shipped a feature that required denormalizing a whole domain into one table. GSIs everywhere, composite keys, the whole mess. We were coming from postgres so the mindset shift was brutal.
here's what actually bit us:
key design matters way more than i expected. we started with PK: user#123 SK: order#456 and querying "all orders for a user in july" became impossible without scanning. fixed it to PK: user#123 SK: order#2024-07#456 so sort-key range queries (begins_with / between) actually work.
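a minimal local sketch of why the date-prefixed sort key works. this simulates DynamoDB's begins_with key condition on an in-memory list (the items and helper names are ours, the key layout mirrors the post):

```python
# Items mirror the post's key layout; the data itself is made up.
items = [
    {"PK": "user#123", "SK": "order#2024-06#455"},
    {"PK": "user#123", "SK": "order#2024-07#456"},
    {"PK": "user#123", "SK": "order#2024-07#457"},
    {"PK": "user#123", "SK": "order#2024-08#458"},
]

def query_begins_with(items, pk, sk_prefix):
    """Rough local stand-in for a Query with
    Key('PK').eq(pk) & Key('SK').begins_with(sk_prefix)."""
    return [i for i in items if i["PK"] == pk and i["SK"].startswith(sk_prefix)]

# "all orders for user 123 in july" becomes a prefix match on the SK
july = query_begins_with(items, "user#123", "order#2024-07#")
```

with the old SK (order#456) the month isn't in the key at all, so there's nothing for the key condition to match on.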
item size limits are real. postgres let us nest json freely. dynamodb caps items at 400KB. we had to split some entities across multiple items, which defeats the "single table" efficiency thing.
// Before - one item, too big
{
  PK: "user#123",
  SK: "profile",
  orders: [...500 items...],
  metadata: {...}
}

// After - split it up
{
  PK: "user#123",
  SK: "profile"
}
{
  PK: "user#123",
  SK: "order#2024-07#1"
}
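once the entity is split like this, one Query on the partition key returns the profile plus all the order items, and you stitch them back together client-side. a sketch of that reassembly step (field names beyond PK/SK are made up for illustration):

```python
# Rebuild a single user entity from the split items a Query on
# PK = "user#123" would return. Item shapes follow the post's
# example; "email" and "total" are invented fields.
def reassemble_user(items):
    entity = {"orders": []}
    for item in items:
        if item["SK"] == "profile":
            # strip the key attributes, keep the profile payload
            entity["profile"] = {k: v for k, v in item.items() if k not in ("PK", "SK")}
        elif item["SK"].startswith("order#"):
            entity["orders"].append(item)
    return entity

query_result = [
    {"PK": "user#123", "SK": "profile", "email": "a@example.com"},
    {"PK": "user#123", "SK": "order#2024-07#1", "total": 42},
    {"PK": "user#123", "SK": "order#2024-07#2", "total": 7},
]
user = reassemble_user(query_result)
```

the upside is it's still a single round trip; the downside is every reader now needs this stitching logic.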
the hot partition is your actual bottleneck. single-table design means related items share a partition key, so all writes for one entity land on one partition. we went from 40 WCU to 400 during peak traffic because everything serializes.
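one standard mitigation (not something the post used, just the textbook answer): write sharding. append a deterministic shard suffix to the partition key so writes spread over N partitions, and fan out reads across all N shards. a sketch, with names and shard count ours:

```python
import hashlib

NUM_SHARDS = 8  # illustrative; tune to your write volume

def sharded_pk(user_id: str, order_id: str) -> str:
    """Deterministically pick one of NUM_SHARDS partitions for a write,
    so the same order always lands on the same shard."""
    shard = int(hashlib.md5(order_id.encode()).hexdigest(), 16) % NUM_SHARDS
    return f"user#{user_id}#{shard}"

def all_shard_pks(user_id: str):
    """Partition keys a reader must query and merge to see every order."""
    return [f"user#{user_id}#{s}" for s in range(NUM_SHARDS)]
```

the trade-off is the usual one: writes stop serializing on one partition, but "all orders for a user" becomes N queries instead of one.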
honestly, if you're not at serious scale, stick with postgres and its boring predictability. single-table dynamodb looks elegant in a diagram but operational reality is painful.