We were getting crushed by connection limits on an RDS Postgres instance. A tiny db.t3.micro, maybe 100 concurrent connections max, and we kept hitting the ceiling around 50-60 simultaneous Lambda invocations.
Added RDS Proxy thinking that would fix it. It helped a bit, but we were still managing connection pools in application code and it felt fragile. Every Lambda cold start meant thrashing the connection pool.
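The fragile pattern looked roughly like this — a sketch, not our actual code. You cache the pool at module scope so warm invocations reuse it; the names getPool and createPool are hypothetical, and createPool stands in for building a real pg.Pool:

```javascript
// Hypothetical sketch of module-scope pool reuse on Lambda.
// `createPool` is injected so the caching logic is visible
// without needing a live database.
let pool; // survives warm invocations of the same container, lost on cold start

function getPool(createPool) {
  if (!pool) pool = createPool(); // only the first (cold) call builds a pool
  return pool;
}
```

The catch: every cold start runs createPool again, so a burst of new containers still stampedes the database — that's the thrashing above.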
Spent a weekend migrating the main hot path to DynamoDB. Not the whole thing, just reads and writes that didn't need relational queries. Turned out that was like 70% of traffic.
// AWS SDK v2 DocumentClient: plain values in Key, no { S: ... } wrappers
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

const result = await dynamodb.get({
  TableName: 'users',
  Key: { userId: id }
}).promise();
No connection pooling to think about. No cold start pain. Pay per request, scale to whatever.
RDS is still there for the weird joins and analytics queries. But now it's quiet. Actually has headroom. That t3.micro could probably run for months without anyone thinking about it.
Real lesson: connection pooling on serverless is a band-aid. If your database is the bottleneck on Lambda, you probably picked the wrong database, not the wrong pool strategy.