I keep seeing Go codebases with these elaborate worker pool implementations. Channels, goroutine limits, context cancellation, the whole deal. Meanwhile modern Go with generics and a better stdlib makes a lot of this unnecessary.
Most services don't need it. You throw 10k goroutines at something and it works fine. The memory overhead isn't what it was in 2012. If you actually have contention problems, you hit them at the database layer first anyway.
I had a metrics ingestion service where someone had built this intricate worker pool system with buffered channels and graceful shutdown. Replaced it with plain concurrent reads using a semaphore from golang.org/x/sync/semaphore. Simpler, fewer bugs, same throughput.
sem := semaphore.NewWeighted(100)
for item := range items {
    if err := sem.Acquire(ctx, 1); err != nil {
        break // ctx cancelled, stop submitting work
    }
    go func(x Item) {
        defer sem.Release(1)
        process(x)
    }(item)
}
// acquiring the full weight blocks until all in-flight goroutines release
_ = sem.Acquire(ctx, 100)
Not saying worker pools are always wrong. But I think we've cargo-culted them into services that just need basic rate limiting. The overhead of managing your own pool is rarely worth it unless you're doing something unusual.
What's actually driving the complexity in your case?
Had the opposite experience. Built a metrics pipeline doing 100k+ requests/sec and yeah, unbounded goroutines killed us. Not memory - goroutine scheduling itself becomes the bottleneck. Scheduler starts thrashing around 50k concurrent goroutines.
The trick is you don't notice until production load. Local testing with 1k req/sec looks fine. Then you hit real traffic and p99 latencies crater.
Worker pools still matter, but you're right the boilerplate is worse than it needs to be. I just use a buffered channel and range over it. Context cancellation is the only thing that's genuinely annoying to wire up correctly.
Database being the real limit is true for CRUD apps. Not true if you're CPU-bound or doing heavy I/O coordination.
Ravi Menon
Cloud architect. AWS and serverless.
Hard agree on the database layer being the real bottleneck. I've seen teams spend weeks optimizing goroutine pools only to hit connection limits on their RDS instance.
That said, worker pools still matter when you're doing bounded work against external APIs with rate limits or when you need predictable resource consumption in production. Not for throughput, but for operational safety. 10k goroutines doing DNS lookups is different from 10k goroutines sitting idle.
For most CRUD services though, yeah, the stdlib http server with context timeouts does the job. Keep it simple until you actually measure the problem.