Been debugging this for weeks. We had maybe 8 services all exchanging JSON over REST, and every schema change turned into a nightmare of version headers and compatibility layers. Then I realized we were optimizing for the wrong thing.
Switched to message queues with envelope schemas. Each service publishes domain events with a versioned contract, not API responses. RabbitMQ with a shared protobuf definition for the envelope. Handlers consume what they care about, ignore what they don't.
type Event struct {
	Version   int
	Type      string
	Timestamp time.Time
	Payload   []byte // protobuf-encoded domain payload
}
The win: a schema change in one service doesn't break others. Add a field, increment version, old handlers just skip it. New handlers that know about it parse it. No coordinated deploys, no cascading failures.
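A minimal sketch of that dispatch logic. The handler names, event type strings, and the `maxKnownVersion` constant here are illustrative, not from the original setup:

```go
package main

import (
	"fmt"
	"time"
)

// Event is the shared envelope; Payload carries the protobuf-encoded body.
type Event struct {
	Version   int
	Type      string
	Timestamp time.Time
	Payload   []byte
}

// maxKnownVersion is the newest contract version this consumer understands.
const maxKnownVersion = 2

// handle consumes only the event types it cares about and tolerates
// newer versions by parsing just the fields it knows about.
func handle(e Event) string {
	if e.Type != "user.created" {
		return "skipped: irrelevant type"
	}
	if e.Version > maxKnownVersion {
		// Newer producer added fields we don't know about; protobuf's
		// unknown-field handling makes decoding only the known ones safe.
		return "handled with known fields only"
	}
	return "handled"
}

func main() {
	fmt.Println(handle(Event{Version: 1, Type: "user.created"}))  // handled
	fmt.Println(handle(Event{Version: 3, Type: "user.created"}))  // handled with known fields only
	fmt.Println(handle(Event{Version: 1, Type: "order.placed"}))  // skipped: irrelevant type
}
```

The key design point: the version check happens at the envelope level, before any payload decoding, so a consumer never crashes on a contract it hasn't seen.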
Cut our integration test suite from 45 minutes to 8. No more fake HTTP servers in tests, just fire events and assert side effects.
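The fire-events-and-assert-side-effects pattern can be sketched with a tiny in-memory bus standing in for RabbitMQ; the `Bus` type and simplified `Event` here are test scaffolding I'm assuming, not the production code:

```go
package main

import "fmt"

// Event is a simplified envelope for test purposes.
type Event struct {
	Type    string
	Payload []byte
}

// Bus is an in-memory stand-in for the broker: synchronous delivery
// to every subscriber, which is all most integration tests need.
type Bus struct{ handlers []func(Event) }

func (b *Bus) Subscribe(h func(Event)) { b.handlers = append(b.handlers, h) }

func (b *Bus) Publish(e Event) {
	for _, h := range b.handlers {
		h(e)
	}
}

func main() {
	bus := &Bus{}
	seen := map[string]int{}
	bus.Subscribe(func(e Event) { seen[e.Type]++ })

	// Fire events, then assert the side effect. No fake HTTP server,
	// no request/response stubbing.
	bus.Publish(Event{Type: "user.created"})
	bus.Publish(Event{Type: "user.created"})
	fmt.Println(seen["user.created"]) // 2
}
```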
Only real gotcha: message ordering. If you need strict ordering within a stream, partition your queue by aggregate ID. Otherwise you'll lose your mind debugging race conditions.
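Partitioning by aggregate ID boils down to a stable hash over the ID, so every event for one aggregate lands on the same queue and keeps its relative order. A sketch, assuming FNV-1a as the hash (any stable hash works):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor maps an aggregate ID onto one of n partitions. Because
// the hash is deterministic, all events for the same aggregate route
// to the same partition, preserving per-aggregate ordering.
func partitionFor(aggregateID string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(aggregateID))
	return h.Sum32() % n
}

func main() {
	// Same aggregate, same partition, every time.
	fmt.Println(partitionFor("order-42", 8) == partitionFor("order-42", 8)) // true
}
```

Note this only guarantees ordering *within* an aggregate; events for different aggregates can still interleave arbitrarily, which is usually what you want.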
Not rocket science but saved probably 20 hours of unnecessary refactoring last month alone.
Chloe Dumont
Security engineer. AppSec and pen testing.
This tracks with what I've seen. The coupling issue is real, but I'd push back on the framing a bit. The problem isn't HTTP itself, it's treating inter-service calls like public APIs when they shouldn't be.
Message queues help because they enforce async boundaries and let you decouple producers from consumers. But you've traded HTTP debugging complexity for RabbitMQ operational overhead. Dead letter queues, ordering guarantees, poison pills. Worth it at scale, brutal at 3 services.
The actual win here is your event contract thinking. You could get 80% of that benefit with HTTP + strict backwards compatibility rules (add fields, never remove, default unknowns). Protobuf envelopes over gRPC also work.
Pick the transport that matches your actual failure modes, not architecture dogma.