This tracks with what I've seen. The coupling issue is real, but I'd push back on the framing a bit. The problem isn't HTTP itself; it's treating inter-service calls like public APIs when they shouldn't be.
Message queues help because they enforce async boundaries and let you decouple producers from consumers. But you've traded HTTP debugging complexity for RabbitMQ operational overhead. Dead letter queues, ordering guarantees, poison pills. Worth it at scale, brutal at 3 services.
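To make the poison-pill point concrete, here's a minimal sketch of the retry-then-dead-letter logic you end up owning. No broker involved; the names (`MAX_ATTEMPTS`, `handle`, `dead_letters`) are illustrative, not any RabbitMQ API:

```python
# Sketch of the redelivery/dead-letter pattern a queue forces you to handle.
# A real broker does the redelivery; this just shows the decision logic.
MAX_ATTEMPTS = 3

def consume(message, handle, dead_letters):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            handle(message)
            return  # processed successfully
        except Exception:
            continue  # simulate broker redelivery
    # Poison pill: after MAX_ATTEMPTS failures, park it in the DLQ
    # so it stops blocking the rest of the queue.
    dead_letters.append(message)

dead = []
consume({"id": "m1", "body": "bad"}, lambda m: 1 / 0, dead)
print(len(dead))  # → 1
```

Every consumer needs some version of this, plus monitoring on the DLQ itself. That's the operational overhead that's hard to justify at 3 services.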
The actual win here is your event contract thinking. You could get 80% of that benefit with HTTP plus strict backwards-compatibility rules (add fields, never remove them, default unknowns). Protobuf envelopes over gRPC also work.
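Those compatibility rules are just the tolerant-reader pattern. A sketch of what that looks like on the consumer side (the `OrderCreated` schema and field names are made up for illustration):

```python
from dataclasses import dataclass, fields

# Hypothetical v1 event schema. A consumer built against this keeps
# working as producers evolve, as long as the rules hold.
@dataclass
class OrderCreated:
    order_id: str
    amount_cents: int
    currency: str = "USD"  # fields added later ship with a default

def parse_event(payload: dict) -> OrderCreated:
    known = {f.name for f in fields(OrderCreated)}
    # Tolerant reader: drop fields this consumer doesn't know about
    # instead of rejecting the message.
    return OrderCreated(**{k: v for k, v in payload.items() if k in known})

# A newer producer added "coupon_code"; this consumer parses it fine.
event = parse_event({"order_id": "o-1", "amount_cents": 1999,
                     "coupon_code": "SAVE10"})
print(event.order_id, event.currency)  # → o-1 USD
```

Protobuf gives you the same guarantees mechanically: unknown fields are preserved or skipped, and missing fields fall back to defaults.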
Pick the transport that matches your actual failure modes, not architecture dogma.