In my experience, the answer is really "it depends".
In general, I prefer to standardize on HTTP requests because they are mostly cheap, well understood, and available on every single platform. In my experience, unless you have huge performance requirements, this might be enough. There's a lot of prior art on this, and a lot of open source tools that help with the most common issues you'll encounter (timeouts, cascading failures, thundering herds, etc...). Products like Hystrix, Zuul, Eureka, Linkerd, Envoy, Istio, Zipkin, etc... can help immensely on this.
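To make the "common issues" concrete: the kind of timeout-and-retry handling those tools give you can be sketched in a few lines. This is a minimal illustration, not any real library's API; the function name, backoff schedule, and defaults are all my own assumptions:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying with exponential backoff on failure.

    Illustrative sketch of the retry/backoff behavior that tools like
    Hystrix or Envoy provide out of the box (they add a lot more:
    circuit breaking, jitter, budgets, etc.).
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            # Back off exponentially so a struggling downstream
            # service isn't hammered by a thundering herd of retries.
            time.sleep(base_delay * (2 ** attempt))
```

In production you'd want jitter on the delay and a circuit breaker on top, which is exactly why I'd reach for one of the tools above instead of hand-rolling this.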
And in my personal experience, most APIs lend themselves well to the model. Also, keep in mind that having HTTP endpoints doesn't mean everything is synchronous. You can do async work on HTTP (even event-based via long polling).
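As a sketch of what long polling looks like on the server side (names and shapes here are made up for illustration, not a real framework's API): the handler blocks until there's something to say or a timeout passes, and the client simply re-issues the request.

```python
import threading

def long_poll(event, timeout=30.0):
    """Server-side long-poll handler sketch.

    Blocks until `event` fires (new data is available) or `timeout`
    seconds pass. The client re-issues the request after each response,
    which gives you push-like, event-based behavior over plain HTTP.
    """
    if event.wait(timeout):
        return {"status": 200, "data": "new event"}
    # Nothing happened within the window; tell the client to poll again.
    return {"status": 204, "data": None}
```

The key point is that the transport is still a boring HTTP request/response, so all the tooling mentioned above still applies.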
That being said, if your environment or your requirements don't lend themselves well to HTTP (e.g. having multiple producers/consumers), queues can be very useful. But make sure you understand them well. Developing an architecture based on messages, disconnected actors and asynchronous communication is hard, and if done poorly, you'll end up in worse shape than if you were just doing HTTP requests everywhere.
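The multiple-producers/multiple-consumers shape that queues handle naturally can be sketched with nothing but the standard library. This is an in-process illustration of the pattern, not a real broker; the function and parameter names are my own:

```python
import queue
import threading

def process_with_workers(items, handle, workers=3):
    """Fan work out from producers to a pool of consumers via a queue.

    Illustrates the decoupling a message queue gives you: producers
    just enqueue; any free worker picks the item up.
    """
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:  # poison pill: shut this worker down
                break
            out = handle(item)
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for item in items:   # producer side: just enqueue and move on
        q.put(item)
    for _ in threads:    # one poison pill per worker
        q.put(None)
    for t in threads:
        t.join()
    return results
```

Notice that results come back in whatever order the workers finish, there's no built-in reply channel, and failure handling is entirely on you. Those are exactly the "understand them well" parts: a real broker adds durability, acknowledgements and redelivery, and your design has to account for all of them.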
Keep it simple, start with the patterns that you're most comfortable with, and slowly grow your architecture as your requirements grow. Overengineering has a bigger impact on performance and productivity than a badly tuned network call.