Microservices introduce network complexity. Everything must be asynchronous, everything must be made more fault tolerant, and you need to deploy a whole host of service watchers and routers. For example, if service A depends on service B, it generally has to go through a router of some kind to find service B on the network. That router then has to know when service B goes down, and where a new instance of it has spawned, so that it can tell service A where to find it.
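As a rough sketch of that router/registry problem, here's the lookup-and-failover dance in miniature. All the names (`ServiceRegistry`, the addresses) are illustrative, not any real discovery tool — real systems (Consul, Eureka, etc.) add health checks, TTLs, and load-aware selection on top of this.

```python
class ServiceRegistry:
    """Stand-in for the router: tracks where live instances of each service are."""

    def __init__(self):
        self._instances = {}  # service name -> set of "host:port" addresses

    def register(self, service, address):
        self._instances.setdefault(service, set()).add(address)

    def deregister(self, service, address):
        # Called when a health check notices the instance went down.
        self._instances.get(service, set()).discard(address)

    def resolve(self, service):
        # Service A asks the registry where service B currently lives.
        live = self._instances.get(service)
        if not live:
            raise LookupError(f"no live instances of {service!r}")
        return next(iter(live))


registry = ServiceRegistry()
registry.register("service-b", "10.0.0.5:8080")
print(registry.resolve("service-b"))              # 10.0.0.5:8080

registry.deregister("service-b", "10.0.0.5:8080")  # B went down...
registry.register("service-b", "10.0.0.9:8080")    # ...and respawned elsewhere
print(registry.resolve("service-b"))              # 10.0.0.9:8080
```

Every service-to-service call pays for this machinery; in a monolith the equivalent "lookup" is a function call.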
Monolithic apps don't have these complexities, and don't suffer from network latency nearly as much. I would honestly adopt microservices as a last resort. The scalability argument doesn't really hold up, as any codebase, monolithic or not, can be spun up in new instances and put behind a load balancer. The challenge comes with managing state (user sessions, databases, etc.) between those instances, but that's a challenge regardless of which approach you take.
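The scaling claim can be sketched in a few lines: N identical instances behind a load balancer, with session state pushed out to a shared store so it doesn't matter which instance serves which request. The names here are made up, and the dict stands in for an external store like Redis.

```python
import itertools

shared_sessions = {}  # stand-in for an external session store (e.g. Redis)


class AppInstance:
    """One copy of the app - could be a monolith or a service, same idea."""

    def __init__(self, name):
        self.name = name

    def handle(self, session_id):
        # State lives in the shared store, not instance-local memory,
        # so any instance can serve any request for any session.
        count = shared_sessions.get(session_id, 0) + 1
        shared_sessions[session_id] = count
        return f"{self.name} served request #{count} for {session_id}"


instances = [AppInstance("app-1"), AppInstance("app-2")]
balancer = itertools.cycle(instances)  # trivial round-robin load balancer

for _ in range(3):
    print(next(balancer).handle("sess-42"))
```

Nothing about this requires the app behind the balancer to be decomposed into services; the hard part is the shared store, and that's hard either way.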
A monolith doesn't have to mean a tightly coupled ball of spaghetti. It's perfectly reasonable for different modules to be built and maintained by different teams, even in different repositories, and then compiled into a single application for deployment. Modules within that monolith still act like services to the other modules, so service re-use is just as much a part of monolithic architecture as it is of microservice architecture.
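A minimal sketch of that "modular monolith" shape, with invented module names: each module could live in its own repo and be owned by its own team, but they compose into one process, so a cross-module call is an in-process call with no router, retries, or serialization.

```python
class BillingModule:
    """Could be maintained by the billing team in its own repository."""

    def invoice_total(self, order):
        return sum(item["price"] * item["qty"] for item in order)


class OrdersModule:
    """Depends on billing only through its interface - still a 'service'
    to this module, just reached by a function call instead of the network."""

    def __init__(self, billing):
        self._billing = billing

    def checkout(self, order):
        return {"status": "confirmed",
                "total": self._billing.invoice_total(order)}


# One deployable application composes the modules at startup.
app = OrdersModule(BillingModule())
result = app.checkout([{"price": 10.0, "qty": 3}])
print(result)  # {'status': 'confirmed', 'total': 30.0}
```

The module boundary is the same as a service boundary; only the transport differs.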
Microservices make the most sense when two TOTALLY DIFFERENT applications need to share a common service (e.g. authentication), AND that common service is responsible for logging and aggregating activity from all of the client applications that use it. Without that second condition, all you're using microservices for is code reuse, which is a horribly complex and roundabout way of achieving that.