There are usually two things you can do: cache, and optimize the code itself.
People usually cache before they optimize. "Runtime caching" is often helpful in PHP: you can use the `clone` keyword, which gives you a copy of an object (a shallow copy by default) while skipping the constructor and any expensive setup it does.
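As a small sketch (the `Report` class and its "expensive" setup are made up for illustration), cloning a fully built object avoids re-running the constructor:

```php
<?php
class Report
{
    public array $rows;

    public function __construct()
    {
        // Imagine this is expensive: DB queries, file parsing, etc.
        echo "constructor ran\n";
        $this->rows = ['a', 'b', 'c'];
    }
}

$template = new Report();   // "constructor ran" is printed exactly once
$copy = clone $template;    // no constructor call, just a shallow copy

$copy->rows[] = 'd';        // PHP arrays are copied by value on clone,
                            // so the template is untouched

var_dump(count($template->rows)); // 3
var_dump(count($copy->rows));     // 4
```

Note that `clone` is shallow: object-typed properties are still shared between the original and the copy unless you implement `__clone()` to duplicate them.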
There are a lot of ways to optimize code, but the main focus should be classic time complexity, the straightforward "how can I reduce the number of loops". That doesn't mean you should rewrite everything as recursion ... ;D ...
For your application's internal calls: if you are on Linux, try switching the connections between your web server and your database to Unix domain sockets instead of TCP.
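For example with PDO and MySQL (socket path, database name, and credentials below are placeholders; the actual socket path depends on your distro and MySQL config):

```php
<?php
// TCP: every query goes through the loopback network stack
$tcp = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');

// Unix domain socket: skips TCP entirely, usually lower latency on the same host
$sock = new PDO(
    'mysql:unix_socket=/var/run/mysqld/mysqld.sock;dbname=app',
    'user',
    'secret'
);
```

The same idea applies between nginx and PHP-FPM: point `fastcgi_pass` at a `unix:` socket rather than `127.0.0.1:9000`.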
Caching is hard to get right, but a good rule of thumb is: anything that doesn't change can be cached. Try a proxy cache first; it is usually the lowest-hanging fruit for reducing payload.
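To make a response proxy-cacheable, the application mostly just has to send the right headers. A minimal sketch (the five-minute lifetime is an arbitrary choice):

```php
<?php
// Tell any proxy in front (Varnish, nginx, a CDN) that this response
// is shareable and may be served from cache for up to five minutes.
header('Cache-Control: public, max-age=300');

echo json_encode(['status' => 'ok']);
```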
If it gets more complex, employ memcached for datasets. (Redis is nice, but memcached is faster, to my knowledge, and you don't always need Redis's extra features.)
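The usual pattern with the PECL `memcached` extension looks roughly like this (it assumes a memcached server on localhost; `computeExpensiveDataset()` is a hypothetical stand-in for your own slow code):

```php
<?php
// Requires the PECL "memcached" extension and a running memcached server.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key  = 'expensive:dataset';
$data = $mc->get($key);

if ($mc->getResultCode() === Memcached::RES_NOTFOUND) {
    $data = computeExpensiveDataset(); // hypothetical expensive call
    $mc->set($key, $data, 300);        // cache the result for five minutes
}
```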
There are even more solutions, and it depends on your architecture ... but this could be a good starting point.