There are an infinite number of variables in this kind of question - high level, but...
Infrastructure - Where is the app housed? In-house / on-premise servers, cloud servers, or co-location? If you have a bad foundation, then, like a house, it'll all come crashing down eventually.
The Stack - What languages are you using? LAMP? Who is patching the OS? Who's checking for security breach attempts? Who's keeping abreast of the latest PCI guidelines if you accept credit cards? As with the infrastructure, if the OS goes too long without being patched, or too long without proper security audits, things can come crashing down. Who is actively hardening the server and watching logs for attempted security breaches?
Server configuration - Is the web server (Apache, Nginx) configured properly to disallow directory indexing? Are cross-origin policies set up properly? Combined with the above two, this can cause pain if not done right. Is PHP configured properly to never show the user error messages?
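As a rough illustration of the last two points, here's what that might look like in an nginx server block (domain and paths are made up - adapt to your setup):

```nginx
server {
    server_name example.com;
    root /var/www/example;

    # Directory indexing off - nobody should be able to browse your file tree.
    autoindex off;

    # Only allow cross-origin requests from origins you actually trust.
    add_header Access-Control-Allow-Origin "https://example.com";
}
```

And in php.ini, keep errors out of the user's browser but in your logs: `display_errors = Off`, `log_errors = On`.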
Logs - Are only as good as what they report and what you're concerned about. Do you keep an eye on the Apache access log to see if someone is trying to SQL-inject a GET parameter? Trying to access an admin page that doesn't exist or isn't protected properly? What about the error logs - are they reporting what you need them to report, to fix problems you didn't know about? If stuff hits the fan, will the error log tell you what you need to know to fix it in a timely manner? Make sure logs are being rotated. Be aware of any government policies if you need to keep logs for x years. If an outage occurs, if a lawsuit should come, if a security breach should happen - make sure there's a way to offload those logs somewhere to be kept safe.
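To make the "watch the access log for injection attempts" idea concrete, here's a minimal sketch in Python (the doc is PHP-centric, but the idea carries over to any language - the regex patterns are illustrative, not a complete ruleset):

```python
import re

# Crude patterns that often show up in SQL injection probes (illustrative only).
SQLI_PATTERNS = re.compile(
    r"(union\s+select|'\s*or\s+1=1|sleep\(|information_schema)",
    re.IGNORECASE,
)

def suspicious_requests(access_log_lines):
    """Yield access-log lines whose request string looks like an injection attempt."""
    for line in access_log_lines:
        if SQLI_PATTERNS.search(line):
            yield line
```

Pipe your rotated access logs through something like this on a schedule, and alert when hits appear; a real deployment would use a proper tool (fail2ban, a WAF), but even a crude filter beats never looking.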
Cache - is a good idea; cache things that don't change often - images, libraries and frameworks. Caching CSS and HTML can be a pain - you'll need to put something in place to cache-bust these.
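One common cache-busting approach is to put a content hash in the filename, so the browser can cache aggressively and still pick up changes the moment the file changes. A minimal sketch (the naming scheme is just one convention):

```python
import hashlib

def busted_name(path: str, content: bytes) -> str:
    """Return a filename with a short content hash embedded,
    e.g. style.css -> style.3f2a1b9c.css."""
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, dot, ext = path.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{path}.{digest}"
```

Your build step renames the file and rewrites references to it; since the name changes only when the content does, you can serve it with a far-future `Expires` header.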
3rd party outage / mitigation - What happens if Cloudinary goes down and you use it for all images? What happens if a 3rd party service goes out of business (like Parse did)? How long will it take you to move to a new service? Implement your own if it doesn't exist? This is the most scary IMO - so many websites are relying on 3rd party services to do an array of tasks. Parse gave 1 year of notice; others, might not.
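One mitigation for the 3rd-party risk above is to keep vendor SDK calls behind your own interface, so swapping providers means writing one new adapter rather than touching the whole app. A sketch of the pattern (class and URL are hypothetical):

```python
from abc import ABC, abstractmethod

class ImageStore(ABC):
    """The app talks to this interface, never to a vendor SDK directly."""

    @abstractmethod
    def upload(self, name: str, data: bytes) -> None: ...

    @abstractmethod
    def url(self, name: str) -> str: ...

class LocalImageStore(ImageStore):
    """Fallback adapter that serves images from your own server.
    A CloudinaryImageStore would implement the same two methods."""

    def __init__(self, base_url: str = "https://example.com/images"):
        self.base_url = base_url
        self.files: dict[str, bytes] = {}

    def upload(self, name: str, data: bytes) -> None:
        self.files[name] = data

    def url(self, name: str) -> str:
        return f"{self.base_url}/{name}"
```

If the vendor folds, you implement one new class against the interface and redeploy, instead of hunting their SDK calls across the codebase.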
Who's watching the users? If your site accepts user-generated content, who's keeping an eye out for inappropriate images or foul language? The random "crap" or "shit" might pass - but someone uploading nude photos or worse... these issues can land you in hot water.
If you wrote an API - even if it's your own internal use, are you restricting who can access it? From where? What times of the day they can? Any rate per minute / hour / day?
Content Theft - Do you watch for users scraping your content? How would you know? (Look at the logs.) What would you do if they did? What about people hotlinking directly to your images rather than your page?
Database security - is a whole other animal. You mention PHP, so maybe you're using MySQL - check the MySQL slow query log and access log. Make sure MySQL users are locked down properly and can only access the DB from your site's server.
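Locking down a MySQL app user looks roughly like this (host, database, and user names are hypothetical - substitute your own):

```sql
-- App user can connect only from the web server's host, not from anywhere ('%').
CREATE USER 'appuser'@'10.0.0.5' IDENTIFIED BY 'use-a-long-random-password';

-- Grant only what the app actually needs: no DROP, no GRANT OPTION, no SUPER.
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'10.0.0.5';

FLUSH PRIVILEGES;
```

The point is least privilege: if the app is ever compromised, the attacker inherits only this narrow account, not root.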
Backup! - backup, backup, backup! Backup HTML, backup code, backup system config files. I personally don't backup the entire OS. I backup each component's specific config file plus the site's files. If a server failed, I'd build a new one and drop the config files in. How long are backups kept? Are you testing the backups to make sure they actually worked? A backup is worthless if it doesn't capture the right data or is corrupt.
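The config-files-plus-site-files approach can be sketched as a small script - Python here for illustration, though a shell one-liner around `tar` does the same job. Note it re-opens the archive afterwards, because an unverified backup is no backup:

```python
import tarfile
import time
from pathlib import Path

def backup(paths, dest_dir):
    """Bundle the given files into a timestamped tar.gz and list its contents back."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive = dest_dir / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for p in paths:
            tar.add(p)
    # Verification pass: actually read the archive back and enumerate it.
    with tarfile.open(archive, "r:gz") as tar:
        names = tar.getnames()
    return archive, names
```

Cron this nightly against your config files and site directory, ship the archive off-box, and periodically restore one to a scratch machine to prove it works.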
Server Imaging - Even though I said I don't backup the OS, I will take an entire server image before and after I change it significantly. The expense is minimal and it's a way out if anything goes wrong.
Code Quality / Code Audit - All of the above is worthless if your app is poorly written. Bad code can cause more problems than root being enabled on MySQL and the DB being open to everyone. Your app has access to the DB, right? Bad code could let a user into the DB through the app. Try to write the best code you can; try not to implement hacks or code you don't understand. Copying and pasting straight from StackOverflow is never a good idea if you don't know what it does.
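The classic example of "bad code lets a user into the DB" is string-built SQL. Shown here with Python's sqlite3 for a self-contained example - PHP's PDO prepared statements work the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(conn, name):
    # BAD:  f"SELECT name FROM users WHERE name = '{name}'"
    #       -> user input becomes SQL; "' OR 1=1 --" returns every row.
    # GOOD: let the driver bind the value; input is treated as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the parameterized version, an injection payload is just a weird name that matches nothing, instead of a query rewrite.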
Who is auditing the admins / programmers? - If you've gotten big enough that you can hire help, keep an eye on admins and programmers. We all want to trust our employees, but people become disgruntled, get tired, have personal issues. We've all been there. But make sure no one is doing something stupid, like using company servers for a personal website.
Disaster recovery planning - What happens if AWS goes down for hours? Days? How much business will you lose? How much reputation? What's your plan if stuff REALLY hits the fan for more than 5 minutes? Can you restore a backup to another cloud provider, change DNS, and be back online in an hour?
Documentation - Document everything. EVERYTHING. What you changed on a server, and what day / time. What you did, and the command you ran. If you have employees, everyone puts their name beside their entry. Document outages; document bad code deploys. Document that a storm rolled through N. Virginia and the site went down for 2 minutes because of a dropped internet connection.
Learn from mistakes - None of this is easy; there is no one way to do anything, and no single best way either. What works well for me might not work for you. Try, fail - try again.
... roughly :) I'm sure I missed something.