Well, at easyname.at we ran our shared hosting application "on premise", because the owner's second company, nessus.at, is a datacenter in the next room.
That gave us some advantages: we could define our own internal networks, we didn't have to worry about traffic limits, and so on; there were essentially no limitations.
It's just different, since you have more control and hence more responsibility. We had complete control over the physical machines and direct uplinks to important exchange hubs. No traffic costs to consider, our own load balancers, backup machines, migration managers, DDoS protection, mail clusters, server clusters, application frameworks, migration frameworks that moved customers from machine to machine without them even noticing, locking mechanisms for when a customer got hacked... in the end, we'd built our own cluster.
It is fun :) That's my experience: you learn a lot, build a lot, and it's cheap if you already have the infrastructure.
My longest job stint was with a local company that was 100% self-hosted.
Our server room was about 40ft x 20ft, with its own air conditioning units, fire suppression system, etc. The building had a diesel backup generator, and we did use it from time to time over the eight years I was there.
We ran it all, from Microsoft Exchange to Windows AD and print servers, plus our own PBX: a mix of Dell PowerEdge servers, HP switches, and some off-the-shelf PCs for smaller tasks. When I left, it was about six 42U racks full of servers (the PBX alone consumed four of them, about 700 internal phone lines).
The website was a traditional LAMP stack that accessed the IBM AS/400 mainframe over ODBC.
We did our own backups to LTO-3, then LTO-5, tapes and rotated them off-site in case of disaster.
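The "make sure the backup actually ran" discipline described here can be sketched as a post-backup verification pass. This is a minimal sketch, assuming a tar archive plus a checksum recorded at write time; the file names and layout are placeholders, not anything from the original post:

```python
import hashlib
import tarfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large archives don't eat RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(archive: Path, checksum_file: Path) -> bool:
    """A backup you haven't verified is a backup you don't have:
    check that the archive is readable end to end and that it still
    matches the checksum recorded when it was written."""
    try:
        with tarfile.open(archive) as tar:
            tar.getmembers()  # walks the whole archive; raises on truncation
    except tarfile.TarError:
        return False
    recorded = checksum_file.read_text().split()[0]
    return sha256_of(archive) == recorded
```

Run something like this after every backup job and alert on `False`; it catches the silent failures (truncated tape write, corrupted archive) that only surface on restore day.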
It was a lot of work, and when stuff hits the fan you have only yourself to fix it or blame, but I got a LOT of education out of it. Not even the cleaning crew was allowed in there; we swept (no vacuums; static charge == big no-no).
To answer the question of what it was like: really, no different than hosting a website on any other host, except now you need to check the servers yourself, make sure there aren't any hard drive failures, make sure backups run, etc. An internet connection going down is the same as AWS or Rackspace going down; it's just up to you to figure out whether it's you (the site) or the telecom company (often, it's them).
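The "is it us or the telecom" triage can be sketched as a pair of reachability probes: if your site is down but well-known outside hosts still answer, the problem is on your end; if nothing answers, suspect the uplink. A minimal sketch; the reference hosts and ports are my own placeholders:

```python
import socket

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def diagnose(site_host: str, site_port: int = 443) -> str:
    """Rough triage: compare your own site against independent
    reference hosts to decide who owns the outage."""
    references = [("1.1.1.1", 53), ("8.8.8.8", 53)]  # public DNS resolvers
    if reachable(site_host, site_port):
        return "site up"
    if any(reachable(h, p) for h, p in references):
        return "site down, uplink fine: it's you"
    return "nothing reachable: suspect the telecom"
```

Running a check like this from a box outside your own network (a cheap VPS, say) gives the cleanest answer, since a probe from inside the building shares the same uplink it's trying to judge.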
I still occasionally recommend self-hosting; I recommended it at my current day job to save some money, but we don't have room for a generator or a proper server room.
It's as small or as complex as you want it to be. It can be 1 tower server behind lock and key or it can be an entire room under scheduled badge access with fire suppression, security cameras, generators, etc...
We've a mix of internal and external hosting. As I'm a department of 1, I want the security of knowing there's someone out there looking after our stuff in case of emergency.
Over the last bank holiday weekend, PHP on our main internally hosted webserver decided it wanted all of the CPU, and no, it wasn't going to let go. Our alerting system let someone know it had happened; not entirely sure who, as I didn't get anything. I finally found out there was a problem at lunchtime on Sunday, when I picked up a Facebook Messenger message!
Long story short: if there's a team of you, or it's not mission-critical, host it internally. You'll learn loads and get the chance to work with software you otherwise might not. I've got a collection of Raspberry Pi boxes as development and testing servers. If there's a team of you and you've a good rota for emergencies and monitoring, host the critical stuff as well!
Brandon
Frontend Developer
I host some of my own servers at home and some in the cloud. The cloud is often cheaper than the monthly cost of buying dedicated hardware, so I tend to use it for things which aren't mission-critical.
Most of the risks are similar but there are a few extra ones like: