Thanks for doing the AMA!
I'm a freelance developer who has used DO exclusively for all my servers for several years now. Being able to provision servers via the API has been a huge timesaver. Almost everything I do with the platform is focused on automation, since I just don't have time to do things manually. With that context, I'm curious about the following:
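For context, the API-driven provisioning I'm talking about looks roughly like this (a minimal sketch against DigitalOcean's documented `POST /v2/droplets` endpoint using only the Python standard library; the token, droplet name, region, and size values are placeholders, not anything specific to my setup):

```python
import json
import os
import urllib.request

API_BASE = "https://api.digitalocean.com/v2"

def build_droplet_request(name, region="nyc3", size="s-1vcpu-1gb",
                          image="ubuntu-22-04-x64"):
    # Minimal JSON payload for POST /v2/droplets
    return {"name": name, "region": region, "size": size, "image": image}

def create_droplet(token, payload):
    # Performs the real API call; requires a valid DO API token.
    req = urllib.request.Request(
        f"{API_BASE}/droplets",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["droplet"]

if __name__ == "__main__":
    payload = build_droplet_request("web-01")
    token = os.environ.get("DO_TOKEN")
    if token:
        print(create_droplet(token, payload)["id"])
    else:
        print(payload)  # dry run when no token is configured
```

Wrapping a call like that in a script is what lets me spin servers up and down on demand instead of clicking through a control panel.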
Block storage has been really useful for keeping files separate from the main OS, but I would like to share a volume between two (or more) servers for load balancing without resorting to something like GlusterFS. Any plans for something like this? (I know this is a non-trivial problem.)
I just started with object storage and was planning on using it much like a CDN to offload assets and save on block storage (I provisioned one private bucket for backups and one public bucket for content). Was it designed with that in mind, or more as a straight-up S3 alternative? Mostly I'm worried about latency, not that DO has ever really underperformed in that regard.
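Since Spaces exposes an S3-compatible API, my setup is basically a standard S3 client pointed at the regional Spaces endpoint. A sketch of what I mean (bucket names and credentials are placeholders; the boto3 import is kept optional so the endpoint helper still works without it):

```python
try:
    import boto3  # standard S3 client; only needed for real uploads
except ImportError:
    boto3 = None

def spaces_endpoint(region):
    # DO Spaces exposes one S3-compatible endpoint per region
    return f"https://{region}.digitaloceanspaces.com"

def make_client(region, key_id, secret):
    # A regular boto3 S3 client, just aimed at Spaces instead of AWS
    if boto3 is None:
        raise RuntimeError("boto3 is required to talk to Spaces")
    return boto3.client(
        "s3",
        region_name=region,
        endpoint_url=spaces_endpoint(region),
        aws_access_key_id=key_id,
        aws_secret_access_key=secret,
    )

# Example usage (requires real Spaces credentials):
# s3 = make_client("nyc3", "ACCESS_KEY", "SECRET_KEY")
# s3.upload_file("asset.jpg", "my-public-space", "assets/asset.jpg",
#                ExtraArgs={"ACL": "public-read"})
```

The public bucket serves assets directly by URL, which is why the latency question matters to me.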
Any future plans for cross-datacenter networking and/or easy live migration of droplets between datacenters? I would love to be able to move them around as needed, with all the associated resources (floating IPs, firewalls, block storage) coming along or continuing to work.
Thank you for DO, it has seriously changed my life as a developer. Most of what I do now wouldn't be possible without it. Thank you!
Clearly you automate the heck out of everything you work on, and I imagine you work a lot with bare metal. What challenges have you faced in automating bare-metal resources? What tools have you used to help with those challenges?
With different hardware vendors having widely varying interfaces for automating their products, how do you manage those interfaces so you get the same end result across the board?
Do you implement automated testing or CI of the physical provisioning process? If so, what does that look like?