I'm mainly using Docker to avoid bloating my computer with a lot of tools that I don't want to install directly.
Other than that, I also tend to Dockerize every one of my dev projects, as it makes the setup process a lot easier.
Using it every day; all our CI/CD flow is built on Docker containers, with Wercker, Tutum (and now Docker Hub — Wercker's support for Amazon ECR came a little bit late for us), and Amazon ECS.
Performance tuning in prod has sometimes been a little bit challenging, but the constraints are really worth getting into once you consider how many benefits we get from it. To give a small example: we make sure our Docker images' tags correspond to git SHA-1s, so given a service's task definition it's a no-brainer to identify the exact code that is running on any given stack. Isolated dependencies and easy stop/(re)start are other obvious wins.
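A minimal sketch of that tagging convention, assuming a hypothetical image name `myapp` (the build and push steps are shown as comments, since they need a Docker daemon and a registry):

```shell
# Tag images with the current git commit SHA so a task definition maps
# straight back to the exact code revision. "myapp" is a placeholder.
GIT_SHA=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")
IMAGE="myapp:${GIT_SHA}"
echo "$IMAGE"
# In CI, the next steps would be:
# docker build -t "$IMAGE" .
# docker push registry.example.com/"$IMAGE"
```

With the task definition pinned to that tag, pulling the image reproduces exactly what is running on the stack.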
Every day, from CI and tests (on Wercker) to dev/QA/production. We're running on AWS ECS, and so far we're happy with it. Sure, adding another tool adds some complexity and some configuration to master, but the gains in speed and consistency across environments really make the additional work/complexity a fair price in our use cases.
I like the concept of Docker, but running containers in production has been a bit overwhelming. While containers are easy to build, maintaining them is trickier.
I tend to use Docker containers for makeshift development environments (especially when I'm forced to work on Windows).
For example, here is my Dockerized version of Laravel Homestead: https://github.com/laraedit/laraedit-docker
It doesn't follow the traditional one-service-per-container structure; instead, all dependencies are rolled into a single container managed by supervisor.
This solution works far better than running a full VM on lower-powered devices (like my Surface 3 tablet).
Until I feel more comfortable with CoreOS, I'll probably stick with what I know. But using Docker has drawn me more towards building my applications as a collection of micro services instead of going with the old monolithic approach.
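As a sketch, that all-in-one pattern looks roughly like the Dockerfile below — the base image and package list are illustrative assumptions, not the actual contents of laraedit-docker:

```dockerfile
# Illustrative single-container stack managed by supervisord.
# Packages here are assumptions, not laraedit-docker's real manifest.
FROM ubuntu:22.04

RUN apt-get update && apt-get install -y \
        nginx php-fpm mysql-server redis-server supervisor \
    && rm -rf /var/lib/apt/lists/*

# One config section per service; supervisord restarts any that die.
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 80
# Run supervisord in the foreground as PID 1 so the container stays up.
CMD ["/usr/bin/supervisord", "-n"]
```

The trade-off is deliberate: one image behaves like a lightweight VM, at the cost of the usual one-process-per-container isolation.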
Personally, I'm currently transitioning from virtual machines alone to a mix of virtual machines and Docker containers, not only in my own development projects but also in my professional work. I currently have two major projects that are heavily container based: my personal game server host, and the internal system through which our consultants operate at the startup I work with. They are wildly different use cases, so I'll explain a little about both.
I'm really fortunate in that I am the lead (and currently only) developer for the startup, so I get to make all the technical decisions. The latest big decision I made was to ditch our legacy system that we inherited from our sister company (it's inefficient and our business model is completely different from our sister company's, and unfortunately I wasn't there when that decision was made!) and replace it with something flexible enough to allow us to explore different ideas without compromising the overall integrity of our system.
One of the biggest problems we've had with our legacy system is that it's pretty monolithic, with little to no separation between subsystems aside from what the original developer imposed on themselves, so adding new features can be a minefield, and removing ideas that didn't work is even worse. This has been one of my primary areas of focus since I started designing our new system, and Docker has relieved the majority of that burden by allowing me to deploy (and destroy) new subsystems without even thinking about it. In fact, all of these issues are no longer a concern for me:
I used to agonise over these kinds of things, and now they are a no-brainer. Designing, developing and deploying new subsystems and connecting them together is easy, and the separation between these subsystems is implicit and insurmountable, which motivates me to keep the APIs between my subsystems clean and proper. In some respects it can be more work, but overall I have a much healthier system and I'll likely keep my hair longer.
My other heavily Docker-based venture is the hosting of my own private game servers, something I've been doing in one form or another for the last 10 years or so. Game servers are horrible when it comes to dependencies. Some are 32-bit, some are 64-bit, some use old libc versions, some are bleeding edge — it's a nightmare, and sometimes it's just not possible to run two particular game servers on the same operating system. For the last few years I've solved this by using different virtual machines and carefully planning which game servers should live on which hosts — which for the most part works out, but sometimes a server will be updated and its dependencies will change, and I'll have to move it to another host (the players on my servers love me for that, let me tell you...).
Again, Docker to the rescue. I no longer have to worry about dependencies, as each game server lives in its own isolated environment tailored to its specific needs. Not only is maintenance easier and my game servers more stable overall, but I can now pass some of Docker's benefits on to my players and allow them to create their own game servers, which is the next step in the evolution of my game hosting. I'll allocate my users "slots" on which they can run any game server my system supports, and they will be able to manage their own containers. If I need to support a new game server, I need only create the appropriate Dockerfile configuration for it. None of this would be possible without containers. I could do it with LXC, of course, and skip Docker entirely, but then I would miss out on Docker's orchestration abilities.
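A sketch of what one such per-game Dockerfile could look like, pinning a 32-bit runtime on an older base image (the game name, paths, and port are hypothetical):

```dockerfile
# Hypothetical image for a 32-bit game server needing older libraries.
FROM debian:bullseye
RUN dpkg --add-architecture i386 \
    && apt-get update \
    && apt-get install -y libc6:i386 lib32gcc-s1 \
    && rm -rf /var/lib/apt/lists/*
COPY ./server /opt/gameserver
WORKDIR /opt/gameserver
EXPOSE 27015/udp
CMD ["./start.sh"]
```

Because each game's quirks live in its own image, two mutually incompatible servers can happily share the same host.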
There's no guarantee Docker will be the prevailing container technology going forward. It definitely has the lion's share of attention at the moment, but competing implementations (like rkt, for example) have emerged out of dissatisfaction with some of Docker's approaches, so there will almost certainly be several container architectures to choose from in the future. So yes, right now Docker is the bee's knees when it comes to containers — but it's containers we should be thinking about overall, not just Docker's implementation.
Every day. So far, everything that worked in dev has worked exactly the same in production. Combined with Kubernetes and CoreOS, I can easily build clusters of Docker containers instead of starting up a VM per application.
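The pattern above can be sketched as a Kubernetes Deployment — one small manifest per application, scheduled onto a shared cluster instead of one VM each (the app name and image are hypothetical):

```yaml
# Minimal sketch: three replicas of one app on a shared cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0.0   # placeholder image
        ports:
        - containerPort: 8080
```

The scheduler packs these containers onto whatever nodes have capacity, which is exactly what replaces the VM-per-application model.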
Dong Nguyen
Web Developer
I use Docker to: