Right, huge subject but in a nutshell:
- First there were virtual machines, which are a great way to squeeze multiple machines onto a single piece of hardware, but they are generally resource-intensive. Advancements have been made to share system memory more efficiently between virtual machines, but they're still pretty hungry, because even if you have 10 Linux virtual machines you're still running 10 machines: 10 kernels, 10 sets of standard userland tools, etc.
- Then came LXC, which specifically addresses that drawback by running virtual machines as containers that share the host's Linux kernel. Now your 10 virtual machines are only using 1 kernel.
- Then came Docker, which expands on the concept of LXC by focusing on processes rather than machines, which, thanks to LXC, is extremely lightweight. Instead of deploying a whole virtual machine to run nginx, Apache or Rails (for example), you can instead deploy a Docker container that runs that one process (Docker containers can run multiple processes, but that's an advanced topic for this discussion).
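To make that concrete, here's a minimal sketch of the one-process-per-container idea, assuming Docker is installed and the container name `web` and host port 8080 are arbitrary choices for the example:

```shell
# Run the official nginx image as a single-process container,
# detached, with host port 8080 mapped to the container's port 80.
docker run --name web -d -p 8080:80 nginx

# Show the processes inside the container: just nginx,
# not a full OS worth of services.
docker top web

# Stop and remove the container when done.
docker rm -f web
```

The point is that the container boots in a fraction of a second because there's no kernel or init system to start, just the one process.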
That is the main use case for Docker, but it has lots of other benefits that aren't immediately apparent (until you start using it). For example:
- It can be orchestrated in a number of ways (either via the command line or the remote API) and is the basis (or at least an option) for many PaaS implementations (like Dokku, Flynn, etc.).
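As a rough illustration of the CLI vs. remote API point, assuming the Docker daemon is listening on its default Unix socket:

```shell
# List running containers via the command line...
docker ps

# ...and the same information via the remote (Engine) API,
# which is what orchestration tools and PaaS layers build on.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

Both return the same underlying data; anything the CLI can do, a remote tool can do over that API.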
- It uses a union filesystem, which allows a Docker image to be composed of several layers, which means you can extend existing images and you don't need to rebuild your entire container every time you change something.
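A tiny Dockerfile sketch of the layering idea (the `site/` directory is a hypothetical local folder of static files):

```dockerfile
# Each instruction adds a layer on top of the base image's layers.
# On rebuild, unchanged layers are reused from cache, so editing
# only the site/ directory rebuilds just the final COPY layer.
FROM nginx

# Copy a hypothetical local directory of static files into the image.
COPY site/ /usr/share/nginx/html/
```

Extending `nginx` like this means you inherit all of its layers for free and only your own changes take up new space.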
- It has a registry (either the public Docker registry or your own self-hosted one) through which you can store and download images, which can then be extended or deployed to any Docker host.
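The registry workflow, sketched with `registry.example.com` standing in for a hypothetical self-hosted registry:

```shell
# Pull an image from the public registry (Docker Hub).
docker pull nginx

# Re-tag it for a hypothetical private registry and push it there.
docker tag nginx registry.example.com/my-nginx
docker push registry.example.com/my-nginx

# Any other Docker host with access can now pull and run it.
docker pull registry.example.com/my-nginx
```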
Like I say, it's a pretty huge topic (an entire platform's worth of information), but those are the highlights :-)