It lets you provide and provision a virtualised environment to work from, so your whole team works on the same base platform with the same tooling (software).
Depending on your aims with virtualisation, environments and development flow, you might consider Vagrant instead, or a combination of both.
@labsvisua covered most of it.
Docker essentially tries to package applications in such a way that you'll never get, "but it works on my machine" type of problems.
With Java, you package all your JAR files into a WAR file if you're building a web application. With Docker, you now take that WAR file and package it together with Tomcat in a Docker image, so that when you run it, the exact version of Tomcat it needs is already included, as well as the exact version of Java it was tested on. No more testing on Oracle Java 8 on your dev machine and then having to run on OpenJDK Java 7 on your server.
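As a rough sketch, a Dockerfile for that WAR-plus-Tomcat image could look like the following. The base image tag and the WAR filename here are illustrative assumptions, not from the original answer:

```dockerfile
# Start from an official Tomcat image that bundles a known JDK version
# (tag chosen for illustration; pick the versions you actually tested on)
FROM tomcat:8.0-jre8

# Copy your application archive into Tomcat's deployment directory
# (target/myapp.war is a hypothetical build artifact path)
COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war

# Tomcat's default HTTP port
EXPOSE 8080
```

Build it once with docker build, and every environment that runs the resulting image gets the same Tomcat and the same JDK.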
Same for PHP, as another example: instead of taking code that was working on PHP 5.5 and uploading it to a server that has some other version of PHP installed, package the correct version of PHP with your code. Package Apache 2.4 with it too, if that is what you tested on, and when you run it on the server it will run in the same environment that you tested in.
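The PHP case is even shorter, since official images exist that bundle Apache with a pinned PHP version. A minimal sketch (the src/ directory is a hypothetical source layout):

```dockerfile
# Official PHP image with Apache included; the tag pins the PHP version
FROM php:5.5-apache

# Copy the application source into Apache's document root
COPY src/ /var/www/html/
```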
Using Docker, you can also start leveraging things like Kubernetes to utilise your instances to the max, instead of having many instances all running at 30% capacity.
Containers
Containers are a lightweight virtualization system that allows apps to share the same Linux kernel, yet operate in isolated environments. Each container gets its own file system, networking, memory and CPU.
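That per-container isolation is visible right on the command line; Docker lets you cap memory and CPU per container (the limits below are arbitrary examples, and the commands assume a running Docker daemon):

```shell
# Run a container capped at 256 MB of RAM and half a CPU core,
# with its own filesystem and network stack
docker run --memory=256m --cpus=0.5 -it ubuntu:latest /bin/bash
```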
Compared to VMs
Using VMs, you would share the same hardware among multiple OSs (eg, Linux, Windows). Using containers you go one level higher and share the same kernel among multiple distributions (eg, Ubuntu, CentOS). Containers are lightweight and utilize resources better.
Docker
Docker is a container technology, but adds versioning, packaging and distribution. This makes creating and sharing of containers standardized and painless.
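The versioning, packaging and distribution side boils down to a few commands. A sketch, assuming a Docker daemon and a Docker Hub account; the image name and account are made up:

```shell
# Tag a local image with a version and a (hypothetical) Hub account
docker tag myapp:latest myaccount/myapp:1.0

# Push it to Docker Hub so others can pull it
docker push myaccount/myapp:1.0

# On any other machine, pull the exact same versioned image
docker pull myaccount/myapp:1.0
```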
Usage
You could use it wherever you would use Linux VMs but you don't care about the isolation being very secure. It's ideal for software development since all developers can get the same exact environment on their dev systems. Like Vagrant, but lightweight.
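For the development use case, a common pattern is mounting your source tree into a container so every developer runs the same toolchain against their local files. A sketch (paths and image are illustrative, and a Docker daemon is assumed):

```shell
# Mount the current directory at /app inside the container,
# set it as the working directory, and open a shell
docker run -it -v "$(pwd)":/app -w /app ubuntu:latest /bin/bash
```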
Further reading: What is Docker
The answer below is on point apart from one thing: the installation of Docker has become easier in the past year.
Head over to the official Docker website, click on the 'Get Docker' button and select the Docker app for your system, be it desktop (Mac or Windows), a cloud provider, or a server (Ubuntu falls in this category).
Install the Docker app and run it. To confirm that all is well, run:
docker
should list help information for docker.
You can now proceed to fetching an image from the Hub section in the previous answer.
Shreyansh Pandey
node, coffee and everything in between
Sorry for being this brief. There is so much in Docker. So, so much.
Ahh.... Finally a topic I love. Worked on and with it for the past 4 years, and still love it.
Let's start with the basics, really. Before we begin with anything, let's see how apps were deployed before the invention of virtualisation or any of this.
Essentially, to deploy an application, you had to run your own servers: order the hardware, set it up, maintain it. It was a lot of hassle. Then, humans evolved. We came up with virtualisation: allocating virtual memory, virtual disk space. It was great. It was a good system.
But as we evolved as programmers and came to realise that global variables were evil, we also came to realise that virtualisation has its own problems. Say you were running Node; being single-threaded, it ends up wasting something like 80% of the available power. And everything was rather isolated, which was good and bad: good, because finer control was easier; bad, because it cluttered the system. Another problem was that each VM wasted a lot of disk space, which again cluttered your deployment environment. But, to me, the biggest problem was deployment. Typical VMs take 1-2 minutes to boot up; if you have custom scripts and all that, it takes more. So failover handling was really bad.
Then came containers.
The principle behind containers was very simple:
- if we abstract away all the common stuff (system files), we get a machine image of typically ~50 MB that works just as well;
- if we focus on providing a single-process base, scaling becomes easy.
Essentially, Docker has images. These images are the bare bones of an operating system, containing only the components required for it to run, which drastically reduces their size. The Ubuntu 14.04 image is around 100 MB. I guess that says it all.
Further, Docker provides you with version control for these images. There are some gory filesystem details behind this; if you want those, leave a comment and I'll edit them in. ;) So, if your intern breaks an image, you can always go back.
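Image tags are the visible part of that version control. A rollback sketch, with invented image names and a Docker daemon assumed:

```shell
# Keep versioned tags alongside latest
docker tag myapp:latest myapp:2.0

# If 2.0 turns out broken, point latest back at the previous version
docker tag myapp:1.0 myapp:latest

# Inspect how an image was built, layer by layer
docker history myapp:latest
```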
Then, we have the Docker Hub. GitHub for Docker. Store your images, and pull them at deployment.
Since we all love hands on stuff, let's have a look! :D
Docker in Two Minutes
Installation
Head over to the official Docker website and click on the 'Get Started' button.
I am assuming you're on a Mac. For Ubuntu OS, it's easier than this.
Download the Docker Toolbox from this link. Install it.
Now, run the following command in your terminal:
docker
If you get the help information, pat yourself on the back.
Now, type
docker ps
If you get the following error:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
you have to get out of the room because it'll blast. Just kidding. We need to do an extra step to make this work on a Mac. The problem is that Docker wasn't made for the Mac, so we need to run a VirtualMachine with the base Docker image and then work from there. It's rather easy.
Open VirtualBox and note the name of the VM. It's either dev or something like that. Head over to the terminal and enter the following command:
VBoxManage startvm dev --type headless
As the more perceptive of you might have guessed, VBoxManage is the CLI management tool for VirtualBox. Here, we start the machine dev (or whatever it was, you noted it) as headless, i.e. without a visible window. It's like running it as a daemon.
Let's tell Docker to use those settings:
eval "$(docker-machine env dev)"
docker-machine is the tool to manage a machine; a machine is a host + client, in this case our computer. This command gets and evaluates the output of docker-machine env. Be sure to change dev to whatever name you noted before.
Terrific! You just got yourself a Docker setup. Use all those cool Docker swaggers now! ;)
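For reference, the output that gets evaluated is a handful of export statements, typically something like the following (the IP address, paths and machine name will differ on your setup):

```shell
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/dev"
export DOCKER_MACHINE_NAME="dev"
```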
Now, let's fetch an image from the Hub:
docker pull ubuntu:latest
This command tells Docker to fetch the image for the Ubuntu OS with the latest tag. Great. Let's run it. Run the following now:
docker run -it ubuntu:latest /bin/bash
And you'll see the terminal change to something like this:
root@51a986e28a80:/#
Congrats! You just ran a Docker container based on the Ubuntu image. This is a fully fledged Ubuntu machine, but it doesn't have any of the good stuff; you'll need to manually install nano or whatever you want. The 51a98... is the unique container ID.
Another thing, while we're here: if you enter clear in this environment, it may produce something like
DUMB terminal executed
or it won't do anything. Quick fix: export TERM=xterm.
For starting out, I guess this is good enough to show you how powerful Docker is.
If anyone wants a full tutorial on Docker, leave a comment, and I'll try to go from 0 to Hero with Docker! ;)
Be sure to read everything at docker.com
I hope this helps!