Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
What drove Docker's adoption?
Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server (not natively on Windows, since containers rely on the Linux kernel), regardless of the language the application is written in. This enables flexibility and portability in where the application can run: on-premises, public cloud, private cloud, bare metal, etc.
What is the difference between VMs and Docker containers?
- A Docker container, unlike a virtual machine, does not require a separate operating system. Instead, it relies on the kernel's functionality and uses resource isolation (CPU, memory, block I/O, network, etc.). Docker accesses the Linux kernel's virtualization features either directly using the libcontainer library, available as of Docker 0.9, or indirectly via libvirt, LXC (Linux Containers) or systemd-nspawn.
- Size: VMs are very large which makes them impractical to store and transfer.
- Performance: running VMs consumes significant CPU and memory.
- Portability: a container image runs on any Linux machine or VM with a compatible kernel.
Which one to Use?
In reality, the two are complementary technologies (VMs and containers work better together) for achieving maximum agility. Note that Docker containers can run inside virtual machines.
Note: both VMs and containers are IaaS solutions.
For application/software portability, Docker is your safest bet. For machine portability and greater (hardware-level) isolation, go with a VM.
Docker containers are open source, secure (isolated from each other) and so lightweight that a single server or virtual machine can run several containers simultaneously. A 2016 analysis found that a typical Docker use case involves running five containers per host, but that many organizations run 10 or more.
Docker can be integrated into various infrastructure tools, including Amazon Web Services, Microsoft Azure, Ansible, Chef, Jenkins, Puppet, Salt, Vagrant, Google Cloud Platform, IBM Bluemix, Jelastic, OpenStack Nova, HPE Helion Stackato, and VMware vSphere Integrated Containers.
Docker in Detail – briefly
Docker builds upon Linux Containers (LXC) and consists of three parts – the Docker daemon, Docker images, and Docker repositories – which together make Linux containers easy and fun to use.
Docker Daemon: runs as root and orchestrates all running containers.
Docker images: Just as virtual machines are based on images, Docker containers are based on Docker images, which are tiny compared to virtual machine images and are stackable.
Registry: A service responsible for hosting and distributing images. The default registry is the Docker Hub.
Repository: A Docker repository is a collection of Docker images that share the same name but have different tags.
Tag: An alphanumeric identifier attached to images within a repository (e.g., 14.04 or stable).
Use Case: Spinning up a Docker Container on Ubuntu (14.04)
Docker has two important installation requirements:
- Docker only works on a 64-bit Linux installation.
- Docker requires version 3.10 or higher of the Linux kernel.
To check the Ubuntu version, run: # cat /etc/lsb-release // output: 14.04.4 LTS
To check your current kernel version, open a terminal and run: # uname -r // output: 3.13
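The kernel requirement can be checked with a small script. This is a sketch, not part of any Docker tooling: the `version_ge` helper name is my own, and the comparison relies on GNU `sort -V` (available in coreutils on Ubuntu):

```shell
# version_ge A B – succeeds if version A is >= version B.
# Works by letting `sort -V` order the two versions and checking
# that B is the smaller (or equal) one.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="3.10"
# Strip the distro suffix, e.g. "3.13.0-24-generic" -> "3.13.0"
kernel="$(uname -r | cut -d- -f1)"

if version_ge "$kernel" "$required"; then
    echo "kernel $kernel is OK for Docker"
else
    echo "kernel $kernel is too old; $required or higher is required"
fi
```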
Installation of Docker
Step 1: Ensure the list of available packages is up to date before installing anything new. Log in as the root user, then run:
# apt-get update
Let's install Docker by installing the docker.io package:
# apt-get install docker.io
Now check the docker version using # docker version
Optionally, we can configure Docker to start when the server boots:
# update-rc.d docker defaults
And then we’ll start the docker service:
# service docker restart
Step 2: Download a Docker Container
There are many community containers already available, which can be found through a search. In the command below I am searching for the keyword ubuntu:
# docker search ubuntu // displays a list of available images
Let’s begin using Docker! Download the ubuntu Docker image:
# docker pull ubuntu
Now you can see all downloaded images by using the command: # docker images
Step 3: Create & Run a Docker Container
Now, to set up a basic Ubuntu container with a bash shell, we just run one command. docker run runs a command in a new container, -i keeps stdin open, -t allocates a pseudo-tty, and we're using the standard ubuntu image.
# docker run -i -t ubuntu /bin/bash
That's it! You're now using a bash shell inside of an Ubuntu Docker container.
To disconnect, or detach, from the shell without exiting, use the escape sequence Ctrl-p + Ctrl-q.
But the container will stop when you leave it with the command exit.
If you would like a container to run in the background like a daemon, just add the -d option to the command, and optionally give it something to do:
# docker run -d ubuntu /bin/sh -c "while true; do echo Hello Ram Howdy?; sleep 1; done"
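The string handed to /bin/sh -c is an ordinary shell loop. A bounded variant (three iterations instead of `while true`) can be tried outside Docker to see exactly what the container will keep printing:

```shell
# Bounded version of the container's loop – prints the greeting three
# times instead of forever (the container version uses `while true`).
/bin/sh -c 'i=0; while [ "$i" -lt 3 ]; do echo "Hello Ram Howdy?"; i=$((i+1)); done'
```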
Use the command below to see all the containers running in the background.
# docker ps
Now you can check the logs with this command:
# docker logs 68a29978b064 // container ID – take the first 12 characters of the long-form ID
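The short ID that `docker ps` displays is simply the first 12 hexadecimal characters of the full 64-character container ID (shown in full with `docker ps --no-trunc`). Using a made-up full ID for illustration:

```shell
# Hypothetical 64-character container ID, for illustration only.
full_id="68a29978b064f3a0b9c2d1e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f3a4"
# The short form Docker prints is just the first 12 characters.
printf '%s\n' "$full_id" | cut -c1-12   # → 68a29978b064
```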
If you would like to remove the container, stop it first and then remove it with these commands:
# docker stop 68a29978b064 // in place of stop you can also use start/restart
# docker rm 68a29978b064 // removes the container
Install & Run Jenkins 2.0 in a Docker Container
Step 1: First, pull the official jenkins image from the Docker repository.
# docker pull jenkins
Step 2: As the default Jenkins plugin set won't be sufficient for a DevOps pipeline, we will use a data volume container to provide simple backup capabilities and extend the official image to include some plugins.
# docker create -v /var/jenkins_home --name jenkins-dv jenkins
This command uses the '/var/jenkins_home' directory as a volume, as per the official image, and provides the name 'jenkins-dv' to identify the data volume container.
Step 3: To use the data volume container with an image, pass the '--volumes-from' flag to mount the '/var/jenkins_home' volume in another container:
# docker run -d -p 8080:8080 --volumes-from jenkins-dv --name jenkins-master jenkins
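One common way to extend the official image with plugins, as step 2 mentions, is a small Dockerfile. This is a sketch: the plugin names are examples, and `install-plugins.sh` is the helper script shipped inside the official `jenkins` image of this era (newer `jenkins/jenkins` images replace it with `jenkins-plugin-cli`):

```dockerfile
# Sketch: extend the official jenkins image with a couple of plugins.
FROM jenkins
# install-plugins.sh takes plugin IDs as arguments; git and
# workflow-aggregator here are example plugin names.
RUN /usr/local/bin/install-plugins.sh git workflow-aggregator
```

Build it with `docker build -t jenkins-custom .` and substitute `jenkins-custom` for `jenkins` in the `docker run` command above.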
Step 4: Once you have the Docker container running, you can go to http://IP:8080 to see the Jenkins instance running. This instance stores its data in the volume container you set up in step 2, so if you set up a job and stop the container, the data is persisted.
Upon hitting the URL http://IP:8080, Jenkins asks for a password; run the command below, then copy and paste the password into the Jenkins login screen:
# docker exec jenkins-master cat /var/jenkins_home/secrets/initialAdminPassword
Next: Install plugins > provide login credentials > start using Jenkins.
Backing up the data from the volume container is very simple. Just run:
# docker cp jenkins-dv:/var/jenkins_home /opt/jenkins-backup
Once this operation is complete, you will find a 'jenkins_home' directory backup in '/opt/jenkins-backup' on your local machine. You could now use this to populate a new data volume container.
Docker is a container virtualization platform that helps developers deploy their applications and system administrators manage applications in a safe virtual container environment. Docker runs on 64-bit architectures and requires Linux kernel version 3.10 or higher. With Docker, you can build and run your application inside a container and then move your containers to other machines running Docker without any worries.