The Register

Thursday, March 23, 2017

Docker is different. Docker Rules!

Docker is one of the most important software programs I have seen in my career. Forget most of what you know about VMware, KVM, or Xen. Docker Datacenter on Docker Engine provides portability, service discovery, load balancing, security, high performance, and scalability.

IT is in the middle of a transformation, driven by the desire to increase productivity, improve efficiency, and meet the rising demands of business. In this digital world, software becomes the vehicle that connects the customer to the business and optimizes the organization's operations. Finding the right platform, one that boosts application deployment speed and increases security and scalability while maintaining control, is crucial.

Docker is a software container platform. System administrators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Companies use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers.

What is a Container?

With containers, everything required to make a piece of software run is packaged into an isolated unit. Unlike VMs, containers do not bundle a full operating system: only the libraries and settings required to make the software work are included. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it is deployed.
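As a quick illustration of that isolation, the two commands below run two different Python runtimes side by side on the same host without installing either one on it. This is a minimal sketch; it assumes only a working Docker install, with the official python images pulled from Docker Hub:

    # Run two different Python versions in isolated containers
    # on the same host. Neither is installed on the host itself.
    docker run --rm python:3.6-slim python --version
    docker run --rm python:2.7-slim python --version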


This is from the Docker Blog:

A natural response when first working with Docker containers is to try and compare them to virtual machines. Oftentimes we hear people describe Docker containers as “lightweight VMs”. This is completely understandable, and many people have done the exact same thing when they first started working with Docker. It’s easy to connect those dots as both technologies share some characteristics. Both are designed to provide an isolated environment in which to run an application.

Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but these are the two biggest.



The key is that the underlying architecture of containers and virtual machines is fundamentally different. A helpful analogy is comparing houses (virtual machines) to apartments (Docker containers).

Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure: plumbing, heating, electrical, and so on. Furthermore, in the vast majority of cases houses are all going to have, at a minimum, a bedroom, living area, bathroom, and kitchen. It’s incredibly difficult to find a “studio house”: even if you buy the smallest house you can find, you may end up buying more than you need, because that’s just how houses are built.

Apartments (Docker containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (the server running the Docker daemon, otherwise known as a Docker host) offers shared plumbing, heating, electrical, and so on to each apartment. Additionally, apartments come in several different sizes, from studio to multi-bedroom penthouse: you rent exactly what you need. Docker containers likewise share the underlying resources of the Docker host. Furthermore, developers build a Docker image that includes exactly what they need to run their application, starting with the basics and adding in only what the application requires. Virtual machines are built in the opposite direction: they start with a full operating system and, depending on the application, developers may or may not be able to strip out unwanted components.
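A Dockerfile makes that "start with the basics, add only what you need" approach concrete. The sketch below is hypothetical (app.py and requirements.txt are stand-in names for an application and its dependency list, not anything from this article), but the shape is typical:

    # Start from a small base image instead of a full operating system.
    FROM python:3.6-slim

    # Copy in only what the application needs.
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY app.py .

    # The command the container runs when it starts.
    CMD ["python", "app.py"]

Building this with "docker build -t myapp ." produces an image containing the runtime, the listed libraries, and the application, and nothing else.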


An image is a filesystem plus the parameters to use at runtime. It has no state and never changes. A container is a running instance of an image. When you run a command such as "docker run hello-world", Docker Engine checks whether you already have the image locally, downloads it from Docker Hub if you do not, creates a container from the image, and runs it. Depending on how it was built, an image might run a simple, single command and then exit; this is what hello-world does. A Docker image, though, is capable of much more. An image can start software as complex as a database, wait for you (or someone else) to add data, store the data for later use, and then wait for the next person.
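Here is that sequence as a runnable sketch; nothing beyond a working Docker install is assumed, since hello-world is the official test image on Docker Hub:

    # First run: the image is not cached locally, so Docker Engine pulls
    # hello-world:latest from Docker Hub, creates a container from it,
    # and runs it; the container prints a greeting and exits.
    docker run hello-world

    # Second run: the image is already cached, so nothing is downloaded;
    # Docker simply creates and runs a fresh container from the same image.
    docker run hello-world

    # The image stays on disk, ready for the next container.
    docker images hello-world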


Docker Engine lets people (or companies) create and share software through Docker images. Using Docker Engine, you don’t have to worry about whether your computer can run the software in a Docker image — a Docker container can always run it.
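Sharing happens through a registry such as Docker Hub. In the sketch below, myuser/myapp is a hypothetical repository name; pushing assumes you have a Docker Hub account and have already run "docker login":

    # Build an image and tag it with a registry repository name.
    docker build -t myuser/myapp:1.0 .

    # Publish the image to Docker Hub.
    docker push myuser/myapp:1.0

    # On any other machine running Docker Engine, pull and run it.
    docker pull myuser/myapp:1.0
    docker run myuser/myapp:1.0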

Docker Datacenter on Docker Engine includes service discovery and load balancing capabilities that support DevOps initiatives across an organization. Service discovery and load balancing make it easy for developers to create applications that can dynamically discover each other, and they make it simpler for operations engineers to scale those applications.

Docker Datacenter allows network and system administrators to provide secure, scalable, and highly efficient networking, both internally and externally, through service discovery and load balancing. Service discovery is an integral part of any distributed system and service-oriented architecture. As applications increasingly move toward microservices and service-oriented architectures, the operational complexity of these environments grows. Service discovery registers each service and publishes its connectivity information, so that other services know how to connect to it.
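As a sketch of how this looks with Docker's built-in swarm mode (the orchestration layer underneath Docker Datacenter), the commands below create an overlay network and a replicated service. The names appnet and web are invented for the example, and nginx is simply a convenient official image:

    # Turn this engine into a one-node swarm, for demonstration purposes.
    docker swarm init

    # Create an overlay network for services to share.
    docker network create -d overlay appnet

    # Run three replicas of a web service on that network.
    docker service create --name web --network appnet \
      --replicas 3 -p 8080:80 nginx

Any other service attached to appnet can now reach this one simply by the name web: Docker resolves the name to a virtual IP and load-balances connections across the three replicas.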


[Figure: internal DNS server]
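The mechanism behind that figure is Docker's embedded DNS server: every container attached to a user-defined network can resolve the other containers on that network by name. A minimal sketch (the network name mynet and container name db are invented; redis and alpine are stock images):

    # Create a user-defined network; containers on it get name
    # resolution from Docker's embedded DNS server.
    docker network create mynet

    # Start a database container on that network.
    docker run -d --name db --network mynet redis

    # Any container on the same network can resolve "db" by name.
    docker run --rm --network mynet alpine ping -c 1 db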


The Container Networking Model
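Docker networking is organized around the Container Networking Model (CNM), whose main primitives are networks, endpoints that attach a container to a network, and a per-container sandbox holding that container's network stack. A brief sketch of those primitives in action (the network and container names frontnet, backnet, and web1 are invented):

    # Create a network (a CNM "network") using the bridge driver.
    docker network create -d bridge frontnet

    # Starting a container on it creates an endpoint inside the
    # container's sandbox and attaches that endpoint to frontnet.
    docker run -d --name web1 --network frontnet nginx

    # A container can hold endpoints on several networks at once.
    docker network create -d bridge backnet
    docker network connect backnet web1

    # Inspect a network to see the containers attached to it.
    docker network inspect frontnet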



