Container: The Future Application Distribution Standard

Introduction

Containerization is a form of operating-system-level virtualization. Applications run in isolated spaces called containers (or Linux containers); all containers share the same host operating system kernel, and each container is essentially a fully packaged, portable computing environment.

Concept of Containerization

In the early days, virtualization based on a hypervisor's hardware abstraction layer gave developers flexible management of virtual machines. However, each virtual machine carries its own operating system and a set of pre-installed applications, while in development (dev) and production (prod) environments, what we really need to deploy are our applications or services. The drawback of the traditional virtualization approach is that we must install a complete operating system and configure all of its dependencies, which leads to a heavier hardware load and low efficiency.

Facing these problems, the concept of containerization was put forward. This technology aims to let developers focus on programming and avoid configuring the environment every time. The approach shares a minimal operating system (the ship) and isolates each program's environment (the container), instead of setting up a virtual machine for each program. In other words, we can package an application once and run it anywhere, without worrying about the operating system and environment. Like real shipping containers at a port, goods (applications) are put into a container and transported from the port of Hong Kong (CentOS 7) to the port of Singapore (Ubuntu 20.04). When the ship arrives in Singapore, the dock worker (Docker) drives a forklift to unload the container, and the goods inside still work just as well.

Instances of Containerization

Linux Container Technology

Linux Containers (LXC) is one container solution in the above analogy. Namespaces and cgroups are its two key mechanisms: a namespace acts like a scope, isolating different "goods" from each other, while cgroups are in charge of resource management and control, such as limiting the CPU and memory available to processes and controlling their priorities.
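To make this concrete, an LXC container configuration file can set both mechanisms side by side. The fragment below is an illustrative sketch using LXC 3+/cgroup v2 key names; the container name, hostname, and limits are made-up examples:

```
# /var/lib/lxc/demo/config (illustrative fragment)
lxc.uts.name = demo                 # UTS namespace: the container gets its own hostname
lxc.net.0.type = veth               # network namespace: a virtual ethernet pair to the host
lxc.cgroup2.memory.max = 256M       # cgroup: cap the container's memory at 256 MiB
lxc.cgroup2.cpu.max = 50000 100000  # cgroup: quota/period, i.e. at most 50% of one CPU
```

The namespace keys control what the container can see (its own hostname, its own network stack), while the cgroup keys control how much it can consume.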

Docker

Docker, a containerization solution, is so well known that its name has become almost synonymous with the term "container"; in fact, it is a tool for building containers. Docker was launched in 2013 as an open-source project. It leveraged existing computing concepts around containers, specifically the Linux primitives known as cgroups and namespaces, and it focuses on the needs of developers and system operators to separate application dependencies from infrastructure [1].

Scenarios of Containerization

Before we discuss application scenarios, we need to understand IaaS and PaaS. Albert Barron, a software architect at IBM, used a pizza analogy to explain the difference, and David Ng took it a step further to make it more understandable. Imagine you are planning to start a pizza business. You could make your pizzas from beginning to end, but that would require a lot of preparation, so you decide to outsource some of the work and use someone else's services. There are two options [2].

Solution 1: IaaS

Someone else provides the kitchen, stove, and gas, and you just use these infrastructures to bake your pizzas.

Infrastructure as a Service (IaaS) is a form of cloud computing that delivers fundamental compute, network, and storage resources to consumers on demand, over the internet, and on a pay-as-you-go basis [3].

Solution 2: PaaS

In addition to providing you infrastructure, they also provide Pizza dough.

Platform as a Service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. You purchase the resources you need from a cloud service provider on a pay-as-you-go basis and access them over a secure internet connection [4].

Application Scenarios

Container technology was born to solve the technical implementation of the PaaS layer. In fact, the Docker project solves a higher-dimensional problem: in what form should software be distributed?

Containerize Traditional Applications

Isolating an application in a container enhances security and makes it easy to migrate, which lowers the cost of migration and maintenance in an enterprise.

Optimize and Improve the Utilization of Infrastructure

Optimization is not just about cutting costs; it is also about ensuring the right resources are used effectively at the proper time. Containers are a lightweight approach to packing and isolating application workloads.

Provide Better Support for Microservices Architectures

Distributed applications and microservices can be more easily isolated, deployed, and scaled using individual container building blocks.

Containers vs. Virtual Machines

Container architecture

A container is an isolated, lightweight silo for running an application on the host operating system. Containers build on top of the host operating system's kernel and contain only apps and some lightweight operating system APIs and services that run in user mode.

Virtual Machines

VMs run a complete operating system, including its own kernel.

Isolation
  Virtual machine: Provides complete isolation from the host operating system and other VMs. This is useful when a strong security boundary is critical, such as hosting apps from competing companies on the same server or cluster.
  Container: Typically provides lightweight isolation from the host and other containers, but doesn't provide as strong a security boundary as a VM. (You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM.)

Operating system
  Virtual machine: Runs a complete operating system, including the kernel, thus requiring more system resources (CPU, memory, and storage).
  Container: Runs the user-mode portion of an operating system, and can be tailored to contain just the needed services for your app, using fewer system resources.

Guest compatibility
  Virtual machine: Runs just about any operating system inside the virtual machine.
  Container: Runs on the same operating system version as the host. (Hyper-V isolation enables you to run earlier versions of the same OS in a lightweight VM environment.)

Deployment
  Virtual machine: Deploy individual VMs by using Windows Admin Center or Hyper-V Manager; deploy multiple VMs by using PowerShell or System Center Virtual Machine Manager.
  Container: Deploy individual containers by using Docker via the command line; deploy multiple containers by using an orchestrator such as Azure Kubernetes Service.

Operating system updates and upgrades
  Virtual machine: Download and install operating system updates on each VM. Installing a new operating system version usually requires upgrading, or often simply creating an entirely new VM. This can be time-consuming, especially if you have a lot of VMs.
  Container: Edit your container image's build file (known as a Dockerfile) to point to the latest version of the base image, rebuild your container image with this new base image, push the image to your container registry, and redeploy using an orchestrator. The orchestrator provides powerful automation for doing this at scale.

Docker

The Docker technology uses the Linux kernel and its features, like cgroups and namespaces, to segregate processes so they can run independently. This independence is the intention of containers: the ability to run multiple processes and apps separately from one another, to make better use of your infrastructure while retaining the security you would have with separate systems.


Docker provides an image-based deployment model. This makes it easy to share an application, or set of services, with all of their dependencies across multiple environments. Docker also can automate deploying the application inside this container environment.


In addition, to run multi-container Docker applications (like WordPress with MySQL), you can use docker-compose. With docker-compose, you use a YAML file to configure your application's services; then, with a single command, you create and start all the services from your configuration. Without Docker, to build a WordPress website you would first have to download, install, and configure an LNMP (Linux, Nginx, MariaDB, PHP) or LAMP (Linux, Apache, MariaDB, PHP) stack, which may take several hours (even more for a beginner). With docker-compose, you can build a website with the same functionality from the WordPress and MySQL images with a single docker-compose command in a few minutes.
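As an illustration, a minimal docker-compose.yml for this WordPress-plus-MySQL example might look like the sketch below. The image tags, host port, and password are placeholder assumptions, not values from the text:

```yaml
# docker-compose.yml (minimal sketch; tags, port, and secrets are illustrative)
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example        # placeholder secret
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql            # persist the database across restarts
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"                         # expose the site on host port 8080
    environment:
      WORDPRESS_DB_HOST: db               # service name doubles as hostname
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example      # must match the root password above
      WORDPRESS_DB_NAME: wordpress
    depends_on:
      - db
volumes:
  db_data:
```

With this file in place, `docker-compose up -d` starts both services, and the site becomes reachable at http://localhost:8080.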

Concepts of Docker

There are three important concepts in Docker: images, containers, and repositories. Working together, they make Docker a powerful tool. Docker takes a "single container" approach to application definition.

Images

A Docker image is a read-only template. For example, an image can contain a complete CentOS with only Apache (or other applications of the user's choosing) installed. An image can be used to create a Docker container, and Docker provides a simple mechanism to create and update images. The container image has become the standard for modern software distribution; many famous applications, such as Oracle Database, JRE, MySQL Server, Redis, and Node, are distributed as images.
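For instance, the CentOS-with-Apache image described above could be built from a short Dockerfile. This is a minimal sketch, assuming the official centos:7 base image and the httpd package; the resulting tag name is made up:

```dockerfile
# Build a read-only image layering Apache on top of a CentOS base
FROM centos:7
RUN yum install -y httpd && yum clean all   # install Apache, trim the layer
EXPOSE 80                                   # document the listening port
CMD ["httpd", "-DFOREGROUND"]               # run Apache as the container's main process
```

You would then build it with something like `docker build -t my-httpd .`, and the resulting image serves as a read-only template from which any number of containers can be created.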

Containers

Docker runs an application inside a container. A container is an instance created from an image; it can be created, started, stopped, and deleted, and each container is isolated from the others. A container can be thought of as a minimal Linux environment, with root as the superuser, ordinary users, an image filesystem, networking, and your application or service running on top of its dependency environment.
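The create/start/stop/delete lifecycle described above can be sketched with the Docker CLI. This is an illustrative transcript (it assumes a running Docker daemon, and the image and container names are made up):

```shell
docker create --name web my-httpd   # instantiate a container from an image, without starting it
docker start web                    # start the isolated process
docker stop web                     # send SIGTERM, then SIGKILL after a grace period
docker rm web                       # delete the stopped container; the image itself is untouched
```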

Repositories

Repositories are the places that hold images; they are like Linux software repositories (repos), but contain Docker images instead of packages.
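Moving images between a repository and the local host might look like the following sketch; the private registry hostname is a hypothetical example:

```shell
docker pull mysql:8.0                             # download an image from Docker Hub
docker tag mysql:8.0 registry.example.com/mysql:8.0
docker push registry.example.com/mysql:8.0        # upload it to a private registry
```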

Structure of Docker and Docker operations

The following figure describes the relationships among repositories, images, and containers, and the operations that interact with them.

Conclusion

Trends of Containerization

When visiting GitHub or projects' official websites, you can find more and more "Run with Docker" instructions. It is clear that the container image has become the de facto standard for distributing software. Since its release, the Docker project has really changed the entire cloud computing industry over the past seven years. Moreover, there are even more powerful containerization solutions, such as the Kubernetes project, which proposes a set of containerized design patterns and corresponding control models that define how applications should be built and distributed.

Future of Containerization

Nothing can stop more and more users from deploying containers on cloud computing platforms and in data centers all over the world. We often compare the cloud to water, electricity, and coal: developers shouldn't have to care about "generating" electricity or "burning" coal. In reality, developers not only don't care about these things, they don't even want to know where the "water," "electricity," and "coal" come from [5]. In the future world of the cloud, developers will be able to deliver their applications anywhere in the world without distinction, most likely as naturally as we can now plug our electrical appliances into any socket in the room. We can't predict the future, but the evolution of code and technology tells us one truth: the future of software must grow on the cloud, and container technology is part of the infrastructure on which cloud applications will be distributed and run.

References

[1] Docker development team. "What Is a Container." 2013.

[2] Yifeng Ruan. "Difference Between IaaS, PaaS and SaaS."

[3] IBM. "What Is IaaS."

[4] Microsoft Azure. "What Is PaaS? Platform as a Service."

[5] Lei Zhang. "Why 2019 Is the Time for Container Technology."
