
The Benefits of Containers: Security, Speed & Microservice Compatibility

Will Ezell

Chief Technology Officer

As information technology infrastructure has evolved from mainframes to distributed server systems to virtual machines, IT managers and CTOs have always been looking for “the next big thing”.

With the emergence of containerization, their search for an infrastructure that delivers at higher speeds while consuming fewer resources might be over (for now).

What Is A Container?

In 2013, a group of Linux developers, fed up with how heavy and resource-intensive running virtual machines can be, began to explore forgotten UNIX system tools built to allow multiple "jailed" environments to run on a single server (chroot, anyone?). These engineers cleverly used those tools as the foundation for a new kind of lightweight system that would allow multiple applications to share the same operating system (OS) simultaneously. The goal was to avoid duplicating resources by creating a new instance of the host OS every time a new instance of an application launched. The system, now known as Docker, allowed applications to be deployed nearly anywhere and launched far faster than those that required a new instance of the OS.

The Docker system consisted of two major components: a portable, lightweight component that handled packaging and runtime tasks, and a cloud service for automating workflows and sharing applications. According to writer and instructor Ahmed Banafa, “A container (like Docker) sets up networking for an application in a standard way and carries as discrete layers all the related software that it needs.”
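To make the idea of discrete layers concrete, here is a minimal, illustrative Dockerfile; the Python application, file names, and base image are assumptions for the sketch, not a prescribed setup. Each instruction produces one layer of the finished image:

    # Each instruction below adds one discrete, reusable layer to the image.
    FROM python:3.12-slim
    WORKDIR /app
    # Install the application's own libraries into the image.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    # Add the application code and declare the process to launch at start-up.
    COPY . .
    CMD ["python", "app.py"]

Because the dependency layers sit beneath the code layer, rebuilding after a code change only re-creates the final layers, while the base and dependency layers are reused from cache.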

Today, a number of Docker alternatives have sprung up, accelerating the containerization movement.

Containers vs. Virtual Machines

The primary difference between containers and virtual machines (VMs) lies in how each structure handles its native operating system.

In a virtual machine environment, the host machine generates a new instance of the operating system for each application through a software package known as a “hypervisor”. Each application then uses this “guest OS” to manage its functions, rather than the “host OS” running on the host server.

In a container environment, the applications connect directly to the host OS through the container software, such as Docker. James Bottomley, a leading Linux kernel developer, explained that containerization allows systems to “leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application,” largely by removing the requirement to generate guest OS instances, which conserves resources.

Containerization and Security

A major benefit of containerization lies in how container architecture enhances data security.

In a virtual machine setting, a corrupted application can affect the entire virtual machine environment. For instance, if an application in a virtual machine consumes too much processing power or memory, it can affect other applications on the server. This situation can arise from inefficient applications or from a Distributed Denial of Service (DDoS) attack by hackers seeking to shut down a server.

The containerization approach limits the damage to the specific container, rather than allowing problems to spread to the rest of the server. Docker, the leading developer of containers, has addressed many of these security issues with Docker Content Trust, which verifies the publisher of the images that Docker containers are built and run from. For sensitive applications, developers can also create their own images, which Docker Content Trust can verify, ensuring that the containers accessing the kernel are secure.
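As a rough sketch of what that looks like in practice, Docker Content Trust is switched on through an environment variable; the registry path and image name below are placeholders, and signing keys must already be set up:

    # With content trust enabled, unsigned images are refused at pull/run time.
    export DOCKER_CONTENT_TRUST=1
    docker pull registry.example.com/myteam/payments:1.4
    # Publishers sign the images they push so consumers can verify them.
    docker trust sign registry.example.com/myteam/payments:1.4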

Containerization and Compatibility

The structure of containers such as Docker lets developers create applications that are completely portable and can run on any platform. Developers can use the language of their choice to create an application, package it with Docker's packaging tools, and deploy it in the client's environment without dealing with the “99.9 percent VM junk” Bottomley mentioned. Instead, each container runs as a self-sufficient program, with its own binaries and libraries.

Instead of depending on the creation of a guest OS every time the application launches, the container accesses the kernel of the host OS (Linux or Windows) directly. Containers use what has been termed “JeOS”, or “just enough operating system”, to give the application only what it needs to run.
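As a hedged illustration of that “just enough” idea, the Dockerfile sketched earlier can be packaged and started with a couple of commands, and a deliberately tiny base image carries only a minimal userland; the image names here are placeholders:

    # Build the image defined by the Dockerfile and start a container from it.
    docker build -t myapp:1.0 .
    docker run -d --name myapp myapp:1.0
    # A small "just enough OS" base such as alpine ships only a minimal userland.
    docker run --rm alpine:3.20 cat /etc/os-release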

Containerization and Speed

The JeOS setup also provides containers with a distinct advantage over virtual machines: speed.

Applications that run on virtual machines may take several minutes to launch after installation. The creation of the guest OS required to run the VM can cause significant delays and drain vital server resources, which can also slow down other applications running simultaneously.

Since containerized applications don't depend on the creation of a guest OS to handle vital functions, they can launch in fractions of a second. Applications within a container can also run faster, as they use fewer resources from the host OS and share its kernel rather than replicating an entire operating system. This “just-in-time” instantiation of the application's processes saves time and resources, allowing the host OS to run multiple applications without any noticeable delays.
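A quick, unscientific way to see that start-up cost on a machine with Docker installed (timings vary by host and by whether the image is already cached locally):

    # Starts a container from a small image, runs a no-op command, and exits.
    time docker run --rm alpine:3.20 true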

Containerization and Resources

The improved speed of containerized applications also lets servers conserve resources, even while running multiple applications. A containerized application can be a few megabytes in size, while an application in a VM environment can take up several gigabytes. That smaller footprint saves system resources such as memory and processing power, and the improved efficiency allows servers to run as many as six times more containerized applications than VM-based applications on the same system.

These applications can also be compartmentalized into modules (e.g., a front end, database queries, data tables), which are accessed when needed and ignored when not in use. Since the system can access each module nearly instantaneously, delays in moving between modules are practically non-existent. This “microservices approach” also makes each containerized application easy to modify: a developer can alter one module without rebuilding the entire application.
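A sketch of that modular layout using Docker Compose; the service names and images are illustrative assumptions rather than a recommended stack:

    # docker-compose.yml: each module runs as its own container and can be
    # rebuilt or replaced without touching the others.
    services:
      frontend:
        image: myorg/frontend:1.0
        ports:
          - "8080:80"
        depends_on:
          - api
      api:
        image: myorg/api:1.0
        environment:
          DB_HOST: db
      db:
        image: postgres:16
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

Running docker compose up starts all three containers; swapping in a new frontend image and re-running the command replaces only that service, leaving the others untouched.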

The Future Is Contained

Containerization brings significant benefits, which is why brands such as Google rely on containers to power the apps and websites we all love:

“From Gmail to YouTube to Search, everything at Google runs in containers. Containerization allows our development teams to move fast, deploy software efficiently, and operate at an unprecedented scale,” Google said in relation to its ongoing preference for cloud containerization.

Moreover, a number of our clients across a range of industries use dotCMS to deploy containers, and we’re noticing a steady increase on that front.