Blogs
Nov 09, 2018

System Containers: The oasis for VM lovers in the desert of Containers

ANIRUDDHA CHENDKE
SR. PRINCIPAL ARCHITECT - MICROLABS

Almost a third of enterprise cloud developers were using containers in 2017, according to Forrester Research[i]. For most organizations, the initial foray involves the development of cloud-native microservices applications using application containers. System containers for monolithic applications and those for legacy migrations are also gaining traction. Their adoption is likely to increase in the next two years.

We will be looking at Linux system containers here. Microsoft also offers Hyper-V containers, which are similar to Linux system containers but for Microsoft Windows; they don't yet have a significant presence in the system container space.

What are System Containers?

Application containers are the popular ones; system containers are their lesser-known cousins. System containers behave like virtual machines (without any hardware emulation) and are also called machine containers. They share the kernel of the host OS and can run multiple processes at the same time. They also work across Linux distributions: you can, for example, run a CentOS container on an Ubuntu host.

How do they work?

In the early days, containerization was achieved with technologies such as Solaris Zones and, on Linux, solutions like OpenVZ. With the features added to the modern Linux kernel, LXC was created and rose in popularity. LXC uses Linux kernel features such as namespaces, cgroups, seccomp, capabilities, and SELinux for containerization.
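As an aside, the namespace primitive is easy to see from a shell. The sketch below (my illustration, not from the original post) uses util-linux's `unshare` to run `ps` in a fresh PID namespace, where only processes inside that namespace are visible, which is exactly how a container's init ends up as PID 1. It degrades to a message when the environment doesn't allow it.

```shell
# Sketch: the PID-namespace kernel feature that LXC builds on.
# Inside a new PID namespace, a process sees only itself and its children,
# the same trick that lets a container run its own init as PID 1.
show_pid_namespace() {
  if [ "$(uname -s)" = "Linux" ] && [ "$(id -u)" -eq 0 ] \
     && command -v unshare >/dev/null 2>&1; then
    # --fork: run ps as a child inside the new namespace
    # --mount-proc: remount /proc so ps sees only the new namespace
    unshare --pid --fork --mount-proc ps -e 2>/dev/null \
      || echo "unshare blocked in this environment (e.g. a restricted container)"
  else
    echo "needs Linux, root and util-linux's unshare to demonstrate"
  fi
}
show_pid_namespace
```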

While application containers also use the kernel features mentioned above, the design philosophy and toolsets for system containers are focused on delivering a VM-like experience. For example, if the application running in the container needs software, a patch, or a library that the host OS lacks, a proxy namespace captures the call from the application and redirects it to the necessary code or library held within the container itself.

LXC launches an OS init in the namespace, so one gets a standard multi-process OS environment, much like a VM. Application containers like Docker typically launch the application process directly, so one gets a single-process container. LXC tools are, however, low level and somewhat difficult to consume by themselves. To bridge this gap, LXD was created; it uses LXC in the backend and provides a REST API for managing it. LXD is more focused on system containers and can run multiple containers under a single system daemon, further reducing overhead. It complements LXC by handling networking and data storage and by utilizing host-level security, making containers more secure. LXD also makes live migration of containers across hosts easier and more usable in production environments.
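To make the contrast concrete, here is a hedged sketch of an LXD session (the image alias and container name are my assumptions): `lxc launch` boots a full init system inside the container, and `lxc exec` then shows many processes running, unlike a typical single-process Docker container. The commands only execute when a reachable LXD daemon is present.

```shell
# Sketch: a system container behaves like a small VM.
# "ubuntu:22.04" and "demo" are placeholder image/container names.
demo_system_container() {
  if command -v lxc >/dev/null 2>&1 && lxc info >/dev/null 2>&1; then
    lxc launch ubuntu:22.04 demo          # boots a full init (systemd) as PID 1
    lxc exec demo -- ps -e | head         # many processes, like a freshly booted VM
    lxc delete --force demo               # clean up
  else
    echo "no reachable LXD daemon; commands shown for illustration only"
  fi
}
demo_system_container
```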

How are they different from other Containers and VMs?

In its early days, Docker used LXC in the backend to create application containers. However, Docker and LXC parted ways, and Docker developed its own library (libcontainer) to focus on the ephemeral, stateless, single-application container pattern. Docker is made for portable deployments of applications as single objects. It has a well-developed toolset for CI/CD along with a public registry and version tracking. It has a layered architecture with reusable components and requires filesystems that support copy-on-write. It is an ideal solution for fast-scaling microservices-based applications. You can run multiple processes or a full OS in Docker, but it is not easy, as that is not the main use case Docker containers target.

LXC can do application containers as well; however, it has a much better toolset for providing an almost VM-like feel. LXC provides a powerful API, base OS templates, and toolsets for lifecycle management. As Stephane Graber (one of the original creators of the LXC project) says, these containers run a full Linux system as it would run on bare metal or in a VM. Typically, these are long-running containers that can leverage traditional configuration management and deployment tools, just like existing VMs. This is very different from the approach a VM takes: setting up a hypervisor on the host, which isolates the VM completely by emulating hardware and increases overhead significantly. There are also hybrid containers, such as Kata Containers, which do not perform full hardware emulation but do isolate the host kernel from the containers. This provides a significant benefit over a traditional VM and more isolation than LXC/LXD, at a small performance cost.

Can they be integrated with existing tools?

The short answer is yes. Since system containers mimic VMs in their operation, almost all VM monitoring and configuration/patch management tools implicitly support LXC containers. Additionally, by exposing a REST API, LXD makes tooling much easier. OpenStack is a powerful toolset for private cloud orchestration, and its nova-lxd project provides a mechanism for deploying system containers at scale that is almost identical to the VM experience.
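For instance, the API can be queried locally over LXD's unix socket with plain curl; the socket paths below are common defaults (deb vs snap installs), and the `/1.0` endpoint returns the server's metadata. This is a sketch that falls back to a message when no socket is present.

```shell
# Sketch: talking to the LXD REST API over its local unix socket.
# Socket locations differ between package and snap installs.
query_lxd_api() {
  for sock in /var/lib/lxd/unix.socket /var/snap/lxd/common/lxd/unix.socket; do
    if [ -S "$sock" ]; then
      # the hostname ("lxd") is ignored when --unix-socket is used
      curl -s --unix-socket "$sock" lxd/1.0
      return 0
    fi
  done
  echo "no local LXD socket found; API call shown for illustration only"
}
query_lxd_api
```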

What are their typical use cases?

The most common use cases for LXC/LXD based system containers are: 

  • Where you need maximum performance from a VM-like machine, without the overhead of a virtualization layer.
  • Where you need increased server density. As per Canonical, LXD runs 14.5 times more densely than KVM, with 57% less latency. This does not match the density benefits of application containers, but you get a near-VM experience.
  • Where you need your VMs to boot fast. LXD containers can boot up to 94% faster than KVM guests.
  • Where you need some VMs to have hardware access. In the absence of a hypervisor, the containerized machine can access hardware resources much more easily.

It is more appropriate to consider them an efficient replacement for VMs, one that carries less complexity than application containers.

These features also make system containers an ideal candidate to be leveraged for migrating legacy applications to the cloud.

Two of the most popular methods used to migrate to the cloud are the 're-host' and 're-platform' paths. However, enterprises are often unwilling to move certain critical legacy applications to the cloud, because cloud providers cannot support the legacy VM OS for re-hosting or re-platforming, while refactoring and rearchitecting are expensive and time-consuming. This results in some datacenter workloads being retained on-premises, which might not be the desired outcome. By containerizing the on-premises machine and then moving or replicating the container into a cloud VM, the migration process can be made easier and safer. The containerized VM uses the Linux kernel of the host OS while retaining all the application dependencies it needs to work, making support relatively easy. Note that this might not be applicable to VMs or physical machines whose applications depend on specific kernel versions; for those, Kata Containers might be a better fit.
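A hedged sketch of that workflow with LXD's own tooling follows; it only prints the command sequence (host names and the address are placeholders of mine) because the steps span an on-premises host and a cloud VM. The LXD project also ships a `lxd-p2c` tool that can turn a running physical machine or VM into a container as a first step.

```shell
# Sketch: moving a containerized legacy workload into a cloud VM running LXD.
# Printed rather than executed, since the steps span two machines.
# "cloudvm", "legacy01" and 203.0.113.10 are placeholders.
migration_steps() {
  cat <<'EOF'
# on the cloud VM: allow remote access to its LXD daemon
lxc config set core.https_address '[::]:8443'
# on-premises: register the cloud VM as a remote
lxc remote add cloudvm 203.0.113.10
# copy the container across (or use 'lxc move' to migrate it outright)
lxc copy legacy01 cloudvm:legacy01
EOF
}
migration_steps
```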

The increased density could also reduce cloud spend when system containers are run inside cloud VMs.

Conclusion

System containers are a good option to have. They can't replace virtualization completely, but they were never meant to. In some cases, they provide significant upsides in performance and capacity.

With Docker and other application containers focused on modern microservices-based applications, system containers can be a handy tool to facilitate migration towards next-generation technologies. LXC is a proven technology, and with container management through LXD getting easier and increased support for other distributions from Canonical on the cards, LXC/LXD are sure to gain momentum soon.


[i] https://451research.com/blog/1657-featured-insight

Disclaimer: The information and views set out in these blogs are those of the author(s) and do not necessarily reflect the official opinion of Microland Ltd.