The cloud was built on virtualization. Virtualization makes maximum use of hardware in a cost-effective way, and it lets providers support many customers on shared infrastructure while protecting those customers from each other. It started with server virtualization, in which software creates multiple “machines” on a single piece of hardware, then spread to storage and networking, allowing resource usage in those areas to be maximized as well.
That’s the old news, and it comes with its own challenges, such as incompatibilities between development and production environments that can have unexpected consequences.
But there’s another increasingly popular technology that chops things into even smaller pieces. It’s called a container.
So what’s a container? Basically, it’s a package containing an entire runtime environment: an application plus all the dependencies, libraries, configuration files, and binaries it needs to run. That eliminates the problems that can occur when an application is run on a VM that isn’t precisely the same as the one it was built on. A container shares the host operating system’s kernel with other containers on the same server. That lets it be smaller and use fewer resources than a virtual machine, which is a self-contained “computer” with a full operating system as well as applications.
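To make that kernel-sharing concrete: on Linux, a container is ultimately just a process launched into its own kernel namespaces. The Go sketch below is an illustration only, not how Docker itself is implemented; it starts a shell in fresh UTS, PID, and mount namespaces, and it assumes a Linux host and root privileges.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Launch a shell in new UTS, PID, and mount namespaces. The child
// gets its own hostname and process tree, yet runs on the same
// kernel as every other process on the host; that shared kernel is
// what makes containers lighter than full virtual machines.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // own hostname
			syscall.CLONE_NEWPID | // own process IDs
			syscall.CLONE_NEWNS, // own mount table
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Real container runtimes layer image unpacking, cgroup resource limits, and network isolation on top of this same primitive.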
When most people think of containers, they think Docker. Docker is both a container format and the open source company that offers the technology, and the Docker format, now based on technology from the Open Container Initiative (OCI), is quickly becoming the de facto standard for containers in many users’ eyes. It’s being adopted by organizations building both public and private clouds.
The OCI was launched in June 2015 under the auspices of the Linux Foundation to create open industry standards around container formats and runtime. It describes its reason for being as follows:
Almost all major IT vendors and cloud providers have announced container-based solutions, and there has been a proliferation of start-ups founded in this area as well. While the proliferation of ideas in this space is welcome, the promise of containers as a source of application portability requires the establishment of certain standards around format and runtime. While the rapid growth of the Docker project has served to make the Docker image format a de facto standard for many purposes, there is widespread interest in a more formal, open, industry specification, which is:
- not bound to higher level constructs such as a particular client or orchestration stack
- not tightly associated with any particular commercial vendor or project
- portable across a wide variety of operating systems, hardware, CPU architectures, public clouds, etc.
Docker donated its container format and its runtime, runC, to the OCI to serve as the foundation technology for the standard, and has since released a new version of its engine based on the OCI’s work. Docker is one of forty-eight companies supporting the project, a diverse group that also includes Hewlett Packard Enterprise, Oracle, Microsoft, Intel, Twitter, Google, VMware, Cisco and Dell.
The OCI has laid out ground rules for the spec. Its FAQ states that the project’s guiding values are:
- Composable: All tools for downloading, installing, and running containers should be well integrated, but independent and composable. Container formats and runtime should not be bound to clients, to higher level frameworks, etc.
- Portable: The runtime standard should be usable across different hardware, operating systems, and cloud environments.
- Secure: Isolation should be pluggable, and the cryptographic primitives for strong trust, image auditing and application identity should be solid.
- Decentralized: Discovery of container images should be simple and facilitate a federated namespace and distributed retrieval.
- Open: The format and runtime should be well-specified and developed by a community. We want independent implementations of tools to be able to run the same container consistently.
- Code leads spec, rather than vice-versa. We seek rough consensus and running code.
- Minimalist: The spec should aim to do a few things well, be minimal and stable, and enable innovation and experimentation above and around it.
- Backward compatible: Given the broad adoption of the current Docker container format (500M container downloads to date), the new standard should strive to be as backward compatible as possible with that format.
Just under a year after its founding, in April 2016, the OCI launched another project to create a standard container image format, to complement its initial work on the container runtime.
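What those two specs standardize is quite concrete: an OCI runtime bundle is simply a root filesystem directory plus a config.json describing the process to run, which any compliant runtime (runC among them) can execute. The Go sketch below emits a heavily trimmed config of that shape; the field set is a simplified assumption based on the published runtime spec, not a complete rendering of it.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A trimmed-down mirror of the OCI runtime spec's config.json layout.
// A real config carries many more sections (mounts, capabilities,
// namespaces, and so on); this shows only the core shape.
type ociConfig struct {
	OCIVersion string `json:"ociVersion"`
	Process    struct {
		Terminal bool     `json:"terminal"`
		Cwd      string   `json:"cwd"`
		Args     []string `json:"args"`
	} `json:"process"`
	Root struct {
		Path     string `json:"path"`
		Readonly bool   `json:"readonly"`
	} `json:"root"`
}

func main() {
	var cfg ociConfig
	cfg.OCIVersion = "1.0.0"          // spec version the bundle claims
	cfg.Process.Terminal = true
	cfg.Process.Cwd = "/"
	cfg.Process.Args = []string{"sh"} // the containerized process
	cfg.Root.Path = "rootfs"          // directory holding the filesystem
	cfg.Root.Readonly = true

	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```

In practice nobody writes this by hand; `runc spec` generates a default config.json that can then be edited before the container is started.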
Vendors, among them Red Hat (also an OCI member), are hopping on board the container bandwagon, creating platforms based on the technology. Although it began life as an open source Linux operating system vendor, Red Hat has broadened its scope to include its own container offerings based on Docker. In fact, it says, it was the first major vendor to do so. And at its recent user conference, Red Hat launched a couple of new container-based products to round out its OpenShift container platform-as-a-service (PaaS) product line.
The free OpenShift Container Local is designed for developers who want to create container-based applications. It runs on their local, non-production machines, letting them work without having to set up a cloud instance. OpenShift Container Lab is a lower-cost version of the full product for non-production use, testing, and development. Both supplement the existing Red Hat PaaS, OpenShift Container Platform (formerly known as OpenShift Enterprise), the full production version of the technology, which offers management tools (including Jenkins, for continuous integration), a Web console, automatic application scaling, and other features to make it enterprise-friendly. Workloads created in Local or Lab can be shifted to the production environment with a simple license upgrade.
Security has always been a concern with containers – Forrester says it’s the top issue that appears in its surveys, with 75 percent of respondents citing it (scalability and performance follow, at 71 percent and 64 percent respectively) – so another announcement at the recent user conference revolved around that worry. Red Hat has expanded its partnership with Black Duck Software to integrate the Black Duck Hub as a container scanner. It provides deep container inspection to spot open source vulnerabilities, and continues to monitor containers for new issues. There are also hooks that allow other security scanners to plug in to the OpenShift Container Platform. In line with its open source roots, Red Hat donates all of its enhancements back to the community.
The question arises, of course: is containerization always the way to go? Experts say no – virtualization still has a place. But containers are another tool in the cloud arsenal, one with the potential to make life easier, especially for DevOps teams.