Government contractors are starting to embrace the container application development movement

Containers Represent Welcome Leap Forward for Mission-Critical DoD Vendors

This article – special to InsightaaS – was submitted by Wes Caldwell, CTO of Polaris Alpha. Polaris Alpha is an integrator focused on deployment of complex technology solutions to defense industry clients. Caldwell begins his observations by noting that Polaris “adopted containers because it provides an invaluable level of componentization for our software — with containers as the building blocks, providing everything you need to encapsulate software for easier deployment (portability) and reusability.”

Why containers are the next step forward

Virtual machines (or virtualization in general) were the precursor to container technology, and they allowed us to cut memory, compute and storage overhead when designing and deploying applications. With traditional ‘bare-metal’ servers, you get a pre-defined set of hardware resources (compute, memory, and storage) to work with. Virtualization improved utilization efficiency by dynamically allocating those resources to the host OS and applications, making it possible to run multiple VMs on top of a single set of hardware. If you wanted to componentize your application into a multi-node deployment, you would typically create a logical separation of concerns across multiple VMs – separating application servers, messaging brokers, databases, etc. into component virtual machines. This was great, but the building blocks of the enterprise application often became larger than necessary – for example, having to run a virtual machine with a full-blown OS just to host a single web service.


This is where containers shine. Modern container engines run on top of a host operating system and share the common binaries and libraries that the host OS provides. Application building blocks such as web applications, microservices, data persistence, messaging, and others are encapsulated into containers. These containerized components are then spawned into an environment where multiple containers run on top of the same host OS – gaining even more efficiency in compute, memory, and storage utilization. The earlier example of hosting a web service now becomes reasonable in a container-centric environment, because you shed most of the excess overhead needed to run it.
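As a concrete illustration of that encapsulation, a small web service might be packaged with a Dockerfile along the following lines. This is a generic sketch – the base image and file names are illustrative assumptions, not a description of Polaris Alpha's actual stack:

```dockerfile
# Minimal image for a hypothetical Python web service.
# Only the service's runtime and code are packaged; the host OS
# kernel is shared, so there is no full guest OS to boot.
FROM python:3.11-slim

WORKDIR /app

# Install only the dependencies this one service needs.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY service.py .

# One single-purpose process per container.
CMD ["python", "service.py"]
```

The resulting image carries just the service and its dependencies, which is exactly the shedding of per-VM overhead described above.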

Scaling with containers

Since a container’s building block is typically a smaller component than a virtual machine would hold, scaling becomes much simpler. Take a single service that accesses Twitter data, for example. If it’s a single-purpose, single-process component, a container engine can spawn and run multiple instances of that container image, in this case scaling the service to handle much greater velocity and volume of data. This is where you save overhead — by not having to spawn a whole new OS and virtual machine to run and scale a single process (such as messaging middleware, a microservice, a data persistence layer or a web app) at larger scale.
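One way to picture this kind of scale-out is a Compose definition for the hypothetical Twitter-data service. The service and image names below are illustrative only:

```yaml
# docker-compose.yml -- hypothetical single-purpose ingest service.
services:
  twitter-ingest:
    image: example/twitter-ingest:1.0   # illustrative image name
    restart: unless-stopped

# Scaling out is then just running more instances of the same image:
#   docker compose up --scale twitter-ingest=4
```

Each extra instance is one more lightweight process on the shared host OS, rather than one more full virtual machine.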

Additionally, we can now scale and tune at every level, replicating individual components as needed, with each component scaling independently of the others. In the early days of enterprise application development (i.e. the age of the app server), the unit of scale was a large, monolithic application server that you had to replicate in whole across several large servers kept in synchronization with each other, using gigabytes of RAM to run all the apps and services you could cram into them. The modern, container-centric, microservices-based enterprise applications of today are free of those burdens, bringing the unit of scale down to as little as a single microservice running in a container. This type of enterprise application ecosystem creates many benefits, all of which lead to more agile development, quicker iteration on enhancements, and ultimately better applications for your customers.

The implications for government vendors and the agencies they serve

Polaris Alpha is constantly asked to deploy to different cloud platforms across multiple government agencies. One might use Amazon GovCloud, another might use an internal PaaS like Pivotal Cloud Foundry, VMware vCloud, or Red Hat OpenShift.

This makes it critical to be confident that we can not only deploy software efficiently, but ensure that it will operate in these disparate environments. With containerization, we can do that without having to rewrite everything, every single time.

Every major cloud platform today supports containers, and the pace of adoption shows no sign of slowing. While containerization is a “behind the scenes” function not immediately visible to our customers, the cloud platforms they use enable more value by empowering us to make applications highly portable and ubiquitous.

Many of our development teams now take a “container-centric” view of building applications and deploying them into platforms like AWS, Azure and others. We’re building our DevOps strategy around it as well, because “compose once, deploy anywhere” helps us work more efficiently and deliver better value.
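“Compose once, deploy anywhere” can be sketched as a single Compose file that declares the application’s building blocks – web app, message broker, database – so the same definition can be brought up on any Docker-capable platform. All names and settings here are hypothetical placeholders:

```yaml
# Hypothetical docker-compose.yml composing the building blocks
# named earlier: a web app, a message broker, and a database.
services:
  web:
    image: example/web-app:1.0      # illustrative image name
    ports:
      - "8080:8080"
    depends_on:
      - broker
      - db
  broker:
    image: rabbitmq:3               # stock message-broker image
  db:
    image: postgres:16              # stock database image
    environment:
      POSTGRES_PASSWORD: example    # placeholder only
```

Because the file describes images and their wiring rather than host machines, the same composition can run on a developer laptop, GovCloud, or an on-premises PaaS without being rewritten.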

For our mission partners on the procurement side, this new modularity means better use of resources at their data centers — and ultimately cost savings and more predictable timetables for the mission-critical tools used to coordinate large amounts of information. Analysts and commanders appreciate having the right information at the right time to maintain operational integrity in a turbulent world as they fight both crime and terrorism in the 21st century.
