Container Lifecycle Management, Part One: Not Your Father’s VMs

May 24th, 2016 7:32am by David Dennis

David Dennis is VP of Technical Marketing at Bitnami. Prior to Bitnami, David worked in product management, technical marketing, and product development leadership roles at GroundWork Open Source, Levanta, Mercury Interactive, Hewlett-Packard, and Symantec.

It was during the final Q&A session at the Open Container Night Meetup in Santa Clara that we heard the highest concentration of questions, in a single place, about how to manage containers in production or at scale:

“How do I deal with security tokens?”

“What’s a good backup or DR policy to use with containers?”

“We’re investigating using containers, but it just seems like a hassle for an existing app.”

“We’re using Chef to deploy updates to containers, but that isn’t working so well…any good alternatives?”

We’ve had to deal with these problems ourselves.

To put this in context, this was a room full of 200 to 300 IT and DevOps professionals, many working for marquee names in Silicon Valley, who had taken the time on a rainy Tuesday night in February to listen to talks about containers well into the evening. In other words, this was a group of seasoned professionals who generally know their stuff.

And yet, overall, there was more confusion than clarity about how to manage containers in production, particularly around system architecture and container lifecycle management.

What’s going on here?

Bitnami has made a business out of creating, updating and managing over 137,000 unique cloud and virtual machine images (along with the technologies and processes to make that efficient), and is now doing the same for containers. We’ve had to find answers to many of the same questions being asked by the audience. This is especially true in the areas of container-based stack architectures, packaging, and updates, which is our core competency.



In a nutshell, a lot of the procedures, mental models, and technology tools that have become best practices for virtual machine lifecycle management (whether on-premise or in the cloud), are anti-patterns when applied to containers.

This will be a two-part series. In part one, we'll address some of the broad differences between virtual machine and container lifecycle management that are important to understand. In part two, we'll dive into specific technologies we have developed to help make container application management and updates easier.

Containers Are Not Just Leaner, Better Virtual Machines

Yes, containers tend to be smaller and more efficient than virtual machines, mainly by getting rid of superfluous and duplicative OS features and functions.

And while there are also minimalist Linux builds, and cloud-based VMs keep getting smaller and cheaper (e.g., Amazon's t2.nano instances), there are fundamental architectural differences between the two that hold true regardless of how small a virtual machine becomes:

Virtual Machines   | Containers
Persistent Storage | Ephemeral Storage
Layers             | Dockerfile
Immortal           | Short-lived
Mutable            | Immutable
Patch              | Replace

The importance of these differences is often under-estimated when it comes to both application architecture and lifecycle management.

Applications originally architected around virtual machines don't automatically become web-scale and resilient simply by repackaging them as containers. State or configuration information that relies on persistent local storage, in particular, can become a problem once containers are treated as ephemeral.
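
For instance, a container that keeps its state or configuration on local persistent storage fights this model. A common re-architecture step is to inject configuration at deploy time and keep durable state in an external service. Below is a minimal, hypothetical sketch of what that can look like as a Kubernetes pod spec (orchestration is covered later in this article); the names, image, and connection string are illustrative only, not something this article prescribes:

    # Hypothetical pod that holds no state of its own: configuration is injected
    # through the environment and durable data lives in an external database,
    # so the container can be killed and replaced at any time.
    apiVersion: v1
    kind: Pod
    metadata:
      name: webapp                                   # illustrative name
    spec:
      containers:
        - name: webapp
          image: registry.example.com/webapp:1.4.2   # illustrative image and tag
          env:
            - name: DATABASE_URL                     # durable state lives outside the pod
              value: "postgres://db.internal:5432/app"
            - name: FEATURE_FLAGS                    # config comes from a ConfigMap,
              valueFrom:                             # not from files baked into the image
                configMapKeyRef:
                  name: webapp-config
                  key: feature-flags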

Similarly, lifecycle, change, and configuration management technologies and habits that evolved over 5-10 years of managing virtual infrastructure will need to be re-examined. And deploying updates to containers via something like Chef or Puppet doesn't usually make much sense given their ephemeral nature.

But now, we run into a tooling and process problem: most organizations that are over five years old have ingrained habits, tools (Chef, Puppet, CFEngine, Red Hat Satellite, several VMware products, etc.), training and processes (ITIL anyone?) specifically designed to manage virtual machines. Trying to force-fit VM-centric management tools and processes to containers can result in a very large and frustrating bag of hurt.

And yet a sizable portion of IT professionals don't appear to be using anything for automated builds, often considered a lynchpin of container-based DevOps. In a May 2016 Bitnami user survey, over 50 percent of 3,589 respondents reported that they don't use automated build or CI/CD tools (36 percent answered "None" and 27 percent "Manual"). It's a good guess that these respondents are either doing things manually (more likely if their container usage is smaller scale) or using traditional server-centric deployment tools (more likely if their container usage is larger scale).

Moving to Immutability

So if the server lifecycle model (virtual or physical), and its associated tools and processes, is not a great fit for container-based applications, what does the alternative look like?

[This may be remedial for some readers, but may be new info for those who are just getting started with containers.]

In a canonical container-native application, built around a CI system (like Jenkins) and an orchestration system (like Kubernetes or Mesos), the life cycle follows a pattern similar to the diagram below:

[Diagram: the container lifecycle, from base image definition through CI to orchestrated rolling updates. Source: Bitnami]

  1. A base container definition (a Dockerfile), containing the minimum OS, runtime, frameworks, and application components, is created first.
  2. Next, this is pushed to the CI system. Once it has been cleared (go on green), the application container is ready to be built.
  3. The application container is built, including any custom code that needs to be added.
  4. The application container is pushed to an orchestration system, where it is then replicated across a cluster to allow for resiliency, scalability, and rolling updates (more on this below).
  5. When a new container update is available, nothing is patched. This is a key difference from virtual machine management. Instead, an entirely new, updated container is created. This proceeds through the steps described above, but this time, when the orchestration system is reached, a rolling update is applied: older containers in the cluster are killed and replaced by newer versions (a minimal sketch of steps 4 and 5 follows this list).
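
As a concrete sketch of steps 4 and 5, here is roughly what the rolling-update piece might look like on Kubernetes, one of the orchestrators mentioned above. The deployment name, image, and registry below are purely illustrative, not a configuration this article prescribes:

    # Hypothetical Deployment: three replicas for resiliency; updates arrive as
    # entirely new images, and old containers are replaced rather than patched.
    apiVersion: apps/v1                  # older clusters expose this under a beta API group
    kind: Deployment
    metadata:
      name: myapp                        # illustrative name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1              # take down at most one old container at a time
          maxSurge: 1                    # while at most one extra new one starts up
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0.0   # built and pushed by the CI system

When CI publishes an updated image (say, registry.example.com/myapp:1.0.1), pointing the Deployment at the new tag, for example with "kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.0.1", triggers the rolling replacement described in step 5: old containers are killed as new ones take their place.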

Following this model, applications can achieve immutability, which, when translated into classic IT ops terms, equates to massively reduced downtime and maintenance windows.

Sounds wonderful, right?

The Cambrian Explosion Problem

The simple container lifecycle described above scales pretty well as written when the variety of containers is relatively modest.

In contrast, at Bitnami we have over 137,000 images to keep updated rapidly (all the applications in the Bitnami catalog × all the cloud vendors and local downloads we support × the number of virtual machine instance sizes), so this explosion of diversity is a concern we've had to address. As of this writing, not all Bitnami applications are available in containers yet, but they will be soon.

Now you may be thinking, “Well, I don’t have to worry about that, we’ll never have that many images to worry about,” and you’re probably right. But if you’re serious about using containers in a major way, you can reach several hundred or even a thousand variants very easily:

             | Iteration #1 | Iteration #2 | Iteration #3 | Iteration #4
Languages    | 6            | 8            | 10           | 20
Frameworks   | 5            | 20           | 40           | 50
OS Variants  | 2            | 4            | 5            | 5
Combinations | 60           | 640          | 2,000        | 5,000

And that’s just unique images — now think about keeping them updated at different revision levels.
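
To make the multiplication concrete, even the modest Iteration #1 numbers fan out quickly once they're expressed as a build matrix. The sketch below is a hypothetical, tool-agnostic matrix definition; the axis values are placeholders chosen only to match the first column of the table above:

    # Hypothetical build matrix: 6 languages x 5 frameworks x 2 OS variants
    # = 60 unique images to build, test, and keep patched (Iteration #1 above),
    # before you count the revision levels each image must be maintained at.
    matrix:
      language:   [go, java, node, php, python, ruby]
      framework:  [fw-a, fw-b, fw-c, fw-d, fw-e]
      os_variant: [debian, ubuntu]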

Key Take-Aways

In the next article, we’ll explore in detail Stacksmith, a service that Bitnami has developed to help automate stack updates at a large scale.

But in the meantime, if you’re beginning to seriously consider using containers in production, keep the following in mind:

  1. Containers aren't just better virtual machines. There are important architectural differences (storage, security) that affect how applications are developed and how they're managed in production. You may need to educate your team(s) on these differences to avoid costly missteps.
  2. Legacy applications will likely need to be re-architected (wholly or partially) to truly benefit from the increases in agility, scalability, and resiliency promised by a microservices architecture. Simply stuffing an application (or a component thereof) into a container doesn't magically make these things happen.
  3. Your existing procedures, habits, and tools for updating and deploying virtual machines probably aren't a very good fit for containers. If you plan to move to containers in production at any kind of large scale, it's time to start investigating alternative tools.
  4. Manually creating and updating containers may work at a small scale, but as your usage of containers grows, the diversity of images and their rate of change is likely to become a bottleneck that crimps your ability to keep up with critical security and stack updates. This may leave your systems vulnerable to exploits or unable to fix bugs as quickly as the market demands. Plan for growth and evaluate methods to automate container creation and stack updates.

Bitnami is a sponsor of The New Stack.

Feature Image: Radio house control room in Fabianinkatu, 1934 via New Old Stock.
