Introduction to Containers, and VMware Announces Photon and Lightwave

By Andrew Sullivan on 04/27/2015.

Does anyone remember the JeOS ("just enough operating system") versions from the major Linux distributions? They were purported to be ultra-small versions of the upstream OS that included only the bare minimum packages needed to execute applications. They gained some popularity, though never widespread adoption, for various reasons. What has supplanted, and surpassed, JeOS-type deployments?

Containers

Unless you’ve been under a rock, out of touch, or just oblivious for a while, you should already know what containers are (not “Dockers”! Docker is a container management technology). For those who aren’t aware, containers are old technology that the world is just waking up to. No, really, they’ve been around for a while…the Linux cgroup functionality was released in kernel version 2.6.24, way back in January 2008. If you are a Solaris admin (sorry…) then you are even more familiar with this type of technology, as Solaris Zones are very similar in principle and were released back in early 2004 (thanks Wikipedia).

Google has been using containers for a long time, with some rumors that they have been doing so for as long as a decade, and creates billions (with a B) of containers each week to supply compute resources for their applications.

Containers became very in vogue recently, when Docker made them accessible to us mere mortals. Docker wraps the complexity of container technology in an easy-to-use CLI and couples it with an equally approachable repository of containerized applications from which to build your custom application images.
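
As a rough illustration of that workflow (and nothing specific to the announcements below), here is a minimal sketch that drives the Docker CLI from Python. It assumes Docker is installed and its daemon is running; the nginx image is just a convenient example from the public registry, and the docker() helper is mine:

    # Minimal sketch: pull a pre-built image from the public registry and run it.
    # Assumes the Docker CLI is installed and the daemon is running.
    import subprocess

    def docker(*args):
        """Run a docker CLI command, echoing it first."""
        print("$ docker " + " ".join(args))
        subprocess.run(["docker"] + list(args), check=True)

    docker("pull", "nginx")                                         # fetch an image from the repository
    docker("run", "-d", "--name", "web", "-p", "8080:80", "nginx")  # start an isolated container
    docker("ps")                                                    # list what is running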

Note that I used the words “application images” very deliberately here. Containers are not a virtualization technology; they are an isolation technology. Containerized applications are packaged so that only the app and its dependencies are toted around. As it turns out (who knew?!), the kernel is really the only important part when executing a Linux application at the binary level. By packaging all the dependencies into a container and leveraging the kernel to execute everything, we can suddenly make applications very portable.
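
To make that packaging idea concrete, here is a hedged sketch of building such an image from Python. The app.py contents, the flask dependency, and the mycorp/myapp tag are purely hypothetical; only the Dockerfile instructions themselves (FROM, COPY, RUN, CMD) are standard Docker:

    # Sketch only: package a trivial, hypothetical app and a dependency into an image.
    import pathlib
    import subprocess

    pathlib.Path("app.py").write_text("print('hello from inside a container')\n")

    # The Dockerfile describes everything the app needs beyond the host kernel.
    pathlib.Path("Dockerfile").write_text(
        "FROM python:2.7\n"               # base userland the app expects
        "COPY app.py /app/app.py\n"       # the application itself
        "RUN pip install flask\n"         # an example dependency, baked into the image
        'CMD ["python", "/app/app.py"]\n'
    )

    # Build and tag the image; the result runs on any host whose kernel can
    # execute it, regardless of which distribution that host is running.
    subprocess.run(["docker", "build", "-t", "mycorp/myapp:1.0", "."], check=True)
    subprocess.run(["docker", "run", "--rm", "mycorp/myapp:1.0"], check=True)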

It no longer matters if you’re using Ubuntu, Red Hat, SLES, or any of the other distros. Instead it only matters that the kernel you are using is supported, regardless of distribution.

This has led to a new breed of Linux which is stripped down to the bare minimum needed to execute containers and provide manageability. Distros and tools such as CoreOS, Atomic, Mesos, and Kubernetes all cropped up to provide the ability to execute and manage containers at scale.

Why do we care about containers at scale? One word: microservices. Breaking the monolithic application into a multitude of microservices ideally allows developers and administrators to plan for, account for, and accommodate failure and scalability. By separating the application into smaller pieces we can begin to apply the principles of OOP to the application infrastructure: Encapsulation, Abstraction, Inheritance, and Polymorphism. This makes the application resilient, scalable, and (theoretically) easier to manage. A practical example of this?

Netflix

Their much-lauded Chaos Monkey tests the application to ensure it is resilient by introducing failure. By purposely causing failures, they are constantly testing the application and infrastructure to confirm they were designed to withstand them. How do you know your failure recovery plan works if you don’t test it?
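
As a toy illustration of that idea (and emphatically not Netflix’s actual tool), a chaos test against a pool of containers could be as simple as the sketch below; the app=web label convention and the helper function are invented for the example:

    # Toy chaos test: kill one container at random, then go verify the service
    # still responds. The "app=web" label is an invented convention.
    import random
    import subprocess

    def running_containers(label="app=web"):
        """Return the IDs of running containers carrying the given label."""
        out = subprocess.check_output(
            ["docker", "ps", "-q", "--filter", "label=" + label]
        )
        return out.decode().split()

    containers = running_containers()
    if containers:
        victim = random.choice(containers)
        subprocess.run(["docker", "kill", victim], check=True)  # inject the failure
        print("Killed", victim, "- does the application still serve traffic?")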

Note that none of this has anything to do with the underlying physical infrastructure, OS, or even the programming language used for the application. Instead, it is a design philosophy. The application is designed to be scalable, resilient, and encapsulated so that it can tolerate all of those nasty failures that inevitably happen. Making all of that happen means that adding/removing resources, refreshing the production code, and recovering from failure all need to happen quickly and efficiently.

Which is where containers come in. They are the perfect vehicle for isolated components of the application: they are quick to create and destroy, and all application dependencies are self-contained.

Does that mean that containers can’t be used for traditional applications? No, but much of the benefit is lost. Think of these scenarios:

  • I have a horizontally scalable web application with 100 server instances serving web pages to users. What happens when one fails? Nothing of consequence, because there are 99 others. And the failed one can simply be recreated, in seconds.
  • I have a spike in traffic, so I double the number of web server containers. How long did it take? Seconds.
  • The devs found an error and need to update the code. Great, deploy a new container image, then destroy the old instances and create new ones. It takes seconds. (A rough sketch of these two scenarios follows this list.)
  • The OS hosting the containers needs to be updated. Or I want to switch because of, well, any reason. What’s the impact? None. The containers are isolated from the hosting OS. They are self-contained.
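
Here is the rough sketch promised above for the scale-out and rolling-update scenarios, again driving the plain Docker CLI from Python. The mycorp/web image, its tags, and the instance count are illustrative, and in practice an orchestration tool would own this loop:

    # Sketch of scale-out and rolling update with an illustrative "mycorp/web" image.
    import subprocess

    def docker(*args):
        subprocess.run(["docker"] + list(args), check=True)

    # Scale out: add ten more identical web containers, each starting in seconds.
    for i in range(10):
        docker("run", "-d", "--name", "web-%d" % i, "mycorp/web:v1")

    # Rolling update: replace each old instance with one built from the new image.
    for i in range(10):
        docker("rm", "-f", "web-%d" % i)
        docker("run", "-d", "--name", "web-%d" % i, "mycorp/web:v2")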

Ok, I’ve prattled on long enough about containers in general. VMware had a big Monday this week with the announcement of two new projects:

Photon and Lightwave

Photon is VMware’s Linux distro for hosting containers. It is eerily similar to CoreOS (minus etcd, fleet, and flannel…which can be added afterward), though there are claims that it is optimized for being hosted on vSphere and vRealize Air (I haven’t been able to discover what those optimizations are). Photon, as a distribution, supports container formats from Docker, rkt, and Garden.

Note that each of these is still the same core technology…containers…the difference is the manageability around them.

At this point, there’s not much unique here. They may be the first to support all three major formats on the same OS out of the box, but that’s not really a differentiator.

Lightwave is interesting, though. It is an authentication and authorization service for, at least in their demo, the Photon OS. According to the announcement from VMware, it supports multi-tenancy and multiple authentication/authorization methods. This fills a gap that has existed for enterprises wanting to deploy a large, shared pool of container hosting resources. Judging by the graphic they displayed, it will integrate with not just the container and OS, but also the network (presumably some SDN solution…like NSX), the orchestration tool (Lattice?), and even the repository of container images.

My belief, and to be clear I have no knowledge of the future with regard to VMware’s plans, is that the integration will grow between the VMware ecosystem (vSphere, vCenter, NSX), Photon, Lightwave…and Lattice. VMware is already pushing the edge of infrastructure integration with applications which have been deployed to virtual machines, and I don’t see why they would let up with containers. Enabling enterprises to seamlessly deploy and manage container-based and VM-based applications, side-by-side, using a common set of tools, and a common hypervisor, makes for an interesting use case.

What’s interesting to me is that if I’m right (and that’s a big if), that observation, plus the announcements, means that VMware is really trying to make all of this containerization palatable to traditional infrastructure admins. Even though the tagline is “Making developers first class citizens in the datacenter”, VMware’s core customer base is infrastructure admins, and these releases are targeted at manageability and security…two things that infrastructure admins care very much about.

The code jockeys care about making the application functional, robust, fast, etc. As someone who has been an infrastructure and app admin (not a dev…) for a long time, I care about reliability, security, integrity, and serviceability. We both care about scalability. Containers mean that the developers can easily create, test, and maintain their code. For the infrastructure/app admins, they mean that the application can be easily maintained, managed, and scaled. Now we just need the toolset to make it happen!

One last thought…

What about storage? Where do your containers get their storage from? Where does that containerized MySQL, Redis, MongoDB, or Postgres instance store all your precious data? How about backups?
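
One common answer, sketched below with the Docker CLI, is a host-mounted volume so the data outlives any individual container. The host path, the example password, and the mysql:5.6 tag are illustrative only; the article linked below digs into the storage question properly:

    # Sketch: persist a containerized database's data on a host-mounted volume.
    # Paths, credentials, and the image tag are illustrative examples.
    import subprocess

    subprocess.run(
        ["docker", "run", "-d", "--name", "db",
         "-v", "/data/mysql:/var/lib/mysql",    # data lives on the host, not in the container
         "-e", "MYSQL_ROOT_PASSWORD=example",   # example-only credential
         "mysql:5.6"],
        check=True,
    )

    # Destroying and recreating the "db" container now leaves /data/mysql intact.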

Start here: NetApp and Docker Technology to Enhance and Accelerate Dev/Ops in Hybrid-Cloud Environments

Andrew Sullivan
Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.
