6109R: OpenVZ 7 or LXC/LXD?

As some of you are aware: the development of Aventurin{e} 6109R had been put on hold to explore the possibility of switching from OpenVZ 6 straight to LXC or LXD instead of pursuing OpenVZ 7. Here are our findings.

Whenever you criticize something, you risk offending or causing objections, no matter how valid the criticism is. When it's time to decide between several options, taking the wrong course can be as bad as taking no action at all. Sometimes it can even be worse, as returning to a previously abandoned path takes extra effort.

Back in December 2017 I had Aventurin{e} 6109R about 90% done as far as Container management goes and had also dabbled in KVM creation from within the GUI. Yet I decided to set further development aside, step away, take a fresh look and explore alternatives. Not only for our own infrastructure (which at that time ran entirely on Aventurin{e} 6108R), but also for some of my larger enterprise-level clients.

The state of OpenVZ had us concerned. We had all noticed key developers leaving, had seen OpenVZ 6 hibernating on pure life support, and OpenVZ 7 was too much of an unknown quantity. Yes, I had gathered a lot of experience with it while working on 6109R, but was *this* really the way forward? Parallels, Inc. (and their predecessor SWsoft before them) had done a stellar job with OpenVZ from the get-go back in 2005. Their contributions to virtualization in general and to virtualization in the Linux kernel over more than a decade are well known and respected. Yet the Linux mainstream never adopted many of their ideas and concepts: peer review, differing philosophies, diverging corporate interests among the contributors and the usual compromises made sure that the virtualization "components" built into modern Linux kernels are a far cry from what the makers of OpenVZ apparently had in mind. It was as if the Linux world as a whole was proving their concept wrong and other concepts superior. Take LXC and LXD, though of course they're not the only ones.

Another competitor (whom I value and respect), who offers an "all in one GUI + virtualization OS" bundle, didn't make the jump from OpenVZ 6 to OpenVZ 7 either, but went for something Debian-based that runs on a modified LXC.

Of course my clients also took notice and had questions about the future viability of OpenVZ, too. Who wouldn't?

So I set out on a quest to explore LXC and LXD and to see if it wouldn't be best to ditch OpenVZ 7 for it. If that had worked out, 6109R would run LXD. No doubt about it.

Tell me more about LXC/LXD ...

LXD is a more modern container manager built on top of LXC. The container functionality itself is natively supported by modern Linux kernels, but beyond the kernel parts you also need the appropriate userland tools to complement them. I know, I'm simplifying. But this won't be nuts and bolts, so we can let that slide.

The older LXC is a bit of a puzzle palace of assorted shell tools. It has grown over the years, with natural growth and outcroppings that (to the LXC novice) make it a bit ungainly. The newer LXD on the other hand is much more "organized". You end up with just a few commands (and many parameters) and even a REST API that (in our case) the 6109R GUI could directly "talk" to in order to perform LXD-related transactions. Hence, given the choice between LXC and LXD, it's usually the better idea to go for LXD, as it's more modern.
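To illustrate (container name and image alias below are just examples): day-to-day LXD handling boils down to a handful of `lxc` subcommands, and the same operations are exposed over a local REST API.

```shell
# Typical LXD workflow from the shell (names and images are examples):
lxc launch images:centos/7 web01          # create and start a container
lxc config set web01 limits.memory 2GB    # adjust a resource limit on the fly
lxc exec web01 -- uptime                  # run a command inside the container
lxc list                                  # overview of all containers

# The same operations via the local REST API, e.g. listing containers:
curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers
```

A GUI like the one in 6109R could skip the `lxc` client entirely and talk to that API directly.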

Both do containerized virtualization, so the general principle is the same as with OpenVZ 6 and OpenVZ 7 Containers. Just the surrounding baggage and the general handling are different - if you permit me this very generous generalization.

I tried both on various OS's and spent a few months examining LXD on Ubuntu 16.04 LTS and 18.04 LTS. The documentation is pretty good and there are many practical examples and guides around for anything LXD related. There is a certain learning curve, but it was a fun journey almost all the way.

Eventually the time came to bite the bullet and make a hard decision. LXD: go or no go?

Why ... and why not?

The only verdict I could come to was: LXD is great, and if its makers don't fuck it up, it'll have a splendid future. No doubt about it. But it still needs time to mature, and it needs to permeate deeper into mainline Linux distributions without having to hack them out of shape and form.

OpenVZ does security via a "bottom up, everything included" principle. Containers are very solidly isolated from each other. Over the years there have been a few issues and vulnerabilities, but in general you can say they did a splendid job, because security was first and foremost on their minds. Not just Container isolation, but network security as well. If you allow root access to an untrusted client on one Container, you don't want him sniffing the network traffic of neighboring Containers, the node or even the whole subnet. For that reason OpenVZ introduced the "venet" network interface, which tackled that in a really neat, orderly and (for the end user) very simple fashion. Yes, ethX-style interfaces (or vethX) could alternatively be used, but people who knew what they were doing wouldn't.
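For illustration, this is what the venet approach looks like in classic OpenVZ (vzctl syntax; the CT ID and IP address are made up):

```shell
# venet: IP-level, point-to-point, no MAC address -- the Container cannot
# switch the interface into promiscuous mode to sniff neighboring traffic.
vzctl set 101 --ipadd 192.0.2.10 --save

# The bridged veth alternative (own MAC address, Ethernet-level access):
vzctl set 101 --netif_add eth0 --save
```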

If I look at these two features (which OpenVZ has had for 12 years!), then the much, much newer LXD suddenly looks pretty awful. How awful depends partly on the OS you're running it on and on the version of LXD, which is still bleeding edge in some regards, with frequent updates - which you only get in a reasonable time frame from certain OS vendors, while others stick with a version locked into a certain state or (at best) still offer only LXC.

Take CentOS 7 for example. It would have been my go-to OS for LXD, but it immediately proved to be a no-show: no LXD available and only an ancient LXC included. Even *that* was unusable, because a certain switch in the CentOS kernel (set at compile time) wouldn't allow LXC to use proper Container isolation. From a security point of view this was unacceptable.
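My assumption is that the switch in question was the user-namespace support, which RHEL/CentOS 7 shipped disabled by default. A rough sketch of how one would check and (at one's own risk) enable it:

```shell
# 0 means unprivileged user namespaces are off (assumption: this is the
# compile/boot-time switch referred to above):
cat /proc/sys/user/max_user_namespaces

# Enabling them on CentOS 7 required a boot parameter plus a sysctl:
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
echo "user.max_user_namespaces=15000" > /etc/sysctl.d/99-userns.conf
```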

Debian 9 has fairly decent LXC and LXD integration, but Ubuntu totally stole the show by providing an almost current LXD via APT and a totally current one via Snap. But like I said elsewhere: it'll be a cold day in hell before I ever consider using Ubuntu as a server, let alone as a mission-critical infrastructure node for virtualization. It's a bloody desktop OS, and kernel updates every 3-4 days (as they sometimes do) aren't really encouraging server usage.
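For comparison, this is all it takes on Ubuntu to get a current LXD - one reason it "stole the show":

```shell
# Current LXD via Snap:
sudo snap install lxd
sudo lxd init            # interactive setup: storage backend, network bridge, etc.

# Or the slightly older release line via APT:
sudo apt install lxd
```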

Yet with a bit of ingenuity Debian 9 can be turned into a pretty neat LXD node if one sets one's mind to it. As for CentOS 7? I could have taken an axe to it: recompiled the kernel (whenever an updated one comes out), tackled some cgroups and SELinux issues and shoe-horned LXD onto it. However, it would have turned into a maintenance nightmare in the long run, as every OS update would need to be carefully examined to find out if it interferes with all the modifications of the new bastardized CentOS 7 hybrid.

So CentOS 7 was out and Debian 9 was still in the race. An acceptable level of Container isolation was more or less a given at that point, yet a sour taste remained. It was clear that LXC/LXD, much like Docker, were using a top-down approach as far as security was concerned: they started at "no isolation" and "equal privileges" and then tightened the screws bit by bit. That is the exact opposite of what OpenVZ had done, and I'm still convinced that it's way too easy for things to jump off the rails with the top-down approach.

The next issue on my list was network security. Of course LXC/LXD don't have 'venet' interfaces the way OpenVZ has them. In fact it gets pretty interesting once you try to isolate network traffic to and from an LXC/LXD container, because you pretty much have to do it via VLANs. Yes, right. VLANs. Picture Jolly Jumper saying "And that's what he calls a solution!" while carrying his cowboy and a one-armed bandit on a tightrope across a bottomless canyon. Yeah, doable, but meeeeeh.
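A rough sketch of that VLAN dance (interface names, VLAN ID and profile name are all made up): you create a VLAN sub-interface on the node and hand it to the container via a macvlan NIC in an LXD profile.

```shell
# VLAN 100 for one customer; every name/ID here is an example:
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up

# LXD profile whose NIC rides on that VLAN:
lxc profile copy default customer100
lxc profile device set customer100 eth0 nictype macvlan
lxc profile device set customer100 eth0 parent eth0.100

lxc launch images:debian/9 ct100 -p customer100
```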

Lastly: can we pretty please have disk quotas inside a Container? It's not much to ask, is it? For 12+ years OpenVZ has had second-level disk quotas and even nuts-and-bolts knobs with which you could restrict certain resource usage, down to how many TCP send and receive buffers a Container was allowed to use simultaneously. I won't ask for *that*, but disk quota really is a must.

Let's see what the documentation says about disk quota on LXC/LXD:

"Disk quotas per container are possible when using separate partitions for each container with the help of LVM, or when the underlying host filesystem is btrfs, in which case btrfs subvolumes are automatically used."

Can you please hold my beer? I need some shots. Preferably .50 cal in the general direction of whoever managed to offload THIS responsibility - so gracefully - onto the user's shoulders. Which is basically the same story as with secure network interfaces for LXC/LXD Containers: "We don't provide that. If you want security, you can build yourself some VLANs." In the end this is - of course - something that a system designer can cope with, work around and overcome. So even that wasn't the final nail in the coffin, although it contributed to it.
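For completeness, the workaround the documentation hints at looks roughly like this on a reasonably current LXD (pool name, container name and sizes are examples):

```shell
# btrfs-backed storage pool; each container lands in its own subvolume:
lxc storage create tank btrfs size=100GB
lxc launch images:debian/9 ct1 -s tank

# Per-container size limit, enforced via btrfs qgroups:
lxc config device override ct1 root size=10GB
```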

At the end of the day our enterprise clients want security, stability, scalability and supportability (vendor support, or through their own means or contractors), and they also think about a "Plan B" for various worst-case or less-than-optimal scenarios. A self-mixed Debian 9 or a bastardized CentOS 7 as base OS for LXC or LXD would have been doable, but at the expense of interoperability, transparency and maintainability through third parties, potentially resulting in a deeper vendor lock-in.

I'll keep my attention pinned on LXD and the course it takes, and perhaps the next Aventurin{e} (after 6109R) will use it. Until then, OpenVZ 7 offers us a comprehensive, secure, maintained, well documented and out-of-the-box virtualization platform with some features that are still unparalleled by its competitors.

Feb 20, 2018 Category: General Posted by: mstauber