LXD – next generation system container manager release 4.3 (linuxcontainers.org)
202 points by dragonsh on July 14, 2020 | hide | past | favorite | 84 comments


LXD is actually a cool technology. It's not like Docker/k8s in terms of each "node" usually running only one thing. It's more along the lines of a "VPS in a box", where you can launch virtual servers using a simple command line interface.

I use it to run all of the virtual server machines in my home LAN (mostly virtual desktops) with each machine exposed on the real LAN's DHCP server (so I can move them to a different box if needed), and to launch ephemeral "server boxes" to run some tests or builds or whatever on without polluting my dev environment (think virtualenv, but for anything).

I also used it as the base for my virtual system builder scripts ("container" VMs with LXD, "VM" VMs with libvirt since LXD didn't support VMs back then): https://github.com/kstenerud/virtual-builders
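For anyone who hasn't tried it, the throwaway-environment workflow described above is only a few commands. A sketch (this assumes an installed and initialized LXD; the image alias and instance name are just examples), with a helper that prints each command instead of running it:

```shell
#!/bin/sh
# Sketch only: each command is printed, not executed, since this assumes
# an installed and initialized LXD. Drop the "show" prefix to run for real.
show() { echo "+ $*"; }

show lxc launch ubuntu:20.04 scratch --ephemeral  # ephemeral: deleted on stop
show lxc exec scratch -- bash                     # get a shell inside it
show lxc stop scratch                             # instance is gone after this
```

Because the instance is ephemeral, stopping it cleans everything up, which is what makes it feel like virtualenv-for-anything.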


LXD is indeed nice, but it still isn't very widely adopted. I believe this comes down to 3 main issues:

1) It's strongly connected to Canonical and Ubuntu. This is mostly a matter of perception and it is an actual community project. However, I can understand people not feeling comfortable with "snap install lxd".

2) It sits somewhere in between k8s and docker engine. Over time, it will probably get more k8s-like features but still it is a weird position to be in.

3) It lacks a rich ecosystem of tools supporting it, and a web UI. This makes it hard for newcomers to adopt. We're working on a web UI ourselves as part of our open source cloud management platform (https://github.com/mistio/mist-ce) and would love to hear your thoughts.


Indeed, we have major issues with snap (it doesn't work with $HOME on NFS, and auto-updates with very few controls are a terrible idea on servers), so we avoid anything dependent on it. Otherwise, I really like the concept.


It's unfortunate that the effort to get LXD packaged natively in Debian [1] seems to have stalled, as this is definitely one of the last remaining drawbacks for me.

For example, when LXD was still available as a .deb from upstream, it was possible to run LXD inside a container and do nesting, but with the snap, that isn't possible anymore. (However it is now with the --vm flag, though that's really a different mechanism.)

As I understand it the big remaining difficulty is dqlite.

[1] https://wiki.debian.org/LXD


I did write a bit about my experience packaging LXD for Arch Linux a while back.

I got a ping from a Debian dev telling me the Debian dev working on LXD enjoyed the article, and has been trying to pick up on the work again. The linked wiki page is the outdated one.

https://linderud.dev/blog/packaging-lxd-for-arch-linux/


It's packaged for Void, though I'm not sure what that entails. I will say it works well enough that I didn't find out about the snap mess until I tried to install it on another distro later.


Debian has stricter rules regarding packaging and vendoring dependencies. Void and Arch package LXD in a similar fashion, separating out all the C dependencies. However, neither of us separates out the different Go dependencies; they are all vendored inside the LXD package.

Debian separates out all of these into their own packages.


I made a GUI too, but don't actively develop it anymore. https://github.com/dobin/lxd-webgui


As it requires Docker, it's probably difficult for us to try. Do you have any instructions on how to install it on bare metal? We can convert Docker images to LXD container images to run, and can reverse engineer a Dockerfile to create LXD containers, but we try to avoid environment variables for configuring and running services, as they are a kind of security vulnerability.


Unfortunately we currently don't. We do have the option to install it on k8s with a helm chart though https://github.com/mistio/mist-ce/tree/master/chart/mist


I wonder how Mist compares with OpenNebula, which seems to be in the same area? From a quick glance it looks more complex. (OpenNebula supports LXD directly.)


At a high level, Mist is a more "general purpose" platform which supports more providers (20) and more workflows. This increases the perceived complexity; however, it isn't more complicated to use than your average public cloud console. In fact, we strive to keep things as simple and as agnostic as possible. I'd be happy to arrange a demo if you'd like to see it in action. You can reach us on Github or from our website at https://mist.io.


OpenNebula definitely needs to revive the libcloud interface, or similar; I was thinking of complexity in operation more than in use. I ask mainly out of general interest, as I can't imagine a free software management system ending up too expensive for an institution that's too broke to retain staff. I might have a play sometime; thanks for the response. I haven't found anyone with the experience to compare these sorts of systems in operation, despite seeing the need a while back.


> 1) It's strongly connected to Canonical and Ubuntu. This is mostly a matter of perception and it is an actual community project. However, I can understand people not feeling comfortable with "snap install lxd".

Last time I tried it on Fedora it did not work (less than 6 months ago).

Also it offers nothing I want over podman with --rootfs


I've tried both podman and lxd with success but I'm curious, what do you use a tool like that for, mostly?

Not to seem like a hypeman for Kubernetes and similar tools, but I actually seem to only ever use containers combined with something like Kubernetes or Docker Swarm. What do you do that you want to do specifically on one machine? Hosting something? Automation à la CI/CD?

Again, I am actually asking for good use cases without orchestration platforms, I am just curious.


I use it as a replacement for situations where I used to use KVM for Linux VMs. For example, my tvheadend and Zoneminder servers are both running inside LXD containers so that I don't pollute my host machine's environment. It's also a nice way to try out another distro other than what the host machine runs with close-to-metal performance.


One thing podman is used for is bootstrapping environments for package building under mock, which at least affects people doing Fedora maintenance.


> I've tried both podman and lxd with success but I'm curious, what do you use a tool like that for, mostly?

I use podman for dev and test environments, CI and CD workers and testing OCI containers that I eventually deploy in K8S. No production use cases. Hoping to soon see K3S working inside podman though, and then I would use it for deploying K3S :)


That isn’t a good comparison. While podman can run systemd inside a container, it isn’t widely adopted in the images in docker hub and elsewhere. There is probably just fedora supporting this. Whereas in LXD it’s normal to run a full systemd inside a container.


> There is probably just fedora supporting this.

How do you mean? I've used podman on Void Linux, openSUSE, GitLab CI, and I think some others that I'm forgetting (I distro hop a lot) and it's worked great.


> While podman can run systemd inside a container, it isn’t widely adopted in the images in docker hub and elsewhere.

With podman and rootfs it's also normal to run a full systemd inside a container and you don't need special considerations from OCI images for rootfs to work just fine.

> There is probably just fedora supporting this.

RHEL is behind podman and I will take RHEL support over Canonical every day.

Podman is also available on most major distros and easy to port to new ones without requiring someone to use some proprietary crapware solution like snappy.
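To make the --rootfs comparison concrete, here's roughly what that workflow looks like. A sketch (the tarball name and rootfs directory are hypothetical, and the commands are printed rather than executed):

```shell
#!/bin/sh
# Sketch only: commands are printed, not executed. The tarball name and
# rootfs directory are hypothetical.
show() { echo "+ $*"; }

# Unpack any distro rootfs tarball, then hand the directory to podman.
# No OCI image is involved; podman just uses the directory as the root.
show mkdir -p ./alpine-rootfs
show tar -xf alpine-minirootfs.tar.gz -C ./alpine-rootfs
show podman run --rm -it --rootfs ./alpine-rootfs /sbin/init
```

The point being that the rootfs needs no special preparation to be usable this way.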


> LXD is actually a cool technology.

It is a pretty nifty idea but like all things made by Canonical it is basically digitized garbage.

I tried to get LXC running on Fedora some months ago, wondering why there are no official packages, and then I very quickly noticed why. LXD uses the old Ubuntu trick of calling sysv-style init scripts from systemd unit files without so much as a "|| exit 1" [1][2] - and I don't have time for that bullshit.

If you want vastly superior quality try podman with --rootfs or virt-install. At least those are not made by people with a disdain for error checking.

[1]: https://github.com/lxc/lxc/blob/master/config/init/systemd/l... [2]: https://github.com/lxc/lxc/blob/master/config/init/common/lx...


> It is a pretty nifty idea but like all things made by Canonical it is basically digitized garbage.

> At least those are not made by people with a disdain for error checking.

Every single time there's a topic about Ubuntu or Canonical you seem to go straight into offensive mode. I can tell you use Fedora, but that's not typical of Fedora users and developers, and it's no excuse.

I'm an open source developer and believer, but this kind of behavior slowly burns my soul, even more when it seems accepted. I've worked on enough projects in my life that it's absolutely certain that you use my code regularly. It's inside Go, Python, APT, and RPM by the way (hello Jeff Johnson, wherever you are), and in key libraries that you surely depend on as well. And I never heard you complaining about any of that with such anger here.

I'm also Canonical's CTO, and I was one of the key designers and developers that started snaps, and juju, and other key projects from Canonical. I usually hear such blind hate in silence, but sometimes it's just too much. I don't understand why we do that to ourselves, as a Linux community. Why is it okay to openly offend unknown people that we almost certainly depend upon? What is it that we came here to do, again?


You're confusing LXC with LXD.

LXC is legacy and is mostly left as-is for existing users who are happy [and don't want the LXD paradigm shift]. Think of it as 1.0. You're complaining about stuff that is effectively a maintenance branch. At least your links point to LXC, not LXD.

LXD is the new thing. Think of it as 2.0 and onwards.


I'm not sure if that is a good description. LXC is a library and set of command line utilities to run containers. LXD is a clustered service that makes use of the LXC library.


I admit it is confusing. LXC typically means the lxc-* commands like lxc-start and so on. That's the daemon-less "old way". The links the grandparent provided were to this, and their complaint is about stuff that is in maintenance mode and not actively developed any more.

LXD is a complete rewrite. There is now a server daemon called "lxd", and the client for that is "lxc".


Proxmox is also superior.


Proxmox is an absolutely great product; it's been in production at 7 clients for over 8 years, some with ZFS, some with Ceph, some with LXC, some with KVM. It's rock-solid and the support is great!


Yup. I discovered LXC containers with Proxmox; now I have many of them running in my network.


Yeah, been playing with Proxmox on my homelab. I put it on an AMD Ryzen 5 3600 (6 cores, 12 threads) with 32 GB of RAM, and it's been running great with containers for when I want to fool around. Fantastic piece of software that has matured well.


Thanks! That's good to know because I've been having troubles with Ubuntu 20 and have been considering switching back to redhat.

Then the only remaining part would be figuring out how to do zfs on root in redhat. I do it with a chroot bash script for Ubuntu server atm, but hopefully there's a debootstrap kind of thing on redhat that I can use as a base.


> I do it with a chroot bash script for Ubuntu server atm

I’ve been using ZFS root on Ubuntu without any such nonsense for years, but I had to debootstrap it manually.

See guides here: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubu...

With Ubuntu 20.04 the standard installer should support setting up a ZFS root without all that manual work, but I haven’t tested it myself.


Yes, that's what I meant. The script does the partitioning, zfs init, chroot, debootstrap, and apt install. You just run it from a live cd.

https://github.com/kstenerud/ubuntu-server-zfs


> I do it with a chroot bash script for Ubuntu server atm, but hopefully there's a debootstrap kind of thing on redhat that I can use as a base.

There are ways to do chroot installs on RHEL but sadly not well documented.

Some resources on this:

- https://glacion.com/2019/06/16/Fedora.html

- https://github.com/glacion/fedora-chroot-installation

- https://web.archive.org/web/20150912004438/https://wiki.cent...

Most of that will work for RHEL with some minor modifications.

Pity it is not better documented.


YUM/DNF has bootstrap-in-chroot functionality built-in.

Just set up your disk layout the way you want on a disk device or a loopback disk backed by an image, and do something like:

$ sudo dnf --installroot=/path/to/mounted/target group install @core @hardware-support

And you're good to go. You can see the list of groups available to include by using "dnf group list --ids".
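Put together, a minimal bootstrap might look like this. A sketch (the device path, release version and group names are examples; commands are printed rather than executed since a real run needs root, and your target will still need fstab, bootloader and similar setup on top):

```shell
#!/bin/sh
# Sketch only: commands are printed, not executed (a real run needs root).
# Device path, release version and groups are examples.
show() { echo "+ $*"; }

show mount /dev/sdX1 /mnt/target        # your prepared target filesystem
show dnf --installroot=/mnt/target --releasever=32 \
  group install @core @hardware-support
show dnf group list --ids               # discover other available groups
```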


> And you're good to go.

If you opened the first link I shared you would see more is needed than this, which is why I linked to something explaining both this and the additional steps needed.


In fairness, distros seem to play this kind of "joke" on one another.

About six months ago I tried running SystemTap on Ubuntu...


Calling sysv style init scripts with no error checking from systemd unit files is a joke on Canonical, not any other distro.


> Now with virtual machines being supported by LXD, we found ourselves needing to support attaching both our traditional filesystem backed volumes to virtual machines (which has been possible for a while and uses 9p) as well as allowing for additional raw disks to be attached to virtual machines.

Didn't know virtual machine support was offered by LXD -- I haven't had much time to actually play with LXD but sure hope it gets more coverage in the future. It's like the alternate more stable (and feature-ful in some rights since they've had rootless containers longer) kubernetes that no one's ever heard of that's been around just as long if not longer.

For those who want more introduction to the ecosystem -- https://linuxcontainers.org

A short primer -- LXD is the "kubernetes" part, LXC is the "containerd"/"docker" part. LXD is far more "batteries-included" than LXD and runs "system containers" (user namespace-mapped containers that offer better security) easily.


"and feature-ful in some rights since they've had rootless containers longer"

Does LXD actually allow you to run and manage containers as a normal user like podman does? All the official instructions involve adding one's user account to the "lxd" group, which is equivalent to granting oneself root privileges without a password. [1]

[1] https://linuxcontainers.org/lxd/docs/master/security


lxd does not run your containers -- it uses lxc to do so, and yes, lxc supports user namespaces (this is how you get rootless containers). In the end all these tools get there by doing the same thing: uid/gid mapping[0 - search "UID MAPPINGS"][1] plus some filesystem support (whether a kernel or user space module). Why that is necessary comes down to how the kernel works and how containers work in general, which I will leave as an exercise to the reader (hint: a "container" struct does not exist in the linux kernel; containers are a combination of processes and isolation via namespaces, cgroups and other kernel features).

What lxd provides is a layer above lxc to coordinate the containers being run (by lxc) and relevant resources on the machines you want.

[0]: https://linuxcontainers.org/lxc/manpages/man5/lxc.container....

[1]: http://docs.podman.io/en/latest/markdown/podman-create.1.htm...
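The uid/gid mapping mentioned above is easy to demonstrate: an idmap entry is just a (container-start, host-start, range) triple, and the translation is plain arithmetic. A self-contained sketch:

```shell
#!/bin/sh
# An idmap entry like "0 100000 65536" (the /etc/subuid style) means
# container uids 0..65535 map to host uids 100000..165535.
map_uid() {
  cstart=$1; hstart=$2; range=$3; uid=$4
  if [ "$uid" -ge "$cstart" ] && [ "$uid" -lt $((cstart + range)) ]; then
    echo $((hstart + uid - cstart))
  else
    echo unmapped
  fi
}

map_uid 0 100000 65536 0      # root in the container is uid 100000 on the host
map_uid 0 100000 65536 1000   # an ordinary container user lands on 101000
map_uid 0 100000 65536 70000  # outside the range: no mapping exists
```

This is why root-in-container is an unprivileged user on the host.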


From your link:

"The remote API uses either TLS client certificates or Candid based authentication. Canonical RBAC support can be used combined with Candid based authentication to limit what an API client may do on LXD"

So it's not configured out of the box, but possible.


In that sense docker is also "rootless". Won't be surprising if Canonical does not understand the words they use.

EDIT: actually I find no claims that LXD/LXC supports rootless containers. I think the person claiming it does just doesn't understand it; I would like to see a citation for it.


That's because by default ALL containers in LXD are unprivileged. They use the term unprivileged because the term rootless wasn't around that far back ;)

LXD uses various mechanisms to map the uid-in-container to a uid-on-host. So root-in-container is not root on host. There are a lot of details to work through to make that work neatly, and there is still kernel work being done to map this nicely into the filesystem, but it works, and it's the default, and the FAQ recommends strongly not to grant real-root to your containers, for a good reason.


Great summary -- for anyone interested in the kernel side, there's a talk from 2019 called "A year of Container Kernel Work Past, Present, and Future of Container Kernel Features"[0] which goes into the timeline and future work being done. I'm not sure where it is today but it's a good watch.

[0]: https://www.youtube.com/watch?v=TnArHYRYT3U


I’ve only ever used the low level lxc-* tools to manage IPv6 only containers on one host. What are some things that would be easier with LXD? I’ve always wanted to try it but never found the motivation.


Many things are easier in LXD, like building a highly available, fault tolerant cluster of container and vm-instances.

You can deploy on the cloud of your choice or directly on bare-metal.

You can use the REST API, Python API, Golang API, PHP API, Puppet, Chef, Ansible, Dockerfile, Fabric, Capistrano or shell scripts, whatever you are familiar with, to build container or VM images and also to manage running instances. You are not limited to the Dockerfile format, with its mixture of shell scripts, just to build images. It's a much easier way to handle containers than learning the complexity of Dockerfiles, Kubernetes, Helm Charts and various YAML (designed for Google kinds of operations for billions of users).

Just check https://www.youtube.com/watch?v=RnBu7t2wD4U and see how easy it is to set up a 3-node fault-tolerant cluster. Now with 4.3 it's mature and very resilient.

It's more secure by default than an equivalent Docker or Kubernetes based system as it runs VM and Containers in user namespace, and has for a very long time, since the LXC 1.0 release. Docker itself started its life with LXC, moved away from it, and became popular with marketing and venture capital money.
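For a rough idea of what bootstrapping such a cluster involves, LXD can be initialized non-interactively from a preseed file. A sketch (key names and addresses here are illustrative; check `lxd init --dump` on a configured node for the authoritative format):

```shell
#!/bin/sh
# Sketch of a preseed one might feed to "lxd init --preseed" on the
# first cluster node. Addresses and names are illustrative.
preseed() {
cat <<'EOF'
config:
  core.https_address: 10.0.0.1:8443
cluster:
  server_name: node1
  enabled: true
EOF
}

preseed   # first node: lxd init --preseed < preseed.yaml
# Additional nodes join by pointing at an existing member's address and
# supplying the cluster certificate/trust credentials.
```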


> Many things are easier in LXD, like building a highly available, fault tolerant cluster of container and vm-instances.

What features of LXD enable fault tolerance and high availability?

> It's more secure by default than an equivalent Docker or Kubernetes based system as it runs VM and Containers in user namespace

What is the basis for this claim? K8S and Docker also runs everything in user namespaces by default, so why is LXD more secure?


> What features of LXD enable fault tolerance and high availability?

Check more details on https://linuxcontainers.org/lxd/docs/master/clustering.html

> What’s the basis of claim

Kubernetes started with docker and docker image formats. Docker started its life by using LXC. Later on Docker moved to directly use underlying cgroups and namespaces to built it’s own library in the meantime LXC reacheD 1.0 version which support running a container as unprivileged user. For a very long time Docker container, runtime and kubernetes could not get this feature of running containers as unprivileged users. Later they added support when both docker and kubernetes including the managed services from amazon and google suffered from security vulnerability due it. This vulnerability did not impact LXD/LXC as they by default always run container as unprivileged user, it still affected the containers run as privileged by choice. Now a days LXD use new kernel feature called shiftfs (https://discuss.linuxcontainers.org/t/trying-out-shiftfs/515...) to map users between container and host.

Also, Docker containers did not support an init process with pid 1, resulting in zombie processes (https://forums.docker.com/t/what-the-latest-with-the-zombie-...). LXD containers have never suffered from this problem.

As I mentioned in my earlier posts, Docker became popular due to marketing and a lot of venture capital going into it, not because of superior architecture or better technology. LXD/LXC is still one of the best solutions for system containers, even though there are many Docker-related options with CRI, CRD, runc, containerd, moby, podman, kata, etc.

Instead of taking my word for it, try to set up an HA cluster using LXD and then set up Kubernetes; you will know what I mean. One of the good projects to come out of LXD is called dqlite[1], check it out.

[1] https://dqlite.io/


User namespaces remapping is still not supported in Kubernetes, see https://github.com/kubernetes/enhancements/issues/127


The multi-machine case is probably where you'd feel the benefit the most -- managing which container runs where, network, storage pooling, etc, that is what lxd is for. LXD sits on top of lxc to get things done, so you can imagine it like just a scripting layer to access lxc.

If you're well served by ssh/scripts/ansible to set up the machines and managing them by manipulating lxc on each one then there's absolutely no need to change! But what lxd offers is a dynamic system that manages that (and other things) for you on the fly.

Stolen from their features page:

> Secure by design (unprivileged containers, resource restrictions and much more)
> Scalable (from containers on your laptop to thousands of compute nodes)
> Intuitive (simple, clear API and crisp command line experience)
> Image based (with a wide variety of Linux distributions published daily)
> Support for cross-host container and image transfer (including live migration with CRIU)
> Advanced resource control (cpu, memory, network I/O, block I/O, disk usage and kernel resources)
> Device passthrough (USB, GPU, unix character and block devices, NICs, disks and paths)
> Network management (bridge creation and configuration, cross-host tunnels, ...)
> Storage management (support for multiple storage backends, storage pools and storage volumes)

If you don't need any of these things, then no biggie, but if you're trying to orchestrate those containers on multiple machines it might be worth looking into a management solution. The cross-host container/image transfer and live migration are pretty cool, but not very necessary if you can just... let it die and restart somewhere else.


Live migration is a very useful feature. Last time I checked, support for it was very limited; perhaps the situation is better now? One way around it is to run LXC under KVM (and migrate the KVM as needed).



typo "LXD is far more "batteries-included" than LXD"


Ahhh thanks, unfortunately I didn't catch it in time. This was supposed to be "lxd is far more batteries-included than kubernetes".


LXD is nothing like Kubernetes which is why it never took off. While it does some things impressively, there is little need for it at this point with KubeVirt and other virtualization on Kubernetes solutions.


> LXD is nothing like Kubernetes which is why it never took off

I assume this is hyperbole, but for anyone who is unfamiliar with either system this statement isn't right -- Kubernetes does a lot of things, but first and foremost it is a container orchestration solution. LXD is also that. Literally from the kubernetes website:

> Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications

And from LXD:

> LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead.

> The core of LXD is a privileged daemon which exposes a REST API over a local unix socket as well as over the network (if enabled).

> Clients, such as the command line tool provided with LXD itself then do everything through that REST API. It means that whether you're talking to your local host or a remote server, everything works the same way.

If you strip kubernetes down to its essentials, it is a group of one or more machines running kubelet, with one or more running kube-apiserver. The workloads you run on kubernetes are containerized 99% of the time, with RuntimeClass[0] (and before that, untrusted runtime support) existing as an option to facilitate VMs and other runtimes that can run a container-ish process.

> While it does some things impressively, there is little need for it at this point with KubeVirt and other virtualization on Kubernetes solutions.

This is also not true -- KubeVirt is just one way to do virtualization on Kubernetes, and there are situations where it may not be optimal. Competition is also a good thing -- If you waited for Kubernetes to get easy/proper support for runtimes with user namespacing, you would have been waiting a long time, whereas the LXD ecosystem has had this for a long time.

The Pokemon Go team actually ran kubernetes on LXD for this reason[1], and gained value from it.

[0]: https://kubernetes.io/blog/2018/10/10/kubernetes-v1.12-intro...

[1]: https://www.youtube.com/watch?v=kQslklE5dKs&t=56s


LXD is a container daemon akin to Docker


This statement is almost right but still wrong in a certain sense -- daemonization is not the issue here. In contrast to two comparable tools like podman and docker (docker is a daemon and podman is not), LXD sits at a level above either of those tools.

The comparable tool to docker in the LXD ecosystem is lxc. LXD is built a level above lxc.


> I assume this is hyperbole, but for anyone who is unfamiliar with either system this statement isn't right

And for people who don't know what they are talking about general reactivity "isn't right" ... not sure why anybody cares about the opinion of those without the requisite understanding.

In actual functionality, and applications, LXD does not offer the same functionality as K8S, nor does it claim or attempt to do so. The "similarity" you are pointing out is between K8S container runtimes[1] and LXD - and even they are quite different, and there are already many container runtimes.

[1]: https://kubernetes.io/docs/setup/production-environment/cont...


> And for people who don't know what they are talking about general reactivity "isn't right" ... not sure why anybody cares about the opinion of those without the requisite understanding.

reactivity = relativity?

And as people we must* care about the opinion of people without requisite understanding, because for most pieces of knowledge that is indeed the majority of people. If we ever hope to grow knowledge you normally must get other people involved, so good explanations and keeping in mind the layman when he might wander into the conversation (AKA this thread) is a good idea.

> In actual functionality, and applications, LXD does not offer the same functionality as K8S, nor does it claim or attempt to do so.

k8s manages the orchestration of compute, network, and storage for applications running on a group of machines, LXD fits this same exact description.

> The "similarity" you are pointing out is between K8S container runtimes[1] and LXD - and even they are quite different, and there are already many container runtimes.

LXD is not a container runtime -- lxc is the container runtime. LXD manipulates lxc the same way docker swarm manipulates docker, or kubernetes manipulates docker/containerd, to get things done.


> LXD is the "kubernetes" part

No, LXD is not 'the "kubernetes" part'. There is no ' the "kubernetes" part' in the steaming pile of shit that LXD is.

EDIT: Also, please cite the claim that LXD/LXC supports rootless containers. Can't find anything substantial to back it up.


I wonder why systemd-nspawn isn't mentioned/used more. I've been using it for years for full system containerization and I'm really happy with how lightweight and functional it is. No need for additional LXC/LXD layers.


The point of LXD is the multi host clustering. So, if you don't need that, yes, there are simpler solutions.


It's because it's terribad.


Care to elaborate for people who don't already know what you're talking about?


I've been using LXD for over a year now; it's quite nice if you can get over the snap/Ubuntu/Canonical link. I think a lot of people are confused about its purpose: we still use K8s and Docker everywhere for new applications, but we've ported all of our old VM-based applications to LXD. It gives you containerization at a system level (you legitimately can't tell you are in a container if you log in) with a ton of extra niceness like easy snapshotting, transfers to other LXD hosts, clean networking and fast spin-ups.

I highly recommend checking it out if you have a bunch of legacy VMs that are a pain to manage.


I use LXD for ad-hoc clean throw-away Linux instances a lot and it is pretty good. One thing that is confusing is that the command line tool it provides is called lxc and not lxd. Even more so as LXC already is a pretty overloaded term.


In fact, the biggest problem I had overall with LXD was not just the lack of documentation, but even understanding it and its terminology.


If you care about security, the LXD people are just wonderful to work with: they fix bugs the same or next day, with a fixed release the same week.


I've been running the same containers for many years, changing hardware every few years. I started on OpenVZ, moved to linux-vserver, then to LXC, then to LXD ... then on my most recent hardware switch, back to LXD.

LXD's CLI-tool-driven configurations stored in sqlite might be nice if you're starting from scratch, but I gave up trying to determine the incantations required to port my stuff to it. It seemed like I couldn't use the CLI to get from here to there, but I was able to do so by directly modifying the sqlite database. When a CLI doesn't make all valid changes possible, that's frustrating at best. Going back to LXC took an afternoon and I have complete visibility over my configuration, because it's all simple text files.

LXD feels like it's at a point between LXC and K8s, but I'm not sure how many real-world systems want to be there.


> LXD feels like it's at a point between LXC and K8s, but I'm not sure how many real-world systems want to be there.

I would suggest give podman with --rootfs a try.


How's that help? Podman sits in the same position as lxc or single-host docker, doesn't it?


One of my main complaints about LXD is the documentation. It's scattered and generally not all that good.

Logging is also a bit of an issue for me, but that might just be me not configuring it correctly. Basically I don't think LXD provides enough logging to correctly and easily pinpoint problems.

I know it's a Canonical project, but at times it seems like Stéphane Graber is the only person actively involved in the project. Everything seems to point back to him and his GitHub account in some way.


I am running all my production servers with LXD, letting my services and daemons do their jobs in LX containers. If I need to expose a service to the public, I port-forward or reverse-proxy to the internal containers' IPs. This adds an additional layer of security. Emphasis on _additional_; hopefully it's not the only one. :) It makes it less likely that an accidentally open, unauthenticated database port makes it to the public.


Won't this forwarding/proxying make the internal services not see the actual source IP addresses making access logs less useful? (Sure, if you're proxying HTTP there's X-Forwarded-For but that's more configuration to get right and there are other protocols.) It sounds like you would be better served by a firewall, ideally both on the server node and in front of it.


> Won't this forwarding/proxying make the internal services not see the actual source IP addresses making access logs less useful?

Not when forwarding. When port forwarding using DNAT only, the internal container or VM service sees the remote IP address.

As you say, HTTP has X-Forwarded-For. Because it is common to have one or more stages of HTTP reverse proxying these days for Internet-facing services for robustness (for example NginX proxy -> Python application), you'll probably have X-Forwarded-For configured already.
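For reference, this DNAT-style forwarding maps to LXD's proxy device in nat mode. A sketch (printed rather than executed; the instance name, ports and container address are examples, and nat mode requires the instance to have a static IP on the LXD bridge):

```shell
#!/bin/sh
# Sketch only: the command is printed, not executed. Instance name,
# ports and container address are examples.
show() { echo "+ $*"; }

# With nat=true the proxy device sets up DNAT rules, so the service in
# the container sees the real client IP instead of the proxy's address.
show lxc config device add web http80 proxy \
  listen=tcp:0.0.0.0:80 connect=tcp:10.10.10.5:80 nat=true
```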


I was initially very thrilled about LXD (lexdee) when they made the initial announcement - because it specifically said [1]:

"We’re working with silicon companies to ensure hardware‐assisted security and isolation for these containers, just like virtual machines today. We’ll ensure that the kernel security cross‐section for individual containers can be tightened up for each specific workload. "

That was the thing that made it special, but later this capability stopped being mentioned and probably isn't planned anymore. In the meantime, one could look at Kata Containers or AWS firecracker.

[1]: https://www.ubuntu.org.cn/cloud/lxd


LXD is awesome. I use it via REST for exploit.courses to dynamically create containers for users.


I'm surprised no one has mentioned that the original execution environment for Docker was LXC.


I've always liked LXD, but always used libvirt for my container needs due to its VM support: It is nice to use the same set of commands and configuration for managing both technologies.

Now I might have another look at LXD due to its VM support.


I've really enjoyed using LXD, both in prod and at home.

I used to use OpenVZ and then Proxmox, but LXD seems to cover more or less all my needs. Not only that, but stable updates aren't reserved for those with a license.

Great software!


Still haven’t found anything as good as Linux-Vserver that’s still supported. The ease of setup, and of getting networking working, is unmatched by any modern alternative.

I’m still running an unofficial kernel maintained by “Ben G” to keep it going.


Good



