LXD containers

🎃 kr0m

As we explained on a previous occasion, containers offer several advantages over full virtualization solutions such as KVM. This time we will install LXD, which is essentially a wrapper around LXC that makes certain tasks easier.

First, we will check if we have everything we need in our kernel for LXD to work:

ebuild /usr/portage/app-emulation/lxc/lxc-1.1.2.ebuild setup
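
If we prefer to check the kernel options directly, the lxc-checkconfig script shipped with LXC lists each required feature as enabled or missing:

lxc-checkconfig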

If we want to clone containers, we will need Rsync compiled with the following use flags:

vi /etc/portage/package.use/rsync

net-misc/rsync acl iconv -ipv6 -static xattr

Since LXD is still keyworded as testing, we accept the ~amd64 keyword:

vi /etc/portage/package.accept_keywords/lxd

app-emulation/lxd ~amd64

We install all the necessary software:

emerge -av app-emulation/lxd net-misc/bridge-utils app-shells/bash-completion net-misc/rsync sys-fs/btrfs-progs

We allow a regular user to manage the containers:

useradd kr0m
usermod --append --groups lxd kr0m
chown -R kr0m:kr0m /home/kr0m/

If we want to have autocompletion of LXD commands:

cp /usr/share/bash-completion/completions/lxc /etc/bash_completion.d/
su kr0m -l
echo "source /etc/bash_completion.d/lxc" >> ~/.bash_profile
exit

We configure the network, which basically consists of putting the physical interface in a bridge and configuring the IP on that bridge:

vi /etc/conf.d/net

bridge_lxcbr0="enp1s0"
config_lxcbr0="A.B.C.D/24"
routes_lxcbr0="default via E.F.G.H"
dns_servers_lxcbr0="8.8.8.8 8.8.4.4"

We add the bridge to the default runlevel and remove the physical interface from it:

cd /etc/init.d/
ln -s net.lo net.lxcbr0
/etc/init.d/net.lxcbr0 start
rc-update add net.lxcbr0 default
rc-update del net.enp1s0 default
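
To verify that the bridge is up with the physical interface attached and the expected address (a quick sanity check, using the interface names from above):

brctl show lxcbr0
ip addr show lxcbr0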

We map UIDs and GIDs 0-65535 inside the containers to the host range starting at 1000000, so container UID/GID N becomes 1000000+N on the host:

echo root:1000000:65536 >> /etc/subuid
echo root:1000000:65536 >> /etc/subgid
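
Once a CT is running we can confirm the mapping from the host: its rootfs files belong to the shifted range (gentoo00 is the container we create further down):

ls -lan /var/lib/lxd/containers/gentoo00/rootfs/ | head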

Start the LXD service and add it to the startup:

/etc/init.d/lxd start
rc-update add lxd default

Add the linuxcontainers image repo (recent LXD releases already ship it predefined as images:):

lxc remote add images images.linuxcontainers.org

Check that it has been added correctly:

lxc remote list

images <https://images.linuxcontainers.org:8443>
local <unix:///var/lib/lxd/unix.socket>

Check the available images:

lxc image list images:

Launch a CT:

lxc launch images:gentoo/current/amd64 gentoo00

Check the list of CTs:

lxc list

+----------+---------+----------------+------+------------+-----------+
|   NAME   |  STATE  |      IPV4      | IPV6 |    TYPE    | SNAPSHOTS |
+----------+---------+----------------+------+------------+-----------+
| gentoo00 | RUNNING | 192.168.40.165 |      | PERSISTENT | 0         |
+----------+---------+----------------+------+------------+-----------+

We can access the CT directly:

lxc exec gentoo00 -- /bin/bash -l
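
Besides a full shell, lxc exec runs one-off commands, and lxc file copies files in and out of the CT without SSH:

lxc exec gentoo00 -- uptime
lxc file push /etc/resolv.conf gentoo00/etc/resolv.conf
lxc file pull gentoo00/etc/os-release /tmp/os-release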

Profiles are nothing more than sets of configuration options. They are applied in cascade: each profile applied later adds to, removes from, or overwrites the configuration left by the previous ones.

lxc profile list

default
migratable

lxc profile show default

name: default
config: {}
devices:
  eth0:
    nictype: bridged
    parent: lxcbr0
    type: nic

LXD allows us to limit certain resources, for example, CPU usage:

lxc profile create cpusandbox
lxc profile set cpusandbox limits.cpu 1
lxc init images:gentoo/current/amd64 cpusandboxed -p default -p cpusandbox
lxc config show cpusandboxed

name: cpusandboxed
profiles:
- default
- cpusandbox
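
limits.cpu pins the CT to a number of cores; if we prefer to cap the share of CPU time instead, LXD also exposes limits.cpu.allowance (the 50% value is just an example):

lxc profile set cpusandbox limits.cpu.allowance 50%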

To limit RAM:

lxc profile create ramsandbox
lxc profile set ramsandbox limits.memory 250MB
lxc profile apply cpusandboxed ramsandbox
lxc config show cpusandboxed

name: cpusandboxed
profiles:
- ramsandbox
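
Memory limits can be tuned further: enforcement can be made soft (applied only under host memory pressure) and swap can be disallowed for the CT:

lxc profile set ramsandbox limits.memory.enforce soft
lxc profile set ramsandbox limits.memory.swap false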

If we want to apply several profiles simultaneously:

lxc profile assign cpusandboxed default,ramsandbox,cpusandbox

Profiles default,ramsandbox,cpusandbox applied to cpusandboxed

We can edit the config for something specific, for example, changing the MAC address:

lxc config edit CONTAINER_ID

profiles:
- default
config:
  volatile.base_image: 7a983015256a485891940af71b475612fa97f87173d044daab5d003950372312
  volatile.eth0.hwaddr: 02:00:00:57:ce:6f
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":100000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":100000}]'
devices: {}
ephemeral: false
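
For a single key there is no need to open the editor; lxc config get and set read and write values directly (note that volatile.* keys are maintained by LXD itself):

lxc config get gentoo00 volatile.eth0.hwaddr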

LXD uses a range of user IDs to run each of the CTs. This is a security measure in case an attacker manages to escape from the CT. However, there are certain functionalities, especially when accessing /proc, that do not work in this way. If we want to make the CT privileged:

lxc stop gentoo00
lxc config set gentoo00 security.privileged true
lxc config show gentoo00

 security.privileged: "true"
lxc start gentoo00

Snapshots are one of the most interesting features of LXD on Btrfs: since Btrfs supports them natively, they complete almost in real time and with virtually no penalty.

We just need to make sure we have a recent version of the kernel (4.X) and support for that file system.

It is most advisable to mount /var as follows:

/dev/sda4 on /var type btrfs (rw,relatime)

Creating a snapshot is as simple as:

lxc snapshot CT_ID DESCRIPTIVE_NAME

With lxc list, we can see at a glance if the container has snapshots:

lxc list

+----------------+---------+----------------+------+------------+-----------+
|     NAME       |  STATE  |      IPV4      | IPV6 |    TYPE    | SNAPSHOTS |
+----------------+---------+----------------+------+------------+-----------+
| testkr0m       | RUNNING | X.X.X.X (eth0) |      | PERSISTENT | 1         |
+----------------+---------+----------------+------+------------+-----------+

We can check the snapshots of a particular CT with:

lxc info testkr0m

Snapshots:
 limpio (taken at 2016/11/23 15:40 UTC) (stateless)

To restore the CT to a previous state:

lxc restore testkr0m limpio

To delete a snapshot:

lxc delete testkr0m/limpio
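
A snapshot can also serve as the source of a new CT, handy for testing changes against a known-good state without touching the original (testkr0m-lab is just an example name):

lxc copy testkr0m/limpio testkr0m-lab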

Another topic that every sysadmin must consider is backups. They can be done without setting up a second LXD host and moving the CT; it is enough to compress the rootfs:

cd /var/lib/lxd/containers/
tar czvf CONTAINER.tar.gz CONTAINER
scp CONTAINER.tar.gz REMOTE_IP:/var/lib/lxd/containers/
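
A minimal sketch of automating this with date-stamped names so old backups are not overwritten (the /backups destination is an assumption; stop the CT or take a snapshot first if you need a fully consistent archive):

CT=CONTAINER
cd /var/lib/lxd/containers/
tar czvf /backups/${CT}-$(date +%Y%m%d).tar.gz ${CT}
scp /backups/${CT}-$(date +%Y%m%d).tar.gz REMOTE_IP:/backups/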

If we want to start the CT on another LXD host, we launch a throwaway CT with the same name so that LXD registers it in its database, then swap in the backed-up data:

cd /var/lib/lxd/containers/
tar xzvf CONTAINER.tar.gz
mv CONTAINER CONTAINER_ORI
lxc launch images:gentoo/current/amd64 CONTAINER
lxc stop CONTAINER
rm -rf CONTAINER
mv CONTAINER_ORI CONTAINER
lxc start CONTAINER

LXD CTs have some limitations; one of them is that NFS shares cannot be mounted from inside the CT. To work around this, we mount the share on the LXD server and hand it to the CT as a disk device:

mkdir -p /mnt/nfs/data
vi /etc/fstab

NFS_SERVER_IP:/data /mnt/nfs/data nfs rw,vers=4,async,noatime,nodiratime,soft,timeo=3,intr,bg 0 0
mount /mnt/nfs/data
lxc config device add gentoo00 data disk path=/data source=/mnt/nfs/data
lxc config device show gentoo00
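
The device can be detached at any time without touching the data on the NFS server:

lxc config device remove gentoo00 data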

Another limitation is the use of loop devices, needed for example to mount an ISO image. Again we mount it on the host:

mkdir /mnt/iso
mount -o loop FILE.iso /mnt/iso

We check which host UID the CT's files are owned by:

ls -la /var/lib/lxd/containers/gentoo00/

total 20
drwxr-xr-x+ 4 100000 100000 4096 May 17 15:52 .

We change the owner of the external directory:

chown 100000:100000 /mnt/iso/

We map the external directory with that of the CT:

lxc config device add CT_NAME RESOURCE_NAME disk path=/mnt/iso source=/mnt/iso
lxc config device add gentoo00 iso disk path=/mnt/iso source=/mnt/iso
lxc config device show gentoo00

NOTE: If the CT is privileged, it is not necessary to change the permissions in /mnt/iso on the LXD server.

One of the great advantages of LXD is that you can start with a basic infrastructure and grow: at any time CTs can be migrated between servers without problems.

We have two servers:

  • SERVER1
  • SERVER2

We configure the management socket on SERVER2 to listen on the network:

lxc config set core.https_address SERVER2_IP:8443
lxc config set core.trust_password PASSWORD

We add SERVER2 as a remote on SERVER1:

lxc remote add SERVER2 https://SERVER2_IP:8443

Now we can check the CTs of SERVER2 from SERVER1:

lxc list SERVER2:

Start remote CTs:

lxc launch images:gentoo/current/amd64 SERVER2:testkr0m
lxc info SERVER2:testkr0m

We move the CT from SERVER2 to SERVER1:

lxc move SERVER2:testkr0m testkr0m
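
If we prefer to keep the original running on SERVER2 and only bring over a copy, lxc copy works across remotes in the same way:

lxc copy SERVER2:testkr0m testkr0m-copy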

LXD is a very powerful system with many possibilities; in this article only the basic functionality has been covered. For more information, we can always consult the project website.

If you liked the article, you can treat me to a RedBull here