
Article Number: 000148670


LXC containers in Ubuntu Server 14.04 LTS Part 2

Summary: OS and Applications

Article Content


Symptoms

This article was written by Kent Baxley (Canonical Field Engineer) and Jose De la Rosa (Dell Linux Engineering).


Introduction

In part 1 of this series, we gave a brief overview of what LXC containers are and how to deploy, access, and destroy them in Ubuntu 14.04 LTS.

In part 2, we discuss additional features and nifty things you can do with LXC containers to simplify their use and make them flexible enough to suit your needs.

Auto-start a container

By default, a new container will not start when the host system is rebooted. If you want a container to start at system boot, you can configure that behavior in the container's configuration file, typically found in /var/lib/lxc/<container>/config. For example, we can set our lxc-utopic container to auto-start by adding the following lines to /var/lib/lxc/lxc-utopic/config:

lxc.start.auto = 1
lxc.start.delay = 5

With these parameters, the container will start when the host server boots, and the host will then wait 5 seconds before starting any other containers.
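
If you want to exercise these settings without rebooting the host, LXC 1.0 ships the lxc-autostart utility, which acts on every container flagged for autostart (a quick sketch; see lxc-autostart --help on your system for the full option list):

$ sudo lxc-autostart -L   # list the containers that would be started
$ sudo lxc-autostart      # start all autostart-enabled containers now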

When we run lxc-ls --fancy after setting the autostart parameters, we see that the container is now set up to start automatically when the host boots:

$ sudo lxc-ls --fancy

NAME        STATE     IPV4         IPV6   AUTOSTART
------------------------------------------------------------------------------------
lxc-centos  RUNNING   10.0.3.161   -      NO
lxc-test    RUNNING   10.0.3.156   -      NO
lxc-utopic  RUNNING   10.0.3.157   -      YES

Freezing containers 

Freezing a container stops (or "freezes") the processes running inside it. This can come in handy when you want to pause a container momentarily (for a few seconds) without stopping it altogether. While a container is frozen, its processes are not serviced by the host CPU scheduler, although they still take up memory.

To freeze a container, run:

$ sudo lxc-freeze -n <container-name>
$ sudo lxc-ls --fancy

NAME        STATE     IPV4          IPV6   AUTOSTART
------------------------------------------------------------------------------------
lxc-centos  RUNNING   10.0.3.161    -      NO
lxc-test    FROZEN    10.0.3.156    -      NO
lxc-utopic  RUNNING   10.0.3.157    -      YES

When unfreezing a container, the processes inside it will resume operating just as before the freeze. To unfreeze a container, run:

$ sudo lxc-unfreeze -n <container-name>
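
Under the hood, freezing uses the kernel's freezer cgroup. As a sanity check, you can read a container's freezer state directly (a sketch assuming the default cgroup layout on Ubuntu 14.04; the exact path may differ on your system):

$ cat /sys/fs/cgroup/freezer/lxc/<container-name>/freezer.state
FROZEN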

Cloning containers

Cloning containers serves the same purpose as cloning virtual machines: it makes an exact copy of a container that you can save for later use. Say you want to set up a container for development and had to install a bunch of packages and run several configuration commands to get it just right. Once your container is ready, you can clone it so that next time you won't have to redo all that work.

The clone is a new, independent container and takes up as much space on the host as the original.

To clone our lxc-test container, we first need to stop it if it’s running:

$ sudo lxc-stop -n lxc-test

Then we can clone the original container to a new one called lxc-test2:

$ sudo lxc-clone -o lxc-test -n lxc-test2
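
The command above performs a full copy. If your containers live on a backing store that supports snapshots (such as btrfs, LVM, or overlayfs), lxc-clone can instead create a lightweight snapshot clone with the -s flag, which shares unchanged data with the original rather than duplicating it (a sketch; lxc-test3 is just an illustrative name, and depending on your setup you may need to name the backing store explicitly with -B):

$ sudo lxc-clone -s -o lxc-test -n lxc-test3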

Networking

By default, LXC creates a NATed bridge (lxcbr0) on the host at startup. Containers using the default setup will have a virtual Ethernet (veth) NIC whose remote end is connected to the lxcbr0 interface.
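
On Ubuntu 14.04, this default bridge is defined in /etc/default/lxc-net; the stock values look roughly like this (shown for reference; your file may differ):

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"

These defaults are where the 10.0.3.x addresses in the lxc-ls output above come from.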

However, a container can be configured to bypass this private network and attach directly to a bridge on the host. For example, if I want my lxc-utopic container to use the br0 interface set up on my PowerEdge R620, I can edit the network section in /var/lib/lxc/lxc-utopic/config:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:16:3e:14:cf:98
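
This assumes a bridge named br0 already exists on the host. If it doesn't, a minimal sketch of one in /etc/network/interfaces might look like the following (assuming the bridge-utils package is installed and em1 is the host's physical NIC; substitute your own interface name):

auto br0
iface br0 inet dhcp
    bridge_ports em1
    bridge_fd 0
    bridge_maxwait 0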

I changed the lxc.network.link from lxcbr0 to br0. When I start up the container, I can see that the lxc-utopic container is now getting an IP address from my physical network:

$ sudo lxc-ls --fancy

NAME        STATE     IPV4          IPV6   AUTOSTART
-------------------------------------------------------------------------------------
lxc-centos  RUNNING   10.0.3.161    -      NO
lxc-test    RUNNING   10.0.3.156    -      NO
lxc-utopic  RUNNING   10.9.167.68   -      YES

Alternate operating systems

LXC supports containers with operating systems other than Ubuntu. Other popular distributions supported include Debian, CentOS, OpenSuSE, Oracle, and Arch Linux. You can find the full list in /usr/share/lxc/templates on your system.
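
For example, to list the templates shipped with the lxc package (each is named lxc-<distro>):

$ ls /usr/share/lxc/templates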

The templates are essentially shell scripts that define how the container is installed. Most templates use a local cache, so the first time a container is bootstrapped for a given architecture the process will be slow; subsequent deployments will be much faster since they'll be based on the local cache.

In this example, we will install a CentOS container on our Ubuntu server. The initial command to build the container named "lxc-centos" is:

$ sudo lxc-create -t centos -n lxc-centos

The command may fail right away because yum is not installed. To correct this, we need to install yum on our host system:

$ sudo apt-get install yum

Once yum is installed, we can try the command again. You should see a minimal CentOS image get downloaded, after which yum takes over and installs the rest of the container.

When the installation finishes, you will see a notification in the terminal telling you where to find the temporary root password; in this case it is in /var/lib/lxc/lxc-centos/tmp_root_pass. When you log into the CentOS container with this password for the first time, you will be prompted to change it.
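
To reach the login prompt, start the container in the background and attach to its console (press Ctrl+a then q to detach from lxc-console):

$ sudo lxc-start -n lxc-centos -d
$ sudo lxc-console -n lxc-centos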

Upon logging in, we can see that CentOS 6.5 is installed by running:

$ cat /etc/redhat-release

You can also see that the Linux container shares the same Ubuntu kernel as the host system if you run:

$ uname -a

Each operating system template typically has its own set of extra advanced options that may be needed to successfully set up the container. To view the options for the centos template, for example, run:

$ lxc-create -t centos --help

Other releases of Ubuntu can be installed by passing the release parameter. For example:

$ sudo lxc-create -t ubuntu -n lxc-utopic -- --release utopic

The above command will install a container based on the Ubuntu 14.10 "Utopic Unicorn" release, which was in development at the time of this writing.

Unprivileged containers

All the examples we’ve covered so far have been privileged containers, in that they are executed with sudo privileges and thus have full access to the system.

Unprivileged containers run without sudo access and are safer, since they execute in the context of a regular, non-root user. This comes in handy when you must share a server with other users (but don't want to give them sudo access) or when, for security reasons, you don't want to give your containers full access to the underlying system.

When a new user is created in Ubuntu 14.04 LTS, it is automatically assigned a range of 65536 user and group IDs, starting at 100000 to avoid conflicts with system users. If you look at /etc/subuid and /etc/subgid, you'll see something like:

demouser:100000:65536
testuser:165536:65536

This indicates that 'demouser' has user and group IDs that go from 100000 to 165535 and 'testuser' has user and group IDs that go from 165536 to 231071. To provide container isolation and avoid potential conflicts, the user and group IDs inside each unprivileged container (which start at 0) are mapped to the user's allocated range on the host.
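
On 14.04 these ranges are allocated automatically when a user is created, but if an existing user is missing entries in /etc/subuid and /etc/subgid, they can be added with usermod (a sketch mirroring the demouser range above):

$ sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 demouser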

To set up unprivileged containers, the first step is to allow the user to hook into the container network through the bridge lxcbr0. In this example, we allow 'demouser' to create up to 5 network interfaces on the bridge lxcbr0:

$ echo "demouser veth lxcbr0 5" | sudo tee -a /etc/lxc/lxc-usernet

(The append goes through sudo tee because with sudo echo the shell would perform the redirection without root privileges.)

Each user can now set up a local configuration environment. Note that each user must specify their own allocated range of user and group IDs to map to:

$ chmod +x $HOME   # allow traversal into your home directory so the container's mapped IDs can reach its files

$ mkdir -p ~/.config/lxc/
$ echo "lxc.id_map = u 0 100000 65536" > ~/.config/lxc/default.conf
$ echo "lxc.id_map = g 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "lxc.network.type = veth" >> ~/.config/lxc/default.conf
$ echo "lxc.network.link = lxcbr0" >> ~/.config/lxc/default.conf

Finally, the user can create an Ubuntu 14.04 container with lxc-create as before, this time without sudo and using the download template:

$ lxc-create -t download -n <container> -- -d ubuntu -r trusty -a amd64
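
Once created, an unprivileged container is managed with the same tools as before, just without sudo. For example:

$ lxc-start -n <container> -d
$ lxc-ls --fancy
$ lxc-stop -n <container>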

Want to learn more?

https://help.ubuntu.com/lts/serverguide/lxc.html

https://linuxcontainers.org/

https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/

Article Properties


Last Published Date

21 Feb 2021

Version

3

Article Type

Solution