Getting Started with Linux Containers on Oracle Linux
- Introduction
- Requirements
- Exercise: Installing and configuring additionally required software packages
- Exercise: Creating and mounting a Btrfs volume for the container storage
- Exercise: Creating a container
- Exercise: Cloning an existing container
- Exercise: Starting and stopping a container
- Exercise: Logging into a container
- Exercise: Updating and installing software inside a container
- Installing and starting an Oracle Linux 5 container
- Exercise: Monitoring containers
- Exercise: Changing a container’s network configuration
- Destroying containers
- Conclusion
- References
Introduction
In this hands-on lab you will learn the basics of working with Linux Containers on Oracle Linux:
- Installing and configuring the required software packages
- Setting up a Btrfs volume for the container storage
- Installing a container
- Cloning an existing container
- Starting and logging into a container
- Updating and installing software inside a container
- Monitoring and shutting down containers
- Changing a container’s network configuration
- Destroying a Container
Generally speaking, Linux Containers take a completely different approach from “classical” virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based). An application running inside a container is executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O resources. Linux Containers can offer near-native performance and several possibilities for managing and sharing the available resources.
Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided the distribution supports the version of the Linux kernel that runs on the host. This approach has one caveat, though: if any of the containers causes a kernel crash, it brings down the host system and all other containers as well.
Some use cases for Linux Containers include:
- Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server, or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, saving energy and rack space.
- Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his or her “own” application instance, with a defined level of service/performance. This prevents one user’s application from hogging the entire system and ensures that each user only has access to his or her own data set. It also helps to save main memory: if multiple instances of the same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use; they are generally held in memory once and mapped to multiple processes.
- Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system is always reset to its initial state for each run. Linux Containers also boot significantly faster than “classic” virtual machines, which can save a lot of time when running frequent build or test runs on applications.
- Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system.
The creation of Oracle Linux containers can be accomplished on the command line in a few steps, using the LXC utilities. So far, there is no integration or support for this technology in applications like Oracle VM Manager or Oracle Enterprise Manager. However, Oracle has developed several enhancements which are included in the lxc package that’s part of Oracle Linux 6.4; these changes were also contributed to the upstream LXC project and are now part of the official LXC releases.
Hint: If you want to learn more about Linux Containers, the Oracle Linux Administrator’s Solutions Guide for Release 6 has a dedicated chapter about this technology.
Requirements
The Oracle Linux 6.5 virtual appliance should be up and running (from the initial snapshot), and you should be logged in as the oracle user with a terminal window open to enter the following commands. You should have some basic experience with working on a Linux command line, e.g. opening and editing files, moving around the file system directory structure, and running commands.
Exercise: Installing and configuring additionally required software packages
To properly support and work with Linux Containers, the following packages (and their dependencies) need to be installed with yum: btrfs-progs, lxc, libvirt and libcgroup. They should already be installed on your Oracle Linux 6.5 lab system; you can verify this with the command rpm -q btrfs-progs libcgroup libvirt lxc:
[oracle@oraclelinux6 ~]$ rpm -q btrfs-progs libcgroup libvirt lxc
btrfs-progs-0.20-1.5.git7854c8b.el6.x86_64
libcgroup-0.40.rc1-5.el6.x86_64
libvirt-0.10.2-29.0.1.el6.1.x86_64
lxc-0.9.0-2.0.5.el6.x86_64
The LXC template scripts are installed in /usr/share/lxc/templates:
[oracle@oraclelinux6 ~]$ ls /usr/share/lxc/templates/
lxc-altlinux   lxc-debian  lxc-opensuse  lxc-ubuntu
lxc-archlinux  lxc-fedora  lxc-oracle    lxc-ubuntu-cloud
lxc-busybox    lxc-lenny   lxc-sshd
As you can see, the LXC distribution contains templates for other Linux distributions as well. However, the focus of this lab session will be on working with Oracle Linux containers.
Linux Control Groups (cgroups) are an essential component of Linux Containers. Verify that the Control Groups service cgconfig is started and enabled at boot time:
[oracle@oraclelinux6 ~]$ service cgconfig status
Running
[oracle@oraclelinux6 ~]$ ls /cgroup/
blkio  cpu  cpuacct  cpuset  devices  freezer  memory  net_cls
[oracle@oraclelinux6 ~]$ chkconfig --list cgconfig
cgconfig       0:off  1:off  2:on  3:on  4:on  5:on  6:off
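If either check shows that the service is stopped or not enabled, you can fix this yourself before continuing. A minimal sketch, using the standard Oracle Linux 6 service tooling already shown above:

```shell
# Start the Control Groups configuration service now...
sudo service cgconfig start

# ...and make sure it is started automatically in runlevels 2-5 at boot time.
sudo chkconfig cgconfig on
```

Afterwards, re-run service cgconfig status to confirm that the /cgroup hierarchy is mounted.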
The virtualization management service, libvirtd, also needs to be enabled at boot time:
[oracle@oraclelinux6 ~]$ service libvirtd status
libvirtd (pid 2003) is running...
[oracle@oraclelinux6 ~]$ sudo chkconfig --list libvirtd
libvirtd       0:off  1:off  2:off  3:on  4:on  5:on  6:off
Among other things, libvirt provides a host-internal virtual network bridge and DHCP/DNS service (using the dnsmasq service) that will be used to automatically configure the network settings of the containers we will create.
Hint: For more information about libvirt’s virtual networking functionality, please consult this Wiki page.
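You can inspect this virtual network directly on the host. A quick check, assuming libvirt’s default network (named “default”) and its default bridge device virbr0:

```shell
# List libvirt's virtual networks; the "default" NAT network should be active.
sudo virsh net-list --all

# Show the bridge device; by default the host holds 192.168.122.1 on it.
ifconfig virbr0
```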
Now that we’ve concluded all the necessary preparations, let’s check the configuration using the lxc-checkconfig script:
[oracle@oraclelinux6 ~]$ lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.8.13-16.2.2.el6uek.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

Looks like we’re good to go!
Exercise: Creating and mounting a Btrfs volume for the container storage
First, a dedicated directory should be created to host the container file systems. The default location is /container. Creating this directory on top of a Btrfs file system provides a few additional interesting possibilities, e.g. the option to “freeze” a container file system at a certain point in time, or the fast creation (cloning) of additional containers based on a template. Cloning a container using Btrfs snapshots is instantaneous and requires no additional disk space except for the differences to the original template.
The creation and management of Btrfs file systems is explained in detail in the chapter “The Btrfs File System” of the “Oracle Linux Administrator’s Solutions Guide for Release 6”. For some practical examples, take a look at the Hands-on lab – Storage Management with Btrfs.
On our virtual lab environment, you can create a Btrfs file system on the second disk (/dev/sdb) and mount it at /container by entering the following commands:
[oracle@oraclelinux6 ~]$ sudo mkfs.btrfs /dev/sdb
WARNING! - Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/sdb
nodesize 4096 leafsize 4096 sectorsize 4096 size 4.00GB
Btrfs v0.20-rc1
[oracle@oraclelinux6 ~]$ sudo mkdir -v /container
mkdir: created directory `/container'
[oracle@oraclelinux6 ~]$ sudo mount -v /dev/sdb /container
mount: you didn't specify a filesystem type for /dev/sdb
       I will try type btrfs
/dev/sdb on /container type btrfs (rw)
To mount the file system at system startup time, you can add an entry for /container to the /etc/fstab file, either by adding it with your favorite text editor or by using the following command line:
[oracle@oraclelinux6 ~]$ sudo su
[root@oraclelinux6 ~]# echo "/dev/sdb /container btrfs defaults 0 0" >> /etc/fstab
[root@oraclelinux6 ~]# exit
exit
Exercise: Creating a container
Now you can create a container of the latest version of Oracle Linux 6, named “ol6cont1” and using the default options, by entering the following command. The option “-t” determines the general type of the Linux distribution to be installed (the so-called “template”), e.g. “oracle”, “ubuntu” or “fedora”. Depending on the template, you can pass template-specific options after the double dashes (“--”). In the case of the Oracle Linux template, you can choose the distribution’s release version by providing values like “5.8”, “6.3” or “6.latest” to the --release option.
Further information about the available configuration options can be found in the chapter “About the lxc-oracle Template Script” of the Oracle Linux 6 Administrator’s Solutions Guide.
Enter the following command to create an Oracle Linux 6 container, based on the latest available update release and using the default configuration options:
[oracle@oraclelinux6 ~]$ sudo lxc-create -n ol6cont1 -t oracle -- --release=6.latest
lxc-create: No config file specified, using the default config /etc/lxc/default.conf
Host is OracleServer 6.5
Create configuration file /container/ol6cont1/config
Downloading release 6.latest for x86_64
Loaded plugins: refresh-packagekit, security
ol6_latest                                               | 1.4 kB     00:00
ol6_latest/primary                                       |  34 MB     00:49
ol6_latest                                                          24357/24357
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package chkconfig.x86_64 0:1.3.49.3-2.el6_4.1 will be installed
--> Processing Dependency: libc.so.6(GLIBC_2.2.5)(64bit) for package: chkconfig-1.3.49.3-2.el6_4.1.x86_64
--> Processing Dependency: libc.so.6(GLIBC_2.8)(64bit) for package: chkconfig-1.3.49.3-2.el6_4.1.x86_64
[...]
--> Processing Dependency: pygpgme for package: yum-3.2.29-40.0.1.el6.noarch
--> Processing Dependency: python-iniparse for package: yum-3.2.29-40.0.1.el6.noarch
--> Processing Dependency: rpm-python for package: yum-3.2.29-40.0.1.el6.noarch
--> Running transaction check
---> Package audit-libs.x86_64 0:2.2-2.el6 will be installed
---> Package bash.x86_64 0:4.1.2-15.el6_4 will be installed
---> Package checkpolicy.x86_64 0:2.0.22-1.el6 will be installed
---> Package coreutils.x86_64 0:8.4-19.0.1.el6_4.2 will be installed
--> Processing Dependency: coreutils-libs = 8.4-19.0.1.el6_4.2 for package: coreutils-8.4-19.0.1.el6_4.2.x86_64
[...]
---> Package pinentry.x86_64 0:0.7.6-6.el6 will be installed
--> Running transaction check
---> Package groff.x86_64 0:1.18.1.4-21.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch     Version                   Repository      Size
================================================================================
Installing:
 chkconfig           x86_64   1.3.49.3-2.el6_4.1        ol6_latest     158 k
 dhclient            x86_64   12:4.1.1-38.P1.0.1.el6    ol6_latest     317 k
 initscripts         x86_64   9.03.40-2.0.1.el6         ol6_latest     940 k
[...]
 rootfiles           noarch   8.1-6.1.el6               ol6_latest     6.3 k
 rsyslog             x86_64   5.8.10-6.el6              ol6_latest     648 k
 vim-minimal         x86_64   2:7.2.411-1.8.el6         ol6_latest     363 k
 yum                 noarch   3.2.29-40.0.1.el6         ol6_latest     995 k
Installing for dependencies:
 MAKEDEV             x86_64   3.24-6.el6                ol6_latest      88 k
 audit-libs          x86_64   2.2-2.el6                 ol6_latest      60 k
 basesystem          noarch   10.0-4.0.1.el6            ol6_latest     4.3 k
[...]
 yum-metadata-parser x86_64   1.1.2-16.el6              ol6_latest      26 k
 zlib                x86_64   1.2.3-29.el6              ol6_latest      72 k

Transaction Summary
================================================================================
Install     143 Package(s)

Total download size: 82 M
Installed size: 303 M
Downloading Packages:
(1/143): MAKEDEV-3.24-6.el6.x86_64.rpm                   |  88 kB     00:00
(2/143): audit-libs-2.2-2.el6.x86_64.rpm                 |  60 kB     00:00
(3/143): basesystem-10.0-4.0.1.el6.noarch.rpm            | 4.3 kB     00:00
(4/143): bash-4.1.2-15.el6_4.x86_64.rpm                  | 904 kB     00:03
(5/143): binutils-2.20.51.0.2-5.36.el6.x86_64.rpm        | 2.8 MB     00:11
[...]
(139/143): vim-minimal-7.2.411-1.8.el6.x86_64.rpm        | 363 kB     00:00
(140/143): xz-libs-4.999.9-0.3.beta.20091007git.el6.x86_ |  89 kB     00:00
(141/143): yum-3.2.29-40.0.1.el6.noarch.rpm              | 995 kB     00:01
(142/143): yum-metadata-parser-1.1.2-16.el6.x86_64.rpm   |  26 kB     00:00
(143/143): zlib-1.2.3-29.el6.x86_64.rpm                  |  72 kB     00:00
--------------------------------------------------------------------------------
Total                                            51 kB/s |  82 MB     27:33
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : libgcc-4.4.7-4.el6.x86_64                                  1/143
  Installing : setup-2.8.14-20.el6_4.1.noarch                             2/143
  Installing : filesystem-2.4.30-3.el6.x86_64                             3/143
  Installing : basesystem-10.0-4.0.1.el6.noarch                           4/143
  Installing : tzdata-2013g-1.el6.noarch                                  5/143
[...]
  Installing : rsyslog-5.8.10-8.0.1.el6.x86_64                          139/143
  Installing : yum-3.2.29-40.0.1.el6.noarch                             140/143
  Installing : passwd-0.77-4.el6_2.2.x86_64                             141/143
  Installing : 2:vim-minimal-7.2.411-1.8.el6.x86_64                     142/143
  Installing : rootfiles-8.1-6.1.el6.noarch                             143/143
  Verifying  : pam-1.1.1-17.el6.x86_64                                    1/143
  Verifying  : rpm-python-4.8.0-37.el6.x86_64                             2/143
  Verifying  : gamin-0.1.10-9.el6.x86_64                                  3/143
  Verifying  : procps-3.2.8-25.el6.x86_64                                 4/143
  Verifying  : 2:ethtool-3.5-1.el6.x86_64                                 5/143
[...]
  Verifying  : libxml2-2.7.6-14.0.1.el6.x86_64                          138/143
  Verifying  : mingetty-1.08-5.el6.x86_64                               139/143
  Verifying  : libcurl-7.19.7-37.el6_4.x86_64                           140/143
  Verifying  : 1:findutils-4.4.2-6.el6.x86_64                           141/143
  Verifying  : hwdata-0.233-9.1.el6.noarch                              142/143
  Verifying  : sysvinit-tools-2.87-5.dsf.el6.x86_64                     143/143

Installed:
  chkconfig.x86_64 0:1.3.49.3-2.el6_4.1
  dhclient.x86_64 12:4.1.1-38.P1.0.1.el6
  initscripts.x86_64 0:9.03.40-2.0.1.el6
  openssh-clients.x86_64 0:5.3p1-94.el6
  openssh-server.x86_64 0:5.3p1-94.el6
[...]
  rsyslog.x86_64 0:5.8.10-8.0.1.el6
  vim-minimal.x86_64 2:7.2.411-1.8.el6
  yum.noarch 0:3.2.29-40.0.1.el6

Dependency Installed:
  MAKEDEV.x86_64 0:3.24-6.el6
  audit-libs.x86_64 0:2.2-2.el6
  basesystem.noarch 0:10.0-4.0.1.el6
  bash.x86_64 0:4.1.2-15.el6_4
  binutils.x86_64 0:2.20.51.0.2-5.36.el6
[...]
  upstart.x86_64 0:0.6.5-12.el6_4.1
  ustr.x86_64 0:1.0.4-9.1.el6
  util-linux-ng.x86_64 0:2.17.2-12.14.el6
  xz-libs.x86_64 0:4.999.9-0.3.beta.20091007git.el6
  yum-metadata-parser.x86_64 0:1.1.2-16.el6
  zlib.x86_64 0:1.2.3-29.el6

Complete!
Rebuilding rpm database
Configuring container for Oracle Linux 6.5
chcon: can't apply partial context to unlabeled file `/container/ol6cont1/rootfs/dev'
Added container user:oracle password:oracle
Added container user:root password:root
Container : /container/ol6cont1/rootfs
Config    : /container/ol6cont1/config
Network   : eth0 (veth) on virbr0
'oracle' template installed
'ol6cont1' created
[oracle@oraclelinux6 ~]$ lxc-ls
ol6cont1
The installation script performed a download of the required RPM packages to prepare a minimal installation of the latest version of Oracle Linux 6 (about 400 MB), from Oracle’s “public-yum” service.
The directory structure of the installed container can be found at /container/ol6cont1/rootfs, it can be browsed and modified like any other regular directory structure.
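For example, you can take a quick look at the top level of the container’s root file system from the host (the listing will show the usual top-level directories of a minimal Oracle Linux installation):

```shell
# The container's root file system is an ordinary directory tree on the host.
sudo ls /container/ol6cont1/rootfs
```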
The script also created two user accounts “root” and “oracle” (with passwords equaling the user names) and configured a virtual network device, which obtains an IP address via DHCP from the DHCP server provided by the libvirt framework. The container’s configuration file created by lxc-create is located at /container/ol6cont1/config and can be adapted and modified using a regular text editor.
[oracle@oraclelinux6 ~]$ cat /container/ol6cont1/config
# Template used to create this container: oracle
# Parameters passed to the template: --release=6.latest
# Template script checksum (SHA-1): 23df66b06d5d71bd5456f0ec573a57513ce3daf0
lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
# Container configuration for Oracle Linux 6.latest
lxc.arch = x86_64
lxc.utsname = ol6cont1
lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /container/ol6cont1/rootfs
lxc.mount = /container/ol6cont1/fstab
# Uncomment these if you don't run anything that needs the capability, and
# would like the container to run with less privilege.
#
# Dropping sys_admin disables container root from doing a lot of things
# that could be bad like re-mounting lxc fstab entries rw for example,
# but also disables some useful things like being able to nfs mount, and
# things that are already namespaced with ns_capable() kernel checks, like
# hostname(1).
# lxc.cap.drop = sys_admin
# lxc.cap.drop = net_raw # breaks dhcp/ping
# lxc.cap.drop = setgid # breaks login (initgroups/setgroups)
# lxc.cap.drop = dac_read_search # breaks login (pam unix_chkpwd)
# lxc.cap.drop = setuid # breaks sshd,nfs statd
# lxc.cap.drop = audit_control # breaks sshd (set_loginuid failed)
# lxc.cap.drop = audit_write
#
lxc.cap.drop = mac_admin mac_override setfcap setpcap
lxc.cap.drop = sys_module sys_nice sys_pacct
lxc.cap.drop = sys_rawio sys_time
lxc.cap.drop = sys_resource
# Networking
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = fe:cb:d8:32:d3:5e
# Control Group devices: all denied except those whitelisted
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero
lxc.cgroup.devices.allow = c 1:7 rwm # /dev/full
lxc.cgroup.devices.allow = c 5:0 rwm # /dev/tty
lxc.cgroup.devices.allow = c 1:8 rwm # /dev/random
lxc.cgroup.devices.allow = c 1:9 rwm # /dev/urandom
lxc.cgroup.devices.allow = c 136:* rwm # /dev/tty[1-4] ptys and lxc console
lxc.cgroup.devices.allow = c 5:2 rwm # /dev/ptmx pty master
Exercise: Cloning an existing container
Before making any changes, it’s recommended to create a snapshot of the container first, which can act as a backup copy and template from which we can quickly spawn additional containers based on this snapshot:
[oracle@oraclelinux6 ~]$ sudo lxc-clone -o ol6cont1 -n ol6cont2
Tweaking configuration
Copying rootfs...
Create a snapshot of '/container/ol6cont1/rootfs' in '/container/ol6cont2/rootfs'
Updating rootfs...
'ol6cont2' created
[oracle@oraclelinux6 ~]$ lxc-ls -1
ol6cont1
ol6cont2
Since we created our container storage on top of a Btrfs file system, the lxc-clone script used Btrfs’ snapshotting/cloning functionality to create a snapshot of the first container’s root file system:
[oracle@oraclelinux6 ~]$ sudo btrfs subvolume list /container/
ID 256 gen 53 top level 5 path ol6cont1/rootfs
ID 263 gen 54 top level 5 path ol6cont2/rootfs
Exercise: Starting and stopping a container
Now that the container’s file system has been installed, you can start the container using the lxc-start command:
[oracle@oraclelinux6 ~]$ sudo lxc-info -n ol6cont1
state:   STOPPED
pid:        -1
[oracle@oraclelinux6 ~]$ sudo lxc-start -n ol6cont1 -d -o /container/ol6cont1/ol6cont1.log
[oracle@oraclelinux6 ~]$ sudo lxc-info -n ol6cont1
state:   RUNNING
pid:      3001
The container has now been started by lxc-start in the background (courtesy of the -d option). By passing the option -o, any log messages are redirected to the file /container/ol6cont1/ol6cont1.log. As you can tell from the output of lxc-info, the container ol6cont1 has been started and is now in state RUNNING.
A container can be shut down in various ways: either by calling lxc-shutdown (for an orderly shutdown) or lxc-stop (for immediate termination) from the host, or from within the container using the usual commands like shutdown -h or poweroff.
[oracle@oraclelinux6 ~]$ sudo lxc-shutdown -n ol6cont1
[oracle@oraclelinux6 ~]$ sudo lxc-info -n ol6cont1
state:   STOPPED
pid:        -1
Restart the container using lxc-start again, to continue with the exercises.
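If a container ever hangs and does not react to an orderly shutdown, the immediate-termination variant mentioned above can be used instead. A minimal sketch; note that lxc-stop kills the container’s processes without giving them a chance to shut down cleanly:

```shell
# Terminate the container immediately (no orderly shutdown inside the container).
sudo lxc-stop -n ol6cont1

# Verify that it has stopped.
sudo lxc-info -n ol6cont1
```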
Exercise: Logging into a container
Now you can log into the container instance’s console using the lxc-console command and take a look at its configuration.
The container’s root password defaults to root; it is strongly recommended to change this to a more secure password using the passwd command before deploying a container on an untrusted network!
[oracle@oraclelinux6 ~]$ sudo lxc-console -n ol6cont1
Oracle Linux Server release 6.5 Kernel 3.8.13-16.2.2.el6uek.x86_64 on an x86_64ol6cont1 login: root Password: root [root@ol6cont1 ~]# cat /etc/oracle-release Oracle Linux Server release 6.5 [root@ol6cont1 ~]# ps x
PID TTY STAT TIME COMMAND 1 ? Ss 0:00 /sbin/init 184 ? Ss 0:00 /sbin/dhclient -H ol6cont1 -1 -q -lf /var/lib/dhclien 207 ? Sl 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5 249 ? Ss 0:00 /usr/sbin/sshd 256 lxc/console Ss+ 0:00 /sbin/mingetty /dev/console 260 ? Ss 0:00 login -- root 262 lxc/tty2 Ss+ 0:00 /sbin/mingetty /dev/tty2 264 lxc/tty3 Ss+ 0:00 /sbin/mingetty /dev/tty3 266 lxc/tty4 Ss+ 0:00 /sbin/mingetty /dev/tty4 267 lxc/tty1 Ss 0:00 -bash 278 lxc/tty1 R+ 0:00 ps x[root@ol6cont1 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr FE:1E:10:07:2C:C0
          inet addr:192.168.122.230  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::fc1e:10ff:fe07:2cc0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:103 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6046 (5.9 KiB)  TX bytes:1278 (1.2 KiB)
[root@ol6cont1 ~]# ip route
default via 192.168.122.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1006
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.62
[root@ol6cont1 ~]# logout
Oracle Linux Server release 6.5
Kernel 3.8.13-16.2.2.el6uek.x86_64 on an x86_64

ol6cont1 login: CTRL-a q
Alternatively, you can also log in to the container using Secure Shell (SSH) from the host system. All containers have their own IP address and are connected to a virtual bridge device virbr0 by default, which is also reachable from the host system (but not from outside the host). This way, you can easily set up simple client/server architectures within a host system. To obtain the currently assigned IP addresses, take a look at the default.leases file from dnsmasq running on the host:
[oracle@oraclelinux6 ~]$ grep ol6cont1 /var/lib/libvirt/dnsmasq/default.leases
1379336654 fe:1e:10:07:2c:c0 192.168.122.230 ol6cont1 *
[oracle@oraclelinux6 ~]$ ssh oracle@192.168.122.230
The authenticity of host '192.168.122.230 (192.168.122.230)' can't be established.
RSA key fingerprint is 29:5b:05:d4:0e:89:ef:a4:76:19:51:35:86:a1:89:b8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.230' (RSA) to the list of known hosts.
oracle@192.168.122.230's password: oracle
[oracle@ol6cont1 ~]$ logout
Connection to 192.168.122.230 closed.
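The lease lookup above is easy to script. The sketch below extracts a container's IP address from a dnsmasq leases file in the format shown; it writes its own sample data to a temporary file (with addresses taken from this lab's output) so it can run on any machine, whereas on a real host you would point the helper at /var/lib/libvirt/dnsmasq/default.leases:

```shell
#!/bin/bash
# Sketch: look up a container's IPv4 address in a dnsmasq leases file.
# Field layout, as seen above: expiry-epoch  MAC  IP  hostname  client-id
LEASES=$(mktemp)
cat > "$LEASES" <<'EOF'
1379336654 fe:1e:10:07:2c:c0 192.168.122.230 ol6cont1 *
EOF

container_ip() {
    # $1 = leases file, $2 = container host name
    awk -v name="$2" '$4 == name {print $3}' "$1"
}

container_ip "$LEASES" ol6cont1    # prints 192.168.122.230
rm -f "$LEASES"
```

A helper like this is handy for scripting, e.g. `ssh oracle@$(container_ip /var/lib/libvirt/dnsmasq/default.leases ol6cont1)`.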
Exercise: Updating and installing software inside a container
The container’s system configuration can be modified using the usual operating system tools (e.g. yum or rpm to install additional software). Log into the container ol6cont1 (using lxc-console or ssh, see the previous exercise for details) and install and enable the Apache web server:
[root@ol6cont1 ~]# yum install httpd
ol6_latest                                               | 1.4 kB     00:00
ol6_latest/primary                                       |  34 MB     01:48
ol6_latest                                                          24365/24365
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.2.15-29.0.1.el6_4 will be installed
--> Processing Dependency: httpd-tools = 2.2.15-29.0.1.el6_4 for package: httpd-2.2.15-29.0.1.el6_4.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.2.15-29.0.1.el6_4.x86_64
--> Processing Dependency: apr-util-ldap for package: httpd-2.2.15-29.0.1.el6_4.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.2.15-29.0.1.el6_4.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.2.15-29.0.1.el6_4.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.3.9-5.el6_2 will be installed
---> Package apr-util.x86_64 0:1.3.9-3.el6_0.1 will be installed
---> Package apr-util-ldap.x86_64 0:1.3.9-3.el6_0.1 will be installed
---> Package httpd-tools.x86_64 0:2.2.15-29.0.1.el6_4 will be installed
---> Package mailcap.noarch 0:2.1.31-2.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch       Version                   Repository        Size
================================================================================
Installing:
 httpd             x86_64     2.2.15-29.0.1.el6_4       ol6_latest       820 k
Installing for dependencies:
 apr               x86_64     1.3.9-5.el6_2             ol6_latest       122 k
 apr-util          x86_64     1.3.9-3.el6_0.1           ol6_latest        87 k
 apr-util-ldap     x86_64     1.3.9-3.el6_0.1           ol6_latest        15 k
 httpd-tools       x86_64     2.2.15-29.0.1.el6_4       ol6_latest        72 k
 mailcap           noarch     2.1.31-2.el6              ol6_latest        26 k

Transaction Summary
================================================================================
Install       6 Package(s)

Total download size: 1.1 M
Installed size: 3.6 M
Is this ok [y/N]: y
Downloading Packages:
(1/6): apr-1.3.9-5.el6_2.x86_64.rpm                      | 122 kB     00:00
(2/6): apr-util-1.3.9-3.el6_0.1.x86_64.rpm               |  87 kB     00:00
(3/6): apr-util-ldap-1.3.9-3.el6_0.1.x86_64.rpm          |  15 kB     00:00
(4/6): httpd-2.2.15-29.0.1.el6_4.x86_64.rpm              | 820 kB     00:01
(5/6): httpd-tools-2.2.15-29.0.1.el6_4.x86_64.rpm        |  72 kB     00:00
(6/6): mailcap-2.1.31-2.el6.noarch.rpm                   |  26 kB     00:00
--------------------------------------------------------------------------------
Total                                           410 kB/s | 1.1 MB     00:02
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
 Userid : Oracle OSS group (Open Source Software group) <build@oss.oracle.com>
 Package: 6:oraclelinux-release-6Server-5.0.2.x86_64 (@ol6_latest/$releasever)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : apr-1.3.9-5.el6_2.x86_64                                     1/6
  Installing : apr-util-1.3.9-3.el6_0.1.x86_64                              2/6
  Installing : apr-util-ldap-1.3.9-3.el6_0.1.x86_64                         3/6
  Installing : httpd-tools-2.2.15-29.0.1.el6_4.x86_64                       4/6
  Installing : mailcap-2.1.31-2.el6.noarch                                  5/6
  Installing : httpd-2.2.15-29.0.1.el6_4.x86_64                             6/6
  Verifying  : httpd-2.2.15-29.0.1.el6_4.x86_64                             1/6
  Verifying  : apr-util-ldap-1.3.9-3.el6_0.1.x86_64                         2/6
  Verifying  : apr-1.3.9-5.el6_2.x86_64                                     3/6
  Verifying  : httpd-tools-2.2.15-29.0.1.el6_4.x86_64                       4/6
  Verifying  : mailcap-2.1.31-2.el6.noarch                                  5/6
  Verifying  : apr-util-1.3.9-3.el6_0.1.x86_64                              6/6

Installed:
  httpd.x86_64 0:2.2.15-29.0.1.el6_4

Dependency Installed:
  apr.x86_64 0:1.3.9-5.el6_2
  apr-util.x86_64 0:1.3.9-3.el6_0.1
  apr-util-ldap.x86_64 0:1.3.9-3.el6_0.1
  httpd-tools.x86_64 0:2.2.15-29.0.1.el6_4
  mailcap.noarch 0:2.1.31-2.el6

Complete!
[root@ol6cont1 ~]# service httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
                                                           [  OK  ]
[root@ol6cont1 ~]# chkconfig httpd on
Next, create a simple test page in /var/www/html/index.html with the following content:

<html>
<head>
<title>ol6cont1 test page</title>
</head>
<body>
<h1>ol6cont1 web server is running</h1>
Congratulations, the web server in container ol6cont1 is working properly!
</body>
</html>
Try to open the container’s IP address (e.g. 192.168.122.230 in our example) in the host’s Firefox browser:
[Screenshot firefox-ol6cont1.png: the ol6cont1 test page rendered in Firefox on the host]
The Apache web server running within the ol6cont1 container has successfully delivered the web page you created!
Exercise: Installing and starting an Oracle Linux 5 container
Now repeat the exercises above and create an Oracle Linux 5 (latest version) container named ol5cont1. Then create a clone named ol5cont2 and start it up afterwards, so both an Oracle Linux 5 and 6 container are running in parallel.
The correct command sequence is shown below (the output has been omitted for brevity):
[oracle@oraclelinux6 ~]$ sudo lxc-create -n ol5cont1 -t oracle -- --release=5.latest
[oracle@oraclelinux6 ~]$ sudo lxc-clone -o ol5cont1 -n ol5cont2
[oracle@oraclelinux6 ~]$ lxc-ls -1
ol5cont1
ol5cont2
ol6cont1
ol6cont2
[oracle@oraclelinux6 ~]$ sudo lxc-start -n ol5cont1 -d -o /container/ol5cont1/ol5cont1.log
[oracle@oraclelinux6 ~]$ lxc-ls --active -1
ol5cont1
ol6cont1
[oracle@oraclelinux6 ~]$ sudo lxc-console -n ol5cont1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Oracle Linux Server release 5.10
Kernel 3.8.13-16.2.2.el6uek.x86_64 on an x86_64

ol5cont1 login: oracle
Password: oracle
[oracle@ol5cont1 ~]$ cat /etc/oracle-release
Oracle Linux Server release 5.10
[oracle@ol5cont1 ~]$ ping -c 1 ol6cont1
PING ol6cont1 (192.168.122.62) 56(84) bytes of data.
64 bytes from ol6cont1 (192.168.122.62): icmp_seq=1 ttl=64 time=0.100 ms

--- ol6cont1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
[oracle@ol5cont1 ~]$ logout
[Ctrl+a, q]
Log into ol5cont1 as the root user to install w3m:
[root@ol5cont1 ~]# yum install w3m
el5_latest                                               | 1.4 kB     00:00
el5_latest/primary                                       |  18 MB     00:30
el5_latest                                                          12549/12549
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package w3m.x86_64 0:0.5.1-18.0.1.el5 set to be updated
--> Processing Dependency: /usr/bin/perl for package: w3m
--> Processing Dependency: perl for package: w3m
--> Processing Dependency: libgpm.so.1()(64bit) for package: w3m
--> Running transaction check
---> Package gpm.x86_64 0:1.20.1-74.1.0.1 set to be updated
---> Package perl.x86_64 4:5.8.8-41.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package     Arch        Version                  Repository         Size
================================================================================
Installing:
 w3m         x86_64      0.5.1-18.0.1.el5         el5_latest        1.1 M
Installing for dependencies:
 gpm         x86_64      1.20.1-74.1.0.1          el5_latest        191 k
 perl        x86_64      4:5.8.8-41.el5           el5_latest         12 M

Transaction Summary
================================================================================
Install       3 Package(s)
Upgrade       0 Package(s)

Total download size: 14 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): gpm-1.20.1-74.1.0.1.x86_64.rpm                    | 191 kB     00:00
(2/3): w3m-0.5.1-18.0.1.el5.x86_64.rpm                   | 1.1 MB     00:02
(3/3): perl-5.8.8-41.el5.x86_64.rpm                      |  12 MB     00:22
--------------------------------------------------------------------------------
Total                                           450 kB/s |  14 MB     00:31
warning: rpmts_HdrFromFdno: Header V3 DSA signature: NOKEY, key ID 1e5e0159
el5_latest/gpgkey                                        | 1.4 kB     00:00
Importing GPG key 0x1E5E0159 "Oracle OSS group (Open Source Software group) <build@oss.oracle.com>" from /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: yes
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : perl                                                     1/3
  Installing     : gpm                                                      2/3
  Installing     : w3m                                                      3/3

Installed:
  w3m.x86_64 0:0.5.1-18.0.1.el5

Dependency Installed:
  gpm.x86_64 0:1.20.1-74.1.0.1  perl.x86_64 4:5.8.8-41.el5

Complete!
[root@ol5cont1 ~]# w3m -dump http://ol6cont1/
ol6cont1 web server is running

Congratulations, the web server in container ol6cont1 is working properly!
Exercise: Monitoring containers
Use lxc-ps on the host to get a list of processes running in a given container:
[oracle@oraclelinux6 ~]$ lxc-ps -n ol5cont1
CONTAINER  PID   TTY     TIME     CMD
ol5cont1   7179  ?       00:00:00 init
ol5cont1   7470  ?       00:00:00 dhclient
ol5cont1   7522  ?       00:00:00 rsyslogd
ol5cont1   7551  ?       00:00:00 sshd
ol5cont1   7560  pts/11  00:00:00 mingetty
ol5cont1   7562  pts/8   00:00:00 mingetty
ol5cont1   7563  pts/9   00:00:00 mingetty
ol5cont1   7564  pts/10  00:00:00 mingetty
ol5cont1   7609  pts/7   00:00:00 mingetty
Similarly, lxc-netstat shows the network connections inside a given container:

[oracle@oraclelinux6 ~]$ sudo lxc-netstat -n ol6cont1 -ntlup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   -
tcp        0      0 :::80              :::*               LISTEN   -
tcp        0      0 :::22              :::*               LISTEN   -
udp        0      0 0.0.0.0:68         0.0.0.0:*                   -
If you’d like to determine the amount of memory currently used by a given container, you can obtain this information from the control groups subsystem:
[oracle@oraclelinux6 ~]$ lxc-cgroup -n ol6cont1 memory.usage_in_bytes
169033728
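The raw byte counter is easier to read when converted to MiB. The sketch below does the conversion with plain shell arithmetic; the `mib` helper is our own name, and the hardcoded value is the sample reading from the output above (on a real host it would come from the lxc-cgroup call shown in the comment):

```shell
#!/bin/bash
# Sketch: report a container's cgroup memory usage in MiB.
# On a real host: usage=$(sudo lxc-cgroup -n ol6cont1 memory.usage_in_bytes)
usage=169033728   # sample value taken from the transcript above

mib() {
    # Integer division: bytes -> MiB
    echo $(( $1 / 1024 / 1024 ))
}

echo "ol6cont1 memory usage: $(mib "$usage") MiB"   # prints: ol6cont1 memory usage: 161 MiB
```

The same pattern works for other cgroup counters such as memory.max_usage_in_bytes.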
To monitor the state of a container, use the lxc-monitor command. Open a second terminal and start lxc-monitor with the command below. Then start and stop your containers using the lxc-start and lxc-shutdown commands from the first shell and observe how lxc-monitor reports the state changes:
[oracle@oraclelinux6 ~]$ lxc-monitor -n ".*"
'ol6cont2' changed state to [STARTING]
'ol6cont2' changed state to [RUNNING]
'ol6cont2' changed state to [STOPPING]
'ol6cont2' changed state to [STOPPED]
'ol5cont1' changed state to [STARTING]
'ol5cont1' changed state to [RUNNING]
Exercise: Changing a container’s network configuration
By default, the lxc-oracle template script sets up networking by setting up a virtual Ethernet (veth) bridge. In this mode, a container obtains its IP address from the dnsmasq server that libvirtd runs on the private virtual bridge network (virbr0) between the container and the host. The host allows a container to connect to the rest of the network by using NAT rules in iptables, but these rules do not allow incoming connections to the container. Both the host and other containers on the veth bridge have network access to the container via the bridge.

If you want to allow network connections from outside the host to reach the container, the container needs to have an IP address on the same network as the host. One way to achieve this configuration is to use a macvlan bridge to create an independent logical network for the container. This network is effectively an extension of the local network that is connected to the host's network interface. External systems can access the container as though it were an independent system on the network, and the container has network access to other containers that are configured on the bridge and to external systems. The container can also obtain its IP address from an external DHCP server on your local network. However, unlike a veth bridge, the host system does not have network access to the container.
To modify a container so that it uses the macvlan bridge, shut down the ol6cont1 container, edit /container/ol6cont1/config and look for the following lines:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
Replace them with the following lines, where eth1 is the host network interface the macvlan bridge should attach to:

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = eth1
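Editing the file by hand works fine; the substitution can also be scripted. The sketch below rewrites the veth stanza into the macvlan form. It operates on a temporary copy of the three lines so it can run anywhere; on a real host you would target /container/ol6cont1/config instead, and it assumes the stanza appears exactly as shown above (eth1 being the example host interface):

```shell
#!/bin/bash
# Sketch: switch a container config from the veth bridge to a macvlan bridge.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
EOF

# Change the network type and point the link at the host NIC...
sed -i -e 's/^lxc.network.type = veth/lxc.network.type = macvlan/' \
       -e 's/^lxc.network.link = virbr0/lxc.network.link = eth1/' "$CFG"
# ...then add the macvlan bridge mode right after the type line (GNU sed)
sed -i '/^lxc.network.type = macvlan/a lxc.network.macvlan.mode = bridge' "$CFG"

cat "$CFG"
rm -f "$CFG"
```

Remember to shut the container down before changing its configuration, as described above.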
Exercise: Destroying containers
Containers that are no longer needed can be discarded using the lxc-destroy command. Use the -f option to stop the container if it’s still running (which would otherwise abort the container destruction):
[oracle@oraclelinux6 ~]$ lxc-ls
ol5cont1
ol5cont2
ol6cont1
ol6cont2
[oracle@oraclelinux6 ~]$ lxc-ls --active
ol5cont2
ol6cont1
[oracle@oraclelinux6 ~]$ sudo lxc-destroy -n ol5cont2
lxc-destroy: 'ol5cont2' is RUNNING; aborted
[oracle@oraclelinux6 ~]$ sudo lxc-destroy -f -n ol5cont2
Delete subvolume '/container/ol5cont2/rootfs'
[oracle@oraclelinux6 ~]$ lxc-ls --active
ol6cont1
[oracle@oraclelinux6 ~]$ lxc-ls
ol5cont1
ol6cont1
ol6cont2
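The set of containers that lxc-destroy can remove without -f is the difference between the full list and the active list. The sketch below computes that set with comm; it uses the container names from the session above as sample data so it runs anywhere, whereas on a real host the two lists would come from `lxc-ls -1` and `lxc-ls --active -1`:

```shell
#!/bin/bash
# Sketch: find containers that exist but are not running, i.e. safe
# candidates for lxc-destroy without -f. Sample data mirrors the session.
all_f=$(mktemp); active_f=$(mktemp)
printf 'ol5cont1\nol5cont2\nol6cont1\nol6cont2\n' | sort > "$all_f"
printf 'ol5cont2\nol6cont1\n' | sort > "$active_f"

# comm -23 prints lines that appear only in the first (sorted) file
stopped=$(comm -23 "$all_f" "$active_f")
echo "$stopped"
# A real cleanup loop would then be:
#   for c in $stopped; do sudo lxc-destroy -n "$c"; done
rm -f "$all_f" "$active_f"
```

With the sample lists above this prints ol5cont1 and ol6cont2, matching the containers that are stopped in the transcript.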
Conclusion
In this hands-on lab, we covered the basics of working with Linux Containers (LXC). Hopefully this information was useful and made you curious to learn more about this technology, which is still evolving. If you'd like to learn more about this topic, there is a dedicated chapter on Linux containers in the Oracle Linux Administrator's Solutions Guide. It covers the creation, configuration, starting/stopping and monitoring of containers in more detail. Also take a look at the following resources for more details and practical hints.
References
- Chapter: Linux Containers in the Oracle Linux 6 Administrator’s Solutions Guide
- Oracle Linux Technology Spotlight: LXC — Linux Containers
- Wikipedia: Linux Containers
- OTN Garage blog: Linux-Containers — Part 1: Overview
- OTN Garage blog: Linux Container (LXC) — Part 2: Working With Containers
- OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy
- Video on the Oracle Linux YouTube channel: Linux Containers Explained
- Linux Advocates: Linux Containers and Why They Matter
- OTN Article: How I Used CGroups to Manage System Resources In Oracle Linux 6
- libvirt – The virtualization API