Source:
https://wikis.oracle.com/display/virtualsysadminday/Getting+Started+with+Linux+Containers+on+Oracle+Linux+7
Introduction
In this hands-on lab you will learn the basics of working with Linux Containers on Oracle Linux 7:
- Introduction
- Requirements
- Exercise: Installing and configuring additionally required software packages
- Exercise: Creating and mounting a Btrfs volume for the container storage
- Exercise: Creating a container
- Exercise: Cloning an existing container
- Exercise: Starting and stopping a container
- Exercise: Logging into a container
- Exercise: Updating and installing software inside a container
- Installing and starting an Oracle Linux 6.5 container
- Exercise: Monitoring containers
- Exercise: Changing a container's network configuration
- Exercise: Destroying containers
- Conclusion
- References
Linux Containers (LXC) provide a means to isolate individual services or applications, as well as a complete Linux operating system, from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or on the host system are not visible from inside a container. Additionally, Linux Containers allow for fine-grained control of resources like RAM, CPU or disk I/O.
Generally speaking, Linux Containers use a completely different approach than "classical" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based). An application running inside a container is executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O resources, so Linux Containers can offer the best possible performance along with several possibilities for managing and sharing the available resources.
Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided the distribution supports the version of the Linux kernel that runs on the host. This approach has one caveat, though: if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well.
Some use cases for Linux Containers include:
- Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space.
- Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his or her "own" application instance, with a defined level of service/performance. This prevents one user's application from hogging the entire system and ensures that each user only has access to his or her own data set. It also helps to save main memory: if multiple instances of the same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use; they are generally held in memory once and mapped to multiple processes.
- Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system is always reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications.
- Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system.
The creation of Oracle Linux containers can be accomplished on the command line in a few steps, using the LXC utilities. So far, there is no integration or support for this technology in applications like Oracle VM Manager or Oracle Enterprise Manager.
Hint: If you want to learn more about Linux Containers, the Oracle Linux Administrator's Solutions Guide for Release 7 has a dedicated chapter about this technology.
Requirements
The Oracle Linux 7.0 virtual appliance should be up and running (from the initial snapshot) and you should be logged in as the Oracle Linux user with a terminal window open to enter the following commands. You should have some basic experience with working on a Linux command line, e.g. opening and editing files, moving around the file system directory structure, and running commands.
Exercise: Installing and configuring additionally required software packages
For this lab, stop NetworkManager, disable SELinux in /etc/selinux/config and stop firewalld on the host:
# service NetworkManager stop
# nano /etc/selinux/config
SELINUX=disabled
# service firewalld stop
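Editing /etc/selinux/config only takes effect after a reboot; if you also want to switch SELinux to permissive mode right away and keep the other two services from starting at boot, a small addition (not part of the original lab steps) would be:
# setenforce 0
# systemctl disable NetworkManager firewalld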
To properly support and work with Linux Containers, the following packages (and their dependencies) need to be installed on the system: btrfs-progs, lxc, libvirt and libcgroup.
First, verify which of the required packages are currently installed. You can use the command rpm -q btrfs-progs libcgroup libvirt lxc:
[root@localhost ~]# rpm -q btrfs-progs libcgroup libvirt lxc
btrfs-progs-3.12-4.el7.x86_64
libcgroup-0.41-6.el7.x86_64
package libvirt is not installed
package lxc is not installed
You can see from the output that two of the required packages are missing: libvirt and lxc.
To install the missing packages, you must first log in as the root user. For the purpose of this lab and the remaining exercises, simply log in as root using the password provided for the virtual image. Next, we will install the missing packages. In Oracle Linux 7, both btrfs-progs and libcgroup are installed by default with a minimal system image, and the libvirt package is pulled in as a dependency of lxc. The only package that needs to be installed explicitly is lxc:
[root@localhost ~]# yum install lxc
Loaded plugins: langpacks Resolving Dependencies --> Running transaction check ---> Package lxc.x86_64 0:1.0.4-2.0.3.el7 will be installed --> Processing Dependency: libvirt for package: lxc-1.0.4-2.0.3.el7.x86_64 --> Processing Dependency: liblxc.so.1()(64bit) for package: lxc-1.0.4-2.0.3.el7.x86_64 --> Running transaction check ---> Package libvirt.x86_64 0:1.1.1-29.0.1.el7_0.1 will be installed --> Processing Dependency: libvirt-daemon-driver-lxc = 1.1.1-29.0.1.el7_0.1 for package: libvirt-1.1.1-29.0.1.el7_0.1.x86_64 --> Processing Dependency: libvirt-daemon-config-nwfilter = 1.1.1-29.0.1.el7_0.1 for package: libvirt-1.1.1-29.0.1.el7_0.1.x86_64 --> Processing Dependency: libvirt-daemon-config-network = 1.1.1-29.0.1.el7_0.1 for package: libvirt-1.1.1-29.0.1.el7_0.1.x86_64 ---> Package lxc-libs.x86_64 0:1.0.4-2.0.3.el7 will be installed --> Running transaction check ---> Package libvirt-daemon-config-network.x86_64 0:1.1.1-29.0.1.el7_0.1 will be installed ---> Package libvirt-daemon-config-nwfilter.x86_64 0:1.1.1-29.0.1.el7_0.1 will be installed ---> Package libvirt-daemon-driver-lxc.x86_64 0:1.1.1-29.0.1.el7_0.1 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: lxc x86_64 1.0.4-2.0.3.el7 ol7_latest 199 k Installing for dependencies: libvirt x86_64 1.1.1-29.0.1.el7_0.1 ol7_latest 70 k libvirt-daemon-config-network x86_64 1.1.1-29.0.1.el7_0.1 ol7_latest 69 k libvirt-daemon-config-nwfilter x86_64 1.1.1-29.0.1.el7_0.1 ol7_latest 73 k libvirt-daemon-driver-lxc x86_64 1.1.1-29.0.1.el7_0.1 ol7_latest 154 k lxc-libs x86_64 1.0.4-2.0.3.el7 ol7_latest 186 k Transaction Summary ================================================================================ Install 1 Package (+5 Dependent packages) Total download size: 750 k Installed size: 1.3 M Is this ok []: y Downloading packages: (1/6): libvirt-1.1.1-29.0.1.el7_0.1.x86_64.rpm | 70 kB 00:00 (2/6): libvirt-daemon-config-network-1.1.1-29.0.1.el7_0.1. 
| 69 kB 00:00 (3/6): libvirt-daemon-config-nwfilter-1.1.1-29.0.1.el7_0.1 | 73 kB 00:00 (4/6): libvirt-daemon-driver-lxc-1.1.1-29.0.1.el7_0.1.x86_ | 154 kB 00:00 (5/6): lxc-1.0.4-2.0.3.el7.x86_64.rpm | 199 kB 00:00 (6/6): lxc-libs-1.0.4-2.0.3.el7.x86_64.rpm | 186 kB 00:00 -------------------------------------------------------------------------------- Total 280 kB/s | 750 kB 00:02 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : lxc-libs-1.0.4-2.0.3.el7.x86_64 1/6 Installing : libvirt-daemon-driver-lxc-1.1.1-29.0.1.el7_0.1.x86_64 2/6 Installing : libvirt-daemon-config-network-1.1.1-29.0.1.el7_0.1.x86_64 3/6 Installing : libvirt-daemon-config-nwfilter-1.1.1-29.0.1.el7_0.1.x86_64 4/6 Installing : libvirt-1.1.1-29.0.1.el7_0.1.x86_64 5/6 Installing : lxc-1.0.4-2.0.3.el7.x86_64 6/6 Verifying : lxc-1.0.4-2.0.3.el7.x86_64 1/6 Verifying : libvirt-daemon-config-nwfilter-1.1.1-29.0.1.el7_0.1.x86_64 2/6 Verifying : libvirt-daemon-config-network-1.1.1-29.0.1.el7_0.1.x86_64 3/6 Verifying : libvirt-1.1.1-29.0.1.el7_0.1.x86_64 4/6 Verifying : libvirt-daemon-driver-lxc-1.1.1-29.0.1.el7_0.1.x86_64 5/6 Verifying : lxc-libs-1.0.4-2.0.3.el7.x86_64 6/6 Installed: lxc.x86_64 0:1.0.4-2.0.3.el7 Dependency Installed: libvirt.x86_64 0:1.1.1-29.0.1.el7_0.1 libvirt-daemon-config-network.x86_64 0:1.1.1-29.0.1.el7_0.1 libvirt-daemon-config-nwfilter.x86_64 0:1.1.1-29.0.1.el7_0.1 libvirt-daemon-driver-lxc.x86_64 0:1.1.1-29.0.1.el7_0.1 lxc-libs.x86_64 0:1.0.4-2.0.3.el7 Complete!
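To confirm that everything is now in place, you can re-run the package query from the beginning of this exercise (a quick check, not part of the original transcript):
[root@localhost ~]# rpm -q btrfs-progs libcgroup libvirt lxc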
The LXC template scripts are installed in /usr/share/lxc/templates:
[root@localhost ~]# ls /usr/share/lxc/templates/
lxc-altlinux lxc-debian lxc-opensuse lxc-ubuntu lxc-archlinux lxc-fedora lxc-oracle lxc-ubuntu-cloud lxc-busybox lxc-lenny lxc-sshd
As you can see, the LXC distribution contains templates for other Linux distributions as well. However, the focus of this lab session is working with Oracle Linux containers, so we will use the lxc-oracle template.
Linux Control Groups (cgroups) are an essential component of Linux Containers. In Oracle Linux 7, the cgroup subsystems (also known as resource controllers) are mounted automatically by systemd and can be viewed in /proc/cgroups.
To view the available control groups, type the following at the command line as the root user:
[root@localhost ~]# cat /proc/cgroups
#subsys_name    hierarchy  num_cgroups  enabled
cpuset          2          1            1
cpu             3          1            1
cpuacct         3          1            1
memory          4          1            1
devices         5          1            1
freezer         6          1            1
net_cls         7          1            1
blkio           8          1            1
perf_event      9          1            1
hugetlb         10         1            1
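Each of these controllers is mounted by systemd under /sys/fs/cgroup. If you want to see where the individual hierarchies live, a quick look (not part of the original lab output) is:
[root@localhost ~]# ls /sys/fs/cgroup/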
Another key component for containers is the libvirtd service, which provides a host-internal network bridge and a DHCP/DNS service that will be used to automatically configure the network settings of the containers we will create later in this lab. As with many services running on Oracle Linux 7, systemd provides the details on the status of the libvirtd service. To start it and verify its status, type the following commands as root at the command line:
[root@localhost ~]# systemctl start libvirtd.service
[root@localhost ~]# systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Mon 2014-09-22 14:53:30 EDT; 1h 38min ago
 Main PID: 1144 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─1144 /usr/sbin/libvirtd

Sep 22 14:53:30 localhost.localdomain libvirtd[1144]: libvirt version: 1.1.1,...
Sep 22 14:53:30 localhost.localdomain libvirtd[1144]: Module /usr/lib64/libvi...
Sep 22 14:53:30 localhost.localdomain systemd[1]: Started Virtualization daemon.
Hint: Some lines were ellipsized, use -l to show in full.
The virtualization management service, libvirtd, is enabled by default with Oracle Linux 7.
For more information about libvirt's virtual networking functionality, please consult the libvirt wiki.
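You can also take a quick look at the host-internal bridge that libvirt sets up for its default network (a quick check, assuming the default network uses the bridge device virbr0 as shown later in this lab):
[root@localhost ~]# ip addr show virbr0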
Now that we've concluded all the necessary preparations, let's check the configuration using the lxc-checkconfig script:
[root@localhost ~]# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.8.13-44.el7uek.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Looks like we're good to go!
Exercise: Creating and mounting a Btrfs volume for the container storage
First, a dedicated directory should be created to host the container file systems. The default location is /container. Creating this directory on top of a Btrfs file system provides a few additional interesting possibilities, e.g. the option to "freeze" a container file system at a certain point in time, or the fast creation (cloning) of additional containers based on a template. Cloning containers using Btrfs snapshots happens instantly and requires no additional disk space except for the differences to the original template.
The creation and management of Btrfs file systems is explained in detail in the chapter on the Btrfs file system in the Oracle Linux Administrator's Solutions Guide for Release 7. For some practical examples, take a look at the Hands-on lab - Storage Management with Btrfs.
In our virtual lab environment, we first need to partition the free space (/dev/sdb) for our Btrfs file system. As root, type the following at the command line:
[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help):
Next, select 'n' to create a new partition and accept the defaults:
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-12582911, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-12582911, default 12582911):
Using default value 12582911
Partition 1 of type Linux and of size 6 GiB is set
Finally, verify and write the new partition:
Command (m for help): v
Partition 1: cylinder 1024 greater than maximum 783
Partition 1: previous sectors 12582911 disagrees with total 3145727
Remaining 2047 unallocated 512-byte sectors
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Now that you have created the partition, we will create the Btrfs file system and mount the volume to /container (the default container directory created during the installation of the lxc package).
As root, from the command line, type the following:
[root@localhost ~]# mkfs.btrfs /dev/sdb1
[root@localhost ~]# mount /dev/sdb1 /container
WARNING! - Btrfs v3.12 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
fs created label (null) on /dev/sdb
        nodesize 16384 leafsize 16384 sectorsize 4096 size 6.00GiB
Btrfs v3.12
[root@localhost ~]# mount /dev/sdb1 /container
Done! Use df to verify that your partition is mounted and ready. You should see the file system /dev/sdb1 mounted on /container as shown below:
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       6.8G  3.5G  3.3G  53% /
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           1.9G  148K  1.9G   1% /dev/shm
tmpfs           1.9G  8.8M  1.8G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sdc        5.0G  3.7G  1.4G  74% /ISO
/dev/sda1       497M  197M  301M  40% /boot
/dev/sdb1       6.0G  320K  5.4G   1% /container
echo "/dev/sdb1 /container btrfs defaults 0 0" >> /etc/fstab
Exercise: Creating a container
There are several options for creating a container. For the purpose of this lab we will use two:
- An Oracle Linux 7 container created using the lxc-oracle template connecting to a public source (by default this would be public-yum.oracle.com; however, in this lab we will use a local yum mirror, oow-lab1.oracleworld.com).
- An Oracle Linux 6.5 container created using an ISO image that has been mounted locally on the lab virtual machine under /ISO.
First, we will create a container of the latest version of Oracle Linux 7 named "ol7cont1" using the option "-t", which determines the general type of the Linux distribution to be installed (the so-called "template"). In our lab we will use "oracle". Depending on the template, you can pass template-specific options after the double dashes ("--").
In the case of the Oracle Linux template, you can choose the distribution's release version by providing values like "5.11", "6.5" or "7.latest" to the --release option. If you will not be using public-yum.oracle.com as your default repository, you need to pass the option "-u" followed by the URL to your repodata information. In addition, by default Linux containers place their root file system on the host's rootfs, which on Oracle Linux 7 is an XFS file system. If you will be using the Btrfs file system instead, you must change the backing store during the create process by passing the option "-B btrfs".
Further information about the available configuration options can be found in chapter About the lxc-oracle Template Script of the Administrator's Guide for Oracle Linux 7.
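As an illustration of those template options, a create command pointed at an internal mirror might look like the following (a sketch only; the container name and mirror URL are hypothetical, and this is not part of the lab steps):
[root@localhost ~]# lxc-create -n ol7test -B btrfs -t oracle -- --release 7.latest -u http://yum.example.com/OracleLinux/OL7/latest/x86_64
In this lab we stick with the defaults, as shown next.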
Enter the following command to create an Oracle Linux 7 container, based on the latest available update release and using the default configuration options:
[root@localhost ~]# lxc-create -n ol7cont1 -B btrfs -t oracle -- -R 7.latest
Host is OracleServer 7.0 Create configuration file /container/ol7cont1/config Yum installing release 7.latest for x86_64 Loaded plugins: langpacks ol7_latest | 1.4 kB 00:00 (1/3): ol7_latest/updateinfo | 49 kB 00:00 (2/3): ol7_latest/group | 652 kB 00:02 (3/3): ol7_latest/primary | 5.0 MB 00:01 ol7_latest 6850/6850 Resolving Dependencies --> Running transaction check ---> Package chkconfig.x86_64 0:1.3.61-4.el7 will be installed --> Processing Dependency: libc.so.6(GLIBC_2.2.5)(64bit) for package: chkconfig-1.3.61-4.el7.x86_64 --> Processing Dependency: libc.so.6(GLIBC_2.8)(64bit) for package: chkconfig-1.3.61-4.el7.x86_64 --> Processing Dependency: libpopt.so.0(LIBPOPT_0)(64bit) for package: chkconfig-1.3.61-4.el7.x86_64 [.....] ---> Package dhclient.x86_64 12:4.2.5-27.0.1.el7_0.1 will be installed --> Processing Dependency: dhcp-common = 12:4.2.5-27.0.1.el7_0.1 for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 --> Processing Dependency: dhcp-libs(x86-64) = 12:4.2.5-27.0.1.el7_0.1 for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 --> Processing Dependency: /bin/bash for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 --> Processing Dependency: /bin/sh for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 --> Processing Dependency: iputils for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 --> Processing Dependency: grep for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 --> Processing Dependency: iproute for package: 12:dhclient-4.2.5-27.0.1.el7_0.1.x86_64 [.....] --> Processing Dependency: libtasn1.so.6()(64bit) for package: p11-kit-0.18.7-4.el7.x86_64 ---> Package p11-kit-trust.x86_64 0:0.18.7-4.el7 will be installed ---> Package pkgconfig.x86_64 1:0.27.1-4.el7 will be installed ---> Package readline.x86_64 0:6.2-9.el7 will be installed ---> Package tzdata.noarch 0:2014g-1.el7 will be installed --> Running transaction check ---> Package device-mapper.x86_64 7:1.02.84-14.el7 will be installed ---> Package libmnl.x86_64 0:1.0.3-7.el7 will be installed ---> Package libssh2.x86_64 0:1.4.3-8.el7 will be installed ---> Package libtasn1.x86_64 0:3.3-5.el7_0 will be installed ---> Package pinentry.x86_64 0:0.8.1-14.el7 will be installed ---> Package pth.x86_64 0:2.0.7-22.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: chkconfig x86_64 1.3.61-4.el7 ol7_latest 171 k dhclient x86_64 12:4.2.5-27.0.1.el7_0.1 ol7_latest 275 k initscripts x86_64 9.49.17-1.0.1.el7_0.1 ol7_latest 422 k [.....] policycoreutils x86_64 2.2.5-11.0.1.el7 ol7_latest 801 k rootfiles noarch 8.1-11.el7 ol7_latest 6.8 k rsyslog x86_64 7.4.7-6.0.1.el7 ol7_latest 555 k vim-minimal x86_64 2:7.4.160-1.el7 ol7_latest 434 k yum noarch 3.4.3-118.0.2.el7 ol7_latest 1.2 M Installing for dependencies: acl x86_64 2.2.51-12.el7 ol7_latest 80 k audit-libs x86_64 2.3.3-4.el7 ol7_latest 76 k basesystem noarch 10.0-7.0.1.el7 ol7_latest 4.5 k bash x86_64 4.2.45-5.el7 ol7_latest 995 k [.....] bzip2-libs x86_64 1.0.6-12.el7 ol7_latest 39 k ca-certificates noarch 2014.1.98-70.0.el7_0 ol7_latest 387 k coreutils x86_64 8.22-11.0.1.el7 ol7_latest 3.2 M [.....] 
util-linux x86_64 2.23.2-16.el7 ol7_latest 1.8 M xz x86_64 5.1.2-8alpha.el7 ol7_latest 199 k xz-libs x86_64 5.1.2-8alpha.el7 ol7_latest 100 k yum-metadata-parser x86_64 1.1.4-10.el7 ol7_latest 27 k zlib x86_64 1.2.7-13.el7 ol7_latest 88 k Transaction Summary ================================================================================ Install 12 Packages (+139 Dependent packages) Total download size: 72 M Installed size: 324 M Downloading packages: (1/151): audit-libs-2.3.3-4.el7.x86_64.rpm | 76 kB 00:00 (2/151): basesystem-10.0-7.0.1.el7.noarch.rpm | 4.5 kB 00:00 (3/151): acl-2.2.51-12.el7.x86_64.rpm | 80 kB 00:00 (4/151): bind-libs-lite-9.9.4-14.0.1.el7.x86_64.rpm | 709 kB 00:00 (5/151): bind-license-9.9.4-14.0.1.el7.noarch.rpm | 79 kB 00:00 [.....] (81/151): libtasn1-3.3-5.el7_0.x86_64.rpm | 316 kB 00:00 (82/151): libstdc++-4.8.2-16.2.el7_0.x86_64.rpm | 288 kB 00:00 (83/151): libutempter-1.1.6-4.el7.x86_64.rpm | 24 kB 00:00 (84/151): libuser-0.60-5.el7.x86_64.rpm | 395 kB 00:00 (85/151): libuuid-2.23.2-16.el7.x86_64.rpm | 71 kB 00:00 [.....] (149/151): yum-metadata-parser-1.1.4-10.el7.x86_64.rpm | 27 kB 00:00 (150/151): yum-3.4.3-118.0.2.el7.noarch.rpm | 1.2 MB 00:00 (151/151): zlib-1.2.7-13.el7.x86_64.rpm | 88 kB 00:00 -------------------------------------------------------------------------------- Total 2.3 MB/s | 72 MB 00:31 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : libgcc-4.8.2-16.2.el7_0.x86_64 1/151 [.....] Installing : openldap-2.4.39-3.el7.x86_64 85/151 Installing : libmount-2.23.2-16.el7.x86_64 86/151 Installing : libpwquality-1.2.3-4.el7.x86_64 87/151 Installing : systemd-libs-208-11.0.1.el7_0.2.x86_64 88/151 [.....] Installing : python-libs-2.7.5-16.el7.x86_64 125/151 Installing : python-2.7.5-16.el7.x86_64 126/151 [.....] Installing : passwd-0.79-4.el7.x86_64 149/151 Installing : 2:vim-minimal-7.4.160-1.el7.x86_64 150/151 Installing : rootfiles-8.1-11.el7.noarch 151/151 Verifying : readline-6.2-9.el7.x86_64 1/151 Verifying : 7:oraclelinux-release-7.0-1.0.3.el7.x86_64 2/151 Verifying : pygpgme-0.3-9.el7.x86_64 3/151 Verifying : initscripts-9.49.17-1.0.1.el7_0.1.x86_64 4/151 Verifying : 1:pkgconfig-0.27.1-4.el7.x86_64 5/151 [.....] Verifying : libnetfilter_conntrack-1.0.4-2.el7.x86_64 76/151 Verifying : nss-softokn-freebl-3.16.2-1.el7_0.x86_64 77/151 Verifying : qrencode-libs-3.4.1-3.el7.x86_64 78/151 [.....] Verifying : libsemanage-2.1.10-16.el7.x86_64 131/151 Verifying : iptables-1.4.21-13.el7.x86_64 132/151 Verifying : libsepol-2.1.9-3.el7.x86_64 133/151 Verifying : basesystem-10.0-7.0.1.el7.noarch 134/151 [.....] Verifying : krb5-libs-1.11.3-49.el7.x86_64 148/151 Verifying : diffutils-3.3-4.el7.x86_64 149/151 Verifying : pam-1.1.8-9.el7.x86_64 150/151 Verifying : gnupg2-2.0.22-3.el7.x86_64 151/151 Installed: chkconfig.x86_64 0:1.3.61-4.el7 dhclient.x86_64 12:4.2.5-27.0.1.el7_0.1 initscripts.x86_64 0:9.49.17-1.0.1.el7_0.1 openssh-clients.x86_64 0:6.4p1-8.el7 openssh-server.x86_64 0:6.4p1-8.el7 oraclelinux-release.x86_64 7:7.0-1.0.3.el7 passwd.x86_64 0:0.79-4.el7 policycoreutils.x86_64 0:2.2.5-11.0.1.el7 rootfiles.noarch 0:8.1-11.el7 rsyslog.x86_64 0:7.4.7-6.0.1.el7 vim-minimal.x86_64 2:7.4.160-1.el7 yum.noarch 0:3.4.3-118.0.2.el7 Dependency Installed: acl.x86_64 0:2.2.51-12.el7 audit-libs.x86_64 0:2.3.3-4.el7 basesystem.noarch 0:10.0-7.0.1.el7 bash.x86_64 0:4.2.45-5.el7 [.....] libgpg-error.x86_64 0:1.12-3.el7 libidn.x86_64 0:1.28-3.el7 libmnl.x86_64 0:1.0.3-7.el7 libmount.x86_64 0:2.23.2-16.el7 [.....] 
pcre.x86_64 0:8.32-12.el7 pinentry.x86_64 0:0.8.1-14.el7 pkgconfig.x86_64 1:0.27.1-4.el7 [.....] yum-metadata-parser.x86_64 0:1.1.4-10.el7 zlib.x86_64 0:1.2.7-13.el7 Complete! Rebuilding rpm database Patching container rootfs /container/ol7cont1/rootfs for Oracle Linux 7.0 Configuring container for Oracle Linux 7.0 Added container user:oracle password:oracle Added container user:root password:root Container : /container/ol7cont1/rootfs Config : /container/ol7cont1/config Network : eth0 (veth) on virbr0
The installation script downloaded the required RPM packages (about 400 MB) from Oracle's public-yum service to prepare a minimal installation of the latest version of Oracle Linux 7.
The directory structure of the installed container can be found at /container/ol7cont1/rootfs; it can be browsed and modified like any other regular directory structure.
The script also created two user accounts, "root" and "oracle" (with passwords matching the user names), and configured a virtual network device, which obtains an IP address via DHCP from the DHCP server provided by the libvirt framework. The container's configuration file created by lxc-create is located at /container/ol7cont1/config and can be adapted and modified using a regular text editor.
[root@localhost ~]# cat /container/ol7cont1/config
# Template used to create this container: /usr/share/lxc/templates/lxc-oracle
# Parameters passed to the template: -R 7.latest -u http://public-yum.oracle.com
# For additional config options, please look at lxc.container.conf(5)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.rootfs = /container/ol7cont1/rootfs
# Common configuration
lxc.include = /usr/share/lxc/config/oracle.common.conf
# Container configuration for Oracle Linux 7.latest
lxc.arch = x86_64
lxc.utsname = ol7cont1
lxc.cap.drop = sys_resource
lxc.autodev = 1
lxc.kmsg = 0
# Networking
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = fe:fb:3b:71:05:3b
Exercise: Cloning an existing container
Before making any changes, it's recommended to create a snapshot of the container first. The snapshot can act as a backup copy and as a template from which we can quickly spawn additional containers. Since we are using Btrfs as the file system, we pass the "-s" option to lxc-clone so that the clone is created as a Btrfs snapshot.
[root@localhost ~]# lxc-clone -s ol7cont1 -n ol7cont2
[root@localhost ~]# lxc-ls -l
Created container ol7cont2 as snapshot of ol7cont1
[root@localhost ~]# lxc-ls -l
drwxr-xr-x. 1 root root 24 Sep 23 11:48 ol7cont1 drwxr-xr-x. 1 root root 24 Sep 23 12:17 ol7cont2
Now you can verify your containers as subvolumes using the btrfs utilities:
[root@localhost ~]# btrfs subvolume list /container
ID 260 gen 45 top level 5 path ol7cont1/rootfs
ID 263 gen 46 top level 5 path ol7cont2/rootfs
Exercise: Starting and stopping a container
Now that the container's file system has been installed, you can start the container using the lxc-start command:
[root@localhost ~]# lxc-info -n ol7cont1
[root@localhost ~]# lxc-start -n ol7cont1 -d -o /container/ol7cont1/ol7cont1.log
[root@localhost ~]# lxc-info -n ol7cont1
Name: ol7cont1 State: STOPPED
[root@localhost ~]# lxc-start -n ol7cont1 -d -o /container/ol7cont1/ol7cont1.log
[root@localhost ~]# lxc-info -n ol7cont1
Name:           ol7cont1
State:          RUNNING
PID:            3340
IP:             192.168.122.223
CPU use:        1.06 seconds
BlkIO use:      27.51 MiB
Memory use:     53.46 MiB
KMem use:       0 bytes
Link:           veth4RCREA
 TX bytes:      1.25 KiB
 RX bytes:      2.21 KiB
 Total bytes:   3.46 KiB
The container has now been started by lxc-start in the background (courtesy of the -d option). By passing the -o option, any log messages are redirected to the file /container/ol7cont1/ol7cont1.log. As you can tell from the output of lxc-info, the container ol7cont1 has been started and is now in state RUNNING.
A container can be stopped using lxc-stop, or from within the container using the usual commands like shutdown -h now or poweroff.
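For example, stopping the first container from the host looks like this (a sketch; the same lxc-stop invocation is used later in this lab for the second container):
[root@localhost ~]# lxc-stop -n ol7cont1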
[root@localhost ~]# lxc-info -n ol7cont1
Name: ol7cont1 State: STOPPED
Restart the container using lxc-start again, to continue with the exercises.
Exercise: Logging into a container
Now you can log into the container instance's console using the lxc-console command and take a look at its configuration. The container's root password defaults to root; it is strongly recommended to change this to a more secure password using the passwd command before deploying a container on an untrusted network!
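Once you are logged in on the container's console as root (as shown below), changing the password is the usual one-liner (a sketch, not part of the original transcript):
[root@ol7cont1 ~]# passwd root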
[root@localhost ~]# lxc-console -n ol7cont1
[root@localhost ~]# cat /etc/oracle-release
[root@localhost ~]# ps x
[root@localhost ~]# ip addr show eth0
[root@localhost ~]# ip route
[root@localhost ~]# logout
ol7cont1 login: CTRL-A Q
Connected to tty 1 Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself Oracle Linux Server 7.0 Kernel 3.8.13-44.el7uek.x86_64 on an x86_64 ol7cont1 login: root Password: root
[root@localhost ~]# cat /etc/oracle-release
Oracle Linux Server release 7.0
[root@localhost ~]# ps x
PID TTY STAT TIME COMMAND 1 ? Ss 0:00 /sbin/init 11 ? Ss 0:00 /usr/lib/systemd/systemd-journald 25 ? Ssl 0:00 /usr/sbin/rsyslogd -n 26 ? Ss 0:00 /usr/lib/systemd/systemd-logind 31 ? Ss 0:00 login -- root 32 lxc/tty2 Ss+ 0:00 /sbin/agetty --noclear tty2 33 lxc/tty3 Ss+ 0:00 /sbin/agetty --noclear tty3 34 lxc/tty4 Ss+ 0:00 /sbin/agetty --noclear tty4 35 lxc/console Ss+ 0:00 /sbin/agetty --noclear -s console 115200 38400 9600 193 ? Ss 0:00 /sbin/dhclient -H ol7cont1 -1 -q -lf /var/lib/dhclien 240 ? Ss 0:00 /usr/sbin/sshd -D 327 lxc/tty1 Ss 0:00 -bash 344 lxc/tty1 R+ 0:00 ps x
[root@localhost ~]# ip addr show eth0
4: eth0 <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fe:68:a3:2e:97:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.219/24 brd 192.168.122.255 scope global eth0
    inet6 fe80::fc68:a3ff:fe2e:971b/64 scope link
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip route
default via 192.168.122.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1004
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.223
[root@localhost ~]# logout
Oracle Linux Server 7.0 Kernel 3.8.13-44.el7uek.x86_64 on an x86_64
ol7cont1 login: CTRL-A Q
The key combination CTRL-A, Q terminates the LXC console session, leaving the container's console at the stage where you left it. So make sure to first log out of the container before you disconnect!
Alternatively, you can also log in to the container using Secure Shell (SSH) from the host system. All containers have their own IP address and are connected to a virtual bridge device virbr0 by default, which is also reachable from the host system (but not from outside the host).
This way, you can easily set up simple client/server architectures within a host system. To obtain the currently assigned IP addresses, take a look at the default.leases file from dnsmasq running on the host:
[root@localhost ~]# grep ol7cont1 /var/lib/libvirt/dnsmasq/default.leases
1411508916 fe:fb:3b:71:05:3b 192.168.122.223 ol7cont1 *
[root@localhost ~]# ssh oracle@192.168.122.223
[oracle@ol7cont1 ~]$ logout
The authenticity of host '192.168.122.223 (192.168.122.223)' can't be established. ECDSA key fingerprint is b0:84:d5:69:35:74:12:5f:49:c0:a2:90:24:16:00:24. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '192.168.122.223' (ECDSA) to the list of known hosts. oracle@192.168.122.223's password:
[oracle@ol7cont1 ~]$ logout
Connection to 192.168.122.223 closed.
Exercise: Updating and installing software inside a container
The container's system configuration can be modified using the usual operating system tools (e.g. yum or rpm to install additional software). Log into the container ol7cont1 (using lxc-console or ssh, see the previous exercise for details) and install and enable the Apache web server:
[root@ol7cont1 ~]# yum install httpd
[root@ol7cont1 ~]# systemctl start httpd.service
[root@ol7cont1 ~]# systemctl status httpd.service
Loaded plugins: lxc-patch Resolving Dependencies --> Running transaction check ---> Package httpd.x86_64 0:2.4.6-18.0.1.el7_0 will be installed --> Processing Dependency: httpd-tools = 2.4.6-18.0.1.el7_0 for package: httpd-2.4.6-18.0.1.el7_0.x86_64 --> Processing Dependency: system-logos >= 7.92.1-1 for package: httpd-2.4.6-18.0.1.el7_0.x86_64 --> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-18.0.1.el7_0.x86_64 --> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-18.0.1.el7_0.x86_64 --> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-18.0.1.el7_0.x86_64 --> Running transaction check ---> Package apr.x86_64 0:1.4.8-3.el7 will be installed ---> Package apr-util.x86_64 0:1.5.2-6.0.1.el7 will be installed ---> Package httpd-tools.x86_64 0:2.4.6-18.0.1.el7_0 will be installed ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed ---> Package oracle-logos.noarch 0:70.0.3-4.0.7.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================================================ Package Arch Version Repository Size ================================================================================================================ Installing: httpd x86_64 2.4.6-18.0.1.el7_0 ol7_latest 1.2 M Installing for dependencies: apr x86_64 1.4.8-3.el7 ol7_latest 99 k apr-util x86_64 1.5.2-6.0.1.el7 ol7_latest 91 k httpd-tools x86_64 2.4.6-18.0.1.el7_0 ol7_latest 77 k mailcap noarch 2.1.41-2.el7 ol7_latest 30 k oracle-logos noarch 70.0.3-4.0.7.el7 ol7_latest 4.0 M Transaction Summary ================================================================================================================ Install 1 Package (+5 Dependent packages) Total download size: 5.5 M Installed size: 11 M Is this ok []: y Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. (1/6): apr-util-1.5.2-6.0.1.el7.x86_64.rpm | 91 kB 00:00:00 (2/6): apr-1.4.8-3.el7.x86_64.rpm | 99 kB 00:00:00 (3/6): httpd-tools-2.4.6-18.0.1.el7_0.x86_64.rpm | 77 kB 00:00:00 (4/6): httpd-2.4.6-18.0.1.el7_0.x86_64.rpm | 1.2 MB 00:00:00 (5/6): mailcap-2.1.41-2.el7.noarch.rpm | 30 kB 00:00:00 (6/6): oracle-logos-70.0.3-4.0.7.el7.noarch.rpm | 4.0 MB 00:00:03 ---------------------------------------------------------------------------------------------------------------- Total 1.0 MB/s | 5.5 MB 00:00:05 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : apr-1.4.8-3.el7.x86_64 1/6 Installing : apr-util-1.5.2-6.0.1.el7.x86_64 2/6 Installing : httpd-tools-2.4.6-18.0.1.el7_0.x86_64 3/6 Installing : mailcap-2.1.41-2.el7.noarch 4/6 Installing : oracle-logos-70.0.3-4.0.7.el7.noarch 5/6 Installing : httpd-2.4.6-18.0.1.el7_0.x86_64 6/6 lxc-patch: checking if updated pkgs need patching... Verifying : oracle-logos-70.0.3-4.0.7.el7.noarch 1/6 Verifying : apr-1.4.8-3.el7.x86_64 2/6 Verifying : mailcap-2.1.41-2.el7.noarch 3/6 Verifying : httpd-tools-2.4.6-18.0.1.el7_0.x86_64 4/6 Verifying : apr-util-1.5.2-6.0.1.el7.x86_64 5/6 Verifying : httpd-2.4.6-18.0.1.el7_0.x86_64 6/6 Installed: httpd.x86_64 0:2.4.6-18.0.1.el7_0 Dependency Installed: apr.x86_64 0:1.4.8-3.el7 apr-util.x86_64 0:1.5.2-6.0.1.el7 httpd-tools.x86_64 0:2.4.6-18.0.1.el7_0 mailcap.noarch 0:2.1.41-2.el7 oracle-logos.noarch 0:70.0.3-4.0.7.el7 Complete!
[root@ol7cont1 ~]# systemctl start httpd.service
[root@ol7cont1 ~]# systemctl status httpd.service
httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Tue 2014-09-23 21:21:28 UTC; 9s ago Main PID: 528 (httpd) Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec" CGroup: /user.slice/user-1000.slice/session-1.scope/system.slice/httpd.service ├─528 /usr/sbin/httpd -DFOREGROUND ├─529 /usr/sbin/httpd -DFOREGROUND ├─530 /usr/sbin/httpd -DFOREGROUND ├─531 /usr/sbin/httpd -DFOREGROUND ├─532 /usr/sbin/httpd -DFOREGROUND └─533 /usr/sbin/httpd -DFOREGROUND Sep 23 21:21:28 ol7cont1 httpd[528]: AH00558: httpd: Could not reliably determine the server's fully qua...ssage Sep 23 21:21:28 ol7cont1 systemd[1]: Started The Apache HTTP Server. Hint: Some lines were ellipsized, use -l to show in full.
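To have Apache start automatically every time the container boots, you can also enable the service inside the container (a small addition, not shown in the original transcript):
[root@ol7cont1 ~]# systemctl enable httpd.service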
Now let's create some minimal custom content that will be served from this web server. Change to the directory /var/www/html in the container and create a file named index.html using a text editor like vi:
<html>
<head><title>ol7cont1 test page</title></head>
<body>
<h1>ol7cont1 web server is running</h1>
Congratulations, the web server in container ol7cont1 is working properly!
</body>
</html>
You should now be able to reach the web server running inside the container from the host system. Try to open the container's IP address (e.g. 192.168.122.230 in our example) in the host's Firefox browser.
You should see your page, which means the Apache web server running within the ol7cont1 container has successfully delivered the web page you created!
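If you prefer the command line, you can fetch the same page from the host with curl (a sketch, using the example address from above; substitute your container's actual IP address):
[root@localhost ~]# curl http://192.168.122.230/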
Installing and starting an Oracle Linux 6.5 container
Now, let's create a container called ol65cont1 using a local ISO as our repo source. The first step will be to create a mount point and mount the ISO, then we will pass the location of the ISO as a template option ('--baseurl=') when we create the container. As root, type the following commands:
[root@localhost ~]# mkdir -p /mnt
[root@localhost ~]# mount -o loop /ISO/V41362-01.iso /mnt
[root@localhost ~]# lxc-create -n ol65cont1 -t oracle -B btrfs -- --baseurl=file:///mnt -a x86_64 -R 6.5
[root@localhost ~]# mount -o loop /ISO/V41362-01.iso /mnt
[root@localhost ~]# lxc-create -n ol65cont1 -t oracle -B btrfs -- --baseurl=file:///mnt -a x86_64 -R 6.5
Host is OracleServer 7.0 Create configuration file /container/ol65cont1/config Yum installing release 6.5 for x86_64 Loaded plugins: langpacks lxc-install | 3.7 kB 00:00 lxc-install/primary_db | 3.0 MB 00:00 lxc-install/group_gz | 203 kB 00:00 Resolving Dependencies --> Running transaction check ---> Package chkconfig.x86_64 0:1.3.49.3-2.el6_4.1 will be installed --> Processing Dependency: libc.so.6(GLIBC_2.2.5)(64bit) for package: chkconfig-1.3.49.3-2.el6_4.1.x86_64 --> Processing Dependency: libc.so.6(GLIBC_2.8)(64bit) for package: chkconfig-1.3.49.3-2.el6_4.1.x86_64 --> Processing Dependency: libpopt.so.0(LIBPOPT_0)(64bit) for package: chkconfig-1.3.49.3-2.el6_4.1.x86_64 --> Processing Dependency: rtld(GNU_HASH) for package: chkconfig-1.3.49.3-2.el6_4.1.x86_64 [] --> Processing Dependency: python-sqlite for package: yum-3.2.29-40.0.1.el6.noarch --> Processing Dependency: pygpgme for package: yum-3.2.29-40.0.1.el6.noarch --> Processing Dependency: rpm-python for package: yum-3.2.29-40.0.1.el6.noarch --> Processing Dependency: /usr/bin/python for package: yum-3.2.29-40.0.1.el6.noarch --> Running transaction check ---> Package audit-libs.x86_64 0:2.2-2.el6 will be installed ---> Package bash.x86_64 0:4.1.2-15.el6_4 will be installed [] ---> Package libtasn1.x86_64 0:2.3-3.el6_2.1 will be installed ---> Package pinentry.x86_64 0:0.7.6-6.el6 will be installed --> Running transaction check ---> Package groff.x86_64 0:1.18.1.4-21.el6 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: chkconfig x86_64 1.3.49.3-2.el6_4.1 lxc-install 158 k dhclient x86_64 12:4.1.1-38.P1.0.1.el6 lxc-install 317 k initscripts x86_64 9.03.40-2.0.1.el6 lxc-install 940 k openssh-clients x86_64 5.3p1-94.el6 lxc-install 401 k openssh-server x86_64 5.3p1-94.el6 lxc-install 311 k oraclelinux-release x86_64 6:6Server-5.0.2 lxc-install 22 k passwd x86_64 0.77-4.el6_2.2 lxc-install 89 k policycoreutils x86_64 2.0.83-19.39.0.1.el6 lxc-install 648 k rootfiles noarch 8.1-6.1.el6 lxc-install 6.3 k rsyslog x86_64 5.8.10-8.0.1.el6 lxc-install 649 k vim-minimal x86_64 2:7.2.411-1.8.el6 lxc-install 363 k yum noarch 3.2.29-40.0.1.el6 lxc-install 995 k Installing for dependencies: MAKEDEV x86_64 3.24-6.el6 lxc-install 88 k audit-libs x86_64 2.2-2.el6 lxc-install 60 k basesystem noarch 10.0-4.0.1.el6 lxc-install 4.3 k bash x86_64 4.1.2-15.el6_4 lxc-install 904 k binutils x86_64 2.20.51.0.2-5.36.el6 lxc-install 2.8 M [] python-iniparse noarch 0.3.1-2.1.el6 lxc-install 36 k python-libs x86_64 2.6.6-51.el6 lxc-install 5.3 M python-pycurl x86_64 7.19.0-8.el6 lxc-install 76 k python-urlgrabber noarch 3.9.1-9.el6 lxc-install 85 k readline x86_64 6.0-4.el6 lxc-install 178 k [] upstart x86_64 0.6.5-12.el6_4.1 lxc-install 176 k ustr x86_64 1.0.4-9.1.el6 lxc-install 85 k util-linux-ng x86_64 2.17.2-12.14.el6 lxc-install 1.5 M xz-libs x86_64 4.999.9-0.3.beta.20091007git.el6 lxc-install 89 k yum-metadata-parser x86_64 1.1.2-16.el6 lxc-install 26 k zlib x86_64 1.2.3-29.el6 lxc-install 72 k Transaction Summary ================================================================================ Install 12 Packages (+131 Dependent packages) Total download size: 82 M Installed size: 303 M Downloading packages: -------------------------------------------------------------------------------- Total 32 MB/s | 82 MB 00:02 Running 
transaction check Running transaction test Transaction test succeeded Running transaction Installing : libgcc-4.4.7-4.el6.x86_64 1/143 Installing : setup-2.8.14-20.el6_4.1.noarch 2/143 Installing : filesystem-2.4.30-3.el6.x86_64 3/143 [] Installing : libuser-0.56.13-5.el6.x86_64 115/143 Installing : libcap-ng-0.6.4-3.el6_0.1.x86_64 116/143 Installing : gdbm-1.8.0-36.el6.x86_64 117/143 Installing : python-2.6.6-51.el6.x86_64 118/143 Installing : python-libs-2.6.6-51.el6.x86_64 119/143 Installing : rpm-python-4.8.0-37.el6.x86_64 120/143 Installing : yum-metadata-parser-1.1.2-16.el6.x86_64 121/143 Installing : python-pycurl-7.19.0-8.el6.x86_64 122/143 [] Installing : yum-3.2.29-40.0.1.el6.noarch 140/143 Installing : passwd-0.77-4.el6_2.2.x86_64 141/143 Installing : 2:vim-minimal-7.2.411-1.8.el6.x86_64 142/143 Installing : rootfiles-8.1-6.1.el6.noarch 143/143 Verifying : pam-1.1.1-17.el6.x86_64 1/143 Verifying : openssl-1.0.1e-15.el6.x86_64 2/143 Verifying : rpm-python-4.8.0-37.el6.x86_64 3/143 Verifying : gamin-0.1.10-9.el6.x86_64 4/143 Verifying : procps-3.2.8-25.el6.x86_64 5/143 Verifying : 2:ethtool-3.5-1.el6.x86_64 6/143 [] Verifying : pinentry-0.7.6-6.el6.x86_64 76/143 Verifying : nss-3.15.1-15.0.1.el6.x86_64 77/143 Verifying : expat-2.0.1-11.el6_2.x86_64 78/143 Verifying : pcre-7.8-6.el6.x86_64 79/143 Verifying : openldap-2.4.23-32.el6_4.1.x86_64 80/143 Verifying : grep-2.6.3-4.el6.x86_64 81/143 Verifying : rpm-4.8.0-37.el6.x86_64 82/143 Verifying : MAKEDEV-3.24-6.el6.x86_64 83/143 Verifying : glibc-common-2.12-1.132.el6.x86_64 84/143 [] Verifying : libcurl-7.19.7-37.el6_4.x86_64 140/143 Verifying : 1:findutils-4.4.2-6.el6.x86_64 141/143 Verifying : hwdata-0.233-9.1.el6.noarch 142/143 Verifying : sysvinit-tools-2.87-5.dsf.el6.x86_64 143/143 Installed: chkconfig.x86_64 0:1.3.49.3-2.el6_4.1 dhclient.x86_64 12:4.1.1-38.P1.0.1.el6 initscripts.x86_64 0:9.03.40-2.0.1.el6 openssh-clients.x86_64 0:5.3p1-94.el6 openssh-server.x86_64 0:5.3p1-94.el6 oraclelinux-release.x86_64 6:6Server-5.0.2 passwd.x86_64 0:0.77-4.el6_2.2 policycoreutils.x86_64 0:2.0.83-19.39.0.1.el6 rootfiles.noarch 0:8.1-6.1.el6 rsyslog.x86_64 0:5.8.10-8.0.1.el6 vim-minimal.x86_64 2:7.2.411-1.8.el6 yum.noarch 0:3.2.29-40.0.1.el6 Dependency Installed: MAKEDEV.x86_64 0:3.24-6.el6 audit-libs.x86_64 0:2.2-2.el6 basesystem.noarch 0:10.0-4.0.1.el6 bash.x86_64 0:4.1.2-15.el6_4 [] openssh.x86_64 0:5.3p1-94.el6 openssl.x86_64 0:1.0.1e-15.el6 oracle-logos.noarch 0:60.0.14-1.0.1.el6 p11-kit.x86_64 0:0.18.5-2.el6 p11-kit-trust.x86_64 0:0.18.5-2.el6 pam.x86_64 0:1.1.1-17.el6 [] yum-metadata-parser.x86_64 0:1.1.2-16.el6 zlib.x86_64 0:1.2.3-29.el6 Complete! Rebuilding rpm database Patching container rootfs /container/ol65cont1/rootfs for Oracle Linux 6.5 Configuring container for Oracle Linux 6.5 Added container user:oracle password:oracle Added container user:root password:root Container : /container/ol65cont1/rootfs Config : /container/ol65cont1/config Network : eth0 (veth) on virbr0
The second container should now be up and running:
[root@localhost ~]# lxc-ls --active -1
ol65cont1
ol7cont1
[root@localhost ~]# lxc-console -n ol65cont1
Oracle Linux Server release 6.5
Kernel 3.8.13-44.el7uek.x86_64 on an x86_64

ol65cont1 login: oracle
Password: oracle
[oracle@ol65cont1 ~]$ cat /etc/oracle-release
Oracle Linux Server release 6.5
[oracle@ol65cont1 ~]$ ping -c 1 ol7cont1
PING ol7cont1 (192.168.122.223) 56(84) bytes of data.
64 bytes from ol7cont1 (192.168.122.223): icmp_seq=1 ttl=64 time=0.045 ms

--- ol7cont1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
[oracle@ol65cont1 ~]$ logout
We now have two containers up and running, using two different major versions of Oracle Linux.
However, the container we created from the media ISO may need additional software installed later, and the /mnt directory containing the ISO image is on the host, not inside the container. Let's learn how to use a bind mount to give the container access to content that resides on the host, in this case our media ISO.
From your host system, as the root user, edit the container's config file. In this case, open /container/ol65cont1/config in the text editor of your choice, such as vi, and add the following fstab-style entry to the end of the file (it bind-mounts the host's /mnt directory read-only at /mnt inside the container):
lxc.mount.entry=/mnt mnt none ro,bind 0 0
[root@localhost ~]# lxc-stop -n ol65cont1
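After editing the config file, restart the container so that the new mount entry takes effect. The following is a minimal check from the host; the lxc-attach command used here is covered in more detail in the monitoring exercise below, and the ls simply confirms that the host's /mnt content is visible inside the container:
[root@localhost ~]# lxc-start -n ol65cont1 -d
[root@localhost ~]# lxc-attach -n ol65cont1 -- ls /mnt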
Because of the bind mount we added in the previous step, you will be able to reach the media ISO from inside the container and install additional software. Let's try this by logging into ol65cont1 as the root user and installing w3m:
[root@ol65cont1 ~]# yum install w3m
Loaded plugins: lxc-patch
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package w3m.x86_64 0:0.5.2-16.el6 will be installed
--> Processing Dependency: /usr/bin/perl for package: w3m-0.5.2-16.el6.x86_64
--> Processing Dependency: libgc.so.1()(64bit) for package: w3m-0.5.2-16.el6.x86_64
--> Processing Dependency: libgpm.so.2()(64bit) for package: w3m-0.5.2-16.el6.x86_64
--> Running transaction check
---> Package gc.x86_64 0:7.1-10.el6 will be installed
[...]
---> Package perl-version.x86_64 3:0.77-136.el6 will be installed
--> Running transaction check
---> Package perl-Pod-Escapes.x86_64 1:1.04-136.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                Arch    Version           Repository      Size
================================================================================
Installing:
 w3m                    x86_64  0.5.2-16.el6      lxc-install    890 k
Installing for dependencies:
 gc                     x86_64  7.1-10.el6        lxc-install    146 k
 gpm-libs               x86_64  1.20.6-12.el6     lxc-install     28 k
 perl                   x86_64  4:5.10.1-136.el6  lxc-install     10 M
 perl-Module-Pluggable  x86_64  1:3.90-136.el6    lxc-install     39 k
 perl-Pod-Escapes       x86_64  1:1.04-136.el6    lxc-install     32 k
 perl-Pod-Simple        x86_64  1:3.13-136.el6    lxc-install    211 k
 perl-libs              x86_64  4:5.10.1-136.el6  lxc-install    577 k
 perl-version           x86_64  3:0.77-136.el6    lxc-install     50 k

Transaction Summary
================================================================================
Install       9 Package(s)

Total download size: 12 M
Installed size: 38 M
Is this ok [y/N]: y
Downloading Packages:
--------------------------------------------------------------------------------
Total                                                30 MB/s |  12 MB  00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 1:perl-Pod-Escapes-1.04-136.el6.x86_64                       1/9
  Installing : 4:perl-libs-5.10.1-136.el6.x86_64                            2/9
[...]
  Installing : w3m-0.5.2-16.el6.x86_64                                      9/9
lxc-patch: checking if updated pkgs need patching...
  Verifying  : 1:perl-Module-Pluggable-3.90-136.el6.x86_64                  1/9
  Verifying  : gpm-libs-1.20.6-12.el6.x86_64                                2/9
[...]
  Verifying  : w3m-0.5.2-16.el6.x86_64                                      8/9
  Verifying  : 3:perl-version-0.77-136.el6.x86_64                           9/9

Installed:
  w3m.x86_64 0:0.5.2-16.el6

Dependency Installed:
  gc.x86_64 0:7.1-10.el6                       gpm-libs.x86_64 0:1.20.6-12.el6
  perl.x86_64 4:5.10.1-136.el6                 perl-Module-Pluggable.x86_64 1:3.90-136.el6
  perl-Pod-Escapes.x86_64 1:1.04-136.el6       perl-Pod-Simple.x86_64 1:3.13-136.el6
  perl-libs.x86_64 4:5.10.1-136.el6            perl-version.x86_64 3:0.77-136.el6

Complete!
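As a quick sanity check, you can query the container's RPM database from the host instead of opening a console; this is a minimal example and assumes the installation above completed successfully:
[root@localhost ~]# lxc-attach -n ol65cont1 -- rpm -q w3m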
Exercise: Monitoring containers
In Oracle Linux 7, you have two options for monitoring your containers: you can either log in to a container and run commands directly inside it, or you can use the lxc-attach command from the host system. Let's take a look at a couple of examples using lxc-attach.
[root@localhost ~]# lxc-attach -n ol7cont1 -- /bin/ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY         STAT START   TIME COMMAND
root         1  0.0  0.0  47368  3352 ?           Ss   15:36   0:00 /sbin/init
root        11  0.0  0.0  42968  2196 ?           Ss   15:36   0:00 /usr/lib/system
root        25  0.0  0.0 210064  2588 ?           Ssl  15:36   0:00 /usr/sbin/rsysl
root        26  0.0  0.0  34668  1628 ?           Ss   15:36   0:00 /usr/lib/system
dbus        28  0.0  0.0  26408  1432 ?           Ss   15:36   0:00 /bin/dbus-daemo
root        32  0.0  0.0  76808  2236 ?           Ss   15:36   0:00 login -- root
root        33  0.0  0.0   6416   796 lxc/tty2    Ss+  15:36   0:00 /sbin/agetty --
root        34  0.0  0.0   6416   792 lxc/tty3    Ss+  15:36   0:00 /sbin/agetty --
root        35  0.0  0.0   6416   792 lxc/tty4    Ss+  15:36   0:00 /sbin/agetty --
root        36  0.0  0.0   6416   788 lxc/console Ss+  15:36   0:00 /sbin/agetty
root       195  0.0  0.3 104184 13088 ?           Ss   15:36   0:00 /sbin/dhclient
root       242  0.0  0.0  82772  3584 ?           Ss   15:36   0:00 /usr/sbin/sshd
root       284  0.0  0.0  11720  1884 lxc/tty1    Ss+  16:24   0:00 -bash
root       306  0.0  0.1 209452  4748 ?           Ss   16:25   0:00 /usr/sbin/httpd
apache     307  0.0  0.0 209452  3204 ?           S    16:25   0:00 /usr/sbin/httpd
apache     308  0.0  0.0 209452  2428 ?           S    16:25   0:00 /usr/sbin/httpd
apache     309  0.0  0.0 209452  2428 ?           S    16:25   0:00 /usr/sbin/httpd
apache     310  0.0  0.0 209452  2428 ?           S    16:25   0:00 /usr/sbin/httpd
apache     311  0.0  0.0 209452  2428 ?           S    16:25   0:00 /usr/sbin/httpd
root       331  0.0  0.0 123348  1392 ?           R+   16:36   0:00 /bin/ps aux
[root@localhost ~]# lxc-attach -n ol7cont1 -- /bin/netstat -at
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp6       0      0 [::]:http               [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
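lxc-attach is not limited to ps and netstat; any command available inside the container's root filesystem can be run this way. A couple of further examples (these utilities are assumed to be present in the container image):
[root@localhost ~]# lxc-attach -n ol7cont1 -- df -h /            # disk usage of the container's root filesystem
[root@localhost ~]# lxc-attach -n ol7cont1 -- ip addr show eth0  # the container's network configuration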
If you'd like to determine the amount of memory currently used by a given container, you can obtain this information from the control groups subsystem:
[root@localhost ~]# lxc-cgroup -n ol7cont1 memory.usage_in_bytes
68608000
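lxc-cgroup can also write control group values, which lets you adjust the resources of a running container on the fly. As a sketch, the following caps the container's memory at 256 MB and then reads the value back; the limit is an arbitrary example value:
[root@localhost ~]# lxc-cgroup -n ol7cont1 memory.limit_in_bytes 268435456   # set a 256 MB memory cap
[root@localhost ~]# lxc-cgroup -n ol7cont1 memory.limit_in_bytes             # read the current limit back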
To monitor the state of a container, use the lxc-monitor command. Open a second command-line terminal and start lxc-monitor with the following command. Then start and stop your containers using the lxc-start and lxc-shutdown commands from another shell and observe how lxc-monitor reports the state changes:
[root@localhost ~]# lxc-monitor -n '.*'
'ol65cont1' changed state to [STOPPING]
'ol65cont1' changed state to [STOPPED]
'ol7cont1' changed state to [STOPPING]
'ol7cont1' changed state to [STOPPED]
'ol7cont1' changed state to [STARTING]
'ol7cont1' changed state to [RUNNING]
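Closely related to lxc-monitor is the lxc-wait command, which blocks until a container reaches a given state and is therefore handy in scripts. A minimal example that waits for ol7cont1 to finish starting:
[root@localhost ~]# lxc-wait -n ol7cont1 -s RUNNING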
Exercise: Changing a container's network configuration
By default, the lxc-oracle template script sets up networking by creating a virtual Ethernet (veth) bridge. In this mode, a container obtains its IP address from the dnsmasq server that libvirtd runs on the private virtual bridge network (virbr0) between the container and the host. The host allows a container to connect to the rest of the network by using NAT rules in iptables, but these rules do not allow incoming connections to the container. Both the host and other containers on the veth bridge have network access to the container via the bridge.
If you want network connections from outside the host to be able to reach the container, the container needs an IP address on the same network as the host. One way to achieve this configuration is to use a macvlan bridge to create an independent logical network for the container. This network is effectively an extension of the local network that is connected to the host's network interface. External systems can access the container as though it were an independent system on the network, and the container has network access to other containers that are configured on the bridge and to external systems. The container can also obtain its IP address from an external DHCP server on your local network. However, unlike a veth bridge, the host system does not have network access to the container.
To modify a container so that it uses the macvlan bridge, shut down the ol7cont1 container, edit /container/ol7cont1/config and look for the following lines:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0

Change these lines so that they read:

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = eth1
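After saving the config file, start the container again and check which address it obtained. In macvlan bridge mode the address comes from a DHCP server on the host's local network rather than from libvirt's dnsmasq; note that lxc-attach still works here even though the host has no network path to the container. This assumes eth1 is the host interface attached to your local network:
[root@localhost ~]# lxc-start -n ol7cont1 -d
[root@localhost ~]# lxc-attach -n ol7cont1 -- ip addr show eth0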
To configure a static IP address that a container does not obtain using DHCP:
- Edit /container/name/rootfs/etc/sysconfig/network-scripts/ifcfg-iface, where name is the name of the container and iface is the name of its network interface, and change the following line:
BOOTPROTO=dhcp
to read:
BOOTPROTO=none
- Add the following line to /container/name/config:
lxc.network.ipv4 = xxx.xxx.xxx.xxx/prefix_length
where xxx.xxx.xxx.xxx/prefix_length is the IP address of the container in CIDR format, for example 192.168.56.100/24 (a combined example of these settings is shown after this list).
Note: The address must not already be in use on the network or potentially be assignable by a DHCP server to another system. You might also need to configure the firewall on the host to allow access to a network service that is provided by a container.
- LXC 0.8.0 and later also allow the default gateway to be configured in the container's /container/name/config file:
lxc.network.ipv4.gateway = 10.1.0.1
- Edit /container/name/rootfs/etc/resolv.conf and add the nameservers:
nameserver 8.8.8.8
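Putting these steps together, the relevant files could look as follows for a container named ol7cont1 with the static address 192.168.56.100/24. The gateway 192.168.56.1 and the interface names are assumptions chosen for illustration; substitute the values that match your own network:
# /container/ol7cont1/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none

# /container/ol7cont1/config (network section)
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = eth1
lxc.network.ipv4 = 192.168.56.100/24
lxc.network.ipv4.gateway = 192.168.56.1

# /container/ol7cont1/rootfs/etc/resolv.conf
nameserver 8.8.8.8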
Exercise: Destroying containers
Containers that are no longer needed can be discarded using the lxc-destroy command. Use the -f option to stop a container that is still running; without it, lxc-destroy refuses to destroy a running container:
[root@localhost ~]# lxc-ls
ol65cont1 ol7cont1 ol7cont2
[root@localhost ~]# lxc-ls --active
ol65cont1 ol7cont2
[root@localhost ~]# lxc-destroy -n ol7cont1
[root@localhost ~]# lxc-destroy -n ol65cont1
ol65cont1 is running
[root@localhost ~]# lxc-destroy -f -n ol65cont1
[root@localhost ~]# lxc-ls --active
ol7cont2
[root@localhost ~]# lxc-ls
ol7cont2
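lxc-destroy also removes the container's configuration and root filesystem under /container, so after the commands above only the remaining container's directory should be left. A quick way to confirm this:
[root@localhost ~]# ls /container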
Conclusion
In this hands-on lab, we covered the basics of working with Linux Containers (LXC). Hopefully this information was useful and made you curious to explore this still-evolving technology further. The Oracle Linux Administrator's Guide for Oracle Linux 7 contains a dedicated chapter on Linux containers that covers creating, configuring, starting, stopping and monitoring containers in more detail. Also take a look at the following resources for more details and practical hints.
References
- Chapter: Linux Containers in the Oracle Linux 7 Administrator's Guide
- Oracle Linux Technology Spotlight: LXC - Linux Containers
- Wikipedia: Linux Containers
- OTN Garage blog: Linux-Containers - Part 1: Overview
- OTN Garage blog: Linux Container (LXC) - Part 2: Working With Containers
- OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy
- Video on the Oracle Linux YouTube channel: Linux Containers Explained
- Linux Advocates: Linux Containers and Why They Matter
- libvirt - The virtualization API