Installing and Configuring the Software
- Use yum to install the btrfs-progs package.
[root@host ~]# yum install btrfs-progs
- Install the lxc packages.
[root@host ~]# yum install lxc
This command installs all of the required packages, such as libvirt, libcgroup, and lxc-libs. The LXC template scripts are installed in /usr/share/lxc/templates.
- Start the Control Groups (cgroups) service, cgconfig, and configure the service to start at boot time.
[root@host ~]# service cgconfig start
[root@host ~]# chkconfig cgconfig on
LXC uses the cgroups service to control the system resources that are available to containers.
- Start the virtualization management service, libvirtd, and configure the service to start at boot time.
[root@host ~]# service libvirtd start
[root@host ~]# chkconfig libvirtd on
LXC uses the virtualization management service to support network bridging for containers.
- If you are going to compile applications that require the LXC header files and libraries, install the lxc-devel package.
[root@host ~]# yum install lxc-devel
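As a quick sanity check (not part of the original procedure), you can confirm that the cgroups service is running and that libvirtd has created the default virtual network with its virbr0 bridge, which containers attach to by default:
[root@host ~]# service cgconfig status
[root@host ~]# virsh net-list
[root@host ~]# ip addr show virbr0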
Setting up the File System for the Containers
Note
The LXC template scripts assume that containers are created in /container. You must edit the scripts if your system's configuration differs from this assumption.
To set up the /container file system:
- Create a btrfs file system on a suitably sized device such as /dev/sdb, and create the /container mount point.
[root@host ~]# mkfs.btrfs /dev/sdb
[root@host ~]# mkdir /container
- Mount the /container file system.
[root@host ~]# mount /dev/sdb /container
- Add an entry for /container to the /etc/fstab file.
/dev/sdb /container btrfs defaults 0 0
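To verify both the mount and the new fstab entry in one step, you can unmount the file system and remount everything listed in /etc/fstab (a quick check that is not part of the original procedure):
[root@host ~]# umount /container
[root@host ~]# mount -a
[root@host ~]# df -h /container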
Creating and Starting a Container
Note
The procedure in this section uses the LXC template script for Oracle Linux (lxc-oracle), which is located in /usr/share/lxc/templates. An Oracle Linux container requires a minimum of 400 MB of disk space.
- Create an Oracle Linux 6 container named ol6ctr1 using the lxc-oracle template script.
[root@host ~]# lxc-create -n ol6ctr1 -t oracle -- --release=6.latest
lxc-create: No config file specified, using the default config /etc/lxc/default.conf
Host is OracleServer 6.4
Create configuration file /container/ol6ctr1/config
Downloading release 6.latest for x86_64
...
yum-metadata-parser.x86_64 0:1.1.2-16.el6
zlib.x86_64 0:1.2.3-29.el6
Complete!
The lxc-create command runs the template script lxc-oracle to create the container in /container/ol6ctr1 with the btrfs subvolume /container/ol6ctr1/rootfs as its root file system. The command then uses yum to install the latest available update of Oracle Linux 6 from the Public Yum repository. It also writes the container's configuration settings to the file /container/ol6ctr1/config and its fstab file to /container/ol6ctr1/fstab. The default log file for the container is /container/ol6ctr1/ol6ctr1.log.
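To confirm that the container was created, you can list the defined containers and query the new container's state; the output shown here is indicative:
[root@host ~]# lxc-ls
ol6ctr1
[root@host ~]# lxc-info -n ol6ctr1
state:   STOPPED
pid:        -1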
You can specify the following template options after the -- option to lxc-create:
- --arch=i386|x86_64: Specifies the architecture. The default value is the architecture of the host.
- --release=major.minor: Specifies the major release number and minor update number of the Oracle release to install. The value of major can be set to 4, 5, or 6. If you specify latest for minor, the latest available release packages for the major release are installed. If the host is running Oracle Linux, the default release is the same as the release installed on the host. Otherwise, the default release is the latest update of Oracle Linux 6.
- --templatefs=rootfs: Specifies the path to the root file system of an existing system, container, or Oracle VM template that you want to copy. Do not specify this option with any other template option. See Section 9.4, “Creating Additional Containers”.
- --url=repo_URL: Specifies a yum repository other than the Public Yum repository. For example, you might want to perform the installation from a local yum server. The repository file is configured in /etc/yum.repos.d in the container's root file system. The default URL is http://public-yum.oracle.com.
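For example, the following hypothetical command combines several of these options to create a 32-bit Oracle Linux 6 Update 3 container that installs from a local yum server (the container name and URL are illustrative only):
[root@host ~]# lxc-create -n ol6ctr2 -t oracle -- --arch=i386 --release=6.3 --url=http://yum.example.com/repo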
- If you want to create additional copies of the container in its initial state, create a snapshot of the container's root file system, for example:
# btrfs subvolume snapshot /container/ol6ctr1/rootfs /container/ol6ctr1/rootfs_snap
See Chapter 5, The Btrfs File System and Section 9.4, “Creating Additional Containers”.
- Start the container ol6ctr1 as a daemon that writes its diagnostic output to a log file other than the default log file.
[root@host ~]# lxc-start -n ol6ctr1 -d -o /container/ol6ctr1_debug.log -l DEBUG
Note
If you omit the -d option, the container's console opens in the current shell.
The following logging levels are available: FATAL, CRIT, WARN, ERROR, NOTICE, INFO, and DEBUG. You can set a logging level for all lxc-* commands.
When the process tree below the lxc-start process shows that the /usr/sbin/sshd and /sbin/mingetty processes have started in the container, you can log in to the container from the host. See Section 9.3, “Logging in to Containers”.
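One way to make this check (assuming the ol6ctr1 container started above; the output shown is indicative) is to query the container's state and then inspect the process tree on the host:
[root@host ~]# lxc-info -n ol6ctr1
state:   RUNNING
pid:      3166
[root@host ~]# ps -ef --forest
Look for the /usr/sbin/sshd and /sbin/mingetty entries beneath the lxc-start process in the ps output.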
About the lxc-oracle Template Script
Note
If you amend a template script, you alter the configuration files of all containers that you subsequently create from that script. If you amend the config file for a container, you alter the configuration of that container and all containers that you subsequently clone from it.
The lxc-oracle template script defines system settings and resources that are assigned to a running container, including:
- the default passwords for the oracle and root users, which are set to oracle and root respectively
- the host name (lxc.utsname), which is set to the name of the container
- the number of available terminals (lxc.tty), which is set to 4
- the location of the container's root file system on the host (lxc.rootfs)
- the location of the fstab mount configuration file (lxc.mount)
- all system capabilities that are not available to the container (lxc.cap.drop)
- the local network interface configuration (lxc.network)
- all whitelisted cgroup devices (lxc.cgroup.devices.allow)
By default, the template script sets the container's network type (lxc.network.type) and bridge (lxc.network.link) to veth and virbr0. If you want to use a macvlan bridge or Virtual Ethernet Port Aggregator that allows external systems to access your container via the network, you must modify the container's configuration file. See Section 9.2.5, “About Veth and Macvlan” and Section 9.2.6, “Modifying a Container to Use Macvlan”.
To enhance security, you can uncomment lxc.cap.drop capabilities to prevent root in the container from performing certain actions. For example, dropping the sys_admin capability prevents root from remounting the container's fstab entries as writable. However, dropping sys_admin also prevents the container from mounting any file system and disables the hostname command. By default, the template script drops the following capabilities: mac_admin, mac_override, setfcap, setpcap, sys_module, sys_nice, sys_pacct, sys_rawio, and sys_time.
For more information, see Chapter 8, Control Groups and the capabilities(7) and lxc.conf(5) manual pages.
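As an illustration of these settings, the capability entries in a container's config file might look like the following hypothetical excerpt; uncommenting the last line would additionally drop sys_admin:
lxc.cap.drop = mac_admin mac_override setfcap setpcap
lxc.cap.drop = sys_module sys_nice sys_pacct sys_rawio sys_time
# lxc.cap.drop = sys_admin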
When you create a container, the template script writes the container's configuration settings and mount configuration to /container/name/config and /container/name/fstab, and sets up the container's root file system under /container/name/rootfs.
Unless you specify an existing root file system to clone, the template script installs the following packages under rootfs (by default, from Public Yum at http://public-yum.oracle.com):
Package | Description |
---|---|
chkconfig | chkconfig utility for maintaining the /etc/rc*.d hierarchy. |
dhclient | DHCP client daemon (dhclient) and dhclient-script. |
initscripts | /etc/inittab file and /etc/init.d scripts. |
openssh-server | Open source SSH server daemon, /usr/sbin/sshd. |
oraclelinux-release | Oracle Linux 6 release and information files. |
passwd | passwd utility for setting or changing passwords using PAM. |
policycoreutils | SELinux policy core utilities. |
rootfiles | Basic files required by the root user. |
rsyslog | Enhanced system logging and kernel message trapping daemons. |
vim-minimal | Minimal version of the VIM editor. |
yum | yum utility for installing, updating and managing RPM packages. |
The template script also modifies files under rootfs to set up networking in the container and to disable unnecessary services, including volume management (LVM), device management (udev), the hardware clock, readahead, and the Plymouth boot system.
About Veth and Macvlan
By default, the lxc-oracle template script sets up networking by setting up a veth bridge. In this mode, a container obtains its IP address from the dnsmasq server that libvirtd runs on the private virtual bridge network (virbr0) between the container and the host. The host allows a container to connect to the rest of the network by using NAT rules in iptables, but these rules do not allow incoming connections to the container. Both the host and other containers on the veth bridge have network access to the container via the bridge.
Figure 9.1 illustrates a host system with two containers that are connected via the veth bridge virbr0.
If you want to allow network connections from outside the host to be able to connect to the container, the container needs to have an IP address on the same network as the host. One way to achieve this configuration is to use a macvlan bridge to create an independent logical network for the container. This network is effectively an extension of the local network that is connected to the host's network interface. External systems can access the container as though it were an independent system on the network, and the container has network access to other containers that are configured on the bridge and to external systems. The container can also obtain its IP address from an external DHCP server on your local network. However, unlike a veth bridge, the host system does not have network access to the container.
Figure 9.2 illustrates a host system with two containers that are connected via a macvlan bridge.
If you do not want containers to be able to see each other on the network, you can configure the Virtual Ethernet Port Aggregator (VEPA) mode of macvlan. Figure 9.3 illustrates a host system with two containers that are separately connected to a network by a macvlan VEPA. In effect, each container is connected directly to the network, but neither container can access the other container or the host via the network.
For information about configuring macvlan, see Section 9.2.6, “Modifying a Container to Use Macvlan” and the lxc.conf(5) manual page.
Modifying a Container to Use Macvlan
To modify a container so that it uses the bridge or VEPA mode of macvlan, edit /container/name/config and replace the following lines:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
with these lines for bridge mode:
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = eth0
or these lines for VEPA mode:
lxc.network.type = macvlan
lxc.network.macvlan.mode = vepa
lxc.network.flags = up
lxc.network.link = eth0
In these sample configurations, the setting for lxc.network.link assumes that you want the container's network interface to be visible on the network that is accessible via the host's eth0 interface.
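Changes to a container's config file take effect the next time the container starts, so after editing the file you would typically restart the container (using the ol6ctr1 container from the earlier examples):
[root@host ~]# lxc-stop -n ol6ctr1
[root@host ~]# lxc-start -n ol6ctr1 -d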
By default, a container connected by macvlan relies on the DHCP server on your local network to obtain its IP address. If you want the container to act as a server, you would usually configure it with a static IP address. You can configure DHCP to serve a static IP address for a container or you can define the address in the container's config file.
To configure a static IP address that a container does not obtain using DHCP:
- Edit /container/name/rootfs/etc/sysconfig/network-scripts/ifcfg-iface, where name is the name of the container and iface is the name of the network interface, and change the following line:
BOOTPROTO=dhcp
to read:
BOOTPROTO=none
- Add the following line to /container/name/config:
lxc.network.ipv4 = xxx.xxx.xxx.xxx/prefix_length
where xxx.xxx.xxx.xxx/prefix_length is the IP address of the container in CIDR format, for example: 192.168.56.100/24.
Note
The address must not already be in use on the network or potentially be assignable by a DHCP server to another system.
You might also need to configure the firewall on the host to allow access to a network service that is provided by a container.
- Later versions of LXC also allow you to configure the container's default gateway in the /container/name/config file:
lxc.network.ipv4.gateway = 10.1.0.1
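Putting the static address and gateway settings together, the relevant lines in a container's config file might read as follows (the addresses are examples; substitute values for your network):
lxc.network.ipv4 = 192.168.56.100/24
lxc.network.ipv4.gateway = 192.168.56.1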
Logging in to Containers
You can use the lxc-console command to log in to a running container.
[root@host ~]# lxc-console -n name [-t tty_number]
If you do not specify a tty number, you log in to the first available terminal.
For example, log in to a terminal on ol6ctr1:
[root@host ~]# lxc-console -n ol6ctr1
To exit an lxc-console session, type Ctrl-A followed by Q.
Alternatively, you can use ssh to log in to a container if you install the lxc-0.9.0-2.0.5 package (or a later version of this package).
Note
To be able to log in using lxc-console, the container must be running a /sbin/mingetty process for the terminal. Similarly, using ssh requires that the container is running the SSH daemon (/usr/sbin/sshd).
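For example, assuming the container was assigned the static IP address configured earlier and is running sshd, you could log in from the host or from another system on the network:
[root@host ~]# ssh root@192.168.56.100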