We will start from the basis that we have 3 nodes running Oracle Linux 7.0 Enterprise (the best operating system in the world after Solaris 11.2).
They are:
- ol7.server1 192.168.178.201
- ol7.server2 192.168.178.202
- ol7.cliente1 192.168.178.203
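Name resolution between the nodes is assumed throughout; a minimal /etc/hosts sketch that could be added on all three machines (entries taken from the list above):
192.168.178.201  ol7.server1
192.168.178.202  ol7.server2
192.168.178.203  ol7.cliente1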
We configure the iSCSI target on ol7.server1:
[root@ol7.server1]# yum -y install targetcli
- [Optional] For example, create a disk image under the /iscsi_disks directory and set it as a SCSI device. Create the directory:
[root@ol7.server1]# mkdir /iscsi_disks
- Run the targetcli shell:
# targetcli
targetcli shell version 2.1.fb31
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
List the object hierarchy, which is initially empty:
/> ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 0]
  | o- fileio ............................................. [Storage Objects: 0]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 0]
  o- loopback ..................................................... [Targets: 0]
- Change to the /backstores/block directory and create a block storage object for the disk partitions that you want to provide as LUNs, for example:
/> cd /backstores/block
/backstores/block> create name=LUN_0 dev=/dev/sdb
Created block storage object LUN_0 using /dev/sdb.
/backstores/block> create name=LUN_1 dev=/dev/sdc
Created block storage object LUN_1 using /dev/sdc.
The names that you assign to the storage objects are arbitrary.
- Change to the /backstores/fileio directory and create a disk image named "disk01" at /iscsi_disks/disk01.img with a size of 20G:
/> cd /backstores/fileio
/backstores/fileio> create disk01 /iscsi_disks/disk01.img 20G
Created fileio disk01 with size 21474836480
/backstores/fileio>
- Change to the /iscsi directory and create an iSCSI target:
/> cd /iscsi
/iscsi> create
Created target iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344.
Created TPG 1.
List the target portal group (TPG) hierarchy, which is initially empty:
/iscsi> ls
o- iscsi .......................................................... [Targets: 1]
  o- iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 .............. [TPGs: 1]
    o- tpg1 ............................................. [no-gen-acls, no-auth]
      o- acls ........................................................ [ACLs: 0]
      o- luns ........................................................ [LUNs: 0]
      o- portals .................................................. [Portals: 0]
- Change to the luns subdirectory of the TPG directory hierarchy and add the LUNs, including the fileio disk image, to the target portal group:
/iscsi> cd iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344/tpg1/luns
/iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_0
Created LUN 0.
/iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_1
Created LUN 1.
/iscsi/iqn.20...344/tpg1/luns> create /backstores/fileio/disk01
Created LUN 2.
- Change to the portals subdirectory of the TPG directory hierarchy and specify the IP address and port of the iSCSI endpoint:
/iscsi/iqn.20...344/tpg1/luns> cd ../portals
/iscsi/iqn.20.../tpg1/portals> create 192.168.178.201 3260
Using default IP port 3260
Created network portal 192.168.178.201:3260.
If you omit the port number, the default value is 3260.
List the object hierarchy, which now shows the configured storage objects and TPG:
/iscsi/iqn.20.../tpg1/portals> ls /
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 2]
  | | o- LUN_0 ....................... [/dev/sdb (10.0GiB) write-thru activated]
  | | o- LUN_1 ....................... [/dev/sdc (10.0GiB) write-thru activated]
  | o- fileio ............................................. [Storage Objects: 1]
  | | o- disk01 .................. [/iscsi_disks/disk01.img (20.0GiB) activated]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 1]
  | o- iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 ............ [TPGs: 1]
  |   o- tpg1 ........................................... [no-gen-acls, no-auth]
  |     o- acls ...................................................... [ACLs: 0]
  |     o- luns ...................................................... [LUNs: 3]
  |     | o- lun0 ..................................... [block/LUN_0 (/dev/sdb)]
  |     | o- lun1 ..................................... [block/LUN_1 (/dev/sdc)]
  |     | o- lun2 ..................... [fileio/disk01 (/iscsi_disks/disk01.img)]
  |     o- portals ................................................ [Portals: 1]
  |       o- 192.168.178.201:3260 .......................................... [OK]
  o- loopback ..................................................... [Targets: 0]
- Configure the access rights for logins by initiators. For example, to configure demonstration mode, which does not require authentication, change to the TPG directory and set the values of the authentication and demo_mode_write_protect attributes to 0, and the generate_node_acls and cache_dynamic_acls attributes to 1:
/iscsi/iqn.20.../tpg1/portals> cd ..
/iscsi/iqn.20...14f87344/tpg1> set attribute authentication=0 demo_mode_write_protect=0 \
generate_node_acls=1 cache_dynamic_acls=1
Parameter authentication is now '0'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
Caution: Demonstration mode is inherently insecure. For information about configuring secure authentication modes, see http://linux-iscsi.org/wiki/ISCSI#Define_access_rights.
- Change to the root directory and save the configuration so that it persists across reboots of the system:
/iscsi/iqn.20...14f87344/tpg1> cd /
/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
targetcli saves the current configuration to the JSON-format file /etc/target/saveconfig.json.
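To have the saved configuration restored automatically at boot, you can enable the target service that ships with targetcli on Oracle Linux 7; the unit name is an assumption here, so verify it with systemctl list-unit-files:
# Restore /etc/target/saveconfig.json at every boot.
systemctl enable target
systemctl start target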
Configuring an iSCSI Initiator
- Install the iscsi-initiator-utils package:
# yum install iscsi-initiator-utils
- Use the SendTargets discovery method to discover the iSCSI targets at a specified IP address:
# iscsiadm -m discovery -t sendtargets -p 192.168.178.201
192.168.178.201:3260,1 iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344
Note: An alternate discovery method is Internet Storage Name Service (iSNS). This command also starts the iscsid service if it is not already running.
The following command displays information about the targets that is now stored in the discovery database:
# iscsiadm -m discoverydb -t st -p 192.168.178.201
# BEGIN RECORD 6.2.0.873-14
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 192.168.178.201
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = <empty>
discovery.sendtargets.auth.password = <empty>
discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD
- Establish a session and log in to a specific target:
# iscsiadm -m node -T iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 \
  -p 192.168.178.201:3260 -l
Login to [iface: default, target: iqn.2013-01.com.mydom.host01.x8664:
sn.ef8e14f87344, portal: 192.168.178.201,3260] successful.
- Verify that the session is active, and display the available LUNs:
# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-14
Target: iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 (non-flash)
	Current Portal: 192.168.178.201:3260,1
	Persistent Portal: 192.168.178.201:3260,1
		**********
		Interface:
		**********
		Iface Name: default
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.com.mydom:ed7021225d52
		Iface IPaddress: 192.168.178.203
		Iface HWaddress: <empty>
		Iface Netdev: <empty>
		SID: 5
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
		. . .
		************************
		Attached SCSI devices:
		************************
		Host Number: 8	State: running
		scsi8 Channel 00 Id 0 Lun: 0
			Attached scsi disk sdb	State: running
		scsi8 Channel 00 Id 0 Lun: 1
			Attached scsi disk sdc	State: running
The LUNs are represented as SCSI block devices (sd*) in the local /dev directory, for example:
# fdisk -l | grep 'Disk /dev/sd[bcd]'
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdd: 20.0 GB, 21474836480 bytes, 41943040 sectors
To distinguish between target LUNs, examine their paths under /dev/disk/by-path:
# ls -l /dev/disk/by-path/
lrwxrwxrwx 1 root root 9 May 15 21:05 ip-192.168.178.201:3260-iscsi-iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 9 May 15 21:05 ip-192.168.178.201:3260-iscsi-iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344-lun-1 -> ../../sdc
You can view the initialization messages for the LUNs in the /var/log/messages file, for example:
# grep sdb /var/log/messages
...
May 18 14:19:36 localhost kernel: [12079.963376] sd 8:0:0:0: [sdb] Attached SCSI disk
...
You can configure and use a LUN in the same way as you would any other physical storage device. For example, you can configure it as an LVM physical volume, file system, swap partition, Automatic Storage Management (ASM) disk, or raw device.
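For instance, a minimal sketch of putting one of the LUNs to use as an ext4 file system; the device name and mount point are assumptions:
# Partition the LUN, create a file system, and mount it.
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
mkfs.ext4 /dev/sdb1
mkdir -p /iscsi_mount_point
mount /dev/sdb1 /iscsi_mount_point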
Specify the _netdev option when creating mount entries for iSCSI LUNs in /etc/fstab, for example:
UUID=084591f8-6b8b-c857-f002-ecf8a3b387f3  /iscsi_mount_point  ext4  _netdev  0 0
This option indicates that the file system resides on a device that requires network access, and it prevents the system from attempting to mount the file system until the network has been enabled.
Note
Specify an iSCSI LUN in /etc/fstab by using UUID=UUID rather than the device path. A device path can change after reconnecting the storage or rebooting the system. You can use the blkid command to display the UUID of a block device.
Any discovered LUNs remain available across reboots provided that the target continues to serve those LUNs and you do not log the system off the target.
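For example, a sketch of looking up the UUID for the fstab entry above; the output shown is illustrative and simply reuses the UUID from that example:
# blkid /dev/sdb1
/dev/sdb1: UUID="084591f8-6b8b-c857-f002-ecf8a3b387f3" TYPE="ext4"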
INSTALLING OCFS2 ON EACH NODE.
# yum install ocfs2-tools-devel ocfs2-tools
Creating the Configuration File for the Cluster Stack
You can create the configuration file by using the o2cb command or a text editor.
To configure the cluster stack by using the o2cb command:
- Use the following command to create a cluster definition:
# o2cb add-cluster cluster_name
For example, to define a cluster named mycluster with four nodes:
# o2cb add-cluster mycluster
The command creates the configuration file /etc/ocfs2/cluster.conf if it does not already exist.
- For each node, use the following command to define the node:
# o2cb add-node cluster_name node_name --ip ip_address
The name of the node must be the same as the value of the system's HOSTNAME that is configured in /etc/sysconfig/network. The IP address is the one that the node will use for private communication in the cluster.
For example, to define a node named node0 with the IP address 10.1.0.100 in the cluster mycluster:
# o2cb add-node mycluster node0 --ip 10.1.0.100
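The remaining nodes can be added the same way; a minimal sketch, assuming the node names and consecutive IP addresses used in the sample configuration below:
# Define node1, node2, and node3 of mycluster (names and IPs are assumptions).
for i in 1 2 3; do
    o2cb add-node mycluster node$i --ip 10.1.0.10$i
done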
- If you want the cluster to use global heartbeat devices, use the following commands:
# o2cb add-heartbeat cluster_name device1
. . .
# o2cb heartbeat-mode cluster_name global
Note: You must configure global heartbeat to use whole disk devices. You cannot configure a global heartbeat device on a disk partition.
For example, to use /dev/sdd, /dev/sdg, and /dev/sdj as global heartbeat devices:
# o2cb add-heartbeat mycluster /dev/sdd
# o2cb add-heartbeat mycluster /dev/sdg
# o2cb add-heartbeat mycluster /dev/sdj
# o2cb heartbeat-mode mycluster global
- Copy the cluster configuration file /etc/ocfs2/cluster.conf to each node in the cluster, for example with scp as sketched below.
Note: Any changes that you make to the cluster configuration file do not take effect until you restart the cluster stack.
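A minimal copy sketch, assuming root SSH access and the node names from the sample configuration that follows:
# Push the configuration file to the other cluster nodes.
for n in node1 node2 node3; do
    scp /etc/ocfs2/cluster.conf root@$n:/etc/ocfs2/cluster.conf
done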
The following sample configuration file /etc/ocfs2/cluster.conf defines a 4-node cluster named mycluster with a local heartbeat:

node:
    name = node0
    cluster = mycluster
    number = 0
    ip_address = 10.1.0.100
    ip_port = 7777

node:
    name = node1
    cluster = mycluster
    number = 1
    ip_address = 10.1.0.101
    ip_port = 7777

node:
    name = node2
    cluster = mycluster
    number = 2
    ip_address = 10.1.0.102
    ip_port = 7777

node:
    name = node3
    cluster = mycluster
    number = 3
    ip_address = 10.1.0.103
    ip_port = 7777

cluster:
    name = mycluster
    heartbeat_mode = local
    node_count = 4
If you configure your cluster to use a global heartbeat, the file also includes entries for the global heartbeat devices:
node:
    name = node0
    cluster = mycluster
    number = 0
    ip_address = 10.1.0.100
    ip_port = 7777

node:
    name = node1
    cluster = mycluster
    number = 1
    ip_address = 10.1.0.101
    ip_port = 7777

node:
    name = node2
    cluster = mycluster
    number = 2
    ip_address = 10.1.0.102
    ip_port = 7777

node:
    name = node3
    cluster = mycluster
    number = 3
    ip_address = 10.1.0.103
    ip_port = 7777

cluster:
    name = mycluster
    heartbeat_mode = global
    node_count = 4

heartbeat:
    cluster = mycluster
    region = 7DA5015346C245E6A41AA85E2E7EA3CF

heartbeat:
    cluster = mycluster
    region = 4F9FBB0D9B6341729F21A8891B9A05BD

heartbeat:
    cluster = mycluster
    region = B423C7EEE9FC426790FC411972C91CC3
The cluster heartbeat mode is now shown as global, and the heartbeat regions are represented by the UUIDs of their block devices.
If you edit the configuration file manually, ensure that you use the following layout:
- The cluster:, heartbeat:, and node: headings must start in the first column.
- Each parameter entry must be indented by one tab space.
- A blank line must separate each section that defines the cluster, a heartbeat device, or a node.
To configure the cluster stack:
- Run the following command on each node of the cluster:
# /etc/init.d/o2cb configure
You are prompted for several values, including whether to load the cluster stack on boot, the name of the cluster to start, and the heartbeat and network timeout settings shown in the status output below.
To verify the settings for the cluster stack, enter the systemctl status o2cb command:
# systemctl status o2cb
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "mycluster": Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
  Heartbeat mode: Local
Checking O2CB heartbeat: Active
In this example, the cluster is online and is using local heartbeat mode. If no volumes have been configured, the O2CB heartbeat is shown as Not active rather than Active.
The next example shows the command output for an online cluster that is using three global heartbeat devices:
# systemctl status o2cb
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "mycluster": Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
  Heartbeat mode: Global
Checking O2CB heartbeat: Active
  7DA5015346C245E6A41AA85E2E7EA3CF /dev/sdd
  4F9FBB0D9B6341729F21A8891B9A05BD /dev/sdg
  B423C7EEE9FC426790FC411972C91CC3 /dev/sdj
- Configure the o2cb and ocfs2 services so that they start at boot time after networking is enabled:
# systemctl enable o2cb
# systemctl enable ocfs2
These settings allow the node to mount OCFS2 volumes automatically when the system starts.
Starting and Stopping the Cluster Stack
You can use the o2cb init script to perform various operations on the cluster stack, as sketched below.
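A minimal sketch of the most common operations, assuming the o2cb init script shipped with ocfs2-tools; see the o2cb(8) manual page for the full list:
# Check the status of the cluster stack.
/etc/init.d/o2cb status
# Bring the cluster mycluster online.
/etc/init.d/o2cb online mycluster
# Take the cluster mycluster offline.
/etc/init.d/o2cb offline mycluster
# Unload the cluster stack modules from the kernel.
/etc/init.d/o2cb unload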
You can use the mkfs.ocfs2 command to create an OCFS2 volume on a device. If you want to label the volume and mount it by specifying the label, the device must correspond to a partition. You cannot mount an unpartitioned disk device by specifying a label. The examples below illustrate the most useful options that you can use when creating an OCFS2 volume.
For example, create an OCFS2 volume on /dev/sdc1 labeled as myvol using all the default settings for generic usage (4 KB block and cluster size, eight node slots, a 256 MB journal, and support for default file-system features):
# mkfs.ocfs2 -L "myvol" /dev/sdc1
Create an OCFS2 volume on /dev/sdd2 labeled as dbvol for use with database files. In this case, the cluster size is set to 128 KB and the journal size to 32 MB:
# mkfs.ocfs2 -L "dbvol" -T datafiles /dev/sdd2
Create an OCFS2 volume on /dev/sde1 with a 16 KB cluster size, a 128 MB journal, 16 node slots, and support enabled for all features except refcount trees:
# mkfs.ocfs2 -C 16K -J size=128M -N 16 --fs-feature-level=max-features \
  --fs-features=norefcount /dev/sde1
Note
Do not create an OCFS2 volume on an LVM logical volume. LVM is not cluster-aware.
You cannot change the block and cluster size of an OCFS2 volume after it has been created. You can use the tunefs.ocfs2 command to modify other settings for the file system, with certain restrictions. For more information, see the tunefs.ocfs2(8) manual page.
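For instance, a hedged sketch of two adjustments that tunefs.ocfs2 supports; the device name is an assumption, so check the flags against your tunefs.ocfs2(8) manual page:
# Change the volume label of an existing OCFS2 file system.
tunefs.ocfs2 -L newvol /dev/sdc1
# Increase the number of node slots to 16.
tunefs.ocfs2 -N 16 /dev/sdc1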
As shown in the following example, specify the _netdev option in /etc/fstab if you want the system to mount an OCFS2 volume at boot time after networking is started, and to unmount the file system before networking is stopped:
myocfs2vol  /dbvol1  ocfs2  _netdev,defaults  0 0
Note
The file system will not mount unless you have enabled the o2cb and ocfs2 services to start after networking is started. See Section 20.2.5, "Configuring the Cluster Stack".
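A minimal manual mount sketch, assuming the dbvol volume created earlier and an arbitrary mount point:
# Create the mount point and mount the OCFS2 volume by its label.
mkdir -p /dbvol1
mount -L dbvol /dbvol1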