OCFS2 CLUSTER OVER iSCSI ON ORACLE LINUX 7.0

We start from the premise that we have 3 nodes running Oracle Linux 7.0 Enterprise (the best operating system in the world after Solaris 11.2).

They are:

- ol7.server1   192.168.178.201
- ol7.server2   192.168.178.202
- ol7.cliente1  192.168.178.203

We configure the iSCSI target on ol7.server1.


[root@ol7.server1]# yum -y install targetcli
  1. [Optional] If you want to export a disk image as a SCSI device, create a directory to hold it:

            [root@ol7.server1]# mkdir /iscsi_disks

  2. Run the targetcli shell:
    # targetcli
    targetcli shell version 2.1.fb31
    Copyright 2011-2013 by Datera, Inc and others.
    For help on commands, type 'help'. 
    List the object hierarchy, which is initially empty:
    /> ls
    o- / ..................................................................... [...]
      o- backstores .......................................................... [...]
      | o- block .............................................. [Storage Objects: 0]
      | o- fileio ............................................. [Storage Objects: 0]
      | o- pscsi .............................................. [Storage Objects: 0]
      | o- ramdisk ............................................ [Storage Objects: 0]
      o- iscsi ........................................................ [Targets: 0]
      o- loopback ..................................................... [Targets: 0]
  3. Change to the /backstores/block directory and create a block storage object for the disk partitions that you want to provide as LUNs, for example:
    /> cd /backstores/block
    /backstores/block> create name=LUN_0 dev=/dev/sdb
    Created block storage object LUN_0 using /dev/sdb.
    /backstores/block> create name=LUN_1 dev=/dev/sdc
    Created block storage object LUN_1 using /dev/sdc.
    The names that you assign to the storage objects are arbitrary.
    [Optional] Change to the /backstores/fileio directory and create a 20 GB disk image named disk01, backed by /iscsi_disks/disk01.img:
    /> cd /backstores/fileio
    /backstores/fileio> create disk01 /iscsi_disks/disk01.img 20G
    Created fileio disk01 with size 21474836480
  4. Change to the /iscsi directory and create an iSCSI target:
    /> cd /iscsi
    /iscsi> create
    Created target iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344.
    Created TPG 1.
    List the target portal group (TPG) hierarchy, which is initially empty:
    /iscsi> ls
    o- iscsi .......................................................... [Targets: 1]
      o- iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 .............. [TPGs: 1]
        o- tpg1 ............................................. [no-gen-acls, no-auth]
          o- acls ........................................................ [ACLs: 0]
          o- luns ........................................................ [LUNs: 0]
          o- portals .................................................. [Portals: 0]
  5. Change to the luns subdirectory of the TPG directory hierarchy and add the LUNs (including the fileio disk image) to the target portal group:
    /iscsi> cd iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344/tpg1/luns
    /iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_0
    Created LUN 0.
    /iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_1
    Created LUN 1.
    /iscsi/iqn.20...344/tpg1/luns> create /backstores/fileio/disk01
    Created LUN 2.
  6. Change to the portals subdirectory of the TPG directory hierarchy and specify the IP address and port of the iSCSI endpoint:
    /iscsi/iqn.20...344/tpg1/luns> cd ../portals
    /iscsi/iqn.20.../tpg1/portals> create 192.168.178.201
    Using default IP port 3260
    Created network portal 192.168.178.201:3260.
    If you omit the port number, the default value is 3260.
    List the object hierarchy, which now shows the configured block storage objects and TPG:
    /iscsi/iqn.20.../tpg1/portals> ls /
    o- / ..................................................................... [...]
      o- backstores .......................................................... [...]
      | o- block .............................................. [Storage Objects: 2]
      | | o- LUN_0 ....................... [/dev/sdb (10.0GiB) write-thru activated]
      | | o- LUN_1 ....................... [/dev/sdc (10.0GiB) write-thru activated]
      | o- fileio ............................................. [Storage Objects: 1]
      | | o- disk01 ........................... [/iscsi_disks/disk01.img (20.0GiB)]
      | o- pscsi .............................................. [Storage Objects: 0]
      | o- ramdisk ............................................ [Storage Objects: 0]
      o- iscsi ........................................................ [Targets: 1]
      | o- iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 ............ [TPGs: 1]
      |   o- tpg1 ........................................... [no-gen-acls, no-auth]
      |     o- acls ...................................................... [ACLs: 0]
      |     o- luns ...................................................... [LUNs: 3]
      |     | o- lun0 ..................................... [block/LUN_0 (/dev/sdb)]
      |     | o- lun1 ..................................... [block/LUN_1 (/dev/sdc)]
      |     | o- lun2 .................... [fileio/disk01 (/iscsi_disks/disk01.img)]
      |     o- portals ................................................ [Portals: 1]
      |       o- 192.168.178.201:3260 ............................................ [OK]
      o- loopback ..................................................... [Targets: 0]
  7. Configure the access rights for logins by initiators. For example, to configure demonstration mode, which does not require authentication, change to the TPG directory and set the authentication and demo_mode_write_protect attributes to 0, and the generate_node_acls and cache_dynamic_acls attributes to 1:
    /iscsi/iqn.20.../tpg1/portals> cd ..
    /iscsi/iqn.20...14f87344/tpg1> set attribute authentication=0 demo_mode_write_protect=0 \
    generate_node_acls=1 cache_dynamic_acls=1
    Parameter authentication is now '0'.
    Parameter demo_mode_write_protect is now '0'.
    Parameter generate_node_acls is now '1'.
    Parameter cache_dynamic_acls is now '1'.
    Caution
    Demonstration mode is inherently insecure. For information about configuring secure authentication modes, see http://linux-iscsi.org/wiki/ISCSI#Define_access_rights. A per-initiator ACL alternative is sketched after this procedure.
  8. Change to the root directory and save the configuration so that it persists across reboots of the system:
    /iscsi/iqn.20...14f87344/tpg1> cd /
    /> saveconfig
    Last 10 configs saved in /etc/target/backup.
    Configuration saved to /etc/target/saveconfig.json
    targetcli saves the current configuration to the JSON-format file /etc/target/saveconfig.json.
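
So that the saved configuration is restored after a reboot and the portal is reachable from the other nodes, you will normally also enable the target service and open the iSCSI port. A minimal sketch, assuming the target.service unit shipped with the targetcli package and an active firewalld (skip the firewall-cmd lines if firewalld is not running):

    [root@ol7.server1]# systemctl enable target
    [root@ol7.server1]# systemctl start target
    [root@ol7.server1]# firewall-cmd --permanent --add-port=3260/tcp
    [root@ol7.server1]# firewall-cmd --reload

If you prefer not to use demonstration mode (see the caution in step 7), a per-initiator ACL with CHAP can be defined instead. The following is only a sketch: the initiator IQN, userid and password are placeholders that must match the client's /etc/iscsi/initiatorname.iscsi and /etc/iscsi/iscsid.conf settings.

    /> cd /iscsi/iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344/tpg1
    /iscsi/iqn.20...14f87344/tpg1> set attribute authentication=1 generate_node_acls=0
    /iscsi/iqn.20...14f87344/tpg1> cd acls
    /iscsi/iqn.20.../tpg1/acls> create iqn.1994-05.com.mydom:client1
    /iscsi/iqn.20.../tpg1/acls> cd iqn.1994-05.com.mydom:client1
    /iscsi/iqn.20...mydom:client1> set auth userid=iscsiuser password=Secret_123
    /iscsi/iqn.20...mydom:client1> cd /
    /> saveconfig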

 

Configuring an iSCSI Initiator


To configure an Oracle Linux system as an iSCSI initiator (repeat this on every node that will access the shared LUNs):
  1. Install the iscsi-initiator-utils package:
    # yum install iscsi-initiator-utils
  2. Use the SendTargets discovery method to discover the iSCSI targets at a specified IP address:
    # iscsiadm -m discovery -t sendtargets -p 192.168.178.201
    192.168.178.201:3260,1 iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344
    Note
    An alternate discovery method is Internet Storage Name Service (iSNS).
    The command also starts the iscsid service if it is not already running.
    The following command displays information about the targets that is now stored in the discovery database:
    # iscsiadm -m discoverydb -t st -p 192.168.178.201
    # BEGIN RECORD 6.2.0.873-14
    discovery.startup = manual
    discovery.type = sendtargets
    discovery.sendtargets.address = 192.168.178.201
    discovery.sendtargets.port = 3260
    discovery.sendtargets.auth.authmethod = None
    discovery.sendtargets.auth.username = <empty>
    discovery.sendtargets.auth.password = <empty>
    discovery.sendtargets.auth.username_in = <empty>
    discovery.sendtargets.auth.password_in = <empty>
    discovery.sendtargets.timeo.login_timeout = 15
    discovery.sendtargets.use_discoveryd = No
    discovery.sendtargets.discoveryd_poll_inval = 30
    discovery.sendtargets.reopen_max = 5
    discovery.sendtargets.timeo.auth_timeout = 45
    discovery.sendtargets.timeo.active_timeout = 30
    discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
    # END RECORD
  3. Establish a session and log in to a specific target:
    # iscsiadm -m node -T iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 \
      -p 192.168.178.201:3260 -l
    Login to [iface: default, target: iqn.2013-01.com.mydom.host01.x8664:
    sn.ef8e14f87344, portal: 192.168.178.201,3260] successful.
  4. Verify that the session is active, and display the available LUNs:
    # iscsiadm -m session -P 3
    iSCSI Transport Class version 2.0-870
    version 6.2.0.873-14
    Target: iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 (non-flash)
     Current Portal: 192.168.178.201:3260,1
     Persistent Portal: 192.168.178.201:3260,1
      **********
      Interface:
      **********
      Iface Name: default
      Iface Transport: tcp
      Iface Initiatorname: iqn.1994-05.com.mydom:ed7021225d52
      Iface IPaddress: 192.168.178.202
      Iface HWaddress: <empty>
      Iface Netdev: <empty>
      SID: 5
      iSCSI Connection State: LOGGED IN
      iSCSI Session State: LOGGED_IN
      Internal iscsid Session State: NO CHANGE
    .
    .
    .
      ************************
      Attached SCSI devices:
      ************************
      Host Number: 8 State: running
      scsi8 Channel 00 Id 0 Lun: 0
       Attached scsi disk sdb  State: running
      scsi8 Channel 00 Id 0 Lun: 1
       Attached scsi disk sdc  State: running
      scsi8 Channel 00 Id 0 Lun: 2
       Attached scsi disk sdd  State: running
    The LUNs are represented as SCSI block devices (sd*) in the local /dev directory, for example:
    # fdisk -l | grep 'Disk /dev/sd[bcd]'
    Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
    Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
    Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
  5. To distinguish between target LUNs, examine their paths under /dev/disk/by-path:
    # ls -l /dev/disk/by-path/
    lrwxrwxrwx 1 root root 9 May 15 21:05
      ip-192.168.178.201:3260-iscsi-iqn.2013-01.com.mydom.host01.x8664:
      sn.ef8e14f87344-lun-0 -> ../../sdb
    lrwxrwxrwx 1 root root 9 May 15 21:05
      ip-192.168.178.201:3260-iscsi-iqn.2013-01.com.mydom.host01.x8664:
      sn.ef8e14f87344-lun-1 -> ../../sdc
    lrwxrwxrwx 1 root root 9 May 15 21:05
      ip-192.168.178.201:3260-iscsi-iqn.2013-01.com.mydom.host01.x8664:
      sn.ef8e14f87344-lun-2 -> ../../sdd
    You can view the initialization messages for the LUNs in the /var/log/messages file, for example:
    # grep sdb /var/log/messages
    ...
    May 18 14:19:36 localhost kernel: [12079.963376] sd 8:0:0:0: [sdb] Attached SCSI disk
    ...
    You can configure and use a LUN in the same way as you would any other physical storage device. For example, you can configure it as an LVM physical volume, file system, swap partition, Automatic Storage Management (ASM) disk, or raw device.
    Specify the _netdev option when creating mount entries for iSCSI LUNs in /etc/fstab, for example:
    UUID=084591f8-6b8b-c857-f002-ecf8a3b387f3     /iscsi_mount_point     ext4     _netdev   0  0
    This option indicates the file system resides on a device that requires network access, and prevents the system from attempting to mount the file system until the network has been enabled.
    Note
    Specify an iSCSI LUN in /etc/fstab by using UUID=UUID rather than the device path. A device path can change after re-connecting the storage or rebooting the system. You can use the blkid command to display the UUID of a block device.
    Any discovered LUNs remain available across reboots provided that the target continues to serve those LUNs and you do not log the system off the target.
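
To have each node log back in to the target automatically at boot, update the node record and enable the iSCSI services; blkid then reports the UUID to place in /etc/fstab once a file system exists on the LUN. A minimal sketch:

    # iscsiadm -m node -T iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 \
      -p 192.168.178.201:3260 --op update -n node.startup -v automatic
    # systemctl enable iscsid iscsi
    # blkid /dev/sdb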


INSTALLING OCFS2 ON EACH NODE.


# yum install ocfs2-tools-devel ocfs2-tools

Creating the Configuration File for the Cluster Stack

You can create the configuration file by using the o2cb command or a text editor.
To configure the cluster stack by using the o2cb command:
  1. Use the following command to create a cluster definition.
    # o2cb add-cluster cluster_name 
    For example, to define a cluster named mycluster with four nodes:
    # o2cb add-cluster mycluster
    The command creates the configuration file /etc/ocfs2/cluster.conf if it does not already exist.
  2. For each node, use the following command to define the node.
    # o2cb add-node cluster_name node_name --ip ip_address
    The name of the node must be the same as the system's host name (the HOSTNAME value in /etc/sysconfig/network, or the contents of /etc/hostname on Oracle Linux 7). The IP address is the one that the node will use for private communication in the cluster.
    For example, to define a node named node0 with the IP address 10.1.0.100 in the cluster mycluster:
    # o2cb add-node mycluster node0 --ip 10.1.0.100
    A sketch using this article's three nodes appears after this procedure.
  3. If you want the cluster to use global heartbeat devices, use the following commands.
    # o2cb add-heartbeat cluster_name device1
    .
    .
    .
    # o2cb heartbeat-mode cluster_name global
    Note
    You must configure global heartbeat to use whole disk devices. You cannot configure a global heartbeat device on a disk partition.
    For example, to use /dev/sdd, /dev/sdg, and /dev/sdj as global heartbeat devices:
    # o2cb add-heartbeat mycluster /dev/sdd
    # o2cb add-heartbeat mycluster /dev/sdg
    # o2cb add-heartbeat mycluster /dev/sdj
    # o2cb heartbeat-mode mycluster global
  4. Copy the cluster configuration file /etc/ocfs2/cluster.conf to each node in the cluster.
    Note
    Any changes that you make to the cluster configuration file do not take effect until you restart the cluster stack.
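
For reference, a sketch of the same commands applied to this article's three nodes (the cluster name ocfs2demo is an arbitrary choice, the node names must match each machine's hostname, and it is assumed that the 192.168.178.0/24 network is also used for the cluster interconnect):

    # o2cb add-cluster ocfs2demo
    # o2cb add-node ocfs2demo ol7.server1 --ip 192.168.178.201
    # o2cb add-node ocfs2demo ol7.server2 --ip 192.168.178.202
    # o2cb add-node ocfs2demo ol7.cliente1 --ip 192.168.178.203
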
The following sample configuration file /etc/ocfs2/cluster.conf defines a 4-node cluster named mycluster with a local heartbeat.
node:
        name = node0
        cluster = mycluster
        number = 0
        ip_address = 10.1.0.100
        ip_port = 7777

node:
        name = node1
        cluster = mycluster
        number = 1
        ip_address = 10.1.0.101
        ip_port = 7777

node:
        name = node2
        cluster = mycluster
        number = 2
        ip_address = 10.1.0.102
        ip_port = 7777

node:
        name = node3
        cluster = mycluster
        number = 3
        ip_address = 10.1.0.103
        ip_port = 7777

cluster:
        name = mycluster
        heartbeat_mode = local
        node_count = 4
If you configure your cluster to use a global heartbeat, the file also includes entries for the global heartbeat devices.
node:
        name = node0
        cluster = mycluster
        number = 0
        ip_address = 10.1.0.100
        ip_port = 7777

node:
        name = node1
        cluster = mycluster
        number = 1
        ip_address = 10.1.0.101
        ip_port = 7777

node:
        name = node2
        cluster = mycluster
        number = 2
        ip_address = 10.1.0.102
        ip_port = 7777

node:
        name = node3
        cluster = mycluster
        number = 3
        ip_address = 10.1.0.103
        ip_port = 7777

cluster:
        name = mycluster
        heartbeat_mode = global
        node_count = 4

heartbeat:
        cluster = mycluster
        region = 7DA5015346C245E6A41AA85E2E7EA3CF

heartbeat:
        cluster = mycluster
        region = 4F9FBB0D9B6341729F21A8891B9A05BD

heartbeat:
        cluster = mycluster
        region = B423C7EEE9FC426790FC411972C91CC3
The cluster heartbeat mode is now shown as global, and the heartbeat regions are represented by the UUIDs of their block devices.
If you edit the configuration file manually, ensure that you use the following layout:
  • The cluster:, heartbeat:, and node: headings must start in the first column.
  • Each parameter entry must be indented by one tab space.
  • A blank line must separate each section that defines the cluster, a heartbeat device, or a node.

 Configuring the Cluster Stack

To configure the cluster stack:
  1. Run the following command on each node of the cluster:
    # /etc/init.d/o2cb configure
    You are prompted for the following values (an example interactive session is sketched after this procedure):

    Load O2CB driver on boot (y/n)
        Whether the cluster stack driver should be loaded at boot time. The default response is n.

    Cluster stack backing O2CB
        The name of the cluster stack service. The default and usual response is o2cb.

    Cluster to start at boot (Enter "none" to clear)
        Enter the name of your cluster that you defined in the cluster configuration file, /etc/ocfs2/cluster.conf.

    Specify heartbeat dead threshold (>=7)
        The number of 2-second heartbeats that must elapse without response before a node is considered dead. To calculate the value to enter, divide the required threshold time period by 2 and add 1. For example, to set the threshold time period to 120 seconds, enter a value of 61. The default value is 31, which corresponds to a threshold time period of 60 seconds.
        Note: If your system uses multipathed storage, the recommended value is 61 or greater.

    Specify network idle timeout in ms (>=5000)
        The time in milliseconds that must elapse before a network connection is considered dead. The default value is 30,000 milliseconds.
        Note: For bonded network interfaces, the recommended value is 30,000 milliseconds or greater.

    Specify network keepalive delay in ms (>=1000)
        The maximum delay in milliseconds between sending keepalive packets to another node. The default and recommended value is 2,000 milliseconds.

    Specify network reconnect delay in ms (>=2000)
        The minimum delay in milliseconds between reconnection attempts if a network connection goes down. The default and recommended value is 2,000 milliseconds.
    To verify the settings for the cluster stack, enter the systemctl status o2cb command:
    # systemctl status o2cb
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Stack glue driver: Loaded
    Stack plugin "o2cb": Loaded
    Driver for "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster "mycluster": Online
      Heartbeat dead threshold: 61
      Network idle timeout: 30000
      Network keepalive delay: 2000
      Network reconnect delay: 2000
      Heartbeat mode: Local
    Checking O2CB heartbeat: Active
    In this example, the cluster is online and is using local heartbeat mode. If no volumes have been configured, the O2CB heartbeat is shown as Not active rather than Active.
    The next example shows the command output for an online cluster that is using three global heartbeat devices:
    # systemctl status o2cb
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Stack glue driver: Loaded
    Stack plugin "o2cb": Loaded
    Driver for "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster "mycluster": Online
      Heartbeat dead threshold: 61
      Network idle timeout: 30000
      Network keepalive delay: 2000
      Network reconnect delay: 2000
      Heartbeat mode: Global
    Checking O2CB heartbeat: Active
      7DA5015346C245E6A41AA85E2E7EA3CF /dev/sdd
      4F9FBB0D9B6341729F21A8891B9A05BD /dev/sdg
      B423C7EEE9FC426790FC411972C91CC3 /dev/sdj
  2. Configure the o2cb and ocfs2 services so that they start at boot time after networking is enabled:
    # systemctl enable o2cb
    # systemctl enable ocfs2
    These settings allow the node to mount OCFS2 volumes automatically when the system starts.
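
For illustration only, an interactive run might look roughly as follows. This is a sketch: the answers assume the defaults described above and a cluster named mycluster, and the exact prompt text and default values can vary between ocfs2-tools versions.

    # /etc/init.d/o2cb configure
    Load O2CB driver on boot (y/n) [n]: y
    Cluster stack backing O2CB [o2cb]: o2cb
    Cluster to start at boot (Enter "none" to clear) [ocfs2]: mycluster
    Specify heartbeat dead threshold (>=7) [31]: 61
    Specify network idle timeout in ms (>=5000) [30000]: 30000
    Specify network keepalive delay in ms (>=1000) [2000]: 2000
    Specify network reconnect delay in ms (>=2000) [2000]: 2000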

Starting and Stopping the Cluster Stack

The following commands can be used to perform various operations on the cluster stack:

  systemctl status o2cb       Check the status of the cluster stack.
  /etc/init.d/o2cb online     Start the cluster stack.
  /etc/init.d/o2cb offline    Stop the cluster stack.
  /etc/init.d/o2cb unload     Unload the cluster stack.
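
For example, to restart the cluster stack on a node (the trailing cluster name is optional on most versions of the init script; if omitted, the cluster configured with o2cb configure is used):

  # /etc/init.d/o2cb offline mycluster
  # /etc/init.d/o2cb online mycluster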

Creating OCFS2 Volumes

You can use the mkfs.ocfs2 command to create an OCFS2 volume on a device. If you want to label the volume and mount it by specifying the label, the device must correspond to a partition. You cannot mount an unpartitioned disk device by specifying a label. The most useful options when creating an OCFS2 volume are the following:

-b block-size, --block-size block-size
    Specifies the unit size for I/O transactions to and from the file system, and the size of inode and extent blocks. The supported block sizes are 512 bytes, 1 KB, 2 KB, and 4 KB. The default and recommended block size is 4K (4 KB).

-C cluster-size, --cluster-size cluster-size
    Specifies the unit size for space used to allocate file data. The supported cluster sizes are 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1 MB. The default cluster size is 4K (4 KB). If you intend the volume to store database files, do not specify a cluster size that is smaller than the block size of the database.

--fs-feature-level=feature-level
    Allows you to select a set of file-system features:
    default       Enables support for the sparse files, unwritten extents, and inline data features.
    max-compat    Enables only those features that are understood by older versions of OCFS2.
    max-features  Enables all features that OCFS2 currently supports.

--fs_features=feature
    Allows you to enable or disable individual features such as support for sparse files, unwritten extents, and backup superblocks. For more information, see the mkfs.ocfs2(8) manual page.

-J size=journal-size, --journal-options size=journal-size
    Specifies the size of the write-ahead journal. If not specified, the size is determined from the file system usage type that you specify with the -T option, and, otherwise, from the volume size. The default size of the journal is 64M (64 MB) for datafiles, 256M (256 MB) for mail, and 128M (128 MB) for vmstore.

-L volume-label, --label volume-label
    Specifies a descriptive name for the volume that allows you to identify it easily on different cluster nodes.

-N number, --node-slots number
    Determines the maximum number of nodes that can concurrently access a volume, which is limited by the number of node slots for system files such as the file-system journal. For best performance, set the number of node slots to at least twice the number of nodes. If you subsequently increase the number of node slots, performance can suffer because the journal will no longer be contiguously laid out on the outer edge of the disk platter.

-T file-system-usage-type
    Specifies the type of usage for the file system:
    datafiles     Database files are typically few in number, fully allocated, and relatively large. Such files require few metadata changes, and do not benefit from having a large journal.
    mail          Mail server files are typically many in number, and relatively small. Such files require many metadata changes, and benefit from having a large journal.
    vmstore       Virtual machine image files are typically few in number, sparsely allocated, and relatively large. Such files require a moderate number of metadata changes and a medium sized journal.
For example, create an OCFS2 volume on /dev/sdc1 labeled as myvol using all the default settings for generic usage (4 KB block and cluster size, eight node slots, a 256 MB journal, and support for default file-system features).
# mkfs.ocfs2 -L "myvol" /dev/sdc1
Create an OCFS2 volume on /dev/sdd2 labeled as dbvol for use with database files. In this case, the cluster size is set to 128 KB and the journal size to 32 MB.
# mkfs.ocfs2 -L "dbvol" -T datafiles /dev/sdd2
Create an OCFS2 volume on /dev/sde1 with a 16 KB cluster size, a 128 MB journal, 16 node slots, and support enabled for all features except refcount trees.
# mkfs.ocfs2 -C 16K -J size=128M -N 16 --fs-feature-level=max-features \
  --fs-features=norefcount /dev/sde1
Note
Do not create an OCFS2 volume on an LVM logical volume. LVM is not cluster-aware.
You cannot change the block and cluster size of an OCFS2 volume after it has been created. You can use the tunefs.ocfs2 command to modify other settings for the file system, with certain restrictions. For more information, see the tunefs.ocfs2(8) manual page.
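
If you need to adjust a volume later, tunefs.ocfs2 can change some settings in place. A sketch (the new label and slot count are arbitrary examples):

# tunefs.ocfs2 -L "newlabel" /dev/sdd1
# tunefs.ocfs2 -N 6 /dev/sdd1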

 Mounting OCFS2 Volumes
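
The shared LUN exported earlier (seen as /dev/sdd on the initiators) still needs a partition and an OCFS2 file system before it can be mounted. A sketch, run once from a single node (the partitioning commands and the label almacen1 are only one possible choice):

# parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 100%
# partprobe /dev/sdd
# mkfs.ocfs2 -L "almacen1" /dev/sdd1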

On each node, create the mount point and mount the volume:

# mkdir /almacen1
# mount -t ocfs2 /dev/sdd1 /almacen1

As shown in the following example, specify the _netdev option in /etc/fstab if you want the system to mount an OCFS2 volume at boot time after networking is started, and to unmount the file system before networking is stopped.
myocfs2vol  /dbvol1  ocfs2     _netdev,defaults  0 0
Note
The file system will not mount unless you have enabled the o2cb and ocfs2 services to start after networking is started. See “Configuring the Cluster Stack” above.
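
For example, an /etc/fstab entry for the volume created above might look like the following sketch (mounting by LABEL assumes blkid can read the OCFS2 label; otherwise use UUID=, as recommended for iSCSI devices earlier):

LABEL=almacen1     /almacen1     ocfs2     _netdev,defaults  0 0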


