
Red Hat – Clustering and Storage Management – Course Objectives – part 2

Post 1 – http://www.guldmyr.com/blog/red-hat-clustering-and-storage-management-course-objectives/ Where I checked out udev, multipathing, iscsi, LVM and xfs.

This post is about using luci/ricci to get a Red Hat cluster working, though not on a RHEL machine because sadly I do not have one available for practice purposes. So CentOS 6.4 it is, using OpenStack for virtualization.

Topology: Four hosts on all three networks, -a, -b and internal. Three cluster nodes and one management node.

Get the basic cluster going:

  • image four identical nodes
  • ssh-key is distributed
  • /etc/hosts file has all hosts, IPs and networks
    • network interfaces are configured
    • set a gateway in /etc/sysconfig/network
  • firewall
    • all traffic allowed from -a and -b networks
    • at a minimum allow traffic from the network that the hostname corresponds to that you enter in luci
  • dns (PEERDNS=no is good with several dhcp interfaces)
  • timesync with ntpd
  • luci installed on mgmt-node # luci is the web GUI
  • ricci installed on all cluster nodes # ricci is the agent that luci talks to on each node (see the sketch after this list)
    • password set for user ricci on cluster nodes
  • create cluster in luci
    • multicast perhaps doesn’t work so well in OpenStack?
    • on cluster nodes this runs “yum -y install cman rgmanager lvm2-cluster sg3_utils gfs2-utils” if shared storage is selected, probably less if not.
  • fencing is really important, how to do it in openstack would require a bit of work though. Not as easy as with kvm/xvm to send a destroy domain message.
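
A rough sketch of the luci/ricci part on CentOS 6.4 (the hostnames are just the ones from my topology):

# on the management node
yum -y install luci
service luci start && chkconfig luci on    # the web GUI ends up on https://mgmt-node:8084
# on every cluster node
yum -y install ricci
passwd ricci                               # luci asks for this password when you add the node
service ricci start && chkconfig ricci on  # ricci listens on TCP port 11111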

Tests:

  • Update and distribute cluster.conf (see the sketch after this list)
  • Have a service run on a node on the cluster (doesn’t have to have a shared storage for this).
  • Commands:
    • clustat
    • cman_tool
    • rg_test test /etc/cluster/cluster.conf start service name-of-service
    • ccs_config_validate
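
A rough sketch of the update-and-distribute cycle (the service name is just an example):

vi /etc/cluster/cluster.conf     # make the change and bump config_version by one
ccs_config_validate              # sanity check the new cluster.conf
cman_tool version -r             # push the new config to the other nodes (goes via ricci)
clustat                          # check node and service status
rg_test test /etc/cluster/cluster.conf start service name-of-service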

 

Share an iSCSI target between all nodes:

  • Using management node to share the iSCSI LUN.
  • tgtd, multipath
  • clvmd running on all nodes
  • lvmconf --enable-cluster – make sure locking is set correctly (it sets locking_type = 3 in /etc/lvm/lvm.conf)
  • create vg with clustering (see the sketch after this list)
  • partprobe; multipath -r # do this often
  • vgs/lvs and make sure all nodes see the clustered lv
  • minimum GFS2 filesystem is around 128M – you didn’t use all the vg right? =)
    • for testing/small cluster lowering the journal size is goodness
  • mount!
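
A rough sketch of the shared-storage part, assuming the multipathed iSCSI LUN shows up as /dev/mapper/mpathb and the cluster is called mycluster in cluster.conf (both names are just examples):

pvcreate /dev/mapper/mpathb
vgcreate -cy vg_cluster /dev/mapper/mpathb    # -cy marks the VG as clustered, needs clvmd running
lvcreate -L 512M -n lv_gfs2 vg_cluster
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2data -j 3 -J 32 /dev/vg_cluster/lv_gfs2
mount /dev/vg_cluster/lv_gfs2 /mnt

-t must be clustername:fsname, -j is one journal per node and -J shrinks the journals so a small test LV is enough.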

 

Red Hat – Clustering and Storage Management – Course Objectives

Attending “Red Hat Enterprise Clustering and Storage Management” in August. Quite a few of these technologies I haven’t touched upon before so probably best to go through them before the course.

Initially I wonder how many of these are Red Hat specific, or how many of these I can accomplish by using the free clones such as CentOS or Scientific Linux. We’ll see :) At least a lot of Red Hat’s guides will include their Storage Server.

I used the course content summary as a template for this post; my notes are added under each topic below.

For future questions and trolls: this is not a how-to for lazy people who just want to copy and paste. There are plenty of other sites for that. This is just the basics and it might have some pointers so that I know which are the basic steps and names/commands for each task. That way I hope it’s possible to figure out how to use the commands and such by RTFM.

 

 

Course content summary:

Clusters and storage

Get an overview of storage and cluster technologies.

iSCSI configuration

Set up and manage iSCSI.

Step 1: Setup a server that can present iSCSI LUNs. A target.

  1. CentOS 6.4 – minimal. Set up basic stuff like networking, user account, yum update, ntp/time sync then make a clone of the VM.
  2. Install some useful software like: yum install ntp parted man
  3. Add a new disk to the VM

Step 2: Make nodes for the cluster.

  1. yum install iscsi-initiator-utils

Step 3: Setup an iSCSI target on the iSCSI server.

http://www.server-world.info/en/note?os=CentOS_6&p=iscsi

  1. yum install scsi-target-utils
  2. allow port 3260
  3. edit /etc/tgt/targets.conf (see the sketch below)
  4. if you comment out the IP range and authentication, it’s a free-for-all
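
A minimal /etc/tgt/targets.conf sketch (the IQN, backing device and network are just examples):

<target iqn.2013-08.com.example:target1>
    backing-store /dev/sdb                # the extra disk added to the VM
    initiator-address 192.168.0.0/24      # comment this (and any auth lines) out and anyone can log in
</target>

service tgtd start && chkconfig tgtd on
tgt-admin --show                          # verify that the target and its LUNs are exported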

http://www.server-world.info/en/note?os=CentOS_6&p=iscsi&f=2

Step 4: Login to the target from at least two nodes by running ‘iscsiadm’ commands.
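
Roughly like this (the portal IP and IQN are just examples; reuse the ones from step 3):

iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2013-08.com.example:target1 -p 192.168.0.10 --login
iscsiadm -m session                       # verify the session; the LUN appears as a new /dev/sd* disk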

Next step would be to put an appropriate file system on the LUN.

UDEV

Learn basic manipulation and creation of udev rules.

http://www.reactivated.net/writing_udev_rules.html is an old link but just change the commands to “udevadm” instead of “udev*” and at least the sections I read worked the same.

udevadm info -a -n /dev/sdb

Above command helps you find properties which you can build rules from. Only use properties from one parent.

I have a USB key that I can pass through to my VM in VirtualBox, without any modifications it pops up as /dev/sdc.

By looking in the output of the above command I can create /etc/udev/rules.d/10-usb.rules that contains:

SUBSYSTEMS=="usb", ATTRS{serial}=="001CC0EC3450BB40E71401C9", NAME="my_usb_disk"

After “removing” the USB disk from the VM and adding it again the disk (and also all partitions!) will be called /dev/my_usb_disk. This is bad.

By using SYMLINK+="my_usb_disk" instead of NAME="my_usb_disk" all the /dev/sdc devices are kept and /dev/my_usb_disk points to /dev/sdc5. And on next boot it pointed to sdc6 (and before that sg3 and sdc7..). This is also bad.

To make one specific partition with a specific size be symlinked to /dev/my_usb_disk I could set this rule:

SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk"

You could do:

KERNEL=="sd*", SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk%n"

Which will create /dev/my_usb_disk5!

This would perhaps be acceptable, but if you ever want to re-partition the disk then you’d have to change the udev rules accordingly.

If you want to create symlinks for each partition (based on it being a usb, a disk and have the USB with specified serial number):

SUBSYSTEMS=="usb", KERNEL=="sd*", ATTRS{serial}=="001CC0EC3450BB40E71401C9", SYMLINK+="my_usb_disk%n"

These things can be useful if you have several USB disks but you always want the disk to be called /dev/my_usb_disk and not sometimes /dev/sdb and sometimes /dev/sdc.

For testing one can use "udevadm test /sys/class/block/sdc"
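
To apply a changed rule without re-adding the device, something like this should do it:

udevadm control --reload-rules            # re-read /etc/udev/rules.d/
udevadm trigger --subsystem-match=block   # replay block device events so the symlinks get created
ls -l /dev/my_usb_disk*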

Multipathing

Combine multiple paths to SAN devices into one fault-tolerant virtual device.

Ah, this one I’ve been in touch with before with fibre channel; it also works with iSCSI.
multipath is the command; be wary of the devices/multipaths sections versus the defaults section in multipath.conf.
multipathd is used when there actually are multiple paths to a LUN (the target is perhaps reachable over two IP addresses/networks), but it can also be used just to give a disk a friendly alias based on its WWID.

Some good commands:

service multipathd status
yum provides */multipath.conf # device-mapper-multipath is the package. 
multipath -ll

Copy the example multipath.conf (from /usr/share/doc/device-mapper-multipath-*/) to /etc, reload multipathd and run multipath -ll to see what it does.
After that the Fun begins!
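
A minimal multipath.conf sketch that gives a LUN a friendly alias (the WWID is just an example, take the real one from multipath -ll or scsi_id):

defaults {
        user_friendly_names yes
}
multipaths {
        multipath {
                wwid  1IET_00010001
                alias clusterdisk
        }
}

service multipathd reload
multipath -r                              # rebuild the maps
multipath -ll                             # the LUN should now show up as /dev/mapper/clusterdisk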

 

Red Hat high-availability overview

Learn the architecture and component technologies in the Red Hat® High Availability Add-On.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/index.html

Quorum

Understand quorum and quorum calculations.

Fencing

Understand Fencing and fencing configuration.

Resources and resource groups

Understand rgmanager and the configuration of resources and resource groups.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/ch.gfscs.cluster-overview-rgmanager.html

Advanced resource management

Understand resource dependencies and complex resources.

Two-node cluster issues

Understand the use and limitations of 2-node clusters.

http://en.wikipedia.org/wiki/Split-brain_(computing)
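
For reference, a two-node cman cluster needs the special two_node flag in cluster.conf, otherwise losing one node also loses quorum. A minimal sketch:

<cman two_node="1" expected_votes="1"/>

With that set each node is quorate on its own, so fencing is what stops a split-brain from letting both nodes run the services.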

LVM management

Review LVM commands and Clustered LVM (clvm).

Create Normal LVM and make a snapshot:

Tutonics has a good “ubuntu” guide for LVMs, but at least the snapshot part works the same.

  1. yum install lvm2
  2. parted /dev/vda # create two primary large physical partitions. With a CentOS64 VM in openstack I had to reboot after this step.
  3. pvcreate /dev/vda3; pvcreate /dev/vda4
  4. vgcreate VG1 /dev/vda3 /dev/vda4
  5. lvcreate -L 1G VG1 # create a smaller logical volume (to give room for snapshot volume)
  6. mkfs.ext4 /dev/VG1/lvol0
  7. mount /dev/VG1/lvol0 /mnt
  8. date >> /mnt/datehere
  9. lvcreate -L 1G -s -n snap_lvol0 /dev/VG1/lvol0
  10. date >> /mnt/datehere
  11. mkdir /snapmount
  12. mount /dev/VG1/snap_lvol0 /snapmount # mount the snapshot :)
  13. diff /snapmount/datehere /mnt/datehere

Revert a Logical Volume to the state of the snapshot:

  1. umount /mnt /snapmount
  2. lvconvert --merge /dev/VG1/snap_lvol0 # this also removes the snapshot under /dev/VG1/
  3. mount /dev/VG1/lvol0 /mnt
  4. cat /mnt/datehere

XFS

Explore the Features of the XFS® file system and tools required for creating, maintaining, and troubleshooting.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsmain.html

yum provides */mkfs.xfs

yum install quota

XFS Quotas:

mount with uquota for enforced user quotas, or with uqnoenforce for accounting without enforcement.
use xfs_quota -x to set quotas
help limit
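
For example, something like this should get a quota-enabled XFS filesystem mounted (the device and mount point are just examples):

mkfs.xfs /dev/vdb1                        # any spare partition or LV
mount -o uquota /dev/vdb1 /home
xfs_quota -x -c "report -h" /home         # show current usage and limits per user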

To illustrate the quotas: set a limit for user “user”:

xfs_quota -x -c "limit bsoft=100m bhard=110m user" /home

Then create two 50M files. While writing the third file the cp command will stop when it hits the hard limit:

[user@rhce3 home]$ cp 50M 50M_2
cp: writing `50M_2': Disk quota exceeded
[user@rhce3 home]$ ls -l
total 112636
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M_1
-rw-rw-r-- 1 user user 10477568 Aug 15 09:29 50M_2

Red Hat Storage

Work with Gluster to create and maintain a scale-out storage solution.

http://chauhan-rhce.blogspot.fi/2013/04/gluster-file-system-configuration-steps.html
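
The basic Gluster CLI goes roughly like this (hostnames, brick path and volume name are just examples; see the link above for a full walkthrough):

gluster peer probe server2                # run on server1, forms the trusted pool
gluster volume create gv0 replica 2 server1:/bricks/gv0 server2:/bricks/gv0
gluster volume start gv0
mount -t glusterfs server1:/gv0 /mnt      # native client mount, works from any node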

Updates to the Red Hat Enterprise Clustering and Storage Management course

Comprehensive review

Set up high-availability services and storage.

SAN Primer – Introduction to Data Storage

You may have heard about this storage or SAN stuff, but what is it? Is it complicated and cool? Yes. Now it doesn’t have to be complicated, but it sure can be sometimes.
This post is just a brief primer/introduction to storage and what it entails. In case maybe you got a job interview or just would like to know a little bit more about it.

I’ll update this post as I go, last update 2012-07-13 – added some books and free pdfs and links.

What is a SAN?

‘Storage Area Network’ – or storage network.
Generally it doesn’t have to be a ‘network’; it could just be directly connected equipment, peer to peer. But what it always entails is shared storage, most often disk or tape.

What is in a SAN?
When it comes to disk storage on fibre channel there are a few standard components: an FC HBA in the server, SFPs and cables, a SAN switch, more SFPs and cables, an FC port in the disk array controller, and then something behind the controller that connects the disks.

You can connect the FC HBA directly to the disk array.

What is storage?
It’s somewhere you can store data. Most common today would be: hard drives, flash drives (SSD), magnetic media (tape) and optical media (DVD/Blu-ray/CD). In a computer you cannot fit hundreds of hard drives, but sometimes an application requires lots and lots of data (for example CAD drawings or video editing). This is where a SAN comes in: with just a fibre channel card, for example, you can give a server access to lots of storage.

How do you do it?
If you want to give a server disk space from a fibre channel SAN this is what you do:

  1. Fulfil the hardware requirements (fibre channel HBA + drivers and multipath software, SAN switch, disk array and SFPs + cables)
  2. On the SAN-switch create a zone with the disk array’s and the FC HBA’s domain id, port id or port wwn. It’s possible to do it without zones, but they are good for fault isolation.
  3. On the disk array you should now see the server/host, create a disk and map/present it to the host.
  4. On the host you most likely need to do a rescan/reinitialize of the FC bus (see the sketch after this list).
  5. After the server sees the LUN it will have a new hard disk available, you can use your normal partitioning/format/filesystem tools to create some usable space.
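
On a Linux host, step 4 typically looks something like this (the host number varies, check /sys/class/fc_host/):

echo "1" > /sys/class/fc_host/host1/issue_lip     # re-initialize the link
echo "- - -" > /sys/class/scsi_host/host1/scan    # rescan all channels/targets/LUNs on that HBA
multipath -ll                                     # if multipathing is in place, the new LUN shows up here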

Can I use the same disk on two servers?
This is a pretty common question. The answer is sometimes, and the ‘sometimes’ depends on which file system you are using: it needs to support more than one host accessing it at the same time. NTFS does not support this, and if you try it anyway you’ll corrupt the file system. For Windows you need to look into CSV (Cluster Shared Volumes) or other networked file systems like NFS/CIFS.

What is the difference between fibre channel and iscsi?

FC sends SCSI commands over Fibre Channel (which, despite the name, does not always run over fibre/optical cables),
while iSCSI sends SCSI commands over TCP/IP.
FC is a whole network technology of its own, while iSCSI runs on top of an existing one: TCP/IP.

Some literature:

Both the IBM and the HP one are quite lengthy. The HP one has a lot of HP specific guides, best practices and supported configurations. The FC 101 by Brocade actually goes quite deep into the theory of the FC protocol.

Next Generation EVA – P6000

On The Register there was recently a post saying that maybe HP will announce the new EVAs – the P6000 – in Vegas in June.

I just opened a discussion on the ITRC forum; maybe we’ll see some more posts there. I myself am hoping for some thin provisioning, or why not SAS back-ends instead of the quite notorious FC loops. How about native iSCSI or FCoE?

*** 2011-05-05 Update, HP has now officially announced some information about it. For example that it will have the SAS-backend! Whoop! No more loop problems :)

*** 2011-05-05 Another one! Now it came – Thin Provisioning!
http://www.theregister.co.uk/2011/05/05/hp_p6000_data/