Monthly Archives: August 2013

Vyatta: a router/vpn/firewall in a VM

Brocade has a beta exam up for BCVRE – Brocade Certified vRouter Engineer – which covers the Vyatta software from the company of the same name that Brocade bought last year.

There is a free open source core edition. Download it from http://vyatta.org/downloads (no, you don’t have to register). The evaluation/subscriber version adds the API and web GUI; I’ll probably check those out closer to the exam date.

I grabbed VC6.6 – Virtualization ISO. Run it in a VM and assign a 5GB disk (the install only requires 1GB, or you could just run it off the ISO, but then it doesn’t keep state between reboots) and 1GB RAM. Two NICs: one NAT and one private. To get more acquainted with it you’ll likely have to do a bit more configuration on the hypervisor side, such as turning off dhcpd in your virtual networks.

To install it to disk: hit “install system” at the CLI after it’s booted.

More documentation: http://docs.vyatta.com/current/wwhelp/wwhimpl/js/html/wwhelp.htm – it describes, for example, how to get SSH management working ( set service ssh ).
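A minimal configuration session to enable SSH might look like this (a sketch based on the docs, not tested against every version):

```
vyatta@vyatta:~$ configure
vyatta@vyatta# set service ssh
vyatta@vyatta# commit          # apply the change to the running config
vyatta@vyatta# save            # persist it across reboots
vyatta@vyatta# exit
```

The configure/commit/save workflow is the same for all settings, which is handy to internalize early.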

The system is basically Debian with a more recent kernel (6.6 ships 3.3) and a shell that makes it more switch-like. It actually uses bash completion to achieve this – check out /etc/bash_completion.d/vyatta-*

To remove a setting use “delete” (comparable to “no” in other CLIs). There is a web interface, but only for subscribers. The core version does allow SNMP though, if you want to use that :)

What to do with vyatta? A bunch of tutorials are here: http://www.vyatta.org/documentation/tips-tricks

  • NAT
  • VPN (for example connect private cloud <-> Amazon VPN)
  • Firewall
  • Routing (OSPF, BGP, etc)
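As a taste of the CLI, a source NAT setup for the first bullet could be sketched like this (the interface name and network are assumptions for illustration):

```
# in configure mode; rule numbers order the NAT rules
set nat source rule 10 outbound-interface eth0
set nat source rule 10 source address 192.168.1.0/24
set nat source rule 10 translation address masquerade
commit
save
```

This masquerades the private network out of eth0, much like a home router does.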

But no SDN stuff (separating the data and control planes). It looks like it’s not possible to modify the flow table of a switch via Vyatta. This is a software router/VPN/firewall with some extras added to it.

Red Hat – Clustering and Storage Management – Course Objectives – part 2

Post 1 – http://www.guldmyr.com/blog/red-hat-clustering-and-storage-management-course-objectives/ Where I checked out udev, multipathing, iscsi, LVM and xfs.

This post is about using luci/ricci to get a Red Hat cluster working – but not on a RHEL machine, because sadly I do not have one available for practice purposes. So CentOS 6.4 it is, using OpenStack for virtualization.

Topology: Four hosts on all three networks, -a, -b and internal. Three cluster nodes and one management node.

Get the basic cluster going:

  • image four identical nodes
  • ssh-key is distributed
  • /etc/hosts file has all hosts, IPs and networks
    • network interfaces are configured
    • set a gateway in /etc/sysconfig/network
  • firewall
    • all traffic allowed from -a and -b networks
    • at a minimum, allow traffic from the network that the hostname you enter in luci corresponds to
  • dns (PEERDNS=no is good with several dhcp interfaces)
  • timesync with ntpd
  • luci installed on mgmt-node # luci is the web GUI
  • ricci installed on all cluster nodes # ricci is the agent service that talks with corosync
    • password set for user ricci on cluster nodes
  • create cluster in luci
    • multicast perhaps doesn’t work so well in OpenStack?
    • on cluster nodes this runs “yum -y install cman rgmanager lvm2-cluster sg3_utils gfs2-utils” if shared storage is selected, probably less if not.
  • fencing is really important, how to do it in openstack would require a bit of work though. Not as easy as with kvm/xvm to send a destroy domain message.
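The luci/ricci part of the checklist above might be sketched like this (a sketch of the standard RHEL 6 workflow; the hostname in the URL is an assumption for illustration):

```
# on the management node
yum -y install luci
service luci start            # web GUI comes up on https://mgmt-node:8084

# on each cluster node
yum -y install ricci
passwd ricci                  # luci authenticates to ricci with this password
chkconfig ricci on
service ricci start
```

Once ricci is listening on every node, luci can push cluster.conf out and install the cluster packages for you.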

Tests:

  • Update and distribute cluster.conf
  • Have a service run on a node in the cluster (it doesn’t need shared storage for this).
  • Commands:
    • clustat
    • cman_tool
    • rg_test test /etc/cluster/cluster.conf start service name-of-service
    • ccs_config_validate


Share an iSCSI target between all nodes:

  • Using management node to share the iSCSI LUN.
  • tgtd, multipath
  • clvmd running on all nodes
  • lvmconf – make sure locking is set correctly
  • create vg with clustering
  • partprobe; multipath -r # do this often
  • vgs/lvs and make sure all nodes see the clustered lv
  • minimum GFS2 filesystem size is around 128M – you didn’t use the whole vg, right? =)
    • for testing/small cluster lowering the journal size is goodness
  • mount!
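Putting the storage bullets above together, creating and mounting a small GFS2 filesystem could look roughly like this (cluster name, device, volume names, journal count, and sizes are assumptions for illustration):

```
lvmconf --enable-cluster            # sets locking_type = 3 in lvm.conf
service clvmd start
vgcreate -cy clustervg /dev/mapper/mpatha   # -cy marks the vg as clustered
lvcreate -L 512M -n gfslv clustervg
# -t is <clustername>:<fsname>, -j is one journal per node that will mount it
mkfs.gfs2 -p lock_dlm -t mycluster:gfs1 -j 3 -J 32 /dev/clustervg/gfslv
mount /dev/clustervg/gfslv /mnt
```

The -J 32 keeps the journals small, which is what makes such a tiny filesystem fit.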


Red Hat – Clustering and Storage Management – Course Objectives

Attending “Red Hat Enterprise Clustering and Storage Management” in August. Quite a few of these technologies I haven’t touched upon before so probably best to go through them before the course.

Initially I wonder how many of these are Red Hat specific, or how many of these I can accomplish by using the free clones such as CentOS or Scientific Linux. We’ll see :) At least a lot of Red Hat’s guides will include their Storage Server.

I used the course content summary as a template for this post; my notes are made within it, below.

For future questions and trolls: this is not a how-to for lazy people who just want to copy and paste. There are plenty of other sites for that. This is just the basics, with some pointers, so that I know the basic steps and names/commands for each task. That way I hope it’s possible to figure out how to use the commands by RTFM.


Course content summary :

Clusters and storage

Get an overview of storage and cluster technologies.

ISCSI configuration

Set up and manage iSCSI.

Step 1: Set up a server that can present iSCSI LUNs – a target.

  1. CentOS 6.4 – minimal. Set up basic stuff like networking, user account, yum update, ntp/time sync then make a clone of the VM.
  2. Install some useful software like: yum install ntp parted man
  3. Add a new disk to the VM

Step 2: Make nodes for the cluster.

  1. yum install iscsi-initiator-utils

Step 3: Setup an iSCSI target on the iSCSI server.

http://www.server-world.info/en/note?os=CentOS_6&p=iscsi

  1. yum install scsi-target-utils
  2. allow port 3260
  3. edit /etc/tgt/target.conf
  4. if you comment out the IP range and authentication, it’s free-for-all

http://www.server-world.info/en/note?os=CentOS_6&p=iscsi&f=2

Step 4: Login to the target from at least two nodes by running ‘iscsiadm’ commands.

Next step would be to put an appropriate file system on the LUN.
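The initiator side (Step 4) might be sketched like this (the target IP and IQN are assumptions for illustration):

```
# discover targets presented by the tgtd server
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
# log in to the discovered target
iscsiadm -m node -T iqn.2013-08.lab.example:storage.lun1 -p 192.168.0.10 --login
# the new LUN shows up as a regular block device
fdisk -l
```

Run the login on at least two nodes and both will see the same LUN – which is exactly why clustered locking matters later.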

UDEV

Learn basic manipulation and creation of udev rules.

http://www.reactivated.net/writing_udev_rules.html is an old link but just change the commands to “udevadm” instead of “udev*” and at least the sections I read worked the same.

udevadm info -a -n /dev/sdb

Above command helps you find properties which you can build rules from. Only use properties from one parent.

I have a USB key that I can pass through to my VM in VirtualBox, without any modifications it pops up as /dev/sdc.

By looking in the output of the above command I can create /etc/udev/rules.d/10-usb.rules that contains:

SUBSYSTEMS=="usb", ATTRS{serial}=="001CC0EC3450BB40E71401C9", NAME="my_usb_disk"

After “removing” the USB disk from the VM and adding it again the disk (and also all partitions!) will be called /dev/my_usb_disk. This is bad.

By using SYMLINK+="my_usb_disk" instead of NAME="my_usb_disk" all the /dev/sdc devices are kept and /dev/my_usb_disk points to /dev/sdc5. And on the next boot it pointed to sdc6 (and before that sg3 and sdc7..). This is also bad.

To make one specific partition with a specific size be symlinked to /dev/my_usb_disk I could set this rule:

SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk"

You could do:

KERNEL=="sd*", SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk%n"

Which will create /dev/my_usb_disk5 !

This would perhaps be acceptable, but if you ever want to re-partition the disk then you’d have to change the udev rules accordingly.

If you want to create symlinks for each partition (based on it being a usb, a disk and have the USB with specified serial number):

SUBSYSTEMS=="usb", KERNEL=="sd*", ATTRS{serial}=="001CC0EC3450BB40E71401C9", SYMLINK+="my_usb_disk%n"

These things can be useful if you have several USB disks but you always want the disk to be called /dev/my_usb_disk and not sometimes /dev/sdb and sometimes /dev/sdc.

For testing one can use “udevadm test /sys/class/block/sdc”

Multipathing

Combine multiple paths to SAN devices into one fault-tolerant virtual device.

Ah, this one I’ve been in touch with before with Fibre Channel; it also works with iSCSI.
multipath is the command – be wary of the devices/multipaths sections vs the default settings in multipath.conf.
multipathd can be used when there are actually multiple paths to a LUN (the target is perhaps available on two IP addresses/networks), but it can also be used to set a user_friendly name for a disk, based on its WWID.

Some good commands:

service multipathd status
yum provides */multipath.conf # device-mapper-multipath is the package. 
multipath -ll

Copy the default multipath.conf into /etc, reload, and run multipath -ll to see what it does.
After that the fun begins!

 

Red Hat high-availability overview

Learn the architecture and component technologies in the Red Hat® High Availability Add-On.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/index.html

Quorum

Understand quorum and quorum calculations.
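As a sketch of the arithmetic: with one vote per node (the default), a CMAN cluster is quorate when more than half of the expected votes are present.

```shell
# Quorum needed = floor(expected_votes / 2) + 1, assuming one vote per node
expected_votes=3
quorum=$(( expected_votes / 2 + 1 ))
echo "${expected_votes}-node cluster: quorate with ${quorum} or more votes"
```

So a 3-node cluster survives one node failure, while a 4-node cluster needs 3 votes and also only tolerates one.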

Fencing

Understand Fencing and fencing configuration.

Resources and resource groups

Understand rgmanager and the configuration of resources and resource groups.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/ch.gfscs.cluster-overview-rgmanager.html

Advanced resource management

Understand resource dependencies and complex resources.

Two-node cluster issues

Understand the use and limitations of 2-node clusters.
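With two nodes the majority rule degenerates (each node alone is half, not a majority), so cman has a special mode for it; in cluster.conf it looks like this:

```
<!-- in /etc/cluster/cluster.conf: either node alone keeps quorum,
     which is why working fencing becomes absolutely critical -->
<cman two_node="1" expected_votes="1"/>
```

Without fencing, this setting is an open invitation to the split-brain scenario linked below.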

http://en.wikipedia.org/wiki/Split-brain_(computing)

LVM management

Review LVM commands and Clustered LVM (clvm).

Create Normal LVM and make a snapshot:

Tutonics has a good “ubuntu” guide for LVMs, but at least the snapshot part works the same.

  1. yum install lvm2
  2. parted /dev/vda # create two large primary partitions. With a CentOS 6.4 VM in OpenStack I had to reboot after this step.
  3. pvcreate /dev/vda3 pvcreate /dev/vda4
  4. vgcreate VG1 /dev/vda3 /dev/vda4
  5. lvcreate -L 1G VG1 # create a smaller logical volume (to give room for snapshot volume)
  6. mkfs.ext4 /dev/VG1/lvol0
  7. mount /dev/VG1/lvol0 /mnt
  8. date >> /mnt/datehere
  9. lvcreate -L 1G -s -n snap_lvol0 /dev/VG1/lvol0
  10. date >> /mnt/datehere
  11. mkdir /snapmount
  12. mount /dev/VG1/snap_lvol0 /snapmount # mount the snapshot :)
  13. diff /snapmount/datehere /mnt/datehere

Revert a Logical Volume to the state of the snapshot:

  1. umount /mnt /snapmount
  2. lvconvert --merge /dev/VG1/snap_lvol0 # this also removes the snapshot under /dev/VG1/
  3. mount /mnt
  4. cat /mnt/datehere

XFS

Explore the Features of the XFS® file system and tools required for creating, maintaining, and troubleshooting.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsmain.html

yum provides */mkfs.xfs

yum install quota

XFS Quotas:

Mount with uquota for user quotas, or with uqnoenforce for soft (non-enforced) quotas.
Use xfs_quota -x to set quotas.
“help limit” inside xfs_quota shows the syntax.

To illustrate the quotas: set a limit for user “user”:

xfs_quota -x -c "limit bsoft=100m bhard=110m user" /home

Then create two 50M files. While writing the 3rd file the cp command will halt when it is at the hard limit:

[user@rhce3 home]$ cp 50M 50M_2
cp: writing `50M_2': Disk quota exceeded
[user@rhce3 home]$ ls -l
total 112636
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M_1
-rw-rw-r-- 1 user user 10477568 Aug 15 09:29 50M_2

Red Hat Storage

Work with Gluster to create and maintain a scale-out storage solution.

http://chauhan-rhce.blogspot.fi/2013/04/gluster-file-system-configuration-steps.html
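A replicated Gluster volume could be sketched like this (hostnames and brick paths are assumptions for illustration):

```
# on server1, with glusterd running on both servers
gluster peer probe server2
gluster volume create vol1 replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start vol1

# on a client
mount -t glusterfs server1:/vol1 /mnt
```

With replica 2, every file lands on both bricks, so losing one server doesn’t lose data – the scale-out analogue of RAID1.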

Updates to the Red Hat Enterprise Clustering and Storage Management course

Comprehensive review

Set up high-availability services and storage.