Tag Archives: Linux

Lustre 2.5 + CentOS6 Test in OpenStack

Reason: testing Lustre 2.5 from a clean CentOS 6.5 install in OpenStack.

Three VMs: two servers (one MDS, one OSS) and one client, all running CentOS 6.5. An open internal Ethernet network for the Lustre traffic (don’t forget firewalls). Yum updated to the latest kernel. Two volumes presented to lustreserver and lustreoss for the MDT and OST, both at /dev/vdc. Hostnames set. /etc/hosts updated with three IPs: lustreserver, lustreoss and lustreclient.
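For reference, the /etc/hosts layout could look like this (the addresses here are made up for illustration; use whatever your internal network hands out):

```
192.168.100.10  lustreserver
192.168.100.11  lustreoss
192.168.100.12  lustreclient
```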

With 2.6.32-431.17.1.el6.x86_64 there are currently some issues building the server components. You need to use the latest b2_5 branch, so the instructions are at https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821

Server side

MDT/OST: install e2fsprogs and reboot after yum update (so you are running the latest kernel).

yum localinstall all files from: http://downloads.whamcloud.com/public/e2fsprogs/1.42.9.wc1/el6/RPMS/x86_64/

Next is to rebuild lustre kernels to work with the kernel you are running and the one you have installed for next boot: https://wiki.hpdd.intel.com/display/PUB/Rebuilding+the+Lustre-client+rpms+for+a+new+kernel

RPMS are here: http://downloads.whamcloud.com/public/lustre/latest-feature-release/el6/server/SRPMS/

For rebuilding these are also needed:

yum -y install kernel-devel* kernel-debug* rpm-build make libselinux-devel gcc

basically:

  • git clone -b b2_5 git://git.whamcloud.com/fs/lustre-release.git
  • autogen
  • install kernel.src from redhat (puts tar.gz in /root/rpmbuild/SOURCES/)
  • if rpmbuilding as user build, then copy the files from /root/rpmbuild into /home/build/rpmbuild
  • rebuilding the kernel requires quite a bit of disk space; as I only had 10G for /, I made $HOME/kernel and $HOME/lustre-release symlinks pointing to a location with more space

yum -y install expect, then install the new kernel with Lustre patches plus the lustre and lustre-modules packages.

Not important?: WARNING: /lib/modules/2.6.32-431.17.1.el6.x86_64/weak-updates/kernel/fs/lustre/fsfilt_ldiskfs.ko needs unknown symbol ldiskfs_free_blocks

/sbin/new-kernel-pkg --package kernel --mkinitrd --dracut --depmod --install 2.6.32-431.17.1.el6_lustre

chkconfig lustre on

edit /etc/modprobe.d/lustre.conf and add the lnet parameters
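A minimal /etc/modprobe.d/lustre.conf sketch; the eth1 interface here is an assumption for the internal Lustre network, adjust to your setup:

```
options lnet networks=tcp0(eth1)
```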

modprobe lnet
lctl network up
lctl list_nids

creating MDT: mkfs.lustre --mdt --mgs --index=0 --fsname=wrk /dev/vdc1
mounting MDT: mkdir /mnt/MDT; mount.lustre /dev/vdc1 /mnt/MDT

creating OST: mkfs.lustre --ost --index=0 --fsname=wrk --mgsnode=lustreserver /dev/vdc1
mounting OST: mkdir /mnt/OST1; mount -t lustre /dev/vdc1 /mnt/OST1

Client Side

rpmbuild --rebuild --without servers

cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh lustre-client*

add modprobe.d/lustre.conf
modprobe lnet
lctl network up
lctl list_nids

mount.lustre lustreserver@tcp:/wrk /wrk

lfs df!

Arch Linux on a T400 – Log

Trying out Arch Linux on a Lenovo T400!

First step after getting the T400 Dual Core Centrino 2.26GHz 4GB RAM: replace the 160GB of spinning rust with a 120GB Samsung 840. Very easy to take out the old disk and put the new SSD in the carrier.

https://wiki.archlinux.org/index.php/Lenovo_ThinkPad_T400 has some details.

https://wiki.archlinux.org/index.php/Installation_Guide and the Beginner’s Guide are quite helpful.

To make it easy, the partition layout I created with parted was:

mklabel: MSDOS
mkpart: pri, 0%, 300M /boot ext4 – flag bootable
mkpart: pri, 300M, 100%, /, ext4
No swap!

Install base and some important packages to disk after mkfs and mounting and setting up mirrors:

pacstrap /mnt base dialog iw wpa_supplicant grub parted sudo alsa-utils vim ttf-dejavu openssh screen

Beginners’_Guide had sufficient instructions for setting up the boot loader, except the following was needed or grub-mkconfig would fail with a syntax error:

echo "GRUB_DISABLE_SUBMENU=y" >> /etc/default/grub

After successful boot time to get a graphical display up and running :)

Beginner’s guide to the rescue: Beginners’_Guide#Graphical_User_Interface

Lightdm was easy to set up and worked right out of the box, which GDM did not. DWM was easy too; I think I can get used to the ABS/makepkg stuff. Nicer than compiling it and copying the binary around.

The Windows key is Mod4Mask in config.h for DWM.
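In dwm’s config.h the modifier is the MODKEY macro; switching to the Windows/Super key is a one-line change (Mod1Mask is the stock Alt setting):

```c
/* config.h: use the Windows (Super) key instead of Alt as the dwm modifier */
#define MODKEY Mod4Mask
```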

Packages and getting chromium-pepper-flash working:

  1. Install all the dependencies for Aura
  2. Download aura and makepkg -i install it
  3. aura -A chromium-pepper-flash

In alsamixer, pressing “m” mutes/unmutes a channel :)

Red Hat – Clustering and Storage Management – Course Objectives – part 2

Post 1 – http://www.guldmyr.com/blog/red-hat-clustering-and-storage-management-course-objectives/ Where I checked out udev, multipathing, iscsi, LVM and xfs.

This post is about using luci/ricci to get a Red Hat cluster working, but not on a RHEL machine, because sadly I do not have one available for practice purposes. So CentOS 6.4 it is, using OpenStack for virtualization.

Topology: Four hosts on all three networks, -a, -b and internal. Three cluster nodes and one management node.

Get the basic cluster going:

  • image four identical nodes
  • ssh-key is distributed
  • /etc/hosts file has all hosts, IPs and networks
    • network interfaces are configured
    • set a gateway in /etc/sysconfig/network
  • firewall
    • all traffic allowed from -a and -b networks
    • at a minimum allow traffic from the network that the hostname corresponds to that you enter in luci
  • dns (PEERDNS=no is good with several dhcp interfaces)
  • timesync with ntpd
  • luci installed on mgmt-node # luci is the web GUI
  • ricci installed on all cluster nodes # this is the agent that luci talks to
    • password set for user ricci on cluster nodes
  • create cluster in luci
    • multicast perhaps doesn’t work so well in openstack ?
    • on cluster nodes this runs “yum -y install cman rgmanager lvm2-cluster sg3_utils gfs2-utils” if shared storage is selected, probably less if not.
  • fencing is really important; doing it in OpenStack would require a bit of work though. Not as easy as with kvm/xvm, where you can just send a destroy-domain message.
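The /etc/cluster/cluster.conf that luci generates is roughly shaped like this; node names and config_version are placeholders, and the empty fencedevices element is exactly the gap discussed above:

```xml
<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
    <clusternode name="node3" nodeid="3"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>
```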

Tests:

  • Update and distribute cluster.conf
  • Have a service run on a node on the cluster (doesn’t have to have a shared storage for this).
  • Commands:
    • clustat
    • cman_tool
    • rg_test test /etc/cluster/cluster.conf start service name-of-service
    • ccs_config_validate

 

Share an iSCSI target between all nodes:

  • Using management node to share the iSCSI LUN.
  • tgtd, multipath
  • clvmd running on all nodes
  • lvmconf – make sure locking is set correctly
  • create vg with clustering
  • partprobe; multipath -r # do this often
  • vgs/lvs and make sure all nodes see the clustered lv
  • minimum GFS filesystem size is around 128M, so you didn’t use all of the vg, right? =)
    • for testing/small cluster lowering the journal size is goodness
  • mount!

 

Red Hat – Clustering and Storage Management – Course Objectives

Attending “Red Hat Enterprise Clustering and Storage Management” in August. Quite a few of these technologies I haven’t touched upon before so probably best to go through them before the course.

Initially I wonder how many of these are Red Hat specific, or how many of these I can accomplish by using the free clones such as CentOS or Scientific Linux. We’ll see :) At least a lot of Red Hat’s guides will include their Storage Server.

I used the course content summary as a template for this post, my notes are made within them.. below.

For future questions and trolls: this is not a how-to for lazy people who just want to copy and paste. There are plenty of other sites for that. This is just the basics and it might have some pointers so that I know which are the basic steps and names/commands for each task. That way I hope it’s possible to figure out how to use the commands and such by RTFM.

 

 

Course content summary :

Clusters and storage

Get an overview of storage and cluster technologies.

ISCSI configuration

Set up and manage iSCSI.

Step 1: Set up a server that can present iSCSI LUNs: a target.

  1. CentOS 6.4 – minimal. Set up basic stuff like networking, user account, yum update, ntp/time sync then make a clone of the VM.
  2. Install some useful software like: yum install ntp parted man
  3. Add a new disk to the VM

Step 2: Make nodes for the cluster.

  1. yum install iscsi-initiator-utils

Step 3: Set up an iSCSI target on the iSCSI server.

http://www.server-world.info/en/note?os=CentOS_6&p=iscsi

  1. yum install scsi-target-utils
  2. allow port 3260
  3. edit /etc/tgt/targets.conf
  4. if you comment out the IP range and authentication it’s free-for-all
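A sketch of a targets.conf entry; the IQN, backing device and network range here are made up for illustration:

```
<target iqn.2013-08.com.example:storage.lun1>
    backing-store /dev/sdb
    initiator-address 192.168.100.0/24
</target>
```

As in step 4 above, leaving out initiator-address and any auth lines makes the target free-for-all.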

http://www.server-world.info/en/note?os=CentOS_6&p=iscsi&f=2

Step 4: Log in to the target from at least two nodes by running ‘iscsiadm’ commands.

Next step would be to put an appropriate file system on the LUN.

UDEV

Learn basic manipulation and creation of udev rules.

http://www.reactivated.net/writing_udev_rules.html is an old link but just change the commands to “udevadm” instead of “udev*” and at least the sections I read worked the same.

udevadm info -a -n /dev/sdb

Above command helps you find properties which you can build rules from. Only use properties from one parent.

I have a USB key that I can pass through to my VM in VirtualBox, without any modifications it pops up as /dev/sdc.

By looking in the output of the above command I can create /etc/udev/rules.d/10-usb.rules that contains:

SUBSYSTEMS=="usb", ATTRS{serial}=="001CC0EC3450BB40E71401C9", NAME="my_usb_disk"

After “removing” the USB disk from the VM and adding it again the disk (and also all partitions!) will be called /dev/my_usb_disk. This is bad.

By using SYMLINK+="my_usb_disk" instead of NAME="my_usb_disk" all the /dev/sdc devices are kept and /dev/my_usb_disk points to /dev/sdc5. And on the next boot it pointed to sdc6 (and before that sg3 and sdc7..). This is also bad.

To make one specific partition with a specific size be symlinked to /dev/my_usb_disk I could set this rule:

SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk"

You could do:

KERNEL=="sd*" SUBSYSTEM=="block", ATTR{partition}=="5", ATTR{size}=="1933312", SYMLINK+="my_usb_disk%n"

Which will create /dev/my_usb_disk5 !

This would perhaps be acceptable, but if you ever want to re-partition the disk then you’d have to change the udev rules accordingly.

If you want to create symlinks for each partition (based on it being a usb, a disk and have the USB with specified serial number):

SUBSYSTEMS=="usb", KERNEL=="sd*", ATTRS{serial}=="001CC0EC3450BB40E71401C9", SYMLINK+="my_usb_disk%n"

These things can be useful if you have several USB disks but you always want the disk to be called /dev/my_usb_disk and not sometimes /dev/sdb and sometimes /dev/sdc.

For testing one can use “udevadm test /sys/class/block/sdc”

Multipathing

Combine multiple paths to SAN devices into one fault-tolerant virtual device.

Ah, this one I’ve been in touch with before, with Fibre Channel; it also works with iSCSI.
multipath is the command; be wary of the devices/multipaths sections versus the default settings.
multipathd can be used when there actually are multiple paths to a LUN (the target is perhaps available on two IP addresses/networks), but it can also be used to set a user_friendly name for a disk, based on its wwid.

Some good commands:

service multipathd status
yum provides */multipath.conf # device-mapper-multipath is the package. 
multipath -ll

Copy the default multipath.conf into /etc, reload, and run multipath -ll to see what it does.
After that the fun begins!
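A minimal multipath.conf fragment for the user_friendly-name-by-wwid trick mentioned above; the wwid shown is an example, take the real one from multipath -ll:

```
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  36000c29fd00000000000000000000001
        alias mylun
    }
}
```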

 

Red Hat high-availability overview

Learn the architecture and component technologies in the Red Hat® High Availability Add-On.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/index.html

Quorum

Understand quorum and quorum calculations.

Fencing

Understand Fencing and fencing configuration.

Resources and resource groups

Understand rgmanager and the configuration of resources and resource groups.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/High_Availability_Add-On_Overview/ch.gfscs.cluster-overview-rgmanager.html

Advanced resource management

Understand resource dependencies and complex resources.

Two-node cluster issues

Understand the use and limitations of 2-node clusters.

http://en.wikipedia.org/wiki/Split-brain_(computing)

LVM management

Review LVM commands and Clustered LVM (clvm).

Create Normal LVM and make a snapshot:

Tutonics has a good “ubuntu” guide for LVMs, but at least the snapshot part works the same.

  1. yum install lvm2
  2. parted /dev/vda # create two primary large physical partitions. With a CentOS64 VM in openstack I had to reboot after this step.
  3. pvcreate /dev/vda3 pvcreate /dev/vda4
  4. vgcreate VG1 /dev/vda3 /dev/vda4
  5. lvcreate -L 1G VG1 # create a smaller logical volume (to give room for snapshot volume)
  6. mkfs.ext4 /dev/VG1/lvol0
  7. mount /dev/VG1/lvol0 /mnt
  8. date >> /mnt/datehere
  9. lvcreate -L 1G -s -n snap_lvol0 /dev/VG1/lvol0
  10. date >> /mnt/datehere
  11. mkdir /snapmount
  12. mount /dev/VG1/snap_lvol0 /snapmount # mount the snapshot :)
  13. diff /snapmount/datehere /mnt/datehere

Revert a Logical Volume to the state of the snapshot:

  1. umount /mnt /snapmount
  2. lvconvert --merge /dev/VG1/snap_lvol0 # this also removes the snapshot under /dev/VG1/
  3. mount /dev/VG1/lvol0 /mnt
  4. cat /mnt/datehere

XFS

Explore the Features of the XFS® file system and tools required for creating, maintaining, and troubleshooting.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsmain.html

yum provides */mkfs.xfs

yum install quota

XFS Quotas:

mount with uquota for user quotas, mount with uqnoenforce for soft quotas.
use xfs_quota -x to set quotas
help limit

To illustrate the quotas: set a limit for user “user”:

xfs_quota -x -c "limit bsoft=100m bhard=110m user" /home

Then create two 50M files. While writing the 3rd file, the cp command halts when it hits the hard limit:

[user@rhce3 home]$ cp 50M 50M_2
cp: writing `50M_2': Disk quota exceeded
[user@rhce3 home]$ ls -l
total 112636
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M
-rw-rw-r-- 1 user user 52428800 Aug 15 09:29 50M_1
-rw-rw-r-- 1 user user 10477568 Aug 15 09:29 50M_2

Red Hat Storage

Work with Gluster to create and maintain a scale-out storage solution.

http://chauhan-rhce.blogspot.fi/2013/04/gluster-file-system-configuration-steps.html

Updates to the Red Hat Enterprise Clustering and Storage Management course

Comprehensive review

Set up high-availability services and storage.

SDN Course – Interview with Google Network Lead

This week in the SDN course on coursera there were lots of examples of real use of SDN stuff, for example like the B4 WAN by Google. They got a really interesting and cool interview with the Network Lead at Google – Amin Vahdat.
And! They actually put this interview up on youtube so you don’t have to be registered for the course on coursera to view the interview. Actually I just noticed all the interviews are there, including the one I mentioned before with the Internetz Architect David Clark.

Programming assignment for this week is to work with pyresonance, which is based on Resonance + Pyretic: a controller that can change how the network is forwarded/routed based on outside events, like network intrusion or bandwidth caps. This is really new stuff. The code that was put on github was put there just 3 days ago :)

Assignment is to create a load balancer and forward traffic to hosts depending on load :)

Make your own L2 Firewall!

Is what I did this week during the SDN Course on Coursera :)

Within mininet or with a real OpenFlow capable switch, you can point the switch to use a controller. The controller would figure out all the smart stuff and the switch only does what the controller tells it to do.

POX is one of these APIs you can use to create controllers; it’s good for learning about controllers as it’s not as low-level as its sibling NOX, which is in C++. There are controllers in Java too (Floodlight) and many more.

With POX there are some example switches, for example a basic L2 learning switch. It remembers (among quite a few other things) MAC addresses for hosts and in which ports those MAC addresses can be found. With a simple ping: after the L2 broadcast is done to find the MAC of the recipient, the controller installs the MAC_source+port and MAC_destination+port as flows on the switches.

What we did this week was to right after the switch is executed, run some extra code that parses a .csv file for MAC address pairs that are not allowed to talk and add these pairs into the flow table.

Pretty cool I think :)

Running Ubuntu on a Desktop

Introduction

Recently I’ve had the pleasure of installing Ubuntu on my desktop. Here are some thoughts and what I initially do:

Machine is a:

Motherboard: Gigabyte GA-EX58-UD3R
CPU: Intel Core i7
Memory: 8GB
Disks: Intel i300 SSD, 2x500GB and 1x3TB Western Digital Drives.
Graphics Card: AMD HD6800

Here are some of the things figured out along the way:

  • grub2 does not like the keyboard, but if I boot the Ultimate Boot CD (grub), the keyboard does work. This is with flipping all the USB keyboard, mouse, storage and legacy settings on and off in the BIOS.
    • After removing Ubuntu 12.10 and installing 12.04 (with the USB things enabled in the BIOS), the keyboard works in the grub2 menu.
  • to install better drivers, easiest is to open Ubuntu Software, edit sources and allow post-release things.
    • then go to settings and “additional drivers”
  • after upgrading to fglrx,
    • to change the speed of the GPU fan to 20% hit: aticonfig --pplib-cmd "set fanspeed 0 20"
    • to view the speed of the GPU fan hit: aticonfig --pplib-cmd "get fanspeed 0"
    • to view the GPU temperature hit: aticonfig --adapter=0 --od-gettemperature

If you do decide to try newer drivers for the ATI card, make sure you have the installation CD/DVD handy. Or even better, get it on a USB drive, which is way faster.

To find the devices for / and /boot – at least when I boot up there are these disk icons in the side-bar on the left side. If you hover over the icon you’ll see the size, if you click, it mounts and then a folder is opened. Then in the output of ‘mount’ you can see which device it is. Unmount and then proceed with these to get a working chroot:

sudo mount /dev/devicethathasroot /mnt
sudo mount /dev/devicethathasslashboot /mnt/boot
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
xhost +local:
# above xhost is to get x things working from within a chroot (possibly unsafe?).
chroot /mnt
# you can get network working, it needs a good /etc/resolv.conf first. Either overwrite the existing one or somehow get the local nameserver up and running

Install some good software

apt-get install screen openssh-server tmux openjdk-6-jre unrar p7zip pidgin vim

Spotify Repository – http://www.ubuntuupdates.org/ppa/spotify

Google Chrome Repository – http://www.ubuntuupdates.org/ppa/google_chrome

Universal Media Server

For UMS you’ll also need:

apt-get install ffmpeg mplayer mencoder libzen-dev mediainfo

You also need to make sure zen and mediainfo work: check the trace log in UMS to see if there are any errors at the beginning saying they are not found. If so, the hack is to create symlinks. For me, libzen.so and libmediainfo.so are in the /usr/lib/x86_64-linux-gnu/ directory.

Getting graphite running (for graphing / system monitoring )

One reason was that I wanted to learn more about this tool, but another is that it’s quite lightweight, especially if you’re going to be running an httpd anyway.

To install it, follow this guide: http://coreygoldberg.blogspot.fi/2012/04/installing-graphite-099-on-ubuntu-1204.html

Carbon-cache initd-script: https://gist.github.com/1492384

To monitor temperature and fan speed of your AMD/ATI card put this script in /usr/local/bin/atitemp.sh:

#!/bin/bash
# Monitor the temperature and fan speed of an AMD/ATI card and feed the
# values to a local carbon/graphite listener on port 2003.
# Requires aticonfig (fglrx), and probably a running X server.

HOST1="$(hostname -s)"
DEBUG="0"
LOGF="/tmp/atitemp.log"

if ! command -v aticonfig >/dev/null; then
        echo "Cannot find aticonfig, check paths and if it's even installed."
        exit 1
fi

while true
do
        TEMP=$(/usr/bin/aticonfig --adapter=0 --od-gettemperature|grep Sensor|awk '{print $5}'|sed -e 's/\.00//')
        SPEED=$(/usr/bin/aticonfig --adapter=0 --pplib-cmd "get fanspeed 0"|grep Result|awk '{print $4}'|tr -d "%")
        DATU="$(date +%s)"

        echo "##########" >> "$LOGF"

        if [ "$TEMP" != "" ]; then
                echo "servers.$HOST1.atitemp $TEMP $DATU"|nc localhost 2003
                if [ "$DEBUG" == "1" ]; then
                        echo "$TEMP" >> "$LOGF"
                fi
        fi

        if [ "$SPEED" != "" ]; then
                echo "servers.$HOST1.atifanspeed $SPEED $DATU"|nc localhost 2003
                if [ "$DEBUG" == "1" ]; then
                        echo "$SPEED" >> "$LOGF"
                fi
        fi

        sleep 60
done
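The grep/awk/sed extraction in the script can be sanity-checked against a sample line; the line format here is assumed from what the pipeline expects:

```shell
# Extract the integer temperature the same way the monitoring script does.
sample='Sensor 0: Temperature - 61.00 C'
temp=$(printf '%s\n' "$sample" | grep Sensor | awk '{print $5}' | sed -e 's/\.00//')
echo "$temp"
```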

To graph useful system resources (network bandwidth, cpu/mem-usage, disk space)

It would be good to install collectl and just export to graphite. But this does not work well currently, because the collectl version shipped with Ubuntu 12.04 LTS is 3.6.0 (3.6.3 with 12.10):
3.6.1 is needed for it to work with graphite, and 3.6.5 to make it work well if you want to group servers.

There are plenty of other options though. You can write some scripts yourself, use diamond or a few other tools that has graphite-support.

Diamond is another option; to install it, follow these two links: https://github.com/BrightcoveOS/Diamond/wiki/Installation (the only addition is that you first have to clone the git repository; from in there you run make builddeb).

How to manually configure a custom collector: https://github.com/BrightcoveOS/Diamond/wiki/Configuration

cfengine – some useful examples / or how I learnt about the bomb and tried Puppet instead / salt?

Building on the initial post about cfengine we’re going to try out some things that may actually be useful.

My goal would be to make /etc/resolv.conf identical between all the machines.

The server setup is the lustre cluster we built in a previous post.

In this post you’ll first see two attempts at getting cfengine and then puppet to do my bidding, until success was finally achieved with salt.

Cfengine

Set up name resolution to be identical on all machines.

http://blog.normation.com/2011/03/21/why-we-use-cfengine-file-editing/

Thought about

Make oss1 and client1 not get the same promises.

Perhaps some kind of rule / IF-statement in the promise?

Cfengine feels archaic. Think editing named/bind configs are complicated? They are not even close to setting up basic promises in cfengine.

Puppet ->

http://puppetlabs.com/

CentOS 6 Puppet Install

vi /etc/yum.repos.d/puppet.repo
pdcp -w oss1,client1 /etc/yum.repos.d/puppet.repo /etc/yum.repos.d/puppet.repo

Sign certificates:

puppet cert list
puppet cert sign 
sudo puppet cert sign --all

For puppet there’s a dashboard. This sounds interesting. Perhaps I won’t have to write these .pp files, which at a glance look scarily similar to the cfengine promises.

yum install puppet-dashboard mysql-server

service mysqld start

set mysqld password

create databases (as in the database.yml file)

after this I didn’t get much further… But I did get the web-server up. Although it was quite empty…

salt

Easy startup instructions here for getting a parallel shell going:

After it’s set up you can run a bunch of built-in special commands, see the help section about modules.

salt '*' sys.doc | less

will give you all the available modules you can use :)

Want to use it for configuration management too? Check out the ‘states’ section.

What looks bad with salt is that it’s quite new (first release in 2011).

Salt is a very common word so it makes googling hard. Most hits tend to be about cryptography or cooking.

To distribute resolv.conf once, run this on the admin server: salt-cp '*' /etc/resolv.conf /etc/resolv.conf

On to states to make sure that the resolv.conf stays the same:

  1. uncomment the defaults in the master-file about file_roots and restart the salt-master service
  2. create /srv/salt and ln -s /etc/resolv.conf /srv/salt/resolv.conf
  3. create a /srv/salt/top.sls and a /srv/salt/resolver.sls

 

In top.sls put:

base:
  '*':
    - resolver

In resolver.sls put:

/etc/resolv.conf:
  file:
    - managed
    - source: salt://resolv.conf

Then run: salt '*' state.highstate

How to get this to run every now and then? Setting up a cronjob works.
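A cron entry for that could look like this; the path and the 30-minute interval are just examples:

```
# /etc/cron.d/salt-highstate
*/30 * * * * root salt '*' state.highstate
```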

Haven’t been able to find a built-in function to accomplish this but then again, all I’m doing here is scratching at the surface so it’s working and I’m happy :)

Setup a 3 Node Lustre Filesystem

Introduction

Lustre is a filesystem often used by clusters because many computers can mount the filesystem simultaneously.

This is a small log/instruction for how to setup Lustre in 3 virtualized machines (one metadata server, one object storage server and one client).

Basic components:

VMWare Workstation
3 x CentOS 6.3 VMs.
Latest Lustre from Whamcloud

To use Lustre your kernel needs to support it. There’s a special one for server and one for the client. Some packages are needed on both.

Besides lustre you’ll need an updated version of e2fsprogs as well (because the version that comes from RHEL6.3 does not support large partitions).

Starting with the MDS. When the basic OS setup is done we will make a copy of it to use for the OSS and the client.

Setup basic services.

Install an MDS

This will run the MDT – the metadata target.

  • 2GB RAM, 10GB disk, bridged networking; 500MB for /boot, rest for / (watch out, it may create a really large swap). Minimal install.
  • Set up OS networking: static IPs for the servers, start on boot, open port 988 in the firewall (possibly some outgoing ports too if you decide to restrain those).
  • Run yum update and set up ntp.
  • Download the latest lustre and e2fsprogs to /root/lustre-client, lustre-server and e2fsprogs appropriately (x86_64).
  • Lustre does not support SELinux, so disable it (enforcing works fine until it’s time to create the MDS/MDT, and permissive until it’s time to mount).
  • Put all hostnames into /etc/hosts.
  • Power off and make two full clones.
  • Set the hostname.

Install an OSS

This will contain the OST (object storage target). This is where the data will be stored.

Networking may not work (maybe device name changed to eth1 or eth2).
You may want to change this afterwards to get the interface back to be called (eth0). A blog post about doing that.

Install a client

This will access and use the filesystem.

Clone of the OSS before installing any lustre services or kernels.

Install Lustre

Before you do this it may be wise to take a snapshot of each server. In case you screw the VM up you can then go back :)

Starting with the MDS.

Installing e2fsprogs, kernel and lustre-modules.

Skipping debuginfo and devel packages, installing all the rest.

yum localinstall \
kernel-2.6.32-220.4.2.el6_lustre.x86_64.rpm \
kernel-firmware-2.6.32-220.4.2.el6_lustre.x86_64.rpm \
kernel-headers-2.6.32-220.4.2.el6_lustre.x86_64.rpm \
lustre-2.2.0-2.6.32_220.4.2.el6_lustre.x86_64.x86_64.rpm \
lustre-ldiskfs-3.3.0-2.6.32_220.4.2.el6_lustre.x86_64.x86_64.rpm \
lustre-modules-2.2.0-2.6.32_220.4.2.el6_lustre.x86_64.x86_64.rpm

The above was not the order they were installed. Yum changed the order so that for example kernel-headers was last.

yum localinstall e2fsprogs-1.42.3.wc3-7.el6.x86_64.rpm \
e2fsprogs-debuginfo-1.42.3.wc3-7.el6.x86_64.rpm \
e2fsprogs-devel-1.42.3.wc3-7.el6.x86_64.rpm \
e2fsprogs-libs-1.42.3.wc3-7.el6.x86_64.rpm \
e2fsprogs-static-1.42.3.wc3-7.el6.x86_64.rpm \
libcom_err-1.42.3.wc3-7.el6.x86_64.rpm \
libcom_err-devel-1.42.3.wc3-7.el6.x86_64.rpm \
libss-1.42.3.wc3-7.el6.x86_64.rpm \
libss-devel-1.42.3.wc3-7.el6.x86_64.rpm

After boot, confirm that you have lustre kernel installed by typing:

uname -av

and

mkfs.lustre --help

to see if you have that and

rpm -qa 'e2fs*'

to see if that was installed properly too.

By the way, you probably want to run this to exclude automatic yum kernel updates:

echo "exclude=kernel*" >> /etc/yum.conf

After install and reboot into new kernel it’s time to modprobe lustre, start creating MDT, OST and then mount things!
But hold your horses, first we need to install the client :)

 

And then the Client

Install the e2fsprogs*

We cannot just install the lustre-client packages, because we run a different kernel than the ones that whamcloud have compiled the lustre-client against.

We can either back-pedal and install an older kernel, or we can build (from source/SRPMS) a lustre-client that works on a kernel of our choosing. The latter option seems better, because we can then upgrade the kernel if we want to.

 

Build custom linux-client rpms

Because of a bug it appears that some ext4 source packages are needed, while they actually are not; you need to add some parameters to ./configure. This will be the topic of a future post.

The above rpmbuild should create rpms for the running kernel. If you want to create rpms for a non-running kernel you are supposed to be able to run.

Configure Lustre

Whamcloud have good instructions. Don’t be afraid to check out their wiki or use google.

/var/log/messages is the place to look for more detailed errors.

On the MDS

Because we do not have InfiniBand you want to change the lnet parameters slightly to include tcp(eth0). These changes are not reflected until reboot (quite possibly something less drastic would also work); just editing a file under /etc/modprobe.d/ called for example lustre.conf is not enough.

Added a 5GB disk to the mds.

fdisk -cu /dev/sdb; n, p, 1, (first-last)

modprobe lustre lnet

mkfs.lustre --mdt --mgs

mount

On the OSS

Also add the parameters into modprobe.

mkfs.lustre --ost

mount

On the client

Add things into modprobe.

mount!

Write something.

Then hit: lfs df -h

To see usage!

 

Get it all working on boot

You want to start the MDS, then the OSS and last the client.
But while it’s running you can restart any node and eventually it will start working again.

Fstab on the client:
ip@tcp:/fsname /mnt lustre defaults,_netdev 0 0

Fstab on the OSS and MDS:
/dev/sdb1 /mnt/MDS lustre defaults,_netdev 0 0


Asus Eee Pad Transformer TF 101 + root + arch in chroot

Just got one of these – thought it would be a great tool for conferences, or anywhere I need a small computer but don’t want to bring along my normal heavy laptop.

Normally I prefer pen and paper when going to meetings or conferences, but if there’s a lot of information needed to be written down or if I want to check something online it sounds quite nice.

Got a keyboard with it too. The stuff I’ve wanted to do so far works perfectly and it is very nice to play around with – though I haven’t done any serious work or task on it for any longer period of time yet. If I can do that without much issue I will be very happy about it.

Rooted it without any problems (from a Windows 7 x64 PC). Needed to install the USB drivers from Asus’s page (choose OS: Android).

It would also be nice to have a Linux chroot terminal running inside Android. This tutorial works pretty well, at least for getting a basic setup :) Still need to play some more with it to get things working (vpn perhaps?). After you get sshd running on the Android side you can connect to localhost with an ssh client, for example Irssi ConnectBot. In there you run the commands outlined in the last link.

After you create a user you need to add the user to the appropriate group, at least if you want network access.
What was strange: with my user only in aid_inet I could ssh and irssi to an IP address, but I could not ping that address, and I could neither ping nor ssh to a DNS name. After adding the group aid_net_raw and putting my user in it, that worked too.

After that you can use ‘pacman -S irssi’ to install for example irssi!

Happy transformer arching!

 

Update to Spotify – An RSS Feed

After some time the solution I devised on http://www.guldmyr.com/blog/script-to-check-for-an-update-on-a-web-page/ just was not elegant enough (and it also stopped working).

Instead of getting some kind of output in a terminal somewhere, I decided to make an RSS feed that updates http://guldmyr.com/spotify/spot.xml instead :)

I suspect that the repository itself could be used to see if there’s an update to it. It has all these nice looking files in here: http://repository.spotify.com/dists/stable/ – but I also suspect this is a repository for debian/ubuntu which I cannot use on my RHEL-based workstation.

Thus:

A bash script was written. It uploads spot.xml whenever there is an update. The script does not run on the web server, so it FTPs the file there; it would be simpler if it did, because then updating the feed would just be a matter of moving or copying a file.

But, I hope it works :) Guess we’ll see next time there’s an update to spotify!

The script itself is a bit long and I hope not too badly documented, so it’s available in the link below: http://guldmyr.com/spotify/update.spotify.rss.feed.sh

Or, more easily, you can just add http://guldmyr.com/spotify/spot.xml to your RSS reader (google’s reader, mozilla’s thunderbird, there are many of them).

Some things I learned:

  • The latest post in an RSS feed sits just below the header, which makes it a bit awkward to update via a script: you cannot simply strip the trailing </channel> and </rss>, append a new <item></item> and put </channel> and </rss> back at the end.
  • lastBuildDate in the header also needs to be updated each time the feed is updated. In the end I decided to re-create the file/feed completely every time there was an update.
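The regenerate-everything approach can be sketched roughly like this (a minimal feed, not the actual script; the path, titles and links are made up for illustration):

```shell
#!/bin/sh
# Rebuild the whole feed from scratch so that lastBuildDate and the
# newest <item> (which sits right below the header) stay consistent.
FEED=/tmp/spot.xml
NOW=$(date -R)  # RFC 2822 timestamp, the format RSS readers expect

cat > "$FEED" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>Spotify updates</title>
<link>http://guldmyr.com/spotify/</link>
<description>New Spotify client versions</description>
<lastBuildDate>$NOW</lastBuildDate>
<item>
<title>New version detected</title>
<pubDate>$NOW</pubDate>
</item>
</channel>
</rss>
EOF
```

Running http://feedvalidator.org against the generated file catches most structural mistakes.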
  • Some rss-readers appear to have a built-in interval that they use to check if there’s an update. So for example you could update the rss-feed and press ‘refresh’ but the client still won’t show the new feeds. Google Reader does this for example. With Mozilla’s Thunderbird you can ask it to update (Get Messages) and it will. You don’t need an e-mail account in Thunderbird to use it as an RSS reader by the way.
  • http://feedvalidator.org is a great tool, use it.

I claim no responsibility if you actually use the script, the feed however should be fairly safe to subscribe to.

 


Red Hat Certification – RHCE – KVM via CLI

In a previous post while preparing for RHCSA I installed kvm post-installation, via the GUI.

But how to install, configure and use it only from the CLI?

Virt-Manager

http://virt-manager.org/page/Main_Page has some details

As a test-machine I’m using a server with Scientific Linux 6.2 (with virtualization enabled as seen by ‘cat /proc/cpuinfo|grep vmx’).

None of the Virtualization Groups are installed, as seen by ‘yum grouplist’. While doing that you’ll find four different groups. You can use

yum groupinfo "Virtualization Client"

or correspondingly to get more information about the group.

yum groupinstall Virtualization "Virtualization Tools" "Virtualization Platform" "Virtualization Client"

This installs a lot of things. Libvirt, virt-manager, qemu, gnome and python things.

lsmod|grep kvm
service libvirtd start
lsmod|grep kvm

This also sets up a bridge-interface (virbr0).

Now, how to install a machine or connect to the hypervisor?

How to get console?

ssh -XYC user@kvmserver
virt-manager

did not work.

On the client you could try to do:

yum groupinstall "Virtualization Client"
yum install libvirt
virt-manager

Then start virt-manager and connect to your server. However this didn’t work for me either. Is virtualization needed on the client too?

No, it is not. First: check if virtualization is enabled on the server. Look in /var/log/messages for

kernel: kvm: disabled by bios

If it says that you’ll need to go into BIOS / Processor Options / and enable Virtualization.

Then you can start virt-manager, check that you can connect to the KVMserver.

Copy a .iso to /var/lib/libvirt/images on the server.

Re-connect to the kvm-server in virt-manager.

Add a new VM called test. Using 6.2 net-install and NAT network interface. This may take a while.

Point the VM at the kvm-server, where an httpd is running (remember firewall rules) and an SL 6.2 install tree is stored. Installing a Basic Server.

OK, we could use virt-manager, it’s quite straight-forward and doesn’t require any edits of config files at all.

Moving on to virsh.

To install a vm you use ‘virt-install’.

You can get lots of info from ‘virsh’

virsh pool-list
virsh vol-list default
virsh list
virsh list --all
virsh dumpxml test > /tmp/test.xml
cp /tmp/test.xml /tmp/new.xml

Edit new.xml

change name to new and remove line with UUID

virt-xml-validate /tmp/new.xml
virsh help create
virsh create --file /tmp/new.xml
virsh list

This creates a new VM that uses the same disk and setup. But if you shut down this new domain, it will disappear from ‘virsh list --all’. To keep it you need to define it first:

virsh define --file /tmp/new.xml
virsh start new

This can become quite a bit more complicated. You would probably want to make clones (virt-clone) or snapshots (virsh help snapshot) instead of using the same disk file.

Making your own .xml from scratch looks fairly complicated. You could use ‘virt-install’ however.

virt-install --help
virt-install -n awesome -r 1024 --vcpus 1 --description=AWESOME --cdrom /var/lib/libvirt/images/CentOS-6.2-x86_64-netinstall.iso --os-type=linux --os-variant=rhel6 --disk path=/var/lib/libvirt/images/awesome,size=8 --hvm

For this the console actually works while running ‘virt-install’ over ssh on the kvm-server.

To make edit to a vm over ssh:

virsh edit NAMEOFVM

Red Hat Certification – RHCE – Course Outline

Howdy!

In case you saw my previous posts I’ve been prepping for a RHCE course the last couple of weeks.

Here are the posts based on the objectives:

Odds are quite high that I’ve missed something or not gone deep enough into some subjects – and, for the record, some subjects I decided to skip.

I’m taking the course over at Tieturi here in Helsinki and they have published the schedule for the course, with quite detailed outline.

The course outline can usefully be checked against to see if you missed any terms or functions while going through the objectives.

I’ll go through the ones I find more interesting below:

Network Resource Access Controls

-Internet Protocol and Routing

OK, well this is quite obvious, some commands:

ip addr
ip route
route add
netstat -rn

IPv6

-IPv6: Dynamic Interface Configuration
-IPv6: Static Interface Configuration
-IPv6: Routing Configuration

You can add IPV6 specific lines in the ifcfg-device files in /etc/sysconfig/network-scripts/. See /usr/share/doc/initscripts*/sysconfig

Some settings can also go into /etc/sysconfig/network
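As a sketch, static IPv6 configuration in an ifcfg file could look like the excerpt below (addresses are documentation examples; see /usr/share/doc/initscripts*/sysconfig for the authoritative list of keys):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (excerpt)
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1
# or, for dynamic configuration instead of a static address:
# IPV6_AUTOCONF=yes
```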

iptables

Netfilter Overview
-Rules: General Considerations
Connection Tracking
-Network Address Translation (NAT)
-IPv6 and ip6tables

 

Web Services

-Squid Web Proxy Cache

On client check what IP you get:

curl --proxy squid-server.example.com:3128 www.guldmyr.com/ip.php

On server install and setup squid:

yum install squid
vi /etc/squid/squid.conf
#add this line in the right place:
acl localnet src 192.168.1.1/32
#allow port 3128 TCP in the firewall (use very strict access here)
service squid start

On client:

curl --proxy squid-server.example.com:3128 www.guldmyr.com/ip.php

Beware that this is insecure. Very insecure. You should at least set up a password for the proxy, change the default port, and keep the firewall rules as limited as possible.
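As a minimal hardening sketch, basic proxy authentication in squid.conf could look like this (the auth helper path varies by distribution and is an assumption here; create /etc/squid/passwd with htpasswd):

```
# /etc/squid/squid.conf (excerpt)
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/passwd
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
acl localnet src 192.168.1.0/24
http_access allow localnet authenticated
http_access deny all
```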

E-mail Services

-Simple Mail Transport Protocol
-Sendmail SMTP Restrictions
-Sendmail Operation

 

Securing Data

-The Need For Encryption

-Symmetric Encryption

Symmetric uses a secret/password to encrypt and decrypt a message.
You can use GnuPG (cli command is ‘gpg’) to encrypt and decrypt a file symmetrically. Arguments:

--symmetric/-c == symmetric cipher (CAST5 by default)
--force-mdc == if you don’t have this you’ll get “message was not integrity protected”

There are many more things you can specify.

echo "awesome secret message" > /tmp/file
gpg --symmetric --force-mdc /tmp/file
#(enter password)
#this creates a /tmp/file.gpg
#beware that /tmp/file still exists
#to decrypt:
gpg --decrypt /tmp/file.gpg
gpg: 3DES encrypted data
gpg: encrypted with 1 passphrase
awesome secret message

 

-Asymmetric Encryption

Uses a key-pair. A public key and a private key.
A message encrypted with the public key can only be decrypted with the private key.
A message encrypted with the private key can only be decrypted with the public key.

GnuPG can let you handle this.

Login with a user called ‘labber’:

gpg --gen-key
# in this interactive dialog enter username: labber, e-mail and password
# this doesn't always work, might take _long_time_, eventually I just tried on another machine
echo "secret message" > /tmp/file
gpg -e -r labber /tmp/file
# enter password
gpg --decrypt /tmp/file
# enter password

To export the public key in ASCII format you can:

gpg --armor --output "key.txt" --export "labber"

However, how to encrypt a file with somebody else’s public key?
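One way (a sketch, untested here) is to first import their exported public key, then encrypt for that key’s user ID:

```shell
# import the other person's ASCII-armored public key
gpg --import key.txt
# list keys to find the right user ID
gpg --list-keys
# encrypt a file so only the holder of the matching private key can decrypt it
gpg -e -r "labber" /tmp/file
```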

-Public Key Infrastructures – PKI

Consists of:

  • CA – certificate authority – issues and verifies digital certificates
  • RA – registration authority – verifies the identity of users requesting info from the CA
  • central directory – used to store and index keys

-Digital Certificates

A certificate has user details and the public key.

Account Management

-Account Management
-Account Information (Name Service)
Name Service Switch (NSS)
Pluggable Authentication Modules (PAM)
-PAM Operation
-Utilities and Authentication

 

PAM

Basically a way to authenticate users. You can plug different authentication mechanisms in behind PAM, so that a piece of software only needs to know how to authenticate against PAM, and PAM takes care of the behind-the-scenes work.

For example you can have PAM connect to an ldap-server.

CLI: authconfig

Files:
/etc/sysconfig/authconfig
/etc/pam.d/
/etc/sssd/sssd.conf

 

Red Hat Certification – RHCE – Network Services – NTP

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

NTP:

You could possibly test this from Windows as well.

On linux it’s fairly straight-forward, you can use ntpd both as a client and as a server.

Check in /var/log/messages for details

The time-synchronization with ntpd is slow by design (to not overload or cause dramatic changes in the time set).

ntpdate is instant, but its use is not recommended, except for queries like ‘ntpdate -q’.

man ntp.conf
this then points to :
man ntp_acc
man ntp_auth
man ntp_clock
man ntp_misc

  • Install the packages needed to provide the service.
    • yum install ntp
  • Configure SELinux to support the service
    • nothing to configure??
  • Configure the service to start when the system is booted.
    • chkconfig ntpd on
  • Configure the service for basic operation.
    • /etc/ntp.conf
      • server ntp.server.com
    • service ntpd start
    • ntpq -p # to see status
  • Configure host-based and user-based security for the service
    • iptables
      • port 123 (UDP)

Enable ntpd as a client

What’s a bit backwards with ntpd is that you first need to configure the server as a client

So that your local ntp-server gets good time from somewhere else. You can find a good time-server to use on www.pool.ntp.org

You only need to add one server line but for redundancy you should probably have more than one.
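A minimal client-side ntp.conf could then contain something like this (the pool.ntp.org names are examples; pick servers close to you):

```
# /etc/ntp.conf (excerpt) - several servers for redundancy
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
```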

As an example with your client on 192.168.0.0/24 and server is on 192.168.1.0/24.

All you need to do is for the client part:

server ntp.example.com
service ntpd restart
ntpq -p

 

Enable ntpd as a server

You need to add a restrict line in ntp.conf.

You also need to allow port 123 UDP in the firewall.

restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap
service ntpd restart

Client to use your ntp server

Basically the same as the above for client, but you specify the address to your NTP-server instead of one from pool.ntp.org.

Extra

  • Synchronize time using other NTP peers.

I believe this has been covered.

More Extra

One extra thing you may want to check out is the ‘tinker’ command.

This is put on top of ntp.conf and more info are available in ‘man ntp_misc’.

However, most of the time you just need to wait a bit for the time change to come through.

tcpdump

There’s not much to go in logs on either server or client for ntpd. You’ll get messages in /var/log/messages though that says “synchronized” and when the service is starting.

You can also use tcpdump on the server to see if there are any packets coming in.

tcpdump -i eth0 -w /tmp/tcpdump.123 -s0 'udp port 123 and host NTP.CLIENT.IP'
# wait a while, restart ntpd on client
tcpdump -r /tmp/tcpdump.123
# this will then show some packets if you have a working communication between server and client

To test that it’s working

Start with the server still connecting to an ntp-server with good time.

You could then set the date and time manually on the server to something else. For example, let’s say the current time is 6 JUN 2012 17:15:00.

Set it to 15 minutes before:

date -s "6 JUN 2012 17:00:00"
service ntpd restart

Also restart ntpd on the client, then wait, this will probably take a bit longer than before.

If you set the time manually to something too far off it won’t work. You could then experiment with ‘tinker panic 0’.

Red Hat Certification – RHCE – Network Services – ssh

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

SSH:

To test from windows you can use putty.

But in linux you just need ssh for client and sshd for server.

man 5 sshd_config and this blogpost has an overview.

  • Install the packages needed to provide the service.
    • yum install openssh
  • Configure SELinux to support the service
    • getsebool -a|grep ssh
  • Configure the service to start when the system is booted.
    • chkconfig sshd on
  • Configure the service for basic operation.
    • /etc/ssh/sshd_config
  • Configure host-based and user-based security for the service
    • iptables
      • port 22 (TCP)
    • tcp.wrapper

 

TCP Wrapper

More info in man tcpd and man 5 hosts_access

Check that your daemon supports it:

which sshd
ldd /usr/sbin/sshd|grep wrap

For this test, let’s say that the server you are configuring has IP/netmask 192.168.1.1/24 and that you have a client on 192.168.0.0/24

cat /etc/hosts.allow

sshd: 192.168.0.0/255.255.255.0
sshd: ALL : twist /bin/echo DEATH

The last row sends a special message to a client connecting from a non-allowed network.

cat /etc/hosts.deny

ALL: ALL

If you on the server with these settings try to do “ssh -v root@localhost” or “ssh -v root@192.168.1.1” you’ll get the message from twist.

If you in hosts.allow add:

sshd: KNOWN

You can log on to the localhost, but not if you add “LOCAL”.

If you add

sshd: 192.168.1.

you can log on from localhost to the public IP of the server.

Extra

  • Configure key-based authentication.
    • ssh-keygen
    • ssh-copy-id user@host
    • ssh user@host
    • set PasswordAuthentication to no in sshd_config
    • service sshd restart
  • Configure additional options described in documentation.
    • many things can be done, see “man 5 sshd_config”
    • chrootdirectory looks quite cool but requires a bit of work
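A ChrootDirectory setup for SFTP-only users could be sketched like this in sshd_config (the group name and path are examples; the chroot path must be root-owned and not group-writable):

```
# /etc/ssh/sshd_config (excerpt)
Subsystem sftp internal-sftp
Match Group sftponly
    ChrootDirectory /chroot/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
```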

Red Hat Certification – RHCE – Network Services – e-mail

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

SMTP:

Hackmode has a good article about setting postfix for the first time.

To test that e-mail is working you can – tada – use an e-mail client.

You have lots of details in /usr/share/doc/postfix-N ( the path should be in /etc/postfix/main.cf )

  • Install the packages needed to provide the service.
    • yum install postfix
  • Configure SELinux to support the service
    • getsebool -a|grep postfix
  • Configure the service to start when the system is booted.
    • chkconfig postfix on
  • Configure the service for basic operation.
    • set hostname to host.example.com
    • /etc/postfix/main.cf and define (this assumes hostname is host.example.com):
      • myhostname = host.example.com
      • mydomain = example.com
      • myorigin = $mydomain
      • inet_interfaces = all
      • mydestination = add $mydomain to the default one
      • home_mailbox = Maildir/
      • Update firewall to allow port 25 tcp
      • Test with: nc localhost 25
  • Configure host-based and user-based security for the service
    • iptables or $mynetworks in main.cf
    • user: postmap

In CLI (important to use ‘ and not “):

#hostname - record the output of this
postconf -e 'myhostname = output from hostname in here'
#hostname -d
postconf -e 'mydomain = output from hostname -d in here'
postconf -e 'myorigin = $mydomain'
postconf -e 'inet_interfaces = all'
postconf -e 'mydestination = $myhostname, localhost, $mydomain'
postconf -e 'mynetworks = 127.0.0.0/8 [::1]/128, /32'
postconf -e 'relay_domains = $mydestination'
postconf -e 'home_mailbox = Maildir/'

To use it:

useradd -s /sbin/nologin labber
passwd labber

Edit /etc/aliases and add:

labber: labber

Then run:

newaliases
service postfix start
service postfix status
netstat -nlp|grep master

Send e-mail:

mail -s "Test e-mail here" labber@mydomain
test123
.

The . at the end is quite nice, that stops the input.

Check e-mail:

cat /home/labber/Maildir/new/*

Real E-mail Client

But, perhaps you want to check this out with a real e-mail client like thunderbird 10.

For this there needs to be a e-mail server that stores the e-mails on the server.

For this we can use ‘dovecot’

yum install dovecot
service dovecot start
  1. Update iptables to allow ports 25 and 143 (TCP)
  2. Update main.cf to allow from your IP
  3. Restart services
  4. Add new account in thunderbird –
    1. do use the IP address of your server, not the DNS
    2. do not use SMTP security (or username), but use password authentication
    3. do use IMAP STARTTLS security, username: labber, password auth

Thunderbird is quite nice, it will often tell you which setting is wrong.

You can use /var/log/maillog for details on the server-side (to see if you get connections at all for example).

 

Deny a User

To illustrate this feature we first need to add a second user/e-mail account:

useradd -s /sbin/nologin labrat
passwd labrat
echo "labrat: labrat" >> /etc/aliases
newaliases
service postfix restart
service dovecot restart
mail -s "test" labrat@mydomain

You need to send an e-mail to the e-mail address before you can add it in Thunderbird (because the user does not have a $HOME/Maildir until you do).

After the new user has been created and added to your e-mail client do the following:

cd /etc/postfix
echo "labber@mydomain REJECT" >> sender_access
postmap hash:sender_access
echo "smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access" >> /etc/postfix/main.cf
service postfix restart

Try:

  • to send an e-mail from and to both accounts

Extra

  • Configure a mail transfer agent (MTA) to accept inbound email from other systems.
    • inet_interfaces = all
  • Configure an MTA to forward (relay) email through a smart host.
    • relayhost=hostname.domain.com

If I understand this correctly, to set up the above two we would need two servers.

Red Hat Certification – RHCE – Network Services – SMB

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

SMB:

Testing an SMB server may be quite easy from Windows, but from Linux I suppose it’s a bit trickier.

The CLI client is called ‘smbclient’

The tool to set passwords: ‘smbpasswd’

You can also get some information with commands starting with ‘net’, for example ‘net -U username session’

testparm is another tool you can use to test that the config file – smb.conf – is not missing anything structural or in syntax.

The server is called ‘samba’.

There are more packages, for example ‘samba-doc’, samba4. You can find them by typing: ‘yum install samba*’

samba-doc installs lots of files in /usr/share/doc/samba*

  • Install the packages needed to provide the service.
    • yum install samba
  • Configure SELinux to support the service
    • getsebool -a |grep smb; getsebool -a|grep samba
    • /etc/samba/smb.conf # has some information about selinux
  • Configure the service to start when the system is booted.
    • chkconfig samba on
  • Configure the service for basic operation.
    • server#: open firewall (check man smb.conf, port 445 and 139 are mentioned)
    • server#: mkdir /samba; chcon -t type_in_smb_conf /samba
    • server#: edit /etc/samba/smb.conf:
      • copy an existing share – make it browseable and allow guest to access
    • server#: service smb start
    • server#: touch /samba/fileonshare
    • client#: smbclient \\\\ip.to.smb.server\\share
      • hit enter and it will attempt to log in as anonymous (guest)
    • client#: get fileonshare
  • Configure host-based and user-based security for the service
    • server#: check that ‘security = user’ in smb.conf.
    • server#: add “writable = yes” or “read only = no” to the share in smb.conf
    • server#: smbpasswd -a username
    • server#: mkdir /samba/upload
    • server#: chown username /samba/upload
    • server#: chmod 777 /samba/upload
    • client#: smbclient -U username \\\\ip.to.smb.server\\share
    • client#: cd upload; mkdir newfolder; cd newfolder
    • client#: put file

Extra

  • Provide network shares to specific clients.
    • things you can set on the share:
      • write list = +staff
      • invalid users =
      • valid users =
      • hosts allow = 192.168.0.0/255.255.255.0
      • hosts deny =
  • Provide network shares suitable for group collaboration.
    • groupadd staff
    • usermod -a -G staff bosse
    • chown root.staff /samba/upload
    • chmod 775 /samba/upload
    • connect with bosse – do things,
    • connect with another user – can you do things?
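The group-collaboration steps above could be paired with a share definition along these lines (the share name and group are examples carried over from the bullet list):

```
# /etc/samba/smb.conf (excerpt)
[upload]
    path = /samba/upload
    valid users = @staff
    write list = @staff
    force group = staff
    create mask = 0664
    directory mask = 0775
```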

Red Hat Certification – RHCE – Network Services – NFS

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

NFS:

Testing an NFS server is generally easier from another linux-server.

  • Install the packages needed to provide the service.
    • yum install nfs-utils (this was already installed on mine)
  • Configure SELinux to support the service
    • getsebool -a |grep nfs
  • Configure the service to start when the system is booted.
    • chkconfig nfs on
    • edit /etc/fstab on the client to mount on boot
  • Configure the service for basic operation.
    • server#: mkdir /foo
    • server#: vi /etc/exports
      • /foo          192.168.0.0/24(rw)
    • server#: iptables – port 2049 tcp and udp
    • server#: service nfs start
    • client#: mount -t nfs IP:/foo /mnt
    • server#: mkdir /foo/upload
    • server#: chown username.username /foo/upload
    • server#: chmod 777 /foo/upload
    • client#: touch /mnt/upload/file2
    • server#: cd /net/ip.to.server/foo
  • Configure host-based and user-based security for the service
    • iptables to deny hosts
    • add permissions appropriately in /etc/exports
      • man exports

Extra

  • Provide network shares to specific clients.
    • Add a new folder / line in /etc/exports and only allow certain clients to connect to it
  • Provide network shares suitable for group collaboration.
    • With the help of permissions. Use unix group ID number or names.
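Both extras can be sketched in /etc/exports (the hosts and paths are examples; run ‘exportfs -r’ after editing):

```
# /etc/exports (example)
/foo        192.168.0.0/24(rw)
/projects   client1.example.com(rw) client2.example.com(ro)
```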

Red Hat Certification – RHCE – Network Services – FTP

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

FTP:

An ftp-server is also quite easy to test. You can test it from many web-browsers, telnet, ftp, lftp or a myriad of other clients.

  • Install the packages needed to provide the service.
    • yum install vsftpd
  • Configure SELinux to support the service
    • this might be more interesting, you may need to do some magic here for sharing files
    • getsebool -a|grep ftp
  • Configure the service to start when the system is booted.
    • chkconfig vsftpd on
  • Configure the service for basic operation.
    • for basic – only open firewall then start the service
    • that is enough for anonymous read to /var/ftp/pub/
      • cp /root/anaconda-ks.cfg /var/ftp/pub/
      • chmod 755 /var/ftp/pub/anaconda-ks.cfg
  • Configure host-based and user-based security for the service
    • iptables to deny hosts
    • you can deny users by putting them in /etc/vsftpd/ftp_users and/or user_list
    • in vsftpd.conf there is a tcp_wrappers variable

Extra

  • Configure anonymous-only download
    • Deny all other users :)
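In vsftpd.conf terms, anonymous-only download could be sketched as:

```
# /etc/vsftpd/vsftpd.conf (excerpt)
anonymous_enable=YES
local_enable=NO
write_enable=NO
```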

 

Red Hat Certification – RHCE – Network Services – DNS

1st post – System Management and Configuration

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

DNS:

A DNS-server is quite easy to test as well, just point a client to the IP of your local DNS server and check /var/log/messages on the DNS-server.

  • Install the packages needed to provide the service.
    • yum install bind
  • Configure SELinux to support the service
    • working from scratch, after adding new zones and things you may need to add correct context to the files
  • Configure the service to start when the system is booted.
    • chkconfig named on
  • Configure the service for basic operation.
    • /etc/named.conf
      • after editing you need to restart named
    • edit ‘allow-query’ and ‘listen-on port 53’ – update firewall, start named
    • configure a client to use it with /etc/resolv.conf
    • see examples in: /usr/share/doc/bind*/
  • Configure host-based and user-based security for the service
    • host-based can be done via firewall (port 53 UDP and TCP)
    • host-based: allow-query { localhost; };
    • but user-based??

Extra

  • Configure a caching-only name server.
    • This is what the default /etc/named.conf does (a copy is also stored in /usr/share/doc/bind*/) – but a good exercise would be to configure this from an empty named.conf.
  • Configure a caching-only name server to forward DNS queries.
    • Almost same config as caching-only, except for the addition of two lines:
      • forward only;
      • forwarders { dns.ip; dns.ip2; };
  • Note: Candidates are not expected to configure master or slave name servers.
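In named.conf terms the forwarding variant could be sketched like this (the forwarder addresses are placeholders):

```
// /etc/named.conf (excerpt)
options {
    listen-on port 53 { any; };
    allow-query { localhost; 192.168.0.0/24; };
    forward only;
    forwarders { 192.0.2.1; 192.0.2.2; };
};
```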

 

Linux’s ctrl-alt-SysRq – Magic SysRq Key

http://en.wikipedia.org/wiki/Magic_SysRq_key

Wow, awesome.

Looks like you can do lots of fun stuff here in case you get a stalled system.

Quite fun that this is such a useful thing and I have never heard of it before. Maybe the Linux community is keeping quiet about it because it’s close to ctrl-alt-del :)

mnemonic: BRUISER (REISUB)

unRaw      (take control of keyboard back from X),
 tErminate (send SIGTERM to all processes, allowing them to terminate gracefully),
 kIll      (send SIGKILL to all processes, forcing them to terminate immediately),
  Sync     (flush data to disk),
  Unmount  (remount all filesystems read-only),
reBoot.
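The same triggers can also be exercised from a shell via /proc rather than the keyboard. A sketch (requires root; be careful – ‘b’ reboots immediately without syncing, so don’t run this on a machine you care about):

```shell
# make sure SysRq is enabled
sysctl kernel.sysrq=1
# 's' = sync, 'u' = remount read-only, 'b' = reboot
echo s > /proc/sysrq-trigger
echo u > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger   # reboots immediately!
```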

Red Hat Certification – RHCE – Network Services – httpd

1st post – System Management and Configuration

This post is about Network Services.

During all these exercises I try my hardest not to use google, as that’s not available during the exam anyway.

Objectives

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

  • Install the packages needed to provide the service.
  • Configure SELinux to support the service.
  • Configure the service to start when the system is booted.
  • Configure the service for basic operation.
  • Configure host-based and user-based security for the service.

User should be able to do the following for all these services:

  • http/https
  • dns
  • ftp
  • nfs
  • smb
  • smtp
  • ssh
  • ntp

httpd:

  • Install the packages needed to provide the service.
    • yum install httpd
  • Configure SELinux to support the service.
    • supports by default, if changing documentroot/defaultroot use:
    • chcon -R --reference=/var/www/html /var/newhtmldir
  • Configure the service to start when the system is booted.
    • chkconfig httpd on
  • Configure the service for basic operation.
    • rpm -qc httpd (find config file)
  • Configure host-based and user-based security for the service
    • host-based -> iptables
    • user-based -> htpasswd for httpd

htpasswd

An htpasswd file contains users/passwords.

A .htaccess file points to the htpasswd

The .htaccess file is not the recommended way to set up authentication; instead, you should do it in a Directory section of httpd.conf.

To get more information about httpd in general do:

yum install httpd-manual

Then surf to http://hostname/manual.

To generate a htpasswd:

[root@rhce webpages]# htpasswd -c /etc/httpd/conf/.htpasswd user
New password:
Re-type new password:
Adding password for user user

Then add this .htaccess file:

AuthUserFile /etc/httpd/conf/.htpasswd
AuthGroupFile /dev/null
AuthName "Private Area"
AuthType Basic
AuthBasicProvider file
Require user user
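Since .htaccess is not the recommended way, the same authentication can be set up in a Directory section of httpd.conf instead. A sketch (the directory path and realm name are just examples):

```
<Directory "/var/www/html/private">
    AuthType Basic
    AuthName "Private Area"
    AuthUserFile /etc/httpd/conf/.htpasswd
    Require user user
</Directory>
```

With this in place, no .htaccess file is needed in the directory at all.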

https

The ‘s’ means that httpd uses another port (443) and that it uses certificates.

yum install mod_ssl

This adds /etc/httpd/conf.d/ssl.conf

That config file actually has a ‘Listen’ directive for port 443.

So add that port in the firewall and restart httpd.

After that you can surf to https://ip and it will complain about the certificate (which is a default generated one).
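In command form the steps above might look like this (RHEL 6 syntax; remember to save the firewall rule so it survives a reboot):

```
yum install mod_ssl
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
service iptables save
service httpd restart
```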

But wait, there’s more!

Configure a virtual host.

This can be used when you want to host several hostnames or domains on the same machine.

There’s some info in httpd.conf but there’s quite a lot in the manual via httpd-manual package.

To test this you could either put several IP addresses on the server or point several domains towards it (via /etc/hosts, which might be easiest). But in VMware it’s very easy to just add another network interface.

  1. Add another ethernet interface on the same network as the existing one (mine is bridged behind a NAT).
  2. Edit /etc/hosts on a client and on the server so that ww1.example.com and ww2.example.com points to the IP addresses on the server
  3. Make sure /etc/nsswitch.conf has ‘files’ in the hosts row.
  4. If you have very narrow firewall add the new IP address.
  5. mkdir /var/www/ww1.example.com; mkdir /var/www/ww2.example.com; chcon -R --reference=/var/www/html /var/www/ww*
  6. Edit /etc/httpd/conf/httpd.conf

and add this at the end:

NameVirtualHost *:80

<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /var/www/ww1.example.com
    ServerName ww1.example.com
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /var/www/ww2.example.com
    ServerName ww2.example.com
</VirtualHost>

7. service httpd restart

Then on the client point your browser to http://ww1.example.com and http://ww2.example.com (add a different index.html in each to make it easy to see which one you get).
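You can also test name-based virtual hosts from the command line; curl can either use the names from /etc/hosts or force the Host header against a bare IP (the IP below is an example):

```
curl http://ww1.example.com/
curl http://ww2.example.com/
curl -H "Host: ww1.example.com" http://192.168.100.10/
```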

Configure private directories.

I’d say this falls under the htpasswd section.

Deploy a basic CGI application.

Foswiki, for example, uses CGI. But perhaps it should be a custom CGI application, like a small hello-world script.

/var/www/cgi-bin is where CGI scripts are stored by default.

A simple .cgi script is just a perl (or shell) script with another extension that outputs HTML text.
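A minimal sketch of such a hello-world CGI in bash (the filename and location are just the defaults mentioned above; remember to make it executable):

```shell
#!/bin/bash
# Save as e.g. /var/www/cgi-bin/hello.cgi and chmod +x it.
# A CGI response is: a header, a blank line, then the body.
echo "Content-type: text/html"
echo ""
echo "<html><body><h1>Hello, world</h1></body></html>"
```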

Configure group-managed content.

Group-managed. So this would somehow use the AuthGroupFile in .htaccess?

Or it could be done by creating a new directory under the www-root and giving a unix group write access to it, so the group manages the content (web access control is a different story, however).

Red Hat Certification – RHCE – System Configuration and Management

RHCE Preparation – System Configuration and Management

This is post 1 in a series of posts where I will be going through the objectives for the RHCE certifications. It builds on the initial post that has the objectives:

Red Hat Certification – RHCE – Preparation

It appears that the objectives have been updated, at least if you compare my post above with https://www.redhat.com/training/courses/ex300/examobjective

For example, building a simple RPM that installs one package is not in the list.

I bet there are many blogs about this topic. I’m doing this quite a lot for myself, but maybe somebody else finds these useful.

This post will be about the section ‘System Configuration and Management’.

My setup: Core i7, 8GB RAM, Windows 7 x64, VMWare Workstation with CentOS installed.

Installing a fresh VM with 4 cores, 5GB RAM, virtualization and CentOS.

CentOS is a free clone of Red Hat, it’s missing some stuff (satellite for example) but it does the job for learning. You can find it in many places, for example here: http://www.nic.funet.fi/pub/Linux/INSTALL/Centos/6/isos/x86_64/

IP Routing and NAT

The part “Routing / NAT” will be tricky, as I do not have a second computer that I could use for this. Maybe I can get something working inside the virtual machines though, but for now I think I will skip these two and get straight into the other ones.

 

Use /proc/sys and sysctl to modify and set kernel runtime parameters.

 

Edit /etc/sysctl.conf

Or use sysctl -w to set it temporarily.

For example one is: vm.overcommit_ratio

You can then do either of these to view the current setting:

cat /proc/sys/vm/overcommit_ratio
sysctl vm.overcommit_ratio

To set it temporarily:

echo "60" > /proc/sys/vm/overcommit_ratio
sysctl -w vm.overcommit_ratio="50"

To set it on every boot (sysctl -p applies the file immediately):

echo "vm.overcommit_ratio = 50" >> /etc/sysctl.conf

 Configure a system to authenticate using Kerberos.

Waiting with this. Need to set up a KDC – kerberos service first.

 

Build a simple RPM that packages a single file.

This appears to be a bit complicated; the details below are about as simple as it can be made. There are a lot more nifty things you can do with an RPM.

Would be nice to have a guide of this in for example /usr/share/doc

yum install rpm-build
mkdir -p $HOME/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
cd $HOME/rpmbuild
mkdir GetIP
cd GetIP

The “program”:

cat getip.sh
#!/bin/bash

wget -q http://guldmyr.com/ip.php -O/tmp/ip
cat /tmp/ip
chmod +x getip.sh

Make an archive and put it in the SOURCES DIR:

cd $HOME/rpmbuild
tar -czf GetIP.tar.gz GetIP
mv GetIP.tar.gz SOURCES/

Edit a spec-file (do this as a normal user instead of root, it will show the default entries):

cd SPECS
vi sample.spec

Make it look like this:

Name:GetIP
Version:1.0
Release:        1%{?dist}
Summary: Get an IP wooop

Group:  Development/Tools
License:        GPL
URL:            http://guldmyr.com/blog
Source0:        %{name}.tar.gz
BuildRoot:      %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)

BuildRequires:bash
Requires:bash

%description
Get an IP woop!

%prep
%setup -n GetIP

%build

%install
mkdir -p "$RPM_BUILD_ROOT/opt/GetIP"
cp -R * "$RPM_BUILD_ROOT/opt/GetIP"

%clean
rm -rf "$RPM_BUILD_ROOT"

%files
/opt/GetIP
%defattr(-,root,root,-)
%doc

%changelog

Then make an rpm:

rpmbuild -v -bb $HOME/rpmbuild/SPECS/sample.spec

Then as root:

cd /home/user/rpmbuild/RPMS/x86_64/
rpm -ivh GetIP-1.0-1.el6.x86_64.rpm

Then as a normal user you can now execute the installed file:

/opt/GetIP/getip.sh
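To check what actually got installed and where, rpm can query the installed package (using the package name built above):

```
rpm -qi GetIP                  # package metadata
rpm -ql GetIP                  # list of installed files
rpm -qf /opt/GetIP/getip.sh    # which package owns a file
```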

If you wonder about things – check this fairly unreadable blog post out.

Basically you want to use $RPM_BUILD_ROOT in front of the path where you want to install the software. By default the spec template has ‘make’ and ‘configure’ in it and nothing in the ‘Requires’ entries. I removed ‘make’ and ‘configure’ and just put ‘bash’ in the Requires entries; that seemed to do the trick.

More info is also available on rpm.org, which recommends using /usr/src/redhat for building packages.

Configure a system as an iSCSI initiator that persistently mounts an iSCSI target.

Waiting with this. Need to set up an iSCSI target first.

Produce and deliver reports on system utilization (processor, memory, disk, and network).

sar -A

/etc/cron.d/sysstat
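sar -A dumps everything at once; a few more targeted invocations cover each resource in the objective (the sysstat cron job above collects the data that sar reads back):

```
sar -u 2 5      # processor: 5 samples, 2 seconds apart
sar -r          # memory
sar -b          # disk I/O
sar -n DEV      # network, per interface
sar -A > /tmp/utilization-report.txt
```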

Use shell scripting to automate system maintenance tasks.

Well, this can be a lot of things and is quite hard to prepare for.

But I think a ‘for loop’ is a good thing to know about; it can help with a lot of system maintenance tasks.

an input file with usernames:

[martbhell@rhce ~]$ cat /tmp/userlist
bengt
goran

a scriptfile:

[root@rhce ~]# cat usersndirs.sh
#!/bin/sh

userlist=/tmp/userlist

# read one username per line; quoting avoids word-splitting surprises
while read -r user; do
    echo useradd "$user"
    echo mkdir "$user"
done < "$userlist"

Remove the “echo” to create the users.

Of course, you could also use the ‘newusers’ command (it reads the users from a file).

This happens a lot I think: You get an idea that “hey, I can do this with a script”. But then a random amount of time later you find out that there is already a command that does this for you. That doesn’t mean the time spent is a total waste, hopefully you learned something while doing it. Maybe your script even does a better job than the new one you found.

Configure a system to log to a remote system.

syslog / rsyslog

man rsyslog.conf has an example for how to log to a remote machine

edit /etc/rsyslog.conf

add

       To  forward  messages  to another host via UDP, prepend the hostname with the at sign ("@").  To forward it via plain tcp, prepend two at
       signs ("@@"). To forward via RELP, prepend the string ":omrelp:" in front of the hostname.

       Example:
              *.* @@192.168.0.8

Set the IP to the machine that will be receiving the logs.

Configure a system to accept logging from a remote system.

So this step you may want to do before the previous step (unless you already have a working syslogd server).

You edit /etc/rsyslog.conf

and uncomment the “reception” parts (don’t forget firewall and restart service).

To test try to “su -” with the wrong password and then check in /var/log/secure on the loghost.
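The ‘reception’ parts in /etc/rsyslog.conf look like this once uncommented (RHEL 6 ships rsyslog 5; 514 is the default syslog port):

```
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
```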

Create a private repository

“To create a private repository you should proceed as follows:
  • Install the createrepo software package
  • Create a <directory> where files can be shared (via FTP or HTTP)
  • Create a subdirectory called Packages and copy all packages to be published into Packages
  • Run createrepo -v <directory>”
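Sketched as commands, assuming the repository is shared over the httpd document root (all paths below are examples):

```
yum -y install createrepo
mkdir -p /var/www/html/myrepo/Packages
cp /root/rpmbuild/RPMS/x86_64/*.rpm /var/www/html/myrepo/Packages/
createrepo -v /var/www/html/myrepo
```

A client then needs a .repo file pointing at it, e.g. /etc/yum.repos.d/myrepo.repo with baseurl=http://server/myrepo and gpgcheck=0.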

 

Red Hat Certification – RHCE – Skills Assessment

Just tried out the “Red Hat Skills Assessment” for RHCE.

These are apparently the ones I need to work on:

  • Managing Simple Partitions and Filesystems – Some Understanding
  • Managing Flexible Storage with Logical Volumes – Substantial Knowledge
  • BASH Scripting and Tools – Substantial Knowledge
  • Manage System Resources – Some Understanding
  • Logical Volume Management – Substantial Knowledge
  • Centralized and Secure Storage – Familiarity
  • Web Server Additional Configuration – Familiarity
  • Basic SMTP Configuration – Familiarity
  • Caching-Only DNS Server – Familiarity
  • File Sharing with NFS – Familiarity

But the assessment doesn’t say which questions I missed, so on some of the topics marked “Deep Understanding” I may just have guessed enough answers right. It’s probably best not to answer when you don’t know, so you get a better pointer to what to look at.