
Red Hat Certification – RHCE – KVM via CLI

In a previous post, while preparing for the RHCSA, I installed KVM post-installation via the GUI.

But how to install, configure and use it only from the CLI?

Virt-Manager

http://virt-manager.org/page/Main_Page has some details

As a test machine I'm using a server with Scientific Linux 6.2 (with virtualization enabled, as seen by 'cat /proc/cpuinfo|grep vmx').

None of the Virtualization groups are installed, as seen with 'yum grouplist'. Running that, you'll find four different virtualization groups. You can use

yum groupinfo "Virtualization Client"

(or the corresponding command for the other groups) to get more information about a group.

yum groupinstall Virtualization "Virtualization Tools" "Virtualization Platform" "Virtualization Client"

This installs a lot of things: libvirt, virt-manager, qemu, and various GNOME and Python packages.

Check whether the kvm kernel modules are loaded, start libvirtd, and check again:

lsmod|grep kvm
service libvirtd start
lsmod|grep kvm

This also sets up a bridge interface (virbr0).
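
To verify that the default libvirt network is up (a quick sanity check; virbr0 is libvirt's default bridge name):

virsh net-list --all
ip addr show virbr0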

Now, how to install a machine or connect to the hypervisor?

How do you get a console?

ssh -XYC user@kvmserver
virt-manager

This did not work.

On the client you could try to do:

yum groupinstall "Virtualization Client"
yum install libvirt
virt-manager

Then start virt-manager and connect to your server. However, this didn't work for me either. Is virtualization needed on the client too?

No, it is not. First, check whether virtualization is enabled on the server. Look in /var/log/messages for:

kernel: kvm: disabled by bios

If you see that message, you'll need to go into the BIOS (Processor Options) and enable virtualization.

Then you can start virt-manager and check that you can connect to the KVM server.
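
Both virsh and virt-manager can also connect to a remote hypervisor over SSH with a qemu+ssh connection URI. A sketch, assuming root SSH access to the kvmserver host used above:

virsh -c qemu+ssh://root@kvmserver/system list --all
virt-manager -c qemu+ssh://root@kvmserver/system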

Copy a .iso to /var/lib/libvirt/images on the server.
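
For example (the .iso filename here is just a placeholder):

scp SL-62-x86_64-netinstall.iso root@kvmserver:/var/lib/libvirt/images/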

Re-connect to the kvm-server in virt-manager.

Add a new VM called test, using the 6.2 net-install ISO and a NAT network interface. This may take a while.

Point the VM to the kvm-server, where httpd is running (remember the firewall rules) and an SL 6.2 installation tree is stored. Install a Basic Server.
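
If the firewall blocks the install tree, a minimal sketch to open port 80 on the kvm-server (assuming the stock RHEL 6 iptables service):

iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save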

OK, we could use virt-manager; it's quite straightforward and doesn't require editing any config files at all.

Moving on to virsh.

To install a VM you use 'virt-install'.

You can get lots of info from 'virsh':

virsh pool-list
virsh vol-list default
virsh list
virsh list --all
virsh dumpxml test > /tmp/test.xml
cp /tmp/test.xml /tmp/new.xml

Edit new.xml: change the name to new and remove the line with the UUID.
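
The same edit non-interactively, as a sketch (assuming the original domain name test from above):

sed -i -e 's|<name>test</name>|<name>new</name>|' -e '/<uuid>/d' /tmp/new.xml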

virt-xml-validate /tmp/new.xml
virsh help create
virsh create --file /tmp/new.xml
virsh list

This creates a new VM that uses the same disk and setup. But if you shut down this new domain, it will disappear even from 'virsh list --all'. To keep it you need to define it first:

virsh define --file /tmp/new.xml
virsh start new

This can become quite a bit more complicated. You would probably want to make clones (virt-clone) or snapshots (virsh help snapshot) instead of using the same disk file.
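
A cloning sketch (the clone name and disk path are made up; virt-clone copies the disk image and generates a new UUID and MAC address):

# shut down the source domain first
virt-clone --original test --name test-clone --file /var/lib/libvirt/images/test-clone.img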

Making your own .xml from scratch looks fairly complicated. You could use 'virt-install' instead:

virt-install --help
virt-install -n awesome -r 1024 --vcpus 1 --description=AWESOME --cdrom /var/lib/libvirt/images/CentOS-6.2-x86_64-netinstall.iso --os-type=linux --os-variant=rhel6 --disk path=/var/lib/libvirt/images/awesome,size=8 --hvm

For this one, the console actually works while running 'virt-install' over SSH on the kvm-server.

To edit a VM over SSH:

virsh edit NAMEOFVM
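
Combined with the qemu+ssh URI from earlier, the same edit works from a remote client (hostname and VM name as in the examples above):

virsh -c qemu+ssh://root@kvmserver/system edit test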

Password when starting a Linux server in single-user mode

http://www.cromwell-intl.com/unix/linux-break-in-howto.html

On RHEL 6.2-based systems (like Scientific Linux 6.2), edit /etc/sysconfig/init:

# Set to '/sbin/sulogin' to prompt for password on single-user mode
# Set to '/sbin/sushell' otherwise

Like this:

SINGLE=/sbin/sulogin
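
As a one-liner, if you prefer (a sketch, assuming the SINGLE= line is present and uncommented as shipped):

sed -i 's|^SINGLE=.*|SINGLE=/sbin/sulogin|' /etc/sysconfig/init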

Then, if you add an 's' to the kernel line in the GRUB entry when the server boots, it will ask you for the root password; or hit Ctrl-D, which makes the server continue with a normal boot into the default runlevel.

Should all Linux machines be installed this way? To me this sounds like a definite yes, especially if the console is physically or remotely accessible.

CentOS 5.8 Released

CentOS 5.8 was released today, the 8th of March.

http://wiki.centos.org/Manuals/ReleaseNotes/CentOS5.8

You can download it from many mirrors, for example from FUNET: http://ftp.funet.fi/pub/Linux/INSTALL/Centos/

It installs just fine on an HP DL360 G7 with P410 and P411 controller.

CentOS has, as far as I understand, been slower at releasing updates than Scientific Linux (for example, 6.2 was out 5 days earlier on SLC than on CentOS). That was not the case today, though: SLC 5.8 is not available yet. Why?

Compare release dates here:

http://en.wikipedia.org/wiki/CentOS#Release_history

http://en.wikipedia.org/wiki/Scientific_linux#Release_history

HEPIX Spring 2011 – Day 3

Day 3 woop!

An evaluation of Gluster: it uses distributed metadata, so there is no bottleneck from a metadata server, and it can or will do some replication/snapshotting.

Virtualization of mass storage (tapes), using IBM's TSM (Tivoli Storage Manager) and ERMM. ERMM manages the libraries, so that TSM only sees the link to ERMM; there is no need to set up specific paths from each agent to each tape drive in each library.
They were also using Oracle/Sun's T10000C tape drives, which go all the way up to 5TB per tape, quite far ahead of the LTO consortium's LTO-5, which only goes to 1.5/3TB. There was also some talk about buffered tape marks, which speed up tape operations significantly.

Lustre success story at GSI: they have 105 servers that provide 1.2PB of storage, and the maximum throughput seen is 160Gb/s. Some problems with the Adaptec 5401: it takes longer to boot than the entire Linux system, it's not very nice to administer, and the controller complains about high temperatures and missing fans in non-existent enclosures. They filter out the e-mails with level "ERROR" and look at the ones with "WARNING" instead.

Benchmarking storage with trace/replay: use strace (which comes by default with most Unixes) to record some operations, and then ioreplay to replay them. This has been proven to give very similar workloads, which is especially great when you have special applications.
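
A rough sketch of the recording step (the application name is a placeholder; ioreplay's exact flags depend on the ioapps version, so check its documentation for the replay side):

strace -f -ttt -o /tmp/app.trace -e trace=file,desc,process ./your-application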

IPv6 – we are running out of IPv4 addresses, so when (if ever) will there be IPv6 sites? Maybe when a new site comes up? What to do? Maybe collect/share IPv4 addresses?

Presentations about the evolution needed at two data centers to accommodate requirements for more resources/computing power.

Implementing ITIL with Service-Now (SNOW) at CERN.

Scientific Linux presentation. The Live CD can be found at www.livecd.ethz.ch. They might port NFS 4.1, which comes with Linux kernel 2.6.38, to work with SL5. There aren't many differences between RHEL and SL, but SL includes a tool called Revisor, which can be used to create your own Linux distributions/CDs quite easily.


Errata is the term used for security fixes.

Dinner later today!


Next Days:
Day 5
Day 4

Previous Days:
Day 2
Day 1