Tag Archives: script

cfengine – some useful examples / or how I learnt about the bomb and tried Puppet instead / salt?

Building on the initial post about cfengine we’re going to try out some things that may actually be useful.

My goal would be to make /etc/resolv.conf identical between all the machines.

The server setup is the lustre cluster we built in a previous post.

In this post you’ll first see two attempts at getting cfengine and then Puppet to do my bidding, before success was finally accomplished with salt.

Cfengine

Set up name resolution to be identical on all machines.

http://blog.normation.com/2011/03/21/why-we-use-cfengine-file-editing/

Things I thought about:

How do I make oss1 and client1 not get the same promises?

Perhaps some kind of rule / IF-statement in the promise?

Cfengine feels archaic. Think editing named/bind configs is complicated? That is not even close to setting up basic promises in cfengine.

Puppet ->

http://puppetlabs.com/

CentOS 6 Puppet Install

vi /etc/yum.repos.d/puppet.repo
pdcp -w oss1,client1 /etc/yum.repos.d/puppet.repo /etc/yum.repos.d/puppet.repo
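The repo file itself might look something like this. A sketch only: the baseurl is from memory and may well have changed, so verify it against puppetlabs.com before using. The file is written locally first and then pushed out with pdcp.

```shell
# Hedged sketch: the baseurl below is an assumption -- check puppetlabs.com.
cat > puppet.repo <<'EOF'
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/el/6/products/x86_64/
enabled=1
gpgcheck=0
EOF
# cp puppet.repo /etc/yum.repos.d/puppet.repo
# pdcp -w oss1,client1 /etc/yum.repos.d/puppet.repo /etc/yum.repos.d/puppet.repo
```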

Sign certificates:

puppet cert list
puppet cert sign <hostname>
sudo puppet cert sign --all

For puppet there’s a dashboard. This sounds interesting. Perhaps I won’t have to write these .pp files, which at a glance look scarily similar to the cfengine promises.

yum install puppet-dashboard mysql-server

service mysqld start

set mysqld password

create databases (as in the database.yml file)
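Creating the database by hand could look roughly like this. The database, user, and password names here are made up; use whatever your database.yml actually lists. The statements go into a file first, then get fed to mysql with `mysql -u root -p < dashboard.sql`:

```shell
# Hypothetical names -- match them to your database.yml before running.
cat > dashboard.sql <<'EOF'
CREATE DATABASE dashboard_production CHARACTER SET utf8;
CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON dashboard_production.* TO 'dashboard'@'localhost';
EOF
# mysql -u root -p < dashboard.sql
```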

After this I didn’t get much further… but I did get the web server up, although it was quite empty…

salt

Easy startup instructions here for getting a parallel shell going:

After it’s set up you can run a bunch of built-in special commands, see the help section about modules.

salt '*' sys.doc | less

will give you all the available modules you can use :)

Want to use it for configuration management too? Check out the ‘states’ section.

What looks bad with salt is that it’s quite new (first release in 2011).

Salt is a very common word so it makes googling hard. Most hits tend to be about cryptography or cooking.

To distribute the resolv.conf once, you run this on the admin server: salt-cp '*' /etc/resolv.conf /etc/resolv.conf

On to states to make sure that the resolv.conf stays the same:

  1. uncomment the defaults in the master-file about file_roots and restart the salt-master service
  2. create /srv/salt and ln -s /etc/resolv.conf /srv/salt/resolv.conf
  3. create a /srv/salt/top.sls and a /srv/salt/resolver.sls

 

In top.sls put:

base:
  '*':
    - resolver

In resolver.sls put:

/etc/resolv.conf:
  file:
    - managed
    - source: salt://resolv.conf

Then run: salt '*' state.highstate

How to get this to run every now and then? Setting up a cronjob works.
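A cron entry on the salt master does the job; something like this (hourly here, and the path to the salt binary is an assumption, pick your own interval and verify the path):

```shell
# crontab -e on the admin/master server
0 * * * * /usr/bin/salt '*' state.highstate > /dev/null 2>&1
```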

Haven’t been able to find a built-in function to accomplish this, but then again, all I’m doing here is scratching the surface. It’s working and I’m happy :)

Update to Spotify – An RSS Feed

After some time the solution I devised on http://www.guldmyr.com/blog/script-to-check-for-an-update-on-a-web-page/ just wasn’t elegant enough (and it also stopped working).

Instead of getting some kind of output in a terminal somewhere, I decided to make an RSS feed that updates http://guldmyr.com/spotify/spot.xml instead :)

I suspect that the repository itself could be used to see if there’s an update to it. It has all these nice looking files in here: http://repository.spotify.com/dists/stable/ – but I also suspect this is a repository for debian/ubuntu which I cannot use on my RHEL-based workstation.

Thus:

A bash script was written. It uploads spot.xml whenever there is an update. The script does not run on the web server, so it FTPs the file over; it would be nice if it did run there, because then updating the feed would just be a matter of moving/copying a file.

But, I hope it works :) Guess we’ll see next time there’s an update to spotify!

The script itself is a bit long and I hope not too badly documented, so it’s available in the link below: http://guldmyr.com/spotify/update.spotify.rss.feed.sh

Or, more easily, you can just add http://guldmyr.com/spotify/spot.xml to your RSS reader (google’s reader, mozilla’s thunderbird, there are many of them).

Some things I learned:

  • The latest post in an RSS feed goes just below the header, making it a bit awkward to update via a script: you cannot just remove the </channel> and </rss>, add a new <item></item>, and then put the </channel> and </rss> back at the end.
  • lastBuildDate in the header also needs to be updated each time the feed is updated. In the end I decided to re-create the file/feed completely every time there was an update.
  • Some rss-readers appear to have a built-in interval that they use to check if there’s an update. So for example you could update the rss-feed and press ‘refresh’ but the client still won’t show the new feeds. Google Reader does this for example. With Mozilla’s Thunderbird you can ask it to update (Get Messages) and it will. You don’t need an e-mail account in Thunderbird to use it as an RSS reader by the way.
  • http://feedvalidator.org is a great tool, use it.
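Since the feed gets rebuilt from scratch on every update anyway, the core of the idea can be sketched like this. Titles and descriptions are placeholders, and GNU date’s -R flag gives the RFC-822-style timestamp that RSS wants:

```shell
#!/bin/sh
# Minimal sketch: regenerate the whole feed file on every update,
# so lastBuildDate and the item list are always consistent.
feed=spot.xml
now=$(date -R)   # RFC-822-style date, as RSS expects (GNU date)
cat > "$feed" <<EOF
<?xml version="1.0"?>
<rss version="2.0">
<channel>
<title>Spotify updates</title>
<link>http://guldmyr.com/spotify/</link>
<description>New spotify packages</description>
<lastBuildDate>$now</lastBuildDate>
<item>
<title>Update detected $now</title>
<description>The repository index changed</description>
</item>
</channel>
</rss>
EOF
```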

I claim no responsibility if you actually use the script, the feed however should be fairly safe to subscribe to.

 


Simple RRD graphs

This is how to create simple RRD graphs using one data source that can be 0 and above. It is not an “ever increasing” counter.

It will look like this:

example rrd graph

 

1. Create the rrd database

I wrote this down in a .sh file so I can go back later and see how it was set up.

#!/bin/sh
rrdfile="/home/$user/rrd/movers.rrd"
rrdtool='/usr/bin/rrdtool'
$rrdtool create $rrdfile --step 300 DS:movers:GAUGE:600:U:U RRA:AVERAGE:0.5:1:576 RRA:AVERAGE:0.5:6:672 RRA:AVERAGE:0.5:24:732 RRA:AVERAGE:0.5:144:1460

#5 minute step (base interval with which data will be fed into the RRD)
#10 minute heartbeat for the data source
#2 days of 5 minute averages
#2 weeks of 1/2 hour averages
#2 months of 2 hour averages
#2 years of 12 hour averages
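Those retention figures can be sanity-checked with shell arithmetic: each RRA covers steps-per-row * step * rows seconds of data.

```shell
#!/bin/sh
step=300   # seconds per primary data point (the --step above)
# days covered by an RRA = steps_per_row * step * rows / 86400
echo "$(( 1   * step * 576  / 86400 )) days of 5-minute averages"
echo "$(( 6   * step * 672  / 86400 )) days of half-hour averages"
echo "$(( 24  * step * 732  / 86400 )) days of 2-hour averages"
echo "$(( 144 * step * 1460 / 86400 )) days of 12-hour averages"
```

This prints 2, 14, 61, and 730 days, matching the 2 days / 2 weeks / roughly 2 months / 2 years in the comments.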
 

2. Add data to the rrd

Also done in a bash script. Because the --step above is set to 300, you need to run this script every 300 seconds (every 5 minutes). The script specified in $allpools prints the path to a file. That file is then grepped for ‘RUNNING’ and the matching lines are counted: the number of running movers.

#!/bin/sh
rrdfile="/home/$user/rrd/movers.rrd"
rrdtool='/usr/bin/rrdtool'

allpools="/home/$user/bash_script.sh"
output=$($allpools)
movers=$(grep -c RUNNING "$output")

$rrdtool update $rrdfile N:$movers

The N: means “now”: rrdtool stamps the update with the current time. $movers is the value you want to plot.

3. Make a graph

Add this to another .sh script. This you can run at whatever interval you want.

#!/bin/sh
rrdfile='movers.rrd'
rrdtool='/usr/bin/rrdtool'
dpath="/var/www/html/graphs/movers"
alltimeimage="$dpath/movers.png"
lastweekimage="$dpath/moverslw.png"
lastmonthimage="$dpath/moverslm.png"
last4hoursimage="$dpath/moversl4h.png"
last3monthsimage="$dpath/moversl3m.png"
lastday="$dpath/moversld.png"
enddate=$(date +%s)
#enddate is the same as "now"
cd /home/$user/rrd

$rrdtool graph $alltimeimage --end now --start 1321603000 \
        -v from_beginning -t active_movers \
        DEF:movers=$rrdfile:movers:AVERAGE LINE:movers#000000
#rrdtool graph /path/to/image.png --end now --start when_I_started_capturing -v label_left -t title_top \
#DEF: as I only have one I only used movers, maybe you can change the names in case you have several data sources
#DEF: you can also use other things than AVERAGE (like MIN/MAX)
#LINE: #000000 is black

$rrdtool graph $lastweekimage --start -1w \
        -v last_week -t active_movers \
        DEF:movers=$rrdfile:movers:AVERAGE LINE:movers#000000 \
        AREA:movers#8C2E64 \
        GPRINT:movers:LAST:"Current\: %1.0lf" \
        GPRINT:movers:MAX:"Max\: %1.0lf" \
        GPRINT:movers:MIN:"Min\: %1.0lf" \
        GPRINT:movers:AVERAGE:"Avg\: %1.0lf"

#If you want to make it a little more complex. AREA fills the space between the value and the x-axis.
#GPRINT statements print some values relating to the graph.

$rrdtool graph $last4hoursimage --end now --start end-4h \
        -v last_4_hours -t active_movers \
        DEF:movers=$rrdfile:movers:AVERAGE LINE:movers#000000

$rrdtool graph $lastmonthimage --end now --start -1m \
        -v last_month -t active_movers \
        DEF:movers=$rrdfile:movers:AVERAGE LINE:movers#000000

$rrdtool graph $last3monthsimage --end now --start -8035200 \
        -v last_3_months -t active_movers \
        DEF:movers=$rrdfile:movers:AVERAGE LINE:movers#000000

$rrdtool graph $lastday --end now --start -1d \
        -v last_day -t active_movers \
        DEF:movers=$rrdfile:movers:AVERAGE LINE:movers#000000

4. Crontab – scheduling

*/5 * * * * /bin/bash /home/$user/rrd/rrd.update.sh > /dev/null 2>&1
#every 5 minutes
*/15 * * * * /bin/bash /home/$user/rrd/rrd.graph.sh > /dev/null 2>&1
#every 15 minutes

 

5. Final Words

I am not providing the data gathering script here as you probably won’t need it: it lists movers (transfers) on all pools in a dCache system.

Script To Check For an Update on a Web Page

Hey!

I use this on my Linux workstation to get a notification, whenever I open a new terminal, if there is a new spotify release. It would be applicable to other (preferably simple) pages that aren’t updated frequently.

Reason: http://repository.spotify.com/pool/non-free/s/spotify/

I wanted to see if there was a new spotify release for Linux/QT.

Method: The URL is above – but what if I do not want to go there every day and get disappointed?

Way nicer to have a script do it for me.

This script saves the index.html from the URL above each day.

Then each day when it downloads the .html it checks if it’s different from yesterday.

This has its limitations: comparing only against yesterday means that if an update happens over the weekend I would never know.

So the script checks the last five days instead: it diffs today’s file against each of the previous days’ files and writes any differences to a log file. It then checks whether that log file is non-empty.

If it is, it writes a small notification script to a file that is referenced in $HOME/.bashrc.

The layout of the blog doesn’t like really long lines in <pre>, but you can select below and only get the post (and not the stuff on the right side).

spot_check.sh:

#!/bin/sh

dat1=$(date +%Y.%m.%d)
daty=$(perl -MPOSIX=strftime -le 'print strftime "%Y.%m.%d",localtime (time - 86400)')
dat2=$(perl -MPOSIX=strftime -le 'print strftime "%Y.%m.%d",localtime (time - 172800)')
dat3=$(perl -MPOSIX=strftime -le 'print strftime "%Y.%m.%d",localtime (time - 259200)')
dat4=$(perl -MPOSIX=strftime -le 'print strftime "%Y.%m.%d",localtime (time - 345600)')
dat5=$(perl -MPOSIX=strftime -le 'print strftime "%Y.%m.%d",localtime (time - 432000)')

path="$HOME/Downloads/Spotify/saved"
out="$HOME/Downloads/Spotify/diff.log"
bout="$HOME/.spotcheck"
wget -q http://repository.spotify.com/pool/non-free/s/spotify/ -O $path/$dat1.html

diff -q $path/$dat1.html $path/$daty.html > $out
diff -q $path/$dat1.html $path/$dat2.html >> $out
diff -q $path/$dat1.html $path/$dat3.html >> $out
diff -q $path/$dat1.html $path/$dat4.html >> $out
diff -q $path/$dat1.html $path/$dat5.html >> $out

if [ -s $out ] ; then
    echo "$out is not empty"
    echo "#!/bin/sh" > $bout
    echo "echo new spotify release" >> $bout
    chmod +x $bout
else
    echo "$out is empty"
    echo "No new spotify release."
    rm -f $bout
fi
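The five near-identical perl/diff lines could also be folded into a loop. A sketch assuming GNU date (for the -d flag) and the same file layout as above:

```shell
#!/bin/sh
# check_for_update DIR OUT: diff today's saved page in DIR against the
# previous five days' pages; exits 0 (update found) if any of them differ.
check_for_update() {
    dir=$1
    out=$2
    today=$(date +%Y.%m.%d)
    : > "$out"
    for i in 1 2 3 4 5; do
        day=$(date -d "-$i day" +%Y.%m.%d)   # GNU date extension
        [ -f "$dir/$day.html" ] && diff -q "$dir/$today.html" "$dir/$day.html" >> "$out"
    done
    [ -s "$out" ]
}
```

You would then call `check_for_update "$HOME/Downloads/Spotify/saved" diff.log` and act on the exit status, just like the -s test above.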

Crontab (daily at 0915):

15 09 * * * /bin/bash /home/username/Downloads/Spotify/spot_check.sh 2>&1

.bashrc:

if [ -f ~/.spotcheck ]; then
    ~/.spotcheck
fi

Ubuntu + Automatic Software Updates

How often do you actually log on to your machine and hit

sudo apt-get update; sudo apt-get upgrade

without reading what the changes are? I do it every time, unless it’s a dist-upgrade we’re talking about.

So how do we get this going?

The tool you’re looking for is called cron-apt.

sudo apt-get install cron-apt

This installs postfix for you as well (I chose local server, bah to e-mails, no pain, no gain).
After this, edit /etc/cron.d/cron-apt to your preferences.
If you want to see what it does – just hit what it says in that file:

test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt

and see what it does!

test -x checks that the file exists and has execute permission.
The part after && runs it (but this did not produce any output for me).
Check out /var/log/cron-apt/log for details of what it does.

Please note that cron-apt also runs “apt-get dist-upgrade”, which may install or remove packages to satisfy changed dependencies. So be careful.
It also runs autoclean :)

If you want more details – it’s possible to do this other ways (for example with anacron and /or bash scripts).
See this link: https://help.ubuntu.com/community/AutoWeeklyUpdateHowTo

Install Drupal 7 in Debian 6

Time for another go!

Drupal is ..

.. a pretty famous and widely used CMS out there – so here we go ->

1. Get sudo configured on debian. Sucks to have to log on as root all the time when installing apps etc.

2. Download and untar drupal 7

3. Read INSTALL.TXT

Requirements:

– A web server. Apache (version 2.0 or greater) is recommended.
– PHP 5.2.4 (or greater) (http://www.php.net/).
– One of the following databases:
– MySQL 5.0.15 (or greater) (http://www.mysql.com/).

“sudo apt-get install lamp-server^” does not work in Debian 6 :/

Following this guide instead.

  1. aptitude update  and then upgrade (maybe not necessary because I used apt-get.. why have two??)
  2. sudo apt-get install mysql-server mysql-client (in Debian 6 you put in sql root user password during install)
  3. sudo apt-get install apache2 php5 php5-mysql libapache2-mod-php5 phpmyadmin
  4. Surf to http://ip/phpmyadmin and log on to the mysql db – does it work? yay!
  5. Create drupal db – see INSTALL.mysql.txt – basically this just tells you to create a database and a user. It asks you to do this via manual SQL queries, but we have phpmyadmin, so we just have to: 1. click on databases and create a new one; 2. after that, click on privileges and create a new user; 3. type in a username and password and leave the rest at the defaults.
  6. Copy extracted files to your www directory. Beware of rights, use chmod and possibly chown. /var/www/ is the default directory.
  7. Surf to http://ip/drupal (where install.php is)
  8. Standard setting
  9. Then it complains that it doesn’t have access – fixed by setting chmod 777 on the ‘sites’ directory under /drupal.
  10. Then I need to copy a file and make it writeable, just doing what the script tells me to.
  11. Configure the database settings.
  12. Now you can remove write access permissions on the sites/default directory and sites/default/settings.php
  13. Put in contact and admin accounts stuff.
  14. Done! Wow, that was easy :)

So much to do in there!
I will have to get back about this in another post :)

Customized WebMail Notifier (x-notifier) Script for Squirrelmail

I use the WebMail Notifier / or as it’s nowadays called – the X-notifier plugin in Firefox to see if I have gotten any new e-mails.

The standard ones – gmail or hotmail works great, but there are also scripts (xnotifier scripts here) to make this work with your own – or other e-mails based on other e-mail servers, for example Squirrelmail.

To customize this to work with your own setup you may need to change the script available on the link above (as of version 2011-01-04).

If your squirrelmail web server enforces https and is installed on for example https://guldmyr.com/squirrelmail and not https://guldmyr.com/src (which the script by default assumes), you will have to alter the script.

I had to change this function in the code to make it work:

function init(){
  if(this.server){
    if(this.server.indexOf("https")!=0)this.server="https://"+this.server;
    if(this.server.charAt(this.server.length-1)=="/")this.server=this.server.substring(0,this.server.length-1);
  }else if(this.user.indexOf("@")!=-1)this.server="https://mail."+this.user.split("@")[1];
  this.loginData=[this.server+"/squirrelmail/src/redirect.php","login_username","secretkey"];
  this.dataURL=this.server+"/squirrelmail/src/left_main.php";
  this.mailURL=this.server+"/squirrelmail/src/webmail.php";
}

The WebMail notifier script squirrelmail_guldmyr

The X-Notifier script squirrelmail_guldmyr_xnotifier

Lifehack currency

Haven’t gotten around to the e-mail script yet; what would qualify? I check my e-mail so often anyway that a notifier is not something I want, and I also don’t get that many e-mails.

On to another script: one that would assist me when I need to send money between a non-euro country and a euro country. How do I keep track of when the non-euro currency gives as many euros as possible?

Also, good thing I checked with the girlfriend:
When sending money to a non-euro country from a euro country, you want to get as many non-euro as possible.

When sending money to a euro country from a non-euro country, you want to use as few non-euro as possible to make a euro.
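To make that concrete with the EUR/SEK rate (how many SEK one EUR buys), here is a quick awk check of the first direction, using a made-up rate value:

```shell
# Sending 100 EUR to Sweden: you receive eur * rate SEK, so a HIGH
# EUR/SEK rate is what you want (the rate here is just an example).
awk -v eur=100 -v rate=8.9134 'BEGIN { printf "%.2f SEK\n", eur * rate }'
```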
#!/bin/sh

webpage=http://url.com/page.html
inputfile=/home/user/valuta/valuta.html
play=/home/martbhell/valuta/playfile
outputfile=/var/www/valuta.log

wget $webpage -O $inputfile
cat $inputfile | grep EUR/SEK -m 1 > $play
awk '{print $4}' $play >> $outputfile
date >> $outputfile

This is how the bash script looks like at the moment.

I would prefer to have the date after the value of EUR/SEK because then it’s a lot easier to read.
But I was thinking maybe I can sort this out via a php-script when presenting the file.
Basically only every second line should get a line break, not every line, which is how the file looks after the bash script above; see below:

8.9134
Tue Jan 18 10:15:05 PST 2011
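One way to get the value and its date on one line without changing the script is to join every pair of lines when presenting the file. A sketch with `paste`, assuming the log always contains complete value/date pairs:

```shell
# Recreate the two sample lines, then glue each pair together with a space
printf '8.9134\nTue Jan 18 10:15:05 PST 2011\n' > valuta.log
paste -d' ' - - < valuta.log
```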

In Ubuntu Server (what I’m running as a virtual machine to test the scripts) to set timezone it is: dpkg-reconfigure tzdata

** just about to go to bed, but why don’t I remove the date from the lines, and then instead add them via php? .. hmm, but then I won’t have the date of when the value was taken.. is it important? It seems kind of relevant. It’s not really the date the exchange rate was updated, just when the script was run, so perhaps during the night we’d see updates on the numbers but not on the rate.
To be contemplated I guess – also maybe put this in a mysql db? **

** After sleeping on it I ended up doing it like this:

wget $webpage -O $inputfile
cat $inputfile | grep EUR/SEK -m 1 > $play
# - this greps for EUR/SEK and only line 1 (there are three on that .html)
P=$(awk '{print $4}' $play)
# - column 4 from output file of the above cat/grep
S=$(date)
# - just to put the date in a variable
T='<br>'
# - to put an HTML BR tag at the end of each line
U=,
# - to add a comma between the values, might come in handy if I want to import/export this.

echo $P$U $S$U $T >> $outputfile

This does the trick and the output now looks like this in the file, and comes out like that on the webpage too.

8.9082 Wed Jan 19 07:40:54 EET 2011
8.9082 Wed Jan 19 07:50:54 EET 2011

Now I just have to find a way to run this and put it on my webhost :)
**
OK, I believe I have a way, but it’s not so good as it’s via insecure FTP :/

Thinking about the mysql thing, but as my webhost doesn’t have a shell, that would mean some sneaky “shortcuts” to get the things into the db :/
**
01/20/2011 – Added CSV to the output file too.

Common Passwords

So I read in Swedish IDG today about the most common passwords.

At number 4 we had “lifehack”. I had never heard of this before, so I checked Wikipedia: apparently it is a quickly written script intended to simplify life, for example by filtering a feed such as e-mail or RSS!

Unfortunately I do not have a lifehack :/ But it would be nice to have one!
Think I will try to set one up as a filter on my e-mail account – does this qualify?

p.s. also I doubt that this is #4 in Sweden – I strongly suspect IDG Sweden just translates English news
d.s.

p.s.2 Today’s song:  James Vincent McMorrow – If I had a Boat
d.s.2