Recipes: Overnight Oats with Blueberries


  • Quark (soft), 250 g
  • Milk, 100 g
  • Runny honey, 6-10 g
  • Frozen blueberries, 100 g
  • Rolled oats, 40 g
  • a little cardamom, if you like
  • 1 container


Put the blueberries in the container and wait about an hour. Then add all the other ingredients and mix, taking care that the blueberries don't get more crushed than you want. Close the container and put it in the fridge overnight.

Enjoy in the morning!

Contributing To OpenStack Upstream

Recently I had the pleasure of contributing upstream to the OpenStack project!

A link to my merged patches:

At a previous OpenStack summit (these days called OpenInfra Summits), Vancouver 2018, I went a few days early and attended the Upstream Institute.
It was about 1.5 days long, if I remember right. Looking up my notes from that, these were the highlights:

  • Best way to start getting involved is to attend weekly meetings of projects
  • Stickersssss
  • A very similar process to RDO with Gerrit and reviews
  • Underlying tests are all done with Ansible, and they have ARA enabled so one gets a nice web UI to view results afterwards. Logs are saved as part of the Zuul testing too, so one can really dig in and see what is tested and whether something breaks when it's being tested.

Even though my patches came a bit over a year after the Upstream Institute, I could still figure things out quite quickly with the help of the guides, get bugs created and patches submitted. My general plan when first attending it wasn't to contribute code changes, but rather to start reading code, perhaps find open bugs and so on.

The thing I wanted to change in puppet-keystone was apparently also possible to change in many other puppet-* modules, and less than a day after my puppet-keystone change got merged into master, someone else picked up the torch and submitted similar changes to ~15 other repositories :) Pretty cool!

Testing is hard! is one backport I created for puppet-keystone/rocky, and the Ubuntu testing was not working initially (it started with an APT mirror issue, and later it was slow and timed out)… After 20 rechecks and two weeks it still hadn't successfully passed a test. In the end we got there, though, with the help of a core reviewer who updated a mirror and later disabled some tests :)

Now, the change itself was about "oslo_middleware/max_request_body_size", so that we can raise it above the default of 114688. The Pouta Cloud had issues where our federation user mappings were larger than 114688 bytes and we couldn't update them anymore; it turned out they were being blocked by oslo_middleware.

(does anybody know where 114688 bytes comes from? Some internal speculation has been that it is 128 kilobytes minus some headers)
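For what it's worth, 114688 is exactly 112 × 1024 bytes, i.e. 128 KiB minus 16 KiB, so the "minus some headers" theory at least fits the arithmetic. The knob itself ends up in the service configuration roughly like this (the raised value below is an arbitrary example, not necessarily what we deployed):

```ini
# keystone.conf -- sketch; applies to any service using oslo.middleware's sizelimit
[oslo_middleware]
# default is 114688; raise it so large federation mappings can be updated
max_request_body_size = 262144
```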

Anyway, the mapping we have now is simplified: just a long list of "local_username": "federation_email", domain: "default" entries. I think the next step might be to figure out whether we can write the rules using something like the below, instead of hardcoding the values into the rules:

"name": "{0}" 

It's been quite hard to find examples that match our use case exactly (and playing around with it is not a priority right now, just something in the backlog, but it could be interesting to look at when we start accepting more federations).
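For illustration, a keystone mapping rule using that kind of substitution could look roughly like this. The remote attribute name is made up, and I haven't verified this exact rule against our setup:

```json
{
  "rules": [
    {
      "local": [
        {
          "user": {"name": "{0}"},
          "domain": {"id": "default"}
        }
      ],
      "remote": [
        {"type": "OIDC-email"}
      ]
    }
  ]
}
```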

All in all, I’m really happy to have gotten to contribute something to the OpenStack ecosystem!

Taking puppet-ghostbuster for a spin

We use puppet at $dayjob to configure OpenStack.

I wanted to know if there’s a lot of unused code in our manifests!

**From left of stage enters:**

Step one is to install the puppet modules and gems and whatnot; this blog post was good about that:

Next I needed to get the HTTP forwarding to the puppetdb working; this can apparently be done (I learnt about ssh -J) with:

ssh -J INTERNALIPOFPUPPETMASTER -L 8081:localhost:8080

Then came setting some variables, pointing to hiera.yaml:


Unsure if the hiera.yaml works; I just copied it in from the puppetmaster.

Then running it:

find . -type f -name '*.pp' -exec puppet-lint --only-checks ghostbuster_classes,ghostbuster_defines,ghostbuster_facts,ghostbuster_files,ghostbuster_functions,ghostbuster_hiera_files,ghostbuster_templates,ghostbuster_types {} \+ | grep OURMODULE

Got some output! Are they correct?

./modules/OURMODULE/manifests/profile/apache.pp - WARNING: Class OURMODULE::Profile::Apache seems unused on line 6

But actually we have a role that contains:

class { 'OURMODULE::profile::apache': }

So I'm not sure what is up… But if I don't run all the ghostbuster checks and instead skip the ghostbuster_classes test, I get a lot fewer warnings for our module.

./modules/OURMODULE/manifests/profile/keystone/user.pp - WARNING: Define OURMODULE::Profile::Keystone::User seems unused on line 2

Looking at that one, we have an "OURMODULE::profile::keystone::user" define which calls keystone_user and keystone_user_role. We do call it, but like this:

OURMODULE::Profile::Keystone::User<| title == 'barbican' |>

Or in this other place:

create_resources(OURMODULE::profile::keystone::user, $users)

Let's look at the next one, which was also a "create_resources". Meh, same same. And if I skip the ghostbuster_defines test? No errors :) Well, it was worth a shot. Some googling on the topic hints that it might not be possible with the way Puppet works.

Home Network Convergence

Finally got around to sorting out an issue: the TV + Chromecast near the TV were on another network than the media server, and thus I couldn't stream videos using my phone.

I've been thinking lately, and in previous posts, that maybe I should just get an access point and plug it into a port in the correct VLAN near the TV, as mentioned in previous posts.

But then the other day I started looking at whether the Raspberry Pi I have as a media player could be turned into an access point (some googling suggests it can be done, but most guides describe a basic Linux install with hostapd and dnsmasq; maybe OpenWrt would be more fun).

Then I realized that I already have an access point over there, which is what the phones and the Chromecast are connected to, and I don't want a third wifi network at home!

Finally, the solution was to get the media server onto the same network as the Chromecast. After the VLAN changes I could do this quite easily.

– take the desktop's cable and put it in a dumb 1GbE switch I had lying around unused
– run a new cable from my desktop's system board NIC to the same switch
– at this point, ssh into the media server from the Internet (because it has no monitor/keyboard)
– add a USB NIC to the media server and connect it to the switch
– set up a static IP on the new NIC, without a default gateway etc.
– update firewalls

Things learnt:
– the USB NIC got a funny, long interface name when I plugged it in. On the next reboot it became eth0, so the network interface config I wrote initially didn't really work anymore :)
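One way to pin the name across reboots and re-plugs (assuming a systemd-based distro; the MAC address and name below are placeholders) is a systemd.link file matching on the NIC's MAC address:

```ini
# /etc/systemd/network/10-media-usb.link (hypothetical)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lanmedia
```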

Feels good to no longer have to use that old, unmaintained media player on the Raspberry Pi. The Android app I use even supports EAC3!

Next I'm wondering what to do with that Raspberry Pi! RetroPie, maybe?

VLAN in the home network!

Above is a previous post in this series about some improvements to my home network, with two modems from two ISPs.

So! On Alibaba I found two Hasivo 8x1GbE managed fanless switches with VLAN support. Delivery time to Finland was really quick. The listing didn't say whether they included European adapters (OK, I didn't read everything or ask the seller), but it turns out they did!

To recap: the idea was to use the one long cable and transport two VLANs over it. Beyond that, how I would actually implement it was a bit fuzzy.

New layout. Numbers in the switch-like boxes are VLAN IDs.

Things I’ve learnt while connecting these:

  • Creating a VLAN subinterface in Windows 10 seems to require Hyper-V… This means if I have one machine and want it in both VLANs, I need two NICs. No bother: I found a USB 3 1GbE adapter in a box at home while cleaning :)
  • I knew about VLAN trunk cables, and the way a trunk is implemented in this web interface is to set both VLANs as tagged on the same port.
    • The web interface of this switch has two pages about VLANs. One is a static setup where you say which port is a member of which VLAN and whether it's tagged or untagged. Changing the default or removing a port from VLAN 1 was not possible on this first screen. On the second, however, one can change the PVID, which is the untagged/native VLAN.
  • Also found a few extra short ethernet cables in old ISP modem boxes; very nice to have, as this exercise required a few more cables.
  • So on the desktop I now need to choose which network interface to use to get to the Internet. I learnt that if I just remove the IPv4 default gateway for ISP A and use the NIC to ISP B, then IPv6 from ISP A will still be there and get used :)
First VLAN config page: the static/tagged VLAN setup
Second VLAN config page: the native VLAN / PVID configuration on one switch
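On a Linux box this is easier than on Windows: no Hyper-V needed, just a VLAN subinterface on top of the trunked port. A netplan sketch (the interface name, VLAN id 10 and DHCP are assumptions for illustration):

```yaml
# /etc/netplan/10-vlan.yaml -- hypothetical Linux host on a trunk port
network:
  version: 2
  ethernets:
    enp3s0: {}
  vlans:
    enp3s0.10:
      id: 10
      link: enp3s0
      dhcp4: true
```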

Some more bits about the switches are in order:

On a related note, the modems have switches built in, and I also had a 6-port fanless unmanaged switch which had been working great for the last 6 years or so, but now it got retired; yay, one less UK plug adapter :). I prefer using an extra switch over the modems' built-in ones: the modems sometimes reboot, which is annoying as it interrupts anything I'm doing, even if it's only local without going to the Internet.

They have a very basic-looking CGI web interface, which is only accessible on VLAN 1. The firmware is from 2013, version v1.0.3. I asked the seller (who was very responsive to all of my questions) and apparently a newer one is in the works, but unfortunately there's no way to subscribe to any news about new firmware coming… I doubt it'll ever come.

One switch-like quality: to persist the running configuration you make in the web interface, you have to click save.

There is a manual; one just has to ask the seller on Alibaba for it. Attaching it here for convenience.

All in all this worked out quite nicely. We’ll see how this keeps up. Some further avenues of interest:

  • On my desktop I now use the USB NIC to get to the Internet. I tried once to use the system board NIC but then had some issues… perhaps that one is a bit faster. Using a USB 3 port vs a USB 2 port gave about half a millisecond lower latency to the place I usually ping
  • Response time on the DSL is a bit higher (17 vs 12 ms):
    • tracert shows 17ms to the first hop with the DSL's ISP
    • tracepath shows 10ms to the first hop with the cable modem's ISP
    • pinging the DSL modem is 1ms vs 3ms for the cable modem
    • ping6 is 10ms with DSL
  • Maybe it's time to look into a cheap AP to plug in near ISP modem B, but connected to VLAN 10, so wifi clients there can reach the server…
  • The switches have a bunch of other settings that could be fun to play with too.

Was the layout diagram above not clear? Try this:

Some updates to the home network 1/2

Current layout:

  • The corner:
    • Cable MODEM NAT&WiFi ISP A
    • One server
    • One desktop who should be on both networks, default gw on one
    • Phones and tablets wifi
  • TV Area:
    • DSL Modem NAT&WiFi ISP B
    • One raspberry pi connected to the server
    • Phones and tablets wifi
    • One chromecast, would be nice to have connected to the server too
    • One ps3
  • 20m, a microwave, and walls in between the two areas (and most importantly the server and the raspberry pi) so wifi is spotty.

Most important factor: one long-ass 30m UTP cable connecting the raspberry pi to the same network as the server

It would be cool to: A) be able to connect the desktop to the modem out by the TV, and B) get the Chromecast (wifi-only) onto the same network as the server, perhaps with an AP for ISP A's network near the TV area

Stay tuned for another post in the hopefully near future when I’ve got something working to help with A/B :)

Update: another graphical representation of the networks:

A story about writing my first golang script and how to make a mailman archive summarizer

Time to try out another programming language!

I see Golang quite frequently in my twitters, so I've been thinking for a while: why not give it a shot for the next project!


Took a while to figure out what would be a nice small project. Usually my projects involve some kind of web scraping that helps me somehow; one of them tells me whether there "Was An NHL Game Yesterday". In this case, too, it turned out I wanted something similar, but this time it was work related. I have been tinkering with this for a week or so, on and off. Today, the day after Finnish Independence Day, I thought: let's get this going!

For $dayjob I'm in a team that, among many other things, manages a Ceph rados gateway object storage service. Ceph is quite a big (OK, quite an active) project and its mailing lists are a decent (OK, I don't know a better) place to stay up to date. For example, the ceph-users list has lots of interesting threads. However, it sometimes gets 1000 messages per month! This is way too many for me, especially since most of them are not that interesting to me, as I'm not an admin of any Ceph clusters; our services only use them :)

So the idea of an aggregator or filter was born. The mailing list has a digest option when subscribing, but it doesn’t have a filter.

Enter “mailman-summarizer“!

As usual when I play around in my spare time, I try to document much more than is necessary. But if I ever need to re-read this code in a year or two because something broke, I want to save myself some time. Most likely I won't be writing much more Go between now and then, so the things I learnt while writing this piece will probably have been purged from memory!

The end result as of right now looks like below in one RSS reader:

In summary the steps to get there were:

  • Some web scraping of the mailman/pipermail web archive of the ceph-users e-mail list. Golang here was quite different from Python and BeautifulSoup: the library I used works with callbacks. I didn't look too deeply into those, but things did not happen in the order they were written. Maybe callbacks can be used to speed things up a bit, but the slowest part of this scraping is the 1s + random delay I keep between the HTTP GETs to be nice to the Internet ;)
  • It loops over the months (thread.html) for some of the years and only saves links and titles which have "GW" in the title.
  • Put this in a map (Golang is different here too: kind of like a Python dictionary, but one has to initialize it in advance. Lots of googling involved :)
  • Loop over the map and create RSS, JSON, ATOM or HTML output using the gorilla/feeds pkg. Use of the time pkg was needed to get nice date fields into the RSS; this was interesting: Go's reference layout is not the UNIX 1970 epoch but some magic date in 2006. Also, most functions give a value AND an error on the call, which makes declaring a variable look a bit funny:
for l, _ := range data {
    keys = append(keys, l)
}

 That was the golang part. I could have just taken the output, stored it in a file and put it on a web server somewhere. The NHL site uses Google's object store, but it has an App Engine Python app in front. So I took a break, watched some NHL from yesterday, and during the breaks thought about what would be a slim way of publishing this feed. I did not want to run a virtual machine or container constantly; the feed is static HTML and can just be put in an object store somewhere. It would need a place to run the code, though, to actually generate the RSS feed!

I’m a big fan of travis-ci and as part of this project the continuous integration does this on every commit:

  • spawn a virtual machine with Go configured (this is all part of travis(/any other CI system I’ve played with), just needs the right words in .travis.yml file in the repo)
  • decrypt a file that has the credentials of a service account which has access to a bucket or two in a project in google cloud
  • compile mailman-summarizer
  • run a bash script which eventually publishes the RSS feed on a website. It does this to a staging object storage bucket:
    • go runs “mailman-summarizer -rss” and writes the output to a file called feed.xml
    • uses the credentials to write feed.xml to the bucket and make the object public-readable
    • Then the script does the same to the production bucket

One could improve the CI part here a few ways:

  • Right now it uses the Travis script provider in the deploy phase. There is a 'gcs' provider, but I couldn't find documentation for how to specify the JSON file with the credentials like with App Engine. I get the feeling that because it's not easy, I should probably use App Engine instead…
  • One could do more validation, perhaps validating the RSS feed before actually uploading it. But I couldn't find a nice program that would validate the feed. There are websites for it though, so I used one manually. Maybe RSS feeds aren't so cool anymore; I use them a lot though.
  • An e2e test would also be cool: for example, fetch feed.xml from staging and make sure it is the same as what was uploaded.

Certified OpenStack administrator – check!

Yay! Took the exam last week after having studied a few days. Nothing on the list of requirements seemed impossible, at least :)

I thought that because it's done online it could be scheduled almost on demand, but one had to wait at least 24h for the exam environment to get provisioned.

The online proctoring part was a first for me. It'll certainly help if you have a non-cheap webcam (with a longer wire) that can be moved around.

The results arrived after only a day: 96%, so I missed something small somewhere. Maybe about Swift, if I have to guess :)

I always liked these practical exams. One really needs some experience with what is being tested; I don't think it is possible to just study. Fortunately it's easy to install a lab environment!

Playing with devstack while studying for OpenStack Certified Administrator

Below I’ll go through some topics I thought about while reading through the requirements for COA:

  • Users and passwords because we use a LDAP at $dayjob. How to set passwords and stuff?
    • openstack user password set
    • openstack role add --user foo member --project demo
  • Users and quota. Can one set openstack to have user quota? 
    • guess not :)
  • How to default quota with CLI?
    • nova quota-class commands. Found in operator’s guide in the docs.
  • Create openrc without horizon
    • TIL that OS_AUTH_URL in devstack is http://IP/identity, no separate port :) And I couldn't really find a nice way. Once it's working, there is an "openstack configuration show" though, which tells you stuff.
  • Cinder backup
    • cool, but this service is not there by default in devstack.
  • Cinder encryption 
    • another volume type, with encryption. Shouldn't need Barbican with a fixed_key, but I don't know; cinder in my devstack wasn't really working so I couldn't attach a volume and try it out. I have some volumes with an encryption_key_id of "000000…", so maybe? Attaching my LVMs isn't working for some reason, complaining about the initiator.
  • Cinder groups.
    • Details found under the cinder admin guide for Rocky, not Pike. Using the cinder command one can create volume group types, then volume groups, and then volumes in the volume group. After you have added volumes into a group you can take snapshots of the whole volume group, and also create a new volume group (and volumes) from the list of snapshots.
  • Cinder storage pool
    • backends. In devstack it's devstack@lvmdriver-1. Apparently one can set volume_backend_name both in cinder.conf and as a property.
  • Object expiration. Supported in Ceph rados gateway? Yes, but only since Luminous.
    • available in default devstack, done with a magical header, X-Delete-After: seconds (or X-Delete-At: epoch)
  • Make a Heat template from scratch using the docs. 
    • can be made quite minimal
  • Update a stack
  • Checking status of all the services
  • Forget about ctrl+w.

Study Environment

A devstack setup on Ubuntu 18.04, in a VM in the $dayjob cloud. This means no nested virtualization, and I wondered how unhappy Neutron would be because of port security. But it's all within one VM; it started OK. Not everything worked, but that's fine with me :) Probably just needs a local.conf that is not the default!

One thing I had to figure out was the LVM setup for cinder. Always fun to read logs :)

Studying for Openstack Certified Administrator

The plan: study a bit and then attempt the COA exam. If I don't pass, then attend the SUSE course during the OpenStack summit.

And what to study? I've been doing OpenStack admin work for the last year or two, so I have already done and used most services, except Swift. But there are some things that were only done once, when each environment was set up. Also, at $dayjob our code does a lot for us.

One such thing I noticed while looking through the exam requirements was setting the default project quota. I wondered whether that's a CLI/web UI/API call or service config. A config file would be weird, unless it's in Keystone. Turns out default quotas live in each of the services' config files. It's also possible to set a default quota with, for example, the nova command.
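As a sketch, the service-side defaults look roughly like this in nova.conf (the numbers are arbitrary; on releases as old as Newton the options were quota_instances etc. under [DEFAULT] rather than a [quota] section):

```ini
# nova.conf -- per-project default quotas (sketch)
[quota]
instances = 20
cores = 40
ram = 51200
```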

Another perhaps useful thing I did was to go through the release notes for the services. $dayjob runs Newton, so I started with the release after that and tried to grok and look for the biggest changes. The introduction of placement was one of them, and I got an introduction to it while playing with devstack and a "failed to create resource provider devstack" error. Looking through the logs I saw an HTTP "409 Conflict": placement was complaining that the resource already existed. So somehow during setup it had been created, but in the wrong way? I deleted it and restarted nova; it got created automatically, and after that nova started acting a lot better :)

– now with user preferences!

As part of learning some more about modern web development, I've learnt that cookies are out of favour now and one should use some kind of local storage in the web browser instead. One of these is Web Storage. The site got some more updates over the weekend :)

Now if you choose a team in the /menu and later (from the same browser) visit the site, you'll get the results for that team. The selection can be cleared at the bottom of the menu.

– site rename and automatic deployments!

This is a good one!

Previous entries in this series:

Renamed to

First things first! The website has been renamed! Nobody in their right mind would type out the full name, so now it's an acronym of wasthereannhlgameyesterday. Using Sweden's .se top-level domain because there was an offer making it really cheap :)


Automatic testing and deployment

The second important update is that we now do some automatic testing and deployment.

This is done with Travis CI, where one can view the builds; the configuration is done in this file.

In Google Cloud there are different versions of the app deployed. If we don't promote a version, it will not be accessible from the main domain but via some other URL.

Right now the testing happens like this on every commit:

  1. deploy the code to a testing version (which we don’t promote)
  2. then we run some scripts:
    1. pylint on the python scripts
    2. an end to end test which tries to visit the website.
  3. if the above succeeds we deploy to master (which we do promote)

– now using object storage!

To continue this series of blog posts about the awesome web site where you can see whether there was, in fact, an NHL game last night :)

Some background: first I had a Python script that scraped the NHL website, and later I changed that to just grab the data from the JSON REST API; much nicer. But it was still outputting the result to stdout as a set and a dictionary, and then the application would import this file to get the schedule. This was quite hacky and ugly :) But hey, it worked.

As of this commit it now uses Google’s Cloud Object Storage:

  • a special URL (one has to be an admin to be able to access it)
  • there’s a cronjob which calls this URL once a day (22:00 in some time zone)
  • when this URL is called a python script runs which:
    • checks what year it is and composes the URL to the API so that we only grab this season’s games (to be a bit nicer to the API)
    • does some sanity checking – that the fetched data is not empty
    • extracts the dates and teams as before and writes two variables,
      • one list which has the dates when there’s a game
      • one dictionary which has the dates and all the games on each date
        • probably the last would be enough ;)
    • finally always overwrites the schedule


To only update it when there are changes would be cool, as then I could notify myself (and possibly others) when something changed, but it would mean that the JSON dict has to be ordered, which it isn't by default, so I'd have to change some stuff. GCSFileStat has a checksum-like piece of metadata on the files called ETag. But it would probably be best to first compute a checksum of the generated JSON and add that as extra metadata on the object, as the ETag is probably implemented differently between providers.

With the NHL 2017-2018 season coming up and some extra spare time, I thought: why not finally fix this great website again :)

NHL changed the layout of their schedule page about two seasons ago: these days there's "infinite scrolling", or whatever it's called when the page only loads what you see on the screen. This makes the page a bit difficult (but not impossible) to scrape.

Lately I’ve been using REST API and JSON data for quite many things – after a short search I managed to find this hidden gem:,schedule.linescore,schedule.broadcasts,schedule.ticket,

Now that's a link to an API provided by the NHL where you get the schedule, and you can filter it. I'm not sure what all the parameters do; they're not all needed. You just need startDate and endDate. The API also has standings and results. I have not managed to find any documentation for it; the best so far seems to be this blog post. So I'm not sure whether it's OK to use it or if there are any restrictions.

p.s. – there is a shorter URL to the main page: – but the commands – like – do not work.

Was there an NHL game last night?

haproxy lab setup!

Been seeing haproxy more and more lately, as it seems even the stuff I work with is moving towards the web :)

So a good time as any to play around with it!

The first setup is the tag "single-node" in the repo; it configures one Apache httpd and one haproxy. In Apache it creates multiple vhosts with content served from different directories, and then points to each of these as a haproxy backend.
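The resulting haproxy configuration looks roughly like this (a hand-written sketch with made-up names and ports, not the playbook's actual output):

```
# haproxy.cfg sketch: one frontend balancing over the vhost backends
frontend web
    bind *:80
    default_backend vhosts

backend vhosts
    balance roundrobin
    server vhost1 127.0.0.1:8081 check
    server vhost2 127.0.0.1:8082 check
```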

To illustrate the load balancing, the playbook also installs PHP and shows the path of the file being served.

I used Ansible for this and only tested it with CentOS 7 in an OpenStack. The playbook also sets up some "dns" in /etc/hosts.

There are also “ops_playbooks” for disabling/enabling backends and setting weights.

I wonder what's a good next step. Maybe multiple hosts / Docker containers? Maybe SSL termination + Let's Encrypt? Maybe some performance benchmarking/tuning?
I like the help for the configuration file – it begins with some detail about what an HTTP request looks like :)

Automated testing of ansible roles

What is this?

Basic idea: whenever most things happen in your ansible repository (for example commit, pull request or release) then you want to automatically test the ansible code.

The basic tools:

  • syntax-checking
  • lint / codying style adherence
  • actually running the code
  • is it idempotent
  • does the end result look like you want it to?

How it should be done

Use something like molecule which can launch your container/virtual machine, run ansible, check for lint and also run some testing framework like serverspec/testinfra.

How I currently do it

I use Travis to test many Ansible roles and playbooks. From Travis you basically get an Ubuntu machine in which you can run whatever you want.

Basic process I’ve used for ansible testing:

  • Configure docker on the Ubuntu machine (or LXC in some roles)
  • Launch a docker with the OS you want to test on (in my case mostly CentOS 7, but sometimes Debian)
  • Run ansible-playbook with --syntax-check, with --check, and twice in a row to check for idempotency
  • Run some manual commands at the end to test whatever was configured / or at least print some config files to make sure they look OK

All of the above and more should be doable now with molecule, first and last time I tried I couldn’t get it to work but it’s looking better.

Actual commands to test

  • ansible-playbook --syntax-check
  • ansible-lint
  • ansible-playbook
  • ansible-playbook
  • ansible-playbook --check

Order Matters

Do you want to run it in noop mode (--check) before or after the role has run at least once to configure all the things?

How to actually set this up

Official travis documentation

Log in with your GitHub account on travis-ci.org (or travis-ci.com if it's a private repo) and connect your GitHub organization.

Enable the repository, for example

Add some files to your repo. I usually copy .travis.yml and the tests/ directory from an existing repository like ansible-role-cvmfs.

Modify the test playbook (tests/test.yml) to include the new role, maybe change some default variables, and have a look in the script to see if there is anything you want to add or remove there too.

Push to github and watch the build log :)
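The .travis.yml for such a role is essentially this (a simplified sketch; the script name tests/test.sh is made up):

```yaml
# .travis.yml sketch for testing an ansible role in docker
language: python
services:
  - docker
install:
  - pip install ansible ansible-lint
script:
  - ./tests/test.sh
```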

Working Fighting with Travis

Fighting with Docker took a lot of my time when getting this working the first time, especially as I use Ansible to configure servers that run multiple services and want to have a full systemd inside the container.

Commands to run on an Ubuntu 14.04 VM to get a kind of similar environment as in travis:

sudo apt update
sudo apt upgrade
sudo apt install build-essential libssl-dev libffi-dev python-dev git
sudo apt install cgroup-lite
echo 'DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock -s devicemapper"' | sudo tee /etc/default/docker > /dev/null
sudo cgroups-mount

And then from there, run the commands you have in .travis.yml.

and a FreeIPA workshop

In preparation for the RH414 course I'm taking next week, I think I should have a look at Kerberos, FreeIPA and BIND a bit :)

During the event there was a workshop on FreeIPA. (There were many other interesting talks there, for example Network Performance Tuning by Jamie Bainbridge).

There is a video to accompany it:



  • Bonus feature: get acquainted with vagrant too!

Vagrant 1.7.4 and VirtualBox 5.0 work just fine together (except I had some issues with network interfaces on Ubuntu 15.10, VirtualBox 5 and Vagrant: the MAC addresses were the same on the VMs' interfaces to the "NAT" network, and they also got some weird IP addresses there). I could only find that IP used in resolv.conf (from the DHCP), so that could be changed.

RH413 – Red Hat Server Hardening

I’m attending this training in a week or so. This post will be updated as I go through the sections I want to check out before the training starts.

  • Track security updates
    • Understand how Red Hat Enterprise Linux produces updates and how to use yum to perform queries to identify what errata are available.
  • Manage software updates
    • Develop a process for applying updates to systems including verifying properties of the update.
  • Create file systems
    • Allocate an advanced file system layout and use file system encryption.
  • Manage file systems
    • Adjust file system properties through security related options and file system attributes.
  • Manage special permissions
    • Work with set user ID (SUID), set group ID (SGID), and sticky (SVTX) permissions and locate files with these permissions enabled.
  • Manage additional file access controls
    • Modify default permissions applied to files and directories; work with file access control lists.
  • Monitor for file system changes
    • Configure software to monitor the files on your machine for changes.
  • Manage user accounts
    • Set password-aging properties for users; audit user accounts.
  • Manage pluggable authentication modules (PAMs)
    • Apply changes to PAMs to enforce different types of rules on users.
  • Secure console access
    • Adjust properties for various console services to enable or disable settings based on security.
  • Install central authentication
    • Install and configure a Red Hat Identity Management server and client.
  • Manage central authentication
    • Configure Red Hat Identity Management rules to control both user access to client systems and additional privileges granted to users on those systems.
  • Configure system logging
    • Configure remote logging to use transport layer encryption and manage additional logs generated by remote systems.
  • Configure system auditing
    • Enable and configure system auditing.
  • Control access to network services
    • Manage firewall rules to limit connectivity to network services.

From the exam

  • Identify Red Hat Common Vulnerabilities and Exposures (CVEs) and Red Hat Security Advisories (RHSAs) and selectively update systems based on this information
  • Verify package security and validity
  • Identify and employ standards-based practices for configuring file system security, create and use encrypted file systems, tune file system features, and use specific mount options to restrict access to file system volumes
  • Configure default permissions for users and use special file permissions, attributes, and access control lists (ACLs) to control access to files
  • Install and use intrusion detection capabilities in Red Hat Enterprise Linux to monitor critical system files
  • Manage user account security and user password security
  • Manage system login security using pluggable authentication modules (PAM)
  • Configure console security by disabling features that allow systems to be rebooted or powered off using bootloader passwords
  • Configure system-wide acceptable use notifications
  • Install, configure, and manage identity management services and configure identity management clients
  • Configure remote system logging services, configure system logging, and manage system log files using mechanisms such as log rotation and compression
  • Configure system auditing services and review audit reports
  • Use network scanning tools to identify open network service ports and configure and troubleshoot system firewalling
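
Two of these objectives (default permissions and ACLs) can be rehearsed without any special setup. A small sketch (the user `alice` and the filename are made up, and the `setfacl` step is left commented since it needs the acl tooling):

```shell
# Default permissions: with umask 027, newly created files get
# mode 640 and directories 750.
umask 027
tmp=$(mktemp -d)
touch "$tmp/report.txt"
stat -c '%a' "$tmp/report.txt"   # prints 640

# ACLs: grant a single extra user read access.
# setfacl -m u:alice:r "$tmp/report.txt"
# getfacl "$tmp/report.txt"

rm -rf "$tmp"
```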

Let’s encrypt the web – renewal

So easy!


Since I ran letsencrypt-auto last time, I did so again:

  • sudo systemctl stop nginx
  • cd letsencrypt
  • git pull
  • ./letsencrypt-auto
  • enter enter etc
  • sudo apache2ctl stop # .. why did it start apache2 automatically?
  • sudo systemctl start nginx


Since letsencrypt-auto version 0.5.0 it’s:

  • sudo systemctl stop nginx
  • cd letsencrypt
  • git pull
  • ./letsencrypt-auto --standalone --domains “,”
  • sudo systemctl restart nginx

Since the client was renamed from letsencrypt to certbot (certbot-auto), it’s:

  • sudo systemctl stop nginx
  • ./certbot-auto renew
  • sudo systemctl start nginx
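
These days the stop/renew/start dance can also be automated. A sketch of a weekly cron entry (the paths, schedule and log file are assumptions; certbot’s --pre-hook and --post-hook take care of stopping and starting nginx only when a renewal actually happens):

```
# /etc/cron.d/certbot-renew
30 3 * * 1 root /opt/certbot/certbot-auto renew --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx" >> /var/log/certbot-renew.log 2>&1
```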


let’s encrypt the web!

Letsencrypt is finally in public beta!

Got https enabled on my own play webhost today with Let’s Encrypt!

There are many good guides for getting this setup. This is how I got it working with nginx (without using the experimental nginx plugin of letsencrypt).

on the webhost (not as root):

git clone
#eventually this generates some certificates into /etc/letsencrypt
#of course you should read scripts before running anything; there are alternatives, for example acme-tiny and letsencrypt-nosudo, that might be better.
#mozilla has some server side SSL recommendations on

Modify your nginx site file to have something like this:


server {
    listen [::]:443 ssl ipv6only=off;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 5m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # resolver;
    root /var/www;
    index index.html index.htm index.php;
    # Make site accessible from http://localhost/
    server_name localhost;
    add_header Strict-Transport-Security "max-age=15724800";
}
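
The config above references /etc/nginx/dhparams.pem, which isn’t created anywhere in the guide; it has to be generated once. Something like this should do it (2048 bits is a common choice), followed by validating and reloading nginx:

```shell
# Generate the Diffie-Hellman parameters referenced by ssl_dhparam
# (this can take a while).
sudo openssl dhparam -out /etc/nginx/dhparams.pem 2048

# Validate the configuration and reload nginx only if it is OK.
sudo nginx -t && sudo systemctl reload nginx
```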

Was there an NHL game last night?

Yesterday my Internet activities were restricted unnecessarily!

While waiting for the replay of last night’s NHL game to air, I didn’t want to browse quite a large chunk of my normal Internets – because knowing the score while watching the game sucks. Unbeknownst to me, there was no game last night! Cue impatience, etc.

No more! (at least for the remainder of 2015 edition of the Stanley Cup).


Today (2015-06-07) it says YES, hopefully tomorrow (2015-06-08) it will say NO :) //update – it did!

This is my first trek into the Google App Engine thingy. Very much a work in progress, but it’s enough for now.

check_irods – nagios plugin to check the functionality of an iRODS server

Part of my $dayjob as a sysadmin is to monitor all things.

Today I felt like checking if the users on our servers could use the local iRODS storage and thus check_irods was born!

It checks if it can:

  1. put
  2. list
  3. get
  4. remove

a temporary file.

Requirements:

  •  iRODS 3.2 with OS trusted authentication
  • mktemp
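
A rough sketch of what the plugin does (a hypothetical recreation, not the published script; the icommand names are shell variables only so the logic can be exercised without a live iRODS server):

```shell
#!/bin/bash
# Round-trip a temporary file through iput/ils/iget/irm and report
# the result in Nagios style (OK=0, CRITICAL=2, UNKNOWN=3).
IPUT=${IPUT:-iput}; ILS=${ILS:-ils}; IGET=${IGET:-iget}; IRM=${IRM:-irm}

check_irods() {
  local tmp name step
  tmp=$(mktemp) || { echo "UNKNOWN - mktemp failed"; return 3; }
  name=$(basename "$tmp")
  for step in "$IPUT $tmp" "$ILS $name" "$IGET -f $name $tmp" "$IRM -f $name"; do
    if ! $step >/dev/null 2>&1; then
      echo "CRITICAL - '$step' failed"
      rm -f "$tmp"
      return 2
    fi
  done
  rm -f "$tmp"
  echo "OK - put, list, get and remove all succeeded"
}

rc=0; check_irods || rc=$?
# a real plugin would end with: exit $rc
```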


Nagios Health Check of a DDN SFA12K

Part of my $dayjob as a sysadmin is to monitor all things.

I’ll be publishing my home-made nagios checks on github in the near future.

Here is the first one that uses the Web API of a DDN’s SFA12K (might work on the 10k too, haven’t tried) which is a storage platform.

The URL to the check is located here:

Unfortunately it seems that the Python Egg (the library / API bindings) is still not available online, so one has to ask DDN Support to get it.

It’s not perfect: there’s much room for improvement and refactoring (such as moving the username/password out of hard-coded variables), and it makes many assumptions.
But making it work for you shouldn’t be too hard. If you have any questions, comment here or on GitHub :)
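
For reference, the Nagios side of such a check is really just the exit-code convention. A minimal hypothetical skeleton (the health states and the query step are made up, since the real script depends on DDN’s bindings):

```shell
#!/bin/bash
# Map a reported controller health state to a Nagios exit code.
nagios_rc() {
  case "$1" in
    HEALTHY)  echo "OK - controller reports HEALTHY";  return 0 ;;
    DEGRADED) echo "WARNING - controller is DEGRADED"; return 1 ;;
    FAILED)   echo "CRITICAL - controller has FAILED"; return 2 ;;
    *)        echo "UNKNOWN - unexpected state: $1";   return 3 ;;
  esac
}

# Placeholder: the real check queries the SFA12K Web API via the
# DDN Python bindings to obtain this state.
state="HEALTHY"
rc=0; nagios_rc "$state" || rc=$?
# a real plugin would end with: exit $rc
```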