VLAN in the home network!

This follows a previous post in this series about some improvements to my home network, which has two modems from two ISPs.

So! On Alibaba I found two Hasivo 8x1GbE managed fanless switches with VLAN support. Delivery time to Finland was really quick. The listing didn’t say (OK, I didn’t read it all or ask the seller) whether they included European power adapters, but it turns out they did!

To recap: the idea was to use the one long cable to transport two VLANs over it. Beyond that, how I would actually implement it was a bit fuzzy.

New layout. Numbers in the switch-like boxes are VLAN IDs

Things I’ve learnt while connecting these:

  • Creating a VLAN subinterface in Windows 10 seems to require Hyper-V. This means if I want one machine in both VLANs I need two NICs. No bother: I found a USB 3 1GbE adapter in a box at home while cleaning :)
  • I knew about VLAN trunk cables, and the way they are implemented in this web interface is to set both VLANs as tagged on the same port.
    • The web interface of this switch has two pages about VLANs. The first is a static setup where you say which port is a member of which VLAN and whether it’s tagged or untagged; changing the default or removing a port from VLAN 1 was not possible on this first screen. On the second, however, one can change the PVID, which is the untagged/native VLAN.
  • I also found a few extra short Ethernet cables in old ISP modem boxes; very nice to have, as this exercise required a few more cables.
  • On the desktop I now need to choose which network interface to use to reach the Internet. I learnt that if I just remove the IPv4 default gateway from ISP A and use the NIC to ISP B, then IPv6 from ISP A will still be there and used :)
First VLAN config page: the static/tagged VLAN setup
Second VLAN config page: the native VLAN / PVID configuration on one switch

Some more bits about the switches are in order:

On a related note, the modems have built-in switches, and I also had a 6-port fanless unmanaged switch which had been working great for the last six years or so. That one is now retired: yay, one less UK plug adapter :). I prefer using an extra switch over the modems’ built-in ones, because the modems sometimes reboot, which is annoying as it interrupts anything I’m doing, even traffic that is only local and never goes to the Internet.

They have a very basic-looking CGI web interface, which is only accessible on VLAN 1. The firmware is from 2013 and has version v1.0.3. I asked the seller (who was very responsive to all of my questions) and apparently a newer one is in the works, but unfortunately there’s no way to subscribe to any news about new firmware. I doubt it’ll ever come.

One switch-like quality: to persist the running configuration you make in the web interface, you have to explicitly click Save.

There is a manual; one just had to ask the seller on Alibaba for it – attaching it here for convenience.

All in all this worked out quite nicely. We’ll see how this keeps up. Some further avenues of interest:

  • On my desktop I now use the USB NIC to get to the Internet. I tried once to use the system board NIC but then had some issues... perhaps that one is a bit faster. Using a USB 3 port instead of a USB 2 port gave about half a millisecond lower latency to the place I usually ping, ping.funet.fi.
  • Response time over the DSL is a bit higher (17 ms vs 12 ms) to ping.funet.fi:
    • tracert shows 17 ms to the first hop with the DSL ISP
    • tracepath shows 10 ms to the first hop with the cable modem’s ISP
    • pinging the DSL modem takes 1 ms vs 3 ms for the cable modem
    • ping6 to ping.funet.fi is 10 ms with DSL
  • Maybe time to look into a cheap AP to plug in near ISP modem B, but connected to VLAN 10, so wifi clients there can reach the server.
  • The switches have a bunch of other settings that could be fun to play with too.

Was the layout diagram above not clear? Try this:

Some updates to the home network 1/2

Current layout:

  • The corner:
    • Cable MODEM NAT&WiFi ISP A
    • One server
    • One desktop that should be on both networks, default gw on one
    • Phones and tablets wifi
  • TV Area:
    • DSL Modem NAT&WiFi ISP B
    • One raspberry pi connected to the server
    • Phones and tablets wifi
    • One chromecast, would be nice to have connected to the server too
    • One ps3
  • 20m, a microwave, and walls in between the two areas (and most importantly the server and the raspberry pi) so wifi is spotty.

Most important factor: one long-ass 30 m UTP cable connecting the raspberry pi to the same network as the server

It would be cool to: A) be able to connect the desktop to the modem out by the TV, and B) get the chromecast (WiFi only) onto the same network as the server, perhaps with an AP for the ISP A network near the TV area

Stay tuned for another post in the hopefully near future when I’ve got something working to help with A/B :)

Update: another graphical representation of the networks:

A story about writing my first golang script and how to make a mailman archive summarizer

Time to try out another programming language!

I see Golang quite frequently in my Twitter feed, so I have been thinking for a while: why not give it a shot for the next project!

TLDR; https://github.com/martbhell/mailman-summarizer

It took a while to figure out what would be a nice small project. Usually my projects involve some kind of web scraping that helps me somehow; https://wtangy.se/ is one, which tells me if there “Was An NHL Game Yesterday”. In this case too it turned out I wanted something similar, but this time it was work related. I have been tinkering with this for a week or so, on and off. Today, the day after Finnish Independence Day, I thought: let’s get this going!

For $dayjob I’m in a team that, among many other things, manages a CEPH rados gateway object storage service. CEPH is quite a big (OK, it’s quite an active) project and their mailing lists are a decent (OK, I don’t know a better) place to stay up to date. For example http://lists.ceph.com/pipermail/ceph-users-ceph.com/ has lots of interesting threads. However, it sometimes gets 1000 messages per month! This is way too many for me, especially since most of them are not that interesting to me: I’m not an admin of any CEPH clusters, our services only use them :)

So the idea of an aggregator or filter was born. The mailing list has a digest option when subscribing, but it doesn’t have a filter.

Enter “mailman-summarizer”! https://github.com/martbhell/mailman-summarizer

As usual when I play around in my spare time I try to document much more than is necessary. But if I ever need to re-read this code in a year or two because something broke then I want to save myself some time. Most likely I won’t be writing much more Go between now and then so the things I learnt while writing this piece will probably have been purged from memory!

The end result https://storage.googleapis.com/ceph-rgw-users/feed.xml
as of right now looks like below in one RSS reader:

In summary the steps to get there were:

  • Used https://github.com/bcongdon/colly-example to do some web scraping of the mailman/pipermail web archive of the ceph-users e-mail list. Golang here was quite different from Python and beautifulsoup. It uses callbacks. I didn’t look too deeply into those but things did not happen in the same order they were written. Maybe it can be used to speed things up a bit, but the slowest part of this scraping is the 1s+random time delay I have between the HTTP GETs to be nice to the Internet ;)
  • It loops over the months (thread.html) for some of the years and only saves links whose titles contain “GW”.
  • Put this in a map (Golang is different here too: it’s kind of like a Python dictionary, but one has to initialize it in advance. Lots of googling involved :)
  • Loop over the map and create RSS, JSON, ATOM or HTML output using the gorilla feeds pkg. Use of the time pkg in Golang was needed to have nice date fields in the RSS, which was interesting: the reference layout is not the UNIX 1970 seconds epoch but a specific date in 2006. Also, most functions (types? interfaces? I don’t know the names of most things) return a value AND an error, which makes declaring a variable look a bit funny:
for l, _ := range data {
    keys = append(keys, l)
}

That was the Golang part. I could have just taken the output, stored it in a file, and put it on a web server somewhere.

https://wtangy.se uses google’s object store, but it has an appengine python app in front. So I took a break and watched some NHL from yesterday and in the breaks I thought about what would be a slim way of publishing this feed. I did not want to run a virtual machine or container constantly, the feed is a static HTML and can just be put in an object store somewhere. It would need a place to run the code though, to actually generate the RSS feed!

I’m a big fan of travis-ci and as part of this project the continuous integration does this on every commit:

  • spawn a virtual machine with Go configured (this is all part of Travis, or any other CI system I’ve played with; it just needs the right words in the .travis.yml file in the repo)
  • decrypt a file that has the credentials of a service account which has access to a bucket or two in a project in google cloud
  • compiles mailman-summarizer
  • run a bash script which eventually publishes the RSS feed on a website. It does this to a staging object storage bucket:
    • runs “mailman-summarizer -rss” and writes the output to a file called feed.xml
    • uses the credentials to write feed.xml to the bucket and make the object public-readable
    • Then the script does the same to the production bucket

One could improve the CI part here a few ways:

  • Right now it uses the travis script provider in the deploy phase. There is a ‘gcs’ provider, but I couldn’t find documentation for how to specify the JSON file with the credentials like with appengine. I get the feeling that because it’s not easy I should probably use appengine instead..
  • One could do more validation, perhaps validating the RSS feed before actually uploading it. But I couldn’t find a nice program that would validate the feed. There are websites like https://validator.w3.org/feed/ though, so I used that manually. Maybe RSS feeds aren’t so cool anymore; I use them a lot though.
  • An e2e test would also be cool. For example fetch the feed.xml from the staging and make sure it is the same as what was uploaded. 

Certified OpenStack administrator – check!

Yay! Took the exam last week after having studied a few days. Nothing seemed to be impossible from the list of requirements at least :)

I thought that because it’s done online it could be scheduled almost on demand, but one has to wait at least 24 hours for the exam environment to be provisioned.

The online proctoring part was a first for me. It will certainly help to have a decent webcam (with a longer cable) that can be moved around.

The results arrived after only a day: 96%, so I missed something small somewhere. Maybe about Swift, if I have to guess :)

I always liked these practical exams. One really needs some experience with what is being tested; I don’t think it is possible to just study. Fortunately it’s easy to install a lab environment!

Playing with devstack while studying for OpenStack Certified Administrator

Below I’ll go through some topics I thought about while reading through the requirements for COA:

  • Users and passwords because we use a LDAP at $dayjob. How to set passwords and stuff?
    • openstack user password set
    • openstack role add --user foo member --project demo
  • Users and quota. Can one set openstack to have user quota? 
    • guess not :)
  • How to set default quotas with the CLI?
    • nova quota-class commands. Found in operator’s guide in the docs.
  • Create openrc without horizon
    • TIL that OS_AUTH_URL in devstack is http://IP/identity – no separate port :) I couldn’t really find a nice way to create one, though. Once it’s working there is an “openstack configuration show” which tells you stuff.
  • Cinder backup
    • cool, but this service is not there by default in devstack.
  • Cinder encryption 
    • another volume type, with encryption. Shouldn’t need barbican if a fixed_key is set, but I don’t know: cinder in my devstack wasn’t really working, so I couldn’t attach a volume and try it out. I have some volumes with an encryption_key_id of “000000…”, so maybe? Attaching my LVM volumes isn’t working for some reason; it complains about the initiator.
  • Cinder groups.
    • Details are found in the Rocky cinder admin guide, not Pike. Using the cinder command one can create volume group types, then volume groups, and then volumes in the volume group. After you have added volumes to a group you can take snapshots of the whole volume group, and also create a new volume group (and volumes) from the list of snapshots.
  • Cinder storage pool
    • backends. In devstack it’s devstack@lvmdriver-1. Apparently one can set volume_backend_name both in cinder.conf and as a property.
  • Object Expiration. Supported in CEPH rados gateway? Yes, but only since Luminous.
    • available in default devstack, done with a magical header: X-Delete-After (seconds from now) or X-Delete-At (a UNIX epoch timestamp)
  • Make a Heat template from scratch using the docs. 
    • can be made quite minimal
  • Update a stack
  • Checking status of all the services
  • Forget about ctrl+w.
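The object-expiration headers from the list above can be sketched in Python. A hedged sketch: the header names are the standard Swift ones, but the function name and the 24-hour example are mine, not from any exam material.

```python
import time

def expiry_headers(seconds_from_now):
    """Build the two header variants Swift accepts for object expiration.

    X-Delete-After takes a relative number of seconds; X-Delete-At takes
    an absolute UNIX epoch timestamp. The object store removes the object
    once the time passes.
    """
    now = int(time.time())
    return {
        "X-Delete-After": str(seconds_from_now),     # relative
        "X-Delete-At": str(now + seconds_from_now),  # absolute epoch
    }

# e.g. expire a temporary object after one day:
headers = expiry_headers(24 * 3600)
```

The resulting dict would be sent as extra request headers on the object PUT (for example with the swift client’s --header option).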

Study Environment

A devstack setup in Ubuntu 18.04 in a VM in the $dayjob cloud. This means no nested virtualization, and I wondered how unhappy neutron would be because of port security. But it’s all within one VM – it started OK; not everything worked, but that’s fine with me :) Probably just needs a local.conf which is not the default!

One thing I got to figure out was the LVM setup for cinder. Always fun to read logs :)

Studying for Openstack Certified Administrator

The plan: study a bit and then attempt the COA exam. If I don’t pass, then attend the SUSE course during the OpenStack summit.

And what to study? I’ve been doing openstack admin work for the last year or two. So I have already done and used most services, except Swift. But there are some things that were only done once when each environment was setup. Also at $dayjob our code does a lot for us.

One such thing I noticed while looking through https://github.com/AJNOURI/COA/wiki/02.-Compute:-Nova was setting the default project quota. I wondered whether that’s a CLI/web UI/API call or a service config. A config file would be weird, unless it’s in Keystone. Turns out default quotas live in each service’s config file. It’s also possible to set a default quota with, for example, the nova command.

Another perhaps useful thing I did was to go through the release notes for the services. $dayjob runs Newton, so I started with the release after that and tried to grok and look for the biggest changes. The introduction of placement was one of them, and I got an introduction to it while playing with devstack and the “failed to create resource provider devstack” error. Looking through the logs I saw a “409 Conflict” HTTP error: placement was complaining that the resource already existed. So somehow during setup it was created, but in the wrong way? I deleted it and restarted nova; it got created automatically, and after that nova started acting a lot better :)

wtangy.se – now with user preferences!

As part of learning some more about modern web development I’ve learnt that cookies are now considered to suck and one should use some kind of local storage in the web browser. One of these is Web Storage.

https://wtangy.se/ got some more updates over the weekend :)

Now if you choose a team in the /menu and later (from the same browser) visit https://wtangy.se/, you’ll get the results for that team. The selection can be cleared at the bottom of the menu.

wtangy.se – site rename and automatic deployments!

This is a good one!

Previous entries in this series: http://www.guldmyr.com/blog/wasthereannhlgamelastnight-com-now-using-object-storage/ and  http://www.guldmyr.com/blog/wasthereannhlgamelastnight-appspot-com-fixed-working-again/

Renamed to wtangy.se

First things first! The website has been renamed to wtangy.se! Nobody in their right mind would type out wasthereannhlgamelastnight.com, so now it’s an acronym of wasthereannhlgameyesterday: wtangy.se. It uses Sweden’s .se top-level domain because there was an offer making it really cheap :)


Automatic testing and deployment

Second important update is that now we do some automatic testing and deployment.

This is done with travis-ci.org, where one can view the builds; the configuration is done in this file.

In Google Cloud there are different versions of the app deployed. If we don’t promote a version it will not be accessible from wtangy.se (or wasthereannhlgamelastnight.appspot.com), only via some other URL.

Right now the testing happens like this on every commit:

  1. deploy the code to a testing version (which we don’t promote)
  2. then we run some scripts:
    1. pylint on the python scripts
    2. an end to end test which tries to visit the website.
  3. if the above succeeds we do deploy to master (which we do promote)

wasthereannhlgamelastnight.com – now using object storage!

To continue this series of blog posts about the awesome https://wasthereannhlgamelastnight.appspot.com/WINGS web site, where you can see if there was, in fact, an NHL game last night :)

Some background: first I had a python script that scraped the nhl.com website, and later I changed it to just grab the data from nhl.com’s JSON REST API – much nicer. But it was still outputting the result to stdout as a set and a dictionary, and the application would then import this file to get the schedule. Quite hacky and ugly :) But hey, it worked.

As of this commit it now uses Google’s Cloud Object Storage:

  • a special URL (one has to be an admin to be able to access it)
  • there’s a cronjob which calls this URL once a day (22:00 in some time zone)
  • when this URL is called a python script runs which:
    • checks what year it is and composes the URL to the API so that we only grab this season’s games (to be a bit nicer to the API)
    • does some sanity checking – that the fetched data is not empty
    • extracts the dates and teams as before and writes two variables,
      • one list which has the dates when there’s a game
      • one dictionary which has the dates and all the games on each date
        • probably the last would be enough ;)
    • finally always overwrites the schedule
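The two variables from the steps above could be built along these lines. A sketch only: the JSON shape here is a simplified version of the NHL schedule response, and the sample games are made up.

```python
# Simplified sample of the schedule JSON: the real API nests games
# under "dates"; the team names below are invented for this sketch.
sample = {
    "dates": [
        {"date": "2018-12-06",
         "games": [{"teams": {"away": {"team": {"name": "DET"}},
                              "home": {"team": {"name": "MTL"}}}}]},
        {"date": "2018-12-07", "games": []},
    ]
}

game_dates = []     # dates on which there was at least one game
games_by_date = {}  # date -> list of "AWAY@HOME" strings

for day in sample["dates"]:
    games = [
        g["teams"]["away"]["team"]["name"] + "@" +
        g["teams"]["home"]["team"]["name"]
        for g in day["games"]
    ]
    if games:
        game_dates.append(day["date"])
    games_by_date[day["date"]] = games
```

As the post says, the dictionary alone would probably be enough, since the list of dates can be derived from it.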


To only update it when there are changes would be cool, as then I could notify myself (and possibly others) when there have been changes. But it would mean that the JSON dict has to be ordered, which it isn’t by default, so I’d have to change some stuff. GCSFileStat has a checksum-like piece of metadata on the files called ETAG. But it would probably be best to first compute a checksum of the generated JSON and add that as extra metadata on the object, as the ETAG is probably implemented differently between providers.
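Making such a checksum independent of dict ordering is mostly a matter of serializing canonically before hashing – a sketch (the function name and the choice of md5 are mine, not from the repo):

```python
import hashlib
import json

def schedule_checksum(schedule):
    """Checksum that is stable across runs: serialize with sorted keys
    and no whitespace so the same data always produces the same bytes."""
    canonical = json.dumps(schedule, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

# Same data, different insertion order -> same checksum:
old = {"2018-12-06": ["DET@MTL"], "2018-12-07": []}
new = {"2018-12-07": [], "2018-12-06": ["DET@MTL"]}
```

The hex digest could then be stored as extra object metadata and compared on the next run before overwriting.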


wasthereannhlgamelastnight.appspot.com – fixed – working again!

wasthereannhlgamelastnight.appspot.com – fixed – working again!

With the NHL 2017–2018 season coming up and some extra spare time on my hands, I thought: why not finally fix this great website again :)

NHL changed the layout of their schedule page about two seasons ago – these days there’s “infinite scrolling”, or whatever it’s called when the page only loads what you see on the screen. This makes the page a bit difficult to scrape (but not impossible).

Lately I’ve been using REST API and JSON data for quite many things – after a short search I managed to find this hidden gem: https://statsapi.web.nhl.com/api/v1/schedule?startDate=2016-01-31&endDate=2016-02-05&expand=schedule.teams,schedule.linescore,schedule.broadcasts,schedule.ticket,schedule.game.content.media.epg&leaderCategories=&site=en_nhl&teamId=

Now that’s a link to an API provided by NHL where you get the schedule, and you can filter it. I’m not sure what all the parameters do; they’re not all needed – you just need startDate and endDate. The API also has standings and results. I have not managed to find any documentation for it; the best so far seems to be this blog post. So I’m not sure whether it’s OK to use it or if there are any restrictions.
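Stripped down to the two parameters that do seem to be needed, building the request URL can be sketched like this. Since the API is undocumented, treat the parameter list as an observation rather than a contract:

```python
from urllib.parse import urlencode

BASE = "https://statsapi.web.nhl.com/api/v1/schedule"

def schedule_url(start_date, end_date):
    """Only startDate and endDate appear to be required; the other
    parameters in the long URL above can apparently be dropped."""
    return BASE + "?" + urlencode({"startDate": start_date,
                                   "endDate": end_date})

url = schedule_url("2016-01-31", "2016-02-05")
```

Fetching that URL should return the same schedule JSON as the long example link.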

p.s. – there is a shorter URL to the main page: https://rix.fi/nhl – but the commands – like https://wasthereannhlgamelastnight.appspot.com/MTL – do not work there.

Was there an NHL game last night?

haproxy lab setup!

I’ve been seeing haproxy more and more lately, as it seems even the stuff I work with is moving towards the web :)

So a good time as any to play around with it!

The first setup is the tag “single-node” in https://github.com/martbhell/haproxy-lab – it configures one apache httpd and one haproxy. On the httpd it creates multiple vhosts with content served from different directories, and then points to each of these as a haproxy backend.

To illustrate the load balancing the playbook also installs php and shows the path of the file that’s being served.

I used ansible for this and only tested it with CentOS7 in an OpenStack. The playbook also sets up some “dns” in /etc/hosts.

There are also “ops_playbooks” for disabling/enabling backends and setting weights.

I wonder what’s a good next step. Maybe multiple hosts / Docker containers? Maybe SSL termination + letsencrypt? Maybe some performance benchmarking/tuning?

I like the help for the configuration file – it begins with some detail about what an HTTP request looks like :)

Automated testing of ansible roles

What is this?

Basic idea: whenever most things happen in your ansible repository (for example commit, pull request or release) then you want to automatically test the ansible code.

The basic tools:

  • syntax-checking
  • lint / coding style adherence
  • actually running the code
  • is it idempotent
  • does the end result look like you want it to?

How it should be done

Use something like molecule https://github.com/metacloud/molecule which can launch your container/virtual machine, run ansible, check for lint and also run some testing framework like serverspec/testinfra.

How I currently do it

I use travis to test many ansible roles and playbooks. From travis you basically get an Ubuntu machine and in that you can run whatever you want.

Basic process I’ve used for ansible testing:

  • Configure docker on the Ubuntu machine (or LXC in some roles)
  • Launch a docker with the OS you want to test on (in my case mostly CentOS 7, but sometimes Debian)
  • Run ansible-playbook with --syntax-check, with --check, and twice to check for idempotency
  • Run some manual commands at the end to test whatever was configured / or at least print some config files to make sure they look OK

All of the above and more should be doable now with molecule, first and last time I tried I couldn’t get it to work but it’s looking better.

Actual commands to test

  • ansible-playbook --syntax-check
  • ansible-lint
  • ansible-playbook
  • ansible-playbook
  • ansible-playbook --check

Order Matters

Do you want to run it in noop mode ( –check ) before or after the role has first run at least once to configure all the things?

How to actually set this up

Official travis documentation

Login with your github account on travis-ci.org (or travis-ci.com if it’s a private repo) (and connect your github organization).

Enable the repository, for example https://travis-ci.org/CSCfi/ansible-role-dhcp_server

Add some files to your repo. I usually copy .travis.yml and tests/ directory from an existing repository like ansible-role-cvmfs .

Modify the test playbook – tests/test.yml – to include the new role, maybe change some default variables, and have a look in the test-in-docker-image.sh script to see if there is anything you want to add or remove there too.

Push to github and watch the build log :)

Fighting with Travis

Fighting with docker took a lot of my time when getting this working the first time. Especially as I use ansible to configure servers that run multiple services and want to have a full systemd inside the container.

Commands to run on an Ubuntu 14.04 VM to get a kind of similar environment as in travis:

sudo apt update
sudo apt upgrade
sudo apt install build-essential libssl-dev libffi-dev python-dev git
sudo apt install docker.io cgroup-lite
echo 'DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock -s devicemapper"' | sudo tee /etc/default/docker > /dev/null
sudo cgroups-mount

And then from there run the commands you have in .travis.yml

linux.conf.au 2016 and a FreeIPA workshop


In preparation for the RH414 course I’m taking next week I think I should have a look at kerberos, freeipa and bind a bit :)

During linux.conf.au 2016 there was a workshop on FreeIPA. (There were many other interesting talks there, for example Network Performance Tuning by Jamie Bainbridge.)

There is a video to accompany it: https://www.youtube.com/watch?v=VLhNcirKFDs



  • Bonus feature: get acquainted with vagrant too!

Vagrant 1.7.4 and VirtualBox 5.0 work just fine together (except I had some issues with network interfaces on Ubuntu 15.10 with VirtualBox 5 and Vagrant – the MAC addresses were the same on the VMs’ interfaces to the “NAT” network, and they also got some weird IP addresses there). I could only find that IP used in resolv.conf (from the DHCP), so that could be changed.

RH413 – Red Hat Server Hardening

I’m attending this training in a week or so. This post will be updated as I go through the sections I want to check out before the training starts.


  • Track security updates
    • Understand how Red Hat Enterprise Linux produces updates and how to use yum to perform queries to identify what errata are available.
  • Manage software updates
    • Develop a process for applying updates to systems including verifying properties of the update.
  • Create file systems
    • Allocate an advanced file system layout and use file system encryption.
  • Manage file systems
    • Adjust file system properties through security related options and file system attributes.
  • Manage special permissions
    • Work with set user ID (SUID), set group ID (SGID), and sticky (SVTX) permissions and locate files with these permissions enabled.
  • Manage additional file access controls
    • Modify default permissions applied to files and directories; work with file access control lists.
  • Monitor for file system changes
    • Configure software to monitor the files on your machine for changes.
  • Manage user accounts
    • Set password-aging properties for users; audit user accounts.
  • Manage pluggable authentication modules (PAMs)
    • Apply changes to PAMs to enforce different types of rules on users.
  • Secure console access
    • Adjust properties for various console services to enable or disable settings based on security.
  • Install central authentication
    • Install and configure a Red Hat Identity Management server and client.
  • Manage central authentication
    • Configure Red Hat Identity Management rules to control both user access to client systems and additional privileges granted to users on those systems.
  • Configure system logging
    • Configure remote logging to use transport layer encryption and manage additional logs generated by remote systems.
  • Configure system auditing
    • Enable and configure system auditing.
  • Control access to network services
    • Manage firewall rules to limit connectivity to network services.

From the exam https://www.redhat.com/en/services/training/ex413-red-hat-certificate-expertise-server-hardening-exam

  • Identify Red Hat Common Vulnerabilities and Exposures (CVEs) and Red Hat Security Advisories (RHSAs) and selectively update systems based on this information
  • Verify package security and validity
  • Identify and employ standards-based practices for configuring file system security, create and use encrypted file systems, tune file system features, and use specific mount options to restrict access to file system volumes
  • Configure default permissions for users and use special file permissions, attributes, and access control lists (ACLs) to control access to files
  • Install and use intrusion detection capabilities in Red Hat Enterprise Linux to monitor critical system files
  • Manage user account security and user password security
  • Manage system login security using pluggable authentication modules (PAM)
  • Configure console security by disabling features that allow systems to be rebooted or powered off using bootloader passwords
  • Configure system-wide acceptable use notifications
  • Install, configure, and manage identity management services and configure identity management clients
  • Configure remote system logging services, configure system logging, and manage system log files using mechanisms such as log rotation and compression
  • Configure system auditing services and review audit reports
  • Use network scanning tools to identify open network service ports and configure and troubleshoot system firewalling

Let’s encrypt the web – renewal

So easy!


As I ran letsencrypt-auto last time, I did so again.

  • sudo systemctl stop nginx
  • cd letsencrypt
  • git pull
  • ./letsencrypt-auto
  • enter enter etc
  • sudo apache2ctl stop # .. why did it start apache2 automatically?
  • sudo systemctl start nginx


Since letsencrypt-auto version 0.5.0 it’s:

  • sudo systemctl stop nginx
  • cd letsencrypt
  • git pull
  • ./letsencrypt-auto --standalone --domains “my.example.com,2.example.com”
  • sudo systemctl restart nginx

Since certbot-auto (renamed from letsencrypt):

  • sudo systemctl stop nginx
  • ./certbot-auto renew
  • sudo systemctl start nginx


let’s encrypt the web!

Letsencrypt is finally in public beta!

Today I got HTTPS enabled on my own play webhost with Let’s Encrypt (and checked it with ssllabs.com)!

There are many good guides for getting this setup. This is how I got it working with nginx (without using the experimental nginx plugin of letsencrypt).

on the webhost (not as root):

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto
#eventually this generates some certificates into /etc/letsencrypt
#of course you should read scripts before running anything; there are for example acme-tiny, gethttpsforfree.com and letsencrypt-nosudo that might be better.
#mozilla has some server side SSL recommendations on https://wiki.mozilla.org/Security/Server_Side_TLS

Modify your nginx site file to have something like this:


server {
    listen [::]:443 ssl ipv6only=off;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 5m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # resolver;
    root /var/www;
    index index.html index.htm index.php;
    # Make site accessible from http://localhost/
    server_name localhost;
    add_header Strict-Transport-Security "max-age=15724800";
}

Was there an NHL game last night?

Yesterday my Internet activities were restricted unnecessarily!

While waiting for the replay of last night’s NHL game to air, I didn’t want to browse quite a large chunk of my normal Internets – because knowing the score while watching the game sucks. Unbeknownst to me – there was no game last night! Cue impatience, etc.

No more! (at least for the remainder of 2015 edition of the Stanley Cup).

Introducing: http://wasthereannhlgamelastnight.appspot.com/

Today (2015-06-07) it says YES, hopefully tomorrow (2015-06-08) it will say NO :) //update – it did!

This is my first trek into Google’s cloud appengine thingy. Very much a work in progress, but it’s enough for now.

check_irods – nagios plugin to check the functionality of an iRODS server

Part of my $dayjob as a sysadmin is to monitor all things.

Today I felt like checking if the users on our servers could use the local iRODS storage and thus check_irods was born!

It checks if it can:

  1. put
  2. list
  3. get
  4. remove

a temporary file.


Requirements:

  • iRODS 3.2 with OS trusted authentication
  • mktemp

Source: https://github.com/martbhell/nagios-checks/tree/master/plugins/check_irods
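The four steps map almost one-to-one onto a small shell plugin. Here is a hedged skeleton of the idea: plain cp/ls/rm stand in for the iRODS icommands (iput, ils, iget, irm; marked in comments) so the sketch runs anywhere, and the exit codes follow the nagios convention (0=OK, 2=CRITICAL, 3=UNKNOWN):

```shell
# Nagios-style check skeleton: put, list, get and remove a temporary file.
# cp/ls/rm are local stand-ins for the iRODS icommands shown in the comments.
check_storage() {
  tmpfile=$(mktemp)    || { echo "UNKNOWN: mktemp failed"; return 3; }
  remote=$(mktemp -d)  || { echo "UNKNOWN: mktemp -d failed"; return 3; }
  name=$(basename "$tmpfile")

  cp "$tmpfile" "$remote/"           || { echo "CRITICAL: put failed"; return 2; }    # iput "$tmpfile"
  ls "$remote/$name" >/dev/null      || { echo "CRITICAL: list failed"; return 2; }   # ils "$name"
  cp "$remote/$name" "$tmpfile.back" || { echo "CRITICAL: get failed"; return 2; }    # iget "$name"
  rm "$remote/$name"                 || { echo "CRITICAL: remove failed"; return 2; } # irm "$name"

  rm -rf "$tmpfile" "$tmpfile.back" "$remote"   # clean up local temp files
  echo "OK: put, list, get and remove all succeeded"
}

check_storage
```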

Nagios Health Check of a DDN SFA12K

Part of my $dayjob as a sysadmin is to monitor all things.

I’ll be publishing my home-made nagios checks on github in the near future.

Here is the first one; it uses the Web API of DDN’s SFA12K (might work on the 10K too, haven’t tried), which is a storage platform.

The URL to the check is located here: https://github.com/martbhell/nagios-checks/tree/master/plugins/check_ddn

Unfortunately it seems that the Python Egg (the library / API bindings) is still not available online so one has to ask DDN Support to get that.

It’s not perfect: there’s much room for improvement and refactoring, the username/password should move out of variables, and it makes many assumptions.
But making it work for you shouldn’t be too hard. If you have any questions, comment here or on github :)

High amount of Load_Cycle_Count on Green Western Digital disks

You are monitoring the SMART values of your disks, right? They’re usually a really good indicator of the health of the drive.

Thought I’d check out the SMART values of the disks in my desktop today (while checking if I had notifications from smartd turned on).

Lo and behold, the Load_Cycle_Count (LCC) was really high, much higher than the Power_Cycle_Count on the 3TB WD disk I have. It turns out this is quite an old problem, so there are a few posts about it on the Internets.
The Interwebs say the spec maximum is 300k load cycles. smartctl -a says I’m already at 218602 after 9302 power-on hours (387 days, but I power off the computer at night).


Model Family:     Western Digital Caviar Green (AF, SATA 6Gb/s)
Device Model:     WDC WD30EZRX-00DC0B0
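Keeping an eye on the ratio is easy to script. A small sketch: the two sample lines below mimic `smartctl -a` attribute output (values taken from above, raw value in the last field), and awk pulls out the counts:

```shell
# Sample attribute lines in the format smartctl -a prints
smart_sample='  9 Power_On_Hours          0x0032   088   088   000    Old_age   Always       -       9302
193 Load_Cycle_Count        0x0032   128   128   000    Old_age   Always       -       218602'

# Extract the raw values (last field on each matching line)
lcc=$(printf '%s\n' "$smart_sample" | awk '/Load_Cycle_Count/ {print $NF}')
poh=$(printf '%s\n' "$smart_sample" | awk '/Power_On_Hours/ {print $NF}')

echo "load cycles so far: $lcc"
echo "roughly $((lcc / poh)) load cycles per power-on hour"   # ~23 with these numbers
```

In real use you would pipe `smartctl -a /dev/sdX` straight into the awk instead of the sample variable.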

For Windows there’s wdidle3.exe, a DOS program that one can put on a bootable floppy (…) and boot the computer on to change some settings on the disk.

Fortunately I run Linux (Ubuntu 14.10 since yesterday) and there’s a tool called idle3ctl; one can grab it from here: http://idle3-tools.sourceforge.net/

I got the latest source code and compiled it myself because there had been some updates since the last release (2012 vs 2011 ..).
“idle3ctl -g” shows that the disk was set to park itself after 8s. I disabled that with idle3ctl, powered the computer off and on, and now the tool says it’s disabled.

Hopefully this should increase the lifetime of my disk.

Finnish words from ankidroid

Lately I’ve been using ankidroid to study Finnish – or at least to expand my vocabulary a bit. I think it’s great but it won’t work by itself and requires quite a bit of tenacity.

The good Finnish deck: https://ankiweb.net/shared/info/1918695216. It’s called “Finnish järjestyksessä”.

It’s made by Rendall and it’s quite good; it’s based on a list of the most common Finnish words (assembled by my company ;)

The deck won’t work by itself though; I get a lot of help from:

  • Getting the word in a sentence helps tremendously for learning it. Just learning the words helps a lot, especially if you don’t care so much about using exactly the right form. Learning how to use them is also important, and that’s where talking or reading things helps.
  • Asking someone for assistance or clarification about words is important. For example, some words translate to the same word in English but are used differently in Finnish.

Some notes about the contents:

  • Some translations are not awesome, or the first meaning that shows up is archaic. For those you’d want to edit the card and change the meaning to something modern.
  • Some erisnimi (proper noun) words are fairly useless unless you want to learn a city or person name, like NATO, Salo, Joensuu, Jorma or Jukka. They might have another meaning besides the name, but they’re rarely if ever used for it.


Making changes to the template:

On each card there is this “info” and sometimes hints. I found myself wanting to change the URL to Wiktionary on the cards, because Google Chrome didn’t redirect wiktionary.org to en.wiktionary.org but to www.m.wiktionary.org, which doesn’t exist. It’s quite easy to change, because that part comes from a template and is not stored in each card. To change it:

  • sync your decks
  • install anki on your computer
  • login with your account and sync it
  • browse the deck; on the right-hand side you’ll see your decks, the ones you have marked and so on. There are also a few called “adjektiivi”, “prononimit” and so on. These are the templates.
  • Click on a template and then on the “Cards…” button.
  • This will show “Card 1” and “Card 2”. In the back template you can then change the URL to whatever you want!

BCEFP 2015 certified!

Passed the Brocade Certified Ethernet Fabric Professional 2015 exam in May and I finally got the results back!


This one felt quite hairy compared to the other tests I’ve taken. Definitely recommend doing the course / getting some real hands-on experience for these certifications.

Lustre 2.5 + CentOS6 Test in OpenStack

Reason: testing Lustre 2.5 from a clean CentOS 6.5 install in OpenStack.

Three VMs: two servers (one MDS, one OSS) and one client, CentOS 6.5 on all. An open internal ethernet network for the lustre traffic (don’t forget firewalls). Yum updated to the latest kernel. Two volumes presented to the lustreserver and lustreoss for MDT + OST, both at /dev/vdc. Hostnames set. /etc/hosts updated with three IPs: lustreserver, lustreoss and lustreclient.

With 2.6.32-431.17.1.el6.x86_64 there are some issues at the moment building the server components. One needs to use the latest branch for 2.5, so the instructions are at https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821

Server side

MDT/OST: install e2fsprogs and reboot after yum update (to run the latest kernel).

yum localinstall all files from: http://downloads.whamcloud.com/public/e2fsprogs/1.42.9.wc1/el6/RPMS/x86_64/

Next is to rebuild lustre kernels to work with the kernel you are running and the one you have installed for next boot: https://wiki.hpdd.intel.com/display/PUB/Rebuilding+the+Lustre-client+rpms+for+a+new+kernel

RPMS are here: http://downloads.whamcloud.com/public/lustre/latest-feature-release/el6/server/SRPMS/

For rebuilding these are also needed:

yum -y install kernel-devel* kernel-debug* rpm-build make libselinux-devel gcc


  • git clone -b b2_5 git://git.whamcloud.com/fs/lustre-release.git
  • autogen
  • install kernel.src from redhat (puts tar.gz in /root/rpmbuild/SOURCES/)
  • if rpmbuilding as user build, then copy files from /root/rpmbuild into /home/build/rpmbuild..
  • rebuilding the kernel requires quite a bit of hard disk space; as I only had 10G for /, I made symlinks under $HOME to $HOME/kernel and $HOME/lustre-release

yum -y install expect, then install the new kernel with the lustre patches, plus the lustre and lustre-modules packages.

Not important?: WARNING: /lib/modules/2.6.32-431.17.1.el6.x86_64/weak-updates/kernel/fs/lustre/fsfilt_ldiskfs.ko needs unknown symbol ldiskfs_free_blocks

/sbin/new-kernel-pkg --package kernel --mkinitrd --dracut --depmod --install

chkconfig lustre on

edit /etc/modprobe.d/lustre.conf and add the lnet parameters

modprobe lnet
lctl network up
# lctl list_nids
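As a sketch of those lnet parameters (the interface name is an assumption; point it at whichever NIC carries your internal Lustre network), /etc/modprobe.d/lustre.conf can be as small as:

```
# /etc/modprobe.d/lustre.conf -- eth1 is an assumed interface name
options lnet networks=tcp0(eth1)
```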

creating MDT: mkfs.lustre --mdt --mgs --index=0 --fsname=wrk /dev/vdc1
mounting MDT: mkdir /mnt/MDT; mount.lustre /dev/vdc1 /mnt/MDT

creating OST: mkfs.lustre --ost --index=0 --fsname=wrk --mgsnode=lustreserver /dev/vdc1
mounting OST: mkdir /mnt/OST1; mount -t lustre /dev/vdc1 /mnt/OST1

Client Side

rpmbuild --rebuild --without servers lustre-*.src.rpm # the lustre SRPM from the link above

cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh lustre-client*

add modprobe.d/lustre.conf
modprobe lnet
lctl network up
lctl list_nids

mount.lustre lustreserver@tcp:/wrk /wrk

lfs df!

BCEFP 2015 – Studying for the exam – part 3

This third post focuses on the remaining sources of information I had for studying for the BCEFP. By the time this post is published I will have taken the exam.

When I comment on CLI commands I put the comments after a #.

This is part of a series of posts on the topic of studying for Brocade’s Certified Ethernet Fabric Professional.

The two previous posts: Objectives and reading materials; course, nutshell guide and NOS Admin Guide.


VDX Troubleshooting Course


The material available also feels very short, same as the beta material available for the CEF300, as if only the parts of the slides that were updated for the BCEFP 2015 beta were included.
When a slide says “(cont.)” but there were no previous slides on the topic, that’s a hint :)
Take the (currently free) course on Brocade’s SABA; it’s under Education on my.brocade.com. It has way more slides and info.


Some notes from the course:

Firmware Upgrade

  • Can upgrade all/selected RBridges in a logical chassis: firmware download logical-chassis
  • FTP/SCP/SFTP/USB (USB only on the local switch)
  • By default it only stages the firmware, so no reboot or activation. Adding auto-activate reboots all RBridges at the same time, which is not recommended.


  • When BNA discovers a switch it automagically configures the switch to send traps (UDP 162) to the BNA server.

Fabric Formation:

  • Requires: licenses, the same VCS ID, unique RBridge IDs and the same VCS mode (Fabric Cluster or Logical Chassis)
  • Check:
    • ISL ports are operational (show fabric islports)
    • Incompatible Firmware Levels


  • no fabric isl enable # this disables ISL formation. This makes it an edge port
  • CPU could be too busy to send ISL keepalives
  • If ISL is segmented and interface is up/up – it’s probably a config issue.



  • show running-config interface TenGigabitEthernet 1/0/2 # shows config
    • no shutdown
    • channel-group $NUMBER mode active type standard # active = LACP. Type: standard or brocade (proprietary).
  • show interface TenGigabitEthernet 1/0/2 # shows status
    • When counters are non-zero and you are looking for errors, clear them and compare the delta.



  • show interface stats brief # shows discards, errors and CRC
  • VRRP:
    • show vrrp detail
    • pre-empting: if a virtual router comes online with a higher priority than the current one, it takes over
    • VRRP-E: can enable short-path-forwarding, so a backup virtual router (one that doesn’t own the virtual IP) can forward traffic directly if that is advantageous.



  • show running-config zoning # show FCoE zoning
  • show fabric all #
    • RBridge with this name: fcr_fd_160 # comes online when fabrics are connected and Fibre Channel Routing is used
    • RBridge with this name: fcr_xd_4_100 # comes online when devices across FC fabrics can communicate. Don’t see this? Check zoning.


BCEFP practice questions / answers


These are decent practice questions, and it’s nice that the answers come with some explanation too.


Intro to VCS Fabric Technology: http://www.brocade.com/downloads/documents/white_papers/intro-vcs-fabric-technology-wp.pdf

CFP-MSA CFP2 Hardware Specs:

  • about the 40/100Gbps CFP2 transceiver. MSA = multi-source agreement.
  • CFP2 modules shall support the LC, MTP12 and MTP24 (MPO) optical connector types.

NOS 4.1.1 release notes (p4,10,28,50): 

  • 4.1.0 and later support VRRP-E across VCS fabrics.
  • 4.1.0 and later have vlag ignore split on by default
  • clear mac-address-table can clear MAC addresses associated with vLAGs and on other switches
  • Page 50 Has a table of scalability numbers for various features such as (6710 VCS, 6740 VCS, 8770 VCS):
    • max members of a LAG (8,16,8)
    • max switches in a fabric/logical cluster (24,32,32)
    • max ECMP paths (8,8,16)
    • max member ports in a vLAG (64)
    • max member of VMs (8k)
    • max ARP entries (8k, 12k, 50k)


Network OS Command Reference v4.1.1 53-1003226-01

Pages 299, 1258-1260, 1266, 1297, 1317, 1318

  • firmware download
  • snmp-server user # access
  • snmp-server v3host # trap recipients
  • spanning-tree edgeport # quickly transitions to forwarding state: only for RSTP/MSTP. Portfast for STP.
  • switchport access # only allows untagged and priority tagged
  • switchport trunk allowed vlan ${rspan-vlan} # add allowed VLAN on trunks on L2 interfaces in trunk mode
  • switchport trunk default-vlan # put all non-matching traffic into this VLAN


Hardware reference manuals

VDX 6740 Hardware Reference Manual 53-1002829-02: Page 1

  • 6740: 24 1/10GbE SFP+ ports.
  • 6740T: 24 RJ-45
  • 6740-1G: 48 RJ-45 Base-T. 10Gb with license.

VDX 8770-4 / 8770-8 Hardware Reference Manual 53-1002563-03: 

  • Chapter 1, Page 1:
    • Features CloudPlex.
    • Requires NOS 3.0.0 or greater.
    • 8770-8:
      • Up to 384 10GbE or 96 40GbE. Dual MM. 6 SFM. Max 8 PSU. 4 Fans. SX or LX 1Gbps SFP transceivers.
    • 8770-4:
  • Chapter 3, Page 32
    • For copper connections to < 1Gbps BaseT switches a crossover cable is needed (but it might not be if MDI/MDIX works..).
    • LC connectors for fiber ports

VDX 6730 Hardware Reference Manual 53-1002389-06: Pages 1,2,15

  • 6730-32: 32 ports. 6730-76: 76 ports. 8 or 16 x 8Gb FC ports.


Network OS Software Licensing Guide v4.1 53-1003164-01

Pages 11-13

  • All have FCoE license (except 6710).
  • All have POD licenses (except 8770)
  • 6740 have 10/40GbE port upgrades
  • 8770 have L3 and Advanced Services


  • for multi-hop FCoE it is needed on each node
  • L3: OSPF, VRRP, PIM-SM, route-maps, prefix lists
  • Advanced: FCoE and L3
  • After installing a time-based license you cannot change the system date or time manually. NTP is, however, not blocked. So if you are using NTP you are fine; just don’t change the system date/time by hand while a time-based license is installed.