
Contributing To OpenStack Upstream

Recently I had the pleasure of contributing upstream to the OpenStack project!

A link to my merged patches: https://review.opendev.org/#/q/owner:+guldmyr+status:merged

At a previous OpenStack Summit (these days called OpenInfra Summits), Vancouver 2018, I went a few days early and attended the Upstream Institute https://docs.openstack.org/upstream-training/ .
It was about 1.5 days long, if I remember right. Looking back at my notes from it, these were the highlights:

  • Best way to start getting involved is to attend weekly meetings of projects
  • Stickersssss
  • A very similar process to RDO with Gerrit and reviews
  • Underlying tests are all done with Ansible, and ARA is enabled so you get a nice web UI to view the results afterward. Logs are saved as part of the Zuul testing too, so you can really dig in and see what is tested and what broke when something fails.

Even though my patches came a bit over a year (and one baby) after the Upstream Institute, I could still figure things out quite quickly with the help of the guides and get bugs created and patches submitted. My original plan when attending wasn’t to contribute code changes, but rather to start reading code, perhaps find open bugs and so on.
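For anyone curious, the mechanics of submitting a patch are roughly this (the repository URL is just an example, and git-review comes from pip or your distro packages):

git clone https://opendev.org/puppet/puppet-keystone
cd puppet-keystone
git review -s                   # set up the Gerrit remote and the Change-Id commit hook
git checkout -b fix-something   # hypothetical topic branch name
# ...edit, then git commit...
git review                      # push the change to Gerrit for review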

The thing I wanted to change in puppet-keystone was apparently also possible to change in many other puppet-* modules, and less than a day after my puppet-keystone change got merged into master, someone else picked up the torch and submitted similar changes to ~15 other repositories :) Pretty cool!

Testing is hard! https://review.opendev.org/#/c/669045/1 is one backport I created for puppet-keystone/rocky, and the Ubuntu testing was not working initially (it started with an APT mirror issue, and later it was slow and timed out)… After 20 rechecks and two weeks it still hadn’t passed. In the end we got there, though, with the help of a core reviewer who updated a mirror and later disabled some tests :)

Now, the change itself was about "oslo_middleware/max_request_body_size", so that we can increase it from the default of 114688. The Pouta Cloud had issues where our federation user mappings were larger than 114688 bytes and we couldn’t update them anymore; it turned out they were being blocked by oslo_middleware.

(does anybody know where 114688 bytes comes from? It is exactly 112 KiB, and some internal speculation has been that it is 128 kilobytes minus some room for headers)
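For reference, what the change boils down to on the keystone side is an oslo.middleware override in keystone.conf along these lines (the value here is just an example, double the default):

[oslo_middleware]
max_request_body_size = 229376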

Anyway, the mapping we have now is simplified: just a long [ list ] of "local_username": "federation_email", domain: "default". I think the next step might be to figure out whether we can write the rules using something like the below instead of hardcoding the values into them

"name": "{0}" 

It’s been quite hard to find examples that match our use-case exactly (and playing around with it is not a priority right now, just something in the backlog, but it could be interesting to look at when we start accepting more federations).
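Roughly what I have in mind is a mapping rule that takes the username from an attribute in the federation assertion instead of hardcoding it per user, something like this sketch (the remote attribute name REMOTE_USER_EMAIL is made up, and the domain could just as well be given by id):

[
  {
    "local": [
      { "user": { "name": "{0}", "domain": { "name": "Default" } } }
    ],
    "remote": [
      { "type": "REMOTE_USER_EMAIL" }
    ]
  }
]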

All in all, I’m really happy to have gotten to contribute something to the OpenStack ecosystem!

cfengine – some useful examples / or how I learnt about the bomb and tried Puppet instead / salt?

Building on the initial post about cfengine, we’re going to try out some things that may actually be useful.

My goal is to make /etc/resolv.conf identical across all the machines.

The server setup is the lustre cluster we built in a previous post.

In this post you’ll first see two attempts at getting cfengine and then Puppet to do my bidding, until success was finally achieved with salt.

Cfengine

Set up name resolution to be identical on all machines.

http://blog.normation.com/2011/03/21/why-we-use-cfengine-file-editing/

Something I thought about: how to make oss1 and client1 not get the same promises. Perhaps some kind of rule / if-statement in the promise?

Cfengine feels archaic. Think editing named/BIND configs is complicated? That’s not even close to setting up basic promises in cfengine.
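For the record, the kind of files promise I was going for would look roughly like the sketch below (untested; it leans on the standard library bodies secure_cp and mog, and the bundle name and source path are made up):

bundle agent resolv_conf
{
files:
    "/etc/resolv.conf"
      copy_from => secure_cp("/srv/cfengine/resolv.conf", "admin-server"),
      perms     => mog("644", "root", "root");
}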

Puppet ->

http://puppetlabs.com/

CentOS 6 Puppet Install

vi /etc/yum.repos.d/puppet.repo    # create the repo file (contents below)
pdcp -w oss1,client1 /etc/yum.repos.d/puppet.repo /etc/yum.repos.d/puppet.repo    # push it to the other nodes
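The repo file itself was something like this (baseurl from memory, so double-check against yum.puppetlabs.com), followed by installing the package on the nodes:

[puppetlabs-products]
name=Puppet Labs Products - EL 6
baseurl=http://yum.puppetlabs.com/el/6/products/$basearch
gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
gpgcheck=1
enabled=1

pdsh -w oss1,client1 yum -y install puppet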

Sign certificates:

puppet cert list                 # on the master: list pending certificate requests
puppet cert sign <certname>      # sign one agent's certificate
sudo puppet cert sign --all      # or sign all pending requests at once

For Puppet there’s a dashboard. This sounds interesting. Perhaps I won’t have to write these .pp files, which at a glance look scarily similar to the cfengine promises.

yum install puppet-dashboard mysql-server

service mysqld start

set mysqld password

create databases (as in the database.yml file)
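In shell terms, those last two steps were roughly the following (the password is just an example, and the database names come from the dashboard’s default database.yml):

mysqladmin -u root password 'secret'
mysql -u root -psecret -e "CREATE DATABASE dashboard_production CHARACTER SET utf8;"
mysql -u root -psecret -e "CREATE DATABASE dashboard_development CHARACTER SET utf8;"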

After this I didn’t get much further… but I did get the web server up, although it was quite empty…

salt

Easy startup instructions here for getting a parallel shell going:
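In short it was something like this (package names as provided by EPEL at the time; the master runs on the admin-server and the minions point at it):

yum install salt-master                    # on the admin-server
yum install salt-minion                    # on oss1 and client1
echo "master: admin-server" >> /etc/salt/minion
service salt-master start ; service salt-minion start
salt-key -L                                # on the master: list minion keys waiting to be accepted
salt-key -A                                # accept them all
salt '*' test.ping                         # check that the minions answer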

After it’s set up you can run a bunch of built-in special commands, see the help section about modules.

salt '*' sys.doc | less

will give you documentation for all the available modules you can use :)

Want to use it for configuration management too? Check out the ‘states‘ section.

What looks bad with salt is that it’s quite new (the first release was in 2011).

Salt is a very common word so it makes googling hard. Most hits tend to be about cryptography or cooking.

To distribute the resolv.conf once, you run this on the admin-server: salt-cp '*' /etc/resolv.conf /etc/resolv.conf

On to states to make sure that the resolv.conf stays the same:

  1. uncomment the defaults in the master-file about file_roots and restart the salt-master service
  2. create /srv/salt and ln -s /etc/resolv.conf /srv/salt/resolv.conf
  3. create a /srv/salt/top.sls and a /srv/salt/resolver.sls


In top.sls put:

base:
  '*':
    - resolver

In resolver.sls put:

/etc/resolv.conf:
  file:
    - managed
    - source: salt://resolv.conf

Then run: salt '*' state.highstate

How to get this to run every now and then? Setting up a cronjob works.
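For example, a root crontab entry on the admin-server (the interval is picked arbitrarily):

*/30 * * * * salt '*' state.highstate > /dev/null 2>&1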

I haven’t been able to find a built-in function to accomplish this, but then again, all I’m doing here is scratching the surface. It’s working and I’m happy :)

HEPIX Spring 2011 – Day 2

Guten Abend!

Darmstadt is a very beautiful city. It’s quite old and there are lots of parks and eh, cool, houses.

A person from the UK said something like this yesterday (in the pub Ratskeller): “A particle physicist’s raison d’être is to find complexities, they wouldn’t turn away from one if their life depended on it. These are the people we provide IT for.”

So no wonder that their IT systems/infrastructure is a little bit complex too!

Today’s topics are: Site Reports, IT Infrastructure (Drupal, Indico, Invenio, FAIR 3D cube) and Computing (OpenMP, CMS and batch nodes).

Site reports

Some of these institutions have a synchrotron, which is a cyclic particle accelerator – it looks quite cool in the pictures. Some use cfengine for managing their clusters – as in, they want to avoid logging on to each node and doing configuration by hand, and instead do it from a tool. One such tool that is quite common (Puppet) can also be used for desktops.

Not many use HP storage; DDN is quite common, along with Nexsan and BlueArc.

One site had big problems with their Dell servers – caused by misapplied cooling paste on the CPUs – Dell replaced 90% of the heatsinks and fixed this.


One site also had disk failures during high load. They ran the HS06 (HEP-SPEC06) benchmark, and while running it, disks dropped off. The disk failures were traced to anomalously high cooling fan vibration. After replacing all components, and then moving the fans to another machine, they saw the error.

IT Infrastructure

CERN is working on moving to Drupal for their web sites. They are investigating Varnish (good for DDoS protection, caching and load balancing). Drupal is hard to learn.

Then there were some sessions about programming – CMS 64-bit and OpenMP.
One thought here: is it possible to discern the properties of an Intel/AMD CPU based on the name? Like E5530? Maybe this link on intel.com can be of some assistance.

FAIR 3D Tier-0 Green-IT Cube

Quite a cool concept (patented) that they are very soon starting to build here in Germany.
It uses water vaporization with outside air (and fans in summer) to cool the air, plus water-based heat exchangers in each rack: fans build up pressure (so the racks need to be quite airtight) and push the warm air from the back of the servers through the heat exchanger, which cools it and pushes it across the aisle to the next row of racks. They managed to get down to a PUE of 1.062 at best.

Next Days:
Day 5
Day 4
Day 3

Previous Day:
Day 1