
Contributing To OpenStack Upstream

Recently I had the pleasure of contributing upstream to the OpenStack project!

A link to my merged patches: https://review.opendev.org/#/q/owner:+guldmyr+status:merged

For a previous OpenStack Summit (these days called OpenInfra Summits), Vancouver 2018, I went there a few days early and attended the Upstream Institute: https://docs.openstack.org/upstream-training/ .
It was about 1.5 days long, if I remember right. Looking up my notes from it, these were the highlights:

  • The best way to start getting involved is to attend the weekly meetings of the projects
  • Stickersssss
  • A very similar process to RDO with Gerrit and reviews
  • Underlying tests are all done with Ansible, and they have ARA enabled, so one gets a nice web UI to view results afterwards. Logs are saved as part of the Zuul testing too, so one can really dig in and see what is tested and what went wrong if something breaks while it is being tested.

Even though my patches came one baby and a bit over a year after the Upstream Institute, I could still figure things out quite quickly with the help of the guides and get bugs created and patches submitted. My general plan when first attending it wasn’t to contribute code changes, but rather to start reading code, perhaps find open bugs and so on.

The thing I wanted to change in puppet-keystone was apparently also possible to change in many other puppet-* modules, and less than a day after my puppet-keystone change was merged into master, someone else picked up the torch and submitted similar changes to ~15 other repositories :) Pretty cool!

Testing is hard! https://review.opendev.org/#/c/669045/1 is one backport I created for puppet-keystone/rocky, and the Ubuntu testing was not working initially (it started with an APT mirror issue, and later it was slow and timed out)… After 20 rechecks and two weeks it still hadn’t passed the tests. In the end we got there, though, with the help of a core reviewer who updated some mirror and later disabled some tests :)

Now, the change itself was about “oslo_middleware/max_request_body_size”, so that we can increase it from the default of 114688. The Pouta Cloud had issues where our federation user mappings were larger than 114688 bytes and we couldn’t update them anymore; it turned out they were being blocked by oslo_middleware.

(Does anybody know where 114688 bytes comes from? Some internal speculation has been that it is 128 kilobytes minus some headers; 114688 is 112 × 1024, i.e. 16 KiB short of the 131072 bytes in 128 KiB.)
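For reference, a minimal sketch of what bumping the limit looks like, assuming a hypothetical new value of 262144 bytes (the real value depends on how big your mappings are). In plain keystone.conf the option lives in the [oslo_middleware] section, and with puppet-keystone’s generic keystone_config type the same thing can be set like this:

# keystone.conf equivalent (value in bytes):
#   [oslo_middleware]
#   max_request_body_size = 262144
#
# The same setting via puppet-keystone's keystone_config type;
# 262144 is only an example value.
keystone_config { 'oslo_middleware/max_request_body_size':
  value => '262144',
}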

Anyway, the mapping we have now is simplistic: just a long [ list ] of “local_username”: “federation_email” entries with domain “default”. I think the next step might be to figure out whether we can write the rules using something like the below instead of hardcoding the values into them:

"name": "{0}" 

It’s been quite hard to find examples that are exactly like our use case (and playing about with it is not a priority right now, just something in the backlog, but it could be interesting to look at when we start accepting more federations).
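For what it’s worth, here is a minimal sketch of what such a rule could look like in Keystone’s mapping format, where “{0}” refers to the first attribute listed under “remote”; the remote attribute name OIDC-email below is just a placeholder for whatever the federation actually sends:

[
  {
    "local": [
      {
        "user": {
          "name": "{0}",
          "domain": { "id": "default" }
        }
      }
    ],
    "remote": [
      { "type": "OIDC-email" }
    ]
  }
]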

All in all, I’m really happy to have gotten to contribute something to the OpenStack ecosystem!

HEPIX Spring 2011 – Day 5

You can tell what day it is by all the suitcases around the room.

Version Control

An overview of the version control used at CERN. Quite cool: they’re not using Git yet, but they are moving away from CVS (which is not updated anymore) to SVN (Subversion). Apparently the migration is hard.

They use DNS load balancing.

  • Browse code, logs, revisions and branches: WebSVN – with on-the-fly tar creation.
  • Trac – a web SVN browsing tool plus a ticketing system, wiki and plug-ins.
  • SVNPlot – generates SVN stats. No need to check out the source code (svnstats does a ‘co’).

Mercurial was also suggested alongside Git (which was created by Linus Torvalds).

CernVM-FS

CernVM-FS (CVMFS) looked very promising. It is not intended at the moment for images but more for distributing applications. It uses the Squid proxy server and looked really excellent. It gives you a mount point like /cvmfs/, and under there you have the software.

http://twitter.com/cvmfs

Requirements needed to set it up (a minimal client config sketch follows after the list):

  • RPMs: cvmfs, -init-scripts, -keys, -auto-setup (for Tier-3 sites; does some system configuration), fuse, fuse-libs, autofs
  • Squid cache – you need to have one, ideally two or more for resilience, configured (at least) to accept traffic from your site to one or more CVMFS repository servers. You could use existing Frontier squids.
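As a rough idea of what the client side ends up looking like once those RPMs are installed, here is a minimal sketch of /etc/cvmfs/default.local; the repository names, the squid host/port and the cache size are example values to replace with your own site’s:

# /etc/cvmfs/default.local – minimal client configuration sketch
CVMFS_REPOSITORIES=atlas.cern.ch,cms.cern.ch       # repositories to mount under /cvmfs/
CVMFS_HTTP_PROXY="http://squid.example.org:3128"   # your local squid cache(s)
CVMFS_QUOTA_LIMIT=20000                            # local disk cache limit in MB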


National Grid Service Cloud

A British cloud.

Good for teaching with a VM – if a machine is messed up it can be reinstalled.

Scalability – ‘cloudbursting’: users make use of their local systems/clusters until they are full, and then, if they need to, they can do extra work in the cloud. Scalability/cloudbursting is the key feature that users are looking for.

Easy way to test an application on a number of operating systems/platforms.

Two cases were not suitable; one was intensive work with a lot of number crunching.

Good: you don’t have to worry about physical assembly or housing. The servers, networking, etc. still have to be installed, but usually that is done by somebody else. Images are key to making this easier.

Bad: Eucalyptus stability – not so good. Bottlenecks: networking is important, and more is required of the whole physical server when it’s running VMs.

To put a 5 GB VM on a machine you would need 10 GB: 5 for the image and 5 for the actual machine.
Some were intending to develop the images locally on this cloud and then move them on to Amazon.

Previous Days:
Day 4
Day 3
Day 2
Day 1