Author: Michael



In this post, we’ll walk through setting up a Site Pair and an interconnect with HCX between your on-prem datacenter and your VMware Cloud on AWS SDDC.

Video version:

Written version:

In this post, we are going to create a Site Pair in HCX and an interconnect. As the name suggests, a site pair pairs two HCX sites together so you can create an interconnect between them. Interconnects are used for network stretching, among other things.

First, we go to our on-prem vCenter. Under the HCX plugin, we go to site pairing. We fill in the FQDN of the HCX server, along with the [email protected] account and password. It will look like this once it is done:

Once this is done, we can create an interconnect. Click on Interconnect and then Compute Profiles, and create a compute profile.
Give it a name, select the cluster(s) where the HCX appliances should be located, then select a datastore, folder and CPU/memory reservation settings.
Select a Management Network Profile, Uplink Network Profile, vMotion Network Profile, vSphere Replication Network Profile and a distributed switch to be used for network extensions.
Once it is done, you can view the required connections that should be allowed in your firewall. Once you are done, it will look something like this:

Now we can create a service mesh. Select a Source Compute Profile, which is the compute profile you just created. Then select the Remote Compute Profile, which is called ComputeProfile.
Select the Source Site Uplink Network Profile(s), which are the one(s) you created, and the Destination Site Uplink Network Profile(s), which is called externalNetwork. Select the appliance count (I leave it at 1) and optionally set a bandwidth limit.
Give the interconnect a friendly name, and once it is done and everything is green (this takes a while), it will look like this:

And that’s it, we’re done! Now we are ready to use the functions of HCX, which I will cover in the next blog post.

Stay safe and I hope that you learned something. Feel free to contact me with any questions.


This blog post serves more as a collection of links to other blogs about new announcements that VMware has made during VMworld.

1. The new vRealize Cloud Universal offering.

With this new hybrid subscription offering, VMware is providing customers the flexibility to consume both on-premise and SaaS vRealize Suite products and services using a single subscription license. This offering allows customers the freedom to move workloads between on-premise and SaaS offerings interchangeably without the requirement to purchase new licenses. Additionally, this new offering provides access to Cloud Federated Analytics and Cloud Federated Catalog capabilities.

You can read more about it here.

2. Announcing vRealize AI. (formerly Project Magna)

vRealize AI will focus on vSAN in its first release, but VMware’s vision is to bring this to the rest of the infrastructure as well, covering the full SDDC.

vRealize AI will be a part of vRealize Operations Cloud, at least in its first release, and only as a SaaS offering. The latter makes sense, as there is quite a bit of machine learning happening under the covers that needs compute to power it.

You can read more about it here (scroll down a bit).

3. Nutshell: vRealize Operations Cloud 8.2, Automation Cloud 8.2 and Log Insight Cloud 8.2


4. Announcing vRealize Network Insight 6.0

Running a large network is always a challenge especially compounded with numerous application and user requirements. With the latest 6.0 release, VMware vRealize Network Insight will continue to deliver end-to-end network, application, and security visibility converged across virtual and physical networks. vRealize Network Insight will continue to improve the integrations with VMware NSX, VMware SD-WAN, VMware Cloud on AWS, Microsoft Azure, Amazon AWS, and Kubernetes environments. The new vRealize Network Insight 6.0 release will help enable a transition from a reactive to a proactive understanding of the end-to-end network so managing the day to day is easier with more accuracy and will allow more time for strategic initiatives.

I highly recommend reading this post from VMware, where they go into detail on what’s new.

Have you written a post that I missed? Feel free to contact me and I will add it here.

Thank you for reading, have a great day and a great VMworld!

Hi everyone,

In this blog post, we will be going over how to deploy HCX, both on your VMware Cloud on AWS SDDC and on-prem.

So what exactly is HCX? One of its main features is stretching networks between on-prem and your VMC on AWS SDDC (it does not specifically have to be to/from a VMC SDDC). You can also migrate VMs, either as a vMotion (live) migration or as a bulk migration.

For the visual people, here is a video:

For the people who prefer to read, here’s a written version of the video.

In order to use HCX, we first need to enable it. Go to your SDDC console, click on Add Ons, and activate HCX. This will take a while, so sit back while it activates.
Once it has activated, click on “Open HCX”.

Once you are in the HCX console, click on Activate HCX at the SDDC you wish to activate HCX for. This will take a while. HCX will be deployed for you.

Once HCX is activated, you will need to create some firewall rules. Go to your SDDC console, and under Networking & Security, go to Gateway Firewall. Under the Management Gateway, create a new rule. The source is the user-group of your on-prem network and the destination is the HCX system group. Publish this rule and you should now be able to access the HCX VM. Back in the HCX console, click on Open HCX and log in with the [email protected] account (or any other admin account).

Under Administration, click on System Updates. Then click on Request Download link.

This will generate a download link for you that you can use to download the HCX Connector. You can either download it and upload the OVA, or copy the URL and paste it into vCenter.

Which brings us to our local vCenter. Deploy an OVA template in your cluster (or on a single host) and go through the process. Fill in the information like you always do; it will ask for the usual things like a password, FQDN, IP and gateway. Let it deploy; depending on your configuration this may take a little while. Don’t forget to power it on after deployment and let it start up.

Once it has booted up, open up your web browser and visit:
Then login with the username admin and the password you set during OVA deployment.

Now we need to fill in our HCX License Key.

Go back to the VMC HCX Console and click on Activation Keys. Click on Create Activation Key. Then select your VMC on AWS subscription and then HCX Connector under System Type. Copy the generated key and paste that in the HCX License Key field on the HCX Connector, then click on Activate.

Fill in the location of your on-prem datacenter, then on the next screen fill in a system name. Click on Continue and now you will be given the option of entering your on-prem vCenter and optionally on-prem NSX.

For the identity source, fill in your vCenter’s FQDN if you have the embedded PSC deployment. (Which you should have; if not, migrate to it, since the external PSC is deprecated as of vCenter 6.7 and higher. With vCenter 7, it is no longer even an option during deployment.)

Next, click on Next, and then on Restart. This will restart the HCX Connector service, and after that you are up and running.

In the next video and blog post, we will create a Site Pair and an Interconnect.

If you have any questions, feel free to email me or tweet at me.

Have a great day and I hope to see you in the next one.

Hi readers,

In this post, we will be taking our previously deployed TKG workload cluster and making it accessible through a public IP using NAT.

First, we deploy an application. I’ll use yelb. See the previous post on how that works.

Here we see that our application is deployed. Let’s look at the load balancer IP.

As we can see, the IP is

Now we go to the VMC on AWS console and request a public IP. Once we have one, we go to the NAT section and create a new rule:

Be sure to also create a rule for it in the compute gateway firewall.

Now when we access it, it works!

Thank you for reading and I hope you learned something useful. Feel free to contact me in case of any questions.

Hi everyone,

In this post we will be making our own deployment VM to deploy Tanzu Kubernetes Grid on VMware Cloud on AWS. There is a demo appliance, but I would like to use the latest versions of everything. If you would like to know more about Tanzu Kubernetes Grid, please check out this link. This also works on-prem, but the NSX-T UI is different with the on-prem manager.

I’d like to give a special shout-out to the TKG Demo Appliance, which can be used instead; it includes demos and already has the tools installed. You can get it here and skip this section.

First we need a virtual machine to deploy it from. In my experience, deploying through a VPN does not work.

We start with an image. In my case, I like to use the Ubuntu Server cloud image, which you can get here.

Create a NSX-T segment in the VMware Cloud on AWS console for TKG. Also create a management group and rule so that the TKG segment can access the vCenter.

When you have it deployed, resize the VM. In my case I use 8 CPUs and 16GB of memory, but I have not really tuned this. Once it’s on, log in as the ubuntu user and execute this to become the root user:

sudo su -

And then we get to installing.

We will need the tkg CLI, kubectl and docker. Visit the download page and get the TKG CLI for Linux, the Photon v3 Kubernetes vx.xx.x OVA (get the latest version), the Photon v3 capv haproxy vx.x.x OVA and the VMware Tanzu Kubernetes Grid Extensions Manifest x.x.

You can then SCP the .gz and .tar.gz files over to the VM, or upload them somewhere and wget/curl them. The two OVAs should be uploaded to a Content Library that the VMC on AWS SDDC can access. Once you have done that, deploy one VM from each OVA and convert each to a template.

To set up the deployment VM further, let’s install docker. We can do this with a one-liner:

curl | sudo sh

Once this is installed, get kubectl:

curl -LO "$(curl -s"

Then we extract the files:

tar xvf filename.tar.gz
gunzip filename.gz

And we run:

mv kubectl /bin/
mv tkg-linux-amd64-rest_of_filename /bin/tkg
chmod +x /bin/kubectl /bin/tkg

Now we are ready. Let’s fire up a second SSH session. On the first SSH session, we run the init with the UI:

tkg init --ui

This will run the installer UI on port 8080. In the second SSH session, make sure you are on a fresh line you have not typed on yet. Then press ~ (Shift+` in most cases) followed by C to open the SSH command prompt, and type:

-L 8080:localhost:8080

This will open up port 8080 on your local system and forward it to port 8080 on the TKG VM.
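If you prefer, the same tunnel can also be opened when starting the second SSH session instead of using the in-session escape sequence. A sketch, where the address is a placeholder for your own deployment VM:

```shell
# One-shot alternative: start the session with the tunnel already in place.
# <tkg-deploy-vm-ip> is a placeholder, not a value from this post.
ssh -L 8080:localhost:8080 ubuntu@<tkg-deploy-vm-ip>
```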

Visit http://localhost:8080 and let’s get started.

Click on deploy under VMware vSphere.

In my case, I fill in the local IP of the vCenter, with the [email protected] user and the password. I press connect, then select a datacenter and fill in a public SSH key. (If you don’t have one, run ‘ ssh-keygen ‘ and then ‘ cat .ssh/ ‘.)

Click on next. Now we select whether we want a single-node development management cluster or the production one with three management nodes. I select the production one for my case and pick the appropriate size, medium in my case. Then you can fill in a management cluster name and select the API server load balancer. You can also select the worker node instance type along with the load balancer, then click on next.

Now we select a VM Folder to put it into, select the WorkloadDatastore for the datastore and the Compute-ResourcePool or a resource pool under it. Click on next.

Select the TKG network and in my case I leave the CIDRs as default. Click on next.

Select the OS image and click on next.

Click on review configuration and then on Deploy Management Cluster. Now we sit back and wait.

Fifteen minutes later, the management cluster is done!

Now we can deploy our first workload cluster. Go back to the root shell where you ran the init, and type:

tkg create cluster clustername --plan=prod/dev

# Example:
tkg create cluster tkg-workload-1 --plan=prod

Once that finishes, run:

tkg get credentials clustername

# Example:
tkg get credentials tkg-workload-1

And now you’re ready to start deploying applications. In the next posts, I’ll cover some demos and assigning a VMC on AWS Public IP to one of the Kubernetes load balancers.

Thank you for reading, if you have any questions feel free to contact me.

How I got into IT

July 21, 2020 | IRL, Virtualization

Hi everyone,

This post will be all about me and how I got into the world of IT. (I’m not covering things like school, because then it would be even longer; for that, I refer to this older post.)

Since I was born, I have lived with two “families” under one roof. A few years ago, the house was split in two with a wall, but before that I could just walk from one living room to another. I live there with my parents, and on the other side there is my great-uncle with my great-aunt.

Back then, my dad and great-uncle had a computer repair shop together. They would fix computers, install them, help out at people’s homes, etc.
When I was around six years old, I took an interest in it.
At that point my dad switched jobs and it was just my great-uncle doing this. He noticed that I liked being around him when he was fixing people’s PCs.
I would always watch and ask why he did what he was doing, to learn how things worked. After a few months, he would slowly let me do small things to help him out, like turning off a computer. You know, the basics.
As I got older he let me do more: he showed me how to install Windows, how to clean up a computer properly, how to install certain programs, and the basics of networking (what a switch is, what a router does, etc.).

I believe I was around six or seven years old when I got my first laptop. It ran Windows XP; Windows 7 had just been released, and this laptop was quite old. Really old: old enough that it came with Windows XP when it was released.

Fast forward some more years, I believe I was ten or twelve, and I got my first computer, my dad’s old PC. I started to look at free website builders, just to play around with. I would build websites for fun that represented something, though I’m no longer sure what.

In 2013, I got started with a game, GTA: San Andreas. I found a multiplayer mod for this, MTA: San Andreas. (MTA standing for Multi Theft Auto instead of Grand Theft Auto). I was playing on a roleplay server at the time, I used my English as best as I could for my age to play on the biggest roleplay server at the time. (Roleplaying in this case, was just real life, but then in a game. So you’d have to make money, you can buy houses, drive vehicles after passing an actual theoretical test, and a practical one where you had to drive to way-points without getting your vehicle damaged or getting pulled over, such things.)
I wanted to start my own server, because I found it a lot of fun. MTA: SA uses the Lua language for the “resources” you can put in its server. Using a leaked script as a base (I know, I know.. I was young, okay?), I started to explore this and learn Lua; these were my first bits of programming experience next to HTML. I then started to add my own stuff and the game server grew. This was also my first experience as a bit of a system administrator, having to maintain the Windows Server 2008 R2 virtual server I rented and often getting DDoS attacks from jealous people.

A different server became popular in mid-2014, which is when I closed mine down. It was a lot of fun and I learned some Windows system administration basics, which was very nice. At that point, I realized that IT is definitely the way I want to go. I really enjoyed maintaining it all and wanted to continue with it somehow.
I kept the virtual server but used it as a webserver instead; I created my very first website from a template, edited it and ran it from there. Then I discovered ShareX, which can upload screenshots directly to an FTP server, which I had pointed at my virtual server. (Today it points at an S3 bucket with a CloudFront CDN attached to it.)
Later I also wanted to learn Linux, so I got myself VMware Workstation and got my feet wet in the world of VMware and virtualization, using Workstation to create Linux VMs, starting with CentOS and Ubuntu Server 14.04 (I believe) and later expanding to ESXi and vCenter.

In mid-2017, I bought my very first real server (and it was an old one… and LOUD!). I got myself an HP ProLiant DL140 G3. Armed with two 4-core processors (no HT) and 32GB of memory, it was a jet engine in 1U. The server was made in 2008, so it was also not very power efficient, but it was the only thing I could afford at that time. At the beginning of 2018 I got a second one. (I still have both of them, but I no longer use them.)
Both of these ran ESXi, and through the internet I got some licenses that I could use to deploy vCenter. This is when I got really excited about and curious for the power of VMware products. I ran some basic workloads (ADDS, DNS, file server) on the two servers and that was all for a while.

Meanwhile, on my main laptop I was able to run small VMware labs, such as with NSX-V, though they were really slow and small because of the lack of CPU and memory.

At the beginning of 2019, I started getting some extra money monthly from my parents, and I wanted to rent a more powerful server, because with what I had, saving up for a 1000 euro server would take years. So I went with Hetzner and got myself one of their servers from the server auction. (I actually still rent this one to this day.) Having my own (rented) dedicated server, I got more experience with things like networking and remote networks. I deployed a pfSense VM on it with its own dedicated IP, connected my home LAN with that server’s LAN through an IPsec tunnel, and also added the ESXi host to my home vCenter.

By mid-2019 I had saved up enough money through various means (Patreon, for example, and birthday money) to get my first real server with actual power: a DL380 G6. I installed ESXi 6.7 on it (the DL140s only supported ESXi 6.0) and it’s still going strong today as my main host. It did go through an upgrade around April 2020, going from 144GB to 288GB of memory. It has two 2.8 GHz Intel Xeon X5660s, 6 cores/12 threads per CPU.

During this time, I’ve had labs with a lot of products and scenarios, such as VMware NSX-V, NSX-T, Cloud Director, HCX, Horizon and vSAN (that’s what I can think of as of writing this post), and non-VMware stuff like Palo Alto Networks virtual firewalls, GNS3 with Cisco/Nokia gear and more.

This has greatly improved my knowledge in a lot of fields: virtualization, system management, network management.

A few months before I got the memory upgrade, I rented a second, somewhat more powerful server with Hetzner, on which I run some more infrastructure VMs: Exchange Server 2019, cPanel and an extra web+database server. I also run vRealize Network Insight on that host.

As my income grew and I wanted to earn a little extra, I invested in my own Autonomous System number, AS208751 in my case. With it I rent a /44 IPv6 subnet that I announce from a virtual server in Amsterdam, and from there I tunnel it over to my remote and home servers. I started to sell management for virtual servers, and eventually web hosting and virtual servers as well. This also allows me to scale up: I rented a /24 IPv4 subnet which I use partially for the rental business, but also for myself, assigning a /28 block to each of my servers. This came with a lot more learning; I suddenly had to learn about BGP and how to do it securely, making sure to have route filters in place and possibly adding RPKI.

This is all going well, and as of 08/07/2020 (DD/MM/YYYY) I bought a second DL380 G6. It has two six-core CPUs (Xeon X5660) with 144GB of memory. This extra server will be dedicated to larger labs, like Cloud Foundation and vSphere with Kubernetes.

I’d also like to mention that during my elementary school period, from about the age of 10, I was already helping fellow classmates with issues on the computers at school and even helping carry out basic tasks for the system administrator. When I got to high school (which I did finish, just barely), my student account in their Active Directory had some extra permissions so that I could help out the system administrators there. I also found a leak of sorts: a share in SharePoint that was a bit too wide open. They were happy I caught it before anyone else did and possibly abused it.

On July 17th, I got the amazing news that I’m part of the vExpert program now! Here is the link to my entry in the vExpert directory. This is incredible, and much of it is thanks to Lindy Collier and Heath Johnson, who deserve a special shout-out.

There are still some solutions that I want to try out further, beyond the Hands-on Labs. However, due to limitations like money, I cannot do this yet. Solutions I would be very much interested in getting hands-on experience with are VMware Cloud on AWS and GCP’s VMware Engine. For the VMware Engine, I did try to request a quota increase, but the reply was that “the quota could not be assigned at this time”, which made me a bit sad, as I was excited to use some of my credits on it.

I hope that this long blog post gives you an insight into my past in IT and how I got into it. Feel free to email or tweet me any questions. Thank you so much for reading, and I hope to see you in a future post.

I’m a vExpert now!

July 17, 2020 | vExpert, VMware

Hi everyone,

I applied in the second-half vExpert application round and I got accepted!

This is my first time and I’m really, really happy! It was my fourth attempt, and here’s to hopefully more years of vExpert.

Here is my public profile page.

If you don’t know what vExpert is, you are missing out on a lot, quoting from the site:
“The VMware vExpert program is VMware’s global evangelism and advocacy program. The program is designed to put VMware’s marketing resources towards your advocacy efforts. Promotion of your articles, exposure at our global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap. The awards are for individuals, not companies, and last for one year. Employees of both customers and partners can receive the awards. In the application, we consider various community activities from the previous year as well as the current year’s (only for 2nd half applications) activities in determining who gets awards. We look to see that not only were you active but are still active in the path you chose to apply for.”

Thank you all for reading this short announcement!

See you in the next post!

Hi everyone,

Here is how I upgraded my NSX-T deployment from 3.0.0 to 3.0.1.

If you try to do an in-place update of an NSX-T host that has multiple host switches, you will get an error. You can use this trick to get around that limitation.

First, we log into the NSX-T Manager and go to the Upgrade tab under System. Here, we upload the upgrade bundle and continue with the upgrade.
We upgrade as normal until we get to the Hosts tab. We SSH into the host and first visit this URL: http://<NSX-Manager-IP>:8080/repository/metadata/manifest
This file contains the link to the zip file that we need to wget into /tmp on the ESXi host. Then we run:

esxcli software vib install -d /tmp/<name-of-the-zip-file>

Once the installation finishes, we refresh the Hosts tab; then we can continue on to the management nodes, and from there we can continue as normal.

However, once the upgrade finishes, you will need to refresh the page with F5 (sometimes Ctrl+F5 or Ctrl+R) so the error goes away.

Thank you for reading this quick post and I hope that it was useful to you.

Hi everyone,

In this post, we will be deploying a PyKMIP server that stores its keys in a database. Unlike with the Docker container, the keys are persisted, so they are not lost on a reboot.

So what exactly is this for? Well, in my use-case, I will be using this server to encrypt virtual machine files and drives.

For this tutorial, we will be using self-signed certs, and the keys will be stored in an SQLite database. This is not secure at all! However, it will allow you to evaluate and learn the KMS functions within vCenter.

What we will need:

  • Ubuntu Server 18.04 or 20.04 LTS installation ISO.
  • One virtual machine to install Ubuntu Server 18.04 or 20.04 LTS on.
  • A network connection to install some packages.

First, we create a virtual machine. This is just how it’s always done: you create an Ubuntu VM and install Ubuntu on it, which should be straightforward.

Now comes the fun part. The commands below should be executed as root (the sudo -i at the start takes care of that). Replace <$username> with your regular account’s username.

sudo -i
apt-get update
apt-get upgrade
mkdir /usr/local/PyKMIP
mkdir /etc/pykmip
mkdir /var/log/pykmip
chown <$username>: -R /usr/local/PyKMIP
chown <$username>: -R /etc/pykmip
chown <$username>: -R /var/log/pykmip
apt-get install python2-dev libffi-dev libssl-dev libsqlite3-dev python2-setuptools python2-requests
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt

Then fill out the form for the SSL certificate. The above certificate will be valid for 10 years (3650 days).
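To confirm the validity window afterwards, you can inspect the certificate with openssl x509. Here is a self-contained sketch; the temp directory and the example CN are assumptions so it can run non-interactively, and on the real VM you would point at /etc/ssl/certs/selfsigned.crt instead:

```shell
# Generate a throwaway self-signed cert (same parameters as the step above,
# but non-interactive via -subj) and then verify its subject and expiry.
tmp=$(mktemp -d)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout "$tmp/selfsigned.key" -out "$tmp/selfsigned.crt" -subj "/CN=pykmip.lab.local"
# Print the subject and the notAfter date to confirm the ~10 year validity:
openssl x509 -in "$tmp/selfsigned.crt" -noout -subject -enddate
```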

chown <$username>: -R /etc/ssl/private
chown <$username>: /etc/ssl/certs/selfsigned.crt

cd /usr/local
git clone

nano /etc/pykmip/server.conf

Paste the following into the file: (replace x.x.x.x with your VM’s IP)
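As an illustration, a minimal server.conf could look like the sketch below. This is based on PyKMIP's documented options, not necessarily the exact file from the post; the paths match the directories and certificate created earlier:

```ini
[server]
hostname=x.x.x.x
port=5696
certificate_path=/etc/ssl/certs/selfsigned.crt
key_path=/etc/ssl/private/selfsigned.key
ca_path=/etc/ssl/certs/selfsigned.crt
auth_suite=TLS1.2
enable_tls_client_auth=False
logging_level=DEBUG
database_path=/etc/pykmip/pykmip.database
```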


Almost done! Now we need to edit our crontab to start the service at startup.

crontab -e

Paste the following in on a new line:

@reboot ( sleep 30s; python2 /usr/local/PyKMIP/bin/ & )

This will make sure that it starts automatically on startup. Reboot your VM or type this in to start it as a background process:

python2 /usr/local/PyKMIP/bin/ &

Now we need to go to our vCenter. We click on the vCenter and go to configure. Then under Key Providers, we click “Add Standard Key Provider”.

Fill in a name under “Name” and under “KMS”. Type in the IP address under “Address” and the port number (5696 by default) under “Port”. Then click on “Add Key Provider”.

Once you have done that, we need to establish trust. Click on the Key Provider, then at the bottom click on the KMS server. Click on “Establish Trust” followed by “Make KMS trust vCenter”. Click on “KMS certificate and private key” and then on “Next”.

Now, we need to fill in the KMS certificate and private key. On the VM, run:

cat /etc/ssl/certs/selfsigned.crt

Paste the output (with the dashes!) under KMS certificate.

cat /etc/ssl/private/selfsigned.key

Paste the output (with the dashes!) under “KMS Private Key”.

Now click on “Establish Trust” and we’re done! Now you should be able to use your new KMS server in your lab!

If you want to somewhat tighten security, don’t use the self-signed certificate but use your own certificates and lock down access to the VM, since the database with all your VM keys sits as a file on the filesystem of the VM.

If you have any questions, feel free to contact me through email or Twitter.

Have a great day!

Hilltop CTF – Writeups

June 4, 2020 | CTF, Hilltop CTF, Security

Hi everyone,

A blog post on a different topic this time. I was a Content Engineer for the Hilltop CTF event.

Write-up for the Fuzz challenge.

Challenge name: Fuzz

Creator: MasterWayZ

Category: Analysis/Fuzzing


The user is given a URL to look at:


  1. Using a program like gobuster, we can try to see what directories exist: gobuster dir -u -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
  2. We get a 301 redirect for /penguins. From here, it’s a matter of running gobuster dir -u -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -x txt. The txt extension comes from the .txt file mentioned in the challenge.


  • We use gobuster to try to see what folder the flag is hidden in. dir specifies directory mode, -u specifies the URL and -w specifies the wordlist.
  • A 301 redirect for /penguins means that we’ve found something. Now we need to find the file in that directory. The new flag, -x, specifies the extension gobuster uses to try to find files.

Write-up for the Fuzzy challenge.

Challenge name: Fuzzy

Creator: MasterWayZ

Category: Analysis/Fuzzing, Attacks/Cracking


For this challenge, you need to fuzz a Flask webserver to start with, followed by brute-forcing a password and then automating or guessing the missing character in the flag.


  1. First we try to fuzz for directories. We can use gobuster for this: gobuster dir -u -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
  2. We will find that we get a 401 error for /email. Looking at it in the web browser, we see that we are given a clue that the username is admin, so we just need to brute-force the password. We can do this with hydra: hydra -l admin -P /usr/share/wordlists/rockyou.txt -s 39354 -f http-get /email. We find that the password is ‘michelle’.
  3. We see the flag, sort of. The ‘!’ in the flag has to be some other ASCII character. You can try all of them by hand or automate it. Once you’ve done that, you find that it is 5, making the flag HilltopCTF{FuzzyFuzzyFuzzyFuzzyFuzzy5_08b353dc330dcad5734172ff5009f5e6b3826d49fa7243f3e598effb85bef982}.


  • We first use gobuster to try to find out if there’s a hidden directory. dir specifies directory mode and -w specifies the wordlist.
  • We get a 401 for /email, which means that it is most likely asking for some kind of authentication. Visiting it with a browser shows that the username is ‘admin’.
  • We use hydra to try to crack the password. -l specifies the login, in this case ‘admin’; -P specifies a password file, in this case rockyou.txt; -s is the port number; -f tells hydra to stop after the first valid password is found; http-get means that we want to use HTTP GET requests and /email is the path to attack.
  • We get access to a partial flag. The ‘!’ in the flag is not valid; we need to see what ASCII or extended ASCII character we can put in there. You can try this by hand, or automate it.
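The automation can be sketched in shell. This only generates the candidate flags by substituting every printable ASCII character (codes 33 to 126) into the '!' position; the HTTP request you would use to verify each candidate is left out, since the challenge URL is not shown here:

```shell
# Generate a candidate flag for every printable ASCII character.
# The prefix/suffix come from the partial flag shown in the challenge.
prefix='HilltopCTF{FuzzyFuzzyFuzzyFuzzyFuzzy'
suffix='_08b353dc330dcad5734172ff5009f5e6b3826d49fa7243f3e598effb85bef982}'
for i in $(seq 33 126); do
  # Convert the decimal code to its character via an octal escape.
  c=$(printf "\\$(printf '%03o' "$i")")
  printf '%s%s%s\n' "$prefix" "$c" "$suffix"
done
```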

Write-up for the Injection challenge.

Challenge name: Injection

Creator: MasterWayZ

Category: Attacks/SQL Injection


For this you need to perform a SQL injection on a webform in order to dump the database.


  1. Go to the website. Look at the site and you will see a form. Proxy this form through something like burp, and save the request made.
  2. Use sqlmap with the request. For example, sqlmap -r injection.req.
  3. After running, it will find some ways to perform a SQL injection. After this, the easiest thing to do is to dump everything with sqlmap. Run: sqlmap -r injection.req --dump.
  4. Piece the flag together. It’s spread across three tables. It’s clear which pieces are the start and the end, because of the flag prefix at the start and the } at the end. The piece with no bracket in it at all is the middle one.


  • We visit the website and only see search-related things and a web form. This form submits an ID to index.php, which hints to the user that some kind of SQL query is being performed. We proxy the request through Burp and save it to a file. (Proxying through Burp is an easy way to get a request; we need the POST request.)
  • We use sqlmap to test the form for SQL Injection. The -r flag specifies the request and we give it the saved request.
  • When it has run for a while, we specify that we want to do MySQL tests only, as it successfully identifies the backend as a MySQL server. After a while, it will find multiple vulnerabilities.
  • We use sqlmap again with -r to specify the request file and --dump to dump all contents.
  • From here it’s a case of finding the three flag pieces. It’s easy to see the start and end piece (from the { and }) and there is only one middle piece.
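The reassembly step can be sketched in a few lines of Python. The piece values below are made-up placeholders standing in for the strings dumped from the three tables:

```python
# Hypothetical flag pieces as dumped from the three tables, in unknown order.
pieces = ["_middle_part_", "end_part}", "flag{start_part"]

# The start piece contains '{' but not '}', the end piece ends with '}',
# and the middle piece contains neither bracket.
start = next(p for p in pieces if "{" in p and "}" not in p)
end = next(p for p in pieces if p.endswith("}") and "{" not in p)
middle = next(p for p in pieces if "{" not in p and "}" not in p)

flag = start + middle + end
print(flag)
```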

Flag locations:

  • The first part is located in the federal table.
  • The second part is located in the taskforce table.
  • The third part is located in the Guests table.

Write-up for the Backstab challenge.

Challenge name: Backstab

Creator: MasterWayZ

Category: Analysis/Fuzzing, Cryptography/Cracking


For this challenge, you had to fuzz the initial webserver to see a passwd file and a secured area. You crack the hash and gain access to the secured area, where you fuzz for the flag.


  1. Use a program like gobuster to do the initial scan: gobuster dir -u -w /usr/share/wordlists/directory-list-2.3-medium.txt
  2. After a while, you will see /secure and /passwd. Download the passwd file and use a program like hashcat with rockyou.txt to crack it: hashcat -m 3200 encrypted.hash /usr/share/wordlists/rockyou.txt.
  3. The password will be found and you can log into /secure with the username and password. The username is given in the challenge description.
  4. Fuzz the /secure/ area and you will find /flag, which contains the flag. We can do that with gobuster dir -u -w /usr/share/wordlists/directory-list-2.3-medium.txt. (Note the /secure/ at the end of the URL, especially the trailing /!)


  • First we run gobuster in directory mode to fuzz some directories and files on the webserver. -u specifies the URL and -w specifies the wordlist.
  • Once we have downloaded the hash file, we use hashcat to crack it. -m 3200 selects mode 3200, which is bcrypt. Finally, we pass the wordlist, in this case rockyou, as the last argument.
  • Once we have the password, we use that to log into the /secure area.
  • Once in the /secure area, we use gobuster on /secure/ to check for hidden files, and it will find /secure/flag which contains the flag.
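Conceptually, the cracking step is just a dictionary attack: hash every wordlist entry and compare it against the target. A toy sketch using SHA-256 (hashcat does the same thing against bcrypt, only massively faster; the target hash and wordlist here are made up):

```python
import hashlib

# Hypothetical target: in the challenge this would come from the passwd file.
target = hashlib.sha256(b"sunshine").hexdigest()

wordlist = ["password", "123456", "qwerty", "sunshine", "letmein"]

# Try each candidate until one hashes to the target value.
found = next(
    (w for w in wordlist if hashlib.sha256(w.encode()).hexdigest() == target),
    None,
)
print(found)
```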

Write-up for the Johnny challenge.

Challenge name: Johnny

Creator: MasterWayZ

Category: Cryptography/Cracking


For this challenge, the user is given an encrypted .zip file with the flag inside.

The user has to crack the password, which is in rockyou.txt.

There are many ways to do this, here is one way:


  1. Install john
  2. Download the .zip file
  3. Run zip2john > encrypted-zip.john
  4. Run john --format=zip encrypted-zip.john --wordlist=/usr/share/wordlists/rockyou.txt
  5. Once it finishes, run john encrypted-zip.john --show
  6. Use the password, which in this case is ‘patricia’.
  7. Unzip the ZIP file, run unzip


  • What we just did above was use the power of John the Ripper to crack the password.
  • zip2john converts the zip file into a format that john can read.
  • The command after that forces the ZIP format on the encrypted john file and cracks it using the rockyou wordlist.
  • Finally, the --show is used to let john show the password.
  • Then you can extract the file and obtain the flag.

Write-up for the Julius challenge.

Challenge name: Julius

Creator: MasterWayZ

Category: Cryptography/Cracking


The best way would be to think of the ROT cipher family and then identify ROT47 as the one to use, as it is the only one that fits: ROT13 and ROT18 lack some of the characters used.

The rotation can be found by brute force; in this case it’s 13.

The title of this challenge was chosen to make the user think of the Caesar cipher and hint towards ROT ciphers. However, it’s also a bit misleading, as the user will see that the characters used in the encrypted message cannot appear in a Caesar-encoded text file.
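ROT47 rotates every printable ASCII character (codes 33 to 126) by 47 positions within that 94-character range, which is why it can represent symbols that ROT13 cannot. A minimal implementation, with a made-up ciphertext; since 47 is half of 94, applying the function twice returns the original text:

```python
def rot47(text: str) -> str:
    """Rotate printable ASCII characters (33-126) by 47 positions."""
    return "".join(
        chr(33 + (ord(c) - 33 + 47) % 94) if 33 <= ord(c) <= 126 else c
        for c in text
    )

# Hypothetical ciphertext, not the actual challenge file.
print(rot47("7=28L6I2>A=6N"))
```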

Write-up for the OhSINT challenge.

Challenge name: OhSINT

Creator: MasterWayZ

Category: OSINT/Forensics


For this challenge you had to perform some OSINT.

We start by looking at the EXIF data of the jpg file, which leaks a URL to a website. From there, you can find the pieces of the flag that are spread over the website.


  1. Run exiftool image.jpg and look at the comments.
  2. Access the website and look for the clues. You will find them here: one is located on the index page; if you press CTRL and A, or view the source, you will find it. The second one is located in the source of the index page as well, but can also be found by clicking the Maps button. The third one is found under the Blog button, by viewing the source.


  • We download the file and then run exiftool to look at the EXIF data of the file, which contains a comment with a URL to visit.
  • We visit the URL and are presented with a web page. Here, the idea is to view the source, use CTRL and A, and visit every page and click everything to find the three hidden flag pieces.
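exiftool does the heavy lifting here, but the comment it finds is just a COM (0xFFFE) segment inside the JPEG. A rough sketch of pulling comments out of a JPEG by hand (simplified: it assumes every segment has a standard length field and stops at the start-of-scan marker; the embedded URL is a placeholder):

```python
import struct

def jpeg_comments(data: bytes) -> list[str]:
    """Extract COM (0xFFFE) segment text from a JPEG byte string."""
    comments = []
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: stop parsing
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xFE:  # COM segment
            comments.append(data[i + 4:i + 2 + length].decode("latin-1"))
        i += 2 + length
    return comments

# Hypothetical minimal JPEG: SOI + one COM segment + EOI.
payload = b"https://example.com/hidden-site"
jpeg = b"\xff\xd8\xff\xfe" + struct.pack(">H", len(payload) + 2) + payload + b"\xff\xd9"
print(jpeg_comments(jpeg))
```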

Flag locations:

  • The first part of the flag is hidden as a near-white text on the index.html page.
  • The second part is under the Blog button at blog.html, view the source and see the flag in an HTML comment.
  • The third part is back on the index.html page, under the Location Maps button.

Write-up for the Fuzzy challenge.

Challenge name: Fuzzy

Creator: MasterWayZ

Category: Steganography


The user downloads the image.jpg file, opens it in a text editor, finds the ascii85-encoded flag and decodes it.


  1. We download the file using wget.
  2. Running strings image.jpg is one of the ways to get the flag.
  3. Identify that the flag is ascii85 encoded and decode it.


  • wget followed by the URL is used to download a file.
  • cat, or strings (and many more tools) are used to display the contents of a file. In this case, both work as the flag is hidden at the bottom of the image.
  • One of the ways to identify that it is ascii85 is because of the characters used in the encoding. You can use a local tool or online tool to decode it and get the flag.
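Decoding Ascii85 doesn’t require an online tool; Python’s standard base64 module handles the common Adobe-style variant via a85decode. A quick sketch with a made-up flag standing in for the string found at the bottom of image.jpg:

```python
import base64

# Hypothetical flag, encoded the way it would appear inside the image.
encoded = base64.a85encode(b"flag{example}").decode()

# Ascii85 uses the characters '!' through 'u'; several of them
# (like <, ;, @) never appear in base64 output, which is the giveaway.
decoded = base64.a85decode(encoded).decode()
print(encoded, "->", decoded)
```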

If you have any questions, please let me know. I’ll be seeing if I can release the files and/or containers somehow.

Have a great day!