How we hosted slntCTF20

General · Dec 18, 2020

In this edition the CTF was local only, but we didn't miss the opportunity to test an infrastructure ready for an international CTF.
It was all an experiment, and we kept a backup plan ready to fall back to a configuration similar to last year's (we know ourselves, and we know how our experiments usually turn out).
Fortunately, and against all odds, everything worked out and we couldn't be happier.

The infrastructure

(Diagram of the network infrastructure)
The whole infrastructure was hosted as a set of virtual machines on a Proxmox server, kindly provided by the LIIS laboratory.
It was the first time I got to play with a truly powerful server, so I designed a robust, advanced, and scalable infrastructure to go with it. (I had to play)

This year's big guest was Kubernetes, the container orchestrator we used to deploy the challenges. All the challenges that needed to expose a network service were dockerized.
It was difficult to set up at first, but once I figured out how it works it was all downhill. And it's knowledge that is always useful in the real world!
Kubernetes allowed us to allocate specific resources to each challenge and recreate the containers at blazing speed in case of a failure. It would also have allowed us to easily create replicas to handle more traffic, but we didn't need that.
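
To give an idea of what that looks like, here is a minimal sketch of a per-challenge Deployment with resource limits and a replica count. It is not one of our actual manifests; the challenge name, image, port, and resource values are all invented:

```yaml
# Minimal sketch of a per-challenge Deployment (name, image and limits are examples).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-pwn-challenge
spec:
  replicas: 1                  # could be raised to handle more traffic
  selector:
    matchLabels:
      app: example-pwn-challenge
  template:
    metadata:
      labels:
        app: example-pwn-challenge
    spec:
      containers:
        - name: challenge
          image: registry.example.local/example-pwn-challenge:latest
          ports:
            - containerPort: 1337
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 250m        # cap each challenge so one container can't starve the others
              memory: 128Mi
```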

Another important component for the CTF was CTFd, the famous open-source framework where users could register, see the list of challenges, and submit flags.
It was hosted on a separate VM and deployed with Docker. CTFd relies on a persistent MySQL database, so Kubernetes didn't seem like an easy fit for it.
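
For reference, a setup like this usually boils down to a small Compose stack with the CTFd container and a MySQL-compatible database on a persistent volume. The snippet below is only a trimmed sketch under that assumption (the official CTFd compose file adds more pieces, such as a Redis cache), and the credentials and ports are placeholders:

```yaml
# Sketch of a CTFd + MariaDB stack with a persistent database volume (values are examples).
version: "3"
services:
  ctfd:
    image: ctfd/ctfd
    ports:
      - "8000:8000"                                      # CTFd's web interface
    environment:
      - DATABASE_URL=mysql+pymysql://ctfd:ctfd@db/ctfd   # points CTFd at the db service
    depends_on:
      - db
  db:
    image: mariadb:10.4
    environment:
      - MYSQL_ROOT_PASSWORD=ctfd
      - MYSQL_USER=ctfd
      - MYSQL_PASSWORD=ctfd
      - MYSQL_DATABASE=ctfd
    volumes:
      - db_data:/var/lib/mysql                           # keeps the database across restarts
volumes:
  db_data:
```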

Last but not least, we hosted an OPNsense VM, which took care of load balancing, port mapping, and SSL certificates.

The Kubernetes cluster

The cluster consisted of three VMs: a master with 2 cores and 16GB of RAM and two workers with 1 core and 8GB of RAM each.
CPU usage stayed almost constant from before the CTF until the end, averaging around 30%, partly due to the relatively low number of participants.
Yes, I know, that is quite high considering that the services were idle before the start. We are investigating; next time we won't make the same mistakes.

In case of necessity we were ready to deploy another worker; in fact, another upside of Kubernetes is that you can add and remove workers with ease!
We prepared a VM template with Kubernetes preinstalled. To add a new worker to the cluster we would just have to clone the template and run a single command to join it to the master. Easy and fast scalability.

Load balancing and port mapping

We chose to use a firewall because we needed port mapping between different machines and the ability to block IP addresses in case some player flooded the network with requests.
OPNsense was a good choice because it also let us configure Let's Encrypt and HAProxy as a load balancer for the services. The load balancer was fundamental in my first idea for the Kubernetes cluster: I wanted to build a High Availability cluster, but that requires at least three master nodes. We ended up with a single master node because an HA cluster was too much; still, the load balancer remained useful to redirect incoming traffic to the nodes.

The services in the cluster were exposed in NodePort mode, so we created a rule for HAProxy (the load balancer) to redirect the ports 31100-31199, reserved for the challenges, to one of the three node IPs.
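
To make the mapping concrete: each challenge's Service pins a fixed nodePort inside that range (which fits within Kubernetes' default 30000-32767 NodePort window), and HAProxy forwards the whole 31100-31199 range in TCP mode to the nodes. The Service below is only an illustrative sketch; the name and ports are invented:

```yaml
# Sketch: expose a challenge on a fixed NodePort in the 31100-31199 range (values are examples).
apiVersion: v1
kind: Service
metadata:
  name: example-pwn-challenge
spec:
  type: NodePort
  selector:
    app: example-pwn-challenge   # matches the Deployment's pod labels
  ports:
    - port: 1337                 # service port inside the cluster
      targetPort: 1337           # container port
      nodePort: 31101            # fixed port reachable on every node, forwarded by HAProxy
```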

Ports 80 and 443 were also handled by HAProxy, to expose CTFd. In this case the reverse proxy was not used to balance load, since the platform was hosted on a single VM, but it applied the SSL certificate from Let's Encrypt, renewed by an automated script.

Tools

A complex setup is nothing without the appropriate tools to use it with.
In particular, we needed a tool to push the challenge list to CTFd and a tool to deploy the containers to the cluster.

To interact with CTFd we used ctfcli, a tool created by the CTFd team itself. It lets us push the challenges to the platform together with their information (name, description, points...) and their attachments.
Unfortunately, the tool is still at an early stage and only lets you do that by running a command for each challenge. With more than 30 challenges, that would have taken a long time, so we used and adapted this Python script from the csivituv team, which organized another CTF.
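
For context, ctfcli describes each challenge with a challenge.yml file in the challenge's directory. The example below is only a sketch following the upstream ctfcli format, with an invented challenge, flag, and attachment:

```yaml
# Sketch of a ctfcli challenge.yml (challenge, flag and file names are invented).
name: "baby-pwn"
author: "r00tstici"
category: "pwn"
description: "Overflow the buffer and read the flag."
value: 100
type: standard
flags:
  - "slnt{example_flag}"
files:
  - "dist/baby-pwn.zip"
state: visible
version: "0.1"
```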

To deploy the challenges to the Kubernetes cluster we used ctfup, a tool created by csivituv (thanks again!), which automatically built the containers, pushed them to a Docker registry, and created the deployments in the cluster based on a .yml file for each challenge, the same one we used for ctfcli.

Conclusions

This year's network design was a success! Next year we will probably use a similar configuration, maybe in the cloud; it would also be easy to replicate with any of the major cloud providers.

We have great ideas for the future.
STAY TUNED!


Riccardo Tornesello

Co-founder of the r00tstici team and Information Engineering student at the University of Salento.