RECAPPED: Red vs. Blue Spring 2022


How SWIFT pulled off its first-ever in-person Red vs. Blue competition at Cal Poly Pomona. Wasabi style.

Published on March 17, 2022 by baseq


12 min read

SWIFT Spring Red vs. Blue

On April 16th, 2022, SWIFT hosted its second-ever Red vs. Blue (RvB) event and its first-ever in-person RvB at Cal Poly Pomona (CPP). While there were many mistakes, slip-ups, and goofs, it was a remarkable event for SWIFT, the culmination of hundreds of hours of work from many individuals.

SWIFT Red vs. Blue is a simulated, adversarial cyber-defense competition that puts participants in the shoes of an incident response team for a vulnerable, fictitious company. RvB provides an intense, high-density learning experience for participants to showcase their skills and what they’ve learned in both technical and business operations.

Huge shoutout to CPP’s Collegiate Cyber Defense Competition (CCDC) team and Collegiate Penetration Testing Competition (CPTC) team, who helped put everything together and contributed immensely during the event as part of the Green Team and Red Team. And a huge thanks to the Offensive Security Society (OSS) from Cal State Fullerton for joining the red team during this RvB!

[Image: the red team secretly watching the blue teams from behind]

Event Overview

Summary

Each RvB has a unique theme to give context to the entire event and, you know, to have fun. The theme for this year’s spring event, “Wasabi Factory,” was loosely inspired by a CPP and CCDC alumnus whose alias is “wasabi”. The Wasabi Factory was going through a security audit, and the factory’s CEO, Abe Wozz, a typical scatter-brained-but-well-meaning character, was understandably frustrated and stressed. He piled tasks and demands onto his IT team to hurriedly secure the company’s systems and pass the audit.

Participants were dropped into an unknown environment where they had to inventory their network and secure what they could on the fly. In addition to securing their network, teams had to fulfill business tasks called injects to showcase their understanding of business objectives, security policy, and other soft skills relating to security and IT management. This event’s injects included documenting inventory reports, giving presentations, drafting a BYOD policy, and writing up an incident response form.

Network Environment

Of course, there would be no recap without the network environment, so here’s a summary:

NAME      OS                   SERVICES       INTERNAL IP
cake      Windows 7            SSH, HTTP      192.168.10.99
joe       Windows Server 2016  AD, DNS, HTTP  192.168.10.5
peas      Windows Server 2019  AD, DNS, SSH   192.168.10.15
backbone  Debian 11            MySQL, SSH     192.168.10.2
paste     Ubuntu 16.04         HTTP, HTTP     192.168.10.44
blob      Ubuntu 14.04         HTTP           192.168.10.207

In addition to these 6 boxes, there was also a pfSense router in every team’s pod. (The pfSense was also in scope!)
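
To give a rough idea of what inventorying that network looks like in practice, here’s a minimal sketch that probes the hosts above with plain TCP connects. It’s illustrative only, not part of the competition tooling, and the port numbers are assumptions based on the services listed in the table.

import socket

# Hosts from the table above; the candidate ports are assumptions based on the
# listed services (SSH 22, DNS 53, HTTP 80/8080, AD/LDAP 389, MySQL 3306).
HOSTS = {
    "cake":     ("192.168.10.99",  [22, 80]),
    "joe":      ("192.168.10.5",   [53, 80, 389]),
    "peas":     ("192.168.10.15",  [22, 53, 389]),
    "backbone": ("192.168.10.2",   [22, 3306]),
    "paste":    ("192.168.10.44",  [80, 8080]),
    "blob":     ("192.168.10.207", [80]),
}

def is_open(ip: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (ip, ports) in HOSTS.items():
        open_ports = [p for p in ports if is_open(ip, p)]
        print(f"{name:<9} {ip:<15} open: {open_ports}")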

Scoring and the Engine

In past RvBs, SWIFT used the scoring engine made by pwnbus straight out of the box. The scoring engine is very fleshed out and has plenty of quality-of-life features that we love, but it is limited to service checks only. I took it upon myself to add scored configuration settings (SCS) to the scoring engine. If you’ve ever competed in CyberPatriot, SCS is essentially the same concept as the scoring report.

You can find this version of our scoring engine here.

Development and Execution

Writing the Recipe

This RvB development cycle was a bit different from others: CPP’s CCDC team had just come off a rough performance at the WRCCDC Regional Finals but was inspired by their one solace: wasabi. No, not the green Japanese paste. Well, kinda. It’s the alias of one of SWIFT’s spiciest alumni. The development team unanimously voted for the Wasabi Factory theme, and from there set out on the next phase: designing the environment.

Picking out operating systems is always an easy task. We aimed for an even number of boxes, split 50/50 between Linux and Windows. As shown above, the versions we picked are all over the place, with some being pretty recent (e.g., Debian 11) and others rather legacy (e.g., Windows 7). This mimics the reality of many small to mid-size companies that can’t regulate their IT infrastructure as strictly as big businesses do. Of course, to accommodate the crunched time frame of play (4 hours), the development team took some creative liberties to make the environments a little extra vulnerable.

First, we discussed the contents of the Linux boxes (specifically backbone). It was proposed early on that backbone would host the router as an unscored Dockerized service. Unfortunately, things didn’t turn out the way we wanted, and we scrapped the idea in favor of a separate pfSense box. While most boxes took several days to get up and running, cake and blob were checked off the list fairly quickly (credit to Nigerald and cove).

After installing and configuring all the scored services, we added many quick vulnerabilities (the low-hanging fruit) that are designed to be fixed fairly quickly. Many of these vulnerabilities double as potential scored configuration setting (SCS) points: typical fixes like removing common PHP webshells or correcting misconfigurations such as PermitRootLogin for SSH.
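
For a concrete (and purely illustrative) picture of that low-hanging fruit, a quick triage script in the spirit of what blue teams ended up doing might flag suspicious PHP files and a permissive PermitRootLogin. The web-root paths and patterns below are assumptions, not the actual planted vulnerabilities:

import re
from pathlib import Path

# Illustrative locations and patterns; real pods had their own web roots
# and their own planted vulnerabilities.
WEB_ROOTS = [Path("/var/www/html"), Path("/var/www")]
SUSPICIOUS = re.compile(r"(eval\s*\(|base64_decode\s*\(|shell_exec\s*\()")

def find_possible_webshells():
    """Flag PHP files containing common webshell primitives."""
    hits = []
    for root in WEB_ROOTS:
        if not root.exists():
            continue
        for php in root.rglob("*.php"):
            try:
                if SUSPICIOUS.search(php.read_text(errors="ignore")):
                    hits.append(php)
            except OSError:
                pass
    return hits

def root_login_permitted(config: str = "/etc/ssh/sshd_config") -> bool:
    """Return True if sshd_config explicitly allows root login."""
    try:
        text = Path(config).read_text()
    except OSError:
        return False
    match = re.search(r"^\s*PermitRootLogin\s+(\S+)", text, re.MULTILINE)
    # OpenSSH defaults to prohibit-password, so only an explicit "yes" is flagged.
    return bool(match) and match.group(1).lower() == "yes"

if __name__ == "__main__":
    print("Possible webshells:", find_possible_webshells())
    print("PermitRootLogin yes:", root_login_permitted())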

Refining The Tools

The scoring engine was tackled by dewewRHINO and me. Implementing SCS in the scoring engine took a lot longer than we planned. The frontend went quickly since we could mostly mimic the structure of the engine’s existing web pages. The main struggle was finding all the odds and ends that SCS needed to be inserted into. While most of the scoring engine is modular with respect to service scoring, some parts were hardcoded to expect services as the only possibility, while others performed conditional checks on whether the thing being scored was a service.

To remedy this issue, we essentially created a new section in the competition.yaml file for defining our new SCS objects. These objects are then loaded into the engine, which we extended with the proper checks and handlers. Each SCS check uses SSH to log in, runs a particular command, and determines whether the output matches the expected content.

scs:
- configuration_text: SSH no longer permits root login
  check_name: SCSCheck
  hostname: "blob"
  host: "172.16.1.207"
  port: 22
  accounts:
  - username: wasadmin
    password: widesabi
  environments:
  - matching_content: "PermitRootLogin no"
    properties:
    - name: commands
      value: cat /etc/ssh/sshd_config
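
As a minimal sketch of the flow that entry describes (SSH in, run the command, compare the output), an SCS check could look something like the following. This is not the engine’s actual implementation; it simply reuses the host, account, and matching_content values from the example above and assumes the Paramiko library is available:

import paramiko

def run_scs_check(host: str, port: int, username: str, password: str,
                  command: str, matching_content: str) -> bool:
    """SSH in, run the configured command, and award the point only if the
    expected content appears in the output (mirrors the YAML definition above)."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, port=port, username=username,
                       password=password, timeout=10)
        _, stdout, _ = client.exec_command(command)
        output = stdout.read().decode(errors="ignore")
        return matching_content in output
    except (paramiko.SSHException, OSError):
        # An unreachable host or bad credentials means no SCS point this round.
        return False
    finally:
        client.close()

if __name__ == "__main__":
    # Values taken from the scs entry above.
    passed = run_scs_check("172.16.1.207", 22, "wasadmin", "widesabi",
                           "cat /etc/ssh/sshd_config", "PermitRootLogin no")
    print("SCS 'SSH no longer permits root login':", "pass" if passed else "fail")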

In addition, I created a simple JavaScript Discord bot, RvBMo, to orchestrate the SWIFT RvB Competition Discord. Technically, I created RvBMo twice; the first time, I accidentally lost all of the code. Ouch. RvBMo was a welcome addition that saved us a lot of precious time setting up the Discord server because it allowed us to rapidly spin up and tear down teams.
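
RvBMo itself was written in JavaScript, but the underlying pattern is simple: each team gets a role plus a private category of channels that can be created and deleted on demand. The sketch below shows that pattern with discord.py purely for illustration; the command names and channel layout are assumptions, not RvBMo’s actual code.

import discord
from discord.ext import commands

# A minimal sketch of the spin-up/tear-down pattern, not RvBMo's actual code.
intents = discord.Intents.default()
intents.message_content = True  # required for prefix commands in discord.py 2.x
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
@commands.has_permissions(administrator=True)
async def spinup(ctx: commands.Context, team_name: str):
    """Create a team role and a private category only that team can see."""
    guild = ctx.guild
    role = await guild.create_role(name=team_name)
    overwrites = {
        guild.default_role: discord.PermissionOverwrite(view_channel=False),
        role: discord.PermissionOverwrite(view_channel=True),
    }
    category = await guild.create_category(team_name, overwrites=overwrites)
    await guild.create_text_channel("chat", category=category)
    await guild.create_voice_channel("voice", category=category)
    await ctx.send(f"Spun up {team_name}.")

@bot.command()
@commands.has_permissions(administrator=True)
async def teardown(ctx: commands.Context, team_name: str):
    """Delete the team's category, channels, and role."""
    guild = ctx.guild
    category = discord.utils.get(guild.categories, name=team_name)
    if category:
        for channel in category.channels:
            await channel.delete()
        await category.delete()
    role = discord.utils.get(guild.roles, name=team_name)
    if role:
        await role.delete()
    await ctx.send(f"Tore down {team_name}.")

# bot.run("YOUR_BOT_TOKEN")  # token intentionally omitted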

Finishing Touches

Monday (T-5 days)

In the last week leading up to the RvB, the development team focused on final testing and scoring engine checks. We (obviously) wanted to make sure that every team’s pod environment worked 100% before handing it to them. However, the scoring engine kept running into bug after bug relating to service checks, the newly implemented SCS, and password changes. This put us a day behind, and we weren’t able to start testing until Thursday, only two days before the event. Eventually, we ran through all our snapshots, SCS, and various service scenarios to make sure the engine would respond appropriately.

Friday (T-1 day)

Friday was reserved for deploying the environments and ensuring the team pods functioned properly. Deploying just a handful of pods ended up taking hours. At first, we thought this was just strange behavior, but when we checked the pods that had completed in the afternoon, we realized none of the boxes were accessible. The next several hours were spent troubleshooting, with no luck. We decided to physically move our entire TrueNAS server directly into the data center we would be hosting the competition out of, but then came another issue: compatibility. The existing virtual machines were built for ESXi 6.7, but the new data center was running 7.0. Just before midnight, a handful of members from the operations team decided to buckle down and meticulously recreate the entire environment in just 4 hours so that it would be ready as promised to those who had signed up. At approximately 5 AM on Saturday morning, our cloning process began, and the team finally went to get some rest.

Saturday (T-0)

When the team awoke and assembled in our data center at 8 AM the next day, we realized there had been yet another kerfuffle. The night before, we had cloned all the team pods onto a single ESXi host instead of distributing them among our three ESXi hosts, which significantly hampered performance. Given that our ESXi hosts had HDDs instead of SSDs, any attempt to create new boxes, migrate virtual machines, and so on would take 20 to 30 minutes per action. If we performed the wrong action, we would have to wait about half an hour until we could try another. We still ran into compatibility issues and other errors that we didn’t have the time or energy to troubleshoot. Our time was limited; our options were narrow. Through a slow and painful process of trial and error, we finally figured out a system to manually migrate team pods to the other ESXi hosts. The only problem: we tested this on one pod, and it took 45 minutes to fully migrate.

We had to do this for 12 pods.

This was our single shot.

We spent all of our efforts from then on migrating the 12 team pods across our three ESXi hosts, in batches of 2 to 3 pods at a time. Meanwhile, we worked on fixing our scoring engine. We had originally planned to buy time by running an egg-drop challenge as the first RvB inject. But miraculously, after each batch of migrations, green checks started popping up like gophers. Virtual machines became responsive. The scoreboard lit up more and more green until it was greener than Washington state.

After this turn of events, things were finally looking good. We were even ahead of schedule.

After completing our last migration batch, we did one final check with the scoring engine and were greeted with the holy grail of RvB development (insert holy grail screenshot). At 12:45 PM, just 15 minutes before the event, we were ready to play.

Game Time

The event went very smoothly thanks to the morning’s preparation. Participants had their technical skills and checklists battle-tested as they scrambled to secure their network environments. In addition, teams had their technical writing, presentation, and collaborative skills tested right out of the gate with a specially crafted inject that forced all the teams to work together instead of pitting them against each other.

As the afternoon progressed, some teams started to break away from the pack, but the red team tried their best to level the playing field. Capitalizing on the vulnerabilities left by the development team, they popped almost every box (meaning the red team got admin access) within an hour. However, as teams began to revert their boxes and adapt their security strategies, some services were successfully defended.

[Image: a blue team attempting to change their passwords on nosecurity.blog]

In the end, only one team could win by scoring the most points. But since everyone learned something, the SWIFT Spring 2022 RvB was a victory in our books.

[Image: the winning team]

Congratulations to the winning team (above)!

Takeaways

SWIFT is still hammering out the kinks in RvB. We remain eager to hear all the feedback we can get so that we can build the best version of RvB possible and provide valuable experiences to those who are starting their journeys into cybersecurity.

Everyone at SWIFT was able to get involved in some capacity with the production of RvB. Whether as a developer, a red teamer, or a member of the marketing team, it was a truly concerted effort that showcased the power of SWIFT.

As the inaugural Red vs. Blue director, it was an honor to watch both the organizing team and the participating teams blossom into the next generation of cyber folk. I hope for the continued success and expansion of RvB. As I step down as director, who knows: we may be witnessing the birth of the next great national-level cyber defense competition!