What is Red Teaming?
Red Teaming is the process of using tactics, techniques and procedures (TTPs) to emulate a real-world threat, with the goal of measuring the effectiveness of the people, processes and technologies used to defend an environment.
Red teams provide an adversarial perspective by attacking assumptions made by an organization and defenders. Assumptions such as _"we're secure because we patch"_, _"only X number of people can access that system"_, and _"technology Y would stop that"_ are dangerous and often don't stand up to scrutiny. By challenging these assumptions, a red team can identify areas for improvement in an organization's operational defense.
Even though there is some cross-over with penetration testing, there are some key differences that I’d like to highlight.
A typical penetration test will focus on a single technology stack — either because it's part of a project lifecycle or part of a compliance requirement (e.g. monthly or annual assessments). The goals are to identify as many vulnerabilities as possible, demonstrate how those may be exploited, and provide some contextual risk ratings. The output is typically a report containing each vulnerability and remediation actions, such as installing a patch or reconfiguring some software. There is no explicit focus on detection or response, no assessment of people or processes, and no specific objective other than "exploit the system(s)".
In contrast, red teams have a clear objective defined by the organization — whether that be to gain access to a particular system, email account, database or file share. After all, organizations are defending "something", and compromising the confidentiality, integrity and/or availability of that "something" represents a tangible risk, be it financial or reputational. A red team will also emulate a real-life threat to the organization. For example, a finance company may be at risk from known FIN groups. In the case of a penetration test, a tester will simply use their personally preferred TTPs, whereas a red team will study and re-use (where appropriate) the TTPs of the threat they're emulating. This allows the organization to build detections and processes designed to combat the very threat(s) they expect to face. Red teams will also look holistically at the overall security posture of an organization and not be laser-focused on one specific area — this of course includes people and processes as well as technology. Finally, red teams put a heavy emphasis on stealth and the "principle of least privilege". To challenge the detection and response capabilities, they need to reach the objective without getting caught — part of this is not going after high-privileged accounts (such as Domain Admin) unnecessarily. If "Bob from Accounting" can access the objective, then that's all they'll do.
What is Operational Security (OPSEC)?
Operations Security (OPSEC) is a term originally coined by the US military and adopted by the information security community. It’s generally used to describe the “ease” by which actions can be observed by “enemy” intelligence.
From the perspective of a red team, this would be a measure of how easily your actions can be observed and subsequently interrupted by a blue team. "Ease" is probably not the best word to describe it, since it's relative to the skills and knowledge of those defenders. However, given the overall threat landscape, the body of public knowledge and even consultation with the client, you can make some predictions regarding their capabilities.
Every action you take will leave indicators, but it’s important to have a good sense of how well those indicators are understood and what the likelihood is that the defenders will see and/or respond to them. Throughout this course you will see notes that attempt to highlight “bad” OPSEC and how it might be improved to reduce the likelihood of detection.
It should also not be assumed that OPSEC works in only one direction. Red teamers may gain access to internal systems used by defenders — such as their Security Information and Event Management (SIEM) system, ticketing systems, response/procedure documentation, email, real-time chat and so on. This intelligence can be used to operate in specific ways that the blue team is blind to, or unable to deal with.
Both red and blue teams should assume that their actions are being monitored and disrupted by the opposite side. Wise operators also assume that the team they're up against is better than they are.
Phases of an Engagement
An overall engagement can be broken down into three main phases:
The engagement begins by performing external reconnaissance against the target, gathering information such as public-facing applications, IP ranges, domain names, technologies and products used, employees, organizational structure, service providers, suppliers and more. This information is then used to plan an attack on the perimeter.
Once a foothold has been obtained in the target organization, the team will perform internal reconnaissance. The aim is to understand everything possible about the environment, including network topology, internal systems & processes and defensive products & capabilities. They may also install backdoors on the foothold(s) to ensure they can maintain persistent access to the environment without having to re-perform the initial compromise steps.
Planning & Client Engagement
It cannot be overstated how important it is to properly plan a red team engagement — not just for achieving good outcomes, but for ensuring everybody involved is protected. The majority of that planning is the responsibility of the red team leads, and although this course is aimed at operators, it's useful to get an early understanding of that process.
Command & Control
Command & Control, often abbreviated to C2 or C&C, is the means by which an adversary can perform actions within a compromised environment. During the initial compromise phase, a malicious payload is executed that will call back to infrastructure controlled by the adversary. This payload is commonly referred to as an “implant”, “agent” or “RAT” (Remote Access Trojan). This infrastructure is the central control point of an engagement and allows an adversary to issue commands to compromised endpoints and receive the results.
The capabilities of these implants will vary between frameworks, but in general they have the ability to execute different flavors of code and tooling to facilitate the adversarial objective(s), such as shell commands, PowerShell, native executables, reflective DLLs and .NET; as well as network pivoting and defense evasion.
Implants will most commonly communicate with this infrastructure over HTTP(S) or DNS, and can even talk to each other over a peer-to-peer mesh using protocols such as SMB and TCP. These protocols are utilised because they will typically blend into most environments.
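At its core, every implant follows the same poll-execute-report rhythm. The sketch below is a generic, hypothetical illustration of that loop — the function names, parameters and transport callbacks are all assumptions for the sake of the example, not any particular framework's API:

```python
import time
from typing import Callable, Optional

def beacon_loop(fetch_task: Callable[[], Optional[str]],
                run_task: Callable[[str], str],
                send_result: Callable[[str], None],
                sleep_seconds: float,
                max_checkins: int) -> None:
    """Generic implant check-in loop: poll the controller for a queued task,
    execute it, report the output, then sleep until the next check-in."""
    for _ in range(max_checkins):
        task = fetch_task()              # e.g. an HTTP(S) GET, a DNS query, or an SMB read
        if task is not None:
            send_result(run_task(task))  # e.g. an HTTP(S) POST back to the controller
        time.sleep(sleep_seconds)
```

Real implants layer encryption, encoding and traffic shaping on top of this loop so that each leg of the exchange blends into normal network activity.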
Many commercial and open-source C2 Frameworks exist including Cobalt Strike, SCYTHE, Covenant, PoshC2, Faction, Koadic, Mythic and the Metasploit Framework. Each framework has their own sets of strengths and weaknesses — the C2 Matrix is a curated list of frameworks that can be filtered by their features and capabilities.
Starting the Team Server
- Access the console of attacker-windows.
- Open PuTTY, select the kali saved session and click Open.
- Start tmux.
- This will ensure the Team Server remains running if you close PuTTY.
- Change directory to /opt/cobaltstrike
- Launch the teamserver binary.
```
root@kali:/opt/cobaltstrike# ./teamserver 10.10.5.120 Passw0rd!
[*] Generating X509 certificate and keystore (for SSL)
[+] Team server is up on 0.0.0.0:50050
[*] SHA256 hash of SSL cert is: eadd46ff4f74d582290ce1755513ddfc0ffd736f90bed5d8d662ee113faccb43
```
- `10.10.5.120` is the IP address of the Kali VM.
- `Passw0rd!` is the shared password required to connect to the Team Server.
- Open the Cobalt Strike GUI.
- Enter kali or 10.10.5.120 into the Host field.
- Enter your favourite hacker pseudonym in the User field.
- Use the password you set when starting the Team Server.
- Click Connect.
- Ensure the server’s fingerprint matches before clicking Yes.
The Team Server allows multiple clients to connect to it at the same time. However, if you have remote team members, you shouldn't expose port 50050 directly to the Internet. Instead, a secure remote-access solution (such as a VPN or SSH tunnel) should be used.
A "listener" is a host/port/protocol combination that "listens" for incoming communication from Cobalt Strike's payload, Beacon. The two main flavors of listeners are egress and peer-to-peer.
The egress listener that you will use the majority of the time is the HTTP listener. This listener acts like a web server, where the Team Server and Beacon will encapsulate their communications over HTTP. The "appearance" (bodies, headers, cookies, URIs, etc.) of this HTTP traffic can be tightly controlled using Malleable C2 Profiles, which we will cover in more detail towards the end of the course.
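To give a flavor of what a Malleable C2 Profile looks like, here is a small fragment based on the publicly documented profile syntax. The URI, header and transform values are arbitrary examples for illustration, not a recommended profile:

```
http-get {
    set uri "/jquery-3.3.1.min.js";

    client {
        header "Accept" "*/*";

        metadata {
            base64url;
            prepend "__cfduid=";
            header "Cookie";
        }
    }

    server {
        header "Content-Type" "application/javascript; charset=utf-8";

        output {
            mask;
            base64;
            print;
        }
    }
}
```

Here the Beacon's metadata is encoded and smuggled inside a cookie, while the Team Server's responses masquerade as a JavaScript file — to a casual observer, the exchange looks like a browser fetching a library from a CDN.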
Peer-to-peer listeners allow Beacons to chain their communications together over SMB or TCP. These are particularly useful in cases where a machine that you compromise cannot reach your Team Server directly over HTTP.
To create an HTTP listener, go to Cobalt Strike > Listeners and a new tab will open. Click the Add button and a New Listener dialogue will appear. Select Beacon HTTP as the payload type and enter a descriptive name. This listener name is used in several Beacon commands (such as when moving laterally), so make sure it describes the listener well. Click the + button next to HTTP Hosts, which should autocomplete to the Kali IP address (10.10.5.120). This is fine, so click OK. Leave everything else as it is and click Save.
To generate a payload for this listener, go to Attacks > Packages > Windows Executable (S).
Cobalt Strike is able to generate both staged and stageless payloads. Whenever you see (S) within the UI, it's an indication that it's using a stageless payload. Select the HTTP listener created previously, select Windows EXE as the output and tick Use x64.
Staged payloads are good if your delivery method limits the amount of data you can send. However, they tend to have more indicators compared to stageless.
Given the choice, go stageless.
The use of 64-bit payloads on 64-bit Operating Systems is preferable to using 32-bit payloads on 64-bit Operating Systems.
Click Generate and save the file to C:\Payloads. Now execute that EXE and you should see a new Beacon appear.
Interacting with Beacon
To interact with a Beacon, simply right-click it and select Interact. This will open a command line interface where you can enter various commands. To get a list of available commands type help.
```
argue          Spoof arguments for matching processes
blockdlls      Block non-Microsoft DLLs in child processes
browserpivot   Setup a browser pivot session
cancel         Cancel a download that's in-progress
```
To get more detailed help for a command, type help <command>.
```
beacon> help inject
Use: inject [pid] <x86|x64> [listener]

Open the process and inject shellcode for the listener
```
Parameters wrapped in [ ] are mandatory, whilst those in < > are optional (although the default value won’t always be what you want).
By default, Beacon will check into the Team Server every 60 seconds. To lower this, we can use the sleep command.
```
beacon> sleep 5
[*] Tasked beacon to sleep for 5s
[+] host called home, sent: 16 bytes
```
Fast check-in times can increase the chance of the Beacon traffic being caught. You can also add a jitter to randomize the check-in time by a given percentage — for example, `sleep 60 20` sets a 60-second sleep with 20% jitter.
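To see how jitter spreads the check-in times out, here is a small sketch. It assumes jitter is applied as a random reduction of up to the given percentage of the base sleep time — verify against your framework's documentation, as implementations can differ:

```python
import random

def next_checkin_delay(sleep_seconds: float, jitter_percent: float) -> float:
    """Randomize the check-in interval by shortening the base sleep time
    by a random amount of up to jitter_percent percent (an assumption
    about how jitter is applied, for illustration only)."""
    reduction = random.uniform(0.0, jitter_percent / 100.0)
    return sleep_seconds * (1.0 - reduction)

# e.g. a 60-second sleep with 20% jitter yields delays between 48 and 60 seconds
```

The variation breaks up the fixed polling cadence, making the traffic harder to spot with simple beaconing-detection analytics that look for perfectly regular intervals.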
Some Beacon commands (such as sleep) don't provide output; instead, you will see a "host called home" message to let you know that Beacon has checked in and received the job. There are also some features of the UI (such as the File Browser) that cannot be accessed from this command-line interface. Instead, you must right-click on a Beacon and use the popup menu (e.g. Explore > File Browser).
We will meet in the next article. 👋