
Red teams are a necessary evil – literally – in today’s cyber threat landscape. Motivations for engaging in offensive testing activities can vary from regulatory requirements to certification aspirations. Truly proactive and progressive security programs incorporate offensive operations almost immediately as security is built and defined.

Most organizations start with vulnerability scanning and then move into penetration testing (pentesting), taking the vulnerability scan one step further: from guessing that a vulnerability could be exploited to proving exactly how it can be. Red team programs are often, and incorrectly, treated as synonymous with pentesting, but red teaming is a very different function.

Pentesting seeks to find as much as possible with no definitive objective in mind, testing a wide array of possible attacks to confirm the success or failure of each exploit or post-exploit activity. Pentests generally do not exercise initial access vectors. Purple team operations are a natural progression between pentesting and red teaming, bringing those findings full circle and actively mitigating them in real time to increase proven, rather than presumed, resilience.

Red teams take that maturity model one step further by executing highly targeted operations using the full hacking lifecycle, from initial access to data exfiltration, to attack an organization’s people, processes and technology in a stealthy, APT-like manner. Proper red teams are the next step in proactive defense, following purple team operations, before escalating to true adversarial emulation operations.

“From the CISO’s perspective, incorporating red teaming into your cybersecurity program goes beyond checklists and broad-brush security assessments and facilitates targeted and sophisticated techniques to highlight real vulnerabilities, gaps and deficiencies for your most highly prioritized risks,” says Chris Hughes, CISO at Aquia. “With the modern threat landscape and compliance frameworks, checking the ‘pentesting’ box won’t cut it. You need to think and train the way your adversary fights and identify attack paths before they do.”

With that outlook in mind, here are some strategies and policies I have observed from an operator’s perspective that leaders can implement to support red team program success.

1. Stop limiting red team scope

Let your red team resources hack the way sophisticated adversaries hack. Know who doesn’t have a scope? Cozy Bear. Deep Panda. APT11. If it’s in scope for the enemy, it should be in scope for the red team. I once had a manager tell a room of other managers to “get comfortable with the idea that there will be a loss of productivity for certain users, for a small period of time, during a red team operation. If we’re vulnerable to that kind of activity, we need to know now, before someone without the constraints of your ‘preferences’ tries it.” To this day it is the most empowering thing a manager has said on my behalf. His stance empowered us to deliver the most impactful findings and, thus, value-driven remediations.

2. Cap red team required meetings

It is the red team’s job to emulate adversaries, not to be aware of what else is going on in the security department. They participate in stakeholder out-briefs and technical deep dives immediately following an operation, but they do not attend meetings as their primary function. It is much more effective to get a manager on the same page about the attack narrative, objectives, findings, and their ratings, and then have that person socialize results on behalf of the red team.

Red teamers should be treated as operators “behind the curtain,” with managers and leaders serving as the first line of liaison. A meeting the entire red team attends is very expensive for your company and their department.

Conversely, don’t make the red team completely inaccessible. In many cases, having rapport with other members of security and the wider organization actually eases their work. The more they are seen as people and employees with a vested interest in the company’s success like everyone else, the less resistance they will meet during their testing. Meetings centered on strategy, GRC, controls mapping, CI/CD, and other teams’ stand-ups do not require the red team.

3. Conduct separate stakeholder out-briefs from technical out-briefs

Saving time is one thing, but much of the information covered in the “weeds” of a technical deep dive will be outside the depth and scope of a stakeholder meeting. One audience is going to care about the narrative, metrics over time, persistent findings, and resulting risk mitigations or acceptance. The other is going to care about ports, services, syntax, methods, and purpose. Having them all in the same meeting to save time actually wastes time. It’s more effective and efficient to have red teams furnish and deliver two separate versions of their reports: one for executives and laypeople, the other for remediation and technically oriented teams.

4. Assign risk ratings and follow up with risk owners

This happens at the stakeholder level. A red teamer can assign a risk rating and guess at how that finding will be handled after the fact. Without insight into who owns the risk and who should be followed up with on the remediation, their expertise stops there. So often in executive out-briefs we’re asked, “What was the result of this finding? Where are we at with remediating this?” and red teams can’t answer. We execute in our lane and hand our findings over with no follow-up on whether they were remediated or even reviewed. You help the red team help stakeholders by giving them a corresponding point of contact for the risk associated with each finding.

5. Standardize assigning findings and risk ratings

It’s time to move beyond CVSS ratings. They are good for gauging the general difficulty and likelihood of a vulnerability being exploited in the wild, but they lack organizational context. If left to red teams, they will arbitrarily assign a low, medium, high, or critical rating. They often meet to discuss why a member of the team labeled a finding with a particular severity, and the responses vary from difficulty, to access required, to, “Don’t we have to think of this in a broader sense? This time it wasn’t that bad, but if it’s elsewhere it could be worse.”

The truth is that ratings assigned to findings are subjective, so it behooves you to insert as much objectivity into that equation as possible. This is one of the reasons I’m a big fan of two-part risk ratings: inherent and residual. The inherent risk rating takes technical and temporal factors into account, yielding weighted likelihood and impact ratings. Those are then multiplied together to get the inherent risk.

This is the closest thing to the unfiltered CVSS score we’re used to seeing, but with more metadata behind it. From the inherent risk score, which is usually the higher of the two, a control score is also assigned based on the unique network and mitigating factors of that organization. The control score multiplied by the inherent risk yields a weighted residual risk for that particular vulnerability, custom to your organization. This is much more valuable intel.
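The inherent-versus-residual calculation described above can be sketched in a few lines. This is a minimal illustration, not a standard formula: the 0–10 scales, the function names, and the control multiplier convention are my own assumptions for the example.

```python
# Illustrative inherent/residual risk calculation.
# Scales and names are assumptions for this sketch, not a standard.

def inherent_risk(likelihood: float, impact: float) -> float:
    """Combine weighted likelihood and impact (each 0-10) into an
    inherent risk score, normalized back to a 0-10 scale."""
    return likelihood * impact / 10


def residual_risk(inherent: float, control_score: float) -> float:
    """Discount inherent risk by a control-effectiveness multiplier
    (1.0 = no mitigating controls, 0.0 = fully mitigated)."""
    return inherent * control_score


# A finding that is likely and high-impact, but half-mitigated by
# this organization's compensating controls:
finding = inherent_risk(likelihood=8.0, impact=9.0)   # 7.2 inherent
custom = residual_risk(finding, control_score=0.5)    # 3.6 residual
```

The point of the two-step structure is that the inherent score is portable across organizations, while the residual score reflects your controls, which is why it is the number worth reporting to risk owners.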

6. Be aware of operations cadence

Red teamers aren’t pentesters, and the ops cadence of a red team is completely different from that of a pentest team. Pentesters have a standard set of vulnerabilities and misconfigurations they test for, trying to find “as much as possible” to give defenders a chance to secure the environment as well as they can. Pentest teams also seek to actively set off alerts so they can report on positive findings picked up by EDR and SIEM solutions.

From the tester’s side, pentest engagements have a two- to three-week test period and a week to write the report, then usually turn right around and kick off a new engagement the next week. From the customer’s side, you generally do one a year and have the following 12 months to remediate.

In contrast, red teams have clear and defined objectives that are narrow in scope. They do not perform everything that is possible. They perform only the actions necessary to achieve objectives because they recognize every action might, at any time, alert defensive teams to their presence and red teams do not want a race against the clock. Red teams also operate in a clandestine manner and need more time to research, prepare and test kill chains that realistically represent APT behavior.

Therefore, a red team operation’s cadence and lifecycle will be slower and longer because the scope is the entire organization. Keep that in mind when establishing the red team’s objectives and key results (OKRs) for the year. Not to mention, a lot of findings usually come out of a red team operation, and not all of them have a correlating TTP or direct mitigation. It’s going to take remediation teams time to work through the findings and mitigate as much as possible. Pace operations to avoid blue team finding fatigue; otherwise, depending on how far behind they are, the blue team may ask the red team to cancel or postpone additional operations for a quarter.

7. Track all metrics

Not every metric demonstrating the success of an offensive program will come from the red team. Red team metrics used to track the success of testing, and thus remediation, include mean dwell time: How long was the red team able to persist in the environment, performing discovery and pivoting, without being alerted on?

A metric from the blue side, in addition to newly created mitigations such as SIEM rules and EDR detections, is mean time to investigate: How long after the alert came in did it take to begin investigating and triage the severity of the incident?
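Both metrics reduce to averaging intervals between timestamps recorded per operation. The sketch below shows one way to compute them; the field names and sample dates are illustrative assumptions, not a reference schema.

```python
# Sketch of computing mean dwell time and mean time to investigate
# (MTTI) from per-operation timestamps. Field names and sample data
# are illustrative assumptions.
from datetime import datetime, timedelta

operations = [
    {
        "initial_access": datetime(2024, 3, 1, 9, 0),
        "first_alert": datetime(2024, 3, 5, 14, 0),          # blue team alerted
        "investigation_start": datetime(2024, 3, 5, 15, 30),  # triage began
    },
    {
        "initial_access": datetime(2024, 6, 10, 8, 0),
        "first_alert": datetime(2024, 6, 12, 10, 0),
        "investigation_start": datetime(2024, 6, 12, 10, 20),
    },
]


def mean_delta(pairs) -> timedelta:
    """Average the (start, end) intervals in an iterable of pairs."""
    pairs = list(pairs)
    total = sum((end - start for start, end in pairs), timedelta())
    return total / len(pairs)


# Dwell time: initial access until the first alert fired.
dwell = mean_delta((op["initial_access"], op["first_alert"]) for op in operations)

# MTTI: alert firing until investigation/triage actually began.
mtti = mean_delta((op["first_alert"], op["investigation_start"]) for op in operations)
```

Tracking these per operation, rather than as one-off anecdotes, is what lets you show the trend-over-time improvement stakeholders ask for in out-briefs.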

Other factors will come from cyber threat intelligence (CTI) and risk teams in the form of reduced residual risk scores on findings, improved knowledge of resilience against emulated threat actors, and the likelihood that playbooks seen in the wild will succeed. CTI teams will be able to home in on relevant threat actors by knowing definitively whose tactics can and cannot be defended against.

Then security awareness and resilience training can be tailored to those methods. Additionally, manual investigations around those methods will be more value-driven and not a game of guess-and-check on false positives or false negatives.

Finally, having security measures and processes confirmed in practice means the security department can drop off documentation, reports, and results to an auditor and never be asked another question. GRC can also prove due diligence and due care to regulatory investigators and cybersecurity insurance firms for the inevitable day when a zero-day does happen. When the red team’s activities are successful, the effect translates to business departments and functions.

8. Segregate red team roles and responsibilities

Optimized red team operations come from a continuous feedback loop involving CTI, red teams, detection engineers, and risk all working together to reduce attack surface in a value-driven context for the organization. These roles will not look the same across every org chart, but in my opinion, it is best to have the responsibilities segregated.

Many security orgs are small and will have the same person wearing multiple hats, but at least one full-time employee dedicated to each of these functions significantly increases the quality of operations. To that point, security teams also need to be segregated under a COO, CRO or CISO, whereas vulnerability management, remediation management, and IT operations should sit under a CIO or CTO. Having the reporting department (security) answer to the same head as the people they report on causes friction and makes their senior leader less of an advocate when both sides need to be placated.

9. Set realistic expectations

When red teams brief an operation plan, they will give you the relevant objectives and the rough approach they plan on taking to that end. It is usually general, like “via RCE” or “credential capture via MitM.” The main stakeholder concern is loss of productivity or denial of service (DoS), and while that is never the goal, the red team will not be listing out every step and method they plan on using.

In fact, during exploit development, we often must pivot from our original ideas while we debug payloads and ensure smooth TTP execution. Being realistic with the information you will get from your red team before, during, and after an operation is going to lessen frustrations and the impression of being unwilling on the red team’s part.

10. Give red teams off-network attack devices

As a best practice, red teams will burn and turn their own attack infrastructure, spinning up and testing anew whatever each operation’s attack path requires. On management’s part, giving them devices segregated from enterprise EDR, AV, and SIEM agents will enable them to test phishing campaigns, landing pages, and payloads, and to work out any bugs in advance so that valuable time isn’t wasted during the operational window.


All rights reserved Jenson Knight.