Interdisciplinary

According to DoD, a Red Team is: "An independent, focused threat-based effort by an interdisciplinary, simulated adversary to expose and exploit vulnerabilities to improve IS security posture."

I want to call attention to the word interdisciplinary.

A Red Team assessment is an authorized, adversary-based assessment for defensive purposes, performed by an interdisciplinary team of professionals. It may include:

  • Collecting open source intelligence (OSINT)
  • Performing reconnaissance or stakeout operations in both the physical and digital realms
  • Footprinting systems, networks, and services
  • Footprinting and profiling people, their behavior, and their online presence
  • Footprinting the target's service providers and external vendors
  • Developing attack vectors
  • Developing exploit payloads to gain entry and escalate privileges
  • Mounting social engineering attacks
  • Developing backdoors, manipulating audit logs, sniffing networks, and generally exploiting configuration errors

At the end, the Red Team will provide an extensive report detailing the problem areas to be addressed, propose solutions to those issues, and work together with the defenders to train them and make them more resilient.

The key, though, remains in that word: interdisciplinary.

At the end of the day, a good Red Team is there to assume the role of an expert attacker to challenge assumptions, look for unexpected alternatives and find vulnerabilities in new ideas, policies, systems, people, and the intersection of all of that.

The more varied and interdisciplinary the team, the better it will achieve its objective.

So Secure... | Red Shadow

It's 9pm. Your phone rings. The number is from your company. You wonder who's calling you so late. You take the call, and a voice says: "We have been breached. We discovered a large dump of our customers' private data online. And it's recent! Like, month-old data!"

What? How? You have detection and monitoring and AI? How?

From a recently discovered blog; keep an eye on that website.

Why Do We Red Team?

Security is hard. The security world is full of things that are hard to control. Attacks can occur at any time and place, usually in places not of our choosing, and at the worst possible time. These attacks usually involve adversaries of unknown size and capabilities, making it harder to have a fixed, solid plan to deal with them. During an active attack, these adversaries can and will pivot from their initial point of entry or discovery, and usually have more than one point of persistence.

Security is hard.

Though there are things that fall under our control, such as having multiple teams monitoring and (hopefully) engaging these attackers, the reality is that unless you have been put through the wringer of an active incident or a breach, you don't know what will work and what will fall flat on its ass.

Yes, the adversary usually has the upper hand.

How, then, do we solve this problem? We Red Team it. We inject stress, we do the unexpected, we bring the adversary to you.

Red Teaming is the simulation and emulation of your adversaries, both in their tactics and way of thinking.

By performing Red Teaming exercises, you can begin to stress-test your program, your procedures, your standards. From policies to the security teams, a good Red Team can bring stress inoculation to your organization. But this is not all. Red Teaming engagements will certainly help, but you need to go deeper and change your mindset and culture. Change how you see and approach security, and how you respond to problems. You have to begin to think like the adversary you are trying to defend against. They don't play by any rules, and they don't follow your procedures.

Only when you can apply the adversarial mindset to everything will you be able to go beyond the known and into the realm of the "what if". By applying the bad-guy mindset to policies, plans, and the team's SOPs (standard operating procedures), and by educating your people, you can build resiliency, be proactive (and not only reactive), and put in place plans that can adapt to different situations and attackers. You can be both proactive and reactive, giving yourself the best chance to win.

We can help. Start with your organization's top leaders: let us have a two-hour conversation with you, and let us set you on the path toward a more robust way of doing security.

Let's start with that conversation, to make you and your company safer.

note: originally posted on ACG.

Keeping Engagement Data Secure

One thing I think is crucial during an engagement is keeping the information about your customer or target, and the information you extract from them, secure. You need to keep both their privacy and their security tight. In the case of a customer, the data you extract belongs to them, and it may contain highly confidential information. It is extremely important to handle this information as securely as possible.

Project Name and Customer Name

One thing I like to do is give each customer a codename. This allows me to talk about the customer to other team members in a semi-open location (an office, or on the phone) without disclosing who the customer is.
This is also good if you are sitting with another customer and a call comes in: you can talk about certain things while referring to the customer only by their codename. This way you keep each customer's privacy and OPSEC. Unless specifically allowed to use a customer as a reference, you should never mention customer names.

The same can be applied to projects within a given customer. Since you may have yearly projects, or even different projects with the same customer, having codewords for projects will help you keep the data organized. It will also help compartmentalize this data. For example, a project within a customer may require your team members to have a security clearance. Those who have no clearance, and are therefore not part of the project, shouldn't have access to it, including the client name and project name. So sometimes, within the team, you can benefit from having a codeword for projects.
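As a minimal sketch of the codename practice (the wordlist and file layout here are my own assumptions, not a prescribed tool), assignment can be as simple as picking an unused word from a neutral list and recording the mapping. The mapping file itself links codenames back to real names, so it must live in an encrypted, access-controlled location, or it defeats the whole point:

```python
import json
import secrets
from pathlib import Path

# Hypothetical wordlist; in practice use a longer list of neutral
# words that reveal nothing about the customer or the project.
WORDLIST = ["COBALT", "JUNIPER", "MARMOT", "QUARTZ", "SABLE", "TUNDRA"]

def assign_codename(real_name: str, mapping_file: Path) -> str:
    """Assign a random, unused codename and record the mapping.

    Returns the existing codename if one was already assigned,
    so the same customer always maps to the same codeword.
    """
    mapping = {}
    if mapping_file.exists():
        mapping = json.loads(mapping_file.read_text())
    if real_name in mapping:
        return mapping[real_name]  # already assigned
    available = [w for w in WORDLIST if w not in mapping.values()]
    if not available:
        raise RuntimeError("wordlist exhausted; extend WORDLIST")
    codename = secrets.choice(available)
    mapping[real_name] = codename
    mapping_file.write_text(json.dumps(mapping))
    return codename
```

Day to day, you then refer to "Project JUNIPER" in the open and consult the protected mapping only when you genuinely need the real name.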

Both customer and project name compartmentalization are part of OPSEC, and you should decide what is applied and how.

Project Data

Project data includes scan results, OSINT dumps, captured email addresses, credentials, and exfiltrated data, among other things. Anything that is collected from or about the customer or target should be considered sensitive data.
Efforts should be made to keep that information secure. Personally, I use a combination of things:

  • Per engagement external USB backup drive
  • Per engagement USB thumb drive
  • Per engagement completely wiped and re-installed laptop

I store all the data about and from the customer or target encrypted on the backup drive. I might dump all the data at the end of the day, or I might copy it as I find it, but all data ultimately goes there.
If I need to use a USB thumb drive, I use only the one assigned to the project (as much as possible; exceptions will occur). Again, any data copied to it is copied to the backup drive at the end of the day.

At the end of the engagement, and after the report is done and briefed, I usually ask the customer whether they want their data back, or would rather I keep it or destroy it. Since it's all stored in one place, it's easy to destroy or to store in a safe location. And if the customer chooses to get their data back, as is their right, it's easy to transfer it to them.

In cases where data comes from a target, having it all sorted and encrypted on one drive allows for better storage and easier transfer to law enforcement or other organizations.
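One way to keep per-engagement data both sorted and verifiable, before it is encrypted onto the backup drive or handed over, is a manifest of file hashes. This is a sketch of that idea, not part of the workflow described above; the directory layout and function names are my own assumptions:

```python
import hashlib
from pathlib import Path

def build_manifest(project_dir: Path) -> dict:
    """Walk a per-engagement directory and record a SHA-256 hash
    for every file. The manifest lets you confirm nothing was
    altered after copying data to the backup drive, or when
    handing it over to the customer or law enforcement."""
    manifest = {}
    for path in sorted(project_dir.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(project_dir))] = digest
    return manifest

def verify_manifest(project_dir: Path, manifest: dict) -> list:
    """Return the names of files whose hash no longer matches."""
    current = build_manifest(project_dir)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

The manifest itself carries no secrets beyond file names, but it should still travel with the data, and the whole directory should be encrypted with whatever you already trust for the backup drive (full-disk encryption, for example).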

End of Engagement

At the end of the engagement, it is important to wipe the laptop clean and reinstall the operating system and software, so you are ready for the next engagement.

Question from a Reader: Building a Red Team

Question:

How would you build a red team? What positions would you create?

Dan and I talked about this on Episode 3 of the Red Team Podcast, but maybe this question warrants going a little deeper.

The Red Team

Usually a good Red Team, as we think of it, is composed of two distinct sub teams: the Operational Team and the Support Team.

The Operational Team is usually forward deployed: performing physical reconnaissance or open source intelligence, actively trying to get into things, or working the social engineering angle on the phone. They are the people who learn the target, research the possible adversaries, and help identify the vulnerabilities and define the plan of action.
The Support Team, on the other hand, usually stays back at the office: catching shells as they come back, monitoring radio, providing access and intelligence to the Operational Team, and coordinating with the customer if needed.

One thing to note is that the Team Leader moves between the two sub teams, however, most often - in our case at least - he or she is on the Operational Team.

The Operational Team

As we mentioned, the Operational Team is in charge of recon, identifying the weaknesses, and executing the plan. Members of this sub team take different roles based on their strengths. Though the team composition might vary with each engagement, it is a good idea to cross-train each person with another, thus building redundancy.

Usually the Operational Team members include:

  • Physical security expert
  • Digital security expert
  • Surveillance and recon expert
  • OSINT expert
  • Security generalist (someone who can fill either position)
  • Team Leader

The Support Team

This sub team takes care of all the needs of the team while things are happening. They provide an extra set of eyes when needed, they perform the initial recon once a foothold on the network is gained, they execute further exploits and gain persistence on other systems, and they identify more targets. Generally speaking, they are in charge of connecting the dots, and of the Find, Fix, and Analyze phases of F3EAD.

Usually the members are:

  • Digital security expert
  • Exploitation and code writing expert
  • System and networks expert
  • Physical security countermeasure expert
  • Main planner

Again, in both cases individual team members have to cross-train in multiple areas of responsibility, covering for each other and often rotating between those two sub teams.

The Red Team Manifesto | Reciprocal Strategies

A great addition to the Red Teaming world by Mark Mateski at Reciprocal Strategies.

I’m a red teamer:

  • I ask questions even when the answer seems obvious.
  • I speak the truth as I understand it.
  • I protect my clients from their adversaries and from themselves.

Go read the entire post. It blends nicely with our own Rules of Red Teaming:

  • 1: The purpose of a Red Team is to become the adversary, to be the worst case scenario.
  • 2: People lacking imagination, skepticism, and a perverse sense of humor should not work as Red Teamers.
  • 3: Red Teaming is mostly about paying attention.
  • 4: Understand the thing you are Red Teaming. If you don't, the results will be poor. Spend time learning.
  • 5: Don't play by the rules. Make your own and adapt.
  • 6: If you’re happy with your plan, you are not doing it right.
  • 7: The efficacy of security is determined more by what is done wrong than by what is done right.
  • 7a: Build on this. The bad guys typically attack deliberately and intelligently, not randomly. Mimic that.
  • 8: A Red Team is most vulnerable to detection and disruption just prior to an attack. Don't make mistakes.
  • 9: If you're not failing when you're training, you're not learning anything.
  • 10: There are an unlimited number of security vulnerabilities for a given system, program, or plan, most of which will never be discovered. Tap into that.
  • 11: When in doubt, Red Team it.
  • 12: We are never prepared for what we expect.
  • 12a: During a stressful moment, take a step back and look at the whole system. Analyze whether this is real stress or a deception by the defenders.
  • 12b: Act, don't react. Plan 2-3 steps ahead.
  • 13: The solution is in the problem. “When in doubt, develop the situation.”
  • 14: The more sophisticated the technology, the more vulnerable it is to primitive attacks. People often overlook the obvious.
  • 14a: Most organizations will ignore or seriously underestimate the threat from insiders. That's your in.
  • 15: Make it asymmetrical. Advantage-stacking is your friend.
  • 16: Remember PACE: Primary, Alternate, Contingency, and Emergency. Always have a PACE for everything.
  • 17: Use ACTE: Assess the situation; Create a simple plan; Take action and Evaluate your progress.
  • 18: If there’s a question about if it’s necessary, remove it. KISS.
  • 18a: Stay small. Stay light.
  • 19: Don’t become predictable.
  • 20: Prioritize and execute.

Quote of the day

"It often takes a crisis for red teaming to be considered and building an ark when it has been raining for 39 days won’t protect you against the flood. For example, the FAA created their red team in response to the bombing of Pan Am 103 over Lockerbie in 1988. For the next 10 years, this group conducted undercover simulated threats to enhance aviation security systems. Then complacency crept in. Red team warnings were ignored in the late ‘90s and early 2000s and were ultimately considered a contributing factor to 9/11. This in turn gave rise to red teaming programmes, including the CIA and NYPD, in the fight against terrorism. Failure sparks change, and sport is no different."

--Best Laid Plans of Mice and Men

Best-Laid Plans of Mice and Men | Leaders in Sport

How red teaming can transform your stumbling blocks into stepping stones.

In an exclusive feature for Performance, Potts reflects on his tenure and delves into Scottish Rugby’s use of red teaming – a common training practice in the military, intelligence, aviation and politics – to explain why it may prove a valuable tool for others in the world of elite sport.

UPDATES on the tshirts

Since we didn't meet the minimum, the orders will not be fulfilled. Maybe next time.