Safety and Security, Part II | Red Team Journal

Mark's post is so good and so to the point that I'll do something I never do: repost it in full. The bold emphasis is mine, pointing to the great parts.

Original post.

When things go quiet on RTJ, it usually means we’re working on something new. In this case, we’ve been rethinking what red teaming is and what it should be. The rethink was triggered in part by several discussions we’ve had in the past couple of years—discussions that tipped us to the disturbing fact that pentesting/red teaming as it exists today is (as much as we hate to say it) starting to turn inward and focus too heavily on the strictly technical aspects of the “hunt.” It’s time to look up and see the whole system, something that is in many ways much more difficult.
      To begin, we return to the accident safety world. Historically, safety professionals approached their problems with a linear, cause-and-effect mindset. In recent years, however, a subset of the safety community has finally gained some traction with the message that many system accidents are too complex to reduce to a linear series of dominoes or the first proximate flaw. Thinkers like Sidney Dekker, Erik Hollnagel, David Woods, and Nancy Leveson are saying new things regarding the safety of complex socio-technical systems, and we need to listen.
      The logic of traditional safety risk management goes something like this: find the flaws (component failure, human failure) that could lead to accidents and fix them before they can cause trouble. This is sometimes dubbed the “find and fix” mentality. Sound familiar? It’s essentially what we do on the security side with pentesting and red teaming: find the holes that adversaries could exploit so the customer can plug them (again, “find and fix”).
      For simple systems, it usually works. Back in the day, when safety professionals dealt solely with mechanical systems, it was often possible to work backwards from an accident to the proximate cause and make sure it didn’t happen again. With electromechanical systems it became a bit more challenging; with digital systems, more challenging still; and with the full socio-technical system (technology, people, organizations), it’s more challenging yet. It’s both telling and disturbing that the average red teamer’s approach to system complexity hasn’t even caught up to where safety analysts were in the 1960s!
      As red teaming professionals, we need to follow the thought leaders in the safety community and move past the “find and fix” mentality to address

  • the full hierarchy (both the “blunt end” and the “sharp end”),

  • environmental factors,

  • conflicting goals,

  • positive and negative variations in human performance,

  • variations in local awareness and knowledge,

  • mental models, and

  • the full set of rules and procedures.

      When we do this, the complexity of the full security challenge begins to emerge—a complexity that’s almost always too byzantine to capture in a diagram or list of possible attacks. Add to this the dynamism and adaptation that occurs constantly in every complex open system, and we should actually be more than just a bit intimidated. Grappling with this demands a different kind of person than most security shops (even big ones) employ and a set of skills and tools that even most CISOs don’t possess. It actually involves much more sophistication than typical attackers possess, which means that red teaming and pentesting should involve much more than “thinking like the bad guy.” In other words, the full red teaming function should always operate at a higher level of perspective and thought than the potential attacker. In fact, we probably need a new term for this broader function. Is it more than “red teaming” as it’s come to be known? Yes.
      To put things into perspective, consider how complex and ambiguous situations often remain even after a significant accident investigation, and you might get an inkling of how challenging it is to imagine the possibilities for hazard in complex systems before they occur. There are plenty of great examples in the safety world; most of the really informative ones involve book-length reports. We might summarize some examples in future posts, but to get a sense of the terrain, you really need to set aside several hours for reading.
      This is in part why our natural response is to simplify, to wrestle complexity into a cage, to attach tidy labels to complex things, to chain actions into an orderly narrative that usually stops once we find the first flaw. Again, it’s what traditional accident safety analysts did: find the first proximate cause and assign blame, just as red teaming has become a never-ending “hunt” for technical flaws. It’s also why the final word from the average red team or pentesting squad should actually be the point of departure for the really important questions. Red teaming is too small a box, and technical red teaming skills are too limited to address higher-level questions.
      Take a few minutes to review the major breach events of the past several years and ask yourself—big picture—how well do present-day security approaches work? If you’re honest—again, big picture—you have to admit that we’re getting pummeled. We’re still in the ring, but we’re bloodied and bruised with an infinite number of rounds ahead of us. Pentesting and traditional red teaming help, but in our opinion, a major rethink is overdue. More of the same is not good enough.
