Overview

A summary of my presentation "Lessons From The Legion", from Friday October 19th 2018. I've been offered a couple more opportunities to present this, from which I'm hoping to generate more interesting and useful conversations, so this will undoubtedly evolve; your feedback is welcome. Information on DevSecCon can be found here: https://www.devseccon.com/

If you want to point someone at a very short summary, this tweet covers it:

"people trying to excel at self-taught technical skills are sub-optimal at strategic decisions required for a nebulous conflict, their emphasis should be on team work, and on the strategies of, and constraints on, their adversaries; they should seek inspiration elsewhere"

A less brief summary is below. A slide-by-slide summary is too dense, and makes me realise how many ideas I've pushed into the audience's heads in forty minutes.

The presentation was recorded and will be published by DevSecCon in future, I'm hoping they keep the fire alarm in.

Logical Progression of the talk

Introduction

I have a question - in cyber security, if we're all so smart, which we are, and we all work so hard, which we do, why is everything so awful?

To try to answer this, my presentation takes the form of an "investigation wall": a set of interconnected ideas and theories from which I try to figure out the solution to this mystery.

I start with John Kindervag's presentation "Winning the Cyberwar With Zero Trust", which explains the difference between a strategy ("The Big Idea") and the tactical and operational level solutions you use to achieve that big idea. The presentation can be found via BrightTalk here: https://www.brighttalk.com/webcast/10903/280059

So what "big idea" has emerged from the tactics we've chosen?

The three main areas I see DevSecOps covering are:

  • System Administrators
  • Developers
  • Security Operations

( yes, this is a different three to previous versions... but I think the issue is endemic )

The strategy in all three areas is based on being the most technically skilful practitioner you can be - make your systems as hard as possible, your code as secure as possible, and configure what you have to the best of your abilities. Arguably this strategy has come about because of how we train and practise for each area - all of which is based on self-motivated learning, and a passion for the job that is often described as "eat, sleep, breathe security". The emphasis is therefore on individual skill and knowledge rather than on wider context: on putting in dedicated time focused on a narrow range of knowledge.

I would also argue that the same is true of our processes and systems: it's all about making what we create as secure as possible, and then releasing it out into the world to see how it fares. If it doesn't survive, we try to make a new, better version; the original survives or dies depending on how well it was made.

Where has this choice got us? I cite various references that illustrate the poor state of cybersecurity, and the danger that poor cybersecurity poses to organisations in general and civilisation as a whole.

BreachLevelIndex.com is, well, here: https://breachlevelindex.com/

Rapid 7 on the number of CVEs is here: https://blog.rapid7.com/2018/04/30/cve-100k-by-the-numbers/

The Global Risks Report 2018 from the World Economic Forum can be obtained here: https://www.weforum.org/reports/the-global-risks-report-2018

( note, this is not the nuanced point of view it should be, I hope to spend more time looking at this - and I'm reminded of Michael Santarcangelo's thoughts on this from a couple of years ago, likening the impact of cybercrime to the impact of fraud )

This method of practising reminds me of golf. Excelling at golf is based on individual skill, which is reflected in how a player performs in the game - because success in the game is based almost solely on individual performances. Even in a team game of golf, with team-mates and against an opposing team, there is very little your team-mates or opponents can do to directly affect your standard of play. And the course itself is static too, apart from the vagaries of the weather.

There is nothing wrong with practising like a golfer if you're going to play golf; however, the practice of cyber security is nothing like the game of golf, so I think we need to look to a different game for a solution to our current predicament.

Using this kind of analogy, and cross-pollinating ideas between areas, is generally derided, but if you look hard enough there are examples where it works. In this version of the presentation I used the idea of TRIZ: abstracting problems and solutions in order to determine rapidly what kind of solution is required.

TRIZ on Wikipedia is here: https://en.wikipedia.org/wiki/TRIZ and the main British consultancy, as far as I can tell, is here: https://www.triz.co.uk/

From a very shallow reading of some management consultancy concepts, I think we're at a "Strategic Inflection Point" as an industry, where we've got the most out of our current way of thinking, and more and more effort results in smaller and smaller incremental gains. We need to jump to a different strategy to make the gains that we should from the resources we're putting in.

So, if we're practising for golf, but not playing golf, and that explains what we're doing wrong... what game are we playing?

I argue that our industry feels a lot more like American Football. It is a ridiculously complex and violent sport, with many specialisms, and very much a team game where your success or failure depends heavily on the quality of your team, your ability to work with them, and how you act against and react to the opposing teams. In addition it's the sport that is closest to actual conflict - and I think cyber security has a lot to learn from wargaming, the simulation of war, and from War Studies in general.

( as a side note, a French General, on watching the game around 1916, said something along the lines of "that isn't a sport, that is war" - if you know a good source for that quote do get in touch, I've been looking for it for ages )

Therefore we should look to learn lessons from a successful American Football team. American Football is the only sport where each team has essentially two squads on it - an Offense for when your team has possession of the ball, and a Defense for when your team does not.

I think that as defenders in cyber security (even the red teamers are looking to improve the performance of the blue team and the survivability of defenders) we should look to the best Defense. Possibly influenced by personal biases, but backed up by many sports facts I'll quote in the novella-length version of this description, I have chosen the Legion of Boom, the Seattle Seahawks defense from 2011 to 2017, as the example to follow.

Looking at the central tenets of the team, and the defensive philosophy of the Seattle Seahawks head coach, Pete Carroll ( who has approximately 40 years of experience and an exemplary record ), I pick some of the main lessons from the Seahawks' successful Defense:

First lesson - "shift left" your conflict

Because American Football is such a complex game it is necessary to practice complex play calls and formations in advance, and to ensure that each individual knows their responsibility, and everybody else's responsibility, on each play so that they can function as a team.

Because the teams are so large there are enough players for the second and third string players in each squad to form "scout teams". These teams imitate the playing style and formations of upcoming opponents so that both they, and the first string players, understand what is coming up in next week's game, and are less surprised by any of their opponents' individual styles during the game. So when they come to actually play the game... they've already been playing that game for the preceding week, and are better prepared, especially as Carroll and the Seahawks advocate particularly aggressive practices.

This links into the concept from wargaming of the Caffrey Triangle, which shows how a red team - in a red team exercise specifically designed to assist the blue team - should act depending on the objectives of the engagement. The Caffrey Triangle is mentioned here: https://paxsims.wordpress.com/2016/08/19/connections-2016-conference-report/ ; I've had it explained to me in person, and we all need to be talking about it a lot more, in both cyber security and wargaming. I argue that in military simulations or exercises the red team or red force is often in the bottom left hand corner of that triangle, just there to give the blue team something to shoot at. Penetration testers and similar threat simulations work almost solely at the top of the triangle, being the most effective attackers they can be regardless of genuine threats or limitations. I think that in cyber security the red force, whatever it is, should commonly operate in the right hand corner, emulating the TTPs of genuine adversaries in order to prepare the blue team for their real world opponents.

And this should happen as early as possible. Adam Shostack highlights leaving threat modelling too late as one of the many traps of threat modelling in his presentation at Brucon: https://www.youtube.com/watch?v=-2zvfevLnp4 . Similarly, earlier in DevSecCon, Stuart Winter-tear highlighted how we can, and need to, automate threat modelling: https://www.devseccon.com/london-2018/session/threat-modeling-speed-scale/ .

How do we describe this threat model, so that we can discuss and automate it? Look at MITRE's ATT&CK Framework, which is described well in this presentation from BSides Las Vegas 2018: https://www.youtube.com/watch?v=p7Hyd7d9k-c .
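To make that concrete, here is a minimal sketch of describing an adversary's behaviour as ATT&CK technique IDs, so a "scout team" style exercise can emulate them and the blue team can check its coverage. The adversary profile, the techniques chosen, and the detection list are purely illustrative, not from the talk.

```python
# Purely illustrative adversary profile and detection list - not from the talk.
adversary_profile = {
    "name": "Example Adversary",
    "techniques": {
        "T1566": "Phishing",
        "T1059": "Command and Scripting Interpreter",
        "T1021": "Remote Services",
        "T1041": "Exfiltration Over C2 Channel",
    },
}

# Technique IDs we believe we can currently detect (again, illustrative).
detections = {"T1566", "T1041"}

def coverage_gaps(profile, detected):
    """Return the techniques this adversary uses that we have no detection for."""
    return {
        tech_id: name
        for tech_id, name in profile["techniques"].items()
        if tech_id not in detected
    }

for tech_id, name in sorted(coverage_gaps(adversary_profile, detections).items()):
    print(f"No detection for {tech_id}: {name}")
```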

This issue also reminds me of "The Base of Sand Problem", the RAND report highlighting problems in military modelling/simulation/wargaming that, for me, resonate with issues we face; it can be found here: https://www.rand.org/pubs/notes/N3148.html . The report essentially says that the military modelling and analysis industry has made some crucial mistakes about what it focuses on, which leads to the ineffective use of its resources. Particularly relevant in this context is a footnote stating that military victories are based on the ratio of effective forces, not simply on who had the largest force.

It's all about understanding what your opponent will do in what situation and countering those options specifically, rather than trying to think of all attacks and prevent all of them.

As an example of the difference between trying to fix everything, and trying to fix only what our adversaries will exploit, I cite Jeremiah Grossman on the Kenna Security report, which highlights that only 2% of vulnerabilities are exploited: https://twitter.com/jeremiahg/status/996469856970027008 . I've got into interesting discussions on how true or untrue that figure may be, watch this space. See also his blog post at https://blog.jeremiahgrossman.com/2018/05/all-these-vulnerabilities-rarely-matter.html?m=1 .

Second lesson - eliminate the big play.

There isn't time to explain the Seahawks' use of "Cover-3 with a single high Free Safety", or their general approach of keeping the ball in front of the defenders so that the Defense always has another chance to prevent their opponents scoring, so instead I look at personnel choices.

Most NFL defenses, when choosing personnel, have emphasised their Defensive Line, the first line of defense against an opponent, who line up closest to the "enemy". Carroll has always specifically looked to the Defensive Backs, the last line of defense, most notably the Free Safety position, which is what he played in college.

This is reflected in the NIST Cyber Security Framework and its five Core Functions. I am old enough to remember when Identify and Protect were the only aspects seen as useful, but slowly we are learning that Detect, Respond, and Recover are at least as important in surviving an attack, rather than believing in the "Defender's Dilemma" - that if an attacker breaches us we have immediately lost.

I cite Adrian Sanabria's presentation "It's Time to Kill the Pentest" from the RSA Conference earlier this year: https://www.youtube.com/watch?v=bMkVjDx3cqQ ; beyond the provocative title, he has a great slide on how a hack is a series of steps, not a single event. This is like a "drive" in an NFL game ( https://www.sportingcharts.com/dictionary/nfl/drive.aspx ), where an opponent can gain yards, but your aim is to stop them scoring points.

Also, all too quickly, I run through Sounil Yu's Cyber Defense Matrix, grabbing slides from his presentation at the RSA Conference 2017: https://www.rsaconference.com/videos/solving-cybersecurity-in-the-next-five-years-systematizing-progress-for-the-short-term ; it shows how you can take the functions of the NIST Cyber Security Framework, and the assets that form your infrastructure, and map which products fit where. There's much more to this idea, and it's really worth your time watching that video.
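As a rough illustration of the matrix idea, and not something taken from Yu's slides, you can treat it as a simple coverage map and look for empty cells; the tool names below are placeholders, not recommendations.

```python
# NIST CSF functions as columns, asset classes as rows, tooling mapped into cells.
FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]
ASSET_CLASSES = ["Devices", "Applications", "Networks", "Data", "Users"]

# (asset class, function) -> tools believed to cover that cell (placeholder names)
coverage = {
    ("Devices", "Identify"): ["asset-inventory"],
    ("Devices", "Detect"): ["edr-agent"],
    ("Applications", "Identify"): ["dependency-scanner"],
    ("Applications", "Protect"): ["waf"],
    ("Data", "Recover"): ["backup-service"],
}

def gaps(coverage_map):
    """List the cells of the matrix with no tooling mapped to them."""
    return [
        (asset, function)
        for asset in ASSET_CLASSES
        for function in FUNCTIONS
        if not coverage_map.get((asset, function))
    ]

for asset, function in gaps(coverage):
    print(f"Gap: {function} for {asset}")
```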

Of note here, although Yu's work is at least a year old, is the gap within Detect, Respond, and Recover on the "Applications" row of that matrix. However, a couple of vendors who were at DevSecCon, SysDig and Contrast Security, would appear to have products in that space.

This ability to lose ground but not lose, this ability to recover, is important because we all work in "Cyber Resilience" now, where the emphasis is on recovering from a breach, not just on preventing it. NCSC has a good blog on this concept here: https://www.ncsc.gov.uk/blog-post/cyber-resilience-nothing-sneeze . It's also worth reading Phil Huggins' "Cyber Resilience" series on his Black Swan Security blog: http://blog.blackswansecurity.com/2016/02/cyber-resilience-part-one-introduction/ . I emphasise the "Pace of Decision Making" aspect.

This links to John Boyd's OODA loop. OODA loops are described well on Wikipedia: https://en.wikipedia.org/wiki/OODA_loop - please pay me to research these concepts.

Through a description of the OODA loop process - Observe your current situation and determine all the relevant factors, Orient yourself and your adversaries within that space, Decide on the next course of action, and then Act to execute that decision - Boyd argued that by going through this process faster than your opponent, by "getting inside their OODA loop", you can defeat your opponent through speed rather than sheer power. The problem is that, as defenders, our opponent always starts their OODA loop before we start ours, so how do we catch up?
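As a toy illustration of what "getting inside their OODA loop" means in terms of tempo - the cycle times below are invented, the point is the ratio, not the numbers:

```python
def decision_cycles(cycle_time_hours, window_hours):
    """How many complete Observe-Orient-Decide-Act cycles fit in the window."""
    return int(window_hours // cycle_time_hours)

attacker_cycle = 4.0   # hypothetical: attacker re-plans every four hours
defender_cycle = 24.0  # hypothetical: defender decisions wait for a daily meeting
window = 72.0          # a three-day engagement

print("attacker cycles:", decision_cycles(attacker_cycle, window))  # 18
print("defender cycles:", decision_cycles(defender_cycle, window))  # 3
# Whoever completes more cycles keeps acting on a picture the other side
# hasn't caught up with yet - that is "getting inside their OODA loop".
```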

One approach, which I advocate but I need to spend more time on, was put forward by Paul Schwarzenberger during his presentation the previous day: https://www.devseccon.com/london-2018/session/journey-continuous-cloud-compliance/.
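This is not Paul's implementation, just a minimal sketch of the general idea of continuous compliance - a small check that runs on a schedule and flags drift - here using boto3 to find S3 buckets whose ACLs grant access to all users (assumes AWS credentials are configured):

```python
import boto3

PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_buckets():
    """Return the names of buckets whose ACL includes a grant to AllUsers."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") == PUBLIC_GRANTEE:
                flagged.append(bucket["Name"])
                break
    return flagged

if __name__ == "__main__":
    for name in public_buckets():
        print(f"Public ACL found on bucket: {name}")
```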

Final lesson - Out hit your opponent

I reference "The Base of Sand Problem" again ( https://www.rand.org/pubs/notes/N3148.html ), because it states that the first order determinants of victory in conflict are processes, tactics, and strategy - these are harder to define and measure, but I argue we should focus on our own way of thinking, our opponents' way of thinking, and crucially how we can affect our opponents' processes, tactics, and strategy. As I say... the problem is that, as defenders, the opponent always starts their OODA loop before we start ours, so how do we catch up?

American Football is a physical game, a collision sport, and there are psychological as well as other gains to be made by simply hitting your opponent as hard as you can.

This also tallies with the previous aim, to eliminate the big play, as it physically puts the defenders in an excellent position to tackle or otherwise collide with their opponents - but I don't have time to go into this level of detail on the game. For this I use clips of Richard Sherman, Earl Thomas, and mainly Kam "Bam Bam" Chancellor executing the "Shoulder Punch", a Seahawks tackling technique which is exactly as it sounds.

The Seahawks tackling video summarising their techniques is shown here: https://www.youtube.com/watch?v=6Pb_B0c19xA; for Chancellor himself, I think this video sums up what he provided in the narrow focus I use, you may recognise part of it: https://www.youtube.com/watch?v=qgh8HmKVja8

The aim here is to inflict pain on your opponent, and to reduce the speed of their OODA loop. I've learnt to state specifically that I'm not advocating any kind of "strikeback" methodology, but rather showing that on the blue team we've forgotten we're facing an opponent, and that we can affect that opponent. This links to the Pyramid of Pain I refer to, which is David Bianco's, taken from http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html ; it illustrates that the more complex aspects of their tradecraft are of more value to your adversaries, so when you understand those aspects and can act against them, you cause your adversaries the greatest amount of pain.
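As a tiny sketch of how I think about this: the ordering below is Bianco's, everything else is illustration. Prefer detections that act on the indicator types nearer the top of the pyramid, because those are the ones the adversary finds hardest and most painful to change.

```python
# The ordering is from Bianco's Pyramid of Pain; the rest is illustrative.
PYRAMID = [
    "hash values",             # bottom: trivial for the adversary to change
    "ip addresses",
    "domain names",
    "network/host artifacts",
    "tools",
    "ttps",                    # top: painful for the adversary to change
]

def pain(indicator_type):
    """Rank an indicator type by how much pain acting on it causes the adversary."""
    return PYRAMID.index(indicator_type.lower())

# Prefer the detections that impose the most pain on the adversary.
detections = ["ip addresses", "ttps", "hash values"]
for indicator in sorted(detections, key=pain, reverse=True):
    print(indicator)
```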

To me the explanation of this situation, of why we're not doing this, comes from Bartle's taxonomy of player types; there's a good summary on this page: https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types . The "killers", the people who like outwitting, defeating, and demoralising a human opponent, all join the red team when they enter cyber security, which means that aggressive and effective approach is lost from blue team strategies.

The previous day at the conference Yan Cui highlighted that people are often the weakest link in the security chain, yet we ignore the humans who are our adversaries and focus on technical defenses and techniques.

Haroon Meer has been arguing for more hackers to join the blue team for several years, I show a clip from his Null Con keynote, which can be watched here: https://www.youtube.com/watch?v=2F3wWWeaNaM. Do persevere with the flickering screen.

To inflict that required pain, I think deception is key; I'm reminded of Clifford Stoll's book "The Cuckoo's Egg", and how incident response started with deception. Paul Midian's presentation can be found here: https://www.youtube.com/watch?v=KvksyvF6MN4 .

From here there's a "brain dump" of references... in his keynote from Black Hat Asia in 2017 ( https://www.youtube.com/watch?v=834S-rqEmFA ) Saumil Shah states, as one of his Seven Axioms of Security, that we need a creative defense: don't give the adversary something they expect.

Similarly, in his presentation from the day before, Petko Petkov covered Honey Tokens and Dark Nets: https://www.devseccon.com/london-2018/session/open-dev-sec-ops/ . Why not use your control of your network to set traps for attackers and improve your position? Also, if your adversaries suspect such traps are in place, it could, and should, slow them down.
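A minimal, entirely hypothetical honey token sketch, to show how cheap the idea can be: a credential that nothing legitimate should ever use, seeded somewhere an intruder might harvest it, so any use of it is high-fidelity evidence of compromise. The names and the alerting below are placeholders.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Seeded into a config file, wiki page, or environment an attacker might harvest.
CANARY_CREDENTIALS = {("svc-backup-legacy", "Winter2018!")}

def check_login(username, password):
    """Called from the normal authentication path; alerts if a canary is used."""
    if (username, password) in CANARY_CREDENTIALS:
        logging.critical("Honey token used: %s - treat as an active intrusion", username)
        # In practice: page the on-call responder, capture source details, preserve logs.
        return False
    # ... fall through to the real authentication logic here ...
    return False

check_login("svc-backup-legacy", "Winter2018!")
```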

In another talk from the day before, Matt Pendlebury highlighted the surprising demise of attack-aware applications: https://www.devseccon.com/london-2018/session/whatever-happened-attack-aware-applications/ . Again, your application has high-fidelity information on whether it's being attacked, and is in the best position to respond appropriately.
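In the spirit of OWASP AppSensor-style attack-aware applications, here is a rough sketch of an application noticing input that no legitimate client would ever send and responding itself; the detection rules and threshold are deliberately simplistic placeholders.

```python
SUSPICIOUS_FRAGMENTS = ["' OR 1=1", "<script>", "../../"]

def inspect_request(field_name, value, session):
    """Record a detection point against the session and act once a threshold is hit."""
    if any(fragment.lower() in value.lower() for fragment in SUSPICIOUS_FRAGMENTS):
        session["detection_points"] = session.get("detection_points", 0) + 1
        if session["detection_points"] >= 3:
            session["locked"] = True  # the app, not the perimeter, decides to respond

session = {}
for attempt in ["alice", "' OR 1=1 --", "<script>alert(1)</script>", "../../etc/passwd"]:
    inspect_request("username", attempt, session)
print(session)  # {'detection_points': 3, 'locked': True}
```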

I'm reminded of a presentation by Alex Davies at BSides London earlier this year, showing how we should work together, and how being able to share information efficiently and quickly will benefit us all. It can be found here: https://www.youtube.com/watch?v=yfEiuJFMisY . This increases the pain imposed on your adversary, as any other campaigns they are running against any other targets will be similarly affected.

The aim is to turn the Defender's Dilemma into the Intruder's Dilemma, which is nicely summarised in a presentation from BSides Munich https://www.youtube.com/watch?v=PQgsEtRcfAA .

There are many more ideas in the presentation "Gaslighting with Honeypits and Mirages" from Kate Pearce, but only the slides are available online http://www.secvalve.com/images/Kate_Pearce_honeypits_ACSC2017.pdf ; I hope she has a chance to present it in future where we can all see the recording.

The whole point is to slow the adversary down: to make them so unsure of their environment, and of whether they're being monitored, that they have to go through more and more checks to make sure they're not burning valuable resources unnecessarily, which puts their current campaign and all similar campaigns at risk. The emphasis of Change Control was always to ensure that an infrastructure change would not damage the company; force your opponents to move with that same caution because they are so unsure of the environment they're in.

A couple of other quick ideas... taken from Kelly Shortridge's presentation at Countermeasure 2017 ( which is, impressively, a blog and a slide deck and a video: https://medium.com/@kshortridge/the-red-pill-of-resilience-in-infosec-65f2c5d5e863 ): if your environment is resilient, and you can let something like Netflix's Chaos Monkey loose on it, then an adversary looking to maintain persistence on your network has to be as resilient with their C2 infrastructure as you are.
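As a toy sketch of that idea - not Netflix's implementation, and the instance list and terminate call are stand-ins - routinely killing a random instance forces both your own services and any intruder's persistence to tolerate loss:

```python
import random

def terminate(instance_id):
    print(f"terminating {instance_id}")  # stand-in for a real cloud API call

def chaos_round(instances, probability=0.25):
    """With some probability, kill one randomly chosen instance this round."""
    if instances and random.random() < probability:
        terminate(random.choice(instances))

chaos_round(["web-01", "web-02", "worker-03"])
```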

And lastly - and sometimes I'm tempted just to forego my presentation and play this one instead - Sounil Yu takes us through the last five decades of cyber security, matches them up with the NIST Cyber Security Framework, and shows how advances such as DevSecOps are the solution. Resilience fixes the problems of cyber security.

This solution is not a product you can buy, but it's a thing you can do. Otherwise we are doomed to keep being golfers trying to play a much different game.

END

Questions on supporting evidence are welcome by email or in the comments or even on Twitter, and overall if you've any questions please do get in touch.

And while I realise it's not the most useful of documents, a PDF of the presentation is here - especially as I find LibreOffice's conversion process "underwhelming"... but that might be my lack of knowledge. Lessons From The Legion - DevSecCon 2018