
Assume Breach: Ethical Hacking Tale

Key Points

  • The speaker emphasizes using “war stories” – real‑world anecdotes about security failures – as cautionary lessons for organizations.
  • Patrick Fussell, IBM X‑Force’s Global Head of Adversarial Simulation, explains that ethical hacking is performed **with permission** to improve security, not to exploit vulnerabilities for personal gain.
  • Their testing follows an “assume breach” mindset inspired by zero‑trust principles, designing defenses as if an attacker is already inside the network.
  • In a showcase exercise, the team simulated an insider attack by having a trusted employee download a payload from a public software store, mirroring how many real breaches begin with internal actors.
  • This adversarial simulation acts as a “sparring partner” for the blue team, exposing weaknesses before malicious actors can exploit them.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=xHCCIc9n0xE](https://www.youtube.com/watch?v=xHCCIc9n0xE)
**Duration:** 00:15:01

## Sections

- [00:00:00](https://www.youtube.com/watch?v=xHCCIc9n0xE&t=0s) **Ethical Hacking War Stories** - Jeff introduces IBM's ethical hacker Patrick Fussell, emphasizing responsible, permission-based hacking and the value of cautionary war-story anecdotes to illustrate real-world security exercises.
- [00:04:16](https://www.youtube.com/watch?v=xHCCIc9n0xE&t=256s) **Remote C2 Implant Overview** - The speaker explains how a malicious command-and-control implant on a compromised machine phones home to a server, establishes a foothold, and evades defenses such as EDR and antivirus.
- [00:07:46](https://www.youtube.com/watch?v=xHCCIc9n0xE&t=466s) **SQL Pivot to SCCM Credential Dumping** - The speaker explains how attackers move laterally from a compromised SQL server, dump SCCM credentials, and use them to obtain domain administrator rights, illustrating the "land and expand" strategy.
- [00:11:16](https://www.youtube.com/watch?v=xHCCIc9n0xE&t=676s) **Prioritizing Fundamentals Over Fancy Tech** - The speaker stresses that organizations should focus on basic identity and access management practices, like using credential vaults, dynamic passwords, and avoiding over-privileged accounts, rather than chasing the latest security tools or zero-day exploits.

## Transcript
0:00 I once heard that the best way to get your company to invest in fire extinguishers is to 0:05 burn down the building across the street. For the lawyers that might be listening, let me be clear: I 0:10 am not advising anyone to burn down the building across the street. And by the way, who let you in 0:16 here anyway? Okay, so put away your matches, because we won't be lighting anything up. But what's the 0:22 next best thing? Well, it's the judicious use of war stories. These are anecdotes and real-world 0:29 stories that serve as cautionary tales and anti-patterns of what not to do. With 0:36 me is Patrick Fussell, Global Head of Adversarial Simulation for IBM's X-Force team. In other words, 0:42 he's an ethical hacker. Welcome, Patrick. Thanks, Jeff. We did some previous videos where we talked 0:48 about what an ethical hacker is, what the job entails, and how to pursue a career in this space. Now, 0:54 let's take a look at a real-world example from an ethical hacking exercise. But before we do, 1:00 we should emphasize the word ethical. It means just that: you do this for the right reasons, 1:07 which are to improve security, not to take advantage of vulnerabilities that you find out about. And most 1:13 importantly, you do it with permission. If you don't have permission, it's just hacking, and you 1:19 can go to jail for that. But if you do it ethically, as Patrick's team does, you can actually 1:24 get paid and not have to find out how you look in an orange jumpsuit. That's a win-win scenario. Okay, Patrick, 1:30 let's take a look at one of the real-world examples that you and your team went 1:34 through. So, take us through it. Sure. So one of the things we're really trying to achieve with this 1:39 type of testing is representing the real world. We wanna be a sparring partner for the blue team. 1:44 And in particular, this particular engagement that I want to talk about today, we start this testing 1:49 from what we call assume breach. So, inside of the network.
Yeah, I like that idea of assume breach. We 1:56 get this term, I think, from the area of zero trust, 2:02 where we're looking and assuming that the bad guy is already in your environment, and then you're designing the 2:07 defenses as if the guy has already penetrated, which is a wholly different paradigm than just 2:13 assuming the bad guy's on the outside trying to get in. That's exactly right. And we want this to 2:18 be as representative of real threat actors as possible. So, in this particular scenario, we did 2:24 our assume breach by having a trusted insider essentially run our payload for us. And how they 2:30 got to that payload was we hosted it somewhere in a public software store. So somewhere, 2:36 hypothetically, anybody could get access to it. But we designed it so this particular person could 2:41 download our payload and run it, which is what kicks off our initial access to the environment and 2:46 lets us begin our testing. Okay, so you've got an accomplice on the inside. And that's actually not 2:52 so unrealistic, because we know that lots of attacks occur from the inside. So, it may seem a 2:57 little bit like you're cheating, but it's actually not, 'cause that's actually a real-world 3:02 example. That's 100% correct. I think most organizations at this point have realized it's not a matter of if 3:08 you get breached, but when. And you design all of your security with the idea that a breach is gonna 3:13 happen: let's make sure that when that breach happens, we're well protected. Yeah. And that 3:17 insider has the advantage of superior knowledge and superior access, as we're going to see, I think, 3:23 as we go through this. But tell me, how did this guy get this bad software? Where did this 3:28 malware come from in the first place? Yeah. There's lots of ways you might go about 3:33 this. And again, we're trying to represent how this could happen in the real world.
So we look at 3:37 real stories of breaches and what are the ways we can effect this in a way that makes sense. In this 3:41 case we hosted it in the software store. The insider downloads it for us and runs it. And that 3:48 essentially kicks off what we call an implant. And an implant, we can think of it as just a piece 3:53 of software that's part of a C2 framework. And that implant, when it's run, calls out to a server 3:59 on the internet and does communication at its most basic level. Okay. So basically, the person 4:05 that gave them this and enabled them was you. That's right. So there we are. That's a very close 4:11 likeness that I think I've captured there. And you said C2. So that means command and control. 4:16 This is basically software that allows you to remotely command and control other parts of 4:22 their environment. That's right. So you can think of this as a piece of software. It's running on a 4:26 system. It calls out over the internet to a server that we control and allows us to interact 4:32 directly with that, what you might call a victim box. So the box that's been compromised, our 4:36 initial access point in this case. Okay. So this guy is going to implant the software. Then on some 4:42 other system, take the bait that you've given, and he's gonna put it on there. And now he has some 4:49 sort of a foothold established. And what are we gonna do at this point? So this connection, 4:55 this box that's calling out to us, it's calling out to our C2 servers, which exist 5:02 somewhere on the internet. For the sake of simplicity, the C2 framework we're using here 5:06 is called Loki C2, which is actually written by one of the folks on my team named Bobby Cooke. His 5:12 hacker handle, you might know him as Boku. And the reason that I say that is it also 5:18 has to evade all the defenses inside of the network. So, think things like EDR and antivirus.
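The periodic "phone home" behavior described here is exactly what network defenders hunt for: implant check-ins tend to be far more regular than human-driven traffic, which is one reason C2 frameworks add jitter and work to blend in. A toy sketch (not from the video; the threshold and sample data are invented for illustration) of flagging suspiciously periodic outbound connections:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    """Flag a host whose outbound connections are suspiciously periodic.

    timestamps: connection times in seconds, sorted ascending.
    Returns True when the inter-arrival jitter (std dev / mean interval)
    is below max_jitter_ratio, i.e. the traffic looks machine-generated.
    """
    if len(timestamps) < 4:
        return False  # too few samples to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg <= 0:
        return False
    return pstdev(intervals) / avg < max_jitter_ratio

# An implant checking in roughly every 60 seconds:
implant = [0, 60, 120.4, 179.8, 240.1, 300.2]
# A person browsing: irregular gaps.
human = [0, 7, 95, 110, 400, 405]
print(looks_like_beacon(implant))  # True
print(looks_like_beacon(human))    # False
```

Real detection pipelines are far more sophisticated, but the underlying idea is the same: question traffic that is too regular to be human.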
5:23 We can't get caught, right? Yeah. So you found a way to get around the EDR, which is a good trick. 5:29 And so here's this system that now has been 5:35 compromised, making a phone home. And it's gonna phone home and listen to the instructions that 5:41 you give it, and you're gonna supply those instructions. And then what happens? So now we're 5:46 in what you might think of as the initial access and reconnaissance phase. We want to learn as much as 5:50 we can. It might surprise some people to find out that a lot of hacking is actually a lot like 5:56 detective work. So we started in one place where we know nothing, and we wanna learn as much as we 6:00 can about the system, the user, the organization. That might involve looking at things like file 6:06 shares or centralized data stores. So, think of things like SharePoint. There's a lot you can 6:11 learn by looking at an organization's SharePoint, about people, processes, technology, that really 6:16 helps you further an engagement. Yeah. And the more you know, the more damage you can do. So that's 6:22 part of the landing in the area and doing reconnaissance. And so what did you find that 6:27 allowed you to do more? So in this case, the first critical step, the thing that let 6:32 us move forward, was finding a set of credentials inside of a script, which is sort of a common 6:38 story. We find these credentials that were in a script from some legacy Active Directory thing 6:43 that had happened, you know, many years in the past, and just nobody had ever bothered to go back and clean it up. 6:48 Credentials in a script. Somebody hardcoded a password into a script. I'm sure that has never 6:53 happened in the real world. In my dreams. I wish that was the case. But in fact, this is a very 6:59 common situation. And a lot of times, like you say, if it's been around for a long time, nobody even 7:04 thinks about it anymore.
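The "credentials in a script" finding comes up so often that secret-scanning tooling exists for exactly this purpose. A minimal sketch of the idea (invented for illustration; real scanners such as entropy-based ones are far more thorough, and the sample script and patterns here are made up):

```python
import re

# Naive patterns for secrets left in scripts. Real scanners use many
# more patterns plus entropy checks; this is only the core idea.
SECRET_PATTERNS = [
    re.compile(
        r"""(password|passwd|pwd|secret|api[_-]?key)\s*[:=]\s*['"][^'"]+['"]""",
        re.IGNORECASE,
    ),
]

def find_hardcoded_secrets(text):
    """Return (line_number, line) pairs that look like hardcoded credentials."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# A hypothetical legacy script like the one described in the engagement:
legacy_script = '''
# ad-sync.ps1 -- written years ago, never cleaned up
$user = "svc_adsync"
$password = "Winter2009!"
Invoke-Sync -User $user -Password $password
'''
for lineno, line in find_hardcoded_secrets(legacy_script):
    print(f"line {lineno}: {line}")  # line 4: $password = "Winter2009!"
```

Running a scan like this across repositories and file shares is a cheap way to find the exact class of credential the testers used for their first pivot.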
The script just keeps working, and nobody wants to go in and crack it 7:07 open and make changes. Just leave it. Leave it alone. But obviously, if you've hardcoded that in 7:13 and now you've sucked that in, now where do we go? So what we wanna do is find out what do those 7:19 credentials give us. And that's where the reconnaissance really came into play. And that 7:23 allowed us to identify that these credentials give us access to several of the production SQL 7:27 servers. So now we can move over the SMB protocol to a new set of systems using these new 7:33 credentials. Okay. So now you are into a database, and we continue. Now we've done the 7:40 land. Now we're doing the expand part of all of this attack. And we're continuing to move through. 7:46 This leads to something else. And then what does this SQL database lead you to be able to do? So 7:52 what you term land and expand is what we typically call lateral 7:58 movement and privilege escalation. So it's where we're getting larger concentric circles of control and 8:03 access. And in this case, what we did is, from the SQL boxes, we were able to do what we call 8:09 credential dumping. So, essentially, think of investigating the system for credentials in 8:14 memory and on disk and looking to see, can we move past this box to the next set of credentials? And 8:19 luckily enough, we found some SCCM credentials. If you're not familiar with SCCM, it's essentially a 8:25 system management framework or software for large enterprise environments. Yeah. And what I know 8:31 about SCCM is, if you get into that, now you've got pretty close to the keys to the 8:36 kingdom, because this is where you can control the other systems that are in the environment. You can 8:41 control their access rights and all kinds of other privileges, policies, and things like that. So 8:46 once you've gotten in there, now what?
So once we have access to the SCCM system, we have the 8:53 ability to get credentials that essentially give us access to lots and lots of other workstations 8:57 and servers, and it becomes almost trivial to identify where the next set of elevated 9:04 credentials are. And in this case, we were looking for domain administrator credentials, because they 9:08 allowed us to move on to our final objectives, our business objectives. And that's basically a 9:13 privilege escalation. You started with one low level, and you continue to move through the system 9:19 laterally, gathering more information, more passwords, more credentials, more capabilities, more 9:25 privileges, until now it's essentially game over. Okay, Patrick. So now we've gone through this. You 9:32 and your team are gonna go back and write your report, and you're gonna tell the client then 9:36 what you found, how you breached them, how you owned them, and what should they do. What kind of 9:43 recommendations would you give to an organization so that they don't fall victim the same way? Yeah, 9:49 I think there's a lot of things that could come out in a report. Obviously, we'd have quite a few 9:52 findings from a test like this, but at a high level, I would start to wrap it up with the idea 9:58 of: start with thinking about things from the assume breach perspective. Really focus on what 10:03 happens when we're breached, because we know that there are a lot of hackers out there working very 10:08 hard on a continuous basis to try to get access to just about anything that touches the internet. 10:13 Yeah, absolutely. You have to assume that the bad guy's already in your environment, that they're in 10:19 your network, they're in your database, they're in your application, they already have credentials 10:23 and can log in. And now define your security based upon that. And that would lead you to 10:30 do some other kinds of things, like what kind of security principles?
I think a really good one 10:35 that is often talked about but not always exercised well is defense in depth. When you're 10:40 looking at how a breach happens, you think, well, what happens if they get past this first one? 10:44 Because we always want to question the assumption: is this security control doing everything that 10:49 I hope that it is? Yeah, exactly. Defense in depth basically means you don't rely on any 10:53 single mechanism for your security. So you create essentially an obstacle course for the bad guy to 10:59 have to traverse. So, if this first part fails, well, then you've still got other backup 11:05 mechanisms. How about the other thing? There was that business of, you know, there was a 11:10 script that had hardcoded in it some credentials, some secrets. Wow. 11:16 That never happens in the real world, does it? We wish. Unfortunately, we see it happen pretty much 11:21 all the time. It's one of the more consistent findings that we write in our reports and we see 11:25 in our testing. And I think clients always want to focus on getting the latest and greatest 11:32 technology or looking at the brand-new zero day that may be dropping next week, but 11:36 they haven't really taken the time to think about their identity and access management policies. 11:41 Have we cleaned up all the data that's on our shares? And what do the hackers see when they land 11:45 on one of our boxes? Yeah, absolutely. Basic blocking and tackling, the fundamentals of 11:50 identity and access management. So, really, what they should be doing is storing these credentials 11:55 in some sort of vault, a secure space where these can be checked in and checked out and used as 12:02 necessary and changed over time. We want these credentials not to be static. They should be 12:07 dynamic, so that there's no place for somebody to go and just get them. And now they can 12:13 run free day after day after day.
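The dynamic-credential idea Jeff describes, check a secret out, use it briefly, let it expire, can be sketched in a few lines. Everything below is invented for illustration (`ToyVault` and its API are hypothetical); production vaults such as HashiCorp Vault or CyberArk also handle authentication, auditing, rotation, and revocation:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class LeasedCredential:
    """A credential checked out of a vault with a time-to-live."""
    username: str
    password: str
    expires_at: float

    def is_valid(self, now=None):
        return (now if now is not None else time.time()) < self.expires_at

class ToyVault:
    """Illustrative only: mints a fresh credential on every checkout."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds

    def checkout(self, role):
        # A fresh, random password per checkout means there is no
        # static secret worth hardcoding into a script.
        return LeasedCredential(
            username=f"{role}-{secrets.token_hex(4)}",
            password=secrets.token_urlsafe(24),
            expires_at=time.time() + self.ttl,
        )

vault = ToyVault(ttl_seconds=300)
cred = vault.checkout("sql-backup")
print(cred.is_valid())                       # True while the lease lives
print(cred.is_valid(now=time.time() + 600))  # False after the TTL passes
```

Even if an attacker dumps a leased credential from memory, it stops working minutes later, which is the "moving target" property discussed next.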
You make it a moving target, which makes it a much more difficult thing 12:19 to break into. And you also mentioned IAM, identity and access management: making sure that 12:25 we don't overprivilege individual users, because even though that user might be perfectly honest 12:31 and would never use that against us, maybe their account gets taken over, and now someone leverages 12:37 all of these extra capabilities that they have. So, this kinda goes into the notion of the 12:41 principle of least privilege. We only give you exactly what you need to do your job and not one 12:46 bit more. And we take away anything you don't need anymore. So, this is a big one also, I think. Yeah, I 12:52 think if all organizations that we tested for could really master these two bits, as simple as 12:58 they seem, they would make hackers' lives drastically harder. Yeah, no doubt. 13:03 And anything else that you can think of that we should be doing on this? I think just always 13:09 have a sort of eye on not assuming that all of your controls are doing what you think 13:15 they're doing. You wanna look for things like continuous improvement. Are you testing these 13:18 things and validating that they're actually being, you know, as effective as you want them to be? Yeah, and I 13:23 think the bottom line is you've got to be monitoring what's happening out there as well. Make sure that 13:28 you see if there are behaviors, like someone's moving laterally, or all of a sudden a lot of data starts 13:34 leaving your environment in ways that you haven't seen before. Have those kinds of capabilities in 13:40 place, and then ultimately be able to have some sort of incident response capability once you do 13:45 discover that you've been breached like this, and that way you'll be in a better situation, 13:52 knowing what to do when that occurs. Definitely. So, good news, Patrick.
We didn't have to burn down 13:57 that building across the street after all, and I promised that we wouldn't. But we do have some 14:03 war stories. At least we have one more story on the shelf that you can use. You can take 14:08 this story back to your colleagues, to your managers, and say, look, this could happen to us. 14:14 This is a very realistic situation that Patrick and his team see all the time. So, what could we do 14:21 to make sure that this doesn't happen to us? Try to leverage those war stories to do something 14:27 that will improve your environment. I would definitely say, don't be the one organization in 14:32 the world who thinks that they're unhackable. They're all hackable, and they often will be 14:37 hacked. So, focus on things like continuous improvement. You have a bunch of security controls 14:41 in your environment. You probably installed them and set them up. But did you go validate and test 14:46 and make sure that they're doing what you think they are, and improving with the times, making 14:50 sure they're getting better over time? Because the bad guys are, for sure. Yeah, it's gonna be 14:54 continuous improvement. And one of my favorite sayings is: if you're satisfied with your security, 15:00 so are the bad guys.