
AI-Powered Cyber Attacks Emerging

Key Points

  • AI is becoming a double‑edged sword: while it powers business innovations, it also equips hackers with more sophisticated tools for attacks.
  • AI‑driven agents can automatically locate login forms on websites with about 95% accuracy, using large language models to parse page elements.
  • These agents enable advanced credential‑guessing techniques such as password spraying and brute‑force attempts, bypassing traditional rate‑limit defenses.
  • Frameworks like BruteForceAI automate the entire penetration testing (or malicious) process, allowing attackers to launch high‑speed login attacks without deep technical knowledge.
  • Understanding and defending against AI‑augmented attack vectors is essential for organizations to safeguard authentication systems against the growing threat.

Full Transcript

# AI-Powered Cyber Attacks Emerging

**Source:** [https://www.youtube.com/watch?v=0tHb6U2604g](https://www.youtube.com/watch?v=0tHb6U2604g)
**Duration:** 00:18:25

## Sections

- [00:00:00](https://www.youtube.com/watch?v=0tHb6U2604g&t=0s) **AI-Powered Cyber Attack Landscape** - The speaker outlines how AI is being weaponized, highlighting six emerging threats such as autonomous brute‑force login tools like BruteForceAI, to illustrate the growing need for stronger defenses.
- [00:03:08](https://www.youtube.com/watch?v=0tHb6U2604g&t=188s) **AI-Driven Ransomware Orchestration** - The excerpt outlines the PromptLock research project, where an autonomous agent powered by a large language model plans target selection, assesses data value, generates encryption code, and executes the ransomware attack end‑to‑end.
- [00:06:36](https://www.youtube.com/watch?v=0tHb6U2604g&t=396s) **AI-Generated Phishing Neutralizes Old Cues** - The speaker warns that traditional clues like bad grammar are becoming unreliable because attackers now use large language models to create flawless phishing emails, necessitating a retraining of users to recognize more sophisticated threats.
- [00:11:41](https://www.youtube.com/watch?v=0tHb6U2604g&t=701s) **AI Deepfake Scam and Exploit Automation** - The speaker recounts a 2024 video deepfake that tricked a finance employee into wiring $25 million, then explains how AI can automatically generate exploits by processing public CVE reports.
- [00:17:16](https://www.youtube.com/watch?v=0tHb6U2604g&t=1036s) **AI Lowers Attack Barriers** - The speaker warns that AI automates the full cyber kill chain, making sophisticated attacks accessible to low‑skill actors and forcing defenders to adopt AI for prevention, detection, and response.

## Full Transcript
[0:00] AI attacks. Sounds like a bad sci-fi movie, right? Well, unfortunately, in this case it's actually happening, and we can expect to see more of it going forward. Businesses are using AI to improve customer service. Customers are using AI to research products. Unsurprisingly, hackers are using AI to, well, hack. Agents powered by AI are equipped with the tools to write code, attempt logins, generate fake videos, and much more. While AI is doing amazing things to reshape our businesses and our lives in positive ways, it's also amping up the threat by putting more and more power in the hands of the bad guys. In this video, we're going to take a look at six different examples of emerging attacks that use AI, so that you can prepare your defenses to withstand the onslaught.

[0:50] The first type of attack we're going to look at is the AI-powered login attack, which tests the security of your system and your authentication capabilities to see whether they'll withstand an assault. Here, AI is leveraged inside a penetration testing framework that could be used to test your security, or by a bad guy to break into your system. This particular one is called BruteForceAI. It leverages an agent, an AI system able to operate autonomously, which in turn uses an LLM to do some of its processing.

[1:33] So what is it looking for? The agent goes out and starts identifying login pages. It looks for web pages that contain login forms: it takes each page and sends it off to the large language model, which parses the page and figures out whether there are any login forms, areas where you can type in user IDs, passwords, and the like. LLMs are particularly good at doing that.
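The form-finding step the transcript describes can also be sketched deterministically with a plain HTML parser; the snippet below is a minimal illustration (the page markup and class name are hypothetical, not taken from BruteForceAI), flagging any form that contains a password input, which is the same signal an LLM-based agent would key on.

```python
from html.parser import HTMLParser

class LoginFormFinder(HTMLParser):
    """Flag <form> elements that contain a password input --
    the signal that marks a page region as a login form."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.current_has_password = False
        self.current_action = ""
        self.login_forms = []  # actions of forms that look like logins

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.in_form = True
            self.current_has_password = False
            self.current_action = a.get("action", "")
        elif tag == "input" and self.in_form:
            if a.get("type", "text").lower() == "password":
                self.current_has_password = True

    def handle_endtag(self, tag):
        if tag == "form":
            if self.current_has_password:
                self.login_forms.append(self.current_action)
            self.in_form = False

# Hypothetical page: one search form, one login form.
page = """
<html><body>
  <form action="/search"><input type="text" name="q"></form>
  <form action="/login">
    <input type="text" name="user">
    <input type="password" name="pass">
  </form>
</body></html>
"""

finder = LoginFormFinder()
finder.feed(page)
print(finder.login_forms)  # → ['/login']
```

A fixed heuristic like this breaks on obfuscated or JavaScript-rendered markup, which is presumably why the framework hands the page to an LLM instead.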
[1:58] In fact, this one was able to correctly identify the login area of a page in roughly 95% of cases. Once that's identified, the agent conducts and directs the attack. Here you have two options. One is a brute-force attack, where you basically try every combination of user IDs and passwords. That usually doesn't work all that well, because you'll run into a three-strikes policy on the website that locks you out after three bad attempts or something along those lines. The other is password spraying, which might get away with it: instead of hammering a single account, you try one password across a number of different user IDs before moving on to the next password. You're spreading your attempts across many possibilities rather than barreling in on just one, which keeps you under the lockout threshold. And again, AI is running this attack. The user didn't have to figure out any of these capabilities; they launch the pen testing framework, and the AI takes care of the rest.

[3:14] On a similar theme, let's take a look at AI-based ransomware. Here we're going to talk about something called PromptLock, a research project designed to explore what's possible, the art of the possible in this case. It also uses an agent, which in turn leverages a large language model; you're probably seeing a theme here. The whole thing is designed so that the agent goes off and orchestrates and directs all of the necessary activities. It plans the particular attack, then goes off and analyzes the information it needs once it picks its targets.
[4:05] In other words, it figures out which systems it wants to attack and then analyzes the sensitive data on those systems. It will look at files and say, "I think this stuff could be really sensitive; they're going to pay a lot for this," or, "That over there is probably not worth my time." It can then use that information to decide, for instance, how much to charge. It also generates the actual attack, whatever code is necessary to encrypt the files and so on, and then executes the attack. All of this happens under the auspices of the agent, so the agent is really running the whole thing. The LLM comes into play because it can help analyze the files that are fed into it.

[4:57] The attack itself could result in exfiltration of data, where I take your data and keep it for myself. It could amount to encryption of your data, where I say I've got your data and won't give it back unless you pay me. Or it could just erase the data, or threaten to erase it after a certain period of time; it depends on what type of attack you want to run. As part of executing the attack, the AI agent, leveraging the LLM's ability to understand language, can even write the ransom note for you, and it can be very personalized. It could say, for instance, "Here are the files I've taken; if you want them back, this is what it will cost you," all completely directed by the AI. And because it's all being done within an AI, every single one of these attacks can be made different, essentially a polymorphic attack where it changes.
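To see why per-instance variation frustrates detection, here is a toy sketch (the payload bytes are made up) showing that a signature database keyed on the hash of one variant simply misses a functionally identical second variant.

```python
import hashlib

# Two functionally identical payloads that differ only in junk padding --
# a toy stand-in for how a polymorphic attack varies each instance.
payload_a = b"DO_THE_THING" + b"\x90" * 8   # hypothetical variant 1
payload_b = b"DO_THE_THING" + b"\xcc" * 8   # hypothetical variant 2

sig_a = hashlib.sha256(payload_a).hexdigest()
sig_b = hashlib.sha256(payload_b).hexdigest()

# A signature database built from variant A never matches variant B.
known_bad = {sig_a}
print(sig_b in known_bad)  # → False: same behavior, new fingerprint
```

This is why defenders lean on behavioral detection rather than exact signatures when every instance of an attack can look different.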
[5:59] The first instance of this attack looks different from the next instance, which looks different from the one after that, and that makes it really difficult to detect. We've seen polymorphic viruses and malware for decades, and they present a real problem. Now we could see polymorphic ransomware attacks generated by AI. And by the way, in this particular project, all of this capability runs in a cloud, which means you basically end up with ransomware as a service, all brought to you by AI.

[6:30] The next type of attack we're going to talk about is AI-powered phishing. Now, what have we been telling our users about phishing attacks? Remember, those are the emails that claim to come from your bank or some other well-known entity but are fake; in most cases they try to get you to click on a link that takes you to a bogus site, where they harvest your credentials, and then they're off to the races. What do we normally tell people to look for as the clue, the dead giveaway that it's not real? Oftentimes it's bad grammar and bad spelling. We say that if you see those kinds of things, assume it's a phishing attack. The implication is that if you don't see them, people are likely to believe the message is legitimate. And I'm telling you, we need to untrain our users from that, because with AI we're not going to see this kind of stuff much anymore. The smart phishers will use an LLM, a large language model, to generate their text in perfect English or Spanish or French or what have you, even though the attacker may not speak a word of that language. So these artifacts, these clues we've been expecting to find, may not be there anymore, and that could give someone a false sense of security.
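Since grammar is no longer a reliable tell, checks that survive flawless prose matter more. One such cue is a link whose visible text names one domain while its href points somewhere else; the hypothetical sketch below (the email markup is invented) flags that mismatch.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchChecker(HTMLParser):
    """Flag <a> tags whose visible text looks like a hostname but
    doesn't match where the href actually points -- a phishing cue
    that survives perfect grammar."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip().lower()
            actual = urlparse(self.href).hostname or ""
            # Visible text names a domain that isn't the real target.
            if "." in shown and shown != actual:
                self.suspicious.append((shown, actual))
            self.href = None

# Hypothetical phishing email body.
email_html = '<p>Verify now: <a href="http://evil.example.net/login">mybank.com</a></p>'
checker = LinkMismatchChecker()
checker.feed(email_html)
print(checker.suspicious)  # → [('mybank.com', 'evil.example.net')]
```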
[7:59] The way it works is that an attacker simply puts prompts into a large language model: "Generate a phishing email that does this, that, or the other." Out comes a phishing email they can copy, paste, and send to others. You might say, "But the LLMs I work with will refuse if I ask them to generate a phishing email." And they might. But there are other LLMs out there, some on the dark web, that I'm not going to name but that you can find if you want to, which don't have those kinds of guardrails and restrictions. The bad guys will be using those.

[8:42] You could also do a little research to really personalize this, hyper-personalize it. AI could go out and scrape all of your social media posts and the like, gathering a lot of information about you to make the phishing email sent to you very specific to you, and therefore more believable. I did a video a while back on an experiment IBM ran: we basically gave an AI five prompts and five minutes, then compared how effective its phishing email was against a well-crafted phishing attack that took a human 16 hours to produce. And you know what? They were almost equal. The human-generated one was slightly more effective, but not by much. And when you consider 16 hours versus five minutes, you can see where the economics of this go. Here's the thing: the humans will not be getting vastly better at generating these; the AI will. So AI-generated phishing is another area where we're going to see more and more influence.

[9:46] Now, the next type of attack we're going to take a look at is AI-powered fraud. The fraud could take a lot of different forms, but I'm going to zero in on one particular type that we call a deepfake. A deepfake is a case where we're using generative AI. We've got a gen AI model, and I take something you've said, either an audio recording of your voice or a video of you doing something, and feed it into my generative AI model, which then generates a model of its own. That model is basically a copy of how you act, sound, and look. Then the only thing I have to do is come up with a script, words I want to put in your mouth, feed those in, and out comes the result. That's how a deepfake works, and they're not very hard to do.

[10:48] In fact, we've already seen cases where these have been very effective. By the way, if you want to know more about this, I've got a whole video on it, so take a look. I'll just say this: if you think that not leaving your voice on your voicemail greeting will protect you from being deepfaked, think again. It doesn't take very long; some of these models can generate a very believable deepfake of your voice from as little as three seconds of audio. Now, as I said, we've already seen this be effective. This is not brand-new news. In 2021, there was a case where an audio deepfake convinced an employee that their boss was telling them to wire 35 million dollars to a particular account. It turns out that was a deepfake; it wasn't their boss, and the company lost 35 million dollars. More recently, in 2024, there was a case where the deepfake got even better, and it was video-based.
[11:54] In this one, a video call simulated the CFO, the Chief Financial Officer, of a company and convinced an employee to wire 25 million dollars to an attacker. So this is not theoretical. And it exploits the fact that we generally believe what we see and hear. Well, I'm telling you, with deepfakes, if you aren't in the room, you can't believe it.

[12:19] Our next type of AI-powered attack is the AI-powered exploit. An exploit is the thing that takes advantage of a vulnerability once one has been found. For instance, in the security industry we publish things called CVEs, common vulnerabilities and exposures. These are reports where, once a particular vulnerability is found, it gets described, numbered, cataloged, and made publicly available. It's a way for everyone in the security industry to talk about a particular vulnerability and know we're all talking about the same thing. A CVE also describes the way the thing works and what the underlying vulnerability is. So this is publicly available information.

[13:05] With this in mind, another research project took a CVE and fed it into an AI tool called CVE Genie. Again, this is an agent that takes the CVE, the document itself, and feeds it into an LLM. Starting to see a trend here: agent leverages LLM. The LLM reads the document, pulls out the salient details, and sends that information back to the Genie, the agent, which then figures out not only what the vulnerability means but how to exploit it, and writes the actual exploit code for you. In this case the whole process is automated, from feeding in the CVE, to processing it, to generating the exploit.
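The front end of a pipeline like the one described, pulling the salient fields out of a machine-readable CVE record before handing them to an LLM, can be sketched roughly as follows. The record and its field names here are simplified illustrations, not the exact CVE JSON schema or anything from the CVE Genie project.

```python
import json

# A simplified CVE-style record (field names are illustrative only,
# not the real CVE JSON 5.x schema) -- the kind of public input
# such a pipeline starts from.
record = json.loads("""
{
  "id": "CVE-2099-0001",
  "description": "Buffer overflow in example_parse() allows remote code execution.",
  "severity": "CRITICAL",
  "affected": ["exampled 1.0 - 1.4"]
}
""")

# The "pull out the salient details" step: reduce the record to the
# fields an agent would hand to an LLM for further analysis.
salient = {
    "id": record["id"],
    "summary": record["description"],
    "severity": record["severity"],
}
print(salient["id"], "-", salient["severity"])  # → CVE-2099-0001 - CRITICAL
```

The point of the sketch is that everything up to this step is mechanical string handling; the expensive reasoning about what the vulnerability means is what gets delegated to the model.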
[13:51] And here's the thing: with this particular version, they achieved a 51% success rate just by feeding a CVE into the system. The cost for each of these exploits? Less than 3 dollars. So the economics of this are astronomical for the bad guys. It means individuals who don't know anything about coding will be able to take advantage of systems using publicly available information and an AI at their disposal. Malware is another example of this; malware is, in many cases, a type of exploit that takes advantage of an underlying vulnerability in the system. I could use a system like this to generate malware that obfuscates its nature, is polymorphic like I was talking about before, and hides details about the way it operates, making it even harder to detect. So you have a smart system that makes itself difficult to detect, and potentially a lot more effective as well.

[15:00] Now, what if you have AI that runs the entire kill chain, AI-powered attacks all the way across the board? This has already been done. It's been proven effective in a case that weaponized an AI model from Anthropic, a popular AI provider. It uses an AI agent, as do many of these other attacks, and the agent is basically responsible for running the entire attack. It makes decisions, tactical and strategic, about what kinds of things it wants to attack and what kind of attack it wants to run. It finds its victims and identifies the ones it thinks are the most effective, maybe the high-value targets, that sort of thing.
[15:49] It analyzes the data it has exfiltrated from their systems, figures out which of it is the good stuff it really wants, and does all of that analysis within the context of the agent, which might, by the way, leverage an LLM to process the documents. It might also create personas to hide behind: if I'm going to run an extortion attack, threatening to release all of this information to the world unless a ransom is paid, the system can create false personas, hide behind them, and tell the victim to pay that false persona, which makes it easier for the attacker to get away. And then, ultimately, it can create the ransomware itself. It will figure out its demands and calibrate them based on the value of the information and the value of the target it has gone after, along with how likely the victim is to actually pay. Because if you ask for a ton of money from someone who doesn't have it, they're not going to pay; if you ask for too little, well, then you sold yourself short. This system can make all of those economic decisions.

[16:57] In the future, we could add all kinds of things to this. Any kind of attack you can imagine could potentially be done with a system like this. The AI agent is able to advance the attack, design the attack, and execute the attack. What all of this amounts to is that the skill level required of an attacker is getting much lower. There was a time when an attacker needed really sharp, elite-level skills to pull off a complex attack like this. Now all they have to do is be like a vibe coder who's doing vibe attacking, vibe hacking.
In other words, you come up with the idea, instruct your agent to go do it, it figures out all the details, and then you just collect the money. This is an example of AI being weaponized to run the full kill chain.

[17:55] By now it should be pretty clear where all of this is headed. AI-powered attacks are on the rise, and we're just seeing the beginning of the trend. This much I'm sure of: AI attacks are only going to get worse. That means defenders will have to step up their game to meet the challenge. We're going to need to leverage AI for cyber defense, for prevention, detection, and response; it won't be optional. It's going to be good AI versus bad AI. Make sure the good one wins.