
AI-Driven Cyber Threats Forecast

Key Points

  • The IBM Technology channel annually reviews the past year’s cybersecurity landscape and makes forward‑looking predictions, a tradition continued through 2025 with a forthcoming confession about a “cheat” at the video’s end.
  • AI’s dual‑edged impact proved true: while it offers benefits, unchecked “shadow AI”—unauthorized models deployed in the cloud—added roughly $670K to breach costs, and 60% of firms still lack AI governance policies to curb it.
  • Deepfake creation exploded from about 500K catalogued instances in 2023 to 8M in 2025—a roughly 1,500% surge—highlighting the growing risk of AI‑generated fraudulent media in cyber‑attacks.
  • Attackers are leveraging generative AI to craft more sophisticated exploits and polymorphic malware, automating vulnerability exploitation and increasing the difficulty of detection.
  • The speaker underscores that these AI‑driven threats are expected to intensify, urging stronger AI security frameworks and hinting at a personal “cheat” reveal later in the presentation.
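The deepfake growth figure in the bullets above can be checked with a couple of lines of arithmetic; this quick sanity check is my own illustration, not something from the video:

```python
# Sanity-check the deepfake growth figure: roughly 500K catalogued
# instances in 2023 growing to 8M in 2025 (numbers as cited in the video).
instances_2023 = 500_000
instances_2025 = 8_000_000

growth_factor = instances_2025 / instances_2023   # 16x overall
percent_increase = (growth_factor - 1) * 100      # growth over the 2023 baseline

print(f"{growth_factor:.0f}x growth, a {percent_increase:,.0f}% increase")
```

A 16x jump over the 2023 baseline is a 1,500% increase, which matches the figure quoted in the summary.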

Full Transcript

**Source:** [https://www.youtube.com/watch?v=2jU-mLMV8Vw](https://www.youtube.com/watch?v=2jU-mLMV8Vw)
**Duration:** 00:20:17

Sections

  • [00:00:00](https://www.youtube.com/watch?v=2jU-mLMV8Vw&t=0s) **Year-End Cybersecurity AI Review** - A recap of IBM’s annual cybersecurity roundup, evaluating past AI‑related predictions—especially the rise and cost impact of shadow AI—while teasing a forthcoming confession.
  • [00:06:49](https://www.youtube.com/watch?v=2jU-mLMV8Vw&t=409s) **AI Agents: Speed, Risks, Predictions** - The speaker reflects on the unexpectedly rapid deployment of autonomous AI agents, foreseeing escalating attacks on and by these agents as they amplify both productivity and security risks.
  • [00:10:35](https://www.youtube.com/watch?v=2jU-mLMV8Vw&t=635s) **AI Agents Amplify Cyber Attacks** - The speaker warns that AI‑driven agents can automate and hyper‑personalize phishing, create constantly evolving polymorphic malware, and streamline ransomware campaigns, making malicious operations far more efficient and harder to detect.
  • [00:14:39](https://www.youtube.com/watch?v=2jU-mLMV8Vw&t=879s) **AI's Transformative Role in Education** - An adjunct professor argues that education must move from banning generative AI to embracing it—training students to use AI tools for future workplace tasks—while also noting AI’s growing influence on creative fields like music.
  • [00:18:13](https://www.youtube.com/watch?v=2jU-mLMV8Vw&t=1093s) **Passkeys, Phishing Prevention, Quantum‑Safe Future** - The speaker outlines IBM’s enterprise‑wide shift to passkey authentication as a phishing‑resistant alternative to passwords, shares personal usage statistics, and humorously warns that quantum cracking could soon threaten conventional cryptography, urging immediate adoption of quantum‑safe security measures.

Transcript
0:00It's become something of a tradition here on the IBM Technology channel to do a video at the 0:04end of each year, where we look back at the past year in cybersecurity and then make a few 0:09predictions for the future. We did this in 2023, again in 2024 and in 0:152025. And I'm back again to dust off my crystal ball and tell you what I see. Oh, and by the way, 0:22this year I might have cheated just a little bit. So stick around to the end to hear my confession. 0:28Okay, let's take a look and see how the predictions from 2025 and beyond stacked up. Well, 0:34I said a lot of the stuff was going to be around AI. We would get some positives and some negatives, 0:39and I think that's turned out to be the case. If you look at AI in particular, it has done some 0:44good things for us. But I'm in cybersecurity, so I'm focused on some of the negative aspects of 0:50how AI is getting used against us, first. So, shadow AI, that's an example of an AI 0:57implementation where no one approved this thing. It just exists. Somebody downloaded something into 1:02a cloud, put in a model, and they were off to the races. Shadow AI instances we have seen be 1:09costly. Uh, I predicted they would be. They are. Uh, every year IBM runs a Cost of a Data Breach 1:15report. And in this, we figure out how much it costs when an organization has their data 1:20breached or has their data compromised. One of the findings in that report uh, justifies what I was 1:27saying here. $670,000 more for organizations that had a data breach and had 1:33shadow AI. So shadow AI contributed to an additional cost whenever a data breach 1:40occurred. So that's uh, a big problem. Uh, compounding this is the fact that the Cost of a Data Breach report 1:46also found that 60% of organizations don't have an AI governance or security policy in place 1:53to guard against shadow AI. So it already costs more, and we don't have the guardrails in place to 1:59prevent it.
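One countermeasure implied by the shadow AI discussion above is basic visibility: you cannot govern shadow AI you cannot see. Below is a minimal sketch of that idea in Python, assuming you have egress proxy logs to inspect; the log format, the domain list, and the account names are all hypothetical examples, not anything the video or IBM prescribes:

```python
# Illustrative sketch: flag outbound traffic to known AI API endpoints that
# did not come from an approved service account. The log format, domain
# list, and account names below are hypothetical examples for demonstration.
from urllib.parse import urlparse

AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com",
                  "generativelanguage.googleapis.com"}
APPROVED_ACCOUNTS = {"svc-ml-platform"}  # accounts sanctioned to call AI APIs

def find_shadow_ai(proxy_log: list[dict]) -> list[dict]:
    """Return log entries that hit an AI API from an unapproved account."""
    flagged = []
    for entry in proxy_log:
        host = urlparse(entry["url"]).hostname
        if host in AI_API_DOMAINS and entry["account"] not in APPROVED_ACCOUNTS:
            flagged.append(entry)
    return flagged

log = [
    {"account": "svc-ml-platform", "url": "https://api.openai.com/v1/chat/completions"},
    {"account": "jdoe",            "url": "https://api.anthropic.com/v1/messages"},
    {"account": "jdoe",            "url": "https://example.com/index.html"},
]
for hit in find_shadow_ai(log):
    print(f"possible shadow AI: {hit['account']} -> {hit['url']}")
```

A real deployment would work from proxy or CASB exports and a maintained domain feed; the point is only that unsanctioned AI usage tends to show up in egress traffic long before it shows up in a breach report.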
I think this is going to continue to be an issue for us. Deepfakes, where you use 2:04generative AI to generate pictures, uh, audio, video of another person—may be real or 2:11not real—doing things that they never did, saying things they never said, this kind of thing. Uh, I am 2:18concerned about this and how it could be used. It can be used for fun stuff, for entertainment, but 2:24misused in terms of cyberattacks. And in particular, we've seen this occur. There was one 2:30report I found where they uh, in 2023, they were seeing about 2:36500,000 instances of deepfakes that they were able to catalog and observe. In 2:412025, that number moved to 8 million. So, if you're doing the math at home, that's a 2:481,500% increase. So, uh, I think the prediction that we would see more deepfakes 2:55is definitely happening, and I think we're going to see them become even more pervasive as we move 3:00forward. Using AI to generate exploits, where we find a vulnerability and then go to the AI and 3:07have it generate, yes, we've seen that. We've seen AI-generated malware and that malware is more 3:14sophisticated. So, one of the things it can be is what we call polymorphic. Polymorphic 3:20malware is stuff that can change over time. So, it's harder to detect, which means uh, 3:28it's tougher for the bad ... for the good guys to defend against this. And the bar got lower for the 3:33bad guys who are creating it in the first place. So they get more intelligent malware that was 3:38easy to create, cause all they had to do was go to an AI and have it do the work. And the defense 3:43side is actually more difficult for us. These next two I'm going to take together. Uh, I said that AI 3:49would increase the attack surface, and it definitely has. So in this case, I'm talking about 3:54organizations that are using AI to advance their business, to do their-their goals, to become more 4:00productive, those kinds of things. And that technology is excellent at doing that. 
But it also 4:06becomes another thing that a person can try to attack. And that's what we've seen. And the 4:11organization called OWASP, the Open Worldwide Application Security Project, in 2023 came out 4:18with their top ten list of vulnerabilities for large language models. And on that list, 4:24number one was this guy right here: prompt injection. Guess what? In 2025, it was number one 4:31again. So, uh, the projection that we would continue to see this— and, in fact, I think we'll see it even 4:37more going forward—uh, that has definitely turned out to be true. Now on the positive side, I didn't want 4:43to give only negatives, but I thought we would see AI used to improve cybersecurity, in particular, 4:49to improve our response to incidents, to identify issues and respond to those issues. So, in 4:56fact, we've seen that occur as well. Um, in this case, we're-we're seeing that uh, IBM in 5:03particular came up with a product that basically uses an AI to detect prompt injections and defend 5:10against those. So it's an AI that's defending and helping against AI-based attacks and other things 5:16like that. We're going to see more of that as well. We're going to need systems that are adaptable in 5:22real time to the attacks that are changing in real time. And an AI would be a good way to do 5:28that. So we've already started to see this being infused into cybersecurity tools. And then the 5:33next one that I'll talk about that's not AI related—because not everything is AI— but a really 5:38important thing coming in the future is quantum computing. And quantum computing can solve a lot 5:44of problems for us, but it can also create some headaches for us. And one of those is the fact 5:49that at some point it will be able to break all of our cryptography, and when it does, we're going 5:55to wish that we had implemented these new post-quantum cryptography algorithms, the things 5:59that are quantum-safe. 
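A common first step toward the quantum-safe migration urged here is a cryptographic inventory: list where each public-key algorithm is used, then flag the ones quantum computers are expected to break. The sketch below is my own illustration of that triage, not a method from the video; the system names are made up, and the vulnerable/safe split follows the standard analysis (RSA and elliptic-curve schemes fall to Shor's algorithm, while the lattice-based NIST selections ML-KEM and ML-DSA are designed to resist it):

```python
# Illustrative crypto-inventory triage for quantum-safe planning.
# The inventory is a made-up example; the classification follows the usual
# analysis: RSA/ECC break under Shor's algorithm, while ML-KEM (FIPS 203)
# and ML-DSA (FIPS 204) are the NIST post-quantum replacements.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "X25519"}
QUANTUM_SAFE = {"ML-KEM-768", "ML-DSA-65"}

inventory = {
    "vpn-gateway": "RSA-2048",
    "code-signing": "ECDSA-P256",
    "internal-pki-pilot": "ML-DSA-65",
}

needs_migration = {name for name, alg in inventory.items()
                   if alg in QUANTUM_VULNERABLE}
print("systems still on quantum-vulnerable crypto:", sorted(needs_migration))
```

Even a toy table like this makes the migration backlog concrete, which is the "do it now" step the speaker returns to at the end of the video.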
I'll talk more about this later in the video, but uh, I'll just say what 6:05I've seen. I-I projected that we would be, of course, closer to the Q-Day, when the quantum systems 6:11will be able to break our cryptography. We don't know when we're going to hit that yet, but it's coming. But with 6:17quantum-safe, what I have observed is, as we moved through 2025, the level of 6:23interest in this topic has increased. And that's good because there needs to be a widespread 6:30awareness of this uh, impending threat that we have. But on the downside, well, I haven't seen so 6:37much yet in terms of deployments. Some people are doing it, but not nearly enough. And the clock is 6:43ticking already on what we're going to have to do in order to address those particular threats. 6:49Okay, but I didn't hit on everything. I knew agents were coming, just not this fast. Silly me. 6:56I'm here thinking that giving autonomy to hallucinating AIs was not quite ready for prime 7:02time. Everyone else disagreed and said, "What could possibly go wrong?" Well, okay, so fasten your 7:09seatbelts and off we go. All right, so I'm not missing on agents two years in a row. I knew they 7:15were coming, I just didn't know they were coming so fast. But I'm going to make a lot of 7:20predictions about agents this year because they have really taken off, and I'm going to look at this in two different 7:25categories. One is attacks on agents and another is 7:31attacks by agents. So, we're going to see both of these. We've already begun to see some of these 7:37things. So the prediction is we're going to see these things continue and increase. So, for 7:42instance, attacks on agents. The first thing is to think about what does an agent do for me. Well, an 7:48agent is an autonomous AI that you give goals to, and it goes off and does the things you want it 7:53to do. So it's a productivity amplifier if it's running properly. Guess what else it is?
It's a 7:59risk amplifier because if someone is able to hijack that agent, 8:06then it will do something bad that you didn't intend, but do it at light speed. So it will do it 8:12much faster than a person would be able to do. So, it is amplifying risk, and if it has the ability 8:18to access all the kinds of tools that we want it to use in order to be really effective, then it's 8:24going to i-increase risk there as well. Uh, I did a video on this particular topic, uh, about 8:31zero-click attacks, where someone maybe sends in an email with a prompt injection uh, 8:37directly in the email. So we call this an indirect prompt injection. Your agent comes along and reads 8:42it in order to summarize it, and then follows the instructions that are in the prompt injection and 8:47exfiltrates data out of your environment. Zero-click, because the user never touched it. The user 8:53might not have even been in the office that day. So, that's another issue because the agent is 8:59processing a lot of these things, so it doesn't even require user intervention. Another thing 9:04we're going to see is an increase in non-human identities. Non-human identities, 9:11meaning all these agents that are out there, they need certain levels of privilege and they need 9:17certain levels of access. And that means I need to have them run under particular accounts, under 9:22identities. But they're not really associated with a particular person. In fact, agents can spawn and 9:28create other agents. So now we have more and more identities that need to be managed. That's 9:35increasing the risk surface. Uh, and as a result, attacks on these things will happen here as 9:42well. Sometimes a user may say, well, I'm just going to have this agent operate under the same 9:47privilege that I have. Well, that sounds like a good idea, but here's the issue.
You might not do 9:54a certain set of things, or you might do 1 or 2 of them, but your agent now is running at light speed 9:59and it does it 10,000 times in a minute. So, again, the risk becomes much greater. Also, 10:06agents could have situations where they have privilege escalation, where they get more access 10:12than they should have if we're not really careful, or excessive, uh, access to systems. So, these are 10:19the kinds of things we have to be worried about as we're deploying agents. Not that we shouldn't 10:24do agents; we absolutely should. But do them with your eyes wide open and understand what some of 10:29those risks could be. Now, over here on the other side, how about agents being used by the bad guys? 10:35I was just talking about using agents for the good guys to ... for us to do our business. How are 10:40the bad guys going to use this? Well, this is attacks by agents on us. Well, one set of these 10:47that we've already seen are phishing attacks. So phishing attacks get even more amped 10:53up when we start using agents to do the work. The phishing agent can go craft 11:00a very special email that is personalized just for you, hyper-personalized. So you are 11:07more likely to fall for the clickbait that it's trying to get you to-to follow, than you would 11:12just a conventional one. And again, this can all be automated through an agent. Another one is malware, 11:18which I already mentioned that we've been uh, seeing already. How about this? The malware is not only 11:25smarter, it's polymorphic; therefore it's harder to-to detect because it's changing itself over time. 11:30It's changing its behaviors, it's changing the signature that we would look for. But 11:36we could, again, just have a bad guy go into an AI and say, I want you to start creating different 11:42kinds of malware samples, throw them all out into the world and we'll see which ones stick. Now, that 11:49would be a lot of work if you're a person doing all of that.
It's not a lot of work if you're an 11:53agent. You can just go out and try a whole bunch of different options. The ones that don't work, you 11:57just discard. Uh, ransomware is also going to have, uh, and we've already seen some indication 12:04of this, where a ... the entire ransomware attack chain is being automated, where it's 12:11writing the-the ransomware, it's writing the email that tells the person and the ... what they're going to 12:17do. It's writing the exploit that's going to go encrypt all the data or steal the data. And it's 12:22collecting the ransom. So it's even telling the instructions and doing the collections—the whole 12:28system being automated through an agent. So, again, the attacker skill level keeps going down, 12:35but the effectiveness keeps going up. Another thing that we could see here is automating the 12:40entire kill chain. And this is where—I-I've alluded to it in some of these others, but it's going 12:47to be even more— where we've got the AI agent figuring out which targets it wants 12:54to hit. It's evaluating them, it's doing reconnaissance on them, it's probing them and 12:59finding their vulnerabilities. It's building the exploits, it's running the exploits. It's then 13:05collecting all the information, stealing the data. It's doing all of this. So it is literally just a 13:11click-here-to-hack situation. And the agent handles all the work. This, we're already beginning 13:17to see signs that this is possible. So I expect you're going to see more of it. Another thing I 13:23think we'll see an increase in is social engineering. Social engineering is where you try 13:29to uh, fake someone into doing something that they really shouldn't do. They should know better than 13:34to do it, but they're-they're going to do it anyway. And in a social engineering attack, if I have 13:40deepfake information, or if I have other information that I used my agent to gather about 13:46you, it's going to be even more convincing.
So, we need to-to see what kinds of things we can do uh, 13:53in order to ... to block these types of social engineering attacks. And again, the-this is all coming 13:58from deepfakes. So, uh, let's go ahead and add that as something that's going to, I think, continue to 14:04increase. Deepfakes will keep getting better. Uh, the people that are trying to invest a lot of time 14:10into doing deepfake detection, don't bother, because deepfakes keep getting better. The 14:16detectors will not be able to keep up. So, deepfakes are something we're going to have to 14:21accept, and we're going to have to train people to expect them and not be looking to recognize them, 14:27but think about what the deepfake is asking them to do. Now, another area that's not specifically 14:32related to agents, but I think is a pretty easy prediction, is that we're going to see AI increase. 14:39AI is going to increase, though in some very predictable places and some not so predictable. 14:44One of my side hustles is as a ... as an adjunct professor at NC State University. 14:51So I ... when I first saw what uh, the capabilities of generative AI and chatbots could be, 14:58my first reaction, like most people in the education area, was, okay, we need to outlaw this. We 15:04need to put detection in. We need to make sure students are still writing their papers, and 15:08they're not just getting ChatGPT output or whatever, something like that. I think there's 15:13going to be a change. And I did a video on this uh, that will be coming out, on the future of 15:18education based upon AI. AI in the future of education. Education, the whole industry, is going 15:25to have to embrace AI instead of fighting it. It's not going to work for us to say, just keep it out 15:30of the classroom. Nobody is going to. Your boss is not going to come to you in the future and say, I 15:35want you to accomplish this, this and this. And don't use AI while you're doing it.
Okay, if that's 15:40what the workplace is going to require, then that's what we need to be training students to do. 15:44So education is going to have to change the way that we think about teaching, and it's going to be 15:49affected dramatically by AI. Some other areas that we're going to be seeing uh, in the area of art and 15:56music, things of the-the arts. Um, I'm a guitar player, so music is really important to me. And 16:03I've seen what AI has been doing in the area of music. We've got entire groups that don't 16:10exist. They were generated out of AI. We're going to see more of that. Some of the music that comes 16:15out is actually pretty good. Not all of it. Some of it is just slop, but heck, we've had slop in the music 16:20industry for as long as we've had a music industry, so that's not new. Marketing is another 16:25area where we're seeing a lot of uses of AI already, and even more as it can generate copy for 16:32us. It can generate ... even give you ideas for business plans. It can give you ideas for 16:36marketing campaigns. Another one that's going to be affected, and I keep telling this to all of my 16:41students: if you just want to learn to code like I did when I was a computer science major—I was 16:46going to learn to be a programmer—if that's all you want to do, we're going to need far fewer of 16:51those in the future, as AI gets better and better at writing code. Right now, people are still better. 16:55But people don't scale to the same degree as AI. And AI keeps getting better. So we'll still have 17:01coding jobs, but not as many going forward. I think you're going to see AI affecting all of these 17:07areas in a very significant way as we move forward. Just to let you know, I'm not only focused 17:13on AI for the future; there are some other non-AI topics. One of them, if you've watched any of my 17:19videos, you'll know I'm a big fan of these things called passkeys, which are a replacement for 17:24passwords.
More secure, easier to keep up with, phishing-resistant for the most part. So passkeys, 17:31and they come to us from an organization called the FIDO Alliance, Fast Identity Online. There's a 17:37lot of companies that have signed up for this. I mean, just to name-drop a few: Amazon, Google, uh, Target, 17:43PayPal, Microsoft, TikTok. Those are all organizations that are part of this. And the FIDO 17:50Alliance came out with a report that said that 93% of accounts from those organizations 17:57are eligible for passkeys, and that, in fact, one third of people have 18:04actually enabled those as well. I remember when I first started talking about passkeys, a lot of 18:08people said, ah, this doesn't work. What about this? What about that? It's got problems. It is working. 18:13It's working. Large companies are using this and it's getting deployed. IBM internally, we made a 18:19switch to where all employees have to use this when we're authenticating into our internal 18:24systems. So we're using this across the entire company as well. And me personally, if I go into my 18:31Password Manager, which is where these can be stored, I've got 17 passkeys already and I expect 18:38it will grow over time. So, I think passkeys are a good alternative. They help us with what is the 18:43number one c-cause of data breaches, which are phishing attacks, which in many cases 18:50are going after your credentials, your passwords. And I can't steal your password if you don't have 18:56one. A passkey is a better alternative. Okay, confession time. I built a time machine and 19:03traveled into the future. Okay, not really, but play along. I found a copy of the latest IBM Cost of 19:10a Data Breach Report, because, I mean, what else would a security nerd grab when they go into the 19:15future? Cost of a Data Breach report. And what did I find in there? Well, I found that one of the top 19:20causes of data breaches was quantum cracking of conventional cryptography. 
Unfortunately, I forgot 19:27to note the year of the report, i-it slipped my mind. So, I won't be able to tell you exactly what year 19:33that's going to happen. But I do know it wasn't all that far into the future because there were 19:38no flying cars. So the good news, though, is that you can do something to avert that disaster now 19:45by implementing what's known as quantum-safe cryptography, or some people refer to it as 19:50post-quantum cryptography. Do it now. So now you've seen what my crystal ball sees 19:57happening in cybersecurity in 2026 and beyond. Now I'd like to hear from you, to see what you think. 20:04Where did I hit and where did I miss? Be gentle. Crystal balls aren't perfect, after all. What do 20:09you see happening in the future? Post all of that in the comments section below, and next year we 20:15can all look back and see how we did.