AI-Driven Cyber Threats Forecast
Key Points
- The IBM Technology channel annually reviews the past year’s cybersecurity landscape and makes forward‑looking predictions, a tradition continued through 2025 with a forthcoming confession about a “cheat” at the video’s end.
- AI’s dual‑edged impact proved true: while it offers benefits, unchecked “shadow AI”—unauthorized models deployed in the cloud—added roughly $670K extra to breach costs, and 60% of firms still lack AI governance policies to curb it.
- Deepfake creation exploded from about 500,000 instances in 2023 to 8 million in 2025—a roughly 1,500% surge—highlighting growing risks of AI‑generated fraudulent media for cyber‑attacks.
- Attackers are leveraging generative AI to craft more sophisticated exploits and polymorphic malware, automating vulnerability exploitation and increasing the difficulty of detection.
- The speaker underscores that these AI‑driven threats are expected to intensify, urging stronger AI security frameworks and hinting at a personal “cheat” reveal later in the presentation.
Sections
- [00:00:00] Year-End Cybersecurity AI Review - A recap of IBM’s annual cybersecurity roundup, evaluating past AI‑related predictions—especially the rise and cost impact of shadow AI—while teasing a forthcoming confession.
- [00:06:49] AI Agents: Speed, Risks, Predictions - The speaker reflects on the unexpected rapid deployment of autonomous AI agents, foreseeing escalating attacks on and by these agents as they amplify both productivity and security risks.
- [00:10:35] AI Agents Amplify Cyber Attacks - The speaker warns that AI‑driven agents can automate and hyper‑personalize phishing, create constantly evolving polymorphic malware, and streamline ransomware campaigns, making malicious operations far more efficient and harder to detect.
- [00:14:39] AI's Transformative Role in Education - An adjunct professor argues that education must move from banning generative AI to embracing it—training students to use AI tools for future workplace tasks—while also noting AI’s growing influence on creative fields like music.
- [00:18:13] Passkeys, Phishing Prevention, Quantum‑Safe Future - The speaker outlines IBM’s enterprise‑wide shift to passkey authentication as a phishing‑proof alternative to passwords, shares personal usage statistics, and humorously warns that quantum‑cracking could soon threaten conventional cryptography, urging immediate adoption of quantum‑safe security measures.
Source: https://www.youtube.com/watch?v=2jU-mLMV8Vw
Duration: 00:20:17
Full Transcript
It's become something of a tradition here on the IBM Technology channel to do a video at the
end of each year, where we look back at the past year in cybersecurity and then make a few
predictions for the future. We did this in 2023, again in 2024 and in
2025. And I'm back again to dust off my crystal ball and tell you what I see. Oh, and by the way,
this year I might have cheated just a little bit. So stick around to the end to hear my confession.
Okay, let's take a look and see how the predictions from 2025 and beyond stacked up. Well,
I said a lot of the stuff was going to be around AI. We would get some positives and some negatives,
and I think that's turned out to be the case. If you look at AI in particular, it has done some
good things for us. But I'm in cybersecurity, so I'm focused on some of the negative aspects of
how AI is getting used against us, first. So, shadow AI, that's an example of an AI
implementation where no one approved this thing. It just exists. Somebody downloaded something into
a cloud, put in a model, and they were off to the races. Shadow AI instances, we have seen, can be
costly. I predicted they would be, and they are. Every year, IBM runs a Cost of a Data Breach
report. And in this, we figure out how much it costs when an organization has their data
breached or compromised. One of the findings in that report supports what I was
saying here: $670,000 more in costs for organizations that had a data breach and had
shadow AI. So shadow AI contributed to an additional cost whenever a data breach
occurred. So that's a big problem. Compounding this is the fact that the Cost of a Data Breach report
also found that 60% of organizations don't have an AI governance or security policy in place
to guard against shadow AI. So it already costs more, and we don't have the guardrails in place to
prevent it. I think this is going to continue to be an issue for us. Deepfakes, where you use
generative AI to generate pictures, audio, or video of another person (who may or may not be
real) doing things that they never did and saying things they never said. I am
concerned about this and how it could be used. It can be used for fun stuff, for entertainment, but
misused in terms of cyberattacks. And in particular, we've seen this occur. There was one
report I found where, in 2023, they were seeing about
500,000 instances of deepfakes that they were able to catalog and observe. In
2025, that number moved to 8 million. So, if you're doing the math at home, that's a
1,500% increase. So, uh, I think the prediction that we would see more deepfakes
is definitely happening, and I think we're going to see them become even more pervasive as we move
forward. Using AI to generate exploits, where we find a vulnerability and then go to the AI and
have it generate the exploit: yes, we've seen that. We've seen AI-generated malware, and that malware is more
sophisticated. So, one of the things it can be is what we call polymorphic. Polymorphic
malware is stuff that can change over time. So, it's harder to detect, which means
it's tougher for the good guys to defend against this. And the bar got lower for the
bad guys who are creating it in the first place. So they get more intelligent malware that was
easy to create, because all they had to do was go to an AI and have it do the work. And the defense
side is actually more difficult for us. These next two I'm going to take together. Uh, I said that AI
would increase the attack surface, and it definitely has. So in this case, I'm talking about
organizations that are using AI to advance their business, to achieve their goals, to become more
productive, those kinds of things. And that technology is excellent at doing that. But it also
becomes another thing that a person can try to attack. And that's what we've seen. And the
organization called OWASP, the Open Worldwide Application Security Project, in 2023 came out
with their top ten list of vulnerabilities for large language models. And on that list,
number one was this guy right here: prompt injection. Guess what? In 2025, it was number one
again. So, uh, the projection that we would continue to see this— and, in fact, I think we'll see it even
more going forward—uh, that has definitely turned out to be true. Now on the positive side, I didn't want
to give only negatives, but I thought we would see AI used to improve cybersecurity, in particular,
to improve our response to incidents, to identify issues and respond to those issues. So, in
fact, we've seen that occur as well. In this case, we're seeing that IBM in
particular came up with a product that basically uses an AI to detect prompt injections and defend
against those. So it's an AI that's defending and helping against AI-based attacks and other things
like that. We're going to see more of that as well. We're going to need systems that are adaptable in
real time to the attacks that are changing in real time. And an AI would be a good way to do
that. So we've already started to see this being infused into cybersecurity tools. And then the
next one that I'll talk about that's not AI related—because not everything is AI— but a really
important thing coming in the future is quantum computing. And quantum computing can solve a lot
of problems for us, but it can also create some headaches for us. And one of those is the fact
that at some point it will be able to break all of our cryptography, and when it does, we're going
to wish that we had implemented these new post-quantum cryptography algorithms, the things
that are quantum-safe. I'll talk more about this later in the video, but uh, I'll just say what
I've seen. I projected that we would be, of course, closer to Q-Day, when quantum systems
will be able to break our cryptography. We don't know when we're going to hit that yet, but it's coming. But with
quantum-safe, what I have observed is that as we moved through 2025, the level of
interest in this topic has increased. And that's good because there needs to be a widespread
awareness of this uh, impending threat that we have. But on the downside, well, I haven't seen so
much yet in terms of deployments. Some people are doing it, but not nearly enough. And the clock is
ticking already on what we're going to have to do in order to address those particular threats.
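The urgency around quantum-safe migration is often framed with Mosca's inequality: if the time it takes to migrate plus the time your data must stay secret exceeds the time until a cryptographically relevant quantum computer arrives, data harvested today is already at risk. A minimal sketch of that arithmetic (the year figures are hypothetical, for illustration only):

```python
def at_risk(migration_years: float, secrecy_years: float, years_to_q_day: float) -> bool:
    """Mosca's inequality: data faces 'harvest now, decrypt later' risk
    if migration time plus required secrecy lifetime exceeds time to Q-Day."""
    return migration_years + secrecy_years > years_to_q_day

# Hypothetical figures: 5 years to roll out quantum-safe crypto,
# records that must stay confidential for 10 years, Q-Day in 12.
print(at_risk(5, 10, 12))  # True: the migration needed to start already
print(at_risk(2, 3, 12))   # False: comfortable margin
```

The point of the exercise: because migration and secrecy lifetimes are measured in years, "wait until Q-Day is announced" is not a viable plan.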
Okay, but I didn't hit on everything. I knew agents were coming, just not this fast. Silly me.
I'm here thinking that giving autonomy to hallucinating AIs was not quite ready for prime
time. Everyone else disagreed and said, "What could possibly go wrong?" Well, okay, so fasten your
seatbelts and off we go. All right, so I'm not missing on agents two years in a row. I knew they
were coming, I just didn't know they were coming so fast. But I'm going to make a lot of
predictions about agents this year because they have really taken off and in two different
categories the way I'm going to look at this. One is attacks on agents and another is
attacks by agents. So, we're going to see both of these. We've already begun to see some of these
things. So the prediction is we're going to see these things continue and increase. So, for
instance, attacks on agents. The first thing is to think about what does an agent do for me. Well, an
agent is an autonomous AI that you give it goals and it goes off and does the things you want it
to do. So it's a productivity amplifier if it's running properly. Guess what else it is? It's a
risk amplifier because if someone is able to hijack that agent,
then it will do something bad that you didn't intend, but do it at light speed. So it will do it
much faster than a person would be able to do. So, it is amplifying risk, and if it has the ability
to access all the kinds of tools that we want it to do in order to be really effective, then it's
going to increase risk there as well. I did a video on this particular topic, about
zero-click attacks, where someone sends in an email with a prompt injection
directly in the email. We call this an indirect prompt injection. Your agent comes along and reads
it in order to summarize it, and then follows the instructions that are in the prompt injection and
exfiltrates data out of your environment. Zero-click, because the user never touched it. The user
might not have even been in the office that day. So, that's another issue because the agent is
processing a lot of these things, so it doesn't even require user intervention. Another thing
we're going to see is an increase in non-human identities. Non-human identities,
meaning all these agents that are out there, they need certain levels of privilege and they need
certain levels of access. And that means I need to have them run under particular accounts, under
identities. But they're not really associated with a particular person. In fact, agents can spawn and
create other agents. So now we have more and more identities that need to be managed. That's
increasing the risk surface. And as a result, attackers will target these identities as
well. Sometimes a user may say, well, I'm just going to have this agent operate under the same
privilege that I have. Well, that sounds like a good idea, but here's the issue. You might not do
a certain set of things, or you might do 1 or 2 of them, but your agent now is running at light speed
and it does 10,000 of them in a minute. So, again, the risk becomes much greater. Also,
agents could have situations where they have privilege escalation, where they get more access
than they should have if we're not really careful, or excessive access to systems. So, these are
the kinds of things we have to be worried about as we're deploying agents. Not that we shouldn't
do agents; we absolutely should. But do them with your eyes wide open and understand what some of
those risks could be. Now, over here on the other side, how about agents being used by the bad guys?
I was just talking about using agents for the good guys to ... for us to do our business. How are
the bad guys going to use this? Well, this is attacks by agents on us. Well, one set of these
that we've already seen are phishing attacks. So phishing attacks get even more amped
up when we start using agents to do the work. The phishing agent can go craft
a very special email that is personalized just for you, hyper-personalized. So you are
more likely to fall for the clickbait that it's trying to get you to follow than you would
just a conventional one. And again, this can all be automated through an agent. Another one is malware,
which I already mentioned that we've been uh, seeing already. How about this? The malware is not only
smarter, it's polymorphic; therefore it's harder to detect because it's changing itself over time.
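Why polymorphism defeats static detection can be sketched in a few lines: a hash-based signature matches only the exact bytes it was taken from, so even a trivial mutation produces a "new" file. The byte strings below are harmless stand-ins, not real malware:

```python
import hashlib

def signature(sample: bytes) -> str:
    # Classic static detection: fingerprint the exact bytes of a sample.
    return hashlib.sha256(sample).hexdigest()

# A known-bad fingerprint database built from one captured variant.
known_bad = {signature(b"stand-in payload; junk=0")}

# A polymorphic engine re-emits the same behavior with different bytes.
mutated_variant = b"stand-in payload; junk=1"

print(signature(mutated_variant) in known_bad)  # False: the variant slips past
```

This is why defenders have shifted toward behavioral and AI-assisted detection rather than byte-level signatures alone.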
It's changing its behaviors, it's changing the signature that we would look for. But
we could, again, just have a bad guy go into an AI and say, I want you to start creating different
kinds of malware samples, throw them all out into the world and we'll see which ones stick. Now, that
would be a lot of work if you're a person doing all of that. It's not a lot of work if you're an
agent. You can just go out and try a whole bunch of different options. The ones that don't work, you
just discard. Ransomware is also going to be affected, and we've already seen some indication
of this, where the entire ransomware attack chain is being automated: it's
writing the ransomware, it's writing the email that tells the victim what's going to
happen. It's writing the exploit that's going to go encrypt all the data or steal the data. And it's
collecting the ransom. So it's even delivering the instructions and handling the collection, the whole
system being automated through an agent. So, again, the attacker skill level keeps going down,
but the effectiveness keeps going up. Another thing that we could see here is automating the
entire kill chain. I've alluded to this in some of the others, but it goes
even further: the AI agent is figuring out which targets it wants
to hit. It's evaluating them, it's doing reconnaissance on them, it's probing them and
finding their vulnerabilities. It's building the exploits, it's running the exploits. It's then
collecting all the information, stealing the data. It's doing all of this. So it is literally just a
click-here-to-hack situation. And the agent handles all the work. This, we're already beginning
to see signs that this is possible. So I expect you're going to see more of it. Another thing I
think we'll see an increase on is social engineering. Social engineering is where you try
to trick someone into doing something that they really shouldn't do, that they should know better than
to do, but they're going to do it anyway. And in a social engineering attack, if I have
deepfake information, or if I have other information that I used my agent to gather about
you, it's going to be even more convincing. So, we need to see what kinds of things we can do
in order to block these types of social engineering attacks. And again, a lot of this is coming
from deepfakes. So let's go ahead and add those as something that's going to, I think, continue to
increase. Deepfakes will keep getting better. The people that are trying to invest a lot of time
into doing deepfake detection, don't bother, because deepfakes keep getting better. The
detectors will not be able to keep up. So, deepfakes are something we're going to have to
accept, and we're going to have to train people to expect them and not be looking to recognize them,
but think about what the deepfake is asking them to do. Now, another area that's not specifically
related to agents, but I think is a pretty easy prediction, is that we're going to see AI use increase,
in some very predictable places and some not so predictable.
One of my side hustles is as an adjunct professor at NC State University.
So when I first saw what the capabilities of generative AI and chatbots could be,
my first reaction, like most people in the education area, was, okay, we need to outlaw this. We
need to put detection in. We need to make sure students are still writing their papers, and
they're not just getting ChatGPT output or whatever, something like that. I think there's
going to be a change. And I did a video, that will be coming out, on AI and the future of
education. Education, the whole industry, is going
to have to embrace AI instead of fighting it. It's not going to work for us to say, just keep it out
of the classroom. Nobody is going to. Your boss is not going to come to you in the future and say, I
want you to accomplish this, this and this. And don't use AI while you're doing it. Okay, if that's
what the workplace is going to require, then that's what we need to be training students to do.
So education is going to have to change the way that we think about teaching, and it's going to be
affected dramatically by AI. Some other areas where we're going to be seeing this are art and
music, things in the arts. I'm a guitar player, so music is really important to me. And
I've seen what AI has been doing in the area of music. We've got entire groups that don't
exist. They were generated out of AI. We're going to see more of that. Some of the music that comes
out is actually pretty good. Not all of it; some of it is just slop. But heck, we've had slop in the music
industry for as long as we've had a music industry, so that's not new. Marketing is another
area where we're seeing a lot of uses of AI already, and even more as it can generate copy for
us. It can even give you ideas for business plans. It can give you ideas for
marketing campaigns. Another area that's going to be affected is coding, and I keep telling this to all of my
students. If you just want to learn to code like I did when I was a computer science major—I was
going to learn to be a programmer—if that's all you want to do, we're going to need far fewer of
those in the future, as AI gets better and better at writing code. Right now, people are still better.
But people don't scale to the same degree as AI. And AI keeps getting better. So we'll still have
coding jobs, but not as many going forward. I think you're going to see AI affecting all of these
areas in a very significant way as we move forward. Just to let you know, I'm not only focused
on AI for the future; there's some other non-AI topics. One of them, if you've watched any of my
videos, you'll know I'm a big fan of these things called passkeys, which are a replacement for
passwords. More secure, easier to keep up with, and phishing-resistant for the most part. Passkeys
come to us from an organization called the FIDO Alliance, Fast Identity Online. There's a
lot of companies that have signed up for this. I mean, just to name-drop a few: Amazon, Google, uh, Target,
PayPal, Microsoft, TikTok. Those are all organizations that are part of this. And the FIDO
Alliance came out with a report that said that 93% of accounts from those organizations
are eligible for passkeys, and that, in fact, one third of people have
actually enabled those as well. I remember when I first started talking about passkeys, a lot of
people said, ah, this doesn't work. What about this? What about that? It's got problems. It is working.
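The phishing resistance the speaker credits to passkeys comes from origin binding: under FIDO2/WebAuthn, the authenticator signs the server's challenge together with the relying-party identifier, so a response minted for the real site is useless to a look-alike domain. Here is a toy stand-in using an HMAC (this is an analogy, not the actual WebAuthn public-key protocol, and the domain names are made up):

```python
import hashlib
import hmac
import os

# Stand-in for the private key held on the user's device.
device_secret = os.urandom(32)

def authenticate(challenge: bytes, rp_id: str) -> bytes:
    # The response covers both the challenge AND the site's identity,
    # so it only verifies for the site it was created for.
    return hmac.new(device_secret, challenge + rp_id.encode(), hashlib.sha256).digest()

challenge = os.urandom(16)
response = authenticate(challenge, "bank.example")

# The real site verifies; a look-alike phishing domain cannot reuse the response.
print(hmac.compare_digest(response, authenticate(challenge, "bank.example")))   # True
print(hmac.compare_digest(response, authenticate(challenge, "phish.example")))  # False
```

Because the site identity is checked by the browser and baked into the response, there is no reusable secret for a phishing page to harvest.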
It's working. Large companies are using this and it's getting deployed. IBM internally, we made a
switch to where all employees have to use this when we're authenticating into our internal
systems. So we're using this across the entire company as well. And me personally, if I go into my
password manager, which is where these can be stored, I've got 17 passkeys already, and I expect
it will grow over time. So, I think passkeys are a good alternative. They help us with what is the
number one cause of data breaches, which is phishing attacks, which in many cases
are going after your credentials, your passwords. And I can't steal your password if you don't have
one. A passkey is a better alternative. Okay, confession time. I built a time machine and
traveled into the future. Okay, not really, but play along. I found a copy of the latest IBM Cost of
a Data Breach Report, because, I mean, what else would a security nerd grab when they go into the
future? Cost of a Data Breach report. And what did I find in there? Well, I found that one of the top
causes of data breaches was quantum cracking of conventional cryptography. Unfortunately, I forgot
to note the year of the report; it slipped my mind. So, I won't be able to tell you exactly what year
that's going to happen. But I do know it wasn't all that far into the future because there were
no flying cars. So the good news, though, is that you can do something to avert that disaster now
by implementing what's known as quantum-safe cryptography, or some people refer to it as
post-quantum cryptography. Do it now. So now you've seen what my crystal ball sees
happening in cybersecurity in 2026 and beyond. Now I'd like to hear from you, to see what you think.
Where did I hit and where did I miss? Be gentle. Crystal balls aren't perfect, after all. What do
you see happening in the future? Post all of that in the comments section below, and next year we
can all look back and see how we did.