Understanding AI Attacks with MITRE ATLAS
Key Points
- Effective problem‑solving requires first identifying the root cause, whether it’s a leaky pipe or the specific steps of a cyber‑attack.
- To defend against AI‑based threats, analysts must understand the attacker’s goals, methods, and the target’s value before deploying appropriate mitigations.
- MITRE’s new ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) extends the ATT&CK framework to map tactics, techniques, and procedures unique to AI attacks.
- Real‑world AI attacks can be extremely costly—MITRE cites a $77 million incident—so using ATLAS to visualize and counter these threats is increasingly critical.
**Source:** [https://www.youtube.com/watch?v=QhoG74PDFyc](https://www.youtube.com/watch?v=QhoG74PDFyc)
**Duration:** 00:08:40

Sections
- [00:00:00](https://www.youtube.com/watch?v=QhoG74PDFyc&t=0s) **Diagnosing AI Cyber Attack Origins** - The speaker likens fixing a leaky pipe to analyzing AI-based cyber threats, emphasizing that understanding the attack's source and progression is essential before selecting proper tools and mitigations, and introduces the MITRE ATT&CK framework as a helpful resource.

Full Transcript
If you want to fix a problem, you first have to understand what's causing it. Take this leaky pipe: we've got water pooling up here, but where's the cause? Is there a break in the bend of this pipe? Is it further upstream, maybe a loose fitting that's dripping down? Or is the source actually higher up in the system, with the water flowing down? The bottom line is, if I'm going to fix this, I have to know where the problem is and how the water has traveled.

It's the same with cybersecurity, and in particular with AI-based attacks. I need to understand the type of attack I'm dealing with; then I can get out the right tools. I need to understand the target, what the bad guy is after in this attack, and then the steps they took. If I can understand and retrace those, I can do a better job of preventing this in the future, and ultimately work out the mitigations I need to put in place to fix the problem. In this video, we're going to take a look at a tool you can use to better understand AI-based attacks.

There's an organization called MITRE that came out with a very useful tool we use across the industry. I did a video on the first one, ATT&CK, the Adversarial Tactics, Techniques, and Common Knowledge framework. It covers cybersecurity attacks in general and shows you the steps an attacker could go through so that you understand them better. Well, MITRE has built on that and come out with a new version designed specifically for AI, called ATLAS for short: the Adversarial Threat Landscape for Artificial-Intelligence Systems. ATLAS is what we're going to look at today, so that we can better understand this new class of AI-based
attacks. So why do we have to care about AI-based attacks? Well, it turns out MITRE has already documented one case that cost $77 million in damages, an attack on the AI within a particular system. We've already seen that this can be expensive, and I expect that number will only increase as we use AI in more and more kinds of use cases. So, ATLAS: let's take a look at what this thing is.
This is what the framework looks like, and you can get a general sense of what's there. In the columns we have the tactics: the first is reconnaissance, then resource development, initial access, and so forth. The tactics are basically the "why": what is the attacker really trying to accomplish at a particular step? For instance, in reconnaissance they're casing the joint, trying to figure out what the environment looks like. That's the why, and MITRE has documented 14 different tactics. The techniques, then, are the "how": how do they go about doing what they're going to do? There are 82 of those already documented, and these numbers might well grow over time as we learn more and attackers find more ways to do things. Also included, to illustrate a lot of this, are case studies: 22 of them as of the time of this video, with possibly more in the future. In fact, we're going to take a look at one of those in a minute to give you an idea.
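The tactics-and-techniques structure described above can be sketched as a simple mapping, with each tactic (the "why") keying a list of its techniques (the "how"). This is only a minimal illustration, not the real framework: the names below are paraphrased from the video, and the actual matrix has 14 tactics and 82 techniques.

```python
# Illustrative sketch of how ATLAS organizes its matrix: tactics (the "why")
# each group several techniques (the "how"). Names are paraphrased; this is
# a tiny subset, not the full 14-tactic, 82-technique framework.
ATLAS_MATRIX = {
    "Reconnaissance": [
        "Search for Victim's Publicly Available Research Materials",
        "Search Application Repositories",
    ],
    "Resource Development": [
        "Develop Adversarial ML Attack Capabilities",
    ],
    "ML Model Access": [
        "ML-Enabled Product or Service",
    ],
    "ML Attack Staging": [
        "Craft Adversarial Data via Manual Modification",
    ],
}

def techniques_for(tactic: str) -> list[str]:
    """Look up the documented techniques (the 'how') under one tactic (the 'why')."""
    return ATLAS_MATRIX.get(tactic, [])

print(techniques_for("Reconnaissance"))
```

Keeping the why (tactic) separate from the how (technique) is what lets analysts compare very different attacks step by step.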
To further illustrate this, there's also a tool called the Navigator. The Navigator shows which of these tactics and techniques were actually followed. Think of it as a breadcrumb trail: out of all the possible things an attacker could do, here are the ones that were actually selected in this particular attack. There's a heat map as well, which gives another visualization of these different tactics and techniques.
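The breadcrumb-trail idea can be approximated as a small "layer" document that marks which techniques were observed in one attack, with a score that drives the heat-map shading. Treat this as a sketch only: the field names loosely follow the shape of MITRE's Navigator layer files, and the technique IDs are placeholders, not real ATLAS IDs.

```python
import json

# A minimal, illustrative "layer" in the spirit of the MITRE Navigator:
# each entry marks a technique observed in one attack, and the score
# drives heat-map shading. Technique IDs here are placeholders.
layer = {
    "name": "example: malware-scanner bypass",
    "domain": "atlas",  # assumption: label for the ATLAS domain
    "techniques": [
        {"techniqueID": "AML.T9001", "score": 1, "comment": "public info gathered"},
        {"techniqueID": "AML.T9002", "score": 1, "comment": "verbose logging probed"},
    ],
}

# Techniques with a nonzero score form the breadcrumb trail of this attack.
trail = [t["techniqueID"] for t in layer["techniques"] if t["score"] > 0]
print(json.dumps(trail))
```

Loading such a layer into a matrix view highlights only the cells the attacker actually used, which is exactly the breadcrumb trail the video describes.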
Okay, let's take a look at an actual case study from the MITRE ATLAS framework. This case looked at a malware scanner based on machine learning, and it was discovered that there was a universal bypass: something that could be appended to malware that would fake out the system so it wouldn't identify the malware as harmful. How did this work? We're going to map it to the various tactics and techniques; in particular, we'll look at the tactics.

First, the reconnaissance stage: what did the attacker do? The first thing, it seems, was to go after public information. A decent amount of it was available through the organization: perhaps talks at conferences, presentations, maybe even YouTube videos and things like that, as well as patents and other intellectual property that might have been filed in a public format. An attacker can use all of this for initial reconnaissance.
The next step is machine learning model access. What did they do in this case? They took the product itself, the tool that's supposed to be doing the detection, and started trying to see how it works. They turned verbose logging on, which means the system writes out all kinds of information about what it's seeing, and all of that information is something an attacker can use in later steps. By looking at it, they figured out a bit about the system's reputation scoring: it looks at the malware and classifies each sample as good or bad.
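To see why verbose logging is such a gift to an attacker, here is a toy reputation scorer; the feature names and weights are entirely invented for illustration. In verbose mode it logs each feature's contribution, and those log lines reveal exactly what the model cares about and by how much.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("toy_scanner")

# Entirely hypothetical feature weights for a toy reputation scorer:
# negative weights push toward "malware", positive toward "clean".
WEIGHTS = {"packed_sections": -3, "known_api_hashes": -2, "signed_binary": +4}

def reputation(features: dict[str, bool], verbose: bool = False) -> int:
    """Toy reputation score: a negative total means 'looks like malware'."""
    score = 0
    for name, present in features.items():
        if present:
            contribution = WEIGHTS.get(name, 0)
            score += contribution
            if verbose:
                # This is the leak: each log line tells an observer exactly
                # which features the scanner weighs, and by how much.
                log.debug("feature=%s contribution=%+d", name, contribution)
    return score

sample = {"packed_sections": True, "known_api_hashes": True, "signed_binary": False}
print(reputation(sample, verbose=True))  # prints -5 after the debug lines
```

An attacker who can read those debug lines no longer needs to guess the scoring function; they can reconstruct it feature by feature.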
The next stage is resource development: in this case, developing some adversarial machine learning. What they identified through reverse engineering was that there were specific attributes the malware scanner was looking for all the time, and when it saw those, it would flag the sample as malware. What they tried to discover was how that algorithm worked, what the reputation-scoring process was like. In particular, they made a discovery: there was actually a second model included in the product, and that second model was basically an override. If the second model found enough "good" in the code, it would override the first model's suspicions about malware. That became the weak point that got
exploited. Then comes ML attack staging. In this case, they performed a manual modification: they went in and modified the malware being submitted to the system. What they did was append just a little bit of "good" information, mixing in just enough benign content with the malware. They figured out that if they added it at the very end, everything would be okay: the system would not recognize the malware, because the second model would do the override. Then, ultimately, they launched it, and we have our boom: the attack that evades the defense looking for this malware.
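The bypass described above can be reproduced in miniature with a toy two-model scanner; the token lists and the "enough good" threshold are invented for illustration. The base model flags known-bad content, the second model overrides it when it sees enough benign-looking content, so appending one fixed chunk of "good" tokens flips the verdict on any sample.

```python
# Toy two-model scanner illustrating the override weakness in the case study.
# Token lists and the threshold are invented for illustration only.
BAD_TOKENS = {"inject", "keylog", "exfil"}
GOOD_TOKENS = {"copyright", "license", "changelog", "readme"}

def base_model(tokens: list[str]) -> bool:
    """First model: flags the sample if it sees any known-bad token."""
    return any(t in BAD_TOKENS for t in tokens)

def goodness_model(tokens: list[str]) -> bool:
    """Second model: 'enough good' (3+ benign tokens) overrides suspicion."""
    return sum(t in GOOD_TOKENS for t in tokens) >= 3

def scanner(tokens: list[str]) -> str:
    if base_model(tokens) and not goodness_model(tokens):
        return "malware"
    return "clean"  # the unconditional override is the weak point

malware = ["inject", "keylog"]
print(scanner(malware))                                 # -> malware (flagged)
bypass = malware + ["copyright", "license", "readme"]   # universal suffix appended
print(scanner(bypass))                                  # -> clean (evades detection)
```

The design flaw is that the override is unconditional: any sample, however malicious, can buy its way to "clean" by carrying the same appended suffix.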
Okay, so now we've gone through one of the case studies that comes with the MITRE ATLAS framework. Hopefully you have a little better idea of how this framework gives us a better understanding of the problem: we can go back and see the source, see the steps the attacker went through, and understand what sort of tactics and techniques were deployed and employed. We can also treat it as a common description, a common language, a lingua franca if you will, something we can all use in the industry. When we talk about reconnaissance, we know what that means; when we talk about resource development, we know what that means, because we're all reading from the same description. The hope, then, is that with better understanding and a common description, we end up with better defenses, and that's really what we're trying to do with AI and this new attack surface.
If you like this video and want to see more like it, please like and subscribe. If you have any questions or want to share your thoughts about this topic, please leave a comment below.