# Video jFHPEQi55Ko

**Source:** [https://www.youtube.com/watch?v=jFHPEQi55Ko](https://www.youtube.com/watch?v=jFHPEQi55Ko)
**Duration:** 00:07:25

## Sections

- [00:00:00](https://www.youtube.com/watch?v=jFHPEQi55Ko&t=0s) **Untitled Section**
- [00:03:26](https://www.youtube.com/watch?v=jFHPEQi55Ko&t=206s) **Untitled Section**
- [00:06:45](https://www.youtube.com/watch?v=jFHPEQi55Ko&t=405s) **Untitled Section**

## Full Transcript
How do you know if you can trust the results of an AI model?
Let's say I've deployed a new AI model called "Fraud Detection".
Here it is.
You know what?
I spent a lot of time on this model.
It's got an input layer, an output layer, some hidden layers -- all connected together.
Now, this model analyzes all of your transactions. And this AI model of mine has flagged one of your transactions
for a purchase of $100 at a coffee shop as potentially fraudulent.
You know, people can be fraudulent sometimes.
Now, how confident can you be that my AI model is probably right and that this transaction should be denied or investigated further?
Well, from the information I've given you, you can't possibly make that call.
You don't know anything about my AI model.
It's what's commonly referred to as a "black box".
And that's just impossible to interpret.
You have no idea what's going on in those calculations.
But here's the kicker -- me, the guy who created this beautiful AI algorithm, well, I have no idea either.
You see, when it comes to applications of AI, not even the engineers or data scientists who create the algorithms
can fully understand or explain what exactly is happening inside them for a specific input and result.
But, thankfully, there is a solution to this problem.
Actually, we have plenty of solutions to plenty of problems!
Consider subscribing to the IBM Technology Channel to hear about those.
But the solution in this case, well, it's called Explainable AI, or XAI.
And it allows us humans to understand how an AI model comes up with its results.
And consequently build trust in those results.
Now, the setup of XAI consists of three main methods: prediction accuracy, traceability, and decision understanding.
The first two methods, prediction accuracy and traceability, address technology requirements.
And the third, decision understanding, addresses human needs.
Now, prediction accuracy is clearly an important component in how successful the use of AI is in everyday operation.
By running simulations and comparing XAI output to the results in the training dataset, we can figure out prediction accuracy.
The most popular technique used for this is called Local Interpretable Model-agnostic Explanations, or LIME,
which explains a classifier's individual predictions by approximating the machine learning model locally with a simpler, interpretable one.
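To make that idea concrete, here is a minimal sketch of LIME's core loop in plain NumPy, not the real `lime` library: perturb the instance, query the black box, weight samples by proximity, and fit a local linear surrogate. The `black_box` scorer, its two features (amount and hour of day), and every number below are invented for illustration.

```python
import numpy as np

def black_box(X):
    # Hypothetical fraud scorer: large amounts at odd hours look risky.
    return (0.003 * X[:, 0] + 0.5 * (X[:, 1] < 6)).clip(0, 1)

def lime_explain(x, predict, n_samples=5000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    scales = np.array([20.0, 3.0])          # assumed feature scales
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=scales, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed points.
    y = predict(Z)
    # 3. Weight samples by proximity to the original instance.
    d = np.linalg.norm((Z - x) / scales, axis=1)
    sw = np.sqrt(np.exp(-(d ** 2) / width ** 2))
    # 4. Fit a weighted linear surrogate; its coefficients are the explanation.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                         # local importance per feature

x = np.array([100.0, 3.0])                   # $100 purchase at 3 a.m.
importance = lime_explain(x, black_box)
# importance[0] > 0: a higher amount pushes toward "fraud" locally;
# importance[1] < 0: a later hour pushes toward "non-fraud".
```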
Now, traceability can be achieved by limiting the ways decisions can be made, setting up a narrower scope for machine learning rules and features.
One traceability technique is called DeepLIFT.
DeepLIFT stands for Deep Learning Important FeaTures,
which compares the activation of each neuron in the neural network
to a reference activation, showing traceability links and dependencies.
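As a rough illustration of that rule, here is DeepLIFT's contribution computation for a single linear neuron; the weights, the transaction's features, and the reference ("typical") input below are all made up for the example.

```python
import numpy as np

w = np.array([0.8, -0.3, 0.5])       # hypothetical learned weights
x = np.array([100.0, 2.0, 1.0])      # features of the flagged transaction
x_ref = np.array([40.0, 12.0, 0.0])  # reference input, e.g. an average transaction

# DeepLIFT's linear rule: each input's contribution is its weight
# times the input's difference from the reference.
contrib = w * (x - x_ref)

# Summation-to-delta property: the contributions add up exactly to
# the change in the neuron's pre-activation versus the reference.
delta_out = (w @ x) - (w @ x_ref)
assert np.isclose(contrib.sum(), delta_out)
```

The per-input `contrib` values are what get traced back layer by layer in a real network, letting you see which features drove a neuron's activation away from its reference.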
And then decision understanding is the, well, the human factor.
There are no fancy measurements here.
This is all about educating and informing teams to overcome distrust in AI and helping them understand how the decisions were made.
Now this can be presented to business users in the form of a dashboard.
And, for example, here, a dashboard could show the primary factors behind why a transaction was flagged as fraudulent
and the extent to which those factors influenced the decision.
Was it the transaction amount?
Was it the location where the transaction took place and so forth?
And further, this dashboard can show the minimum changes that will be required for the AI to produce a different outcome.
So, if the transaction amount of, let's say, $100 was a significant factor, and we showed that in the dashboard --
how much lower would that amount have to be for the AI to have made a different decision?
Let's say, flagged the transaction as non-fraudulent.
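That "minimum change" question can be sketched as a tiny counterfactual search, assuming a hypothetical `fraud_score` function where only the amount varies and 0.5 is the flagging threshold; both the scorer and the threshold are invented for illustration.

```python
import math

def fraud_score(amount):
    # Hypothetical model: score rises with amount via a logistic curve.
    return 1 / (1 + math.exp(-(amount - 80) / 10))

def min_amount_change(amount, threshold=0.5, step=1.0):
    """Smallest reduction in amount that flips the decision to non-fraud."""
    candidate = amount
    while fraud_score(candidate) >= threshold and candidate > 0:
        candidate -= step        # lower the amount until the flag clears
    return amount - candidate

change = min_amount_change(100.0)
# The $100 purchase would need to be `change` dollars cheaper
# for this toy model to stop flagging it.
```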
But, you see, explainable AI is more than just building trust in the AI model.
It's also about troubleshooting and improving model performance.
It allows us to investigate model behaviors through tracking model insights on deployment status, fairness, quality and drift --
because an AI model's performance can indeed drift.
And by that we mean, it can degrade over time, because production data differs from training data.
By using explainable AI, you can analyze your model and generate alerts when models deviate from the intended outcomes and perform inadequately.
Such as, well, a bunch of false positive fraud alerts.
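One simple way to quantify that kind of drift and raise an alert is the Population Stability Index (PSI), which compares how a feature was distributed in training against how it looks in production; the synthetic transaction amounts and the 0.2 alert threshold below are illustrative assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index; values above ~0.2 commonly suggest drift."""
    # Bin edges from the combined samples so every value falls in a bin.
    edges = np.quantile(np.concatenate([expected, actual]),
                        np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    # Avoid log(0) for empty bins.
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(50, 10, 10_000)   # amounts seen during training
prod = rng.normal(70, 10, 10_000)    # production amounts have shifted upward
alert = psi(train, prod) > 0.2       # raise an alert when drift is detected
```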
From there, when deviations persist, analysts can dig in and understand what happened.
And we've talked already about financial services as a use case.
But there are plenty of use cases across many industries that we can apply this to.
So, for example, let's consider healthcare.
And with healthcare, XAI can accelerate diagnostics and image processing and streamline the pharmaceutical approval process.
Or, how about in the field of criminal justice?
With criminal justice, XAI can accelerate resolutions on DNA analysis, or prison population analysis, or crime forecasting.
Explainability can help developers ensure that the system is working as expected,
help it meet regulatory standards, and even allow a person affected by a decision to challenge that outcome.
So, when I deny or approve that $100 transaction of yours, you can understand how I came to that decision.
And perhaps I can also suggest where to find a more moderately priced coffee shop.
And that's a wrap.
As you may have heard, we're on the lookout for new topics that are of interest to you.
So, if you have topics in mind we could address in future videos, hit us up in the comments.
Thanks for watching.
If you have any questions, please drop us a line below.
And if you want to see more videos like this in the future, please Like and Subscribe.