Enterprise AI Ethics: Guidelines and Guardrails
Key Points
- Enterprises should start by establishing clear ethical guidelines for AI, such as IBM’s principles that AI must augment humans, respect data ownership, and remain transparent and explainable.
- Design‑thinking techniques like dichotomy mapping help teams list a solution’s features and benefits, then evaluate each for potential harms such as privacy breaches or exclusion of disabled users.
- Once risks are identified, organizations implement “guardrails”—specific rules (e.g., prohibiting data sales to advertisers) that the AI system must obey.
- Using diverse, representative training data is essential to avoid bias, and open‑source tools like IBM’s AI Fairness 360 can detect and mitigate bias, support privacy compliance, and assess model uncertainty.
- The combined approach of defined principles, systematic risk assessment, enforceable guardrails, and tooling enables enterprises to continuously verify that their AI solutions stay within ethical boundaries.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=muLPOvIEtaw](https://www.youtube.com/watch?v=muLPOvIEtaw)
**Duration:** 00:03:45
Sections
- [00:00:00](https://www.youtube.com/watch?v=muLPOvIEtaw&t=0s) **Enterprise AI Ethics Guidelines** - The speaker outlines how companies can establish and verify ethical AI practices using IBM’s three core principles and design-thinking tools such as dichotomy mapping.
AI ethics are on everyone's mind these days, so how do enterprises determine if their AI solution is at risk of crossing some sort of ethical boundary? The first thing we want to do is come up with a set of guidelines or rules that we're going to follow whenever we are creating or interacting with AI systems. I'll give you IBM's three core principles as an example.

Number one: artificial intelligence is meant to augment human intelligence; it's not here to replace us. Number two: data and insights belong to their creator, so if we are using customers' data for anything, it's still their data, not ours. Number three: solutions have to be transparent and explainable. What this means is that we need visibility into who is training the system, what data they're using to train it, and how all of this is going to affect the algorithm's recommendations to the end user.

So now we have our rules: how do we determine if we are actually following them or breaking any sort of guidelines?
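One hypothetical way to make guidelines like these operational is to record them as an explicit checklist that a team fills in for each AI solution. The class and field names below are purely illustrative; they are not part of any IBM tool or mentioned in the video.

```python
# Hypothetical sketch: the three principles as a per-solution review
# checklist. Field and message names are made up for illustration.
from dataclasses import dataclass

@dataclass
class EthicsReview:
    augments_humans: bool            # principle 1: augment, don't replace
    data_stays_with_creator: bool    # principle 2: customer data is theirs
    transparent_and_explainable: bool  # principle 3: visibility into who
                                       # trains, on what data, to what effect

    def violations(self) -> list[str]:
        """Return the principles this solution currently breaks."""
        messages = {
            "augments_humans": "AI must augment human intelligence",
            "data_stays_with_creator": "data and insights belong to their creator",
            "transparent_and_explainable": "solutions must be transparent and explainable",
        }
        return [msg for field, msg in messages.items()
                if not getattr(self, field)]

review = EthicsReview(augments_humans=True,
                      data_stays_with_creator=False,
                      transparent_and_explainable=True)
print(review.violations())  # ['data and insights belong to their creator']
```

A checklist like this only captures a yes/no judgment; the design-thinking activities discussed next are how a team would actually arrive at those answers.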
There are a few different design thinking activities that you could try. One of them is called dichotomy mapping, and basically what this means is that first we're going to list all of the features of our solution, and then we're going to list the benefits and what they're meant to be used for. I'll give you an example: let's say a hotel has a recommendation system for their users that will determine maybe what room they get, or whether we leave any kind of treats for them in their room; if someone's going skydiving, maybe we give them a room on a higher floor. We list all of these benefits out, and obviously these things are great for the customer, right? They're getting a great experience.
But then the next thing we need to do is actually look at these features and determine: can this cause harm in any way? In this case we would want to ask: is this data being sold to advertisers? Is it secure? Something else: if we have differently abled users, are they able to use the interface, and are they being included in the algorithm? Once we've got our rules, we've got our activities, and we've defined some issues that maybe we need to work on, what do we do next? We fix it.
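The dichotomy-mapping exercise can be sketched as a small table in code: each feature paired with its benefits and its potential harms. This is a minimal sketch with made-up entries for the hotel example, not a real tool.

```python
# Dichotomy map sketch: feature -> benefits and potential harms.
# All entries are illustrative, based on the hotel example.
dichotomy_map = {
    "room recommendation": {
        "benefits": ["better room match, e.g. a higher floor for skydivers"],
        "potential_harms": ["guest data sold to advertisers",
                            "data not stored securely"],
    },
    "in-room treats": {
        "benefits": ["personalized guest experience"],
        "potential_harms": [],
    },
    "booking interface": {
        "benefits": ["fast self-service"],
        "potential_harms": ["not usable by differently abled guests",
                            "those guests excluded from the algorithm"],
    },
}

# Any feature with a non-empty harms column is something to fix next.
at_risk = [feature for feature, cols in dichotomy_map.items()
           if cols["potential_harms"]]
print(at_risk)  # ['room recommendation', 'booking interface']
```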
The first thing we're going to do is implement a set of guardrails, and these are basically just rules that your AI system has to follow. In this case it's going to be "we do not sell to advertisers." That is a guardrail. Next, let's talk about the data that
we're using to train this AI system. As I mentioned before, if we're not using a diverse set of data, then we're actually not going to be able to accommodate all of our users. The next thing we could do is look at some open-source tooling: IBM actually has one called AI Fairness 360, and it's going to help you detect and mitigate bias in your machine learning models. There may be other tools that help you with adhering to privacy regulations or even detecting uncertainty in your models. So now we have our rules, we know how to identify problems, and we know how to fix them. AI is all of our responsibility.
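The guardrail idea from earlier, explicit rules the AI system must obey, can be sketched as a simple policy table that is consulted before any data-handling action runs. The rule and action names below are invented for illustration.

```python
# Guardrails sketch: a policy table of actions the system may never take.
# Names are hypothetical; only "sell to advertisers" comes from the video.
GUARDRAILS = {
    "sell_data_to_advertisers": False,  # the guardrail from the transcript
    "share_data_with_partners": False,
}

def allowed(action: str) -> bool:
    """An action is permitted unless a guardrail explicitly forbids it."""
    return GUARDRAILS.get(action, True)

print(allowed("sell_data_to_advertisers"))  # False: blocked by the guardrail
print(allowed("recommend_room"))            # True: no rule forbids it
```

In a real system the check would sit in front of every data pipeline or API call, so a guardrail violation fails loudly instead of silently slipping through.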
We have to make sure that the AI we're creating and using is safe, secure, and built by humans with humans in mind.
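The kind of bias check that a tool like AI Fairness 360 automates can be illustrated by computing disparate impact by hand on a toy dataset. The data and groups below are made up; the 0.8 rule-of-thumb threshold is a commonly cited flag for this metric, not something specific to the video.

```python
# Hand-rolled disparate impact on made-up outcomes: 1 = favorable outcome
# (e.g. a room upgrade), groups "a" (privileged) and "b" (unprivileged).
outcomes = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
            ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

def favorable_rate(group: str) -> float:
    """Fraction of a group's outcomes that are favorable."""
    rows = [y for g, y in outcomes if g == group]
    return sum(rows) / len(rows)

# Disparate impact = unprivileged favorable rate / privileged favorable
# rate. A common rule of thumb flags values below 0.8 as potential bias.
di = favorable_rate("b") / favorable_rate("a")
print(round(di, 3))  # 0.25 / 0.75 -> 0.333, well below the 0.8 flag
```

A library such as AI Fairness 360 reports metrics like this across many protected attributes at once and also provides mitigation algorithms, which is what makes it more practical than hand-rolling checks per model.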
If you like this video and want to see more, please like and subscribe. If you have any questions or just want to share your thoughts about AI ethics, please leave a comment below.