Learning Library

← Back to Library

Enterprise AI Ethics: Guidelines and Guardrails

Key Points

  • Enterprises should start by establishing clear ethical guidelines for AI, such as IBM’s principles that AI must augment humans, respect data ownership, and remain transparent and explainable.
  • Design‑thinking techniques like dichotomy mapping help teams list a solution’s features and benefits, then evaluate each for potential harms such as privacy breaches or exclusion of disabled users.
  • Once risks are identified, organizations implement “guardrails”—specific rules (e.g., prohibiting data sales to advertisers) that the AI system must obey.
  • Using diverse, representative training data is essential to avoid bias, and open‑source tools like IBM’s AI Fairness 360 can detect and mitigate bias, support privacy compliance, and assess model uncertainty.
  • The combined approach of defined principles, systematic risk assessment, enforceable guardrails, and tooling enables enterprises to continuously verify that their AI solutions stay within ethical boundaries.
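The bias-detection point above can be made concrete. Fairness toolkits such as AI Fairness 360 report metrics like disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. A minimal sketch in plain Python (not the library's own API; the group labels, data, and the 0.8 threshold mentioned in the comment are illustrative):

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes: list of 1 (favorable) / 0 (unfavorable) decisions
    groups:   list of group labels, parallel to outcomes
    privileged: the group label treated as privileged
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) / rate(priv)

# The "four-fifths rule" commonly flags ratios below 0.8 as potential bias.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.25 / 0.75 ≈ 0.33
```

A ratio near 1.0 suggests both groups receive favorable outcomes at similar rates; a value well below 1.0 is the kind of signal such tooling surfaces for review.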

Full Transcript

Source: https://www.youtube.com/watch?v=muLPOvIEtaw
Duration: 00:03:45

Sections

  • 00:00:00 (https://www.youtube.com/watch?v=muLPOvIEtaw&t=0s) Enterprise AI Ethics Guidelines: The speaker outlines how companies can establish and verify ethical AI practices using IBM's three core principles and design-thinking tools such as dichotomy mapping.
AI ethics are something that's on everyone's mind these days, so how do enterprises determine if their AI solution is at risk of crossing some sort of ethical boundary? The first thing we want to do is come up with a set of guidelines or rules that we're going to follow whenever we are creating or interacting with AI systems. I'll give you IBM's three core principles as an example. Number one: artificial intelligence is meant to augment human intelligence; it's not here to replace us. Number two: data and insights belong to their creator, so if we are using customers' data for anything, it's still their data, not ours. Number three: solutions have to be transparent and explainable. What this means is that we need visibility into who is training the system, what data they're using to train the system, and also how all of this is going to affect an algorithm's recommendations to the end user.

So now we have our rules; how do we determine if we are actually following them or breaking any guidelines? There are a few different design-thinking activities that you could try. One of them is called dichotomy mapping. Basically, what this means is that first we list all of the features of our solution, and then we list the benefits and what they're meant to be used for. I'll give you an example. Let's say a hotel has a recommendation system for its guests that determines what room they get, or whether we leave any kind of treats in their room; if someone's going skydiving, maybe we give them a room on a higher floor. We list all of these benefits out, and obviously these things are great for the customer, right? They're getting a great experience. But the next thing we need to do is actually look at these features and determine: can this cause harm in any way? In this case we would want to ask: is this data being sold to advertisers? Is it secure? If we have differently abled users, are they able to use the interface? Are they being included in the algorithm?

Once we've got our rules and our activities, and we've defined some issues that we may need to work on, what do we do next? We fix it. The first thing we're going to do is implement a set of guardrails, which are basically just rules that your AI system has to follow. In this case it's going to be: we do not sell to advertisers. That is a guardrail. Next, let's talk about the data that we're using to train this AI system. As I mentioned before, if we're not using a diverse set of data, then we're actually not going to be able to accommodate all of our users. The next thing we could do is look at some open-source tooling. IBM actually has one called AI Fairness 360, and it's going to help you detect and mitigate bias in your machine learning models. There may be other tools that help you adhere to privacy regulations or even detect uncertainty in your models.

So now we have our rules, we know how to identify problems, and we know how to fix them. AI is all of our responsibility. We have to make sure that the AI we're creating and using is safe, secure, and built by humans with humans in mind. If you liked this video and want to see more, please like and subscribe. If you have any questions or just want to share your thoughts about AI ethics, please leave a comment below.
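The dichotomy-mapping exercise described in the transcript can be captured as a simple table pairing each feature with its intended benefit and the potential harms the team identified. A sketch using the hotel example (the feature names and harms listed are illustrative, not from any standard template):

```python
# Dichotomy map: each feature paired with its intended benefit
# and the potential harms identified during the exercise.
dichotomy_map = {
    "room recommendation": {
        "benefit": "guests get rooms suited to their plans",
        "potential_harms": [
            "preference data sold to advertisers",
            "inaccessible rooms suggested to disabled guests",
        ],
    },
    "in-room treats": {
        "benefit": "personalized welcome experience",
        "potential_harms": ["dietary or health data stored insecurely"],
    },
}

# Any feature with identified harms needs a guardrail before launch.
needs_guardrails = [f for f, m in dichotomy_map.items() if m["potential_harms"]]
print(needs_guardrails)  # ['room recommendation', 'in-room treats']
```

Writing the map down as data makes the follow-up step mechanical: every feature that appears in `needs_guardrails` gets an explicit rule before the solution ships.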
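The "we do not sell to advertisers" guardrail from the transcript can be enforced as an explicit policy check that every proposed data use must pass. A minimal sketch (the `DataUse` shape, rule names, and recipient labels are hypothetical, not from any specific governance framework):

```python
from dataclasses import dataclass

@dataclass
class DataUse:
    purpose: str    # e.g. "room_recommendation", "ad_targeting"
    recipient: str  # e.g. "internal", "advertiser"

# Guardrails: named predicates a proposed data use must satisfy.
GUARDRAILS = {
    "no_sale_to_advertisers": lambda use: use.recipient != "advertiser",
    "purpose_is_declared": lambda use: bool(use.purpose),
}

def check_guardrails(use):
    """Return the names of any guardrails this data use violates."""
    return [name for name, passes in GUARDRAILS.items() if not passes(use)]

print(check_guardrails(DataUse("room_recommendation", "internal")))  # []
print(check_guardrails(DataUse("ad_targeting", "advertiser")))  # ['no_sale_to_advertisers']
```

Keeping the rules in one named table means a violation can be traced back to the specific guideline it breaks, which supports the transparency principle as well.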