Learning Library


Navigating GRC in AI Development

Key Points

  • Governance, risk, and compliance (GRC) become especially challenging in AI projects because responsibility is fragmented across numerous teams such as governance, privacy, security, data engineering, data science, deployment, and AI management.
  • Each stakeholder group brings a distinct focus—governance teams handle model validation and auditing, privacy and compliance officers guard data protection, CDOs and data engineers ensure data quality and lineage, data scientists build models, deployment engineers scale them, and AI management teams uphold trustworthy AI principles.
  • This diffusion of accountability creates a “political mess,” making organizations hesitant to address GRC due to unclear ownership and complex coordination requirements.
  • A practical remedy is to establish two‑way, automated workflows that link governance and data teams, enabling continuous data sharing, auditing, and compliance checks throughout the model lifecycle.
  • By embedding automated validation before production and ongoing monitoring after deployment, organizations can maintain compliance, manage risk, and keep models accurate and trustworthy over time.
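The pre-production gate described above can be sketched in code. This is a minimal illustration, not any specific governance product: the class, function, model name, metric names, and thresholds are all illustrative assumptions.

```python
"""Sketch of an automated pre-production compliance gate.

All names and thresholds here are illustrative assumptions,
not part of any real governance tool or the video's content.
"""
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ComplianceRecord:
    """Auditable result of one automated validation run."""
    model_id: str
    checks: dict = field(default_factory=dict)
    approved: bool = False
    checked_at: str = ""


def validate_before_production(model_id, metrics, thresholds):
    """Compare each reported metric against its required floor
    and record pass/fail per check for later auditing."""
    record = ComplianceRecord(model_id=model_id)
    for name, minimum in thresholds.items():
        record.checks[name] = metrics.get(name, 0.0) >= minimum
    record.approved = all(record.checks.values())
    record.checked_at = datetime.now(timezone.utc).isoformat()
    return record


# Example: a model must clear accuracy and fairness floors before deploy.
record = validate_before_production(
    "credit-risk-v3",
    metrics={"accuracy": 0.91, "fairness_parity": 0.97},
    thresholds={"accuracy": 0.85, "fairness_parity": 0.95},
)
print(record.approved)  # True: every threshold was met
```

The point of returning a timestamped record rather than a bare boolean is that the same artifact serves both the deployment gate and the audit trail the governance team needs.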

Full Transcript

# Navigating GRC in AI Development

**Source:** [https://www.youtube.com/watch?v=3CfRu22_eus](https://www.youtube.com/watch?v=3CfRu22_eus)
**Duration:** 00:04:22

## Sections

- [00:00:00](https://www.youtube.com/watch?v=3CfRu22_eus&t=0s) **Navigating GRC Role Overlap** - The speaker outlines how governance, risk, and compliance responsibilities are spread across multiple teams—legal, data, security, and AI—making model validation and data management increasingly complex.
- [00:03:14](https://www.youtube.com/watch?v=3CfRu22_eus&t=194s) **Building Trusted AI Governance** - The speaker outlines a workflow that ensures AI models use qualified data, undergo continual compliance checks, and are governed across the organization to maintain risk-aware, trustworthy deployments.

## Full Transcript
0:00 Let's talk about GRC: Governance, Risk and Compliance.
0:04 So this is something that a lot of organizations struggle with.
0:06 And while there are many reasons for that, one of the biggest is that there's a diffusion of responsibility across each one of those domains.
0:14 Now, when we think about that when we're building technical models and we're trying to validate, govern, check for risk--
0:22 --this gets infinitely more complicated as the diffusion of responsibility expands to technical teams, legal teams, lines of business.
0:31 Let me show you what I mean by that. We have our governance team.
0:35 If they're building a model, they're concerned about governance structure, the model validation, where are we getting things?
0:42 How are we auditing?
0:43 That's going to be your risk manager.
0:45 Your model risk manager.
0:47 We also have our chief privacy officer, our chief compliance officer and our CISO, who are all worried about data privacy.
0:56 So their concerns are going to be the privacy, security compliance piece of this.
1:00 Now, on the other hand, we also have to think about how we're organizing and managing our data.
1:06 So that's where a chief data officer or a data engineer comes in,
1:11 and they're worried about creating governed, quality-checked assets, and they're thinking about data lineage.
1:19 Then we also have our Build AI team.
1:23 So these are the individuals that are data scientists.
1:27 They're mostly concerned with how are we looking at the data and what models are we creating from it.
1:35 On the other end of the spectrum, we've got the Deploy AI team.
1:39 So they're the ones that are taking this from the data scientists and they're scaling it into production.
1:45 They're also running the models again and making sure they're up to compliance.
1:50 And then finally, we have our data management team--
1:54 AI management team --who is very concerned about keeping up with the tenets of a trustworthy AI model.
2:02 And when you look at this holistically, it's an absolute mess.
2:06 So no wonder why nobody really wants to touch governance, risk and compliance. Right?
2:10 It's going to be a political mess.
2:12 How do you assign accountability?
2:14 How do you make sure that overall we are governed and we are always in control and monitoring our risk?
2:22 Let me show you how.
2:23 So if we start with our governance, as we always do, and we're thinking about our model validation, where we're getting our data sources from, right?
2:33 We create a nice two-way connection between these two groups where we're sharing data, we're accounting for it.
2:39 So there's already this automated auditing workflow that's built in.
2:43 So you're always within your privacy and security and compliance.
2:47 Now, if we are looking down below at how we are building our models, we want to make sure that we are testing and validating here.
2:58 Before we go into production, we want to make sure that we are within compliance, so that lies within this risk category.
3:05 But then once we are in production, we also want to make sure that we are validating
3:10 our models to make sure that they are still accurate, that they're unbiased.
3:14 All of these things that we've touched on previously.
3:17 So the robustness as well.
3:20 Now, finally, we want to make sure that we are communicating correctly.
3:29 So as we are tracking compliance and risk, we are making sure that we are still pulling from these fully qualified data assets.
3:37 We also want to make sure that we are updating our governance structure to say "Yes, this model has been checked.
3:43 It's been checked recently, it's been rechecked. We are still within compliance."
3:47 So we're creating this workflow where we're sharing across different parts of the organization to first build the model
3:56 and then deploy it with safe, trusted, governed assets.
4:04 And this is how you create a governance, risk and compliance structure for your trusted AI models.
4:11 If you have any questions, please leave them in the comments below.
4:14 Also, please remember to Like this video and Subscribe to our channels so we can continue to bring you content that matters.
4:21 Thanks for watching.