Who Owns Responsible AI?
Key Points
- Embedding human values in AI is a socio‑technical challenge that requires a holistic approach across people, processes, and tools, not just a purely technical fix.
- Surveys at AI summits reveal that most organizations lack clear accountability for responsible AI outcomes, with responses often being “no one,” “we don’t use AI,” or “everyone,” which effectively means nobody is truly responsible.
- Those tasked with AI accountability now face a broadened remit: aligning values, maintaining model inventories, tracking evolving regulations, and handling ethical considerations that go beyond mere legality.
- Effective AI governance depends on building AI literacy, especially by teaching stakeholders how to operationalize principles such as fairness, explainability, and transparency into concrete functional and non‑functional requirements.
- Applied, hands‑on training for both AI model governors and the teams that build or procure models is the preferred method to ensure that AI systems reflect an organization’s values and are managed responsibly.
Sections
- Accountability Gap in AI Governance - The speaker argues that aligning AI with human values is a socio‑technical challenge requiring people, processes, and tools, yet most organizations lack clear accountability, often answering “no one,” “we don’t use AI,” or “everyone,” highlighting the need for defined responsibility.
- Applied Training for Responsible AI - The speaker outlines a comprehensive applied training program for teams developing or acquiring AI models, covering use‑case selection, business alignment, risk mitigation, interpretable fact sheets, audit interpretation, and the necessity of dedicated responsible‑AI leadership.
**Source:** [https://www.youtube.com/watch?v=yh-3WU1FKrk](https://www.youtube.com/watch?v=yh-3WU1FKrk) **Duration:** 00:05:53
Timestamps: [00:00:00](https://www.youtube.com/watch?v=yh-3WU1FKrk&t=0s) Accountability Gap in AI Governance · [00:03:08](https://www.youtube.com/watch?v=yh-3WU1FKrk&t=188s) Applied Training for Responsible AI
Full Transcript
The work of having human values be reflected in AI is not strictly a technical challenge with a technical solution,
but one that is indeed socio-technical, and any socio-technical challenge has to be approached holistically,
meaning you need to be thinking about people, process, and tools.
People, meaning the right organizational culture required to curate AI responsibly;
process, meaning the right AI governance processes; and tools, meaning the right AI engineering frameworks.
When I take the time to ask large audiences at AI summits,
who in their organization is accountable
for responsible outcomes from artificial intelligence, the top three answers that I get are pretty bad.
The first answer I typically get is "no one," which is overtly terrible.
The second common response that I get is "we don't use AI."
Although you might not be keeping track of it in a formal inventory program,
you absolutely have employees who are using artificial intelligence in some way, shape, or form.
And then the last common response that I get is "everyone,"
and I would opine that if everyone is being held accountable for responsible outcomes from AI,
is anyone actually being held accountable?
The job of those who are being held accountable for responsible outcomes from artificial intelligence is expanding.
It's a big job, right?
Not only do these people have to actually achieve value alignment within their organizations,
they also have to keep track of AI model inventory.
They have to keep track of regulations.
Right?
And there is a growing number of regulations around the world,
but there's also a recognition that you can have AI models be lawful but awful, which means their purview,
their responsibility, actually has to push into ethics.
And as soon as you push into ethics, you have to be a pretty darn good teacher.
You have to be teaching not only those who are building
AI models on your behalf and governing AI models on your behalf,
but also who are going to be procuring AI models on your behalf.
You want them to be able to do this work and again, in a way that reflects your organization's values.
First, I want to talk about what AI literacy looks like
for those who are going to be governing AI models on your behalf.
My favorite way of approaching this kind of training is applied training.
So the way that we work with those who are going to be governing AI models
is first of all to dive into teaching people how to operationalize
principles like fairness, like explainability, like transparency:
thinking through how you detail the functional requirements
for what you expect to see in AI models,
but also the non-functional requirements for the systems
around the use of those AI models.
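To make the idea of operationalizing a principle concrete, here is a minimal sketch (not from the talk) of how a fairness principle could become a testable non-functional requirement. The 0.8 threshold follows the common "four-fifths rule" for disparate impact; the function, data, and group labels are hypothetical.

```python
# Illustrative sketch: turning a fairness principle into a testable
# non-functional requirement. The 0.8 threshold follows the common
# "four-fifths rule"; the data and names here are hypothetical.

def disparate_impact_ratio(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Non-functional requirement: the model's decisions must keep the
# disparate impact ratio at or above 0.8.
outcomes = [1, 1, 0, 1, 1, 1, 1, 0]               # 1 = favorable decision
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute
ratio = disparate_impact_ratio(outcomes, groups, privileged="a")
assert ratio >= 0.8, f"fairness requirement violated: ratio = {ratio:.2f}"
```

A check like this can run in a model's test suite, which is one way a stated principle becomes a requirement a team can actually verify.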
Then the second group that you would offer this applied training to
are those who will be building and buying models on your behalf.
And this applied training includes things like making sure that you're choosing
AI model use cases that are really important to your organization to actually work on,
and then diving into each of those use cases, starting with how to make sure that the investment in that AI model is actually
aligned to your business strategy,
then teaching the teams working on those use cases how to assess the risk
of that particular use case and its unintended effects,
and how to approach mitigating those kinds of risks holistically.
Then we give an introduction to fact sheets in particular: not just how to build a fact sheet,
but how to build one that's actually interpretable and that empowers people.
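As a minimal sketch (not from the talk) of what such a fact sheet might contain, the example below represents one as structured data so it can be validated for completeness and rendered for non-experts. Every field name and value here is a hypothetical example, not a prescribed schema.

```python
# Illustrative sketch: a minimal model "fact sheet" as structured data,
# so it can be checked for completeness and rendered for non-experts.
# All field names and values are hypothetical examples.
import json

fact_sheet = {
    "model_name": "loan-approval-v2",
    "intended_use": "Pre-screening consumer loan applications",
    "out_of_scope": ["Final credit decisions without human review"],
    "training_data": "Internal applications, 2019-2023 (anonymized)",
    "metrics": {"accuracy": 0.91, "disparate_impact_ratio": 0.86},
    "known_limitations": ["Not evaluated on small-business loans"],
    "contact": "model-governance@example.com",
}

# An interpretable fact sheet answers the questions a reviewer would
# actually ask, so require the fields that support those questions.
required = {"intended_use", "out_of_scope", "metrics", "known_limitations"}
missing = required - fact_sheet.keys()
assert not missing, f"fact sheet incomplete: missing {missing}"

print(json.dumps(fact_sheet, indent=2))
```

The point of the structure is the completeness check: a fact sheet that is missing its limitations or intended use fails before it ever reaches a reviewer.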
We give an introduction to audits,
and then we teach those teams how to actually interpret
the results from an audit so that they know what to do when they see those audit results.
Those doing this kind of applied training that I'm describing truly, truly
benefit from actually working with a diverse and multidisciplinary team.
Now more than ever, having a leader or team that ensures the
responsible use of, and responsible outcomes from, artificial intelligence is absolutely crucial.
Without a dedicated leader with a funded mandate to do this work, AI governance can absolutely fall through the cracks,
leaving organizations vulnerable to the risks associated with the technology.
A successful responsible AI leader has a seat at the table and ensures
that there are seats at the table for others, including the CISO, ensuring AI ethics is woven into the
very fabric of the organization, not just tacked on at the end but incorporated across the entire AI life cycle.
They make accountability policies transparent and work across the organization to see them implemented.
Finally, championing AI literacy in a holistic way is absolutely essential.
Ensuring that everyone within the organization understands how to build and buy AI models
that actually reflect the organization's values.
By investing in a responsible AI leader with that funded mandate to do the work,
organizations unlock the full potential of artificial intelligence.
They drive innovation, and they create a culture of responsible and transparent AI use,
ultimately leading to better decision making, improved customer experiences, and sustained business success.