Transparency in Open AI Governance
Key Points
- The episode of “Smart Talks with IBM” explores the theme of openness in AI, examining its possibilities, misconceptions, and impact on industry and society.
- Host Jacob Goldstein interviews Rebecca Finlay, CEO of the Partnership on AI, about the nonprofit’s role in fostering accountable AI governance through diverse stakeholder collaboration.
- Finlay emphasizes that transparency is essential for responsibly scaling AI technologies and for building the infrastructure and community needed to support open‑source models.
- Drawing on her experience at the Canadian Institute for Advanced Research, she explains how early, long‑term AI research evolved into today’s deep‑learning era and highlights emerging concerns about data bias and societal impact.
- The conversation underscores the importance of open collaboration—such as with the AI Alliance—to develop resources and standards that guide the future development and deployment of AI.
Sections
- 00:00:00 Open AI Governance with Rebecca Finlay - Malcolm Gladwell introduces an episode where Jacob Goldstein talks to Partnership on AI CEO Rebecca Finlay about the importance of transparency, open‑source collaboration, and accountable governance in the rapidly evolving AI landscape.
- 00:03:20 Building Interdisciplinary AI Impact Program - The speaker recounts launching a program in the early 2010s to study AI’s societal effects—uniting ethicists, lawyers, economists, and sociologists to tackle bias, job impacts, and the necessity of diverse perspectives in an increasingly divided world.
- 00:06:30 Balancing Openness and Safety in AI - The speaker describes collaborative working groups that create open frameworks, best practices, and resources for responsibly deploying large foundation models, while discussing the challenges of the open versus closed AI debate.
- 00:09:38 Balancing Open AI Innovation & Safety - The speaker outlines how an open‑innovation ecosystem for AI can combine transparent, peer‑reviewed research and thorough documentation with safeguards to ensure responsible downstream deployment.
- 00:12:47 Responsible AI Deployment Framework - The speaker outlines a responsibility framework for generative AI—covering consent, disclosure, and watermarking—showcasing collaborative pledges from tech firms, startups, civil society, and media, and describing how case studies and an online resource are being used to drive ethical deployment practices.
- 00:15:56 Transparency as Foundation for Accountability - The speaker argues that open disclosure about AI development, data protection, performance metrics, and auditing is essential for responsible, ethical deployment and serves as the first step toward full accountability.
- 00:19:04 Ethics, Policy, and AI Deployment - The speaker outlines sociotechnical AI ethics, the need for context‑specific safeguards, and how the Partnership on AI’s frameworks for synthetic media and responsible foundation‑model deployment have shaped industry policies and practices.
- 00:22:16 Open Collaboration for Responsible AI - The speaker outlines an AI Alliance that leverages open datasets, open technology, and global expertise to promote transparent, safe innovation, then identifies the biggest hurdle to responsible AI adoption as companies’ limited understanding of their own AI deployments and urges them to audit current AI use across products and services.
- 00:25:29 AI Innovation Requires Worker-Centric Regulation - The speaker stresses that AI systems should be developed with workers at the center and that effective regulation—viewed as essential guardrails rather than obstacles—enables responsible innovation.
- 00:28:43 Transparency, Ethics, and Misconceptions in AI - The speaker stresses the importance of openly disclosing AI use to clients and complying with legal requirements, and clarifies that AI is not inherently good or bad but is shaped by human choices, addressing common misunderstandings and the ethical questions that will define its future impact.
- 00:31:45 Open‑Minded Collaboration for AI Ethics - The speaker stresses openness and cross‑sector dialogue as essential to the Partnership on AI’s effort to create ethical guardrails and foster responsible innovation.
Source: https://www.youtube.com/watch?v=VNWXOYf73tI
Duration: 00:34:19
Full Transcript
Hello, hello.
Welcome to Smart Talks with IBM, a podcast from Pushkin
Industries, iHeartRadio, and IBM.
I’m Malcolm Gladwell.
This season, we’re diving back into the world of artificial intelligence,
but with a focus on the powerful concept of “open”—its possibilities,
implications, and misconceptions.
We’ll look at openness from a variety of angles and explore how the concept
is already reshaping industries, ways of doing business, and our
very notion of what’s possible.
In today’s episode, Jacob Goldstein sits down with Rebecca Finlay, the CEO
of the Partnership on AI—a nonprofit group grappling with important
questions around the future of AI.
Their conversation focuses on Rebecca’s work bringing together a
community of diverse stakeholders to help shape the conversation
around accountable AI governance.
Rebecca explains why transparency is so crucial for scaling
the technology responsibly.
And she highlights how working with groups like the AI Alliance can provide
valuable insights in order to build the resources, infrastructure, and community
around releasing open source models.
So, without further ado, let’s get to that conversation.
Can you say your name and your job?
My name is Rebecca Finlay.
I am the CEO of the Partnership on AI to Benefit People and Society,
often referred to as “PAI.”
How did you get here?
What was your job before you had the job that you have now?
I came to PAI about three years ago, having had the opportunity to work for
the Canadian Institute for Advanced Research, developing and deploying
all of their programs related to the intersection of technology and society.
And one of the areas that the Canadian Institute had been funding since 1982 was
research into artificial intelligence.
Wow.
Early.
They were early.
It was a very early commitment and an ongoing commitment at the institute
to fund long-term, fundamental questions of scientific importance in
interdisciplinary research programs that, um, were often committed
to, and funded, for well over a decade.
The AI, Robotics, and Society program that kicked off the work at the
institute eventually became a program very much focused on deep learning
and reinforcement learning neural networks—all of the current iteration
of AI, or certainly the pre–generative AI iteration of AI that led to this
transformation that we’ve seen in terms of online search and all sorts of ways
in which predictive AI has been deployed.
So I had the opportunity to see the very early days of
that research coming together.
And in the early, sort of, 2000s to
2010s, when compute capability came together with data capability through some
of the internet companies and otherwise, and we really saw this technology start
to take off, I had the opportunity to start up a program specifically
focused on the impacts of AI in society.
There was, as you know, at that time, some concerns both about the potential
for the, the technology, but also in terms of what we were seeing around
datasets and bias and discrimination and potential impact on future jobs.
And so bringing a whole group of experts, whether they were ethicists
or lawyers or economists, sociologists, into the discussion about AI was core
to that new program and continues to be core to my commitment to
bringing diverse perspectives together to solve the challenges and
opportunities that AI offers today.
So specifically, what is your job now?
What is the work you do?
What is the work that PAI does?
I like to answer that question by asking two questions: First and
foremost, do you believe that the world is more divided today than
it ever has been in recent history?
And do you believe that if we don’t create spaces for very different
perspectives to come together, we won’t be able to solve the challenges
that are in front of the world today?
My answer to both of those questions is yes, we’re more divided.
And two, we need to seek out those spaces where those very different
perspectives can come together to solve those great challenges.
And that’s what I get to do as CEO of the Partnership on AI.
We were founded in 2016 with a fundamental commitment to bringing together experts,
whether in industry, academia, civil society, or philanthropy,
to identify the most important questions when we
think about developing AI centered on people and communities, and then how
we begin to develop the solutions to make sure people benefit appropriately.
So that’s a very big-picture set of ideas.
Um, I’m curious, on a, sort of, more day-to-day level—I mean, you
talk about collaborating with all these different kinds of people,
all these different groups.
What does that actually look like?
Like, what are some specific examples of how you do this work?
So right now we have about 120 partners in 16 countries.
They come together through working groups that we look at through a
variety of different perspectives.
It could be AI, labor, and the economy.
It could be, How do you build a healthy information ecosystem?
It could be, How do you bring more-diverse perspectives into the inclusive
and equitable development of AI?
It could be, What are the emerging opportunities with these very, very
large foundation model applications, and how do you deploy those safely?
And these groups come together, most importantly, to say, what are the
questions we need to answer collectively?
So they come together in working groups.
I have an amazing staff team who “hold the pen” on synthesizing research and
data and evidence, developing frameworks, best practices, resources, all sorts
of things that we can offer up to the community, be they in industry or in
policy, to say, This is what “good” looks like, and this is
how we can do it on a day-to-day basis.
So that’s what we do.
And then we publish our materials.
It’s all open.
We make sure that we get them into the hands of those
communities that can use them.
And then we drive and work with those communities to put them into practice.
You used the word “open” there in describing your publications.
Uh, I know in the world of AI, on the, sort of, technical side, there’s
a debate, say, or discussion about, kind of open versus closed AI.
And I’m curious how you, kind of, encounter that particular discussion.
What is your view on open versus closed AI?
So the current discussion between open and closed release of AI models
came once we saw ChatGPT and other very large generative-AI systems
being deployed out into the hands of consumers around the world.
And there emerged some fear about the potential of these models to act
in all sorts of catastrophic ways.
So there were concerns that the models could be used for, you know,
the development of viruses or biological weapons or even nuclear weapons,
or for manipulation or otherwise.
So, over about the last 18 months, this real concern emerged that these
models, if deployed openly, could lead to some level of truly catastrophic risk.
And what we actually discovered—through a whole bunch of work that’s
been done over the last little while—is that releasing them openly
has not led, and doesn’t appear to be leading in any way, to catastrophic risk.
In fact, releasing them openly allows for much greater scrutiny and
understanding of the safety measures that have been put into place.
And so what happened was, sort of, the pendulum swung very much towards
concern about really catastrophic risk and safety over the last year.
And over the last year, we’ve seen it swing back as we learn more and more about
how these models are being used and how they’re being deployed into the world.
My feeling is we must approach this work openly.
And it’s not just open release of models, or what we think of as
traditional open source forms of model development, or otherwise.
But we really need to think about how do we build an open innovation
ecosystem that fundamentally allows both for the innovation to be shared
with many people but also for safety and security to be rigorously upheld?
So when you talk about this, kind of, broader idea of open innovation,
beyond open source or, you know, transparency in models, like, what—what
do you mean, sort of, specifically?
How does that look in the world?
So I have three particular points of view when it comes to open
innovation, because I think we need to think both, both upstream, around
the research that is driving these models, and downstream, in terms of
the benefits of these models to others.
So first and foremost, what we have known in terms of how AI has been developed—and
yes, I had an opportunity to see it when I was at the Canadian Institute for Advanced
Research—is a very open form of scientific publication and rigorous peer review.
And what happens when we release openly is: you have an opportunity
for the research to be interrogated to determine the quality and
significance of that, but then also for it to be picked up by many others.
And then secondly, openness for me is about transparency.
We released a set of very strong recommendations last year around the
way in which these very large foundation models could be deployed safely.
They’re all about disclosure.
They’re all about disclosure and documentation, right?
From the early days, pre–R&D development of these systems, right?
In terms of thinking about what’s in the training data and how’s it
being used, all the way through to postdeployment monitoring and disclosure.
So I really think that this is important: transparency throughout.
And then the third piece is openness in terms of who is around the table
to benefit from this technology.
We know that if we’re really going to see these new models being successfully
deployed into education or healthcare or climate and sustainability, we need to
have those experts and those communities at the table charting this and making sure
that the technology is working for them.
So those are the three ways I think about openness.
Is there, like, a particular project that you’ve worked on that
you feel, like, you know, reflects your approach to responsible AI?
So there’s a really interesting project that we have underway at
PAI that is looking at responsible practices squarely when it comes
to the use of synthetic media.
And what we heard from our community was that they were looking for a clear
code of conduct about what does it mean to be responsible in this space?
And so what happened is: we pulled
together a number of working groups.
They included industry representatives.
They also included civil society organizations like WITNESS, a number of
academic institutions, and otherwise.
And what we heard was that there were clear requirements that creators could
take, that developers of the technology could take—and then also distributors.
So, thinking about those generative-AI systems being deployed
across platforms and otherwise, we came up with a framework
for what responsibility looks like.
What does it mean to have consent?
What does it mean to disclose responsibly?
What does it mean to embed technology into it?
So, for example, we’ve heard many people talk about the importance
of watermarking systems, right?
And making sure that we have a way to watermark them.
But what we know from the technology is: that is a very, very
complex and complicated problem.
And what might work on a technical level certainly hits a whole new set
of complications when we start labeling and disclosing out to the public about
what that technology actually means.
All of these, I believe, are solvable problems, but they all needed to have
a clear code underneath them that was saying, This is what we will commit to.
And we now have a number of organizations—many, many of the
large technology companies, but also many of the small startups who are
operating in this space, civil society and media organizations like the
BBC and the CBC—who have signed on.
And one of the really exciting pieces of that is that we’re now
seeing how it’s changing practice.
So a year in, we asked each of our partners to come up with a clear case
study about how that work has changed the way they are making decisions,
deploying technology, and ensuring that they’re being responsible in their use.
And that is creating, now, a whole resource online that we’re able to
share with others about what it means to be responsible in this space.
There’s so much more work to be done.
And the exciting thing is, once you have a foundation like this in place,
we can continue to build on it.
So much interest now in the policy space, for example, about this work as well.
Are there any specific examples of those, sort of, case studies or the,
you know, real-world experiences that, say, media organizations had that are
interesting, that are illuminating?
Yes.
So, for example, what we saw with the BBC is that they’re developing
a lot of content as a, as a public broadcaster, both in terms of their news
coverage, but also in terms of some of the resources that they are developing,
uh, for the British public as well.
And what they talked about was the way in which they had used synthetic
media in a very, very sensitive environment, where they were hearing
individuals talk about personal experiences but wanted some way to
entirely change the faces of the individuals who were speaking.
So that’s a very complicated ethical question, right?
How do you do that responsibly?
And what is the way in which you use that technology, and most
importantly, how do you disclose it?
So their case study looked at that in some real detail, about the
process they went through to make the decision responsibly and
how they intended to use the technology in that space.
As you describe your work and some of these studies, the idea of
transparency seems to be a theme.
Talk about the importance of transparency in this kind of work.
Yeah, transparency is fundamental to responsibility.
I always like to say it’s not accountability in a complete
sense, but it is a first step to driving accountability more fully.
So when we think about how these systems are developed, they’re often
developed behind closed doors inside companies who are making decisions about
what and how these products will work from a business perspective.
And what disclosure and transparency can provide is some sense of the decisions
that were made leading up to the way in which those models were deployed.
So this could be ensuring that individuals’ private information was
protected through the process and won’t be inadvertently disclosed, or otherwise.
It could be providing some sense of how well the system performs against
a whole level of quality measures.
So we have all of these different types of evaluations and measures
that are emerging about the quality of these systems as they’re deployed.
Being transparent about how a system performs against those measures is
really crucial as well.
We have a whole ecosystem that’s starting to emerge around
auditing of these systems.
So what does that look like?
We think about auditors in all sorts of other sectors of the economy.
What does it look like to be auditing these systems to ensure that they’re
meeting both the legal and the additional ethical requirements that
we want to make sure are in place?
What are some of the hardest ethical dilemmas you’ve come
up against in AI policy?
Well, the interesting thing about AI policy—right?—is: what works very
simply in one setting can be highly complicated in another setting.
And so, for example, I have an app that I adore.
It’s an app on my phone that allows me to take a photo of a bird and it will
help me to better understand, you know, what that bird is, and give me all
sorts of information about that bird.
Now it’s probably right most of the time, and it’s certainly right enough
of the time to give me great pleasure and delight when I’m out walking.
You could think about that exact same technology applied—so, for example,
now you’re a security guard and you’re working in a shopping plaza and you’re
able to take photos of individuals who you may think are acting suspiciously in
some way, and match that photo up with some sort of a database of individuals
that may have been found, you know, to have some sort of connection to other
criminal behavior in the past, right?
So what goes from being a delightful “Oh, isn’t this an interesting bird?”
to a very, very creepy “What is this?” What does this say about surveillance
and privacy and access to public spaces?
And that is the nature of AI.
So much of the concern about the ethical use and deployment of AI
is how an organization makes its choices within the social
and systemic structure in which it sits.
So, so much about the ethics of AI is understanding: What is the use case?
How is it being used?
How is it being constrained?
How does it start to infringe upon what we think of as the human
rights of an individual to privacy?
And so you have to constantly be thinking about ethics.
What could work very well in one situation absolutely doesn’t work in another.
We often talk about these as sociotechnical questions, right?
Just because the technology works doesn’t actually mean that
it should be used and deployed.
What’s an example of where the Partnership on AI influenced changes
either in policy or in industry practice?
We talked a little bit about the framework for synthetic media and
how that has allowed companies and media organizations and civil society
organizations to really think deeply about the way in which they’re using this.
Another area that we focused on has been around responsible deployment
of foundation and large-scale models.
So as I said, we issued a set of recommendations last year that
really laid out, for these very large developers and deployers of
foundation and frontier models, what were—What does “good” look like?—right?
From, uh, R&D through to deployment monitoring.
And it has been very encouraging to see that that work has been
picked up by companies and really articulated as part of the fabric of
the deployment of their foundation models and systems moving forward.
You know, so much of this work is around creating clear definitions of
what we’re meaning as the technology evolves, and clear sets of responsibility.
So it’s great to see that work getting picked up.
The NTIA in the United States just released a report on
the release of open models.
Great to see our work cited there as contributing to that analysis.
Great to see some of our definitions in synthetic media getting picked up
by legislators in different countries.
Really, just—it’s important, I think, for us to build capacity, knowledge and
understanding in our policymakers in this moment, as the technology is evolving
and accelerating in its development.
What’s the AI Alliance and why did Partnership on AI decide to join?
So you had asked about the debate between open versus closed models, um, and how
that has evolved over the last year.
And the AI Alliance was a community of organizations that came together
to really think about, “Okay, if we support open release of models,
what does that look like, and what does the community need?” And so
that’s about a hundred organizations.
IBM, one of our founding partners, is also one of the founding
partners of the AI Alliance.
It’s a community that brings together a number of academic institutions, many
countries around the world, and they’re really focused on, how do you build
the resources and infrastructure and community around what open source in
these large-scale models really means?
So that could be open datasets.
It could be open technology development, really building on that understanding
that we need an infrastructure in place and a community engaged
and thinking about safety and innovation through the open lens.
This approach brings together organizations and experts from
around the globe with different backgrounds, experiences, and
perspectives to transparently and openly address the challenges
and opportunities that AI poses.
The collaborative nature of the AI Alliance encourages
discussion, debate, and innovation.
Through these efforts, IBM is helping to build a community around
transparent, open technology.
So I want to talk about the future for a minute.
I’m curious what you see as the biggest obstacles to widespread
adoption of responsible AI practices.
One of the biggest obstacles today is an inability—and really, a lack
of understanding about how—to use these models and how they can most
effectively drive forward a company’s commitment to whatever products
and services it might be deploying.
So I always recommend a few things for companies to really
think about to get started.
One is: think about how you are already using AI across all of your
business products and services.
Because already AI is integrated into our workforces and into our
work streams and into the way in which companies are communicating
with their clients every day.
So understand how you are already using it, and understand how you are integrating
oversight and monitoring into those.
One of the best and clearest ways in which a company can really understand how to use
this responsibly is through documentation.
It’s one of the areas where there’s a clear consensus in the community.
So how do you document the models that you are using, making sure
that you’ve got a registry in place?
How do you document the data that you are using and where that data comes from?
This is, sort of, the first line of defense in terms of understanding
both what is in place and what you need to do in order to monitor it moving forward.
And then secondly, once you’ve got an understanding of how you’re already
using the system, look at ways in which you could begin to pilot or
iterate, in a low-risk way, using these systems to really begin to see
how—and what structures you need to have in place—to use it moving forward.
And then thirdly, make sure that you have a team in place
internally that’s able to do some of this cross-departmental monitoring,
knowledge sharing, and learning.
Boards are very, very interested in this technology.
So thinking about how you could have a system or a team in place
internally that’s reporting to your board, giving them a sense of
both, um, the opportunities that it identifies for you and the additional
risk mitigation and management you might be putting into place.
And then, you know, once you have those things in place, you’re
really going to need to understand how you work with the most valuable
asset you have, which is your people.
How do you make sure that AI systems are working for the workers as those systems are put in place?
The most important and impressive implementations we see are those where
you have the workers who are going to be engaged in this process central to
figuring out how to develop and deploy it in order to really enhance their work.
It’s a core part of a set of shared prosperity guidelines
that we issued last year.
And then from the side of policymakers—how should they think about the balance between innovation and regulation?
Yeah, it's so interesting—isn't it?—that we always think of innovation and regulation as being two sides of a coin, when in fact so much innovation comes from having a clear set of guardrails and regulation in place.
Think about all of the innovation that's happened in the automotive industry, right?
We can drive faster because we have brakes.
We can drive faster because we have seat belts in place.
So it's often interesting to me that we think about the two as being on either side of the coin.
But in actual fact, you can’t be innovative without
being responsible as well.
And so I think, from a policymaker perspective, what we have been really
encouraging them to do is to understand that you’ve got foundational regulation
in place that works for you nationally.
This could be ensuring that you have strong privacy protections in place.
It could be ensuring that you are understanding potential online harms,
particularly to vulnerable communities—and then look at what you need to be doing internationally to be both competitive and sustainable.
There are all sorts of mechanisms in place right now at the international level to think about "How do we build an interoperable space for these technologies moving forward?"
We've been talking in various ways about what it means to responsibly develop AI. If you were to boil that down, what are the essential concerns—the key things people should be thinking about in responsible AI?
So if we're talking specifically through the company lens about responsible use of AI, the most important difference between these AI technologies and the technologies we have used previously is the integration of data, and the models trained on top of that data.
So when we think about responsibility, first and foremost
you need to think about your data.
Where did it come from?
What consent and disclosure requirements do you have on it?
Are you protecting privacy?
You can’t be thinking about AI within your company without thinking about data.
And that's both your training data and, once your systems are running and interacting with your consumers, the data coming out of those systems—how are you protecting that as well?
And then secondly: when you're thinking about how to deploy that AI system, the most important question is, are we being transparent with our clients and our partners about how it's being used?
So, you know, the idea that if I’m a customer, I should know when I’m
interacting with an AI system; I should know when I’m interacting with a human.
So I think those two pieces are the fundamentals.
And then, of course, you want to be thinking carefully about making sure that, whatever jurisdiction you're operating in, you're meeting all of the legal requirements with regard to the services and products that you're offering.
Let’s finish with a speed round.
Complete the sentence: In five years, AI will…
…drive equity, justice, and shared prosperity, if we choose to set that future trajectory for this technology.
What is the number one thing that people misunderstand about AI?
AI is not good and AI is not bad, but AI is also not neutral.
It is a product of the choices we make as humans about how
we deploy it in the world.
What advice would you give yourself 10 years ago to better
prepare yourself for today?
Ten years ago, I wish that I had known just how fundamental the
enduring questions of ethics and responsibility would be as we develop
this technology moving forward.
So many of the questions that we ask about AI are questions about ourselves
and the way in which we use technology and the way in which technology
can advance the work we’re doing.
How do you use AI in your day-to-day life today?
I use AI all day every day.
So whether it’s my bird app when I go out for my morning walk, helping
me to better identify birds that I see, or whether it is my mapping app
that’s helping me to get more speedily through traffic to whatever meeting I
need to go to, I use AI all the time.
I really enjoy using some of the generative-AI chatbots, more for fun
than for anything else, as a creative partner in thinking through ideas.
Integrating it into all aspects of our lives is simply part of the way we live today.
So people use the word “open” to mean different things, even just
in the context of technology.
How do you define “open” in the context of your work?
So there is the question of “open” as it is applied to technology,
which we’ve talked a lot about.
But I do think a big piece of PAI is being "open-minded." We need to be truly open-minded to listen to, for example, what a civil society advocate might say about how they're seeing AI interact with a particular community.
Or we need to be open-minded to hear from a technologist about
their hopes and dreams of where this technology might go, moving forward.
And we need to have those conversations, listening to each other, to really
identify how we’re going to meet the challenge and opportunity of AI today.
So "open" is just fundamental to the Partnership on AI. I often call it an experiment in open innovation.
Rebecca, thank you so much for your time.
It is my pleasure.
Thank you for having me.
Thank you to Rebecca and Jacob for that engaging discussion about some of the most
pressing issues facing the future of AI.
As Rebecca emphasized, whether you’re thinking about data privacy or
disclosure, transparency and openness are key to solving challenges and
capitalizing on new opportunities.
By developing best practices and resources, Partnership on AI is building
out the guardrails to support the release of open source models and the
practice of post-deployment monitoring.
By sharing their work with the broader community, Rebecca and
PAI are demonstrating how working responsibly, ethically, and
openly can help drive innovation.
Smart Talks with IBM is produced by Matt Romano, Joey Fischground, Amy
Gaines McQuade and Jacob Goldstein.
We’re edited by Lidia Jean Kott.
Our engineers are Sarah Bruguiere and Ben Tolliday.
Theme song by Gramoscope.
Special thanks to the EightBar and IBM teams, as well as
the Pushkin marketing team.
Smart Talks with IBM is a production of Pushkin Industries
and Ruby Studio at iHeartMedia.
To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts,
or wherever you listen to podcasts.
I'm Malcolm Gladwell.
This is a paid advertisement from IBM.
The conversations on this podcast don’t necessarily represent IBM’s
positions, strategies or opinions.