
Pride of Ownership in AI Era

Key Points

  • The core of “pride of ownership” hinges on three timeless questions—did you author it, do you truly understand it and its provenance, and can you take responsibility for its outcomes—whether in school, work, or property transactions.
  • Even though AI introduces new tools, these underlying criteria for accountability and integrity do not change, and expecting them to shift leads to conflict in both public and private institutions.
  • Disputes about AI usage often stem from perceived gaps in answering those three questions, prompting groups to clamp down when they feel ownership, authorship, or provenance are unclear.
  • By deliberately ensuring we can affirm authorship, comprehension, and outcome responsibility, we can integrate AI responsibly and maintain productive, trust‑based collaborations.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=SXomDjPP4Xg](https://www.youtube.com/watch?v=SXomDjPP4Xg)
**Duration:** 00:07:38

## Sections

- [00:00:00](https://www.youtube.com/watch?v=SXomDjPP4Xg&t=0s) **Redefining Ownership in an AI Era** - The speaker explains how traditional questions of authorship, provenance, and outcome responsibility are being reshaped by AI, and why cultural shifts are needed.
- [00:03:32](https://www.youtube.com/watch?v=SXomDjPP4Xg&t=212s) **AI-Enhanced Knowledge and Provenance** - How AI can be used to deepen product and domain expertise rather than replace it, with emphasis on transparently documenting AI contributions for personal conviction and legal accountability.
- [00:06:56](https://www.youtube.com/watch?v=SXomDjPP4Xg&t=416s) **Responsibility and Provenance in AI Workflows** - The speaker argues that preserving domain expertise and transparent artifact provenance should be an agreed-upon standard, guiding responsible use of AI tools like ChatGPT or Copilot.
We're talking about pride of ownership today. And I know you might think, AI? Nate is an AI guy. I promise you, this gets back to AI. At the end of the day, most of the conflicts we see in our public institutions and in the private workplace are about how we handle pride of ownership in an AI world. And I want to take you through a brief tour of the before times so you get a sense of how much has changed and how our underlying frameworks have not shifted, because I think that gives us the ability to show where we need to advance our work culture and our educational culture to truly make AI a useful tool.

We begin before AI. We had three implicit questions that we asked every time, at work or at school, if we took pride of ownership in something: Did you author it? Do you truly know the material? Can you show me the chain of provenance for this material? That happens with property and transactions, but it also happens with work. Can you show me the reviews it went through? It's a conversation I've had about many docs. Professors have asked how many books I read. They're implicitly asking about the chain of provenance of the idea. So, show me that you can keep the idea and have a sense of integrity there. And finally, can you show that you have ownership of outcomes? If you're in class, it's: do you have ownership of the grade? If you're at work, it's: do you have ownership of your KPIs? The point is, those questions are not new. We have been asking those questions since Nanni complained to Ea-Nasir about his copper shipments, because someone was not upholding their end of the agreement and not taking pride in their work. And yes, I am making a very nerdy reference, and I hope someone appreciates it.
The point here is that those underlying components of pride of ownership will not change in the age of AI. They won't. Stop expecting them to. When you get into a fight, whether it's at school, whether it's at work, whatever your context, about whether AI is appropriate to use, it almost always comes down to pride of ownership. You need to be able to answer all three of those questions affirmatively in order to have a positive communal AI productivity experience. And I know that sounds weird, like it's very hippie to say it that way, but fundamentally, when we use AI one-to-one, where I am talking to the AI, if I am doing so in the context of group work, the group is affected and demands to know what's going on, implicitly or explicitly. And if they feel like they don't, the instinct of a lot of groups is to clamp down. So if you're in an environment like that, you have to recognize those are the three questions people are asking: Do you know your work? Can you keep your work? Do you understand the chain of provenance? And can you be responsible for the outcomes? Can you hang your hat on what happens afterward? Those are the things that work is expecting.

And I believe we can answer yes on those in the age of AI. It's not impossible. You can do it. You can, in fact, use AI to prompt and ask yourself questions of the data that you have at your disposal, so you know your product area, so you know your domain, so you know your educational subject matter better. You can actually increase your product knowledge. Yes is a spectrum, and you can be more yes on product knowledge or domain knowledge if you use AI well. And so, in a sense, part of the interesting thing about AI is that people tend to assume you can use AI as a cheat sheet and skip the product knowledge part.
But it doesn't have to be that way. It can be the reverse. Similarly, with provenance, you can be transparent about where you are using, evolving, and thinking about your arguments. Sometimes that's as simple as saying, "ChatGPT and I have been working on this together." Sometimes that's as simple as saying: this is an argument; I evolved it after this conversation; then I processed it through ChatGPT, and I used this prompt and came back. And that more formal sense might be appropriate if legal might get involved. There are cases now where documents with legal implications are being created inside workspaces, and you have to have some provenance, and that may include the prompts.

But even if you're not that formal, it is still appropriate to understand for yourself how you gained conviction in a space. And I think that's the heart of it. And so maybe it's not about logging all your prompts and this and that, although maybe that's a best practice. It's about: are your workspace and the artifacts you're leaving behind in a position where a stranger who is fluent in the art could come in, look at the workspace, look at the artifacts, and evolve to a place of similar high conviction that you have about your angle of approach on your work? Does that make sense? It's like, could they look at what you have left behind and gain a sense of conviction, because you have left enough behind that there's a bit of a chain of provenance on your thinking? That's what keeping means to me. That's keeping your thinking.
Finally, on outcomes: I think that has changed the least in the age of AI. At the end of the day, you are still accountable for the KPIs, and there is still a gut-level accountability to say, it's on me. The grade for the essay? It's on me, if you're in education. The performance of the team? It's on me, if you're a manager. That kind of thing. That level of commitment is not different than it was before, but people sometimes think you're going to skip it, because they think that people will depend on AI and then blame AI. I've got news for you: you just cannot blame the machine. It's the old joke from the IBM slide that a machine can't make managerial decisions. Well, joking aside, it's still kind of true from a human perspective. Humans expect humans to own and be accountable. So you've got to be accountable. And that's as old as taking accountability for your copper imports. And I am just going full nerd here; I hope someone appreciates it.

So there you go. It's about making sure that you are responsible for knowing your domain, keeping the provenance of your work in an artifact form where people can understand how you evolved conviction, and being responsible for your outcomes. That's true before AI, and that's true with AI. And I think if we understood that and talked about it more, we would have fewer arguments about when and where to use ChatGPT or Copilot, because they would be grounded in what's really going on, which is an agreement in how we work. We need to reforge those agreements in how we work. And I think that framework is a way to think about it. Cheers.