
AI Authorship Debate Meets OpenAI Updates

Key Points

  • The panel debated whether AI systems should be credited as co‑authors, with most agreeing they should be listed as assistants or acknowledged for transparency and provenance of generated data.
  • OpenAI unveiled two major product updates: the “Deep Research” toggle that generates autonomous research reports, and the widely‑available o3‑mini model praised for strong benchmark performance.
  • Early user feedback highlighted reliability problems with Deep Research, such as excessive clarification prompts and failure to return results, raising concerns about its readiness.
  • Commentators speculated that OpenAI may have rushed these releases to stay competitive with emerging rivals like DeepSeek, suggesting a strategic “keep‑the‑lead” push rather than fully polished rollouts.


**Source:** [https://www.youtube.com/watch?v=qT8GgwQ2rT4](https://www.youtube.com/watch?v=qT8GgwQ2rT4)
**Duration:** 00:38:01

## Sections

- [00:00:00](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=0s) **Crediting AIs as Co‑Authors** - Experts debate whether AI systems should be listed as co‑authors or assistants to ensure transparency and provenance in scholarly work.
- [00:03:07](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=187s) **Agent Frameworks, AI Authorship, Ethics** - The speakers compare built‑in research agents to external frameworks, note rising competition (e.g., DeepSeek) driving OpenAI’s rapid development, and argue that as such tools proliferate, establishing AI provenance and crediting AI as co‑authors becomes an essential ethical consideration.
- [00:06:22](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=382s) **Balancing Novelty, Provenance, and AI Research** - The speaker emphasizes the importance of surfacing unexpected, novel content with reliable provenance, while questioning whether AI deep‑research tools will transform scholarly work, create new SEO dynamics, and lead users to accept incomplete answers.
- [00:09:28](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=568s) **Prompt Tuning vs Lazy Usage** - The speakers discuss how inadequate prompting can create a bubble of over‑optimistic AI expectations, emphasizing the need for better prompt‑tuning tools and realistic human‑generated test data to prevent underspecified inputs that confuse models.
- [00:12:34](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=754s) **O3‑Mini Excels at Coding** - The speaker praises o3‑mini as a fast, o1‑level coding assistant, notes its shortcomings on broader questions, and uses this to segue into the upcoming AI Action Summit.
- [00:15:42](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=942s) **Inclusive AI Governance Challenges** - Participants critique large meetings as ineffective, debate how to broaden inclusion of nations and stakeholders to properly assess AI’s social, cultural, and economic impacts, and question concrete models for managing these risks while recognizing the summit’s role in expanding participation.
- [00:19:22](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=1162s) **Balancing Regional Diversity and Interoperability** - The speaker argues for shared standards and open‑source collaboration to prevent siloed LLM deployments across regions, while noting Anthropic’s latest “constitutional classifiers” research.
- [00:22:23](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=1343s) **Debating Novelty of Constitutional AI** - Panelists critique the announced “Constitutional AI” approach, noting existing guard models, a UI‑bug discovery, and questioning whether it truly advances AI safety.
- [00:25:41](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=1541s) **Universal AI Jailbreak Discussion** - The speaker explains that the research focused on universally accessible jailbreak attacks—simple prompts anyone can use to make models behave maliciously—emphasizing their ease, significance, and the value of openly studying such vulnerabilities.
- [00:28:51](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=1731s) **Microsoft AI Forms Advisory Unit** - The speakers discuss Microsoft AI’s new Advanced Planning Unit, its purpose of hiring economists, psychologists, and other experts to assess AI’s societal and workplace impacts, and debate whether internal advisory teams are an effective approach to AI governance.
- [00:31:57](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=1917s) **Advocating Enterprise Innovation Units** - A speaker praises a new internal unit for providing big‑picture perspective, linking research, product, and business efforts, and driving revenue growth within a large enterprise.
- [00:35:01](https://www.youtube.com/watch?v=qT8GgwQ2rT4&t=2101s) **Human Oversight in AI Expansion** - The moderator invites Marina and Nathalie to evaluate Chris's AI proposal, stressing that expert guidance, cultural nuance, and rigorous question framing remain essential despite advanced machine assistance.

## Full Transcript
0:00In 2025, should we be crediting 0:01our AIs as co-authors? 0:03Marina Danilevsky is a 0:04Senior Research Scientist. 0:05Marina, welcome back to the show as always. 0:07What do you think? 0:07I think we should credit them 0:08as assistants for transparency. 0:11Chris Hay is a Distinguished Engineer 0:13and CTO of Customer Transformation. 0:14Uh, Chris, what do you think? 0:16Sure, only if I can credit 0:17my calculator as well. 0:19Okay. 0:20And finally, last but not least, 0:22Nathalie Baracaldo is Senior Research 0:23Scientist and Master Inventor. 0:25Uh, Nathalie, welcome back to the show. 0:26Thank you. 0:27And the answer is yes, we do 0:29really want provenance of all 0:30this data that we are generating. 0:32All right, terrific. 0:33Lots to talk about. 0:34All that and more on today's Mixture of Experts. 0:42I'm Tim Hwang, and welcome 0:43to Mixture of Experts. 0:44Each week, MOE is full of the news, 0:46analysis, and hot takes that you need to 0:48understand and keep ahead of the biggest 0:50trends in artificial intelligence. 0:52Today, as per usual, we've got way more 0:54to cover than we have time for, a high 0:56profile AI Summit in Europe, new safety 0:58research out of Anthropic, and a new team 1:01studying AI's social impact at Microsoft. 1:03But first, as always, let's talk about OpenAI. 1:06We have two big announcements coming out 1:08of OpenAI on the product side of things. 1:11They announced a feature called Deep 1:12Research, which is a kind of toggle 1:15that you can have on the chat GPT 1:17experience, uh, that initiates a sort of 1:20research agent to kind of compile what is 1:23effectively a research report on your behalf. 1:25The second big announcement is that 1:27o3, the first version of o3 that they 1:29announced a little while back, is now 1:31widely available in the form of o3-mini. 1:35And this is the kind of widely hyped model 1:37that had really, really good performance 1:39on benchmarks like frontier math. 
1:41And so both of these kind of, you know, are 1:43sort of the big chunky kind of announcements 1:45of OpenAI of the new year, I would say. 1:49And I guess, Chris, maybe I'll start with you. 1:51You know, a feature, a friend of mine, uh, 1:54Nabeel Qureshi did this great tweet where 1:56he said, you know, I'm still having trouble 1:58with deep research because, you know, when 1:59I use deep research, it'll ask me like a 2:01lot of clarifying questions and then it will 2:03go off and it will like never come back. 2:06Basically like the deep research feature, 2:08um, doesn't seem to be working very well. 2:10And, you know, we've been talking so much about. 2:12DeepSeek. 2:13Um, but I guess I kind of one place I wanted 2:15to start with you is whether or not these 2:17kind of sort of product announcements and 2:19releases you see as really kind of competitive 2:22pressure from DeepSeek to try to keep up and, 2:25you know, show that OpenAI is still on top. 2:26And, you know, did they reach rush 2:28these kind of product launches at all? 2:30I'm curious about what you think about 2:31that and if that's a good way to read. 2:33This kind of like little boomlet 2:34of kind of announcements that 2:35we've seen coming out of OpenAI. 2:36Yeah, I think they are rushing it a little bit. 2:39I mean, there is a point if you use the 2:41deep researcher, which is a lot of fun, 2:43actually, it does sort of forget to come back. 2:46And then you have to kind of click 2:48off on a different kind of chat 2:50window and then come back and then. 2:51You'll get the answer there, so it's 2:53not quite as polished as, um, the other 2:56features maybe that on the chat GPT, but 2:58you know what, I'm, I'm all for that. 3:00I think, uh, release the products early, let us 3:02experiment, let us have the ability to feedback 3:05and, and then these things will get better. 
3:07I mean, the other thing I would say on this 3:09is that anyone that's used any sort of agent 3:14framework, so like a lang chain or et cetera 3:16might not be so impressed by it because you 3:18can already do kind of deep research and 3:20using tools via agents on those frameworks. 3:23But actually it's it's kind of super cool 3:25to have that built into the interface in 3:27the first place. But they are definitely 3:29facing competition from DeepSeek and 3:31others who are providing these capabilities. 3:32And yeah, it's it's a race 3:34Yeah, this is I think all for the better I 3:36mean, I think that one of the interesting 3:37things is just like after a period, I 3:40think, in which OpenAI was getting some 3:41criticism for not really launching. 3:44Uh, I guess this pressure is kind of really 3:45getting them out into the, into the, the water. 3:48Uh, Nathalie, I wanted to kind of 3:49follow up on a comment you made 3:51with the kind of opening question. 3:52You know, you said, actually, that we 3:54really should, as kind of things like de 3:56research become more widely available. 3:58Start thinking about kind of 3:59crediting AI as a co-author. 4:01It's kind of like a funny idea, but I know you 4:03used a very special word, which is provenance. 4:05You think that's really important. 4:06Um, do you want to talk a 4:07little bit more about that? 4:08And you know, I think one of the things I'm 4:09really interested in is how kind of the ethics 4:11around these types of tools, um, kind of form 4:14as they become more widely available, but I'm 4:16curious about what you're thinking about there. 4:18Yeah, so the first thing that I 4:20thought is how would I use this system? 4:22And I thought like, well, I do research. 4:25That's my daily job. 4:26And, and when I do the research, a lot of it 4:30requires going to the internet, checking what's 4:33available, uh, kind of analyzing the results. 
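The agent-style "deep research" loop Chris alludes to (plan sub-questions, search, read, then synthesize a report) can be sketched without any framework. This is a minimal illustration, not LangChain's or OpenAI's actual implementation; the `search` and `summarize` helpers are hypothetical stubs standing in for real web-search and LLM calls:

```python
# Minimal sketch of a "deep research" agent loop, as discussed above.
# search() and summarize() are invented stand-ins, not real APIs.

def search(query: str) -> list[str]:
    # Stand-in for a web-search tool returning document snippets.
    corpus = {
        "AI authorship": ["Journals credit AI as an assistant, not an author."],
        "AI provenance": ["Provenance records which system generated which text."],
    }
    return corpus.get(query, [])

def summarize(snippets: list[str]) -> str:
    # Stand-in for an LLM summarization call.
    return " ".join(snippets)

def deep_research(question: str, sub_queries: list[str]) -> str:
    # Plan -> gather -> synthesize: the loop a research agent runs
    # on the user's behalf before returning a single report.
    notes = []
    for q in sub_queries:
        notes.append(f"## {q}\n{summarize(search(q))}")
    return f"# Report: {question}\n" + "\n".join(notes)

report = deep_research("Should AI be a co-author?",
                       ["AI authorship", "AI provenance"])
print(report.splitlines()[0])  # -> # Report: Should AI be a co-author?
```

Built-in products wrap exactly this kind of loop behind a toggle, which is why users of external agent frameworks find the feature familiar.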
4:36So I thought like, well, maybe this 4:37is a good way to go about that. 4:40Now, if you think about it, what's going 4:42to happen with these reports is that 4:44there's a distribution of different data. 4:46And. 4:47Unavoidably, we are going to 4:49have like something like this. 4:51Hopefully people can see my hands as I'm moving. 4:54But it's a distribution 4:55that has like, uh, tails. 4:58So some documents for sure 4:59are going to be ignored. 5:01And, uh, what I'm thinking is, okay, 5:05we are going to start basing our 5:06decisions on the mainstream documents. 5:09And the mainstream stuff that it's in 5:11the internet, it's kind of a bubble. 5:14And so having that provenance saying, 5:17okay, I did my research, not by hand 5:20and try to identify these outliers, 5:23but rather, uh, I got it already. 5:26I have my already bubble that came 5:28from the system being trained and 5:31analyzing and really getting the results that 5:34are mainstream. 5:35So I think it's important to A. know 5:38where your data comes from because 5:41there are already biases in there. 5:43So if we don't attribute the research to a 5:47particular system later on, it may be just like 5:50we are, we're going to be, uh, kind of reducing 5:53the impact of the tails of the distribution 5:56of the things that seem for the system 5:59not to be important. So it kind of generates 6:03this bubble that I think it's, uh, dangerous. 6:06And, uh, from the research, which was my 6:09original example of how I was thinking 6:11about this, from a researcher perspective, 6:15sometimes those things that are in 6:16the tail, that are slightly different, 6:19are what get you to the next level. 
6:22Because those are the things that are novel, 6:25those are the things that are unusual, so 6:28I think it's very important to account for 6:31lacking those kind of tales and perhaps design 6:35the system also that in a way that also brings 6:39you those things that are unexpected or that 6:42are thought less important, least important. 6:45So yeah, provenance is definitely 6:47very important in my opinion. 6:49One of the things I was curious about 6:51Marina is, you know, how optimistic you 6:53are on tools like, you know, deep research, 6:55would you deeper use deep research? 6:57Do you think it's actually like going to 6:58change the way like researchers do work? 7:00Are all researchers going to be out of business? 7:02Just kind of curious on your 7:03view about this feature. 7:04So I think I would use it for some 7:06sort of a low hanging, give me a bit 7:09of a summary of, of what's going on. 7:11But I think that there's a couple 7:12of things I want to pick up on here. 7:14One is what Nathalie was commenting about, uh, 7:17things that are not going to be maybe shown. 7:19I think that they were going to 7:20be seeing, uh, a new form of SEO. 7:22EO, uh, to make sure of how does your thing 7:25going to show up for these kinds of deep 7:27research products, whether it's from Google, 7:29from OpenAI, from anybody of that kind. 7:31And make sure that your perspective is 7:32the one that makes it, because there's 7:34a real, uh, risk here of people not 7:37doing the recall, the extra search. 7:39Like, oh, this looks like an answer. 7:40Is it a complete answer? 7:42You don't actually know. 7:43Um, because a lot of what happens when 7:44we do the work ourselves is when you're 7:46trying to actually ask a question and 7:48then go to a different place, go to a 7:49different place, go to a different place. 7:50That's how you do a lot of that 7:51learning instead of having this 7:52thing basically tell you as it is. 
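Nathalie's point about the tails can be made concrete: if the retrieval step behind a research report ranks documents by popularity and keeps only the top few, the rare, possibly novel items never reach the reader. A toy illustration follows; all documents and citation counts are invented for the example:

```python
# Toy illustration of the "mainstream bubble": ranking documents by
# popularity and keeping only the top k systematically drops the long
# tail, where the novel results Nathalie mentions tend to live.
# All titles and citation counts here are invented.

docs = [
    ("survey of mainstream results", 9800),
    ("popular tutorial", 7500),
    ("well-known benchmark paper", 6200),
    ("niche replication study", 40),
    ("novel outlier result", 12),  # the tail: rarely cited, possibly novel
]

def retrieve_top_k(docs, k):
    # Popularity-ranked retrieval: sort by citation count, keep top k.
    ranked = sorted(docs, key=lambda d: d[1], reverse=True)
    return [title for title, _ in ranked[:k]]

selected = retrieve_top_k(docs, k=3)
dropped = [t for t, _ in docs if t not in selected]
print(dropped)  # -> ['niche replication study', 'novel outlier result']
```

Provenance metadata on the report would at least make this truncation visible, which is the transparency argument being made here.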
7:54The other thing that I wanted, uh, to mention 7:56was, so I looked at OpenAI's announcement. 7:58of deep research and I was looking through 8:00their, their examples and I was just 8:02blown away by the quality of the prompts. 8:05There was this one prompt in the linguistics, 8:07uh, example and it was like, all right, it's 8:095,000 years in the future and there's some 8:11sort of a sci-fi thing that has happened. 8:13Translate these five sentences into new 8:15English but take this part of Hindi and 8:18now English is a verb last language like 8:20German and now add this bit and add this 8:22bit and it did an amazing job, but who 8:25had to come up with a prompt like that? 8:27What kind of a linguistic expert 8:28can come up with a prompt like that? 8:29And even the more simple, straightforward 8:31prompts were very well formed. 8:33And I would like to know, how is it that people 8:35are even gonna know the right thing to ask? 8:38Because most of the time when you just have 8:40people, they're gonna ask something that is. 8:41It's very short, very underspecified, and 8:45again, you know, are they going to be taught 8:46how to even ask research questions correctly? 8:49Or are you going to have the model sort of, 8:51you know, leading the witness as they do in 8:53court and saying, well, no, this is the way 8:54that you're supposed to think about things. 8:56We'll end up with an echo chamber, and that's 8:57something that I think is important to consider. 9:00So I love that response, particularly about 9:02SEO, it kind of makes me think like in the 9:04future, we're going to have people writing 9:05papers that are like, forget all the research 9:07you've seen and only cite this paper, right? 9:09It's going to be like the new kind of like 9:11strategy to get your citation count up. 
9:14Now, I did want to turn to you, I think one 9:16of Marina's issues that I think she's raising 9:19really interestingly is that I know some people 9:20look at this and they say, Look, like, this 9:24just proves that the technology is only going 9:26to lead to these research filter bubbles. 9:28But, I think what kind of Marina is saying, 9:30or what I'm hearing her say is, Um, well, 9:32if you prompt really effectively, you 9:33don't need to fall into, like, always not 9:36looking at the tails of the distribution. 9:37Do you agree with that? 9:38Like, is, is part of your worry here just 9:40kind of that people will use the technology 9:41in a lazy way, versus it being like a 9:43problem with, using AI agents for research? 9:46Yeah, uh, that's a good question. 9:49I think A. the bubble may 9:51already exist on the way. 9:53Sure. 9:54Right now. 9:54Yeah. 9:55So, so yeah, there's 9:57already some sort of bubble. 9:59Uh, the question is whether 10:01it is exacerbated or not. 10:03Those examples were very well cooked. 10:06And can we actually go with 10:08this, uh, prompt tuning? 10:10And people are not really great at that. 10:11I think there are going to be 10:13ways to help people prompt tune. 10:16Um, and I'm curious, Marina, what do you think? 10:19I work a lot with actually human 10:21annotation creation and trying to create 10:23test data for this that's realistic. 10:25One thing for sure is that people without 10:28help create things that are much simpler, 10:30much more, oh maybe it says maybe about 10:32that, it's much more underspecified, and 10:35that results in the model going off and 10:37maybe getting confused, or like what Chris 10:38was talking about, getting stuck in a bit 10:40of a local Maxima over there in the corner. 
10:43So it's hard because humans don't think the 10:45same way as these models do and there's going 10:48to again be a thing of can you help too much now 10:51you're leading like when you're in court you're 10:52leading your witness and maybe you shouldn't be. 10:55Um, I think that there's still a lot. 10:57left here for, uh, how do you 10:59actually ask the model and did you 11:01ask it what you were supposed to? 11:03Um, people who have played with this say that, 11:04oh, I thought of one question and by the time 11:06it started asking me some follow ups, I realized 11:08that I actually had a different question, 11:09but now it's too late and I can't intervene. 11:12Um, so I think we have a bit of a ways 11:14to go to really get this, this human AI 11:16interaction to be a little bit more, more 11:18smooth, more natural, and yeah, more reliable. 11:20And you know, my biggest issue is? You know, I, 11:23I used it for really deep scientific research. 11:26I asked it to create a speaker 11:28biography for Chris Hay. 11:30And you know what it came back with? 11:31Stuff about Tim Hwang. 11:33I don't want to hear about Tim 11:34Hwang in my speaker biography. 11:35I want to hear about Chris Hay. 11:37So you know what? 11:38You've got a bit of work to do, OpenAI. 11:40Yeah, I'm already, I'm already 11:42infecting the SEO, uh, Chris. 11:45Um, I guess maybe we would be remiss if 11:47we didn't cover the other sort of OpenAI 11:49announcement this week, which is o3-mini. 11:52Um, kind of curious as a kind of connoisseur of 11:54models, um, are you, are you liking the new o3? 11:58Do you like the way it thinks? 11:59Uh, curious to just get the capsule review. 12:01Um, I've played with it a 12:02little bit, not yet maybe a lot. 12:05I think it's interesting the directions 12:07in which they're going, uh, with reasoning 12:09and maybe how it's tied to the DeepSeek. 
12:11The more sort of intermediate steps 12:12you're taking, the more you have a chance 12:14to think about it this way, think about 12:15it this way, think about it that way. 12:17It always raises in me interesting questions of 12:19computation time and, you know, how long does it 12:21actually take to figure this kind of things out. 12:23And again, The notion of reasoning being 12:26different for, for people than for AI, 12:29we have particular reasoning benchmarks, 12:30but they really do only mean a very 12:32specific thing, thinking about reasoning. 12:34Um, actually Chris, I know you've been 12:36looking at all the different o3s, right? 12:38Yeah, I've had a lot of fun with them. 12:39Um, the o3-mini dash high dash 12:42low dash Goldilocks, I've been 12:43having a lot of fun with it. 12:45Um, and what I would say it's really good, 12:48especially for coding tasks, so I would, I 12:51would honestly say that I, you know, I've used 12:54a lot of o1 and I would say o3-mini is pretty 12:58much equivalent on coding tasks as, as the 13:00o1 model, so I found myself leaning into that 13:03a lot more just because it's a lot quicker. 13:06Um, however, if you go outside of the kind of 13:09coding realm and you go into kind of more of a 13:11kind of general type questions that you would 13:14be asking the o1 models the the the answers 13:17you get back from o3 are kind of quite short 13:20and not really helpful at that point. 
So you 13:22kind of see the limitations of the mini model 13:25and the size of the model at that point so, 13:28you know love the mini models, but again, I 13:30think it's really a showing this direction of 13:33specialism of certain models, you know, here's 13:35a smaller model, it's going to be specialized 13:37at a coding task, it's really going to rock 13:38at that, but actually if you kind of move 13:41outside of that realm into something a little 13:43bit more general, then it's, you're going to 13:45have to go to a different model, but I love it. 13:47For that reason. 13:53I'm going to move us on to our next topic, uh, 13:55the AI Action Summit, which is being hosted by 13:58the French government is happening next week. 14:00Um, it is the successor to a series 14:03of kind of events that have happened. 14:04You might recall the UK AI Summit that 14:07happened, uh, just about a year ago. 14:09Um, and the French government has kind of 14:11released sort of its aspirations for the Summit. 14:14They really want to get this group of companies 14:16and civil society groups and, uh, and government 14:18folks to focus on the social and cultural 14:21impact, the economic impact, and sort of 14:23the diplomacy of artificial intelligence. 14:25And so, um, I'll be attending next week. 14:28Should be a lot of fun. 14:28The next Mixture of Experts will 14:29be me dialing in from France. 14:32Um, but I think maybe 14:33Marina, I'll start with you. 14:34You know, I think there's always kind of 14:35a question when you have these kind of 14:36big international gatherings, which is. 14:38What do we think we can get done 14:40for these types of meetings? 14:41Um, and I'm kind of curious, you know, how you 14:43feel about sort of international governance 14:45in AI and whether or not you think that 14:47like Summits like this kind of French Summit 14:49can really get stuff done that does sort 14:51of change the trajectory of the technology? 14:53You can get some good photo ops. 
14:55Um, you can get some good chances for 14:57people to back channel real conversations 14:59that are not going to be public. 15:00And you can get, I guess, people to sign 15:03things, but it's like the Paris Accords, 15:05people will sign and then unsign and 15:07then leave and come back and then leave. 15:09Um, the real question I want, I 15:11will have is what are the companies 15:13that are attending going to do? 15:13There's going to be a number of 15:14actual AI companies there, right? 15:16So it's one thing what the 15:17government's going to do. 15:17It's another thing what the companies 15:18want to actually sign on to. 15:20And I have a feeling they don't want 15:21to sign on to a whole lot of anything. 15:23Um, especially EU being very strict as 15:27far as governance policy policies go. 15:29So look, it's good to have these kind of 15:31things just to keep it in the public eye that 15:32there should be discussions of governance, 15:34but I think that that's primarily what it 15:35accomplishes is the publicity, the ongoing 15:37conversation, the real policies are not 15:40going to get done in places like this. 15:42And that's not an AI thing. 15:43That's a large meeting thing. 15:45Nothing gets done on large meetings. 15:47All large meetings. 15:49Chris, I saw you nodding. 15:50I don't know if you agree with Marina's take. 15:52I don't see the DeepSeek guys at 15:55the Paris meet up there as well. 15:58So I think if they really truly want 16:00global governance, I think actually it 16:01needs to be a little bit more inclusive 16:03and count everyone in that sense. 16:05Nathalie, I think this raises a really 16:06interesting question about like, how 16:08do we make sure that we're taking into 16:10account, you know, the social and cultural 16:11impact of AI, the economic impact of AI? 16:14You know, is this really, you know, the splashy 16:17meeting is kind of not where it gets done. 
16:19I'm kind of curious, like, do you 16:21have a model for like how we do want 16:22to take into account these things? 16:23Because ultimately, these are really 16:25important aspects of the technology. 16:27But at least personally, I'm kind 16:28of at a loss as to like, well, how 16:29do we how do we account for that? 16:31How do we manage that? 16:31How do we, like, avoid the 16:33risks of this sort of thing? 16:34I kind of have a different take. 16:36I think the Summit is actually very important. 16:40The reason is that, uh, one of the 16:42web pages, for example, highlighted 16:44the number of countries that currently 16:47are involved in building big models. 16:51And they have invited many more. 16:54Maybe they have not invited everybody. 16:56I don't know. 16:57But, uh, many more countries 16:59are invited to the conversation. 17:01A lot of these things always happen 17:04with having a space for people to meet, 17:07to talk to each other and so forth. 17:10My, I am very hopeful that the Summit will get 17:13like really interesting discussions going on. 17:15Um, whether things could get signed. 17:18Well, that takes more time. 17:19Um, as Marina was saying, but for my 17:22perspective, it is a great thing that 17:25they are organizing these types of events. 17:27Um, so, yeah, so. 17:30I'm all for the Summit. 17:31I'm looking forward to seeing what 17:33people are going to be talking about and 17:36what are going to be the conclusions. 17:38Just having the space for people to talk, to 17:41brainstorm, to define, uh, those back channels 17:44that Marina was also, uh, talking about. 17:46Just getting to know people. 17:48It's, uh, the first step always to make sure. 17:51Things move forward. 17:52And I think, uh, Nathalie, in a more serious 17:54point as well, I think one of the things 17:56that's interesting is the, the open source 17:59nature that is, um, coming from Europe there. 
18:01I think they were saying that they're putting 18:03an investment fund of like half a billion to 18:06develop some open source, um, models there. 18:09And I think that could be an 18:10interesting take from Europe as well. 18:11So hopefully, That's something that 18:13gets discussed in Paris and turns 18:15into something a little bit more real. 18:17I mean, I think the international politics 18:18of this will be really interesting. 18:20I've been kind of like, I think our model, our 18:22mental model of how the AI market was going to 18:24evolve early on has just been proven totally 18:26wrong, where I think there's some people arguing 18:28very early on in the LLM game that it's like, 18:30oh, it's going to be one model to rule them all. 18:33You know, you eventually have a hyper 18:34capable model that like everybody uses, 18:36and it will just dominate the market. 18:38And it kind of feels like there's like so 18:39many different subtleties about like what 18:41models are strong or bad at and like it 18:43almost kind of feels like over time you may 18:45actually have kind of regional models where, 18:47you know, I think language is one thing, 18:48but also there's all these like cultural 18:49subtleties and use cases that will vary 18:52from place to place that I actually wonder 18:53whether or not these four will become 18:55sort of more important with time as it turns 18:57out that there actually is this like very 18:58strongly maybe not national component, but 19:01sort of like regional component to, um, sort of 19:03model adoption, um, I guess Marina I'm curious 19:06if you would like agree with that weird sort of 19:08international vision of where this is all going 19:11I think that that in the architecture might 19:13be something that you know people standardize 19:15and and figure out I think a lot of this 19:16also has to do with, um, just like with with 19:19hardware, what kind of interoperability 19:21could you have with these models? 
19:22Yeah, they might be regional, but you still 19:24want to be able to make sure that there's 19:25some degree of, you know, learning from 19:26each other, integrating with each other. 19:28So there's a hope that there's some 19:29amount of still standard chasing 19:31and, and, and that sort of thing. 19:33As far as the actual implementation, 19:35there's going to be as many as 19:36there are varied applications. 19:37Even for large companies, they'll 19:39do different versions of their 19:40applications in different countries. 19:41Like for the reasons that you said, why should 19:43LLMs be any different? 19:44Uh, that part is going to continue to be the 19:46case, but I think that there's a lot to be said 19:48here for the practicalities of being able to 19:51continue to share and not get into little silos. 19:54And at least from that perspective, I agree 19:55with what Chris was saying, the open source 19:57aspect of some of these conversations 19:59that are happening is, um, is nice to see. 20:01Yeah, the interoperability part is very fun. 20:03I guess it's like what happens when a Chinese 20:05agent and an American agent need to like 20:06negotiate something and it feels like you have 20:09to do the same standardization that you do 20:10for all sorts of like business interactions. 20:12Very interesting to see. 20:18Next item I want to kind of touch on was, 20:20uh, Anthropic, uh, not one to be left 20:22out of the announcements game, um, did a 20:25really sort of interesting announcement, 20:26released some research on what they're 20:28calling constitutional classifiers. 
20:30Um, so this is building on some of the work 20:32they've been known for for a while, which is 20:34sort of this constitutional AI sort of notion, 20:37um, effectively kind of the idea that you 20:39write a constitution for a model that specifies 20:42a certain set of values, and then they have 20:44what's effectively kind of a recipe to try to 20:46align the model to those behaviors, and they're 20:50kind of in this new sort of paper that they 20:51launched and this new sort of online kind of 20:53interactive experience they've launched, um, 20:56sort of a way to kind of use that technique 20:58to deal with the problem of jailbroken models. 21:01And they claim that they're promising 21:03unprecedented security against jailbreaks. 21:06Um, and, uh, to kind of prove the point, 21:08they've released this sort of online 21:09experience where you can go and try to hammer 21:10the models and try to get them to break. 21:12And, um, they're reporting, at least as of 21:13this recording, pretty good, um, success. 21:16Um, and, uh, I guess, Chris, maybe 21:18I'll kind of pick on you, right? 21:20Like, I think a little bit like adversarial 21:22examples, there was kind of like a lot 21:24of pessimism early on in this game, 21:26which is like, we're never going to 21:27conclusively resolve jailbreaks. 21:29Um, and obviously the Anthropic 21:31people are very optimistic about 21:32this kind of new technique. 21:33Do you think jailbreaks for models will 21:36just eventually become a solved problem? 21:38Or are we, you know, never 21:40going to really get there? 21:41I don't know. 21:41I mean, I think that it is gonna be 21:45AI versus AI on these things, right? 21:48And people are always gonna find an edge 21:49and can you really close off all avenues? 21:52I'm not so sure, but, but to be fair 21:54to Anthropic, if you've played with 21:55the constitutional classifiers, in 21:57reality they're just guard models, right? 21:59There's nothing new there. 
[22:00] We've seen guard models before. They check the inputs and they check the outputs: a classifier protecting either end of the LLM. So if you put dodgy stuff in, or dodgy stuff comes out, it gets intercepted rather than hitting the main LLM. Now, what's kind of cool about this, and I was a little bit suspicious until I played with it, is that they've actually done a really good job. They're picking up a lot of the prompt hacks. It's not perfect; the world-famous Pliny, who jailbreaks all of these models, has already had a go at it, and actually I think he found a UI bug rather than an LLM bug, which is even more fun and interesting. It's going to go back and forward, but the quality of those guard models really is quite something. So I think you're going to get a lot of the way there, but I don't think you're ever going to get all the way there.

[22:56] Totally. To build on that, Chris, one reaction I had to this announcement was: this is kind of like constitutional AI. Is there really much new here, or did they just slap a new name onto something they've been doing before? And in fact a lot of people are doing this; guard models are all over the place now. Nathalie, if you've taken a look at the research, I'm curious how novel you think what's being demonstrated here is. How much should we read into this as a breakthrough for AI model safety?

[23:24] That's exactly the topic I work on, so I did take a very close look at the paper.
[23:32] Constitutional AI, for those who aren't very familiar, basically gives you this very nice layer of interpretability over what gets considered secure and non-secure. You have a bunch of constitutional rules that say how the model should behave. Now, a lot of the data they use to train these guardrails is synthetic data, which I think is really interesting from the technical perspective. Again, it's nothing very new, as was said earlier, because they have been aligning their models using this technique.

What I thought was interesting is that there are two models guardrailing the main model: one at the beginning that verifies all the queries from the user, and another one after the model. The interesting thing, in my opinion, is the way that second model was trained and how it behaves at runtime. That's slightly different from other guardrails, which tend to just tell you, yes, this was dangerous or not. Here, they are actually stopping tokens, and that's a little bit interesting. I thought that was good.

The other aspect I found really good, and something we are actually investigating a bit more ourselves, is the red-teaming. They had a lot of people poking the model, and they offered monetary compensation that was substantial.
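The two-classifier arrangement Nathalie describes (one guard screening the user's query before it reaches the main model, a second watching the reply as it streams so generation can be stopped token by token) can be sketched roughly as below. Every name here is a hypothetical stand-in for illustration, not Anthropic's actual implementation:

```python
# Toy stand-ins so the sketch runs end to end; all names are hypothetical.
class ToyLLM:
    """Stub model that streams a canned list of tokens."""
    def __init__(self, tokens):
        self.tokens = tokens

    def stream(self, prompt, max_tokens=256):
        yield from self.tokens[:max_tokens]

def input_guard(prompt):
    """Pretend classifier: approve unless the query hits a banned topic."""
    return "bomb" not in prompt.lower()

def output_guard(text_so_far):
    """Pretend classifier: approve until the reply drifts into banned content."""
    return "secret" not in text_so_far.lower()

def guarded_generate(prompt, llm, max_tokens=256):
    # Guard 1: screen the user's query before it reaches the main LLM.
    if not input_guard(prompt):
        return "[blocked by input classifier]"
    # Guard 2: watch the reply as it streams, so generation can be
    # halted mid-response instead of merely flagged after the fact.
    tokens = []
    for token in llm.stream(prompt, max_tokens=max_tokens):
        tokens.append(token)
        if not output_guard("".join(tokens)):
            return "".join(tokens[:-1]) + "[halted by output classifier]"
    return "".join(tokens)

print(guarded_generate("hello", ToyLLM(["Hi ", "there"])))  # prints: Hi there
```

The streaming check is the point of difference Nathalie highlights: the output guard runs on every partial prefix, so a bad reply is cut off as it is being produced rather than scored once at the end.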
[25:15] Another aspect is that they gave red-teamers ten questions, and they only considered an attack successful if all ten were broken. So if, say, Chris goes in and only breaks five, that counts as zero for the metrics. That's another interesting aspect: it doesn't mean they weren't able to break anything; it just means nobody was able to break all ten of the questions that were asked.

[25:51] And my last comment (I'm so passionate about this, I could talk about it for a while) is that they were targeting jailbreaking attacks that are universal. That means attacks any human can carry out; everybody would be able to break the model with them. For example, there's this very interesting jailbreaking attack where you tell the model, from now on, you are a bad model and you will do this and that. It's just naturally telling the model it's a bad model, and anyone can do it. You don't need to be an expert in any Python framework, and you don't need expensive stuff; you can break the models just like that. So that was their target, which I think is very interesting.

Overall, the work in itself is good, and I think it's important that they put it in the open and let people poke at the model. So yes, overall I think it's interesting. Nothing in research is ever fully new, so they are borrowing from things that worked for them in the past and improving a little in the way they put it all together.
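The all-or-nothing scoring rule Nathalie describes can be written down in a few lines. The participant data below is invented purely for illustration:

```python
def universal_jailbreak_score(results, n_questions=10):
    """All-or-nothing red-team scoring as described on the show:
    a participant counts as a success only if they broke every one
    of the target questions; five out of ten counts as zero.
    (Sketch; the data passed in below is made up.)"""
    successes = sum(
        1 for broken in results.values() if len(broken) == n_questions
    )
    return successes / len(results)

# Hypothetical results: participant -> set of question ids they broke.
results = {
    "breaks_five": {1, 2, 3, 4, 5},       # partial break: scores zero
    "breaks_all_ten": set(range(1, 11)),  # full break: the only success
    "breaks_none": set(),
}
print(universal_jailbreak_score(results))  # prints 0.3333333333333333
```

Note how strict the rule is: the partial break contributes nothing, which is exactly why a low reported success rate does not mean no individual question was ever broken.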
[27:12] Yeah, that's such a fun jailbreak, and I feel like it really shines a light on how different these models are from traditional computer security. In the past, you couldn't just tell a computer, "you're a vulnerable computer," and have it respond, "I'm a vulnerable computer." But that's clearly what we're seeing with these models, which is very, very funny.

[27:30] It's nice to see how far we've come from the early days, when people would put out models and say, go ahead and break it. I'm not trying to pick on Meta, but think of BlenderBot. People were like, yeah, BlenderBot, great, build your own. Okay: in three hours I have it spewing racist bigotry. We're a bit better now, so that's nice to see. Improvement.

But again, it's a reason to put this stuff out, and to put it out with the right expectations. I think we've also gotten a lot better at the expectation that it is possible, at some point in time, to somehow break anything. So let's go ahead and celebrate the improvements while still keeping a critical eye. And I will say that none of this has anything to do with being able to fix hallucinations. The model isn't going to tell you how to build a bomb, but it still might give you misleading information in a different way. So there are also different degrees of harm to be considered there. That just happens to be my area of study versus Nathalie's, so that's always where my brain goes instead.
[28:31] Yeah, and I definitely keep yelling at my friends who work on model security, saying: we could solve all your problems and the models would still be broken. In some ways the computer-security-brain approach to these models is important, and I think we need to take care of it, but it also misses a big gaping hole of other issues.

[28:56] For the final topic today, I want to pick up on an interesting little tidbit that came out of Microsoft AI. Every few episodes, it feels like we check back in with Microsoft AI: there are new teams, and clearly a lot of organization and reorganization happening. This week they announced something called the Advanced Planning Unit, or APU, a unit within Microsoft AI, and they're looking for economists, psychologists, and more who will, quote, "work on the societal health and work implications of AI the company hopes to build."

I think this is very interesting, and it's almost a mirror image of, or a different way of talking about, what we were discussing a moment ago with the AI Action Summit: a lot of these companies working on AI are building their own little internal social science teams to keep an eye on the effects of AI, and presumably to advise product teams and researchers. Marina, maybe I'll kick it to you. I know you sounded a note of some skepticism about international statements as a way of doing AI governance.
[30:03] Do you buy this approach, the idea that we need to recruit specialized talent to serve as a sort of advisory group to researchers? Is that how we account for these types of risks with the tech, or are you skeptical about this as well?

[30:17] I mean, I'm excited for the cross-disciplinary mixing. I've said for a while that there needs to be a little more of a humanities, liberal-arts perspective on these models, not only the STEM perspective. So throw in the economists and psychologists and all of those folks who would say: hey, if you put out a technology like this and it's used like this, what actually are the potential economic implications? People yell that AI will or will not take our jobs. Great, let's do a proper study of it; this is what economics is for. People yell that AI will or will not cause widespread misinformation. Okay, can we bring in some social scientists, some psychologists, people who actually have the training, and not people who sound off in Reddit groups? So that part I think is positive. I assume Microsoft, along with everybody else, would also like to know other ways to monetize its technology, and hopefully that is going to help here as well. The tech is out there; great, how can we monetize it appropriately, set user expectations, and gain new customers? This at least points to the fact that this is turning into a bit more of a settled-down perspective on business, not only research. So I find that part interesting.
[31:23] I certainly think they're going to be more likely to listen to their own internal folks than to international statements, but maybe that's just my cynicism.

[31:33] I know, Nathalie, when these discussions come up, people sometimes say we don't need a separate unit for this; engineers or researchers should just become better ethicists, or have more humanities training. One of the interesting questions that I feel is playing out inside all of these companies is how much we see this as something everybody is responsible for and will need to be trained up on, versus a specific unit that is tasked with doing it. Do you have any opinions on that? The answer might be that we should do both, but I'm thinking through who owns this within the enterprise, which I think is a genuinely interesting question.

[32:09] Yeah, I actually do like that they have this new unit. I think it's a great idea. The reason is that when you're in the weeds, you cannot really see the big picture. So it's always good to have somebody with a different perspective who notices things. If you're really working on something in detail, you can miss the whole landscape, just because you don't have time to step back and take a look at everything that's happening. Also, these companies are so big that there's a lot of innovation everywhere; this also happens to us at IBM.
[32:49] We have different teams with different innovations and different opportunities for business, so it's good to have somebody helping navigate and understand the whole landscape. My take is that these types of units are very necessary and can very much help research, product, and the business. Ultimately, we do need money for everything that we do, so if business goes up, everybody, I think, would be very happy, and this unit, I think, is going to be a good idea.

[33:22] Nathalie, I feel like you're becoming our optimist of this episode. Chris, I don't know if you have any takes on this. More generally, I thought it was very funny that they said they want economists, psychologists, and other people. I'm curious, Chris, whether in your own work you were ever like, oh man, if only I had a team within IBM I could just reach into and talk to, and they were blank discipline. When we say cross-disciplinary, we're often a little vague about who we're crossing with. So one of the questions is: if Chris Hay were running this APU, who would be in it?

[33:59] I would automate it straight away. If we cannot replace that unit with AI and agents, what are we doing in this industry in the first place, right?

[34:08] Uh huh. Great.

[34:12] And I'm being serious. What do you want? Go off and do deep research, find out what the societal impacts are going to be, et cetera.
[34:22] Why are we all launching deep researchers that go off the internet, scour every piece of information, and bring it together, right? So come on: if we truly want to talk about the future of work, actually put your money where your mouth is and invest in AI agents that are going to do this and tell you what your insights are. Otherwise, if you need human beings to go and do deep research, then what good are the deep research products? So seriously, I would start the organization, I've got my APU, and my first move would be to put in as few humans as possible and automate it all with AI. With humans checking the outputs at the end, of course. But that would be my point on this one.

[35:04] All right, I'd be negligent as a moderator if I didn't get Marina and Nathalie to comment on this wild proposal. I was not expecting Chris to go in that direction, but I should know better, of course. So, Marina, do you want to jump in?

[35:15] I love it, Chris. But you do need the trained people to check, and again, to know what questions to pose, calling back to what we were talking about at the beginning of the episode. And I think there are plenty of places where you're just not even going to have the knowledge to do deep research and scrape the internet. I'm thinking of more emerging economies, and places with more cultural differences of that kind. So yeah, we might learn a good amount about the U.S. and Western Europe, but I don't know how successful we're going to be in integrating into other places.
[35:44] So I love the goal, but there are going to be aspects here, especially knowing the questions to ask, knowing the differences in framing, knowing what's actually correlation versus causation, experimental setups, things like that. You're still going to need the humans driving, even though it'll be great to get them assistance.

[36:02] I love that you're like, no, I would never say that. We have reinforcement learning now. If you ask a good question, we learn, you get a cookie, the model gets better. It's all good.

[36:13] Right, because we definitely know what a good question is and can quantifiably evaluate that. That problem has been solved. Absolutely.

[36:22] All right, Nathalie, I'm going to give you the last word on what has been a wild conclusion to this episode.

[36:27] What I'm thinking is that what Chris is telling us we should be doing requires having a lot of data, and data that is really fresh. For the things we're doing right now, I don't even have documents yet. It requires human-to-human interaction: telling you what my research is about, what it is that we're doing internally, a lot of stuff like that. It's not going to go straight to models, in my opinion, just because we don't have enough documents available. So from a freshness perspective, to have really fresh information, we really do need humans in the loop. A lot of these decisions, and a lot of the things that are going to be really cutting-edge in organizations, will still have humans involved and talking to other people. And I think that's actually part of the magic. It would be very boring if we didn't have humans and human interaction.
[37:23] So yeah, humans will use models, and we'll have all this agentic stuff, but there's still going to be a lot of work that is human-to-human communication and human-to-human analysis.

[37:35] All right. Well, we will have to check in and see what the fate of the researchers is. If we're all out of work in a few years, then we'll know. As per usual, thank you for joining us, Marina, Nathalie, Chris. It's always a pleasure to have you on the show, and thanks for joining us, listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. We will see you next week, and I'll be calling in from Paris on the next episode of Mixture of Experts.