
FAccT Highlights: Fairness, Safety, Benchmarks

Key Points

  • Shobhit Varshney cautions that AGI still feels far off, predicting only “very intelligent machines” within the next five years rather than true general intelligence.
  • Host Tim Hwang outlines the episode’s focus: the FAccT conference on AI fairness, an AI safety interview with Leopold Aschenbrenner and Dwarkesh Patel, and the latest developments in Retrieval‑Augmented Generation (RAG) benchmarking.
  • The annual FAccT conference in Rio is highlighted as the premier venue for the newest research and debates on machine‑learning fairness, accountability, and responsible AI.
  • A recent AI‑safety discussion explores methods for forecasting AI capabilities and updates on safety efforts at OpenAI.
  • Benchmarking RAG systems is emphasized as a key indicator of industry progress, with experts weighing in on what current results reveal about the state of the technology.


**Source:** [https://www.youtube.com/watch?v=0sA6X6n3goc](https://www.youtube.com/watch?v=0sA6X6n3goc)
**Duration:** 00:40:28

## Sections

- [00:00:00](https://www.youtube.com/watch?v=0sA6X6n3goc&t=0s) **AI Forecasts, Fairness & Benchmarking** - The segment opens with Shobhit Varshney’s hot take on near‑term AGI feasibility and introduces the Mixture of Experts podcast episode covering the FAccT conference, an AI safety interview, and the latest RAG benchmarking developments.
- [00:03:11](https://www.youtube.com/watch?v=0sA6X6n3goc&t=191s) **From Fairness Theory to Organizational Practice** - The speakers discuss the shift from defining fair machine‑learning algorithms to the practical challenges of implementing responsible AI within companies, covering education, internal workflows, copyright, and labor impacts of LLMs, with examples from IBM.
- [00:06:14](https://www.youtube.com/watch?v=0sA6X6n3goc&t=374s) **Ethical AI Governance in RAG Systems** - The speakers critique using isolated legal groups for ethical AI, argue for interdisciplinary oversight to ensure fairness and responsibility in retrieval‑augmented generation, and highlight incentives and liability risks that drive proper organizational structures.
- [00:09:23](https://www.youtube.com/watch?v=0sA6X6n3goc&t=563s) **Governance Bottleneck Drives Platform Shift** - The speaker argues that lengthy governance approvals are throttling AI project delivery and advocates consolidating RAG processes into a unified platform with pre‑approved accelerators to eliminate the delay.
- [00:12:47](https://www.youtube.com/watch?v=0sA6X6n3goc&t=767s) **Choosing and Scaling Enterprise AI** - The speakers highlight enterprises’ growing focus on selecting the right model size and deployment approach for responsible AI, and on scaling workflow steps, such as improving OCR accuracy with larger language models, to boost task performance.
- [00:15:49](https://www.youtube.com/watch?v=0sA6X6n3goc&t=949s) **AGI “Situational Awareness” Sparks Policy Attention** - The speaker explains how Leopold Aschenbrenner’s extensive “Situational Awareness” essay on artificial general intelligence has vaulted into public and congressional focus, highlighting the contrast between superintelligence hype and the practical, everyday use of AI technology.
- [00:19:00](https://www.youtube.com/watch?v=0sA6X6n3goc&t=1140s) **AI Compute, Nuclear Analogy, Governance** - The speaker likens future AI power demands to nuclear reactors, arguing that merely adding compute can't fix poor data or algorithms, and insists that such massive, potentially hazardous technology should be overseen by national governments rather than private entities.
- [00:22:04](https://www.youtube.com/watch?v=0sA6X6n3goc&t=1324s) **Predicting AI Safety and Trends** - The speakers discuss governmental safety mandates, access and transparency at major AI firms, and debate whether linear extrapolation of current developments can accurately forecast AI capabilities by 2026.
- [00:25:11](https://www.youtube.com/watch?v=0sA6X6n3goc&t=1511s) **Powerful Models, Complex Trade‑offs** - The speakers argue that while AI models grow more powerful, their accuracy isn’t guaranteed, prompting concerns about escalating compute demands, concentration of control, environmental impact, and the necessity of responsible, purpose‑driven development.
- [00:28:20](https://www.youtube.com/watch?v=0sA6X6n3goc&t=1700s) **AGI Emerging, Not Yet Distributed** - The speaker contends that AGI‑level abilities already exist in narrow AI, will disseminate gradually across tasks, and that future super‑intelligent machines will surpass humans in knowledge sharing and collaboration, leading to safer, more powerful AI development.
- [00:31:25](https://www.youtube.com/watch?v=0sA6X6n3goc&t=1885s) **RAG Failure Points and Evaluation** - The speaker outlines how poorly formed queries cause incorrect retrieval, leading to hallucinated or incomplete answers in RAG systems, and stresses the need to assess relevance, answerability, faithfulness, and completeness.
- [00:34:29](https://www.youtube.com/watch?v=0sA6X6n3goc&t=2069s) **RAG Benchmark Saturation and Limits** - The speakers discuss how RAG evaluation metrics quickly become saturated, lack a universally accepted standard, and drive the community toward incremental improvements and new directions such as AI agents.
- [00:37:31](https://www.youtube.com/watch?v=0sA6X6n3goc&t=2251s) **Debating RAG Definitions and Evaluation** - Participants argue that RAG encompasses more than simple retrieval, highlighting the need for routing, structured/unstructured data integration, and new metrics for end‑to‑end accuracy.
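The evaluation dimensions named in the sections above (relevance, answerability, faithfulness, completeness) can be illustrated with a toy sketch. This is not the benchmark discussed in the episode; real RAG evaluators typically use LLM judges or trained scorers, and the simple token-overlap proxies below are my own illustration:

```python
def _tokens(text: str) -> set[str]:
    """Lowercased bag-of-words; a stand-in for real tokenization."""
    return set(text.lower().split())

def relevance(query: str, passage: str) -> float:
    """How much of the query is covered by the retrieved passage?"""
    q = _tokens(query)
    return len(q & _tokens(passage)) / len(q) if q else 0.0

def faithfulness(answer: str, passage: str) -> float:
    """How much of the answer is grounded in the passage (a hallucination proxy)?"""
    a = _tokens(answer)
    return len(a & _tokens(passage)) / len(a) if a else 0.0

def completeness(answer: str, reference: str) -> float:
    """How much of a reference answer does the generated answer cover?"""
    r = _tokens(reference)
    return len(r & _tokens(answer)) / len(r) if r else 0.0

passage = "the annual facct conference is happening this year in rio"
print(relevance("where is facct happening", passage))      # 0.75
print(faithfulness("facct is happening in rio", passage))  # 1.0
```

Answerability (whether the question can be answered from the corpus at all) is harder to proxy with overlap and is usually judged by a model, which is part of why, as the later sections note, no universally accepted standard has emerged.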
## Full Transcript
0:00 Shobhit Varshney: I've never seen, uh, like AGI being more plausible than we are standing right now. So my hot take: where we are right now, 2024, um, five years out, I would see us, uh, be able to get to very, very intelligent machines.

0:23 Tim Hwang: Hello, and happy Friday. You're listening to Mixture of Experts. I'm your host, Tim Hwang, back again. Each week, Mixture of Experts distills down the week's most important headlines and chatter in the world of artificial intelligence. From research papers and product announcements to ethics, governance, and just plain gossip, we've got you covered. This week on the show: first, the annual ACM Conference on Fairness, Accountability, and Transparency, or FAccT, is happening this week in Rio. We'll talk about the latest developments in ML fairness and the state of responsible AI. Next up, Leopold Aschenbrenner's AI safety screed "Situational Awareness" hit the airwaves with a widely talked about interview with Dwarkesh Patel. What's the best way to forecast AI capabilities, and what's going on with safety at OpenAI? And finally: benchmarking, benchmarking, benchmarking. This week we talk about the latest in RAG benchmarking and what it tells us about the industry as a whole. As always, I'm joined by an incredible group of experts who will help us cut through the noise and drop some hot takes as we go. Vagner Santana, Staff Research Scientist, Master Inventor, and, importantly, debuting for the first time on MOE. Vagner, welcome to the show.

1:26 Vagner Figueredo de Santana: Thanks for having me.

1:28 Tim Hwang: Uh, next up, Marina Danilevsky, Senior Research Scientist. Welcome back to the show.

1:32 Marina Danilevsky: Thanks, happy to be here.

1:34 Tim Hwang: And Shobhit Varshney, who has been with us since episode number one, uh, senior partner consulting on AI for US, Canada, and Latin America.
Shobhit, welcome back to the show.

1:42 Shobhit Varshney: Absolutely love these. Thanks for having me again.

1:49 Tim Hwang: All right, well, let's just jump right into it. So the first story I want to cover is the annual FAccT conference is happening this year in Rio. So for those who don't know, it is, uh, arguably the leading conference on topics of machine learning fairness and responsible AI. And I thought this would be a good jumping-off point, just because if you've been watching this space for some time, responsible AI and ML fairness has become kind of a buzzword that lots and lots of people have used in recent years. And I think these conferences are a good time to check in on what this kind of, you know, state of play is in fairness and accountability questions in AI. And Vagner, one of the reasons I wanted to have you on the show was, um, you've been watching kind of the papers and the chatter around the conference. Maybe I can just kind of toss it to you first for our listeners. Uh, any sort of patterns or trends that you've noticed, I think this year, um, at FAccT? If there's particular papers that you think people should check out, just curious about your review or your kind of thoughts on, um, what you're seeing out there, um, uh, at this year's conference.

2:44 Vagner Figueredo de Santana: Well, there are interesting discussions around, um, uh, synthetic data, around how, uh, how people are using LLMs to create data and then also to assess LLMs using LLMs. So there, there's this discussion going on also about, uh, responsible AI. Uh, one of the papers I selected to discuss with y'all, I think, has to do with how to, how people are learning about responsible AI on the job. I think that that's important, because people are getting interested and people are following and trying to find resources.
But then that comes with all the complexities of working in an organization. So that is one aspect as well. And, well, the other aspect connects with copyright and also how to deal with all the labor that is being impacted by the, the, uh, the use of LLMs in a wide range of jobs around the world.

3:42 Tim Hwang: Yeah, for sure. And I did want to pick up on that second theme specifically. You know, there's much more, as is true with all these conferences, there's many more papers than you'd ever have time to discuss. But I think what's so interesting about the responsible AI topic is, you know, this is really an evolution, I think, in fairness in ML. Where I would say even a few years ago, basically a lot of the attention was like, can we define in computational terms what a fair machine learning algorithm is? And it kind of feels like there's a lot more work now that's happening in this much broader question, which is, okay, well, we have all these techniques and approaches around fairness in ML. How do we actually get, like, an organization to implement it? How do we get people to learn about it? What are the techniques that people use? And, you know, Vagner, I think in addition to your research, it kind of sounds like you've been doing some work on this internally within IBM. And so I'm kind of just curious if you want to talk a little bit about that paper that you mentioned and then just kind of map it to your own experience. I think I'm curious about, like, what you're sort of learning as someone who's, you know, very much in the trenches, you know, trying to get this work to work.

4:40 Vagner Figueredo de Santana: Yeah, and one, one of the aspects that, um, that the paper covers, and, and has to do with incentives.
And we need to be aware of the symptoms that our organizations have before thinking about responsible AI, because otherwise, uh, well, we'll be facing a lot of blockers all along the way. Um, and also, uh, there is an interesting aspect that the paper highlights about, um, the, the discipline identities that we have when we are, like, in, in, in hard technical teams; they have their own discipline identities. And, uh, when they are looking for, uh, let's say, resources about responsible AI, they'll probably go into resources they are used to looking for. So they're going to be looking for technical, uh, technical libraries or metrics. And sometimes we need to go beyond this, um, our own discipline identities, and look for other skills and other disciplines, and to learn more about, let's say, sociotechnical impacts, right? Go beyond focusing on, let's say, some fairness metrics, for folks, uh, focusing on more technical aspects. And the other way as well, right? For people, uh, uh, thinking about, uh, let's say, indirect impacts on society, they also need to be aware of the daily job of data scientists and coders and researchers. And how can we, like, do this, uh, uh, a connection, right?

6:04 Tim Hwang: Yeah. I think your rundown, I think, runs into, or I think highlights, I think, a bunch of the issues. Uh, you know, I think it was a, it was a joke that I had for, uh, uh, with a friend for a while, that, like, oh, the main thing a lot of big companies would do when they wanted to do, like, ethical AI or responsible AI would be like, well, we're going to create, like, this, like, secret group of lawyers that will just determine everything. Um, and it was like, this is, like, not a good way of, of doing, you know, things.
Um, and I guess I'm kind of curious. I mean, Marina, if I can bring you into this discussion, you know, um, I guess maybe one thing I'd be curious about is, like, how you all think about things like fairness and responsible AI in, in RAG, right? Where you're literally trying to pull information from another source. Um, and, and I guess I'm kind of curious about, like, if you've kind of, you know, have thoughts on this particular discussion, right? Like, how should organizations kind of best organize themselves to, to do this right? Because I think part of it is this kind of interdisciplinary crosstalk, which I think organizations of different size, you know, do better or worse at, you know, in different capacities.

7:04 Marina Danilevsky: I think something that Vagner said about incentives really pops up here as well, which is: why should you care? Well, it's because you would like your customers finally to be using your RAG system. And if it is giving answers that are not, um, not even so much fair, but there's a risk that, uh, it's going to give something that is irresponsible, that is going to lead to your users being misled or being upset or taking legal action, then your solution is not going to be, uh, it's not going to be taken. So actually, because we usually are looking at enterprise use cases, we are very, very incentivized to make sure that we are communicating things that are, you know, fair, ethical, regardless of what our own ideas are. It's because if we don't succeed in that, it will not be purchased. The risk is too high. Um, there's too many, you know, fun stories in the news about, uh, what happens when you don't pay enough attention to that.

7:55 Tim Hwang: So you're actually seeing that.
'Cause I think, I dunno, I had a fear, you know, um, which I still kind of have, which is like, maybe this discussion is gonna become a little bit like, um, like data privacy. Where, like, I think early on there was kind of this idea that, like, oh, well, the minute there's a really big data breach, then everybody's suddenly gonna care about data privacy and security. And, like, consumers will all prefer the, the better privacy option, right? But then I think you can make the argument that one of the things that's happened is that there's just, like, so many huge data breaches now, so many big failures, that, like, almost, like, the Overton window has shifted. We're just kind of like, oh, you know, someone leaked billions of customer records; I guess that just happens. But that is something that you're seeing, it kind of sounds like, at least in fairness: because we've had all these high-profile failures, it's not necessarily that people have just become resigned to it; it actually still remains kind of like a thing that people are really concerned about.

8:43 Shobhit Varshney: We've seen this quite a bit, right? If you look at, uh, the AI culture, the AI framework, the responsible AI framework, and what we should prioritize and which ones are high risk, how do you categorize use cases, so on and so forth. And there's actual tooling and platforms that are needed to go drive these at a price, right? And those three layers have to be addressed one by one. Um, the AI culture around, hey, look at your day-to-day workflows and see where you can apply AI, and you have to do this in a responsible way, and here's a framework around it.
Unfortunately, the reality on the ground for most of the Fortune 100 companies that I work with, the responsible AI team: you have to go, it's easy to go create a governance board, and you go to the governance board for guidance and coaching and making sure that you're not doing the wrong things. Unfortunately, it becomes, I'm going to quote Lord of the Rings, Gandalf standing on a bridge and saying, "You shall not pass," right? "Go back to the shadows." So we've, we've somehow created a, uh, forcing function that anything that goes to the governance board adds about two months of delay to a project. So the value of the unlock for the business gets diminished, and I might as well not even deal with this and I should just go stick to my RPAs or automation scripts or regular AI stuff, and that'll be just fine, right? So they've become a rate-limiting step at this point, and that has to fundamentally change. And for that, the next layer that I was talking about, in terms of platforms, that becomes more and more critical.
So instead of saying that, hey, you need to go figure out all these 20 different checklists in your RAG pattern so I can know exactly where the data is coming from, there's, uh, there's metrics that I need to report against, and so on and so forth, now you start to move towards a platform approach where you say, hey, use the platform's pre-approved accelerators for all the RAG. When we looked at RAG patterns within IBM Consulting, within a week's time we had like 121 different, different ways in which people were doing RAG, and we said, guys, time out, we've got to go consolidate. We'll create scribe flow, we'll create a mechanism that has the best of all techniques in one single spot, right? So when you start to get to a platform, then you come to a point where the governance boards are pointing you towards accelerators versus becoming a "you shall not pass" moment, right? I think that the whole culture leading to governance, then leading to the stack. And we're doing this with a lot of our Fortune 100 companies. Recently, a couple of weeks back, I had Pepsi on stage with us, where we were talking about how we're helping them build a culture of responsible AI and the frameworks and so on and so forth. This is one of the many examples where we've had to go do this end to end, from culture to frameworks to actual tooling that goes and deploys that.

11:12 Tim Hwang: Yeah, that's really interesting. And actually, I should add that I'm surprised that it has taken this long for us to get to a Lord of the Rings reference. I think we're at episode six right now, and this is the first one where we've actually, uh, heard one. Yeah, exactly.
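The "pre-approved accelerator" idea Shobhit describes can be pictured as a registry of components a governance board has already vetted, so teams validate a pipeline against it instead of queuing for a two-month review. This is a minimal, hypothetical sketch; every stage and component name below is invented, not IBM's actual tooling:

```python
# Components a governance board has already vetted, keyed by pipeline stage.
# (Illustrative names only.)
APPROVED = {
    "retriever": {"bm25", "dense_embedding"},
    "generator": {"small_llm", "large_llm"},
}

def validate_pipeline(pipeline: dict[str, str]) -> list[str]:
    """Return a list of governance issues; an empty list means the
    pipeline is composed entirely of pre-approved components."""
    issues = []
    for stage, choice in pipeline.items():
        if stage not in APPROVED:
            issues.append(f"unknown stage: {stage}")
        elif choice not in APPROVED[stage]:
            issues.append(f"{stage} '{choice}' needs governance review")
    return issues

print(validate_pipeline({"retriever": "bm25", "generator": "large_llm"}))  # []
print(validate_pipeline({"retriever": "homegrown_scraper"}))
```

The design point is the one made in the conversation: the board's effort moves from gatekeeping individual projects to curating the approved set, so the default path is fast.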
Well, and I think, I don't know, I mean, maybe one last nuance to kind of touch on, and I'd be curious to get the panel's thoughts, and Vagner, maybe to throw it back to you: like, you know, so for the last few episodes, we've all been very excited about open source. And it feels like part of the problem of open source is that, you know, suddenly, like, your fairness methodologies are almost competing with, like, just being able to, like, pull something off the shelf and, like, deploy it in any reckless way that you really want to. Um, and I guess I'm kind of curious about, like, how we think about sort of responsible AI going forwards in a world where, like, anyone can just pull AI off the shelf and use it. Um, because it feels like in a world where, like, maybe there's only two or three platforms, you really can say, okay, well, if you want access to this advanced technology, you're going to have to go through this additional compliance cost, even if it takes you, um, a little bit more time. But that kind of lever is, I don't know, from my point of view, seems to be, like, breaking down a little bit as it becomes more and more accessible. I don't know if you'd buy that. You might also just say, Tim, you're totally wrong. But I don't know, Vagner, if you've got any thoughts on that, or anyone, really.

12:19 Vagner Figueredo de Santana: In terms of open source, I think that the interesting aspect is that, uh, people, um, have more transparency, as, as we all know. And, and also thinking about, uh, well, fully open source models, because people are also discussing that when you only have the model and you don't know the data you used to train the model, you just have, have, have an open source model; and when you know more about the model, then you have a fully open source one, right?
So I think that that's important, and people are getting more and more interested, interested, in that. And when we talk to clients, they are also interested in, um, finding the right model for the right task. I think that that is also interesting. Uh, that, uh, pattern that is emerging, like people discussing, okay, is this the right size of model for solving my problem? Is, is this, uh, like, is this language model or, uh, or generative AI fully open? Uh, can I host that in my own private cloud? So these are questions that are appearing when we talk about responsible AI right now.

13:23 Shobhit Varshney: Yeah, we, uh, again, I'm coming in from a very enterprise approach to this. My, my square focus is how do we scale these, right? And when you look at a, uh, step-by-step process, any workflow that's happening in an organization today, right? Seven different steps. Uh, step number one, you're going to pull some data. Step number three, you're going to do some fraud detection. Step number four, now you're going to go extract something from a document, an invoice, a contract, something that came to you, right? Now, say, where I was able to do OCR and pull that out with about 80 percent accuracy, so far, that's the best of where we were. Now, all of a sudden, we have LLMs and we say, hey, I have reason to believe that I could potentially get about 90, 92 percent accuracy and squeeze more out of this document, right? So now you're saying about 10, 12 points of additional benefit that you can derive from it, right? At that point, we stop and say, if you have reason to believe an LLM could do this, let's talk about constraints. The constraints around cost and envelope, right? How much can I afford if I'm doing this a thousand times or a million times? There's a different ROI attached to it, right?
Then there is security, where the data resides. Models follow the data gravity. They go, we deploy them closer to where the restricted data sets are, and so forth. Then you start to look at how quickly you need an answer; the latency of a model matters. You're trying to start figuring out, okay, uh, from a compliance perspective, when I have to go explain to somebody how I came up with this answer, which means I need auditable responses, I need more deterministic responses in certain use cases, and so on and so forth. So you come up with a set of constraints, and given those five, ten different constraints, now you have two or three good athletes that you start to test with. And then from there on, we start to move towards metrics and see which one is giving me more versus the others, and so forth. But it's very critical, at a step level, at a sub-task level, you're trying to figure out which LLM is going to do the job. And we're getting away from, hey, earlier we said, hey, can a GPT-4 model do the entire workflow end to end? So we'll talk about that in a little bit, but I think we're still at the sub-task level. We're surgically infusing AI and gen AI and seeing if it can do this one thing incredibly well. I'll take care of the rest before and after.

15:18 Tim Hwang: Yeah, no, I think it makes a lot of sense, and I think it goes to this really interesting question, which we won't have time to address today but should do on a future episode, which is, you know, what does that mean for responsible AI, right? Because it's like, you know, you have lots and lots of sub-modules that may have, you know, various, various different types of problems.
There's almost kind of a question about whether or not any one deployment is responsible, but then whether or not the whole system hangs together is a whole nother set of analysis, right? That actually is another question.

15:49 Great. So I want to move us to the second topic of today. This is a really big week if you track the discourse around artificial general intelligence. Leopold Aschenbrenner, who is a former OpenAI Superalignment team member, published this massive online screed called "Situational Awareness." Um, this would have kind of existed, I think, as sort of a weird, obscure screed, but Leopold ended up doing an interview with Dwarkesh Patel, the sort of influential tech podcaster, and this story and this document has now just gone everywhere. So I've caught up with friends who, you know, work in policy in D.C. saying, we're getting calls from congressional offices asking what our take is on "Situational Awareness." So I just want to take a quick breather here, um, uh, because the claims of "Situational Awareness" are quite breathless. Um, the argument is: if you take all the existing trends in AI and you project linearly, we will reach a point where AI becomes, you know, sort of, um, uh, transformational in its impact. And so I think this is kind of a great opportunity to bring in, uh, Shobhit to this conversation. Because the way I sort of see the discourse is that there's a circle of people who are in, like, AGI superintelligence land, right? Who are like, the AI is going to take over the world, right? But then I think, like, there's this vast group of other people who are just, like, doing work with the technology, who are, like, talking to companies that are implementing the technology. And I'm kind of curious, as someone who's, like, really right, you know, at the front lines of that.
Like, are companies, like: situational awareness, do we need to be worried about AGI? Like, does that even enter into the commercial discussion? Or is this, like, completely, like, almost in a parallel dimension?

17:26 Shobhit Varshney: I've never seen, um, like, AGI being more plausible than we are standing right now. So, my hot take: where we are right now, 2024, um, five years out, I would see us, uh, be able to get to very, very intelligent machines. Now, the definition of AGI has been very weak, right? Everybody has their own interpretation of what artificial general intelligence would mean. And even if you compare two different people, it's very difficult for us to really have a good metric on, is this person on the show really intelligent or not, right? Like, if you ask my wife or my kids, you'd have a very different answer. So it's a very difficult point, even being able to define what AGI looks like, right? But if you just talk about intelligence, we've been doing an incredibly good job at making progress every two years. If you, like, step away from half-a-year increments and look at a two-year horizon, right? When, uh, GPT-4, uh, stopped training, and, you know, they've, they've discussed this, in 2022, you're looking at about a half-a-billion-dollar spend, uh, about 10 megawatts, uh, about, uh, 25,000, uh, A100 GPUs from Nvidia at that point, right? That's kind of, kind of what they, uh, must have spent doing this. 2024, today, you can have a hundred thousand H100 equivalents. You're seeing what, how much investment Meta and others are making into this, right? And then you, uh, you had this huge, big announcement with OpenAI and Microsoft that they're going to establish a hundred-billion-dollar supercomputer.
Right now we're talking about something that starts to get into the 2026 timeframe, when you can potentially have a gigawatt cluster. You can have this big, big giant machine, and the power needed for it would be equal to, say, the Hoover Dam, right? Or a nuclear reactor, right. So now you're trying to start to say that I can solve a lot of tough problems by throwing more compute at it. That's just part of the equation, right? There's better algorithms, there's better data that's needed. It's a combination of those. You can't overcorrect for bad-quality data by having more compute. So we're getting to a point where now you would get to more and more compute power being available. If you keep extrapolating that out, I see a situation where we would have more than a nuclear reactor attached to one of these big machines, and you can then have a huge cluster that's just intelligently, look, crunching through numbers. I think what he's extrapolating was, by 2030, we'll be at 100 gigawatts. I think that's a stretch. That's about 20 percent of U.S. electric production. But it does bring in a few different aspects of the safety of the AI: who should have access to it, nations versus private sector. We solved for that with nuclear energy, saying that, hey, only the big national governments should have access to nuclear power, right? We, uh, to a nuclear arsenal. And then we trust that there's a mechanism in place that has checks and balances in the government that has access to something that's super, uh, foundational, with such a massive impact on humanity. It should be in the hands of governments. But if you start to look at some of the world leaders around right now, a lot of them, and potential elections coming up and stuff too, they don't quite understand what we're dealing with.
20:26 I'm just looking at the axes of AI. I could easily see a geopolitical issue here, where it matters which country has those clusters. If you go back in time and look at what we did during the Oppenheimer era, you would not want to have that entire establishment in a different country, right? The US went out of its way to ensure that it was being built inside the United States, right? So you'll see a lot more concentration of AI superpowers, and of how much they're investing in building the energy requirements, building these massive clusters. And if you follow the trajectory of electric production: I was giving a talk recently on the impact of AGI and supercomputers and such, and I had looked in detail at per capita electricity generation, right? How much electricity does each country generate? If you just look back at the last 30 years, the United States has declined 5 percent in electricity production per capita. The United Kingdom has gone down 23 percent in the last 30 years. China has gone up nine times in energy production per capita, right? So you're starting to see axes of power: who has access to what kind of energy, who has access to what kind of compute power. And then, to your earlier point, Vagner, once you start to open up these models, with open weights being available, you're essentially giving people the recipes for replicating these things on your own, right? So I think we're at this weird intersection of private versus government. Does AI intelligence then dictate geopolitical power? When does that tip over? At what point does the government start getting really, really serious about safety?
22:09 Who has access to these technologies inside of OpenAI, or the big tech giants, and things of that nature? How open are you about that? What data is going in, and so forth. I'm just very fascinated by the impact it's going to have.

22:20 Tim Hwang: It's actually, I don't know, I feel like you surprised me there, right? I thought you were going to go in a completely different direction. When I talk to many folks who are in enterprise, on the business side of this, they'll basically say: this is not happening, this is not realistic. Look at what's happening with AI right now, it's never going to be like what this guy Leopold says. And it feels like you're going the opposite direction. You're saying: take all the existing trends, extrapolate them out, and we're going to be in a really weird place in 24 months. Marina, Vagner, I'm curious if you two agree with this assessment. From the researcher side, is it right to say these linear trends are basically what we should use to think about capabilities in, say, 2026?

23:04 Marina Danilevsky: Tim, we all know that nothing wrong has ever happened from linear extrapolation in the history of humans.

23:10 Tim Hwang: It's very dependable.

23:12 Marina Danilevsky: Very dependable. This is always how things go. I think I do have a bit of a different perspective than Shobhit, maybe a little more like the one you described: I don't agree with all of the linear extrapolations. And of course we can all have our own perspectives on how things are going to go. But I think that even if you continue to throw more compute and more data at the way AI is currently implemented, we're just in another wave.
23:38 We've gone through waves before; we're in the current wave. There are, to my mind, limitations to what you're going to be able to achieve. And it is not completely clear how you will actually get to an AI that never recommends you put glue on pizza just because you gave it more compute and more data. So I think that while we are closer, we're still not there. In my mind, there's at least one, or several, more technological waves that need to come before we really get there. So is there going to be a lot of interesting stuff coming? Sure. The points that Shobhit raises about accessibility, and who gets to actually have these models, have a lot of really interesting implications for disseminating misinformation, for impacting how people perceive information, and so on. Do I think that's going to get to AGI? Personally, no. But it doesn't mean it's not going to get to places that are very impactful.

24:35 Tim Hwang: Yeah, and I think there's actually one thread, and maybe I'll throw it to Vagner, I'm curious about your thoughts, that I hadn't really thought about, which is very interesting. There's this kind of bet about what compute actually gets you, right? There's one view which is: so long as I feed in more data and more compute, the representation in the model will eventually just become accurate. Like, we'll solve the "just eat rocks" problem by basically computing our way out of the problem. I guess, Marina, you're saying, and I don't want to mischaracterize you, that there are actually some genuine questions as to whether that will even happen, right?
25:11 The models will become more powerful, but they might not necessarily become more accurate, right? We normally think about things getting better as trending in a certain direction. I guess you're saying we can see improvement, but it might be very multidimensional, in a way that's a little counterintuitive.

25:27 Vagner Figueredo de Santana: Yeah, I think the issue with requiring more compute to improve or grow the already really large models is that we'll be seeing fewer and fewer organizations controlling everything. In terms of responsibility, I think that's something that may be concerning. And in terms of environmental impacts, people are also thinking a lot about the energy that these models require, not only for training but also for inference at scale, right? So balancing all of these, I think that's a big challenge. And in terms of responsibility, what we are always trying to think about is: do we really need to create this technology right now? What are the problems that we're going to solve? Can we solve the problems we have with the technologies we already have? There are a lot of interesting questions, and in terms of responsible AI we need to think about these all the time, not only as an afterthought, right?

26:30 Tim Hwang: Like, part of the responsibility might just be: no AI.

26:33 Shobhit Varshney: I think the cost and impact of this is going to start plummeting, right? Just look at the compute power that's in your phone today. So over time, we'll solve for this. From an enterprise perspective, Tim, to your original question:
26:47 I think we are overcomplicating how work gets done in organizations. If you have access to hundreds or millions of MIT and Harvard and Stanford grads, and you put them into something very mundane and say: you're going to do procurement analysis, you're going to get an invoice, you're going to compare it against something, right? That's the kind of work that happens in an enterprise. So I have reason to believe that if you put a really intelligent person, or the equivalent of a person, digital labor, inside a particular workflow, that sub-task will get done very, very well. There are all kinds of guardrails you can create around that particular task. So if you look at it in levels, the first level is: can I do a sub-task really well? In the previous discussion I said, step number four, I'm going to extract something out. Can I do that task really well? That starts to become a specific unit of work. Then you go one level up and say: today, a human asks each machine, each LLM, to go do different steps. Can I replace that with an orchestration where an LLM agent can figure out a plan, manage the memory, and automate the entire flow end to end? There's a very plausible path to get there: figuring out how each step is done, auto-orchestrating all of those workflows, and then you start to move up the hierarchy from what a summer intern would have been told to what a human supervisor would have done. And you always double-check what a summer intern does. That's where we are today. Over time, you see a progression toward work itself getting automated with very, very high accuracy, especially with the cost of AI just plummeting.
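The two levels Shobhit walks through, a narrow sub-task done well, then an orchestrator chaining sub-tasks end to end, can be sketched in code. This is a hypothetical illustration only: the function names, the string-parsing stub, and the "memory" dict are all invented for the example, not anything described in the episode.

```python
# Level one: one narrow, well-scoped sub-task (stubbed with string parsing).
def extract_invoice_total(invoice_text: str) -> float:
    """Pull the dollar amount out of an invoice's text."""
    for token in invoice_text.split():
        if token.startswith("$"):
            return float(token.lstrip("$").replace(",", ""))
    raise ValueError("no total found")

# Another narrow sub-task: the comparison step of a procurement analysis.
def compare_to_contract(total: float, contract_limit: float) -> str:
    return "within limit" if total <= contract_limit else "FLAG: over limit"

# Level two: an agent-style plan that chains sub-tasks and keeps working state,
# the part you double-check the way you would a summer intern's work.
def orchestrate(invoice_text: str, contract_limit: float) -> dict:
    memory = {}
    memory["total"] = extract_invoice_total(invoice_text)
    memory["verdict"] = compare_to_contract(memory["total"], contract_limit)
    return memory

result = orchestrate("Invoice #42 ... amount due $1,250.00", contract_limit=2000.0)
print(result)  # {'total': 1250.0, 'verdict': 'within limit'}
```

In a real system the stubbed functions would be LLM calls with guardrails around each, and the orchestrator itself would be an LLM-generated plan rather than a fixed sequence.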
28:13 Tim Hwang: Yeah, that's almost a great way of thinking about it, Shobhit, to your earlier point about AGI having this very amorphous definition. William Gibson has this quote, right: the future is here, it's just not widely distributed yet. What you're saying is: AGI is here, it's just not widely distributed yet. For certain types of job tasks, the AI we have right now can do all the possible things. And you're basically talking about how far up the organizational chain this thing will go.

28:43 Shobhit Varshney: I don't think we'll get to a point where we have a crisp definition of what AGI is and we say, hey, today, rah rah, open the champagne, we've reached it, right? It's incremental progress. It's a different definition in each field, in each domain, in each task, right? So I think the right frame is to say that machines will get super intelligent over time and will exceed human intelligence in certain tasks. And one thing that humans don't do really well is share our knowledge amongst ourselves, right? If you put two experts in a room, it's very difficult for them to actually go at a problem together. We don't do a really good job of expanding out using the network effect. I think that's going to change when you have super intelligent machines that can talk to each other, and drive better safety, better research, and build better algorithms all together. So I'm very excited about the direction we're going.
29:31 Tim Hwang: So I want to move us on to the final topic, and the way I want to tee this up is that there's this famous clip of Steve Ballmer when he was CEO of Microsoft. If you've seen Steve Ballmer before, he's this big muscular guy, very sweaty on stage, and he's just shouting: developers, developers, developers. And I feel like if you played that scene again today, people would be shouting: benchmarking, benchmarking, benchmarking. Because it's becoming such an important aspect of the supply chain of AI. There are lots and lots of things we could talk about with benchmarking; we have, and we will continue to. But Marina, particularly with you on the line, I figured it would be great to zoom in specifically on RAG, and do a little bit where we talk to the listeners about what's happening in RAG benchmarking, and then from there talk about what that tells us about how benchmarking is evolving, not just in industry but in research as well. So if you will, I wanted to throw it to you: we'd love a short crash course in how people think about measuring the quality of RAG. That'll give us something very concrete to talk about in terms of benchmarking generally.

30:46 Marina Danilevsky: Sure, sounds good. I will say, benchmarking has been around for a very long time. It's always been extremely important for systems, databases, ML, everything. So I understand folks are maybe looking at it right now, but...

31:01 Tim Hwang: Yeah, you were into it before it was cool.
31:02 Marina Danilevsky: That's right, that's right. We were into it before it was cool; we discovered the band first. So, the thing right now with RAG: let's talk about what RAG is again, real quickly, and then we'll see what you need to evaluate in the retrieval, the augmented, and the generation parts. All right, what are you trying to do? You're trying to give information that is supported by knowledge, so you can say: here's knowledge I can rely on, here's the information I'm giving you. So what happens with RAG? A user has some sort of inquiry. You fetch something that is related, and you say: I'm going to use this information to give you the answer. Okay, where is all of this going to break? It's going to break when the query is not well formed, so you're not fetching the right thing. And if you're not fetching the right thing, then you don't know there's information you didn't get. That's something to evaluate. Then, whether you fetched the right thing or not, you have to generate an answer based on it, and there are multiple ways that's going to break down. You're going to have a model that gives you an answer that's not based on the information you fetched. It's going to give you an incomplete answer. It's going to give you an answer that's a mix: some of it drawing from the retrieved context, some drawing from its parameters, some just made up, because the model decided to go off, especially a little later in the response. You have to check all of that. Can you force the model to give a different answer because you told it: no, no, you told me it was this way, but now assume it's that way? Can you mess it up that way?
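The loop Marina describes, query, fetch, generate, can be made concrete with a toy pipeline that makes her two break points explicit. This is a minimal sketch under stated assumptions: the document store, the keyword retriever, and the template generator are all stand-ins invented for the example; a real system would use a vector index and an LLM.

```python
# A toy document store standing in for a real knowledge base.
DOCS = {
    "doc1": "The contract expires on 2025-06-30.",
    "doc2": "Invoices are due within 30 days of receipt.",
}

def retrieve(query: str) -> list[str]:
    """Fetch related passages; a malformed query fetches nothing (break point #1)."""
    return [text for text in DOCS.values()
            if any(word in text.lower() for word in query.lower().split())]

def generate(query: str, passages: list[str]) -> str:
    """Answer grounded in the fetched passages (break point #2 is answering without them)."""
    if not passages:
        return "I don't know -- nothing relevant was retrieved."
    return f"Based on the retrieved context: {passages[0]}"

answer = generate("When does the contract expire?", retrieve("contract expire"))
print(answer)  # Based on the retrieved context: The contract expires on 2025-06-30.
```

Each of the failure modes she lists attaches to a specific seam here: query formation before `retrieve`, retrieval quality at its output, and groundedness of what `generate` emits relative to `passages`.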
32:24 Can you give the answer quickly enough? These are all the things you have to manage to evaluate. So when people talk about context relevance, or answerability, or faithfulness, or completeness, all of these different metrics, this is really what we're talking about with evaluating RAG. A couple of points here. You can try to benchmark against a gold answer, which usually works in cases like classification, or anything where there is a very clear single answer. The problem with generative AI is, remember that word, generative? Everything is created fresh, which means there might have been a lot of different ways to create an answer. So when you say, I'm going to use some sort of overlap metric, like ROUGE or BLEU, anything of that kind, that's not always going to be great. It'll tell you if you've gone off completely, but it won't catch the subtleties: maybe you rephrased the answer a little differently, but it still would have been acceptable. So, problems there. Then you say, okay, let's not have references, let's just judge the answer as it is. The problem with all of the metrics I just mentioned is that nothing has a completely clear definition, and it can't, because you cannot get everybody to agree on what "complete" means, what "faithful" means. Believe me, I've tried. We have had so many arguments in research.

33:37 Tim Hwang: Well...

33:37 Marina Danilevsky: I mean, it's kind of existential, because, like...

33:39 Tim Hwang: Yeah, sorry, go ahead.

33:40 Marina Danilevsky: No, it is, you're completely right. It is: what does it mean, for you, that an answer is complete? Not only can the researchers not agree; then the customers can't agree.
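Marina's point about gold-answer overlap metrics can be shown with a few lines of code. This is a toy ROUGE-1-style recall, not the official ROUGE implementation, and the example sentences are invented: it catches an answer that has gone completely off, but it penalizes a perfectly acceptable paraphrase.

```python
def unigram_recall(candidate: str, reference: str) -> float:
    """Fraction of reference words that appear in the candidate
    (a simplified ROUGE-1 recall)."""
    ref = reference.lower().split()
    cand = set(candidate.lower().split())
    return sum(1 for w in ref if w in cand) / len(ref)

gold = "the contract expires on june 30"
print(unigram_recall("the contract expires on june 30", gold))  # 1.0 -- exact match
print(unigram_recall("that agreement ends june 30", gold))      # ~0.33 -- fine paraphrase, low score
print(unigram_recall("bananas are yellow", gold))               # 0.0 -- off-topic, correctly low
```

The middle case is the trouble she describes: "that agreement ends june 30" would be an acceptable answer, but surface overlap with the gold string cannot tell it apart from a wrong one without reference-free judgments of faithfulness or completeness, which is where the definitional arguments begin.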
33:48 So when you're talking about benchmarking, there are bits that you try to benchmark first, parts of the system. As Shobhit was saying: how do you do on just the retriever part? How do you do on just the generative part? How do you do on just faithfulness? And the problem is that here, the whole is not the sum of its parts. You put all of that together in an end-to-end experience, and it is not equivalent to "I checked every part individually, therefore I know how it's going to go together." It doesn't work that way. It's a very difficult thing to actually benchmark, because the more parts there are to a system, the more complex it is to know what happens when you put them all together in different ways. So that's actually why people are so interested in the benchmarks right now: the state of it is a little confusing, a little incomplete. We're asking: what is it that we can actually trust? And then, of course, as we talked about in previous episodes, the benchmarks get saturated very quickly. As soon as you have one out, a few months later everybody can already deal with it. You've got to thank it for its service and move on.

34:44 Tim Hwang: Yeah. What I love about this is that it starts very tactical and then becomes existential very quickly, where you're basically asking: what is truth? What is clarity, anyway? Questions to which there kind of is no answer. So I don't know, Marina, if this is a good way to sum it up: are you saying that there is no RAG benchmark, in a certain sense? That there's no commonly understood norm for judging RAG quality?
35:09 Marina Danilevsky: We do our best, and I think there are incremental implementations that get better and better: we have one benchmark, realize something it didn't cover, do another one, do another one, do another one. So there are incremental approximations of what is and is not going to work. And at some point it's probably going to reach a level where we say: all right, this is good enough, we've saturated this as much as we can. But what ends up happening is that you then move to other use cases, right? Shobhit mentioned agents; it's a very interesting direction we're going in. Well, now you don't just have text going in and out of the RAG. Now you have: I am calling functions, I am using tools, I'm having something else happen in the middle. My execution plan as an LLM agent is absolutely all over the place. Now you don't just have an R and a G; you have I-don't-know-how-many things, every single time you add something. Now how do you benchmark? So we are all having a lot of fun constantly making new problems for ourselves that we then have to test, which then reveal additional problems, and things we can implement.

36:10 Tim Hwang: Yeah, I love the idea that eval design itself is trying to hill-climb; it has a very similar pattern to the evals themselves.

36:20 Shobhit Varshney: So, Tim, just working with real clients: with one of my big clients we're looking at contracts, and RAG is a great example of that, right? Given a few thousand contracts, I want to ask questions against them and expect to get good answers, right?
36:33 So when you start to look at the kinds of questions and queries people are asking, what's going to be insightful for them? A level-one question is: can I find something in a contract, like what's the expiry date of the contract, or is there an exit clause in this contract or not? That's a simple RAG pattern, right? Very naive, and it can work. Then you start to look at: this is a contract, but it has amendments stapled to it, and the answer for the end date is actually in the third amendment, which overrides the previous amendment, right? So now you're looking at the whole chain of thought of how to read this particular document. Then a level-three question could be a cross-comparison: hey, I want to order another thousand units; which one of these contracts is closest to the threshold where I'm going to get some cash back? It's a more complex question, and you very quickly start to move away from RAG. The perception is: oh, I can dump in contracts and ask questions. But in reality, a human would have gone and looked at another system, in an SAP, said "here are all the orders to date," done some math on it, and then given you an answer that cuts across systems. So it's not quite right: there's no document that gives you the answer that you can go retrieve on demand. You need some type of a router in the middle that understands what kind of question is being asked. And then you may have to go talk to some structured data at the back end, bring that in, and then call the unstructured side. It starts to get really complex. We talk about RAG, but we should really be talking about the use case end to end, which needs metrics for much more than just the RAG patterns. So that's more complex than what you were describing, Marina.
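The "router in the middle" Shobhit describes can be sketched as a small dispatch step. This is a hypothetical illustration: the keyword rules and the backend descriptions are invented for the example; a production router would typically be an LLM classifier in front of real retrieval and database backends.

```python
def classify(question: str) -> str:
    """Decide whether a question is a simple passage lookup or a
    quantitative, cross-system question (hypothetical keyword rules)."""
    q = question.lower()
    if any(w in q for w in ("how many", "threshold", "units", "cash back")):
        return "structured"  # needs order data plus math, not just a passage
    return "retrieval"       # a passage in the contract can answer it

def route(question: str) -> str:
    """Dispatch to the backend a real system would call."""
    if classify(question) == "structured":
        return "query order system, do the math, then ground the answer"
    return "retrieve the relevant contract passage and answer"

print(route("What is the expiry date of the contract?"))
print(route("How many units until we hit the cash back threshold?"))
```

The level-one expiry-date question goes straight to retrieval; the cash-back question is routed to structured data first, matching the human workflow of checking the order system before answering.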
37:57 And now we're talking about a whole end-to-end chain, and how do you measure accuracy in this case? Two different people have two different answers.

38:05 Marina Danilevsky: Yeah, see, we even disagree, because I call all of that RAG in my mind. We don't even agree on what RAG patterns are, because to me: great, you're retrieving a function answer, you're retrieving something from a knowledge base; you're still retrieving, it's just that in this case retrieval means a function call. So even with that, you can think of a RAG pattern as a single call with only an informational query, or you can think of it as the entire thing you're talking about, Shobhit. And I think you end up having to extend how you do the evaluation. Great, we've done it for one small pattern. Now what about when you extend, extend, extend, extend? So yeah, you're right, Tim: hill climbing. Why rest on our laurels when there are more complicated problems to solve?

38:40 Tim Hwang: Like, we could be building. Yeah. And my bias is that one of the things I'm most interested to see in the AI space is the continued growth of evals as an industry, because this is where the endless value will emerge: companies asking, "is it good?" And that actually ends up being a very, very deep question that requires some real craft and expertise. So Vagner, you get the privilege of having the last word on the episode, as our inaugural, sorry, our debut guest this episode. Any final thoughts on the benchmarking question, or on RAG in general? I'm always excited about RAG hot takes.
39:20 Vagner Figueredo de Santana: Well, no, not a word that relates to RAG, just a final word, more of a provocation. For folks interested in responsible AI, I think it's worth trying to go beyond your discipline identity, your bubble of content, and reach out to other content. We are talking about FAccT, in fact, and it's interesting because it goes to the more technical side and also to the humanities. So try to find subjects you're interested in, like RAG or other subjects, and try to go outside your discipline identity. I think it's good for the community as a whole.

40:01 Tim Hwang: Yeah, for sure. That's a great note to end on. Well, that's all the time we have for today. Marina, Shobhit, thanks for joining us again.

40:08 Shobhit Varshney: Thank you so much for having us, Tim. This is awesome. Most fun thing we do every week.

40:13 Tim Hwang: Yeah, definitely. Thanks for joining. And Vagner, thanks for joining, and hopefully we'll have you back again sometime.

40:18 Vagner Figueredo de Santana: Thank you.

40:19 Tim Hwang: Great. Well, if you enjoyed what you heard, you can get MoE on Apple Podcasts, Spotify, and good podcast platforms everywhere, and we'll see you next week.