
AI Wins Nobel Prizes in 2027

Key Points

  • The hosts open the episode with a tongue‑in‑cheek “2027” scenario where an AI‑generated work wins the Nobel Prize for literature and AI also sweeps major entertainment awards, setting up a debate on AI’s cultural impact.
  • Recent real‑world Nobel wins are highlighted: the 2024 Chemistry prize went to David Baker, Demis Hassabis and John Jumper for AlphaFold‑related work, and the Physics prize honored Geoffrey Hinton and John Hopfield for advances in neural networks.
  • Chris, the CTO of Customer Transformation, argues that the Nobel recognitions signal a shift from theoretical awards to honoring AI’s tangible contributions across science, suggesting AI‑human collaboration will dominate future breakthroughs.
  • Edward, VP of Product Management for watsonx, offers a more skeptical counterpoint, questioning whether the Nobel Committee is simply riding the AI hype wave rather than acknowledging lasting scientific merit.
  • The show also teases other AI news: OpenAI's receipt of Nvidia's new DGX B200 hardware and fresh funding for the startup Unstructured, indicating broader industry momentum beyond the Nobel discussion.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=v9go9rttLO8](https://www.youtube.com/watch?v=v9go9rttLO8)
**Duration:** 00:37:23

## Sections

- [00:00:00](https://www.youtube.com/watch?v=v9go9rttLO8&t=0s) **When AI Claims the Nobel** - A satirical podcast segment imagines an AI‑generated work winning the 2027 Nobel Prize for literature, prompting expert guests to debate the milestone alongside other AI triumphs in science and entertainment.

## Full Transcript
0:00it's 2027 has an AI generated work won 0:03the Nobel Prize for literature chrisy is 0:05a 0:07distinguished let me start that again 0:10give me that pause man just starts 0:12yelling even before I ask the question 0:14and do the in it's 2027 has an AI 0:17generated work won the Nobel Prize for 0:19literature Chris Haye is a distinguished 0:21engineer and the CTO for customer 0:23transformation Chris welcome to the show 0:25what do you think absolutely and while 0:27we're at it AI is going to win a few 0:29Oscars and an em as well okay all right 0:31and uh next up Edward calbar is a vice 0:34president product management for the 0:35Watson X platform Edward welcome to the 0:38show uh what do you think no way uh I 0:40don't think the noble institution will 0:41outow work all right cool well with a 0:43difference opinion like that you know 0:44it's going to be a good show all that 0:46and more on today's mixture of 0:49[Music] 0:53experts I'm Tim hang and it's Friday 0:56which means that it's time again to take 0:57a deep dive with our experts into the 0:59week's news in AI we're going to talk 1:01about open ai's new dgx b200 the new 1:04round of funding for a company called 1:06unstructured but we're going to start 1:07today with the big news story of the 1:09week which is basically that AI has been 1:12taking the Nobel prizes by storm this 1:14year in the prize for chemistry David 1:16Baker Demis hbus and John jumper took 1:18the prize with hbus and jumper winning 1:20in part for deep minds work on Alpha 1:22fold and then in the Nobel Prize in 1:25physics one of the founding fathers of 1:27modern AI Jeff Hinton and John hopfield 1:29another uh Leading Light of the field W 1:31for their work in neural networks so I 1:34think Chris I want to start with you 1:35first is what do we make of this are the 1:37Nobel prizes basically scumming to like 1:40AI hype or is this the start of 1:42something way bigger I love it actually 1:45um I think well I think the Nobel 
prizes 1:49is if I'm being completely honest has 1:52been a little theoretical and not 1:54hitting with the real world for a while 1:56so actually you know recognizing Ai and 1:59the impact that it's going to have in 2:01multiple Fields such as physics and 2:02chemistry and if we think about the big 2:05innovations that are going forward in 2:07the next few years it is going to be 2:09more AI Le right so it's going to be Ai 2:11and humans in collaboration and how do 2:13you distinguish that well is it fair to 2:16say that the the people who founded AI 2:18in the first place then don't get 2:20rewarded for that work of course not so 2:22I actually I think it's a good thing 2:23because this is the Cornerstone over you 2:26know the next few years where AI is 2:28going to massively help in these areas 2:29so I'm I'm all for it go for it and 2:31while we're at it uh I think my next AI 2:34means I'm going to become the MVP of the 2:36NFL soon as well you know so Aaron 2:38Rogers watch out here I come I guess 2:41Edward to bring you into the 2:42conversation I think Chris has already 2:43taken a very strong stance that 2:45basically you know it's G to be a few 2:46years from now and you'll just be AI 2:48winning every single Nobel Prize award 2:51um I guess Edward there's two questions 2:53I think what you said in the opening 2:55question you were like well I don't know 2:56if the dobell institution will really 2:58allow it does that mean that like you 3:00don't think it's deserved or basically 3:02that you think like the institution 3:03won't really you know be into being into 3:06awarding and giving AI it's do yeah I 3:09mean I think I think um mainly the the 3:13former uh I mean I do think that that AI 3:16is going to be an incredible Tool uh in 3:19in almost every aspect of our lives and 3:21it's going to do amazing good for 3:23society and and uh well-being and 3:26quality of life and and and basically 3:29all the things that they institution 3:30stands for uh but 
but I I do think uh 3:33you know the human contribution to uh to 3:36to to those outputs that deliverable is 3:38really essential right so whether it's 3:4010% AI or 90% AI I think if it's 100% AI 3:44maybe it goes a little bit too far uhuh 3:47right um I think Chris one thing I 3:49wanted to do is you know I think I I 3:50take a lot of pride in the fact that 3:52mixture of experts is like a good way 3:53for people who might not be like reading 3:56every single archive paper working their 3:58way through every single machine 3:59learning textbook to kind of learn about 4:01sort of like what's happening more 4:03deeply in the AI space and you know I 4:05think what's kind of interesting is you 4:06can often get lost with all the stuff 4:07that's happening like at the Enterprise 4:09layer around AI like I think there's a 4:11lot of people listening to this who may 4:12actually not even know Jeff Hinton right 4:15um and um I guess I'm curious if you 4:17feel comfortable you know if you want to 4:19give our listeners just kind of a quick 4:20explanation for why someone like Hinton 4:23is so kind of important to the field and 4:24and what exactly he sort of like 4:26contributed here no absolutely so I 4:28think the first thing I would say is 4:31Hinton really is kind of the he's 4:34considered as the the OG goat of uh 4:38Godfather of AI which is which is I love 4:41that term in that sense and and he's 4:44been doing AI for a very long time U 4:47machine learning specifically he has uh 4:49been there before it was cool so he was 4:51doing that right back in the sort of 4:531980s right so if we think really uncool 4:57basically right yeah very uncool exactly 5:00so so but if we think of the modern 5:03foundations of what we've got today so 5:04we think of things like deep learning um 5:07that all comes down into these really 5:10deep massive neural networks with with 5:12billions and even trillions of 5:14parameters these days right and I'm not 5:15going to go into the 
the massive details 5:18of that but if we look at the work that 5:20Hinton has done there even as far back 5:23as sort of 2011 right when uh Alex snack 5:27came out um you know and IL were were 5:31was part of that as well then that was a 5:34time where really the Deep learning 5:35Revolution kicked off which was the sort 5:37of first kind of CNN on a on a GPU for 5:42training uh against images and if we go 5:44further back uh in time there as well so 5:48if we look at the the work that he did 5:49things like back back propagation which 5:51is a key Cornerstone of what we even do 5:54today with deep learning so all of this 5:57goes back to the 80s and Jeff Hinton 5:59Wasing doing this when it was uncool 6:01right so if he hadn't done that work we 6:03wouldn't be where we are today so I I 6:07think you have to sort of recognize that 6:09fact and as I said earlier the impact 6:11that AI is having and going to have in 6:14the future is going to be incredible so 6:16I think actually the impact that it's 6:18having in these different fields he 6:20should be recognized for the work that 6:21he's done right even even in physics 6:24right because I know there's some 6:24physicists I saw on my Twitter feed 6:26grumpy about like well what's this 6:28computer science person doing in here I 6:30guess kind of ultimately you're like 6:31actually this is significant enough that 6:33like it it actually should be recognized 6:35in this context absolutely and and I 6:37think it does open this up right so and 6:41I think it's a good thing for physics as 6:42well right you don't want to be seen as 6:43this kind of boring thing here's a bunch 6:45of formulas you know oh look here's 6:46another telescope in the sky do you know 6:48what I mean it's like this is stuff 6:51exactly AI is impacting every field and 6:54and and therefore um I think I think 6:56it's a really good move by by the Nobel 6:59uh instit Edward I'd love to bring you 7:00into this because I think one of the 7:01things 
I I love about Jeff Hinton in 7:04particular is just kind of how sort of 7:07down to earth and like kind of open he 7:10is about his sort of views um there's a 7:12great quote that the Nobel committee had 7:14posted on Twitter about how he was like 7:16oh yeah like I'm I was just in this like 7:17low rent hotel room when I got the news 7:19and like I I had to like reschedule my 7:21medical appointments to go deal with 7:22this Nobel Prize win um I think one of 7:25the things that Hinton has been sort of 7:27very kind of um sort of strong on I 7:30would say in the last few years is kind 7:32of warning about the sort of risks of AI 7:35and I think people have taken him very 7:36seriously because he's been at this of 7:37course for a very long time as Chris 7:38explained he's sort of like the goat 7:40hipster of neural Nets um and I guess 7:43I'm kind of curious about how you sort 7:44of think about those as kind of like a 7:46Leading Light in the fields you know do 7:48you take do you take his kind of sort of 7:50dark warnings about where AI is going 7:52seriously do you think he's on the right 7:53track you know I'm curious about how you 7:55kind of think about those those kind of 7:57risks and he I think made it actually a 7:58center of like some of his comments 8:00during his some of his interviews around 8:02this prize and so I did want to make 8:03sure that we talk about it before we 8:05move on to our next topic yeah I mean I 8:07think he's he's raising the warning uh 8:09just to make sure that that that that 8:10that voice is you know always considered 8:13right that that that that risk is always 8:15kind of part of the part of the math 8:17that uh that we're you know that 8:19enterprises individuals 8:21institutions uh you know do when when 8:24they're applying AI to to the particular 8:27problem or use case that that they're 8:28applying it to and I really think that's 8:29what it comes down to right so when we 8:31think about uh risk assessments or AI 
8:33governance it's really in the 8:35intersection of the technology and the 8:37use case right it's not the same thing 8:39to apply AI to do creative writing right 8:42as we mentioned this morning uh than it 8:45is to do credit underwriting uh for for 8:49a bank right or to do a hiring decision 8:51for for an organization uh so these are 8:54very different use cases very different 8:56impact on on individuals right uh on on 8:59Society uh and and they they pose 9:02totally different uh different risks so 9:04it's really not just about that 9:05technology it's really what the 9:06technolog is being applied to uh that I 9:08think uh that I think is is needs to be 9:12assessed you know the more this 9:13technology makes its way into into 9:15National Security into defense right 9:17obviously it's a much different 9:20consideration uh than uh than than 9:22poetry yeah for sure and is this are you 9:25hearing this like um you know because 9:26you work very close to the metal in some 9:28ways right like Watson X platform is 9:30something that like customers are using 9:32and relying on I mean sometimes I think 9:34I feel like a lot of the discussion 9:35about like oh AI is really dangerous 9:37kind of takes place in like a totally 9:38different domain but I I guess I'm kind 9:40of curious I mean it sounds like you're 9:41sort of suggesting that like even 9:42day-to-day you're sort of hearing from 9:44you know customers and and the market 9:46that like these kinds of risks and these 9:48kinds of concerns are are things that 9:50people are thinking about yeah I mean 9:52they're not existential risks right but 9:54but there definitely risks uh to uh to 9:57Brand to Brands right uh their their 9:59their business risks uh their Regulatory 10:02Compliance risks uh and and and managing 10:06these risks uh is definitely one of the 10:08the top considerations that enterprises 10:10are uh uh that that that's that's really 10:12acting as an inhibitor right to to more 10:15WID 
scaled adoption of the technology uh 10:18and and it's something you can't really 10:19do after the fact right because so much 10:21of of uh of managing this risk is is the 10:25endtoend life cycle right it starts with 10:27the data that goes into the model and 10:29the 10:30and you know what model you selected and 10:31how you customized and tuned it all the 10:34way to monitoring and guard rails um 10:38separation of Duties right between 10:39development deployment so so all these 10:41things that uh that you kind of have to 10:43start thinking about from the beginning 10:45because if you don't then at the end 10:47they become a wall or real obstacle to 10:49try to reconstitute Posta uh so so 10:52that's we've been working with clients 10:54uh uh you know in that perspective uh 10:58with that approach uh and that's what's 10:59leading to you know some of these use 11:01cases making their way into production 11:02and I wasn't being hypothetical when I 11:03was talking about credit risk 11:05underwriting and and and hiring 11:06decisions right these These are these 11:08are real real world use cases where 11:10where the risk is being assessed 11:12mitigated uh in order to implement uh 11:14these uh these work fors yeah for sure 11:17um Chris do you want to get a final shot 11:18here curious about what you think about 11:19sort of you know I guess hinton's kind 11:21of late career turn as being sort of 11:22like a voice of warning around some of 11:24these Technologies I I like to think of 11:26this as like the difference between 11:28waterfall and agile you know which is 11:31probably a weird way of putting it which 11:33is go into that more if if we think of 11:36waterfall projects nobody does waterfall 11:38projects anymore because we realize that 11:41we are too dumb and I mean this in the 11:44nicest possible way to figure out every 11:47requirement in advance and be able to 11:49plan everything because the world is too 11:50complicated I sort of feel that way 
11:53about AI risks I think we are too dumb 11:55to figure out every single risk and 11:57every exploitation and be able to get 12:00ahead of everything in advance and 12:01pre-planned so therefore I kind of think 12:04like a software project I think we need 12:06to be agile which is you need to 12:08experiment and then you need to discover 12:11in a safe and controlled fashion what 12:13those risks are and let them evolve and 12:16that means we're going to do dumb things 12:18we really are right but then in the 12:20process of doing dumb things like you 12:23know sticking your fingers and you know 12:25a wall socket or whatever you realize oh 12:28I better not do that right and then you 12:29put safety things in there now I'm not 12:31saying that we should go that far with 12:34AI but I I hope that history tells us 12:38that in human existence um we've done 12:41enough dumb things that we shall do a 12:44enough dumb things before they become 12:46catastrophic dumb things so I think a 12:49little bit of agility will help us 12:50discover that stuff we need to have 12:52control but I don't think we are going 12:55to all blow ourselves up because I think 12:58we're going to do much dummer things 12:59much earlier that is my 13:02[Music] 13:05opinion I'm going to move us on to our 13:07next topic um there was an incredible 13:10photo that uh open AI uh put out onto 13:13social media it's of the team 13:15celebrating their receipt of the new 13:18Nvidia dgx b200 um and it's a great 13:22photo because it's like you can see 13:24clearly that everybody is so jazzed to 13:26be standing next to this fresh new piece 13:28of compute that it's like Christmas 13:30morning you know it's like people are so 13:32thrilled to get this computer in their 13:33hands um and I think it's a nice cook to 13:36talk a little bit about this kind of 13:37next generation of uh platform that 13:41Nvidia is rolling out um and is actually 13:44having a really material effect on the 13:45market for 
compute right so um there's a 13:48great chart I saw earlier in the week 13:49about how sort of the prices for NVIDIA 13:51h100s right which were sort of like last 13:54season's got to have it Hardware those 13:57compute costs are just dropping all of a 13:59sudden right as these new boards are 14:01kind of coming available and online and 14:03so I think it's a nice hook to talk a 14:04little bit about what's happening in the 14:06hardware markets and I think you know 14:08maybe Edward I'll turn to you first you 14:11know is what we're seeing here just more 14:13speed right like I guess there's one 14:14kind of point of view which is it's 14:15Christmas morning because it's really 14:17cool to be standing next to what's 14:18basically like an F1 racing car for 14:20compute but like is what we're getting 14:22here largely just faster and if not you 14:25know what's different about it yeah I 14:27mean I think I think it connects back to 14:28the first topic we talked about right 14:31the the the the the evolution of this 14:33technology and really trying to to build 14:35it in a way that uh somewhat models the 14:38way our brain works right and and and 14:41this kind of almost Infinity uh I know 14:44that's a big word but uh uh of of of 14:47nodes and connections and and relative 14:50strengths between them right so so I 14:52it's not just speed right it's it's it's 14:54it's scale right and the capacity to to 14:57consume more data and to have more 14:59nuanced uh relationships between between 15:02that data um so I'm I'm not a hardware 15:05expert uh but uh but I definitely uh I 15:09think it's I think it's a great time in 15:11technology when Hardware matters again 15:13right I think we go through Cycles where 15:15like Hardware becomes totally 15:16commoditized and then and then it 15:18matters again uh and then eventually 15:20becomes commoditized again right uh so 15:23we're definitely in a in a stage where 15:25it matters um I think that I think 15:27that's a 
signal right that that the 15:29Innovation right the The Innovation 15:31Frontier is is active and and moving 15:34rapidly and I think that's all very 15:35positive yeah I mean I think Chris it's 15:37it's stunning I was talking to a friend 15:38of mine who is working on some of these 15:40clusters and he's basically like the 15:42hardware is literally moving so quickly 15:44here that they can only really afford to 15:46do like one big training run on a 15:48cluster they've built and then almost 15:50immediately they start moving to 15:51building the next cluster that they're 15:52going to do training right on um I guess 15:54I'm kind of curious here as someone who 15:56kind of like thinks about this and 15:57researches in this space you know 15:59where's this all going right like are we 16:01just you know is is the cluster just 16:02going to get bigger and bigger and 16:04faster and faster you know like is there 16:06a top this this top out at some point or 16:08you know what what is the trend here in 16:09the next like 12 to 24 months I I think 16:12there's a couple of Trends going on and 16:14and I I and I think I might have said 16:15this on another episode but I'm going to 16:17say it again it's like it's almost like 16:19following the Bitcoin Trend right which 16:21is if you follow a Bitcoin Trend 16:23everything started on CPU then it moved 16:25from gpus and then it moved to fpgas and 16:28it moved to as6 right so so basically 16:30you went from kind of CP you went from 16:34you know compute being CPU to being GPU 16:37bound and then you were going to custom 16:39hardware and we we're kind of seeing the 16:42same thing again because people you need 16:44to bring the cost of compute down you 16:46need to bring the cost of training down 16:48I.E you've got bigger and better models 16:49to train but actually I think the bigger 16:51thing is on inference right so you got 16:53to run these models at a low cost and 16:56speed now if if there is one criticism I 
16:59would say of Nvidia uh over anything is 17:02that the speed of tokens per second and 17:05the cost on these gpus are quite 17:07expensive and you've seen this in the 17:09marketplace already this is where folks 17:11like grock have been coming in right and 17:14they've been sort of releasing these 17:15chips that go really really fast and 17:17then IBM's got their North Pole chip as 17:19well right um and then Google's got 17:21their TPU chip so everybody's trying to 17:23bring down the cost of inference because 17:24if you're running these massive models 17:26on the cloud everybody's consuming 17:28compute you that to be as cheap and as 17:30fast as possible the big thing if you 17:32look at these new Nvidia boxes right is 17:36yes the the the training speed was much 17:39faster but actually if you look at that 17:40chart the cost of inference the speed of 17:42inference came down massively right so 17:45they've obviously put a focus on that as 17:47well because they know that if they 17:49don't improve the inference speed if you 17:52don't improve that influence cost then 17:55all of these other providers are going 17:56to start eating their lunch as well 17:58right because everybody's is going to go 17:59cheaper and I and I and I but I think 18:01this push and pull between kind of 18:03general purpose GPU and sort of custom 18:06chips is really important but again in 18:09the training point of view different 18:11from inference everybody's just focused 18:14on I need to get the biggest and fastest 18:16mod I need to get my model out really 18:17quickly and therefore you know throw 18:20away your last card put in the the 18:23latest card because I just need to get 18:24my model out all the time so there's a 18:26different Dynamic that's going on 18:29over time you know you're going to get 18:31faster architectures you're going to get 18:32different architect it's going to get 18:33cheaper and and these cost uh speed 18:37performance ratios are going to 
change 18:39over time yeah the architecture I think 18:41bit of this component I think is a 18:42really interesting part of the market 18:44right I think like one one theme that 18:46we've had pop up on a lot of mixture of 18:47experts episodes is customers want 18:50smaller models they want faster models 18:53uh they don't want the gigantic model 18:55that's really expensive right um and so 18:58there is that pressure there but it 18:59feels like there's kind of two ways of 19:01getting there right one of them is well 19:02we start marketing just smaller models 19:05right where we like lower our demand the 19:07other one which you're arguing is well 19:09the chips get good enough that the cost 19:11of inference finally Falls for running 19:12larger and larger models and the two are 19:14kind of like in a little bit of a race 19:15it sort of seems like um and I don't 19:18know predictions on kind of who wins 19:19that race in the end because you can 19:20imagine like the the market might 19:21eventually settle and say hey look these 19:24models do 99% of what we need them to do 19:27we don't need cre near AGI models to do 19:30this work so at a certain point you just 19:32don't need the chunkier model right I 19:35guess there's another point of view 19:36which as well but if the cost was cheap 19:37enough you would still go bigger um and 19:40I guess I'm kind of curious about like 19:41how you think about that relationship 19:42it's a little bit complex and it's 19:43unclear where it lands in the market 19:45today I think it's just going to keep 19:47pushing and pulling right because we are 19:49going to want to run our models on 19:51device right if you think of things like 19:53apple intelligence Etc so I think 19:56smaller models and faster compute are 19:59just they're you're going to need both 20:02for a while will one win it yeah and 20:06will one it win out I don't think so 20:08because the smaller that you can make 20:10the models and the faster you can 
make 20:11and smaller you can make the chips then 20:13the more you can put them on embedded 20:14devices which open up a whole set of 20:16other scenarios which are kind of low 20:18latency and and again you even see that 20:21like this week so what uh llama 32 was 20:23out last week and they released their 20:25their 1 billion model and their three 20:27billion model I think it was right and 20:28and again just smaller models and and I 20:31think the big thing there is folks are 20:33getting really good at taking these 20:35larger models and Distilling them down 20:37into into much smaller models and that's 20:40going to continue and I and I think 20:41we're looking at 1 billion parameter 20:43models but let's project forward maybe 20:46uh six months a year you're going to 20:48then start to be back into the million 20:50parameter models and then the chips are 20:51going to get faster and we're just going 20:53to go back and forward back and forward 20:54and it's forever yeah um yeah Edward are 20:58you seeing that in the market I mean it 21:00feels like one kind of interesting 21:01outcome of what Chris is talking about 21:03is that there's a lot of Market pressure 21:04to like have a lot of the models just 21:06more on edge devices everywhere um and 21:09it strikes me that like you know part of 21:11the idea of a platform is you're running 21:13it in the cloud and all the advantages 21:14of cloud but it does seem like there's 21:16actually really powerful kind of 21:17economic incentives eventually kind of 21:19pushing us to sort of all on device here 21:22you know not really like in the model 21:23that we're familiar with do you think 21:24that's like a real possibility going 21:26forward I mean I think it's going to be 21:28all the above uh and you know and we're 21:31we're the hybrid Cloud company right so 21:33so Edge to us you know is definitely a 21:36Continuum right uh uh the data center 21:40right compared to the hypers skater 21:41cloud is is 
effectively a type of edge 21:44um and then you go down to facilities 21:46and and eventually you know devices um 21:49so so yes it's going to be it's going to 21:52be all of the above and and and finding 21:54the right balance is always uh is always 21:57very specific to the requirements of the 21:59of the use case I mean I think what what 22:01what we see a lot right is 22:03clients to get started use a big model 22:08because that's a that's a way of you 22:10know accommodating a very broad range of 22:13of requirements use cases 22:15languages uh all sorts of things right 22:17so so so so you kind of prove out the 22:19business case with a with a big big 22:21model uh that's that's going to help you 22:24accelerate right but then when you're 22:26there you're like okay how can I do this 22:28as cheap cheaply and with the least 22:29latency as possible right uh and and now 22:32you start to really kind of optimize um 22:35and customize right uh once you once 22:37you've validated that uh that that 22:40business case and really want to want to 22:41scale it uh with uh with with real 22:44economics behind it so you know it's 22:47it's it's like you use the the Swiss 22:49army knife right uh it's going to give 22:51you a lot a lot of flexibility but 22:52eventually you know you're going to want 22:54to use that fit forp purpose tool to get 22:56the job done yeah that's super 22:57interesting I never really thought about 22:58it as kind of like this life cycle but 23:00it's sort of very interesting that like 23:01well just for the pilot we use the 23:03biggest baddest model because it gives 23:05us the most optionality and then as an 23:07organization kind of Tunes in the use 23:08case it gets kind of like much more 23:10discret and smaller and you're 23:11optimizing for cost and all these other 23:12sorts of things um it's very interesting 23:14is this the time to mention agents I 23:16realize we haven't mentioned the word 23:18agents in this episode yet so I mean 
23:21we're not contractually obligated to 23:22talk about agents but if you want to 23:23mention agents Chris go for it you can 23:25do the final uh uh hot take before we 23:28move on to last topic this is needed for 23:30agents because you need your agents are 23:32going to be highly specialized they're 23:34going to work together and they need to 23:36have low latency Etc so actually the 23:39smaller model and being being able to 23:41run on device and being able to run in 23:44you know whether it's on data center on 23:45device and running different locations 23:48that is 100% necessary for this agentic 23:51world that we're in so uh you know so 23:53it's a good thing agents for sure 23:56agents uh a lot more to get into there 23:58for sure 23:59[Music] 24:02sure so for our final story of today I 24:05really wanted to make sure that we had a 24:06chance to talk about a company called 24:08unstructured which recently closed a $40 24:11million round um and uh this round was 24:14led by IBM and Nvidia and a long list of 24:17kind of prominent companies and 24:18investors in the space what's most 24:20interesting about inst structured is 24:21it's a company that focuses purely on 24:24transforming unstructured data into 24:26structured data which is not something 24:28that you normally think of as being 24:30something that you'd invest $40 million 24:32in so I want to make sure that we talked 24:33about it first Edward if I want to bring 24:35you in just like if you want to resolve 24:37that mystery for some of our listeners 24:38like why is unstructured data important 24:41and why is structuring it incredibly 24:43incredibly important for AI yeah well 24:45unstructured data is most data uh 24:47nowadays right and and I think the most 24:50relatable type of unstructured data the 24:52most usable type of unstructured data 24:54today for for llms is uh is document 24:57data right so so 24:59um could be could be the the content on 25:02the Internet or or or word docs 
or PowerPoint presentations. Effectively, document data, and that is enterprise knowledge. That is what runs the world: these documents, this language information. And that's what large language models are built on; that's what they're trained on, and that's what they're excellent at processing, summarizing, and making usable. So bringing that data, that enterprise and institutional knowledge, to the models is really the way in which an organization can make the models their own: customize them to the knowledge of their business, the language of their business, the tone, the entities, the relationships, the values, all the things you need to put a model in service of a business or a goal. You need to do that by effectively teaching it with your data, and that's what this company focuses on. I've met them; they're very focused, and I think that's really been part of their strength and success. They're very focused on taking unstructured data that resides in different locations and different formats and making it available to the models, particularly in vector stores, for retrieval augmented generation as an initial use case, which is effectively universal at this point. But then beyond that: identifying relationships in the data for graph RAG, and taking the data and putting it into structured formats to really increase the precision and accuracy of some of those queries. So I think RAG is very popular and really valuable, but already running out of gas a little bit for the
next evolution of use cases, and that's really all about continuing to unlock the value of the data in those documents.

Huh, that's really interesting. Can you go into that a little bit more? Why is RAG running out of steam? It feels like 12 months ago it was the new hotness, and people were still definitely leaning into it as the key strategy for doing retrieval. What's missing? Where were the cracks appearing?

Yeah, I think it's a great starting point, and I think it's essential in most cases. But graph RAG, for example, is going to give you richer contextualization by identifying non-obvious relationships. If I prompt the model with a certain set of words, that's really going to limit its ability to reason, including limiting retrieval from the knowledge base to that domain. And there may be hidden relationships. For example, if I search something about Facebook but I don't get a response about Instagram, then I'm not really getting the whole picture. But the model is not necessarily going to know that Facebook and Instagram are related, because those relationships could potentially be non-obvious. So the graph RAG pattern gives you strength in relationships that are non-obvious, and in doing so provides richer contextualization that is more relevant to the question being asked, even if the question isn't asked with those specific words. So again, it's mimicking a little bit of how our brain works in identifying
those relationships. That's one example. But even that is not necessarily going to be perfectly accurate, because there's data about transactions that may have, say, a SKU number, or a particular ID that has no semantic value. It's just a bunch of characters, like your license plate; it doesn't really mean anything. So you need to have that type of data in structured formats and really combine RAG, or semantic search, with SQL, with structured queries. That's going to give you more accurate responses to questions that have to do with transactions or other types of data that are very important to a particular business or a particular domain but don't have semantic value in a conversational or language sense. So now you have to complement RAG with a different dimension, which is structured data. Those are just two examples of how you really need to complement classic RAG to make it more accurate.

That's really helpful. Chris, this story made me think a little bit about the market for data structuring, which I think is really interesting. We normally think about the people who generate data, the people who do the training, and the people who offer the models to the consumer as the supply chain, and one link in that chain I hadn't really thought about is this layer that exists between the data that's out there and the data that's usable. And I guess one question I want to ask you is that it feels like there are lots of different potential ways you could go about doing that, right?
There are companies like Unstructured, where you have a specialized service that does the structuring of data for you. You might imagine that the foundation models themselves become good enough that they can do the structuring out of the box, so you don't actually have to do much additional post-processing to make it happen. You could imagine that synthetic data gets good enough that we don't even need the unstructured data, because we can just generate it purely out of nowhere. It feels like there are a lot of contenders to the throne of getting data that's usable. I guess, how do you size that up? Do you think that at some point synthetic data just gets good enough that you don't need to do this data structuring anymore, or is there always going to be a niche for this kind of structuring business? I'm just curious about where you think this market is going.

Oh goody, I get to say the word agents again. My favorite.

Yes, please do. We've got to get a few more in before the episode's up.

So actually, I think everything is going to move into a marketplace in the future. I do think we're going to have a marketplace of data, we're going to have marketplaces of agents, and we're going to have marketplaces of models, and I think we are going to get more outcome-focused. Specifically on the data: I think we're doing a lot of human work at the moment to curate that data. Even if you look at companies like Unstructured, they do great work because they're actually taking away a lot of the complexity of getting your data into your vector databases to do RAG, because it is really hard. You have to do things like chunking; you're constrained by the context of the
model, i.e. the short-term memory it can work within. You have to work out which data is going to be associated with what as well; as Edward's saying, you need to start building up things like relations. And then you've got to understand: I've got to get this data from this form, I'm getting this from an S3 bucket, I'm getting this from over here. It's really complicated. Even though that's a faster process, we are still humans figuring this out, doing transformations and these sorts of ETL pipelines. If I project a little bit forward into the future, back to our earlier discussion: the models are going to be smaller, they're going to have lower latency, they're going to have faster tokens per second, and you're then going to be able to train these smaller models to start to do that restructuring work for you. So I think you're going to be in this world where agents help you get your data into a structured format, and once your data is in a structured format, you're going to be able to train your model, and then you're going to loop around, and you're going to be in this nice virtuous circle.

So will there be a marketplace for this? Absolutely, because at the end of the day people own data. The publishing companies, the media companies, they're all sitting on gold mines at the moment, because that's data that is highly valuable and highly creative. There are things that can probably be synthetically generated, things like all the math data; you could probably argue that will just be commoditized over time, because it will just get generated and synthetically created, and the same would go for anything like our puzzle games, etc. So there will be this
push and pull over who owns that data, and I think that human data, especially in the creative spaces, will still be highly valued. I don't see the record companies giving up their ownership, or the songwriters.

Yeah, exactly.

So I think that's going to be the push and pull that we have over time, but we are going to be moving into this marketplace where that soft IP is just going to be the big thing that distinguishes companies. One of the examples I like to give: if you have a model trained on the data of all the Spanish legal texts, and you've got that structured, and your model can answer Spanish legal queries better than any general-purpose model, then if I'm going into court, you know what, I want the model that's really good at Spanish law, as opposed to the model that's got a vague understanding of Spanish law, because that's the difference between me getting a large fine or going to jail. So there's a huge value in that locality, and I think that will be one of the biggest trends as models get more and more specialized. Just like we've had with the general-purpose benchmarks, MMLU and all that, we're going to have a benchmark for everything you can imagine, Tim. It's going to be: here's the Spanish legal benchmark, here's the car parking benchmark. You name it, there are going to be benchmarks everywhere, and we're just going to be in this big, massive marketplace of specialization.

I love the image of hiring an AI agent attorney to defend you in a case. I mean, I think that is a feature I can get behind. I used it myself: I did an insurance claim. I looked at the
insurance document and I was like, I have no clue what any of this means. It was a kind of medical condition thing, and I ran it through the LLM, and it gave me the key points. I brought those to the insurance company: payout. And you're like, you know, this could go somewhere.

Exactly, that's what you want from these things. But we are going to be in a wild ride. We are going to be having the kind of Uber-style marketplaces where you're matching up AIs to people and AIs to AIs. It's going to be wild over the next few years. And, Ed, do you want a final word to close us out for the day?

Agents. Agents. Absolutely. I mean, some of the work we're doing at IBM with agents is super exciting, and it really is going to be, I think, a step function in terms of the complexity of the workloads and the use cases, the creativity in solving problems, potentially beyond even our own approaches, and the automation: the fact that you're going to have so much work happening 24/7, 365. A lot of stuff already works that way, but this is going to take it to the next level. I think it's exciting, it's productive, and I think it's going to level the playing field for consumers in some cases, and for smaller institutions. So we're excited to be part of this future and to really be co-creating it with our clients and our community.

Well, gentlemen, this was wonderful. Chris, you're always welcome back on Mixture of Experts, and Edward, I hope to have you on at some point in the future. Listeners
out there, if you enjoyed what you heard, you can get Mixture of Experts on Apple Podcasts, Spotify, and podcast platforms everywhere, and we will see you next week for another roundup.