Learning Library


AI Personhood, Microsoft RAG Patent, PolyMarket Election

Key Points

  • Yuval Harari predicts that AI “personhood” will first emerge legally rather than philosophically, with autonomous LLMs potentially being incorporated as corporate‑like entities by 2025, granting them limited legal protections but no voting rights.
  • Microsoft filed a patent on “response‑augmented systems” (a rebranding of retrieval‑augmented generation) on Oct. 31, 2024, but the filing is not yet granted and can be challenged with prior art, likely prompting industry pushback.
  • Polymarket, a blockchain‑based (non‑crypto) prediction market, demonstrated its utility by processing $4 billion in trades and delivering election outcome data 2–12 hours faster than traditional news outlets during the recent U.S. election.
  • These developments highlight emerging legal, intellectual‑property, and decentralized‑data trends that could shape how AI systems are regulated, commercialized, and utilized in real‑world events.

Full Transcript

# AI Personhood, Microsoft RAG Patent, PolyMarket Election

**Source:** [https://www.youtube.com/watch?v=WKSwUYEOjTo](https://www.youtube.com/watch?v=WKSwUYEOjTo)
**Duration:** 00:09:12

## Sections

- [00:00:00](https://www.youtube.com/watch?v=WKSwUYEOjTo&t=0s) **AI Personhood via Corporate Incorporation** - Yuval Harari argues that the first route to AI personhood will be legal, not philosophical, as autonomous AIs are incorporated as corporations, granting them corporate protections by around 2025, though without voting rights.
Okay, six pieces of AI news for you, and we'll start with a banger. Yuval Harari is not the person I would expect to generate AI news; he's an author. But one of the things he suggested, which I think is compelling and correct, is that the likely initial path to AI personhood is not philosophical, because there's really no answer to that, and it is not an intelligence test, which some people have suggested. It's legal. Fundamentally, if an AI is capable of autonomously incorporating itself, it is by definition a legally defined person. Now, it can't vote, and that's probably a good thing, but it would have the same protections as a corporation would have. I think that is likely to happen in 2025. I don't know if that's good, I don't know if that's bad; I just think that given the level of autonomy we're seeing in the space, and given the simplicity of incorporation, someone is going to figure out how to give an LLM a mission to incorporate, get it done, and the LLM will legally be a person. And that's a couple of months away; that's close. Now, I don't think we're going to see a significant share of corporations be AI in 2025, simply because there are a lot of corporations and not every AI use case requires this, so this will remain a bit of an edge case. But it's worth paying attention to, because any time you talk about artificial intelligence, people immediately start thinking about replicants, Blade Runner, etc., and I think Yuval Harari actually had a really good thesis for where this is going to go.

All right, number two: Microsoft has attempted to patent RAG, you know, the retrieval-augmented generation that everyone in AI has been doing for a while. They've decided they're going to call it, if I'm getting this right, a "response augmented system," something like that.
They changed a couple of words, right, and they call it RA, not RAG. I'm going to link it. They filed it on Halloween this year, October 31st, 2024. The thing with patent applications is, one, they're not immediately granted, so nobody building in the space with RAG has to stop what they're doing; and two, you can object to them with prior art, which means if you've been doing something that is substantially similar, you can say so. And so I would expect a lot of objection and pushback to this, because we've been using RAG for a long time in AI, certainly a long time before October 31st, 2024. I'm not quite sure why Microsoft decided they could get away with trying to patent that.

Number three: Polymarket is worth paying attention to because of how it performed in the US election this past week. This is a blockchain-based solution; it is not crypto. I would argue it is probably the first blockchain-based solution that is not crypto that is widely understood and used across the world. In this case, they did $4 billion in trading on the election, and they were able to call the race 2 to 12 hours faster than official news outlets. That's a significant improvement; in an election with consequences that will echo across the globe, getting that right faster is a big achievement. I want to call it out because I think it's the first scaled-up use of blockchain we've seen outside of crypto.

All right, number four: Google dropped a model labeled Gemini 2.0, and it was briefly available. This is the same pattern, right? They drop models, they're briefly available, and then we hear rumors about them. In this case, this model was very, very fast, but it failed the strawberry test. The strawberry test is where you ask a model to count the Rs in "strawberry," but because LLMs do not necessarily include
logical checks by default, it can have trouble getting the number of Rs in "strawberry" correct, because there are multiple Rs in the same place. So it failed the strawberry test, which doesn't really argue for a super smart model. I'm not entirely convinced this was actually a 2.0 model; it may well have been an accidental leak and a mislabel. We probably will never really know, but we'll see what 2.0 looks like when it actually releases. I'm willing to bet you, though, that whenever 2.0 really releases, that same week o1 is going to drop. I think that OpenAI is just holding it back because they want to be the last horse at the corral.

All right, number five: o1, and the icons in Windows. You were wondering about that leak; I know you didn't expect me to go there. Everyone wonders what these things are going to be used for when they're more advanced, and one of the classic uses is coding, and that's where the icons in Windows come in, because it suggests something very interesting. Someone said (again, it's a claim; I haven't seen a demo yet) that during the brief window last week when o1 was open, they were able to code up icons in Windows: 3D icons that had physics and weight to them, without instructing the model on a lot of physics and weight. That meant the model understood enough about physics to code up icons that looked and behaved and bounced around like they were in the real world without much instruction, which suggests some degree of practical world model. That would be a big deal.

Okay, and number six. This one's super nerdy, but it's really important. One of the things that is hard about training very large models is that it is hard to physically put the chips in the
same place, and it is hard to make sure that all of the chips lock up and finish a task at the correct point in the training sequence with the correct response. The traditional approach to LLM training does require all the chips to work in sequence, and when you get a very large number of chips, like 100,000, one fault on one chip can screw up that entire data center until it gets fixed. Now, the leader here has always been Google, because Google has been running huge data centers since before anybody else; they have done more work on fault-tolerant architecture at scale than anybody else I know. They actually did train Gemini on a multi-data-center footprint, which as far as I know makes it the only model to be trained on a multi-data-center footprint. The thing is, that doesn't automatically make it smarter, and that's why these other models have been able to keep up: they have things like synthetic data figured out better than Google, etc. But OpenAI knows, and Anthropic knows, that they have to get multi-data-center training figured out to scale, because there are just limits to the footprint of data centers. For these very large training runs, they need not just to put a million chips down in the same place, but to actually figure out a fault-tolerant architecture that will enable them to add more chips at scale. The other factor with chips is that at a certain point, because of the costs of fault tolerance, and because of the transmission costs that go into maintaining a single state across a huge number of chips, the speed of light becomes a factor; you're actually transmitting things back and forth, so you get diminishing returns on extra chips, to the point where it may not even be worth it after a certain point. So what you should hear
from that is not "oh no, we don't have any more intelligence-scaling capability because of the speed of light and chips." That is not the correct take. The correct take is: we have some inherent architectural flaws in assuming that you have to do training runs in lockstep across all chips at once, and we have a solve for that already on the Google side. OpenAI is working with NVIDIA on another solve; Jensen was talking about it, saying that he doesn't see an inherent blocker, and that fundamentally we should be able to do multi-data-center training, which would mean not having to keep all of the chips in lockstep across all the data centers. That, in turn, unlocks the laws of scaling again: you aren't stuck on adding another chip, and you aren't as stuck when a chip fails, etc. I know that was a long explanation, and I'll post a link as well, but I think it's important to understand the underlying architecture that powers these models, because it demystifies them and helps us understand what's really going on as we head into what is likely a training run aiming for artificial general intelligence in the next year to two years. Cheers.
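For readers who want to see what the contested pattern from the Microsoft patent story looks like, retrieval-augmented generation can be sketched in a few lines. This is a generic toy illustration, not Microsoft's filed system; the corpus, the word-overlap scoring, and the `build_prompt` helper are invented for the example, and in a real pipeline the assembled prompt would be sent to an actual LLM API rather than printed.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant documents, then prepend them to the
# user's question before calling a language model.

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Polymarket processed $4 billion in election trades.",
    "RAG combines retrieval with text generation.",
    "Gemini was trained across multiple data centers.",
]

prompt = build_prompt("How much did Polymarket trade on the election?", corpus)
print(prompt)  # in a real system, this prompt goes to the LLM
```

Production systems replace the word-overlap scorer with embedding similarity over a vector index, but the shape of the pattern (retrieve, augment, generate) is the same, and it long predates the October 2024 filing.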
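As an aside on the strawberry test: counting letters is trivial for ordinary code, which is exactly why the test is revealing. LLMs operate on tokens (multi-character chunks) rather than individual letters, so a model can miscount where plain string handling cannot:

```python
# The strawberry test, done the boring way: character-by-character
# counting is trivial for code, but an LLM sees tokens, not letters,
# which is why it can get this wrong.

word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3: st(r)awbe(r)(r)y
```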
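The fragility of lockstep training described in item six can be made concrete with a toy probability model. Assuming (purely for illustration) an independent fault chance of 1 in 100,000 per chip per step, the probability that a fully synchronous step completes with no fault anywhere shrinks quickly as chips are added:

```python
# Toy model of lockstep (fully synchronous) training: a step finishes
# only when every chip reports in, so the per-step success probability
# is (1 - p_fault) raised to the number of chips.

def p_step_succeeds(n_chips: int, p_fault: float = 1e-5) -> float:
    """Probability that all n_chips complete one step fault-free."""
    return (1 - p_fault) ** n_chips

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} chips: step success probability = {p_step_succeeds(n):.3f}")
```

At 100,000 chips the fault-free probability is roughly e^-1, about 37 percent per step, which is the arithmetic behind the point above: at that scale you either build fault-tolerant, non-lockstep architecture or you spend most of your time recovering from faults.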