Learning Library

← Back to Library

Nvidia GTC Highlights: Chips, Robotics, AI

Key Points

  • Jensen Huang outlined Nvidia’s chip roadmap, confirming a second “Blackwell” iteration in the second half of this year, followed by the next‑gen “Rubin” series rolling out next year and into 2027, despite production yield challenges with Blackwell.
  • The company is emphasizing new AI‑driven applications, especially in robotics (including a consumer‑grade “R2‑D2”‑style device) and automotive partnerships such as a forthcoming collaboration with GM.
  • Nvidia will continue expanding its on‑premise AI workstation line, allowing developers to run large models locally, though this is expected to consume a relatively small share of overall chip volume.
  • Heavy investment in the “Nemotron” model family—a fork of Meta’s Llama models—was announced, with claims of superior performance and the potential to lock customers more tightly into Nvidia’s hardware‑centric stack.
  • The broader strategic question raised is how Nvidia can leverage proprietary models like Nemotron to maintain ecosystem dominance as enterprises adopt multi‑tenant, multi‑cloud AI solutions and experiment with competing large‑language‑model providers.

Full Transcript

# Nvidia GTC Highlights: Chips, Robotics, AI

**Source:** [https://www.youtube.com/watch?v=a5TxVzRuz7Q](https://www.youtube.com/watch?v=a5TxVzRuz7Q)
**Duration:** 00:04:32

## Sections

- [00:00:00](https://www.youtube.com/watch?v=a5TxVzRuz7Q&t=0s) **Jensen Huang Announces Chip Roadmap & AI Applications** - At GTC, Jensen Huang outlined Nvidia’s upcoming Rubin GPU series, emphasized continued AI chip development, and highlighted pushes into robotics, automotive, and on‑premise AI workstations.

## Full Transcript
[0:00] So Jensen Huang was up on stage at Nvidia's big conference, GTC. I want to give you what I think are the most impactful things that he announced, and we can argue about them, but I think they speak to where his priorities are at, and I think that's what matters.

[0:16] First up, they're continuing to evolve on chips; that's the core of their business. They announced the Rubin series, which will come after Blackwell. Their second Blackwell iteration is going to come in the second half of this year, and then they'll be on to Rubin next year and into 2027. Take all of those timelines with a grain of salt: they struggled to get Blackwell to appropriate yields in production last year, so they're just rolling it out now. Chips are more magic than they are science in some ways, so we'll sort of see. But apparently it's going to be the usual: cooler, faster, more memory. The stuff that investors want to hear is what Jensen is delivering.

[0:53] Let's move to applications. He is pushing robotics and he is pushing cars, and I think that's really interesting. The other thing he's pushing, of course, is their little sort of workstation that they're doing for on-premise, so if you're a developer you can run a local model. I don't see that consuming a huge number of his chips. I do think the core chip series getting into server racks is obviously the heart of their business, and then moving into household robotics (he unveiled a little R2-D2 kind of a thing) and then getting into cars, and I think it's a partnership with GM that he's working on. Those will be significant; those will be drivers for the business long term.

[1:32] The other one seems speculative: they're continuing to invest really heavily in the Nemotron models, which is a fork of Llama, which is Meta's sort of core base model. I think Jensen claimed it was better than DeepSeek, it supercharges agents, blah blah blah. Great, fine. My question is: how does it supercharge your business model? How does having Nemotron help a customer stay in the Nvidia stack? And I think that's one of the really interesting questions, because most enterprise stacks are inherently sort of multi-tenant, and part of where Nvidia has won is by basically having the hardware heart of the stack locked down. You may be multi-tenant in other places, you may have multiple models, you may be a multi-cloud enterprise, but you're still going to be on Nvidia hardware a lot of the time. I am curious if they're able to figure out the secret sauce to have that same kind of grip on baseline LLM-driven operations.

[2:33] And I think one of the ways they might do it is this: as AI applications scale, you're going to have different intelligence levels needed for different applications. You may have premier stuff that's fancy, like research or whatever, that you want to do with a cutting-edge ChatGPT model, or a cutting-edge model from Anthropic, maybe, who knows. Your engineers probably will be using Anthropic when they code, sure. But because this is now the AI economy, you're going to have tons and tons and tons of little AI apps running. And I wonder, I just wonder, if part of the pitch that Jensen is going to have his sales guys bring to enterprise is: you're going to have hundreds and thousands of applications that are small AI modules, so why not get something from Nvidia that is built to run agentically, that has our own sort of reasoning attached to it, and that is designed to run effectively on Nvidia hardware? All of your basic, household, everyday apps as an enterprise can run on that kind of a stack, and if you want fancy stuff over the top, that's fine, you can go grab another model. It's just a guess, but that would be the sales pitch I would make. That would enable me to drive lock-in from an Nvidia perspective, which is kind of what they want: they want to keep people in the ecosystem. And a lot of how you read the conference is basically: how does it reinforce their business model?

[3:53] So those are the things that are standing out to me. We can obviously go farther in. Jensen continues to love leather jackets, and I know that's what you were worried about, and he apparently was using a T-shirt cannon, so pretty standard, how do I put it, pretty standard shenanigans for a conference. I will continue to keep you posted through the week as we move forward, but I figure robots, cars, R2-D2, getting some new chips: these are pretty decent keynote starters for the first day of the conference. Cheers.