AI Innovations Redefine Scientific Discovery
Key Points
- The speaker counters a New York Times “hit piece” denying AI progress by highlighting concrete breakthroughs across multiple scientific domains over the past two years.
- Google’s AlphaDev used reinforcement learning to invent new sorting algorithms that run up to 70% faster on short sequences and are already being integrated into mainstream C++ toolchains.
- MIT researchers leveraged deep‑learning models to discover a novel antibiotic molecule, halicin, that kills pathogens where existing drugs fail, representing an entirely new class of antibiotics surfaced by AI.
- AlphaFold’s prediction of 200 million protein structures and DeepMind’s GNoME system, which generated millions of stable crystalline compounds, have dramatically accelerated vaccine design, antibody engineering, and materials discovery, saving countless wet‑lab years.
- AI‑driven simulation and evolutionary design tools—from IBM’s battery digital twins to NASA’s titanium mounts—are cutting iteration cycles, producing lighter‑stronger aerospace parts and novel battery chemistries that humans could not have conceived on their own.
Sections
- Countering NYT: Recent AI Breakthroughs - The speaker rebuts a New York Times article doubting AI progress by showcasing concrete advances like AlphaDev’s novel sorting algorithms and an AI‑designed molecule that outperforms existing drugs.
- AI Innovation Defies Media Myths - The speaker contends that AI systems generate novel, high‑impact solutions—outperforming human benchmarks and delivering measurable productivity gains such as UPS’s route‑optimization savings—while acknowledging biases and data needs, and criticizes media narratives that downplay AI’s creative capacity.
- AI Tools, Skepticism, and Future Challenges - The speaker highlights practical AI uses like visual magnification and live translation, argues for asking more nuanced questions about AI’s scientific edge, data hunger, and its impact on work and startups.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=isuzSmJkYlc](https://www.youtube.com/watch?v=isuzSmJkYlc) **Duration:** 00:09:25
- [00:00:00](https://www.youtube.com/watch?v=isuzSmJkYlc&t=0s) Countering NYT: Recent AI Breakthroughs
- [00:03:27](https://www.youtube.com/watch?v=isuzSmJkYlc&t=207s) AI Innovation Defies Media Myths
- [00:07:04](https://www.youtube.com/watch?v=isuzSmJkYlc&t=424s) AI Tools, Skepticism, and Future Challenges
You know, yesterday the New York Times
published what's essentially a hit piece
on the idea that we can make artificial
general intelligence. It's called
Silicon Valley's Elusive Fantasy.
And I almost never do direct responses
to media pieces because I generally
don't think it's productive. In this
case, though, because of the prominence
of that newspaper, and particularly
because of the number of people who have
reached out to me and essentially
challenged the idea that AI can
innovate, this needs to be addressed.
We need to put to bed the idea that the
advances of the last 24 months haven't
mattered. And I'm going to do it not by
taking apart the article in particular,
but by talking about factual advances
that recent AI has enabled us to make
across a really, really wide range of
fields. You may know some of these. I
doubt you know them all. And so sit
back. We're going to go through quite a
few here. And I'll do them pretty
quickly so we can get out in good time.
Number one, Google has trained a
reinforcement learning agent called
AlphaDev that has discovered sorting
algorithms that humans have never
written before. It produces new routines
that are up to 70% faster on short
sequences, and they now ship in
mainstream C++ toolchains. Number
two, MIT researchers fed 6,000 chemical
structures into a deep learning model,
and it surfaced an unexpected molecule
that they've named halicin. Lab tests
have shown that it kills multiple
drug-resistant pathogens where existing
drugs fail, and it's opening an entirely
new antibiotic class discovered through
AI-led exploration. Number three, AlphaFold
models have predicted 200 million
complete protein structures, many of
which lacked any experimental data. The
open database is already accelerating
malaria vaccine design and
antibody engineering, and it is believed
to have saved hundreds of millions of
research-years of work that would
otherwise have happened in a wet-lab
environment.
DeepMind's GNoME system has used graph
neural networks to generate 2.2 million
crystalline compounds, of which 380,000
are predicted to be
stable. Lawrence Berkeley's lab was able
to quickly synthesize 41 of those
brand-new compounds autonomously,
validating that AI can invent, and that
a lab can build, materials that humans
did not imagine.
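A toy version of the stability screen at the heart of that pipeline: GNoME's actual criterion is the predicted energy above the convex hull of competing phases, and the compound names and energies below are invented purely for illustration.

```python
# Toy stability filter in the spirit of GNoME's screening step.
# A candidate is kept if its predicted energy above the convex
# hull (eV/atom) is at or below a tolerance. All values invented.

def stable_candidates(predictions, tolerance_ev_per_atom=0.0):
    """Return formulas whose predicted energy above the hull is within tolerance."""
    return [
        formula
        for formula, e_above_hull in predictions.items()
        if e_above_hull <= tolerance_ev_per_atom
    ]

predicted = {
    "compound-A": -0.01,  # predicted below the hull: likely stable
    "compound-B": 0.00,   # on the hull: stable
    "compound-C": 0.12,   # well above the hull: likely unstable
}

print(stable_candidates(predicted))  # ['compound-A', 'compound-B']
```

The real system predicts these energies with a graph neural network over millions of generated structures; the filter itself is this simple.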
IBM Research is coupling large-scale
generative models with physics
simulators to create high-fidelity
battery digital twins. You're like, why
do we need this? It enables us to slash
iteration cycles for designing cathodes
and electrolytes. And it lets scientists
explore the chemistry of batteries in
ways that were inaccessible with
conventional lab-only workflows.
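A minimal sketch of why a digital twin slashes iteration cycles: a cheap surrogate model lets you score every candidate design before building any of them. The "physics" below is invented for illustration, not real electrochemistry.

```python
# Hypothetical digital-twin screening loop: a cheap surrogate stands
# in for the real cell, so many candidate designs can be ranked in
# software and only the winner goes to the wet lab.

def simulated_capacity(cathode_thickness_um, electrolyte_conductivity):
    """Toy surrogate: thicker cathodes store more but transport worse."""
    storage = cathode_thickness_um
    transport_penalty = cathode_thickness_um ** 2 / (1000 * electrolyte_conductivity)
    return storage - transport_penalty

def best_design(thicknesses, conductivities):
    """Exhaustively score every combination in the digital twin."""
    return max(
        ((t, c) for t in thicknesses for c in conductivities),
        key=lambda tc: simulated_capacity(*tc),
    )

design = best_design(thicknesses=range(10, 201, 10), conductivities=[0.5, 1.0, 2.0])
print(design)  # (200, 2.0): the best-scoring candidate goes to the real lab
```

Sixty simulated builds, one physical build: that ratio is the whole argument for the digital twin.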
Engineers at NASA Goddard have used
evolutionary design software to grow
alien-looking titanium mounts that are
lighter and stronger and delivered in
weeks, not months. And the shapes are so
novel that the engineers themselves say
they wouldn't have conceived them
without AI-driven
exploration. Each of these cases on the
science front goes beyond pattern
matching.
The systems are searching an
enormous sparsely labeled solution space
and they're producing artifacts that
outperform the best human benchmarks.
These are not just statistical parrots,
which is the frame the New York Times
uses. They're engines of creativity
in ways that humans are not. They
use combinatorial creativity, leveraging
compute across very large
data sets in a way that surpasses human
intuition.
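That kind of search, evolutionary design like the NASA mounts example earlier, can be sketched in a few lines. Everything here is a toy: the "design" is a vector of 20 numbers and the fitness function is invented, standing in for a real structural simulation.

```python
import random

# Minimal evolutionary-design loop: mutate candidate designs, keep the
# fittest, repeat. The fitness function is a made-up stand-in for a
# strength-minus-weight score from a structural simulator.

def fitness(design):
    """Toy objective: the ideal design has every parameter at 0.7."""
    return -sum((x - 0.7) ** 2 for x in design)

def evolve(generations=200, population=30, seed=0):
    rng = random.Random(seed)
    best = [rng.random() for _ in range(20)]  # random starting design
    for _ in range(generations):
        children = [
            [x + rng.gauss(0, 0.05) for x in best] for _ in range(population)
        ]
        best = max(children + [best], key=fitness)  # elitist selection
    return best

champion = evolve()
print(round(-fitness(champion), 3))  # small residual error after 200 generations
```

The loop never needs to "understand" the design space; mutation plus selection against a scoring function is enough to reach shapes no one specified in advance.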
This falsifies a poorly sourced
blanket claim like "AI cannot
innovate." I am the last one to say that
the technology doesn't have limitations.
I talk about that on here. AI has bias
issues. AI has brittleness issues. AI
is really hungry for data. These are
real constraints. But the capacity for
innovation is a proven fact at this
point. And it is really frustrating to
see media narratives continue to confuse
people. I'm not just going to stick with
science. You might think: has AI really
delivered productivity gains? That's
something the New York Times claims it
hasn't. UPS's route-optimization engine
uses machine learning to plan 55,000
driver stops each morning. Amazon has
something similar internally. And UPS is
reporting that the company now saves 100
million miles and 10 million gallons of
fuel every single year because of AI.
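As a toy illustration of the kind of routing problem such systems optimize (UPS's actual engine is proprietary and far more sophisticated; the greedy heuristic and coordinates here are just for illustration):

```python
import math

# Toy stop-ordering heuristic: greedy nearest neighbor. Real routing
# engines optimize fuel, time windows, and fleet constraints jointly;
# this only shows the combinatorial core of the problem.

def route(stops, start=(0.0, 0.0)):
    """Order delivery stops by repeatedly visiting the nearest remaining one."""
    remaining = list(stops)
    here, ordered = start, []
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        ordered.append(nxt)
        here = nxt
    return ordered

stops = [(5.0, 5.0), (1.0, 0.0), (1.0, 1.0), (6.0, 5.0)]
print(route(stops))  # [(1.0, 0.0), (1.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
```

Even this crude heuristic beats a naive ordering; shaving miles off tens of thousands of daily routes is where the reported fuel savings come from.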
Then there's the claim that systems
can't understand images, audio, and text
together. That is exactly what GPT-4o
does and has
been doing for a while. The model can
identify a handwritten math problem on
the board from a camera phone, talk the
user through it, and respond naturally.
I myself have used it for code where you
point it at some code and talk it
through. It's exactly what the Times
claims it can't do. "Current models don't
match humans on serious reasoning
tasks." That's another claim I hear.
Look, that's just not going to hold up.
You can pick whatever bar you
want. Literally, the uniform bar exam
is one: GPT-4, an older model,
scored in the 90th percentile on that
bar exam. o3 does even better. I will be
honest with you, some days o3 feels like
it's smarter than I
am. "AI hasn't produced concrete medical
breakthroughs." There's another claim.
We've talked about that one. We've
talked about the antibiotics. Another
one that I think is a breakthrough is in
bedside manner. A lot of studies show
that diagnosis
is more accurate and bedside manner
is better with
AI. Sometimes with AI alone, not with
doctors, because these studies test
doctor-only, AI-only, and doctor-plus-AI.
Oftentimes the AI, which may only be
GPT-4 because these models need to be
peer-reviewed, still does better than
the doctor. "AI hasn't learned new
physical tasks." That's just not true. We're
getting real breakthroughs in
robotics. If you haven't seen the videos
of the robots that pack a
refrigerator, or the robots that unscrew
bottle caps, or the robots that pack
lunch boxes, tasks they were never
explicitly trained for, go look them up. It's real.
In fact, a number of different
robotics firms do this. Frankly, China's
probably ahead of us on this
one. Then there's the claim that AI
hasn't improved accessibility for
disabled users. I know friends of mine
who are disabled who use AI as a hack
every single
day. And as an example of something
that's coming quickly: did you know
that Apple is actually launching a
Magnifier for Mac, where you can attach
your phone to a laptop and it will
literally magnify any part of the room?
So a low-vision user can pipe any part
of the room they're looking at into a
Mac, increase the size, and
use it the way they want to.
Frankly, you can do that now with
ChatGPT, too. It's not just a Mac
feature; it's an AI thing.
You can walk around with your phone in
ChatGPT and use the camera to look
and have a
conversation. You can do that with live
translation, too. Statistical
parrots, which is what the New York
Times claims these models are, do not
have live translation capabilities, but
people are using ChatGPT to
live-translate between languages now.
And so it's not that
questioning or skepticism is out of
place. It is right to ask really hard
questions of AI. But I would prefer that
we ask better-quality questions,
questions that are reasonable given what
we know about AI progress. For
example, why is it so much easier for AI
to make progress in scientific fields
than it is in other fields? By the way,
I think the answer is likely because
correctness is something that is
provable. LLMs are fundamentally a
branch of machine learning, and machine
learning does better with provably
correct solutions, since you can drive
reinforcement learning off of them. There
are real answers for those real
questions. There are other questions
that are harder to answer, like: how do
we handle ongoing data availability as
models get hungrier and hungrier
for data? How do we understand how work
and startup dynamics change as these
models come into the workplace? These
are real questions. I talk about them a
lot. I think they're worth thrashing out
in detail, and they're
much more worth column inches than
repeating tired claims that are
factually incorrect at this point.
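To make that provable-correctness point concrete: when a checker can verify a solution automatically, the check itself becomes the reward signal. Here is a minimal sketch, using simple random search rather than a full RL agent, on a toy version of the sorting-routine task.

```python
import random

# Reward-from-verifiable-correctness: candidate "programs" (sequences
# of compare-and-swap operations) are scored by an automatic checker,
# and the search keeps whatever scores best. Simple random search, not
# a full RL agent, but the reward signal works the same way.

def run(network, values):
    """Apply a sequence of compare-and-swap ops to a list."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def reward(network, trials=200, seed=1):
    """Fraction of random 3-element inputs the network sorts correctly."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        values = [rng.randint(0, 9) for _ in range(3)]
        ok += run(network, values) == sorted(values)
    return ok / trials

def search(steps=2000, seed=2):
    """Random search over length-3 networks, guided only by the checker."""
    rng = random.Random(seed)
    pairs = [(0, 1), (0, 2), (1, 2)]
    best, best_r = [], 0.0
    for _ in range(steps):
        cand = [rng.choice(pairs) for _ in range(3)]
        r = reward(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

net, score = search()
print(score)  # 1.0 once the search finds a correct sorting network
```

No one tells the search what a sorting algorithm looks like; the verifier's yes/no answer is the only supervision, which is exactly why provable domains are where these systems advance fastest.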
I would really, really love to have media
conversations that do not drive
misinformation into my inbox. I am tired
of people who are rightfully confused
reaching out to me saying, "Hey, Nate,
can you explain? I thought
you said AI was innovative, and look
at this media publication saying it's
not." That is confusing. It totally
makes sense that you'd be confused. It's
not on them. It's on the media to be
more responsible about this.
We need to take the idea that AI is
innovative seriously.