Generative vs Agentic AI, Dark Web
Key Points
- Generative AI focuses on on‑demand content creation (text, code, images, music) by responding to a single prompt, whereas agentic AI pursues a defined goal through multi‑step planning, execution, memory, and self‑improvement without continuous human input.
- Agentic AI’s workflow typically involves a planning phase, execution using large language models or specialized tools, ongoing context management via memory, and a feedback loop that refines its actions.
- Common generative AI use cases include copywriting, image and code generation, and summarization, while agentic AI is suited for complex, adaptive tasks such as autonomous incident‑response runbooks and robotic process automation.
- The “dark web” is called “dark” because it is unindexed and hidden, not because it solely contains illicit material, making it difficult to locate and block.
- Estimates suggest the dark web comprises less than 2% of all web content, further complicating any effort to outlaw or comprehensively block it.
Sections
- Generative vs Agentic AI - Generative AI produces on‑demand content in reaction to prompts, whereas agentic AI autonomously plans, executes, and iterates multi‑step actions to achieve a specified goal.
- Challenges of Blocking the Dark Web - The speaker outlines the technical, jurisdictional, and ethical obstacles to censoring dark‑web content, noting its hidden nature, global legal gaps, constantly shifting sites, and occasional importance for free‑speech protection.
- Why LLMs Hallucinate Answers - The speaker explains that large language models generate text by predicting the most likely next token rather than retrieving factual data, which causes plausible‑sounding but inaccurate outputs—especially for recent events, niche subjects, or leading questions—though larger, more advanced models tend to reduce these hallucinations.
- Browser Bugs Can Infect Systems - The speaker explains how browser plug‑ins, extensions, and JavaScript can introduce vulnerabilities, allowing malicious code to escape sandbox protections and compromise a user's computer.
- AI's Strengths and Job Risks - The speaker explains that AI currently excels at pattern recognition, data processing, and drafting documents but lacks creativity, empathy, complex reasoning, physical dexterity, and adaptability, making rule‑based, documentation‑heavy, low‑judgment jobs especially vulnerable to automation.
- Job Request, Promo, and Conference Talk - The speaker dismisses a viewer’s job‑hunting plea, plugs another video and the TechXchange conference, and jokes about using a lightboard to demonstrate writing backwards.
Full Transcript
# Generative vs Agentic AI, Dark Web

**Source:** [https://www.youtube.com/watch?v=79u2qP4Qhaw](https://www.youtube.com/watch?v=79u2qP4Qhaw)
**Duration:** 00:18:00

## Sections
- [00:00:00](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=0s) **Generative vs Agentic AI**
- [00:03:09](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=189s) **Challenges of Blocking the Dark Web**
- [00:06:15](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=375s) **Why LLMs Hallucinate Answers**
- [00:09:22](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=562s) **Browser Bugs Can Infect Systems**
- [00:12:35](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=755s) **AI's Strengths and Job Risks**
- [00:15:50](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=950s) **Job Request, Promo, and Conference Talk**

## Full Transcript
All right, let's see. I'll start with an easy one. How about that? Right.
Because I know you guys hear this one all the time. So,
what's the difference between generative AI and agentic AI? Martin.
Yeah, I'll take that one. So, well,
they're pretty similar, right?
I mean, we think gen AI is all about— the clue is in the name—generation.
Producing new content, so text or maybe some code or even images, music and so forth.
So, it's about generating on demand.
It's reactive, it waits for a prompt
and then it gives you an output once you've prompted it.
But the other option is really agentic AI,
which is super hot right now,
which is all about actually achieving—so hot—
It's about achieving a goal, right?
So rather than just prompting it—we
put a prompt in and then we get a response out—with
an agentic AI, we give it a goal
and it now has to plan.
It has to decide
and it has to take multi-step actions along the way,
and it's going to do it without us being involved all the way through it.
So, it can trigger its own next steps.
It can adapt to ... to changing context and keep going
until it finally meets that goal.
So, if you think about the ... the stages of agentic AI,
there really are multiple.
There's kind of the ... the planning stage where it gets started.
Then once it's figured out a plan,
there's the execution stage where it's going to
maybe call a ...a large language model
or some domain specific tools. As it goes, it
needs to be talking to memory,
so it remembers stuff about what's going on,
because that's pretty important that it keeps context.
And then we go through kind of a feedback loop
where it keeps on self-improving as it goes.
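The plan, execute, memory, and feedback stages just described can be sketched as a minimal Python loop. Everything here (`plan`, `execute_step`, the toy step list) is illustrative and not tied to any real agent framework:

```python
# Minimal sketch of the agentic loop: plan, execute, remember context,
# and feed results back in until the goal is met. All names here are
# illustrative stand-ins, not from any specific framework.

def plan(goal, memory):
    """Break the goal into remaining steps (toy: static plan minus done steps)."""
    all_steps = ["gather data", "analyze data", "write report"]
    return [s for s in all_steps if s not in memory["done"]]

def execute_step(step):
    """Stand-in for calling a large language model or a domain-specific tool."""
    return f"result of '{step}'"

def run_agent(goal):
    memory = {"done": [], "results": []}  # ongoing context management
    while True:
        steps = plan(goal, memory)        # planning stage
        if not steps:                     # nothing left: goal met
            break
        result = execute_step(steps[0])   # execution stage
        memory["done"].append(steps[0])   # feedback loop: update memory
        memory["results"].append(result)
    return memory

final = run_agent("produce a market report")
print(final["done"])  # → ['gather data', 'analyze data', 'write report']
```

The key contrast with a generative prompt/response call is the `while` loop: the agent triggers its own next steps from memory rather than waiting for another prompt.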
So, in terms of use cases,
generative AI, copywriting, image generation, code generation, summarizing that sort of thing.
But agentic AI, that's going to be bigger stuff like, well, Jeff,
you'd know about this, like autonomous incident response runbooks,
Absolutely something ... Right.
Security stuff.
Yeah, or robotic process automation.
Stuff that needs to adapt on the fly. So, they're really pretty different.
Okay, let's see here.
We've got Jeff over here,
we've got Martin over here. Okay.
So that's one for Martin.
All right. Now let's ask Jeff a question. So,
Jeff, why can't we just outlaw
or even just block the dark web?
Yeah, yeah. So, a lot of people ask this because they ...
they first of all think that dark means dark content.
And that's not why we call it dark, although there certainly
is some dark and prohibited kind of content.
We call it the dark web because it's not indexed.
It's hard to find. It's in the shadows.
And if you think about uh ... the dark web,
the first question, if you were going to try to block
it is, you'd have to find where it is.
So, imagine all of these websites that are out on the internet. Well,
maybe there's a site
that is part of the dark web,
and our estimates are—nobody has any official numbers—it's
less than 2% of the content on the entire web.
That would be dark web. So good luck in finding it.
First of all, because it's a small amount.
And secondly, it would be hard to find
because there's just not ... not much of it and there's no indexing. So,
you can't go to a search engine in order to ... to get to it.
So, the first challenge would be finding it if you wanted to block it.
The next issue then becomes one of jurisdiction.
I mean, who gets to basically outlaw things on the internet?
The internet is a global phenomenon,
so that means individual countries can do what they want to do.
And even though you might outlaw the content in one area, another area may not.
And therefore, the content just moves there.
So that becomes a problem. Uh
... Also, the whole thing with the dark web
is that it's a bit of a game of whack-a-mole.
You've got a site that pops up
and then maybe it shuts down and it relocates to a different place. So,
this is always going to be a chasing your tail kind of situation. So,
it's not really practical.
And then, the question I would ask a lot of people to think about—Would
it even be desirable to block the dark web?
Well, certainly some of the content, I think we would all agree,
would be better if it didn't exist. And we'd love to block that.
But that's hard to, again, filter that stuff out.
But there's also some content on the dark web that actually serves us.
There are some places in the world where free speech is not honored, and therefore,
if a reporter wanted to get a story out,
putting it on the dark web is another way to do that.
Uh ... If there are cases where we want to do research,
if we want to be able to figure out
how hackers are doing what they're doing,
we can monitor their activities because they're talking on there. So,
there's a lot of different things like that
that actually could benefit us if we use it well. But,
we have to use it well, and that's not always easy to do.
Wow, what do you think? Was that a good answer?
It's really interesting to hear you say that
we should actually find some use cases
to keep the dark web around because initially, not knowing
too much about this, I'm like, oh yeah, we want to shut that thing down.
But actually, it does have some
uses too. Might as well, because we can't make it go away.
Yeah. Wow! Alright.
Martin, how are you guys ...
You're so smart, it's
almost intimidating, but ...
uh ... but I'm doing the best I can to keep up here. Alright,
I got another question for you. Are you ready for this one? So,
if AI is so smart, like ...
like you guys are, right, why
does it make stuff up?
Yeah, that is such a good question.
Such a good question. So yeah ...
We make stuff up all the time, so
I don't think it's any different.
Let me go ahead and do it right now.
Go right ahead.
Yeah! So ... so when AI makes stuff up, we call that hallucination. Right?
So, a ... a hallucination to you and I is like, you know, we're kind of trippy and going off on something, but to an AI,
it's kind of confidently
stating false information as though it were a fact.
And it does it in such a way that you kind of think,
you know, maybe that's true.
It's not lying.
It's not really fair to say it's lying because there's no intent behind it,
but it's just kind of pattern matching gone wrong.
So ... so why does it happen? Well,
it all comes down to the fact that LLMs are prediction machines.
They're not really knowledge databases.
They're not looking up an answer from a database of truth.
It's kind of coming up with its own.
And they're trying to predict the most statistically likely next token in a sequence.
So, if I come up with token A, B and C, what we're asking
the large language model to do next is to come up with the next token.
And the token is kind of more or less a word. So,
it's just coming up with what is most plausible next,
rather than having any sort of fact checking or truth detection.
So, it's optimized for fluency and for cohesion.
It's definitely not optimized for accuracy.
It'll fill in gaps of knowledge with plausible-sounding text.
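This "prediction machine" behavior can be shown with a toy next-token model. The counts and vocabulary below are invented for illustration; real models predict over learned probabilities, but the failure mode is the same:

```python
# Toy illustration of next-token prediction: the model emits whatever
# token is statistically most likely to come next, with no notion of
# truth. The "training counts" below are invented for illustration.

from collections import Counter

# Counts of which token followed "the capital of France is" in toy data.
next_token_counts = Counter({"Paris": 90, "a": 5, "not": 3, "Lyon": 2})

def predict_next(counts):
    """Return the most statistically likely next token."""
    return counts.most_common(1)[0][0]

print(predict_next(next_token_counts))  # → Paris

# With sparse data (a niche topic or post-cutoff event), the same
# mechanism still answers confidently -- this is how plausible-sounding
# hallucinations arise: fluency is optimized, accuracy is not checked.
sparse_counts = Counter({"Atlantis": 1})
print(predict_next(sparse_counts))  # → Atlantis (fluent, not factual)
```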
And there are certain things that can cause hallucinations a bit more than others.
So, for example, if you ask it anything that is recent—so recent
events, especially if they're not in its training data,
so it's post the training cutoff—it's
not going to say 'I don't know what happens then'.
It's probably just going to hallucinate an answer. Uh ...
The other sort of situation is
if you have very niche topics
where there's not a lot of training data for it to pull from,
it might do that as well.
And then also, if you kind of ask a leading question
where you sort of give it the answer in your question,
it's probably going to go along
and you know,
maybe take your lead with that and continue with your thought.
So, yeah, hallucinations ... that ... they have been reduced
as models get a bit bigger and a bit smarter.
But just the very nature of AI models
means there is always going to be the opportunity for hallucinations,
unless you're able to do some sort of fact checking.
Now there are some ... some mitigation strategies.
One of the biggest ones I think that we're seeing now
is RAG—retrieval-augmented generation—where
we actually pull contextual information
in from uh ... an external vector database into the model
and kind of give it the right answers.
So, things like that can help.
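The RAG pattern just described can be sketched in a few lines: retrieve relevant context, then prepend it to the prompt. The documents and word-overlap scoring below are toy stand-ins for a real embedding model and vector database:

```python
# Toy sketch of retrieval-augmented generation (RAG): pull contextual
# information from an external store and inject it into the prompt, so
# the model grounds its answer instead of guessing. A real system would
# use an embedding model and a vector database; word overlap stands in.

import re

DOCS = [
    "TechXchange is a learning conference held in October in Orlando.",
    "The dark web is unindexed content, estimated under 2% of the web.",
]

def tokens(text):
    """Lowercase word set, punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def retrieve(query, docs):
    """Return the doc sharing the most words with the query (toy scoring)."""
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(query, docs):
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

prompt = build_prompt("Where is the TechXchange conference?", DOCS)
print(prompt)  # the model now sees grounded facts before answering
```

Because the model is handed the right answer as context, it can stay fluent without having to fill the gap with plausible-sounding invention.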
But right now, we're definitely in a case
where we still need human-in-the-loop validation
to actually check the outputs of these things,
that they're actually true.
Um ... I also need human-in-the-loop validation
because I'm very good at saying the wrong thing confidently.
So ... I think he hallucinated that entire answer.
I think he did too. But ... but we're going to give him ...
we're going to give him a checkbox ... a checkbox anyway.
Alright. Jeff, coming back to you,
your turn, your turn. Here it comes. So,
um, I, I
love the internet, right?
I think everybody does.
Going to my favorite websites is a joy, right? So,
uh ... there's ... I mean, there's no harm in that, right?
There's just no harm in, like, visiting a website, right?
I mean, there's not ... I don't have to worry about anything. Right? What could go wrong?
Yeah, yeah. What could possibly go wrong?
Well,
incorrect answer.
Thanks for playing. No,
it turns out that there are cases
where just visiting a website can be dangerous.
And that kind of ties into the previous question about the dark web,
which is one of the reasons I advise most people 'Don't go there.'
In fact, there are things called, a whole class of attacks,
that we call zero-click attacks.
So, all you did was just go to the website and view it.
You didn't click on anything there. And bang! I didn't do anything wrong. You sys ...
Yeah. Your system is infected. Sorry.
You went into a ... a bad neighborhood
and now you're going to wish you didn't. Uh ...
How do you think ... How could that happen? Because,
again, a lot of people say that's not possible.
I'll tell you, it is possible.
Plug-ins. We've got plug-ins into browsers.
We've got extensions and things like that.
That stuff is not perfect.
It could have bugs in it.
And therefore, if one of those bugs leaks out and gets onto your system,
then that could cause problems to you.
Also, active content like JavaScript can be an issue. So,
we have this—a lot of people who are not familiar with this—but
it's usually enabled in most people's browsers to uh ... uh
... to be able to show this kind of active content, videos and things like that.
Well, all of that's running. That's code that's running in your browser.
You went to the internet, you didn't install anything.
But just by visiting that site, you have effectively downloaded code. Now,
in theory, it's sitting inside the browser and should not break out of that.
However, there's a difference between theory and practice,
and the ... that difference has to do with browser bugs.
So, browsers are complex software.
All software of any complexity has bugs,
and some percentage of those bugs are going to be security related.
And we've had a number of cases where browsers had bugs
that would allow something to escape the sandbox of the browser
and start running actual code on a person's system.
So, until we get perfect software,
which I'm not holding my breath on,
then it's always a certain amount of risk. So,
you should be careful about which sites you visit,
because sometimes just visiting
can ... can rub off on you in ways that you didn't intend. Wow!
Well, I'll tell you, leaking bugs would be a great name
for a band comprised completely of back-end developers.
Alright, you get another check box.
You ... you've got it. Martin,
we're doing well here.
These are good answers. I'm learning a lot from you, and I love to learn.
But here comes another one. Alright. So,
is AI going to take my job away from me?
Well, I think it's probably safe to say
AI is probably going to take our jobs away from us at any point. Right?
At some point, they're going to take us off the channel, replace us.
Studio is mine!
No! Oh-oh! Replace us with digital avatars ... Oh! ...
and we'll have to see if anybody who knows notices us
and they ... uh ... can tell the difference.
Probably be better.
I'm sure it would. So,
look, I mean, that ... this is the big question
that AI is taking a lot of tasks that we could do
and uh ... being able to do them pretty well, completely autonomously.
You know, we mentioned agents earlier. But I think one of the arguments is
that AI will really transform jobs
more than replace them, like ATMs,
they didn't kill banking jobs;
they just shifted tellers into, kind of, a different role.
They weren't actually just handing out the money. So,
I think, if you look at, uh
... what all of our jobs are, they're
like a bundle of different tasks.
And I think AI today and in the immediate future
could probably automate some of those tasks,
but maybe not the entire job.
It depends, though, on the job. Like,
AI is good at certain things,
so it's very good at repetitive tasks—we
were just kind of doing the same thing over and over—because
it can learn the patterns.
AI is great for pattern recognition.
It's also very good at data processing,
and it does a pretty plausible job at first draft.
So, if you need to write a document, AI
can spit out a document.
And that large language model will do a reasonable job of a first draft.
But, you know, if you were just to use that
as the final document, everyone's going to know it's AI.
So, right now, it's good at that stuff,
but it's not necessarily able to replace people in those jobs.
What AI is not so good at right now, and this may change over time,
but right now, creativity.
Maybe not empathy. I would say not especially complex reasoning. Well,
we're starting to see some reasoning improving
with certain models, but complex reasoning,
I think humans do still have the edge.
But then other stuff like physical dexterity,
like a large language model is not going to hold this lightboard pen, is it?
So, we've got that at least.
And just kind of adapting to novel situations.
But I ... you know, if ... if I had to say
what are the kind of the signs that your job is ... is vulnerable to
AI, I would say there's probably three. So,
number one is
if what you're doing is very rule-based or deterministic,
that is something that a large language model can be trained on.
It might be doing it quicker than you can at some point.
Secondly, if a lot of your work is like kind of doing
lots of documentation or stuff that doesn't require a lot of judgment,
it's a sort of uh ... writing simple knowledge-based articles, something like that, you
could see how today's models even could do a plausible job with that.
And then thirdly, I would say kind of low context tasks. So,
if you think of, uh ...
asking it to create a stock photo of a person typing on a laptop,
well, AI image generation can do a pretty good job of that today.
Uh ... but not necessarily such a good job
if it needed to take into consideration all of the context around that, that
a human would intuitively know,
like the large language model isn't going to know unless you tell it.
So, where there is, uh ... not a lot of context
that needs to be considered, there maybe AI can help.
So, if your day job kind of scores
highly in all those three areas, well,
maybe it's a good time to start upskilling.
I like your story about the uh ... ATM.
Can you ...can you hand me some money?
Well, unfortunately, it's all been automated, so no.
Hopefully your digital twin can do a little better.
We'll see about that. Okay.
We're coming over to you now. Um, now we
... we're going to be hopefully we're three for three here.
And this one, this one I ... I ... I really like. No pressure. No,
no, there's a lot of pressure here. So,
you should feel the pressure. Okay.
How do I get a career in cybersecurity?
Well, in your case, uh
...I could give a different answer, but I'll give some general answers.
In fact, I did a video on this one
because I have this question so many times.
People will put in the comments, they'll say, cybersecurity is a really cool field.
I'd like to get involved in that
and how should I get started and this sort of thing.
So, I actually did a couple of videos
and one of them was with one of my former students.
I'm an adjunct professor, so I had him join me on one of those videos.
I'd highly recommend everybody take a look at those
and see. Some of the stuff is about cybersecurity careers, but
a lot of the general advice, I think, will apply
to anyone who's interested in IT.
And links to those are down in the description below.
But that, that ... that's a ... that's a great answer.
I'll ... I'll give it to you. That was not a great answer.
All he did was just promote his other video.
I ... I know, but ... but it's the follow-up that matters. Look,
here's the thing that I ... I really need to know is, can ...
can you just ... can you get me a job?
Uh ... yeah,
yeah. So, this is one of the questions I get all the time.
And the answer is 'no'.
Please don't send me your resumes.
I'm not involved in hiring.
They wouldn't trust me with that.
Come on, are you serious? No.
Uh ... so, if you do want to look for jobs, there are places to go.
ibm.com/jobs is where we post all of those.
But I thought you already had a job, Graeme.
I .. I think I do have a job.
I think you do. Oh, that's right.
Uh, TechXchange, the conference in October in Orlando
for technologists and developers.
It's a learning conference.
It's all coming back to me now. This is why I like learning.
You know what? You guys, you guys
should go to TechXchange.
Uh ... Maybe we should. There is an idea, yeah.
Wait, wait, wait, wait, wait. Here's another idea.
Let's bring the lightboard.
The lightboard? Okay. Well,
will that fit in your carry-on?
I'm not sure it will, but, you know,
it would allow us to answer one important question,
which is what everybody asks us in the comments.
How do we write backwards?
Oh, come on, that's not really that hard. Just watch.
See?
Not a big deal.
I don't know why everybody wants to ask.
I just wrote 'backwards'.
Alright, I'm ... I'm going to go do my job.
But this is what I want you to do.
If ... if ... if you want an opportunity to meet Jeff and meet Martin, join us at TechXchange in October.
The link is also—you
don't mind if I promote another link, do you? Please.
The link is also in the description where you can learn
about the conference and register to attend.
Uh ... Alright, I ... I got to go. This is amazing.
I can't believe I actually got to be in here with you,
but I'm ... I'm going to go do my job now,
so just ... I'm going to go. Great to ... Alright.
I got to ...
Guys, how do you ... how do you get out of here?
Uh ... where ...
I got in, but ...
Yeah, good luck with that.
We've been in here for years.
We've never found a way out.
So, if you find it, please tell us.
Anyone know the exit?