Disney to Sue X Over AI Images
Key Points
- The speaker predicts that Disney’s lawyers will soon sue Elon Musk because X’s new image‑generation AI lacks any safeguards against producing trademark‑infringing depictions of Disney characters.
- Disney’s litigation history—having helped shape much of modern copyright and trademark law—means it will aggressively protect its IP, and other celebrities are likely to follow suit for unauthorized, realistic portrayals.
- Unlike other generators (e.g., Midjourney, ChatGPT, Gemini, Copilot) that employ dedicated teams to enforce copyright, trademark, and safety policies, X’s model was released with virtually no guardrails.
- This unchecked freedom invites “bad actors” who can create misleading, photorealistic images that could spread misinformation or depict violence, damaging both public trust and the platform’s reputation.
- The lack of responsible moderation positions X as a liability risk, potentially prompting legal action and broader societal backlash against the company.
Source: https://www.youtube.com/watch?v=fOx5qeb0Bac
Duration: 00:07:24
Sections
- 00:00:00 (https://www.youtube.com/watch?v=fOx5qeb0Bac&t=0s) Disney Suing Elon Over AI: The speaker warns that X’s new image‑generation AI, which lacks safeguards against unauthorized use of Disney characters and celebrity likenesses, will soon trigger lawsuits from Disney and other public figures.
Full Transcript
I do not often make specific, concrete predictions about the future in tech, because I think that predicting the future is inherently a fool's errand. But I'm going to make one now.

The lawyers for the Disney Corporation are going to be suing Elon Musk very, very shortly, and the reason why is that there are, as far as we can tell, absolutely no guardrails on the image-generation AI that his company X just released. If you release an image-generation AI that allows you to show Disney characters like Mickey Mouse doing things that are against the brand, without consent from the corporation, then you, as the company that built the model, are headed straight to court to talk with Disney's lawyers.

Disney's lawyers literally wrote
the book on trademark and copyright. A lot of our copyright law comes from the Disney Corporation and the way it has worked with the court system to protect the intellectual property of Disney characters. I'm calling Disney out specifically here, but I'm not saying they're the only ones. You are going to have a lot of individual celebrities coming for X as well, because another guardrail that just doesn't seem to be there in X is one against building images that show celebrities doing things they've never actually done. That's going to be a problem, especially as these images are near photorealistic: people are going to look at them and say, "Oh, this is what this political figure did," when they didn't do it, or "This is what this musician did," when she didn't do it. And I recognize that part of the
value proposition that X is building is that it is a place where, almost in a libertarian sense, you are the one responsible for your choice of speech, and that you should be free to express yourself. So I wasn't surprised to see that the model had virtually no guardrails. But consider the difference from the very structured guardrails on other image generators and other large language models out in the wild, like Midjourney, ChatGPT, Gemini, and Copilot. These are all models where entire teams of highly paid professionals work to ensure that they're not used in an unsafe manner, and to ensure that they're not used in a manner that violates copyright or trademark. If you decide to build a highly capable model now, after all of those guardrails are in place for other models, and just ignore all of that and say, "You can do whatever you want," you are going to get into a situation where no one will believe you if you say you can't do it. It will look more intentional than it would have eight, twelve, or twenty-four months ago,
and you are going to attract the kinds of bad actors that society at large will view as reprehensible. So you're doing some brand damage, right? I've seen pictures of violence on Twitter, on X, that come from that model's image generation, pictures that were shocking and should not just be rolling through a social network without any kind of consent, without any kind of guardrails at all. And, you know, most of the people who are using these networks are supposed to be grown-ups anyway, right? I get that this is a grown-up environment. That does not mean you can violate IP and trademarks and expect to get away with it for long.

So I'm paying attention to this, because typically when these things happen, they don't stay in the wild long. I expect it is going to be a situation where there will be some sort of injunction, some sort of ruling from a judge that says either you need to put guardrails into this model now, or you need to pay for damages and take the model down, something like that. That's my
guess.

The problem is that guardrails have to be deeply rooted to work in LLMs; you can't just slap something on at the very end, at the prompt level. Yes, Apple's prompts leaked, and there is some prompt guarding there, but you also have to think about making sure that the training materials you're using are not materials that give the LLM, or the image-generation tool, features inside its latent space that let it produce harmful material easily. Now, these tools are trained on so much data that, I understand, you cannot catch every single thing they look at. But there is something to be said for making sure you are not intentionally training them on the kinds of material that are going to cause real problems down the road for you, the brand that built the model.

This actually came up, you may recall, two or three months ago, when Google rolled out their AI answers system without full guardrails on it, and you started to get really hateful Google answers coming through that seemed to be highly correlated with similar answers on Reddit. That is the kind of thing where the guardrails weren't fully baked in, and Google went back and fixed it. We don't talk about that as much anymore because Google's team went back and addressed it: they had tried to put guardrails in but hadn't put enough in, they had a whole team on it, and they fixed it. I don't think it will be quite that fast this time. If you have intentionally built the entire structure of your business, your brand, and your org around not putting guardrails in, it's going to be hard to add them now.

So I don't know what's going to happen for Elon and X, but I am sure paying attention, because we have not seen a GPT-4-class model, a highly sophisticated image model, come out this publicly with this few guardrails in a while, maybe since the launch of ChatGPT. Yes, there are always ways to jailbreak; yes, there are models on other parts of the web, not as publicly available, that are intentionally jailbroken; I know those things exist. But putting it in a public brand, where you own the brand and you have hundreds of millions of people in the app, is a different thing, and I think the lawyers of the Disney Corporation are going to agree. So I will be really curious to see how this goes.
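The transcript's distinction between shallow, prompt-level guardrails and deeper training-time curation can be sketched in a few lines. This is a hypothetical illustration only, not any real system's filter: the blocklist, function name, and terms are invented for the example. A check bolted on in front of the model catches only prompts that name a protected term outright.

```python
# Illustrative sketch of a naive prompt-level guardrail.
# BLOCKLIST and prompt_allowed are hypothetical, for demonstration only.

BLOCKLIST = {"mickey mouse", "disney"}  # invented example terms

def prompt_allowed(prompt: str) -> bool:
    """Reject prompts that contain any blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

# A direct request is caught...
assert not prompt_allowed("Draw Mickey Mouse robbing a bank")
# ...but a paraphrase that never names the character slips through,
# because the model's training data still lets it render the character.
assert prompt_allowed("Draw a cheerful cartoon mouse with red shorts and round black ears")
```

A paraphrase that avoids the blocked terms passes the check, which is why the transcript argues that guardrails also have to reach the training data: if the model never learned to render a character in the first place, no prompt phrasing can elicit it.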