AI Dismantles Institutional Information Asymmetry
Key Points
- By using Claude, the family identified and eliminated $162,000 in erroneous Medicare charges, cutting a near‑$200K hospital bill down to about $30K.
- This case illustrates how AI can dismantle institutional information asymmetries, exposing hidden billing codes and regulations that institutions rely on to overcharge vulnerable consumers.
- Similar asymmetries exist across sectors—debt collection, funeral services, insurance, and education—where complex rules are deliberately opaque to extract higher fees from those who can’t navigate the maze.
- AI dramatically lowers the cost and time of uncovering such abuses, turning what once required thousands of dollars in expert help into a few hours of personal effort, provided users treat it as a research tool rather than formal legal advice.
Sections
- AI Battles Medical Billing Overcharges - By using Claude to dissect a hospital invoice, a family uncovered $162,000 in Medicare violations, slashing an almost $200,000 bill and illustrating how AI can dismantle institutional information asymmetry in healthcare and beyond.
- AI as Weapon Against Institutional Abuse - The speaker explains how large language models can be leveraged to decode complex legal and billing documents, exposing institutional malpractice and leveling the informational playing field for individuals.
- Leveraging LLMs for Dispute Prioritization - The speaker outlines how using large language models to identify high‑impact disputes, locate the applicable regulatory rulebooks, and pinpoint clear categorical violations can elevate a claimant’s standing in adversarial negotiations.
- Using Benchmarks and AI for Stronger Claims - The speaker argues that grounding disputes in objective standards—like Medicare rates or property market values—creates a compelling anchor for negotiation, while AI can rapidly flag potential violations, leaving the user responsible for verifying and assuming legal liability.
- AI-Enhanced Negotiation Frame Control - The speaker explains how AI can reframe emotions, detect and reject opponents' framing, and diagnose response patterns to strategically win billing disputes.
- AI Challenges Institutional Information Monopoly - A speaker argues that AI can level the playing field by breaking institutional control over information, ensuring fair pricing, and promoting a more just society.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=h5AJr3bQGaY](https://www.youtube.com/watch?v=h5AJr3bQGaY)
**Duration:** 00:18:04

- [00:00:00](https://www.youtube.com/watch?v=h5AJr3bQGaY&t=0s) **AI Battles Medical Billing Overcharges**
- [00:03:34](https://www.youtube.com/watch?v=h5AJr3bQGaY&t=214s) **AI as Weapon Against Institutional Abuse**
- [00:06:58](https://www.youtube.com/watch?v=h5AJr3bQGaY&t=418s) **Leveraging LLMs for Dispute Prioritization**
- [00:10:42](https://www.youtube.com/watch?v=h5AJr3bQGaY&t=642s) **Using Benchmarks and AI for Stronger Claims**
- [00:14:13](https://www.youtube.com/watch?v=h5AJr3bQGaY&t=853s) **AI-Enhanced Negotiation Frame Control**
- [00:17:25](https://www.youtube.com/watch?v=h5AJr3bQGaY&t=1045s) **AI Challenges Institutional Information Monopoly**
This is a true story. A man died of a
heart attack in June. Along the way, he
spent four hours in the emergency room.
He racked up $195,000 in medical bills. His
brother-in-law got that bill, took it to
Claude, the AI, and had a conversation
with Claude about which of the billing
codes were legitimate. It turned out
that there were $162,000
in Medicare billing violations on that
bill. The hospital couldn't defend it.
The hospital dropped it and the bill
came down by $162,000.
So, the family owed a little bit over
$30,000, which is still not great, but
it's a lot better than almost $200,000.
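For concreteness, the arithmetic behind the figures in the story:

```python
# Figures from the story: the total billed amount, the charges the hospital
# dropped after the Medicare violations were flagged, and what remained.
billed = 195_000
violations_dropped = 162_000
remaining = billed - violations_dropped
print(remaining)  # 33000, i.e. "a little bit over $30,000"
```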
This is actually not a story about this
one family. It's a story about how I am
seeing over and over and over again that
AI is adding value to our lives by
overcoming institutional information
asymmetry, which is a fancy way of
saying the hospital was counting on the
widow and the family not knowing the
billing codes. The hospital was counting
on them not knowing Medicare bundling
rules. The hospital was counting on them
not having $3,000 to hire a medical
billing advocate. So, they would just
pay or go bankrupt trying. That's less
true now with AI. And I want to give you
a sense of how broad that change is.
It's not just for medical billing. Debt
collectors are counting on you not
knowing the statutes of limitations.
Funeral homes are counting on you not
knowing FTC regulations when you're
grieving. Insurance companies are
counting on you not to understand policy
language when they deny your claim.
School districts are counting on you not
knowing procedural timelines when your
kid needs services. Institutions do not
accidentally make things confusing. They
construct information asymmetry on
purpose because complexity is how you
charge differential prices to different
people based on your ability to navigate
the system. That is what is going on.
The institutions are intentionally
creating mazes that people can run
through that will ring bells for money.
And that has worked for a long time
because there has been no map to the
maze. AI changes that. And that matters
for every single one of us because we
all have to face the school system. We
all have to face the medical system at
some point, the legal system at some
point. Investigation used to cost
thousands of dollars in these
situations, whether it was medical or
whether it was a question you had for
the school and you suspected wrongdoing,
etc. And I wasn't kidding when I said
that the family might not have $3,000 to
pay a medical billing advocate. That is
how much they cost, right? Lawyers can
charge hundreds of dollars an hour just
to understand your case. Those costs
make fighting back really, really
expensive for most disputes and these
institutions count on it. AI collapses
that cost from thousands to like three
hours of your time. AI makes that cost
disappear, but only if you understand
that you are not using AI to get advice.
This is where I actually agree. When
model makers emphasize, do not go to the
AI for advice. I want to reframe it.
I've never seen anyone do this. I want
to suggest to you that you
are using AI to help you conduct an
institutional-grade investigation and
there's a methodology to how that works
that I want to lay out for you. And so
if you're in a situation and we all will
be at some point where you're facing one
of these institutions and they are
betting on you to be confused and angry
and grieving and upset and not focused,
AI can help. And the key thing that I
want to lay out for you is that there
are eight specific capabilities that
make LLMs uniquely powerful in what I
call these adversarial contexts. Right?
An adversarial context is where the
institution is out to bill you, out to
get you in some sense. And AI becomes a
weapon that helps you level the playing
field. It helps you to even out
that informational asymmetry that these
institutions depend on to bill you. And
the nonobvious principles underlying how
we use AI are what makes this
successful. This is not as simple as
asking Claude for advice. Partly because
model makers are now training AI to be
very careful in that situation because
of the liability, but also because you
have to be more sophisticated in your
approach to actually win when there are
real dollars on the line. So let me lay
out how I actually think about it. The
first thing you have to do if you
suspect malpractice from one of these
institutions, if you suspect they are
trying to take advantage of you, have
the LLM parse the technical framework
that humans are meant to find
intimidating. Have the AI read Medicare
billing rules or FDCPA statutes or IEP
regulations or insurance policy
appendices, technical documents that are
designed on purpose to be unreadable by
humans. This matters because
institutions are betting you won't use
an AI to do this. But AI is not
intimidated by jargon. It decodes it
really quickly because it's trained
on it. You can audit these institutions'
compliance with their own rules without
subject matter expertise. And that is
the first place that I would start if I
suspected an issue. Number two,
principle number two: use the LLM to
cross-reference multiple authority
sources. So check whether CPT codes were
billed correctly against CMS bundling
rules and Medicare fee schedules and
setting requirements. This
matters because violations can hide in
the gaps between the documents. A
hospital can bill procedure X in setting
Y differently than in setting Z. You
have to check how bundling rules apply when
multiple procedures are put together.
You have to look at the fee schedule.
This kind of multi-document pattern
recognition is extremely hard for
people. We can't hold it in our heads
well, but it turns out AI is really,
really, really good at it. And so,
because it's easy to slide things in in
between documents, in the gaps between
documents, in the areas that are sort of
gray or gaps, use AI to crosscheck
multiple authority sources in the
relevant area that you're worried about.
Right? Maybe it's not medical, maybe
it's something else. The prompts
that I'm building cover debt
collection. They cover education.
There are other things too, because I
think this is actually a wider issue,
and what I'm trying to call out is
that fundamentally institutions are good
at practicing informational
asymmetry to bully people, and AI gives
us our best weapon ever to fight back.
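To make that cross-checking concrete, here is a minimal Python sketch; every code, bundling pair, and fee cap in it is an invented placeholder, not real CMS data:

```python
# Sketch: cross-check billed line items against several authority sources at
# once. All codes, bundling pairs, and fee caps below are invented
# placeholders, not real CMS data.

# Source 1 (hypothetical): bundling rules -- pairs that must be billed as one.
BUNDLED_PAIRS = {("CODE-A", "CODE-B")}

# Source 2 (hypothetical): fee schedule -- maximum allowed charge per code.
FEE_SCHEDULE = {"CODE-A": 1200.00, "CODE-B": 300.00, "CODE-C": 950.00}

def audit(line_items):
    """line_items: list of (code, charge). Returns flags to verify by hand."""
    flags = []
    billed = {code for code, _ in line_items}
    for a, b in BUNDLED_PAIRS:
        if a in billed and b in billed:
            flags.append(f"{a}+{b} billed separately despite bundling rule")
    for code, charge in line_items:
        cap = FEE_SCHEDULE.get(code)
        if cap is not None and charge > cap:
            flags.append(f"{code} billed {charge:.2f}, schedule caps it at {cap:.2f}")
    return flags

print(audit([("CODE-A", 1500.00), ("CODE-B", 300.00)]))
```

The point of the sketch is structural: each check consults a different source, and a violation surfaces only when the sources are read together, which is exactly the gap-between-documents pattern described above.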
Principle number three: have the LLM
match institutional register. So register is
the idea that the way we speak language
matters for our ability to navigate the
system. Right? If you can speak in a
formal register of English, you are more
likely to get what you want.
What's convenient about AI is you don't
have to speak the formal register of
English. You don't have to speak
legalese, right? You can get LLMs to draft
correspondence that reads like it came
from someone who does this
professionally. They can include regulatory
citations. They can calibrate escalation
threats and decide the best approach.
Institutions triage disputes by
sophistication because more
sophisticated disputes are more likely
to be winning disputes and they don't
want you to win and so they would rather
settle. Right? So if there's an angry
consumer letter, the phone company can
ignore that safely. If there is a
documented violation with a professional
cadence, that's a very very different
thing. And so if you can signal that I
understand the system by using AI to
write that that is a very powerful way
to push yourself to the top of the heap
in an adversarial situation like this.
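A minimal sketch of what such a drafting prompt might look like; the template wording and placeholder names are hypothetical, not a tested or recommended prompt:

```python
# Sketch of a drafting prompt asking the model for formal register plus
# explicit citations. The template wording and placeholder names are
# hypothetical, not a tested prompt.
DRAFT_PROMPT = """Draft a dispute letter in a formal professional register.
Context: {summary_of_findings}
Cite each rule by its exact section number: {citations}
Tone: factual and measured. State the documented violations, the remedy
requested, and a response deadline. No emotional language."""

letter_prompt = DRAFT_PROMPT.format(
    summary_of_findings="two line items appear to violate a bundling rule",
    citations="(insert only citations you have verified yourself)",
)
```

The goal is to signal sophistication: a documented violation delivered in a professional cadence gets triaged very differently than an angry consumer letter.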
Principle number four: use the LLM to figure
out what rule book governs
your domain. What technical framework
governs hospital billing? Medicare rules.
What governs debt collection? The FDCPA
and state statutes of limitations. What
governs special ed services? IDEA and case
law. This matters because you cannot
audit compliance without knowing which
rule book applies where. And in some
cases, as I'm describing it, there are
multiple rule books. And so most people
don't know that every domain has
documented standards, that there may be
multiple standards, or where to find
them. So the unlock is asking what
intuitively seems an unfair question:
show me where they violated explicit
rules. That means getting the AI to go
hunt up the rule book, tell you what it
is, find the current copy, dig into it,
and compare it to what is going on in your
situation. Principle number five: use
LLM to find true categorical violations,
not just marginal disputes. And so what
you want to do is you want to identify
very clean, clear, binary violations of
the rule. Either they did X or they
didn't rather than subjective disputes
like it seems expensive. This matters
because 'your bill is too high' is an
opinion that they can safely ignore. But
'you billed bundled codes separately,
violating CMS regulation X' is a
categorical violation that they can't
defend. Say
it's a funeral situation, right? Instead
of saying the casket price seems really
high, you can say the FTC Funeral Rule,
Part 453, prohibits requiring purchase
from a particular provider. Wow, okay,
that's much more serious. Not 'my kid
needs more support,' but 'the evaluation
shows standard scores of X, comparable
students receive Y services, and the
proposal fails the FAPE
standard under IDEA.' In other words,
LLM can help you find where the
particular situation you're in breaks a
category that exists in that rule book.
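One way to see the distinction is as a binary test: a categorical claim names a rule plus a yes/no fact, while a subjective claim does not. A sketch, with invented rule names and facts:

```python
# Sketch: separate binary, rule-based violations from subjective complaints.
# The rule names and facts below are invented for illustration only.

def classify(claim):
    """A claim is categorical only if it names a rule and a yes/no fact."""
    if claim.get("rule") and claim.get("binary_fact") is not None:
        return "categorical"  # "either they did X or they didn't"
    return "subjective"       # "it seems expensive" -- safely ignorable

claims = [
    {"text": "the casket price seems really high"},
    {"text": "required purchase from their own provider",
     "rule": "FTC Funeral Rule (hypothetical citation)",
     "binary_fact": True},
]
labels = [classify(c) for c in claims]
print(labels)  # ['subjective', 'categorical']
```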
And for you and me, it's hard to read
the hundreds of pages of the rule book.
You actually have to get into a place
where the AI can help you to read that
rulebook. Do you see how much more
sophisticated this is than just saying,
"Hey, can you give me advice? I think
this is expensive." I'm not saying you
won't get progress with that. I'm saying
that if you give the AI these tasks, by
the way, in this order, you are going to
get farther with serious investigations.
You are going to get farther when real
money is on the line. Because if you
notice, I'm going through these eight
principles and they build on each other.
You find categorical violations,
principle five, when you have looked at
the rule book, principle four, you get
the idea, right? It builds on itself. So
number six: use AI to calculate
objective anchors from authoritative
standards. That sounds complicated, but
basically you want to establish a
defensible position based on published
benchmarks, based on Medicare
reimbursement rates, based on comparable
property sales, based on required
clinical guidelines. Your position
should not be I can't afford this or
this doesn't seem fair. It needs to be
what the standards establish. As an
example, let's say Medicare would
reimburse $30,000 for X procedure.
That's the offer. The hospital can't argue that
Medicare rates are unreasonable without
admitting that their Medicare
business loses money, which they're not going
to do. And so when you can establish a
benchmark, you have a leg to stand on
that is much stronger than I just don't
feel like paying this. Property taxes:
let's say comparable sales average 420K,
and the assessment methodology
requires fair market value; therefore,
420K. So the unlock is moving from 'hey, I
think that the property value is wrong'
to 'I have an average of all of the
property values around me, and I can tell
you you are way off by X based on
documented standards.' That is much
stronger. And so the more you can shift
the conversation to objective anchors
that align with authoritative standards,
the less your position looks subjective
and the more institutions have to listen
to you. Principle number seven: AI
collapses investigative costs while
leaving you in control of verification.
And so what the AI does is identify
potential violations very quickly, and
you have to verify the findings that
carry risk. And so this is where I think
it is important for me to call out what
the model makers are doing when they say
AI doesn't give medical or legal advice.
They don't want you to think that the AI
can take the legal liability of
verifying the findings. They want you to
be the one as the advocate in your
situation that is responsible for
verifying what is really going on here.
I'm actually fine with that because the
AI can do everything up to that step,
right? The AI can identify potential
violations. It can explain why they're a
violation. It can explain how you
respond. It can draft a proposal letter
for you and you can then assess that and
say, "Is this actually correct?" And I
recommend you do so. Right? We do not
want to be in a position where we are
citing something that is not a real
citation. And it is very easy nowadays
to take regulation X, type it into
Google and just check that we are citing
a real thing. If you are in a situation
where the stakes are high, you got to do
that. But that takes 2 seconds and it's
just you instead of a medical billing
advocate or whatever it is that costs
hundreds or thousands of dollars. In
normal AI use, being directionally fluent
and directionally accurate is
fine. In adversarial contexts, the
stakes are higher. Wrong citations will
signal you don't know what you're
talking about. And so it's important for
you to use LLMs to collapse
investigation costs, but make sure you
stay in control of the verification
step. You need to enable AI to
investigate at a truly institutional
scale. As long as you can ensure quality
on what AI brings back, that gives you
the best of both worlds, right? You save
money. AI can do all of that scaled up
investigation and then you check the
final outputs. Finally, principle number
eight, let AI draft verification prompts
to catch its own mistakes. Yes, you
actually can fact check a dispute
letter. You can flag citation errors.
You can check incorrect code
interpretations. People don't do this.
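As a sketch of that verification step, here is a small Python helper that pulls regulation-style citations out of a draft letter so each one can be checked by hand before anything is sent; the pattern and the example text are illustrative, not exhaustive:

```python
import re

# Sketch: extract regulation-style citations (e.g. "42 CFR 411.15") from a
# draft letter so each one can be verified manually before sending.
# The pattern and the example text are illustrative, not exhaustive.
CITATION_RE = re.compile(r"\b\d+\s*(?:C\.?F\.?R\.?|U\.?S\.?C\.?)\s*\d+(?:\.\d+)*")

draft = (
    "Billing these procedures separately appears to violate 42 CFR 411.15, "
    "and the collection practice is limited by 15 USC 1692."
)
citations = CITATION_RE.findall(draft)
print(citations)  # ['42 CFR 411.15', '15 USC 1692']
```

The extraction is the easy part; the point is the manual step that follows, where you confirm each extracted citation actually says what the letter claims.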
Most people who are complaining about AI
hallucinations have never written a
verification prompt. I can tell you. But
if you're going to use AI to do
institutional grade investigation,
perhaps you should use AI to install
safeguards. I'm not saying this absolves
you. Remember I said at the start you
are still in control of verification but
using verification prompts
can help you to go faster, and it's
something that I feel like people
ignore. So I wanted to mention it. Now
what are the underlying sort of bedrock
understandings that make all of this
hang together and work? We've gone
through the eight principles but I just
want to quickly call out the first
nonobvious thing going on here is that
investigation must precede negotiation.
I know your instinct when you get an
unfair thing is to start negotiating,
but AI can help you by reframing your
emotions and getting you into investigation
mode. And that's really important to
actually win. The next non-obvious
thing, you want to control the frame of
the conversation. So the hospital can
say, "We offer charity assistance to
people that can't afford their billing."
But that means they're framing the
pricing as legitimate. Your reframe is
saying, "We don't seek charity. We are
negotiating based on documented billing
violations." Well, now you're moving the
hospital onto weaker ground. The
hospital has to defend why they broke
the rules. AI can help you to recognize
framing attempts and to draft responses
that refuse the frame. That's another of
those hidden things underneath that you
can follow the eight principles, but you
should understand you're really
following them to establish frame
control. Finally, remember that
responses are diagnostic. A lot of
people think they send a letter and
either the institution settles or they
don't, and you get a binary outcome. What
actually happens tells you about the
strength of your position and you can
use AI strategically to understand that.
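The heuristic here can be written down as a simple decision table; this sketch just encodes the three cases described in the transcript:

```python
# Sketch: encode the three diagnostic cases as a lookup, so the
# institution's response maps to a next step rather than a verdict.

def diagnose(response_kind):
    outcomes = {
        "fold": "position strong: take the win",
        "ignore": "bluff or weak position: re-evaluate your evidence honestly",
        "counter": "negotiation territory: decide if the gap is worth fighting for",
    }
    return outcomes.get(response_kind, "unknown: gather more information")

print(diagnose("counter"))
```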
So if they fold immediately, they can't
win. Take the win. If they ignore you,
they're either bluffing or your position
is weaker than you thought. And you need
to evaluate honestly. If they counter
reasonably, you are in negotiation
territory and you can decide if there's
a gap worth fighting for. So understand
that you are entering a
negotiation arena where the information
coming from the other party is something
you can use as intelligence with AI to
figure out your position. So where does
all of this leave us? I've given you
eight principles to think through
adversarial prompting, adversarial
negotiation with AI. I want to sort of
go back to why we're doing this. We live
in a system where institutions have
historically had an exclusive monopoly
on complex information. They don't
anymore. AI levels the playing field on
information complexity. But
unfortunately, most people don't know
how to use AI to actually level that
playing field. Because just like
everything else in AI, it's how you use
it that matters. If you just say, "Hey,
Claude, give me advice on this bill."
You are, I guarantee you, going to get
much less value than if you methodically
follow this step-by-step process when dealing
with an adversarial situation because
the way we use AI shapes our ability to
access the expertise that is inside the
parametric weights of the model. The
model has this expertise. It also has
the tools to go and get current
documents. But if you don't know how to
ask for it, if you don't know what steps
to take, you're stuck with a very
generic, "Hey, can you help?" I want to
make sure that you have the tools to
speak to the greatest machine
intelligence we have ever made so that
you can use that intelligence as a tool
to solve problems that we have never
been able to solve as a species. As far
back as I can look in history,
institutions have more power partly
because they manage complex information.
We are at a point where individuals can
level that playing field. And so this is
much bigger than a story about someone
being charged a ridiculous amount of
money by an institution over a visit to
the ER. This is a story about
institutions everywhere trying
desperately to hold on to a monopoly on
information that AI is eating away at. Join
the revolution, right? Let's
actually make sure that what we are
charged, that what we are offered
reflects the fair standards that are
written down. AI helps us get there. AI
helps us get to a more just world. So
there you go. Good luck.