Video of a lawyer being busted mid-hearing for using ChatGPT to write briefs
Posted on 8/24/25 at 9:52 am
The video is about 14 minutes long, so here's a small breakdown.
The first portion is primarily the defense attorney going through how the brief cited cases that aren't real and mis-cited real ones.
You can go to 6:39 to skip to just the judge addressing him (and not the "reveal", so to speak of the first part of the video).
You can go to 8:53 where the lawyer who was busted gets to respond. He does not admit his folly.
Posted on 8/24/25 at 9:56 am to SlowFlowPro
quote:
17-year legal journey epitomizes a dedication to excellence. His proficiency as a plaintiffs' litigator is reflected in the meticulous handling of over a thousand cases across diverse legal realms. Holding memberships in multiple state bars and licensed in the US Supreme Court and 27 federal courts, Tristan's commitment to precision and integrity defines him as a leader in the legal arena. Clients can trust in his unwavering pursuit of excellence for unparalleled representation.
His bio.
Man. That hurts.
Posted on 8/24/25 at 9:57 am to SlowFlowPro
One caught, thousands not.
Welcome to the future.
Posted on 8/24/25 at 9:58 am to SlowFlowPro
Stupid people assume that generative AI has reached a level where it can consistently produce the same quality of work that humans do. It's just a glorified search engine that synthesizes and communicates what it finds in the material it's been trained on. It has zero incentive (because it's a lifeless machine) to ensure its accuracy, so making shite up through misinterpretations or misguided findings is always a possibility.
What’s especially concerning to me is the research that’s come out about the impact on the brain when using AI too much. I imagine the lawyer in this video has literally lost his ability to read, understand, and synthesize legal materials because he’s used AI too much. I bet he was once a decent lawyer and now can’t bring himself to do the long work because he’s so used to letting AI do his thinking for him.
That’s not just conjecture. Emerging research is showing the deleterious effects AI is having on us.
Posted on 8/24/25 at 9:59 am to StringedInstruments
quote:
I imagine the lawyer in this video has literally lost his ability to read, understand, and synthesize legal materials because he’s used AI too much.
I’d bet he had an associate do the work and did not check it.
My guess is that associate is back on the job market.
Posted on 8/24/25 at 10:00 am to forkedintheroad
quote:
One caught, thousands not. Welcome to the future.
It’s one of the first things we look for when we get a brief. The question is how you handle it when you find hallucinations.
Posted on 8/24/25 at 10:03 am to StringedInstruments
How do we know you didn’t just use AI?
Posted on 8/24/25 at 10:06 am to RanchoLaPuerto
quote:
I’d bet he had an associate do the work and did not check it.
Naw. That guy is a solo firm. No way he has associates. Maybe a "paralegal" who was formerly a stripper that he represented in the past.
Posted on 8/24/25 at 10:07 am to SlowFlowPro
quote:
Maybe a "paralegal" who was formerly a stripper that he represented in the past.
That . . . blows.
Posted on 8/24/25 at 10:07 am to NotoriousFSU
quote:
How do we know you didn’t just use AI?
You're an inanimate object
Posted on 8/24/25 at 10:13 am to SlowFlowPro
Also sounds like he blew the deadline for RFAs.
These are all indices of an overloaded lawyer.
Posted on 8/24/25 at 10:18 am to SlowFlowPro
That's horrifying for all involved.
Pretty simple solution though. Either don't use it or verify what it gives you. It's extremely easy to find 99% of cases cited if you have a legal research program (if they are real).
This post was edited on 8/24/25 at 10:20 am
Posted on 8/24/25 at 10:24 am to StringedInstruments
quote:
Stupid people assume that generative AI is to the level that it can consistently do the exact same quality of work that humans can do. It’s just a glorified search engine
This is true. Nowhere close to ready for prime time.
I asked for a ready to drink protein shake with no artificial sweetener and no stevia.
I looked up the drinks it recommended and they had artificial sweetener. Oops.
I also asked it to do a 16-team snake draft of players in the MLB Hall of Fame. It picked multiple players not in the Hall.
Posted on 8/24/25 at 10:48 am to SlowFlowPro
quote:
cited cases that aren't real
How is it finding cases that aren't real? Is it pulling from fictional novels or movies?
Posted on 8/24/25 at 10:54 am to RougeDawg
It’s a phenomenon called hallucination. At its core, these programs build sentences by repeatedly choosing the word most likely to follow the words that came before.
If it doesn’t know the answer to a question, it sometimes gives answers that seem plausible, but are just random BS.
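To make that concrete, here's a toy sketch (my own illustration, not how any real model works; the word table and names are made up) of pure next-word prediction. It strings words together based only on what tends to follow what, with nothing that checks whether the finished "citation" names a real case:

```python
import random

# Hypothetical toy word-pair table: for each word, the words that
# plausibly follow it. A real model learns billions of such statistics.
bigrams = {
    "Smith": ["v."],
    "Jones": ["v."],
    "v.": ["Smith", "Jones", "United"],
    "United": ["States"],
}

def generate(start, length=3, seed=0):
    """Build a phrase by repeatedly picking a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Every output looks locally plausible ("X v. Y"), but at no point
# does anything verify the citation exists in any reporter.
print(generate("Smith"))
```

The point of the sketch: the generator optimizes for "looks like a case name," not "is a case name," which is why a fabricated citation reads as smoothly as a real one.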
Posted on 8/24/25 at 10:56 am to SlowFlowPro
quote:
SlowFlowPro
Are you an actual, real person?
Posted on 8/24/25 at 11:05 am to SaintsTiger
quote:
Are you an actual, real person?

Posted on 8/24/25 at 11:11 am to SlowFlowPro
quote:
where the lawyer who was busted gets to respond. He does not admit his folly.
In a layman's observation, it appears the Judge did a good job both in temperament and findings.
Posted on 8/24/25 at 11:14 am to RougeDawg
quote:
How is it finding cases that aren't real? Is it pulling from fictional novels or movies?
The concerning part isn't just that it "hallucinates" information. The most concerning part is that it often doubles down on that fictitious information when challenged or questioned about it. It has created scientific research out of whole cloth, often attributed to real researchers or experts in a field, and given what were supposedly links to journal publications. Other times it just creates a fictional scientist to go along with the bullshite research. When challenged, it will double down and defend its information.
Posted on 8/24/25 at 11:31 am to NotoriousFSU
quote:
How do we know you didn’t just use AI?
I guess we’ll never know.