re: Lawyers of the OT, do y’all use AI?
Posted on 7/31/25 at 7:06 pm to Mingo Was His NameO
quote:
For discovery, document and ESI review?
We do have some machine learning and OCR “ai” that we use that works pretty well, but that's not really AI.
This post was edited on 7/31/25 at 7:11 pm
Posted on 7/31/25 at 7:07 pm to arseinclarse
Hell no. AI sucks at the law. It's unbelievable how wrong it is most of the time and it doesn't ever disclose its degree of certainty.
This post was edited on 7/31/25 at 7:07 pm
Posted on 7/31/25 at 8:27 pm to arseinclarse
Yes, but
- A lot of it is for ancillary stuff - double checking a calculation of some sort, figuring out a common practice/situation somewhere I'm unfamiliar with (another JXN, country).
- Usually I'm either starting from scratch and it's just a starting point, or I know a good bit and I'm double checking or trying to see if AI can open some new angles. I never rely solely on AI.
- It's helpful for me to use AI to get a base understanding of things that are ancillary to my practice and which arise but which I'm not an expert in (ie, specific tax/regulatory/similar issues). Which I wouldn't use to advise a client, but more so to have an understanding of an issue I know is going to be a topic addressed by tax counsel on a deal call, etc.
Posted on 7/31/25 at 8:36 pm to arseinclarse
So you were in the 24th at 9 a.m. today?
Do you find the most amazing part: that the lawyer claimed it was okay to use Google and ChatGPT and Copilot to check each other, instead of Westlaw or Lexis or even Fastcase? Or that she doubled down on things after being questioned? Or what?
ETA:
Fun day for the first-year associate on the other side, I bet.
This post was edited on 7/31/25 at 8:41 pm
Posted on 7/31/25 at 9:00 pm to arseinclarse
No- seen people burned by judges for citing fake cases etc
frick that
Posted on 7/31/25 at 9:35 pm to arseinclarse
Hell no. Know a girl who was recently shitcanned over it.
LLMs are really fricking bad at actual legal work. Research is practically guaranteed to be fake, and even if it's not, you'd only know if you spent the time to check everything, which cuts down on whatever time savings you get from using AI.
I’ll pop a basic legal question into google every now and then and the AI generated answer is wrong all the damn time. It’s biased to give you what it thinks you want to hear and nearly every source it cites says something other than what the AI generated summary says.
This post was edited on 7/31/25 at 9:37 pm
Posted on 7/31/25 at 9:36 pm to arseinclarse
As a general counsel in a tech company with limited resources and headcount, it’s been a lifesaver. I use it how I would abuse a kid right out of law school making $60k.
“Hey we’re doing a partnership deal with XYZ Corp. in Q4 and we need an MOU drafted by next week. Nah we don’t have any of the commercials nailed down, we just need to get something signed!” First move is to an AI companion to put together a skeletal template.
My AR team calls me and says these 5 clients are between 68-110 days past due on their invoices and they want to send a notice of material breach + suspension? ChatGPT can put together a letter in 10 seconds which I can then just put each client’s name and notice contact on, sign it, ship it.
Would I ever give it confidential/proprietary information or PII? No.
Would I ever rely on it for legal research, case citation etc.? frick no. Not without independently verifying its conclusions on my own…and potentially with an outside specialist firm if budget allows.
ETA: I recently implemented a CLM system so that I could dump all our contracts in, have the native AI tool run and pull meta data for reports for my board/PE group. Their implementation people told me, “the more you feed it, the more it’ll learn!” So we dumped about 160,000 contracts in, and the AI did a fricking horrendous job of even just pulling out the party names. I was so disappointed we wasted money on that shite.
This post was edited on 7/31/25 at 9:42 pm
Posted on 7/31/25 at 9:37 pm to arseinclarse
quote:
Witnessed a lawyer get exposed this morning for her use of AI hallucination cases.
Hell, a judge recently had to retract an opinion because it was full of AI shite.
Posted on 7/31/25 at 9:44 pm to JohnnyKilroy
quote:
I’ll pop a basic legal question into google every now and then and the AI generated answer is wrong all the damn time. It’s biased to give you what it thinks you want to hear and nearly every source it cites says something other than what the AI generated summary says.
This. My biggest problem is similar. I usually can't nail down a proper cite for what AI is telling me from the list of resources it cites as authority for its conclusion.
Posted on 7/31/25 at 9:47 pm to Garfield
This is about right. It can be useful as a better, more powerful search engine, but you really have to tardwrangle it.
Go ask ChatGPT, Gemini, and Grok to give you the lay of the land for whether surcharging is permitted in each of the 50 states. Certainly a nuanced question, sure, but all 3 will come back with inconsistent answers, and none of them will be entirely correct.
Posted on 7/31/25 at 9:47 pm to Pettifogger
quote:
A lot of it is for ancillary stuff - double checking a calculation of some sort, figuring out a common practice/situation somewhere I'm unfamiliar with (another JXN, country). - Usually I'm either starting from scratch and it's just a starting point, or I know a good bit and I'm double checking or trying to see if AI can open some new angles. I never rely solely on AI. - It's helpful for me to use AI to get a base understanding of things that are ancillary to my practice and which arise but which I'm not an expert in (ie, specific tax/regulatory/similar issues). Which I wouldn't use to advise a client, but more so to have an understanding of an issue I know is going to be a topic addressed by tax counsel on a deal call, etc.
Maybe you’re more adept at using AI than me, but I’ve tried many, many times to use ai for these types of use cases and the answers it gives me are rarely correct upon further review. Wrong so often that I pretty much never trust the answer and it just feels like an extra step vs google from 10 years ago.
I’ll ask frequently if there is a certain statutory analog between two states and it will almost always say yes and then cite a statute that is maybe in the neighborhood but really not close to being analogous.
Posted on 7/31/25 at 10:15 pm to arseinclarse
No.
Not at all.
I don't even let clerks do work for me.
Posted on 7/31/25 at 10:27 pm to Teddy Ruxpin
quote:
Its conclusions in my area of law are wrong over half the time and will change if I slightly change the prompt.
Yeah it can be pretty bad. However, it’s an awesome tool if used right.
ChatGPT Premium with o3 on my phone is my starting point for research questions. I usually already know the general answer or at least the wrong answers. Then I go to WestLaw and dig in (or send to associate to do so). I’ve used West’s AI (CoCounsel), and it’s nothing special except it accesses the entire West library.
My colleague in a case we were about to try made podcast eps from depo transcripts including exhibits to prep. It was helpful.
I had CoCounsel make a hearing outline for me, and it got a lot wrong but was still a useful tool to prep.
I’ve used ChatGPT to customize agreements and even provide some good templates. Helps when you don’t have a good go-by (a lot of times my go-bys are way too long and involved and customized).
I’ve found that my associates who really know how to use AI give me great work product. They aren’t idiots who just rely on it for finished product. I think they also know I would know if they did and that they’re going to be expendable if they don’t use their brains.
Eta: the most annoying aspect of AI in law for me so far: clients who think they’re lawyers because they gave AI some terrible prompts and got terrible results while staying at a Holiday Inn Express.
This post was edited on 7/31/25 at 10:38 pm
Posted on 8/1/25 at 6:49 am to NOLATiger163
quote:
Do you find the most amazing part: that the lawyer claimed it was okay to use Google and ChatGPT and Copilot to check each other, instead of Westlaw or Lexis or even Fastcase? Or that she doubled down on things after being questioned? Or what?
I enjoyed her argument attacking defense counsel’s billing in connection with the motion. “Your honor, it should have only taken three minutes to put the citations into google, not 7.3 hours!”
I encourage her to take up the appeal as she suggested.
Posted on 8/1/25 at 6:54 am to arseinclarse
Searching cases, this can be HUGE, but AI cannot stand in front of a jury.
Posted on 8/1/25 at 6:55 am to arseinclarse
Lawyer LinkedIn has been pretty quiet about using it for litigation after a long period of “use AI or you’re a dinosaur!!1”. Too many dummies are getting popped with fake cites. I haven’t read into it much but I’d imagine there are privilege concerns about dumping sensitive records into these apps too.
This post was edited on 8/1/25 at 6:56 am
Posted on 8/1/25 at 6:56 am to arseinclarse
I pretend to be a lawyer using AI and most can’t tell lol.
Posted on 8/1/25 at 7:00 am to arseinclarse
I'm more of a Google / AI Physician
I can diagnose anything in just a few clicks
Posted on 8/1/25 at 7:14 am to arseinclarse
Only as a basic, frontline search. “Find me all of the statutes that reference ABC.” “Find me AG opinions on XYZ admin issue.” Saves me a few minutes in gathering where I want to start, but I utterly ignore what it claims those statutes or opinions say, as it's practically always wrong.