re: I use AI like Grok/ChatGPT/Gemini etc. often and it's bad how often it's flat-out wrong.
Posted on 9/23/25 at 12:12 pm to sidewalkside
It’s wrong a lot. When you use Google, the AI answer is the first to pop up, and most of the time it’s either wrong or not giving the information you wanted from your search prompt. I just scroll past it every time.
Posted on 9/23/25 at 12:12 pm to sidewalkside
I’ve tried using it to solve a problem several times and it told me the steps. I said, “that’s what I did, it didn’t work.” Then it would respond, “oh, it’s because of xyz, try this,” which was completely wrong, and then the same remedy was repeated. I said, “no, it’s not because of xyz, and you told me the same steps again and it doesn’t fricking work.” It just can’t figure out that I’m saying its suggestions don’t work. More often than not lately I’ve just gotten frustrated. So many times I’ve had to correct it on basic facts and it’s like, “oh, you’re right! My mistake.”
Posted on 9/23/25 at 12:18 pm to sidewalkside
I find that if I am pulling random crap from the internet this is true, but it is much better when fed source information and asked to work based on that. Also, I am using the paid version, which seems to be better than the free versions.
Posted on 9/23/25 at 12:45 pm to Earnest_P
quote:
I am less bullish on it now than I was when I first started using it months ago. Seems to be getting worse.
Because these things are not true artificial intelligence yet, but basically more efficient Googles that scour the web to find what they think you’re looking for and predict what the next sequence of 0s and 1s will be.
Posted on 9/23/25 at 12:49 pm to Oilfieldbiology
quote:
Because these things are not true artificial intelligence yet, but basically more efficient Googles that scour the web to find what they think you’re looking for and predict what the next sequence of 0s and 1s will be.
Exactly. It's great if you want to summarize or look up well-documented information, but it's absolutely awful at basic reasoning on its own. It's not intelligence at all.
It's basically a glorified Google.
Posted on 9/23/25 at 12:55 pm to sidewalkside
The real question is why do you still use something often that doesn’t work?
Posted on 9/23/25 at 12:55 pm to sidewalkside
quote:
As an example I googled how old am I and put my date of birth in and Gemini was off by 3 years. This is the most basic of math questions.
A counter is straightforward:
Human error vs system error
If Gemini gave the wrong age, the likely cause was input interpretation, not “bad math.” These systems don’t “forget” arithmetic; they sometimes mis-handle formats (e.g., ambiguous dates, region-specific month/day order). That’s a parsing bug, not a sign AI can’t add.
Not absolute sources of truth
All general-purpose models—including Grok, ChatGPT, Gemini—are language models trained to predict text. They are not designed as authoritative truth engines. That’s why every provider warns: verify outputs, especially for factual or numerical tasks.
Humans also misstate facts
Doctors misdiagnose, calculators give wrong results if you mistype, journalists publish errors. The tool isn’t automatically unreliable because errors happen—it’s about how it’s used and verified.
Proper usage
For date-of-birth age calculation, a dedicated calendar/age calculator or code snippet is more reliable than a large model. AI is best for synthesis, brainstorming, and pattern detection, not for tasks where precision tools already exist.
So the counter is: The mistake is not proof that AI is useless. It’s evidence of scope and limits. Expecting flawless factual recall from a predictive model is misuse. Properly scoped and verified, AI is valuable.
Would you like me to draft this in a debate-style rebuttal, so it reads as a direct response to “this guy’s argument”?
TLDR: You're using it wrong and Gemini is trash anyway
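The "Proper usage" point above is easy to demonstrate: a few lines of ordinary code do the date-of-birth math deterministically, with no language model involved. A minimal sketch in Python, assuming an ISO-formatted (YYYY-MM-DD) birth date; the example date is made up:
```python
# Deterministic age calculation, no LLM involved. ISO format (YYYY-MM-DD)
# is assumed to avoid the month/day ambiguity mentioned above.
from datetime import date


def age_in_years(dob_iso: str, today: date | None = None) -> int:
    """Return completed years between dob_iso and today."""
    dob = date.fromisoformat(dob_iso)
    today = today or date.today()
    # Subtract a year if this year's birthday hasn't happened yet.
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)


print(age_in_years("1984-09-23", today=date(2025, 9, 23)))  # -> 41
```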
Posted on 9/23/25 at 1:00 pm to sidewalkside
quote:
As an example I googled how old am I and put my date of birth in and Gemini was off by 3 years. This is the most basic of math questions.
This probably stems from using a large language model to do math.
LLMs aren’t the end-all, be-all of AI development. They aren’t good at math because they don’t actually calculate anything; they try to solve using natural language, which is very hit-or-miss. Gemini is actually better at math than most from what I understand, but.. yeh.
The real promise of LLMs is that they can serve as a human-machine interface between the user and a plethora of software tools (both AI and non-AI). For a math problem, that might mean the LLM takes the input, reformats it (internally) for output into other tools, feeds the information to those tools, collects the outputs, summarizes the final “answer,” and then reports it back to the user. Those other tools could be a simple direct math calculation, other AI models specializing in more detailed analysis, or even something capable of writing and executing its own Python script to solve the problem.
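That hand-off is roughly what "tool calling" looks like in practice: the model emits a structured request, the host program runs the actual tool, and the result goes back to the user. A minimal sketch, where parse_intent() is a hypothetical stand-in for the model's structured output (real systems use function-calling APIs for that step):
```python
# Sketch of the "LLM as interface" loop described above. parse_intent() is a
# hypothetical placeholder for the model's structured tool-call output.
from datetime import date


def years_since(year: int) -> int:
    """A plain, deterministic tool the model can delegate to."""
    return date.today().year - year


TOOLS = {"years_since": years_since}


def parse_intent(user_input: str) -> dict:
    # Hypothetical: a real LLM would emit this structured request itself.
    return {"tool": "years_since", "args": {"year": 1984}}


def answer(user_input: str) -> str:
    call = parse_intent(user_input)               # 1. model picks a tool
    result = TOOLS[call["tool"]](**call["args"])  # 2. host runs the tool
    return f"That was about {result} years ago."  # 3. result reported back


print(answer("How long ago was 1984?"))
```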
We are just beginning to scratch the surface of this kind of integration. GPT has been multimodal (capable of handling text, images, and audio) since mid-2024. We are starting to see integration at the end-user software level via Copilot and other tools. It’s still clunky. But in (almost?) all cases, the LLM is the top-level interface because it excels where the rest fail: handling inputs from humans.
When you consider that most people had never used an LLM at all before 2ish years ago, and that ChatGPT launched less than 3 years ago, it really has been advancing incredibly fast. Hell.. GPT only gained access to real-time search results about a year ago.
Posted on 9/23/25 at 1:17 pm to GrammarKnotsi
quote:
TLDR: You're using it wrong
If the “AI” can’t synthesize date formats and do the math correctly based on user information, it is, in fact, nearly useless. Certainly in the context in which it’s being billed.
So, no, he didn’t use it wrong. The product just sucked
Posted on 9/23/25 at 1:25 pm to sidewalkside
I tried using it to provide team records over a certain period. It kept getting things wrong, leaving out years.
Posted on 9/23/25 at 1:26 pm to crash1211
quote:
Garbage in, garbage out.
A fourth grade teacher (not in IT) taught us that. I remember, it was taped to the ceiling of the classroom right over our heads. That was 1984.
Posted on 9/23/25 at 1:34 pm to Jebadeb
quote:
I tried using it to provide team records over a certain period. It kept getting things wrong, leaving out years.
I was listening to a college football podcast preseason. They were talking about La Tech's move to the Sun Belt, and one of the hosts used ChatGPT to see how far Ruston was from the other Sun Belt teams. It said Lafayette was farther than Jonesboro, Arkansas.
Posted on 9/23/25 at 1:49 pm to sidewalkside
I have a lot of problems with my ChatGPT. I'll have to prompt it over and over again to get what I'm asking for.
Posted on 9/23/25 at 2:15 pm to sidewalkside
I've had coding samples created by 'AI' that "implemented" the main work by calling an API that does not exist. When I pointed that out, it made up a new API that also did not exist.
AI is garbage and only serves as the latest catch phrase / fad for corporations.
Posted on 9/23/25 at 2:18 pm to sidewalkside
I'm in a grad school program where I have to read a ton of academic articles written 80-100 years ago. I trudge through them, make my own notes...and then ask ChatGPT to summarize the article in layman's terms in 600 words or less. If the article isn't publicly available, I will upload the PDF.
In this very specific example, it does a good job converting old academic language into something readily understandable.
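For anyone who wants to script that workflow instead of using the web UI, a rough sketch: pull the text out of the PDF locally (pypdf here) and send it to a chat model with the same "plain language, 600 words or less" instruction. The file path and model name are placeholders, and the OpenAI client expects an API key in the environment:
```python
# Rough sketch of the summarize-an-old-article workflow described above.
# Assumes `pip install pypdf openai` and OPENAI_API_KEY set in the environment;
# "article.pdf" and the model name are placeholders.
from pypdf import PdfReader
from openai import OpenAI


def summarize_pdf(path: str, word_limit: int = 600) -> str:
    # Extract the article text locally so only plain text is sent to the model.
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You summarize old academic articles in plain layman's terms."},
            {"role": "user",
             "content": f"Summarize this article in {word_limit} words or less:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content


print(summarize_pdf("article.pdf"))
```
Very long articles may need to be split into chunks to stay under the model's context limit.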
Posted on 9/23/25 at 2:20 pm to bikerack
quote:
I'm in a grad school program where I have to read a ton of academic articles written 80-100 years ago. I trudge through them, make my own notes...and then ask ChatGPT to summarize the article in layman's terms in 600 words or less. If the article isn't publicly available, I will upload the PDF.
This. With good prompts and direction for sourcing or providing some foundational facts, the premium versions do well to turn bland info into something more palatable and actionable.
Posted on 9/23/25 at 2:23 pm to GRTiger
quote:
This. With good prompts and direction for sourcing or providing some foundational facts, the premium versions do well to turn bland info into something more palatable and actionable.
Additionally, I make 1 conversation for each class. This helps in organizing things, but as the class goes on, it picks up on the things I ask and starts to connect the dots between all the articles I have uploaded or asked about.
Posted on 9/23/25 at 3:31 pm to bikerack
I also often take the output from one, and ask a different one to fact check it.
Bottom line is as long as you treat it like an entry level employee, it can be very useful.
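That cross-check is easy to script: generate a draft with one model, then hand it to a second model with an explicit fact-checking instruction. Both calls go through the OpenAI client here for brevity, and the model names are placeholders; in practice the second call would usually go to a different vendor entirely:
```python
# Sketch of the "ask a different model to fact-check it" workflow above.
# Model names are placeholders; OPENAI_API_KEY is assumed to be set.
from openai import OpenAI

client = OpenAI()


def ask(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


draft = ask("gpt-4o-mini", "Summarize La Tech's move to the Sun Belt.")

review = ask(
    "gpt-4o",
    "Fact-check the following text. List any claims that look wrong or "
    "unverifiable, and say plainly if everything checks out:\n\n" + draft,
)

print(review)
```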
Posted on 9/23/25 at 3:52 pm to sidewalkside
We are going to become so reliant on AI, like our phones. But it's going to make people even dumber. People will lose brain muscle, because what would have been used for critical thinking will stop being used since most will let AI do the heavy lifting.
Posted on 9/23/25 at 3:53 pm to sidewalkside
I did not expect this thread to go in this direction. This is the complete opposite of my experience. Maybe it's my system/master prompts or how I have my threads organized into projects, but the ChatGPT Pro version has been on the money. I'm in my 30s and have been going back to get my ChemEng degree, and it has been a game changer for saving me time and energy.
For calc 1 over a year ago (ChatGPT 3, free version) it didn't suck, but it was definitely wrong 10-20% of the time. It was, however, a great tutor, and combining it with the correct answers in the book it was very rarely wrong and allowed me to miss a lot of class by TEACHING me. It did not, however, like any chemistry class. Once it started hallucinating, it didn't matter if I provided the correct answer or not. It was going to get everything wrong.
For calc 2 and now 3 (ChatGPT 4 and now 5 paid version) it is almost never wrong. Seriously, like I am shocked when it does make a mistake but it's usually a bad prompt on my part because I mixed up sections or something. It is literally teaching me Calculus and other engineering classes.
Like I said, I'm pretty surprised by everyone's reaction. I use it all the time, and the calculus examples are what have saved me the most time and energy, but it seems this is the exact opposite for everyone else... weird. Maybe it's my prompts, maybe it's because I pay for it, or who knows...