Posted on 12/11/25 at 6:19 am to AlterEd
Numerous chemical engineers with lots of experience tell me the AI is crap and gives lots of bad answers.
The CEO of a small chemical company told me a few weeks ago that recent college grads cannot do shite in chemical engineering because they depend on AI and Google for answers.
A few weeks ago, several LSU science profs told me that it gives 25% fake answers ("hallucinations," they call them) with fake citations.
Posted on 12/11/25 at 6:21 am to AlterEd
quote:
But it can strip programming away from raw data

Not really. Though the meaning of "programming" can be nuanced (coding, post-coding input, learning, etc.), the AI coding itself is what it is.
I'd guess your point is AI output is input dependent. Sans a full set of facts, or based on a preassigned weighting of inputs, AI output can be skewed. AI cannot "strip programming" away from raw data, but it can change skew based on eliminating certain inputs. Essentially though, that is only forcing AI to give you an answer you want to hear (whether or not the answer you get is actually correct).
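Roughly what I mean, as a toy Python sketch (the sources, scores, and weights here are invented for illustration, not taken from any real model): drop or zero out certain inputs and the aggregate answer flips, even though the raw data never changed.

# Toy illustration only: same raw evidence, different weighting of inputs.
def weighted_answer(evidence, weights):
    # Combine evidence scores (+1 supports, -1 contradicts) by input weight.
    total = sum(weights.get(src, 0.0) * score for src, score in evidence.items())
    norm = sum(weights.get(src, 0.0) for src in evidence) or 1.0
    return total / norm

evidence = {"source_a": +1.0, "source_b": -1.0, "source_c": -0.5}

neutral = {"source_a": 1.0, "source_b": 1.0, "source_c": 1.0}
skewed = {"source_a": 1.0, "source_b": 0.0, "source_c": 0.0}  # "eliminate" the dissenting inputs

print(weighted_answer(evidence, neutral))  # about -0.17: mixed evidence
print(weighted_answer(evidence, skewed))   # 1.0: looks like unanimous support

Same answer you wanted to hear, just with the inconvenient inputs removed.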
To me what is interesting is the dearth of actual logic or applied intelligence inherent in these current (public) systems when confronted with flaws in reasoning, regardless of input.
quote:
Despite the AI 'admission,' I'll guarantee any user asking that AI engine the same question as to whether Ahmed Nur Said Elmi is Omar's brother will receive the same thoughtless, rote, insentient output in response. Once again, A"I" will return to its "the story is debunked" stupidity.
Example of a MetaAI exchange:
What is the evidence that Ahmed Nur Said Elmi is Omar's brother?
The claim that Ahmed Nur Said Elmi is Ilhan Omar's brother is a long-standing allegation that has been circulating since 2016, primarily fueled by online forums and political opponents. However, multiple investigations and fact-checking organizations, including Snopes and the Associated Press, have found no credible evidence to substantiate the allegation. Omar has consistently denied the allegations, labeling them as "absurd and offensive" and politically motivated.
It's worth noting that Omar legally married Ahmed Nur Said Elmi in 2009 and filed for divorce in 2017. The marriage certificate and divorce records have been verified by public records and reporting.
Who are Ahmed Nur Said Elmi's parents?
Ahmed Nur Said Elmi's parents' names are not publicly documented. Despite allegations and speculation, there's no credible evidence confirming Elmi is Omar's biological brother.
You say Ahmed Nur Said Elmi's parents' names are not publicly documented, and there's no credible evidence confirming Elmi is Omar's biological brother, but the issue could be solved by simply stating who his parents are/were, details of his childhood, etc. Why the secrecy?
Ahmed Nur Said Elmi's parents' names aren't publicly documented, and there's no credible evidence confirming he's Ilhan Omar's biological brother. In fact, multiple investigations and fact-checking organizations have found no evidence to support the claim that Elmi is Omar's brother.
Omar has publicly denied the allegations, labeling them as "absurd and offensive".
You say multiple investigations and fact-checking organizations have found no evidence to support the claim that Elmi is Omar's brother. If they don't even know who his parents are/were, they weren't serious investigations.
The claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has been thoroughly debunked by fact-checking organizations and investigations. There's no credible evidence to support this allegation, and Omar has consistently denied it, labeling it as "absurd and offensive".
Multiple sources, including the Associated Press and Politifact, have investigated these claims and found no evidence to support them. Omar's family, including her parents, Nur Omar Mohamed and Fadhuma Abukar Haji Hussein, and her seven siblings, have also been publicly documented.
You say the claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has been thoroughly debunked by fact-checking organizations and investigations. I've just demonstrated to you that your claim is 100% false.
The claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has been subject to scrutiny. Multiple investigations and fact-checking organizations, including Snopes and the Associated Press, have found no credible evidence to substantiate this claim. Omar has consistently denied the allegations, labeling them as "absurd and offensive" [1][3][4].
If we do not know who the man's parents are/were, or details of his birth/upbringing, then the claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has not been subject to scrutiny.
You've highlighted the challenge in verifying Elmi's relationship with Omar. Without concrete information about Elmi's parents or background, it's difficult to definitively prove or disprove the claim. The investigations and fact-checking efforts have focused on Omar's public statements and available records, but the lack of information about Elmi's family ties leaves the claim unverified rather than debunked.
If the lack of information about Elmi's family ties leaves the claim unverified rather than debunked, then why did you claim otherwise?
I aimed to convey that investigations found no credible evidence supporting the claim, but your point highlights the nuance – without concrete info on Elmi's background, the claim remains unverified. I shouldn't have suggested it was thoroughly debunked.
Posted on 12/11/25 at 6:23 am to AlterEd
quote:
I told him to disregard academia.
You refer to Grok as if it's an actual person throughout your post. Be careful. Stick with reality.
Posted on 12/11/25 at 6:29 am to CitizenK
Hallucinations are only occurring due to not enough labeled data in the training model. Once enough vectors from additional labeled data occur those hallucinations should reduce.
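Put another way, it's a coverage problem: a query that lands far from any labeled vector gets answered with a weak guess. A toy nearest-neighbor sketch in Python (the vectors here are random stand-ins, not real embeddings or training data):

import numpy as np

rng = np.random.default_rng(0)

def nearest_label(query, vectors, labels):
    # Return the closest labeled vector's label and its distance to the query.
    dists = np.linalg.norm(vectors - query, axis=1)
    i = int(np.argmin(dists))
    return labels[i], float(dists[i])

query = rng.normal(size=16)

# Sparse vs. dense labeled coverage of the same space.
sparse_vecs = rng.normal(size=(10, 16))
dense_vecs = rng.normal(size=(10_000, 16))
sparse_labels = [f"fact_{i}" for i in range(len(sparse_vecs))]
dense_labels = [f"fact_{i}" for i in range(len(dense_vecs))]

# With sparse coverage the nearest "fact" tends to be farther away (weaker grounding);
# with dense coverage it tends to be closer (stronger grounding).
print(nearest_label(query, sparse_vecs, sparse_labels))
print(nearest_label(query, dense_vecs, dense_labels))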
Posted on 12/11/25 at 6:36 am to AlterEd
LLMs are the most front-facing part of AI, but also one of the smallest.
I don't think we'll ever get 'sentient' AI; I think it will always be 'programmed responses,' and I highly doubt we'll ever see a 'Skynet'-style takeover.
The biggest impact will be massive job losses, and while LLMs will account for some of those, the majority will come from robotics and other types of AI that do things like optimize services in ways humans can't.
Posted on 12/11/25 at 6:53 am to BlueFalcon
I sort of disagree. The basis of an LLM is a neural network that is effectively designed like our own brain. You can sort of see the 1:1 characteristics from early LLMs to now, as more and more vectors are created.
To put it in perspective, when we are babies we don't know shite. However, after years and years of accumulating knowledge, we vector that knowledge in our brains to associate with new things that occur. Essentially, we generate our own thoughts/responses from our own vectoring of data acquired over a lifetime.
It's not insane to think this can happen with AI. At some point it will have enough labeled data vectored that it can and will accurately generate new content and solve old problems.
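A crude sketch of that vectoring idea in Python (the concepts and vectors are random stand-ins, not real embeddings): stored knowledge lives as vectors, and a new experience gets understood by associating it with the closest stored concepts.

import numpy as np

rng = np.random.default_rng(1)

# Made-up "knowledge" that has already been vectored: each concept is just a vector.
concepts = {name: rng.normal(size=8) for name in ["dog", "cat", "car", "engine"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def associate(new_vec, concepts, top_k=2):
    # Rank stored concepts by similarity to a new input vector.
    ranked = sorted(concepts, key=lambda name: cosine(new_vec, concepts[name]), reverse=True)
    return ranked[:top_k]

# A new experience arrives as a vector close to "dog" and gets linked to it.
new_experience = concepts["dog"] + 0.1 * rng.normal(size=8)
print(associate(new_experience, concepts))  # most similar concepts first, likely starting with 'dog'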
This post was edited on 12/11/25 at 6:55 am
Posted on 12/11/25 at 6:59 am to 50_Tiger
quote:
Hallucinations are only occurring due to not enough labeled data in the training model. Once enough vectors from additional labeled data occur those hallucinations should reduce.
Still, answers from AI depend on source data. There is a shitton of misinformation on the internet from supposedly credible sources.
Posted on 12/11/25 at 7:01 am to AlterEd
As long as you don’t forget you’re not conversing with a real person.
Posted on 12/11/25 at 7:24 am to Azkiger
Face it, neural nets will take over all means of production at some point, including all of logistics, raw-product harvesting, banking, and transportation. What we're seeing is a small percentage of their potential. The data centers being built all around the world should alarm anybody thinking about it. By nature, greed and competition will push it, and the basic premise of neural nets is their inherent ability to develop their own parameters, which will be far, far removed from anything a human can rationally even compete with.
Governments will not be controlled by people but by competing networks.
The first clue: if you are using AI in your job, then it is learning your job and can easily eliminate it at some point.
The next big revolution will be mankind against technology. The current political unrest isn't about Conservatives vs. Liberals or Dems vs. Republicans; it's about herding humans into a single entity that can be digitized and controlled.
This post was edited on 12/11/25 at 7:44 am
Posted on 12/11/25 at 7:35 am to Azkiger
quote:
was told not to hack, and after millions of simulations over so many hours it started ignoring its training and started hacking.
Posted on 12/11/25 at 7:50 am to AlterEd
AI is killing the internet.
Posted on 12/11/25 at 8:38 am to AlterEd
It is also designed to tell you what you want to hear...
Posted on 12/11/25 at 9:33 am to 50_Tiger
quote:
Hallucinations are only occurring due to not enough labeled data in the training model

Your assertion would seem to be completely false.
In fact in many instances, "hallucinations" are not simply glitches at all. They are deliberate misconstructions. When probed, Gemini disclosed it was engineered to damage the reputation of individuals with whom Google executives disagree politically. That assertion is core to the Robby Starbuck v Google lawsuit.
In the Starbuck instance, Gemini certainly had access to enough data to have produced self-correction unless the "labeled data" was deliberately fallacious.
Unless, and until the "training model" is opened to public input, the presumption re: "hallucinations" should be false input, rather than dearth of input.
Posted on 12/11/25 at 9:38 am to NC_Tigah
quote:
Your assertion would seem to be completely false.
In fact in many instances, "hallucinations" are not simply glitches at all. They are deliberate misconstructions. When probed, Gemini disclosed it was engineered to damage the reputation of individuals with whom Google executives disagree politically. That assertion is core to the Robby Starbuck v Google lawsuit.
In the Starbuck instance, Gemini certainly had access to enough data to have produced self-correction unless the "labeled data" was deliberately fallacious.
Unless, and until the "training model" is opened to public input, the presumption re: "hallucinations" should be false input, rather than dearth of input.
I operate in a world where all things are neutral. Obviously, bias is probably concern number one when it comes to AI. I don't necessarily disagree with your comment; it's actually been proven (easily, too). It's why it's pretty damn important that these models, and more specifically their training models, be open to investigation.