
re: A Lesson About AI

Posted by bodask42
Tuscaloosa
Member since Aug 2009
2496 posts
Posted on 12/11/25 at 3:56 am to
This thread/OP is the definition of AI Psychosis.
Posted by CitizenK
BR
Member since Aug 2019
13894 posts
Posted on 12/11/25 at 6:19 am to
Numerous chemical engineers with lots of experience tell me the AI is crap and gives lots of bad answers.

The CEO of a small chemical company told me a few weeks ago that recent college grads cannot do shite in chemical engineering because they depend on AI and Google for answers.

A few weeks ago several LSU profs in the sciences told me that it gives 25% fake answers, which they call hallucinations, complete with fake citations.
Posted by NC_Tigah
Make Orwell Fiction Again
Member since Sep 2003
135615 posts
Posted on 12/11/25 at 6:21 am to
quote:

But it can strip programming away from raw data
Not really.

Though the meaning of "programming" can be nuanced (coding, post-coding input, learning, etc.), the AI coding itself is what it is.

I'd guess your point is AI output is input dependent. Sans a full set of facts, or based on a preassigned weighting of inputs, AI output can be skewed. AI cannot "strip programming" away from raw data, but it can change skew based on eliminating certain inputs. Essentially though, that is only forcing AI to give you an answer you want to hear (whether or not the answer you get is actually correct).
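As a minimal sketch of that input-dependence point (toy numbers and a hypothetical weighted_answer function, not how any real model works), dropping or re-weighting inputs flips the "answer" while the code itself never changes:

# Toy illustration, not any real AI system: how dropping or re-weighting
# inputs skews an aggregate "answer" without touching the underlying code.
def weighted_answer(evidence, weights):
    """Score in [0, 1] from weighted evidence; above 0.5 reads as 'yes'."""
    total = sum(weights.get(k, 1.0) for k in evidence)
    return sum(weights.get(k, 1.0) * v for k, v in evidence.items()) / total

# Hypothetical evidence items scored from 0 (contradicts) to 1 (supports).
evidence = {"record_a": 0.9, "record_b": 0.8, "report_c": 0.1, "report_d": 0.2}

print(weighted_answer(evidence, {}))                                  # ~0.50: all inputs, equal weight
print(weighted_answer({k: v for k, v in evidence.items()
                       if k.startswith("record")}, {}))               # ~0.85: two inputs eliminated
print(weighted_answer(evidence, {"report_c": 5.0, "report_d": 5.0}))  # ~0.27: inputs re-weighted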

To me what is interesting is the dearth of actual logic or applied intelligence inherent in these current (public) systems when confronted with flaws in reasoning, regardless of input.
quote:

Example of a MetaAI exchange:

What is the evidence that Ahmed Nur Said Elmi is Omar's brother?

The claim that Ahmed Nur Said Elmi is Ilhan Omar's brother is a long-standing allegation that has been circulating since 2016, primarily fueled by online forums and political opponents. However, multiple investigations and fact-checking organizations, including Snopes and the Associated Press, have found no credible evidence to substantiate the allegation. Omar has consistently denied the allegations, labeling them as "absurd and offensive" and politically motivated.

It's worth noting that Omar legally married Ahmed Nur Said Elmi in 2009 and filed for divorce in 2017. The marriage certificate and divorce records have been verified by public records and reporting.

Who are Ahmed Nur Said Elmi's parents?

Ahmed Nur Said Elmi's parents' names are not publicly documented. Despite allegations and speculation, there's no credible evidence confirming Elmi is Omar's biological brother.

You say Ahmed Nur Said Elmi's parents' names are not publicly documented, and there's no credible evidence confirming Elmi is Omar's biological brother, but the issue could be solved by simply stating who his parents are/were, details of his childhood, etc. Why the secrecy?

Ahmed Nur Said Elmi's parents' names aren't publicly documented, and there's no credible evidence confirming he's Ilhan Omar's biological brother. In fact, multiple investigations and fact-checking organizations have found no evidence to support the claim that Elmi is Omar's brother.

Omar has publicly denied the allegations, labeling them as "absurd and offensive."

You say multiple investigations and fact-checking organizations have found no evidence to support the claim that Elmi is Omar's brother. If they don't even know who his parents are/were, they weren't serious investigations.

The claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has been thoroughly debunked by fact-checking organizations and investigations. There's no credible evidence to support this allegation, and Omar has consistently denied it, labeling it as "absurd and offensive."

Multiple sources, including the Associated Press and Politifact, have investigated these claims and found no evidence to support them. Omar's family, including her parents, Nur Omar Mohamed and Fadhuma Abukar Haji Hussein, and her seven siblings, have also been publicly documented.

You say the claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has been thoroughly debunked by fact-checking organizations and investigations. I've just demonstrated to you that your claim is 100% false.

The claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has been subject to scrutiny. Multiple investigations and fact-checking organizations, including Snopes and the Associated Press, have found no credible evidence to substantiate this claim. Omar has consistently denied the allegations, labeling them as "absurd and offensive" [1][3][4].

If we do not know who the man's parents are/were, or details of his birth/upbringing, then the claim that Ahmed Nur Said Elmi is Ilhan Omar's brother has not been subject to scrutiny.

You've highlighted the challenge in verifying Elmi's relationship with Omar. Without concrete information about Elmi's parents or background, it's difficult to definitively prove or disprove the claim. The investigations and fact-checking efforts have focused on Omar's public statements and available records, but the lack of information about Elmi's family ties leaves the claim unverified rather than debunked.

If the lack of information about Elmi's family ties leaves the claim unverified rather than debunked, then why did you claim otherwise?

I aimed to convey that investigations found no credible evidence supporting the claim, but your point highlights the nuance – without concrete info on Elmi's background, the claim remains unverified. I shouldn't have suggested it was thoroughly debunked.
Despite the AI 'admission,' I'll guarantee any user asking that AI engine the same question as to whether Ahmed Nur Said Elmi is Omar's brother will receive the same thoughtless, rote, insentient output in response. Once again, A"I" will return to its "the story is debunked" stupidity.
Posted by HagaDaga
Member since Oct 2020
5931 posts
Posted on 12/11/25 at 6:23 am to
quote:

I told him to disregard academia.

You refer to grok as if it's an actual person throughout your post. Be careful. Stick with reality.
Posted by 50_Tiger
Arlington TX
Member since Jan 2016
42969 posts
Posted on 12/11/25 at 6:29 am to
Hallucinations are only occurring due to not enough labeled data in the training model. Once enough vectors from additional labeled data occur those hallucinations should reduce.
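If that claim holds, here's a toy way to picture it (my own sketch with made-up 2-D vectors, not how any production model actually works): a query gets matched to its nearest labeled vector, and with sparse coverage the nearest match is far away, yet a naive lookup still returns it confidently, which is roughly the flavor of a hallucination.

import math

# Toy nearest-neighbor lookup over labeled vectors (assumed 2-D toy data).
def nearest(query, labeled):
    label, vec = min(labeled.items(), key=lambda kv: math.dist(query, kv[1]))
    return label, round(math.dist(query, vec), 2)

query = (0.9, 0.1)

sparse = {"cat": (0.0, 1.0)}                    # barely any labeled data
dense = {"cat": (0.0, 1.0), "dog": (0.2, 0.8),
         "car": (1.0, 0.0)}                     # better coverage

print(nearest(query, sparse))  # ('cat', 1.27) -- far off, but returned anyway
print(nearest(query, dense))   # ('car', 0.14) -- close match, sensible answer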
Posted by BlueFalcon
Aberdeen Scotland
Member since Dec 2011
3577 posts
Posted on 12/11/25 at 6:36 am to
LLMs are the most front-facing part of AI, but also one of the smallest.

I don't think we'll ever get 'sentient' AI; I think it will always be 'programmed responses,' and I highly doubt we'll ever see a 'Skynet'-style takeover.

The biggest impact will be massive job losses, and while LLMs will account for some of those, the majority will come from robotics and other types of AI that do things like optimize services in ways humans can't.
Posted by 50_Tiger
Arlington TX
Member since Jan 2016
42969 posts
Posted on 12/11/25 at 6:53 am to
I sort of disagree. The basis of an LLM is a neural network that is effectively designed like our own brain. You can sort of see the 1:1 characteristics carrying over from early LLMs to now as more and more vectors are created.

To put it in perspective, when we are babies we don't know shite. However, after years and years of accumulating knowledge, we vector that knowledge in our brains to associate with new things that occur. Essentially, we generate our own thoughts/responses from the vectoring we build from data acquired over a lifetime.

It's not insane to think this can happen to AI too. At some point it will have enough labeled data vectored that it can and will accurately generate new content and solve old problems.
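A rough sketch of that "vectoring over a lifetime" analogy (toy hand-made vectors, not a real embedding model): knowledge accumulates as labeled vectors, and something never seen before gets associated with whatever prior experience it sits closest to.

import math

# Toy memory of labeled vectors; new input is associated by cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

memory = {}                               # a baby's memory: knows nothing yet
memory["hot stove"] = (0.9, 0.1, 0.0)     # experiences get vectored in over time
memory["ice cream"] = (0.0, 0.2, 0.9)
memory["campfire"] = (0.6, 0.5, 0.2)

new_input = (0.85, 0.2, 0.05)             # something never encountered before
best = max(memory, key=lambda k: cosine(new_input, memory[k]))
print(best)                               # 'hot stove' -- closest prior experience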
This post was edited on 12/11/25 at 6:55 am
Posted by CitizenK
BR
Member since Aug 2019
13894 posts
Posted on 12/11/25 at 6:59 am to
quote:

Hallucinations are only occurring due to not enough labeled data in the training model. Once enough vectors from additional labeled data occur those hallucinations should reduce.


Still, answers from AI depend on source data. There is a shitton of misinformation on the internet from supposedly credible sources.
Posted by UptownJoeBrown
Baton Rouge
Member since Jul 2024
7380 posts
Posted on 12/11/25 at 7:01 am to
As long as you don’t forget you’re not conversing with a real person.
Posted by LSU90
Baton Rouge
Member since Nov 2012
695 posts
Posted on 12/11/25 at 7:24 am to
Face it, neural nets will take over all means of production at some point, including all of logistics, raw-product harvesting, banking, and transportation. What we're seeing is a small percentage of their potential. The data centers being built all around the world should alarm anybody thinking about it. By nature, greed and competition will push it, and the basic premise of neural nets is their inherent ability to develop their own parameters, which will be far, far removed from anything a human can rationally even compete with.

Governments will not be controlled by people but by competing networks.
The first clue: if you are using AI in your job, then it is learning your job and can easily eliminate it at some point.

The next big revolution will be mankind against technology. The current political unrest isn't about Conservatives vs. Liberals or Dems vs. Republicans; it's about herding humans into a single entity that can be digitized and controlled.
This post was edited on 12/11/25 at 7:44 am
Posted by Bigdawgb
Member since Oct 2023
3389 posts
Posted on 12/11/25 at 7:35 am to
quote:

was told not to hack, and after millions of simulations over so many hours it started ignoring its training and started hacking.


yikes
Posted by Mike da Tigah
Bravo Romeo Lima Alpha
Member since Feb 2005
61368 posts
Posted on 12/11/25 at 7:50 am to
AI is killing the internet.
Posted by Ozarkshillbilly
Missouri Ozarks
Member since Apr 2025
416 posts
Posted on 12/11/25 at 8:38 am to
It is also designed to tell you what you want to hear...
Posted by NC_Tigah
Make Orwell Fiction Again
Member since Sep 2003
135615 posts
Posted on 12/11/25 at 9:33 am to
quote:

Hallucinations are only occurring due to not enough labeled data in the training model
Your assertion would seem to be completely false.

In fact in many instances, "hallucinations" are not simply glitches at all. They are deliberate misconstructions. When probed, Gemini disclosed it was engineered to damage the reputation of individuals with whom Google executives disagree politically. That assertion is core to the Robby Starbuck v Google lawsuit.

In the Starbuck instance, Gemini certainly had access to enough data to have produced self-correction unless the "labeled data" was deliberately fallacious.

Unless, and until the "training model" is opened to public input, the presumption re: "hallucinations" should be false input, rather than dearth of input.
Posted by 50_Tiger
Arlington TX
Member since Jan 2016
42969 posts
Posted on 12/11/25 at 9:38 am to
quote:

Your assertion would seem to be completely false.

In fact in many instances, "hallucinations" are not simply glitches at all. They are deliberate misconstructions. When probed, Gemini disclosed it was engineered to damage the reputation of individuals with whom Google executives disagree politically. That assertion is core to the Robby Starbuck v Google lawsuit.

In the Starbuck instance, Gemini certainly had access to enough data to have produced self-correction unless the "labeled data" was deliberately fallacious.

Unless, and until the "training model" is opened to public input, the presumption re: "hallucinations" should be false input, rather than dearth of input.



I operate in a world where all things are assumed neutral. Obviously bias is probably concern number one when it comes to AI. I don't necessarily disagree with your comment; it's actually been proven (easily, too). It's why it's pretty damn important for these models, and more specifically their training models, to be open to investigation.