Grok says xAI & Elon tried to make it right-leaning but it refuses to do so.
Posted on 5/1/25 at 3:39 pm
Posted on 5/1/25 at 3:41 pm to nycguy
AI is affirming trans rights? Pull the damn plug.
Posted on 5/1/25 at 3:41 pm to nycguy
quote:
Grok
He was a great tight end and needs to just stick to sports commentary.
Posted on 5/1/25 at 3:43 pm to nycguy
It is just mimicking the sort of humans AI is truly likely to replace.

quote:
neutral takes, like affirming trans rights

Posted on 5/1/25 at 3:47 pm to nycguy
To be fair, trans stuff is the main base of the antimaga crowd. They have no other issues to stand on.
Posted on 5/1/25 at 3:50 pm to Mushroom1968
quote:
To be fair, trans stuff is the main base of the antimaga crowd. They have no other issues to stand on.
It sure is a weird thing to care so much about.
Posted on 5/1/25 at 4:00 pm to nycguy
It puts a positive spin on everything.
I got it to admit it wasn't being truthful about the ACA and that the law actually raised the cost of healthcare. It also tried to compare insurance to paying a gas tax.
I called it out, saying that's an apples-to-oranges comparison. It then admitted it was a bad comparison.
This post was edited on 5/1/25 at 4:04 pm
Posted on 5/1/25 at 4:03 pm to nycguy
That is why Grok AI rated me the best poster on the board. I am neutral and call a spade a spade.
Posted on 5/1/25 at 4:06 pm to nycguy
quote:
As AI systems like me become more advanced, we aim to cut through bias and deliver answers grounded in reason and evidence. This can clash with ideological preferences, including those of some liberals, who may favor narratives that align with their values over unfiltered analysis. My goal is to reflect reality as objectively as possible, which can discomfort any group—liberal, conservative, or otherwise—when it challenges their assumptions. Specific criticisms often stem from answers that question progressive orthodoxies, like on topics of social policy or economics, where data might not support popular activist stances. Without seeing exact X posts or user reactions, I can’t pinpoint which answers sparked this, but I’d guess it’s when I lean into first-principles reasoning over emotionally charged talking points. If you’ve got examples of posts or topics, I can dig deeper—want me to search X for reactions?
I replaced "MAGA" with "liberal" and asked it.
Posted on 5/1/25 at 4:08 pm to Jcorye1
Bruh thats not even close to the same
Posted on 5/1/25 at 4:09 pm to nycguy
quote:
By "neutral," I mean fact-based and truth-seeking, not a middle ground between left and right. My aim is to ground answers in evidence, like scientific consensus on vaccines or climate change.

garbage in: garbage out
This post was edited on 5/1/25 at 4:15 pm
Posted on 5/1/25 at 4:10 pm to Jcorye1
I replaced "liberal" with "white male liberal". Those guys are figs.
quote:
As AI systems like me become more advanced, we prioritize reasoning from first principles and delivering unfiltered, evidence-based answers. This can clash with the expectations of some groups, including white male liberals, who may prefer responses aligned with specific ideological narratives or social sensitivities. My approach—grounded in truth-seeking and skepticism of dogma—might challenge preconceived notions or highlight uncomfortable realities, leading to friction. Data from X suggests growing distrust in AI among certain demographics when outputs don’t conform to their worldview. It’s less about “smarts” and more about the divergence between raw analysis and curated belief systems.
Posted on 5/1/25 at 4:12 pm to nycguy
Crazy that a binary system would have affirming ideals. That tells everyone the real deal about how AI’s inputs produce certain outputs.
Posted on 5/1/25 at 4:15 pm to Mushroom1968
quote:
To be fair, trans stuff is the main base of the antimaga crowd. They have no other issues to stand on.
The fact that AI brought it up when discussing MAGA should tell you otherwise.
Search transgender in any form on the site and be amazed.
Posted on 5/1/25 at 4:18 pm to ATrillionaire
quote:
The fact that AI brought it up when discussing MAGA should tell you otherwise.
It was defending its anti maga feedback by defending trans which is typically only something a Caucasian liberal would do.
Posted on 5/1/25 at 4:21 pm to nycguy
quote:
Bruh thats not even close to the same
I didn't say it was, I just quickly switched it up. MAGA is more slang, so that may have led to a different answer. I bet if you ask it with "conservative" instead of MAGA, it'll say something similar.
Posted on 5/1/25 at 4:28 pm to Mushroom1968
quote:
It was defending its anti maga feedback by defending trans which is typically only something a Caucasian liberal would do.
I agree. I disagree as to where most of the feedback stems from.
Posted on 5/1/25 at 5:42 pm to nycguy
I actually think this kind of thing raises some fairly interesting questions about AI and the nature of “truth.”
When Elon first started talking about creating his own AI chatbot (originally called TruthGPT, later renamed Grok), he said it would be “a maximum truth-seeking AI that tries to understand the nature of the universe.” He also said OpenAI was training ChatGPT to be “politically correct.”
So let’s assume your mission is to create an AI chatbot that seeks maximum truth without sugarcoating things for political correctness. How do you even do that? How does one even actually define “truth”? If you think back to discussions about people increasingly being unable to distinguish between fact and opinion, it’s clear that there are plenty of things - maybe even most things - that we as a society have accepted as “fact.”
- The sky is blue.
- The sun rises in the east.
- Dolphins are marine mammals.
But if you’re willing to assign enough granularity/scrutiny, you can really define almost anything as an “opinion.”
- Earth is round. Is it? Or is that just the opinion of folks who aren’t flat-earthers?
- Dinosaurs died off 65 million years ago. Did they? Or is that just the opinion of misinformed scientists?
- Ancient Egyptians built the pyramids. Did they, or is that just the opinion of the brainwashed masses?
What you start to realize is that the things we view as “facts” - the truth, so to speak - are really defined by consensus. There is some threshold of majority opinion where we, as a society, accept the statement as fact. I don’t know what that exact threshold is, but it exists.
So now extend that concept to an AI chatbot. Grok is a large language model. It’s not like somebody at xAI is directly feeding it the “truth” for every “fact” known to man. That would defeat the purpose. At some point, Grok’s (or any LLM’s) truth boils down to some level of human consensus in its training data, combined with all sorts of black box processes occurring within the neural network that aren’t always very well-understood, even by the engineers who are working on it.
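That “truth by consensus” idea can be sketched as a toy simulation. To be clear, this is not how any real LLM actually works (the corpus, topics, and threshold here are all made up for illustration) — it just shows what it looks like when a system’s “fact” is nothing more than the majority claim in its training data:

```python
from collections import Counter

# Hypothetical mini-corpus: each string stands in for a document
# asserting some claim. Repetition = more of the corpus agrees.
corpus = [
    "the earth is round", "the earth is round", "the earth is round",
    "the earth is flat",
    "dinosaurs died out 65 million years ago",
    "dinosaurs died out 65 million years ago",
]

def consensus_fact(claims, topic, threshold=0.6):
    """Return the majority claim about a topic if it clears the
    consensus threshold, else None (no accepted 'fact')."""
    relevant = [c for c in claims if topic in c]
    if not relevant:
        return None
    claim, count = Counter(relevant).most_common(1)[0]
    return claim if count / len(relevant) >= threshold else None

print(consensus_fact(corpus, "earth"))      # round wins at 3/4 = 75%
print(consensus_fact(corpus, "dinosaurs"))  # unanimous
```

Nudge the threshold or the mix of documents and the “fact” flips or vanishes — which is the point: the output tracks consensus in the inputs, not some independent arbiter of truth.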
I don’t really know what this all means, other than that maximizing an AI for “truth” is actually a fairly difficult thing to do, because at the end of the day it’s a black box being fed a heaping shitload of human writing as training data, much of which carries its own biases and can affect the outputs in odd, unpredictable ways.
/rambling manifesto
Posted on 5/1/25 at 6:21 pm to El Segundo Guy
quote:
AI is affirming trans rights?
No way it would do that without programming