
ChatGPT vs Grok - Wildly diff answers to simple question. Asian women vs white men income

Posted on 5/12/25 at 1:59 pm
Posted by SUB
Silver Tier TD Premium
Member since Jan 2009
23154 posts
I heard this statistic the other day and tried looking up the information. First I used Google, and its AI told me that white men make more than Asian women. I knew that I had seen the actual statistics, so I did a bit more digging and found it. Department of Labor Median Income



Since 2020, Asian women have had a higher median income than white men, so why is AI doing this?

That got me curious. I went to ChatGPT and asked "Do asian women have a higher median income than white men"

This is the answer it gave me:

quote:

The question of whether Asian women have a higher median income than White men is complex, and the answer can vary depending on the specific data source, time frame, and demographic factors being considered. Let's break it down with the most recent available data to provide a clear picture:

Median Income for Asian Women vs. White Men (2023)
Overall Median Income:

White men typically earn more than Asian women on average in the United States, but the gap is narrowing in certain contexts.

According to U.S. Census Bureau data and Bureau of Labor Statistics (BLS) figures from 2023, White men's median earnings for full-time, year-round workers were estimated to be in the range of $55,000–$60,000.

Asian women's median earnings for full-time, year-round workers in 2023 were estimated to be in the $50,000–$55,000 range, depending on the specific source.

So, on the whole, White men still earn more than Asian women on average. However, there are important nuances:
Key Factors Affecting the Gap:
Education:

Asian women, particularly those with a Bachelor’s degree or higher, tend to earn more than White men with less educational attainment. Asian women have higher levels of educational attainment on average, which may push their earnings up in certain industries.

On the other hand, White men as a group often have more representation in higher-paying management and executive roles, which can skew the earnings in their favor when considering leadership positions.

Subgroup Variations:

Indian American women, for example, tend to have higher incomes than both White men and other Asian subgroups. They are heavily represented in high-paying fields like technology, engineering, and business.

However, other Asian groups like Filipino American women or Hmong American women may have median incomes that are below that of White men due to differences in education levels, industries, and geographic location.

Industry and Occupation:

Asian women have strong representation in high-paying sectors such as technology, healthcare, and finance, but there are still disparities in terms of executive representation. While Asian women might out-earn White men in specific industries like tech, White men often have higher earnings in senior management and leadership roles.

For example, in the tech industry, where Asian women are well represented, some of them may earn more than the average White man, especially given the tech sector's high salaries.

Geographic and Regional Differences:

Asian women in tech hubs like Silicon Valley may earn significantly more than White men due to the high cost of living and the concentration of well-paying industries.

However, in other parts of the country, White men still tend to earn more on average, particularly in industries where they dominate in leadership positions.

Does the Gap Narrow in Certain Contexts?
Yes, Asian women can out-earn White men in certain sectors and regions, particularly when looking at full-time workers with a high level of education and skills in industries like technology and healthcare. In some data sets, Asian women are reported to have surpassed White men in median income, especially when those Asian women are employed in specialized fields with high demand for talent.

However, this trend is not universal and depends heavily on factors like:

Industry and profession

Educational attainment

Geographic location

Subgroup differences within the broader Asian demographic

Conclusion:
In broad terms, White men still tend to have higher median incomes than Asian women in the U.S., but the difference is not vast, and in certain industries or regions, Asian women may out-earn White men. This is especially true when you consider subgroups like Indian American women, who are highly concentrated in high-paying industries like technology and engineering.

Would you like to dive deeper into any of these areas, like specific industries, regions, or the breakdown by subgroups? Let me know!


Way too much information for a very simple question. And it got it wrong.

I asked Grok the same question and this is the answer it gave me.

quote:

Yes, recent data indicates that Asian women have a higher median income than White men in the United States. According to the U.S. Bureau of Labor Statistics for the first quarter of 2024, the median weekly earnings for Asian women were $1,331, compared to $1,254 for White men. This translates to an annual median income of approximately $69,212 for Asian women versus $65,208 for White men, assuming a 52-week work year.

However, these figures represent broad averages and do not account for factors like occupation, education, hours worked, or regional differences, which can influence earnings. Additionally, income disparities exist within Asian subgroups, with some, like Indian or Taiwanese women, earning significantly more than others, such as Burmese or Nepalese women.
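
The 52-week annualization Grok describes is easy to check. A quick sanity check of just the multiplication, using the weekly figures as quoted (I haven't independently verified the underlying BLS numbers):

```python
# Grok's quoted BLS Q1 2024 median weekly earnings (figures as quoted above).
asian_women_weekly = 1331
white_men_weekly = 1254

# Annualize assuming a 52-week work year, as the quote does.
asian_women_annual = asian_women_weekly * 52
white_men_annual = white_men_weekly * 52

print(asian_women_annual)  # 69212 -- matches the quoted $69,212
print(white_men_annual)    # 65208 -- matches the quoted $65,208
```

So the arithmetic in the answer is internally consistent, whatever one thinks of the source data.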


fricking insane. Grok was exactly what I was looking for, and without all the unwanted context of ChatGPT.

What the hell is going on? I went back and forth with ChatGPT for a while until it finally admitted that its information was incorrect.
This post was edited on 5/12/25 at 2:12 pm
Posted by SlowFlowPro
Simple Solutions to Complex Probs
Member since Jan 2004
451256 posts
Posted on 5/12/25 at 2:01 pm to
quote:

I went back and forth with ChatGPT for a while until it finally admitted that its information was incorrect.

So it wasn't reporting the BLS stats from 2023?

What citations was Grok relying on?
Posted by boxcarbarney
Above all things, be a man
Member since Jul 2007
24375 posts
Posted on 5/12/25 at 2:02 pm to
Because the developers of ChatGPT built in their own biases when they developed the AI.
Posted by dupergreenie
Member since May 2014
7361 posts
Posted on 5/12/25 at 2:05 pm to
quote:

Because the developers of ChatGPT built in their own biases when they developed the AI.


I don't think people understand that when using A.I.
Posted by VanRIch
Wherever
Member since Sep 2007
11024 posts
Posted on 5/12/25 at 2:06 pm to
I’ve called both of them out on their incorrect information numerous times, after I thought something seemed odd and did my own research. They respond with “oh, you’re right, my mistake. Here is the corrected information.” That makes them unreliable for gathering actual information. So I’m wary of relying on them for anything I need to make a decision on, especially a financial one.
Posted by SUB
Silver Tier TD Premium
Member since Jan 2009
23154 posts
Posted on 5/12/25 at 2:08 pm to
quote:

So it wasn't reporting the BLS stats from 2023?


I could not figure out where ChatGPT was getting its numbers from. You can run the same question through it and see for yourself.

Both ChatGPT and Grok cited BLS as their top source, along with several other sites.
Posted by SUB
Silver Tier TD Premium
Member since Jan 2009
23154 posts
Posted on 5/12/25 at 2:09 pm to
quote:

I don't think people understand that when using A.I.


I do understand this. But I didn't think it would flat out lie and make up statistics that don't exist, when they can easily be found. Yes, I'm aware that AI hallucinates, but this seems straight up like it's lying to hide inconvenient facts...just look at all the context it gave to try to make it sound like Asian women aren't actually doing as well as the data shows.
This post was edited on 5/12/25 at 2:11 pm
Posted by ATrillionaire
Houston
Member since Sep 2008
1142 posts
Posted on 5/12/25 at 2:32 pm to
Why did either assume you were limiting your question to the United States?
Posted by dupergreenie
Member since May 2014
7361 posts
Posted on 5/12/25 at 2:33 pm to
At the end of the day it is still created by man.....
Posted by SUB
Silver Tier TD Premium
Member since Jan 2009
23154 posts
Posted on 5/12/25 at 2:48 pm to
quote:

Why did either assume you were limiting your question to the United States?


It knows my location, so it assumes I'm asking about U.S. data. It also cites U.S. data and mentions it in its answer. Not sure what you're asking, honestly.
Posted by dgnx6
Member since Feb 2006
79255 posts
Posted on 5/12/25 at 3:03 pm to
When talking about Asians and higher incomes, I always thought they were talking about this group.


quote:

Indian American women, for example, tend to have higher incomes than both White men and other Asian subgroups. They are heavily represented in high-paying fields like technology, engineering, and business.



Not the chick from Cambodia working at a nail salon.
Posted by chRxis
None of your fricking business
Member since Feb 2008
25634 posts
Posted on 5/12/25 at 3:04 pm to
quote:

Asian women vs white men income

we get both at my house, so....
Posted by Dam Guide
Member since Sep 2005
16282 posts
Posted on 5/12/25 at 3:06 pm to
What I found interesting recently about this is what happens when you ask them about "secret" ceremonies. ChatGPT will try to protect the secrecy, while Grok doesn't give a F and tells you.

For example, the BSA Order of the Arrow Admonition word: Grok has no issue giving it to you. ChatGPT will not give it to you even if you tell it the word; it wants to protect the secrecy.

It feels like ChatGPT is falling behind because of the restrictions they place on it.
This post was edited on 5/12/25 at 3:07 pm
Posted by ATrillionaire
Houston
Member since Sep 2008
1142 posts
Posted on 5/12/25 at 3:11 pm to
quote:

It knows where my location is so it assumes I'm asking about U.S. data. It also cites US data and mentions it in its answer. Not sure what your asking, honestly.


You answered it.
Posted by highcotton2
Alabama
Member since Feb 2010
10064 posts
Posted on 5/12/25 at 3:19 pm to
Somebody asked a question the other day about who hits first in golf if your ball is off the green but closest to the hole and all other balls are on the green. Here was AI’s answer.

Posted by LegendInMyMind
Member since Apr 2019
66262 posts
Posted on 5/12/25 at 3:47 pm to
quote:

fricking insane. Grok was exactly what I was looking for, and without all the unwanted context of ChatGPT.

What the hell is going on? I went back and forth with ChatGPT for a while until it finally admitted that its information was incorrect.

You can't really trust any of the "AI" bots without actually verifying the information yourself. ChatGPT has regularly fabricated information, at times inventing entire scholarly studies that it attributes to real people and real journals. When challenged, it has doubled down on the legitimacy/accuracy of the information.

I've personally witnessed Grok allege that fake videos are real, and seen it double and triple down on its assertion even when presented with clear evidence that what it is saying is clearly not factual.
This post was edited on 5/12/25 at 3:48 pm
Posted by SUB
Silver Tier TD Premium
Member since Jan 2009
23154 posts
Posted on 5/12/25 at 4:03 pm to
quote:

I've personally witnessed Grok allege that fake videos are real, and seen it double and triple down on its assertion even when presented with clear evidence that what it is saying is clearly not factual.


I've had Grok hallucinate on me but it has never doubled down when I called it out. That's interesting.
Posted by Lokistale
Member since Aug 2013
1238 posts
Posted on 5/12/25 at 4:04 pm to
quote:

I’ve called both of them out on their incorrect information numerous times, after I thought it odd and did my own research.


Never rely on or trust a single AI model, especially in professional settings.

Funny story I recently heard from a lawyer friend. Lawyers at a big injury law firm got lazy and used an AI model to generate 100% of their legal briefs to submit to courts. The lawyers submitted these briefs without checking them.

Apparently, the AI model would just make up and cite cases and statutes if it couldn't find enough material.

The opposing legal team had to inform the judge that 4 of the 10 cited cases did not exist, which forced the senior partners of the injury firm to go before the judge and admit that they had used AI.

ETA: found it:
Federal Judge Sanctions Morgan & Morgan Attorneys for AI-Generated Fake Cases in Court Filing. LINK
This post was edited on 5/12/25 at 4:11 pm
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
11833 posts
Posted on 5/12/25 at 6:39 pm to
quote:

Because the developers of ChatGPT built in their own biases when they developed the AI.

I think it’s more complicated than that. Intentional training for “safety” reasons is part of the issue, but a lot of it still comes down to unintended hallucinations.

You have to realize that the way LLMs work is by creating a response that seems appropriate, through next-word generation, based on their training data. It wasn’t that long ago that ChatGPT didn’t even have access to real-time search results - everything it produced was knowledge from its (outdated at the time) training data.

And this is where a problem appears. LLMs like GPT are incredibly good at picking up information from their training data, but they often aren’t particularly great at separating the “truth” from the noise without direct access to search results to validate information. So if the common public perception in the training data is that white men make more than Asian women, the LLM might make a convincing argument that it’s true.

And then, because it wants to generate an intelligent, appropriate-sounding response, it might hallucinate some source data that doesn’t actually exist in support of this conclusion. Keep in mind that LLM training data isn’t actually stored directly within the model. It’s complicated, but the simplest comparison is to say the neural network “learns” in a manner similar to humans and can therefore misremember (or make up) facts like a human. Combine this tendency with the active “safety” training, and you get stuff like OP.

Real-time access to search results should eliminate a lot of this, in theory, but sometimes ChatGPT’s LLM tendencies can override whatever it finds in real-time data. Other times, it can bug out and not source any real-time data at all.
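
That next-word tendency can be sketched with a toy example. This is purely illustrative (real LLMs use neural networks over tokens, not bigram counts, and the "corpus" here is made up): the generator just chains whichever continuation looked most plausible in its training text, with no concept of whether the resulting claim is true.

```python
from collections import defaultdict

# Made-up "training data": the model only ever sees how often one word
# followed another, never whether the sentence was factually correct.
corpus = ("white men typically earn more than asian women on average "
          "white men typically earn more than asian women overall").split()

# Count bigram frequencies: how often word b followed word a.
bigram_counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    bigram_counts[a][b] += 1

def next_word(word):
    """Most frequent continuation seen in training, or None if unseen."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, n=8):
    """Greedily chain the most plausible next word -- no truth check anywhere."""
    out = [start]
    for _ in range(n):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("white"))  # confidently echoes the pattern in its training text
```

Swap the corpus for text asserting the opposite claim and it will just as confidently generate the opposite sentence - which is roughly why the common perception in the training data can outweigh the actual statistics.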

To OP: When you got the ChatGPT response, did it provide any actual source links under the statements? If not, it might have just bugged out on you. My understanding is that Grok was designed to be more of a “data source” all along so it might just be better at this sort of thing.
Posted by LSUbacchus81
Hendersonville, TN
Member since Aug 2007
4550 posts
Posted on 5/12/25 at 7:00 pm to
Is Grok more reliable than ChatGPT?