An economic collapse scenario by Citrini Research:
Posted on 2/24/26 at 9:46 pm
Not a prediction, but a grim scenario nonetheless.
Preview:
quote:
The company that sold workflow automation was being disrupted by better workflow automation, and its response was to cut headcount and use the savings to fund the very technology disrupting it.
What else were they supposed to do? Sit still and die slower? The companies most threatened by AI became AI’s most aggressive adopters.
This sounds obvious in hindsight, but it really wasn’t at the time (at least to me). The historical disruption model said incumbents resist new technology, they lose share to nimble entrants and die slowly. That’s what happened to Kodak, to Blockbuster, to BlackBerry. What happened in 2026 was different; the incumbents didn’t resist because they couldn’t afford to.
With stocks down 40-60% and boards demanding answers, the AI-threatened companies did the only thing they could. Cut headcount, redeploy the savings into AI tools, use those tools to maintain output with lower costs.
Each company’s individual response was rational. The collective result was catastrophic. Every dollar saved on headcount flowed into AI capability that made the next round of job cuts possible.
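The loop the article describes can be sketched as a toy simulation. This is purely illustrative: every parameter below (starting headcount, capability level, cut rates) is invented for the example, not taken from the article or any dataset.

```python
# Hypothetical toy model of the reflexive loop: each round, a firm cuts
# headcount, reinvests the savings in AI capability, and that improved
# capability enables a deeper round of cuts. All numbers are made up.

def reflexive_loop(headcount=10_000, capability=0.10, rounds=5,
                   cut_rate_per_capability=0.5, capability_gain=0.05):
    """Return per-round (headcount, capability, cuts) as capability compounds."""
    history = []
    for _ in range(rounds):
        # The fraction of jobs cut scales with current AI capability.
        cut_fraction = min(capability * cut_rate_per_capability, 0.9)
        cuts = int(headcount * cut_fraction)
        headcount -= cuts
        capability += capability_gain  # savings fund better tooling
        history.append((headcount, round(capability, 2), cuts))
    return history

for hc, cap, cuts in reflexive_loop():
    print(f"headcount={hc:>6}  capability={cap:.2f}  cut this round={cuts}")
```

The point of the sketch is just the direction of travel: headcount falls every round, and each round's cut is enabled by the previous round's reinvestment.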
Software was only the opening act. What investors missed while they debated whether SaaS multiples had bottomed was that the reflexive loop had already escaped the software sector. The same logic that justified ServiceNow cutting headcount applied to every company with a white-collar cost structure.
This post was edited on 2/24/26 at 9:47 pm
Posted on 2/24/26 at 9:51 pm to bayoubengals88
quote:
Travel booking platforms were an early casualty, because they were the simplest. By Q4 2026, our agents could assemble a complete itinerary (flights, hotels, ground transport, loyalty optimization, budget constraints, refunds) faster and cheaper than any platform.
Insurance renewals, where the entire business model depended on policyholder inertia, were upended. Agents that re-shop your coverage annually dismantled the 15-20% of premiums that insurers earned from passive renewals.
Financial advice. Tax prep. Routine legal work. Any category where the service provider’s value proposition was ultimately “I will navigate complexity that you find tedious” was disrupted, as the agents found nothing tedious.
Even places we thought insulated by the value of human relationships proved fragile. Real estate, where buyers had tolerated 5-6% commissions for decades because of information asymmetry between agent and consumer, crumbled once AI agents equipped with MLS access and decades of transaction data could replicate the knowledge base instantly. A sell-side piece from March 2027 called it “agent on agent violence”. The median buy-side commission in major metros had compressed from 2.5-3% to under 1%, and a growing share of transactions were closing with no human agent on the buy side at all.
Posted on 2/24/26 at 10:02 pm to bayoubengals88
quote:
In the US, we weren’t asking about how the bubble would burst in AI infrastructure anymore. We were asking what happens to a consumer-credit economy when consumers are being replaced with machines.
Posted on 2/25/26 at 4:08 am to bayoubengals88
Yea, I’m gonna need an AI summary on that
Posted on 2/25/26 at 6:08 am to CecilShortsHisPants
It's an economic arms race towards mutually assured destruction.
Posted on 2/25/26 at 6:51 am to bayoubengals88
Reads like an Ayn Rand novel.
Posted on 2/25/26 at 7:24 am to bayoubengals88
There was an article out last week talking about significant resignations happening across a lot of the AI development teams. People are seeing the writing on the wall and getting disgusted. Deflation worse than the Great Depression is a real possibility.
Posted on 2/25/26 at 8:19 am to bayoubengals88
Alright, so I read the whole thing. I don’t entirely disagree, but the timelines are moving far too fast. Human and business adoption of AI is much slower than AI’s actual capability gains. This whole scenario could play out, but I’d expect it to take significantly longer than they think. We’re also one major AI IP or cybersecurity incident away from a huge reduction in AI adoption.
Data continuously shows people are slow to adopt new technology, and we don’t predict bumps in adoption even though they are bound to occur. Look at driverless technology - we thought we could have this 10 years ago, yet it still impacts far less than 1% of consumers. Don’t get me started on flying cars either.
Plot AI adoption against a standard technology adoption curve and, while people act like AI has been heavily adopted, the truth is we’re still in the first or second stage. Few people have really done anything with AI beyond LLM chat, and few use it at much more than the level of a Google search rephrased as a question.
I think we’re near the top of the initial hype cycle (Peak of inflated expectations)
Currently we’ve decided that nothing can stop the AI train and that everything can potentially be done by AI, but we have very little evidence to show for it. The effective AI labor replacement models are doing maybe 10% of the work, and they need to be hitting 50%+ to actually replace labor. That’s a massive gap, and there will be many bumps on the way to filling it, which will slow things down tremendously.

Everyone is pricing in AI as a continuous exponential growth curve right now, but it’s unrealistic to expect that to continue. The most likely outcome is that pushing too far ahead with AI produces a major cybersecurity event, an accidental IP release, or an AI error that triggers a hysteria event and significantly slows adoption. That could also cause a market crash, but it would be much better for humanity overall.
TLDR: actual adoption may be slower than expected by current AI hype people and current AI doomers
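The "first or second stage" claim above is the standard S-curve of technology adoption. A minimal logistic sketch makes the shape concrete; note the midpoint year and steepness below are made-up values for illustration, not estimates from any dataset.

```python
import math

# Illustrative logistic (S-curve) adoption model. The midpoint year and
# steepness are hypothetical parameters chosen only to show the shape:
# adoption is tiny early on, accelerates near the midpoint, then saturates.

def adoption(year, midpoint=2032, steepness=0.6):
    """Fraction of potential users who have adopted by a given year (0..1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for y in range(2024, 2041, 4):
    print(f"{y}: {adoption(y):.1%}")
```

Under these (arbitrary) parameters, 2026 sits far down the left tail of the curve, which is the "we're still in the early stages" argument in one picture.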
This post was edited on 2/25/26 at 8:24 am
Posted on 2/25/26 at 8:52 am to bayoubengals88
What I get from this is essentially that corporate profits, productivity, and headline GDP grow strongly thanks to AI. Unfortunately, those gains don’t flow back to households because machines don’t earn wages or spend money. This leads to what the authors call “Ghost GDP.”
As the consumer market shrinks because of so many layoffs (fewer consumers with marginal dollars to spend on widgets), businesses then invest more in AI to make up the income shortfall. In essence, they would be pouring gasoline onto the fire and that fire would eventually burn down the economy until supply/demand found a new equilibrium with average consumer marginal spending.
At that point the question becomes: where does the economy go from here when there is no more need (or very little need) of no-skill, low-skill and even some professional level workers?
A strange, new world indeed.
Posted on 2/25/26 at 9:06 am to bayoubengals88
Speed of adoption is the question. I look back to China's entry into the WTO. I would expect the timeline for AI adoption and job displacement to be much faster especially for white collar workers. Most of the hardware is already in place and running. China had to build infrastructure and manufacturing plants which takes years.
Posted on 2/25/26 at 9:45 am to Bard
quote:
At that point the question becomes: where does the economy go from here when there is no more need (or very little need) of no-skill, low-skill and even some professional level workers?
This issue gets addressed at least partially. Citrini says (and I agree) we eventually move to a form of government benefits/UBI to subsidize the permanent loss of employment. Again, the time period is pushed way sooner than I would expect. Citrini also says they don’t figure out how the economy begins to grow out of the recession, and the article ends at that point. Obviously that gets left for the reader to figure out, since it’s too many logical steps forward. Eventually the economy becomes robots and AI and everyone is provided for. Whether we are provided for plentifully or struggling to survive is another story.
Posted on 2/25/26 at 9:46 am to Bard
I uploaded the file to ChatGPT and had it bring up some points that contradict it.
Mainly, this “scenario” assumes:
1. Labor income is static and not adaptive
2. Productivity rises while income falls
3. Underestimates capital market adaptation
4. Overestimates demand collapse (if corporations grabbed all the money, the government would likely raise taxes to redistribute wealth)
It agrees there could be a painful labor market transition, but that demand would shift to newly created jobs due to the AI infrastructure being built.
Posted on 2/25/26 at 9:47 am to BCvol
quote:
would expect the timeline for AI adoption and job displacement to be much faster especially for white collar workers. Most of the hardware is already in place and running
We need far more hardware (i.e., data center power) than we currently have to do this level of displacement. Token prices are too high to do a lot of this at economies of scale, and the models aren’t advanced enough yet to significantly displace jobs. We also have yet to see any disruptions to the AI exponential growth curve, and disruptions are bound to happen.
Posted on 2/25/26 at 9:50 am to Shepherd88
quote:
I uploaded the file to chat gpt and had it bring up some points that contradicted it. Mainly this “scenario” assumes 1. labor income is static and not adaptive 2.productivity rises while income falls 3. Underestimates capital market adaptation 4. Overestimates demand collapse (if corporations grabbed all the money then the government would like raise taxes to redistribute wealth)
Literally all of this is addressed in the article and it provides reasoning for each of them. Yet another example of AI seeming far more advanced than it actually is.
The rate of error is much more pronounced the more you understand a subject. The rate of error is far too high to replace labor significantly at this point, even though the article suggests this begins in late 2026
Posted on 2/25/26 at 10:33 am to Upperdecker
With current hardware and power constraints the number of jobs that can be replaced now is at least 10 million with some estimates approaching 40 million. Even 5 million would send shockwaves across the country.
Posted on 2/25/26 at 12:53 pm to Bard
quote:
A strange, new world indeed.
How new will it be? Somewhere in this thread it was mentioned that up to 40 million jobs could be lost, but that's roughly what we saw a hundred years ago with the mechanization/productivity gains in agriculture. Yes, we'd be looking at a much shorter period (a decade or so) for the AI dislocations to play out compared to 50 years for the agricultural dislocations, but then again - everything has sped up compared to a hundred years ago.
Posted on 2/25/26 at 1:33 pm to bayoubengals88
Did they address the current power and infrastructure constraints in the article?
Posted on 2/25/26 at 1:45 pm to bayoubengals88
I think what keeps the middle class wage slaves like me employed in the near future is accountability.
You can't hold a computer or AI accountable. If an AI agent fricks up and costs a company millions somehow, you can't exactly fire it or even make it feel bad. And if it's capable of one frickup, it's capable of many more at varying scales.
If Joe from sales fricks up and costs you your best account, you fire him and replace him. If AI fricks up, you tell it that it fricked up and it says "you are absolutely right!" but you are still stuck with it because you decided to replace all your humans to "save money".
Just my .02 and I sure hope I'm right.
Posted on 2/25/26 at 1:50 pm to castorinho
quote:
Did they address the current power and infrastructure constraints in the article?
Not that I recall. Interesting.
This is the end of the article:
quote:
AI capability is evolving faster than institutions can adapt. The policy response is moving at the pace of ideology, not reality. If the government doesn’t agree on what the problem is soon, the feedback loop will write the next chapter for them.
quote:
This is the first time in history the most productive asset in the economy has produced fewer, not more, jobs. Nobody’s framework fits, because none were designed for a world where the scarce input became abundant. So we have to make new frameworks. Whether we build them in time is the only question that matters.
But you’re not reading this in June 2028. You’re reading it in February 2026.
The S&P is near all-time highs. The negative feedback loops have not begun. We are certain some of these scenarios won’t materialize. We’re equally certain that machine intelligence will continue to accelerate. The premium on human intelligence will narrow.