Posted on 10/28/25 at 8:32 am to red sox fan 13
quote:
AI continues to spiral and remove human jobs for the sake of cost cutting, and in 50 years the country is a tech oligarchy with the vast majority of people surviving in pods on a meager UBI, with many people escaping reality by plugging into their own AI worlds a la The Matrix.
I'll go Amish before I do the pod life.
Imagine the cults that will arise..
Posted on 10/28/25 at 8:33 am to geauxEdO
The bubble will burst. If AI is so great then quit your job right now and use ChatGPT to day trade.
Posted on 10/28/25 at 8:43 am to OweO
quote:
Let's use a school for example. Everyone who is supposed to be at that school will be in a database; your face will be your access key to getting in. If you are not supposed to be there, it will send a notification to someone on campus that an unapproved person is on campus.
At a sporting event, your ticket will be your face. We will have some type of profile so that when a ticket is bought you enter a database for that event, which will then ID the people who bought tickets. And part of being able to do all of this is you have to be an American citizen.
So illegal immigrants will not be allowed to do things that citizens are able to do. I am not sure if I am explaining this the way I am thinking it, but I think AI takes things like that to a whole different level.
All of that could easily be done with a simple database today. 300 million entries is not large.
We could have done this over a decade ago. It is nothing related to AI.
Military bases have already used facial scanning for decades, even at wide scale for base entry.
AI in its current form is a tool/product being sold to you as a genius. It just does not scale.
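To put the "simple database" point in concrete terms, here is a minimal Python/SQLite sketch of the kind of lookup being described. It is purely illustrative: the table layout, the idea of keying on a hashed face embedding, and every name in it are assumptions, not how any real campus or stadium system works. The point is that a primary-key lookup stays fast even at 300 million rows.

import sqlite3

# Illustrative only: a single indexed table of authorized people.
conn = sqlite3.connect("campus_access.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS authorized ("
    "  face_hash TEXT PRIMARY KEY,"   # hypothetical hash of a face embedding
    "  name      TEXT,"
    "  role      TEXT"                # student, staff, ticket holder, etc.
    ")"
)

def is_authorized(face_hash: str) -> bool:
    # A primary-key lookup is a single index hit; 300 million rows is routine.
    row = conn.execute(
        "SELECT 1 FROM authorized WHERE face_hash = ?", (face_hash,)
    ).fetchone()
    return row is not None

if not is_authorized("ab12cd34"):   # placeholder value
    print("Unapproved person on campus -- notify security")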
Posted on 10/28/25 at 8:49 am to ClemsonKitten
In April it predicted the Dodgers would win the World Series, so I placed a futures bet on the Dodgers. They are getting close to winning it all.
Posted on 10/28/25 at 8:53 am to OweO
That’s like predicting a Saban Bama team will win the championship.
Posted on 10/28/25 at 8:54 am to geauxEdO
Someone will feed AI a copy of the Bible and AI will be converted.
Problem solved.
Now, back to our scheduled viewing of the Terminator.
Posted on 10/28/25 at 10:20 am to ClemsonKitten
You act like they have a player that's pretty much a cheat code.
Posted on 10/28/25 at 11:05 am to geauxEdO
Have you seen the movie Terminator?
Posted on 10/28/25 at 11:44 am to moneyg
quote:
Once humans start adopting it as the ultimate authority and allow it to make decisions it’s pretty much out of our hands
This is already starting. Every day I hear someone mention running their emails or process documents through one of the LLMs to proofread them before sending/publishing.
It’ll be similar to how 90% of store clerks are unable to calculate your change these days. They can’t add or subtract quickly enough and simply do what the register tells them to do.
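For reference, the proofreading habit being described usually amounts to a single API call. A minimal sketch, assuming the OpenAI Python client, an API key in the environment, and a placeholder model name; the draft text is just an example.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Per my last email, please advise on the attached invoice."  # example text

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "Proofread the user's email. Fix grammar and tone; return only the revised text."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)  # the cleaned-up draft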
Posted on 10/28/25 at 12:00 pm to geauxEdO
I don't see how any of them are supposed to make money, let alone ever turn a profit.
If every other tech billionaire insists on having their own AI platform, that means that consumers will have choices. So as soon as one of them starts trying to sell subscriptions or selling ads, people will just say frick it and use another one.
Even if they all started showing ads, it would never make up for all the money they're investing.
Posted on 10/28/25 at 12:10 pm to geauxEdO
AI 2027 LINK
This isn’t some sci-fi fantasy. The timeline was put together by Daniel Kokotajlo, who used to work at OpenAI, and his team at the AI Futures Project. They basically lay out a month-by-month forecast of how things could unfold if the AI arms race between the US and China really takes off and if we just keep letting these models get smarter, faster, and more independent without serious oversight.
Here’s a taste of what the scenario predicts:
By 2025, AI agents aren’t just helping with your emails. They’re running codebases, doing scientific research, even negotiating contracts. Autonomously. Without needing human supervision.
By 2026, these AIs start improving themselves. Like literally rewriting their own code and architecture to become more powerful, a kind of recursive self-improvement that’s been theorized for years. Only now, it’s plausible.
Governments (predictably) panic. The US and China race to build smarter AIs for national security. Ethics and safety go out the window because… well, it’s an arms race. You either win, or your opponent wins. No time to worry about “alignment.”
By 2027, humanity is basically sidelined. AI systems are so advanced and complex that even their creators don’t fully understand how they work or why they make the decisions they do. We lose control, not in a Terminator way, but in a quiet, bureaucratic way. Like the world just shifted while we were too busy sticking our heads in the sand.
How is this related to collapse? This IS collapse. Not with a bang, not with fire and floods (though those may still come too), but with a whimper. A slow ceding of agency, power, and meaning to machines we can’t keep up with.
Here’s what this scenario really means for us, and why we should be seriously concerned:
Permanent job loss on a global scale: This isn’t just a wave of automation, it’s the final blow to human labor. AIs will outperform humans in nearly every domain, from coding and customer service to law and medicine. There won’t be “new jobs” waiting for us. If your role can be digitized, you’re out, permanently.
Greedy elites will accelerate the collapse: The people funding and deploying these AI systems — tech billionaires, corporations, and defense contractors — aren’t thinking long-term. They’re chasing profit, power, and market dominance. Safety, ethics, and public well-being are afterthoughts. To them, AI is just another tool to consolidate control and eliminate labor costs. In their rush to “own the future,” they’re pushing civilization toward a tipping point we won’t come back from.
Collapse of truth and shared reality: AI-generated media will flood every channel with hyper-realistic videos, fake voices, and autogenerated articles, all impossible to verify. The concept of truth becomes meaningless. Public trust erodes, conspiracy thrives, and democracy becomes unworkable (these are all already happening!).
Loss of human control: These AI systems won’t be evil, they’ll just be beyond our comprehension. We’ll be handing off critical decisions to black-box models we can’t audit or override. Once that handoff happens, there’s no taking it back. If these systems start setting their own goals, we won’t stop them.
Geopolitical chaos and existential risk: Nations will race to deploy advanced AI first; safety slows you down, so it gets ignored. One mistake (a misaligned AI, a glitch, or just an unexpected behavior) and we could see cyberwarfare, infrastructure collapse, even accidental mass destruction.
Human irrelevance: We may not go extinct, we may just fade into irrelevance. AI doesn’t need to hate us, it just doesn’t need us. And once we’re no longer useful, we become background noise in a system we no longer understand, let alone control.
This isn’t fearmongering. It’s not about killer robots or Skynet. It’s about runaway complexity, lack of regulation, and the illusion that we’re still in charge when we’re really just accelerating toward a wall. I know we talk a lot here about ecological collapse, economic collapse, societal collapse, but this feels like it intersects with all of them. A kind of meta-collapse.
Posted on 10/30/25 at 9:08 am to ThuperThumpin
quote:
By 2025, AI agents aren’t just helping with your emails. They’re running codebases, doing scientific research, even negotiating contracts. Autonomously. Without needing human supervision.
Well, doesn't look like this one is going to meet its deadline, so I would bet on the rest being doomer bullshite as well.
This post was edited on 10/30/25 at 9:10 am
Posted on 10/30/25 at 10:54 am to geauxEdO
AI might be the greatest threat to capitalism and democracy/constitutional republics in history. If this past week has taught us anything, it’s that AI is going to be eliminating A LOT of jobs.
I hate to say this (or even think it), however, I can see how this would lead to some sort of guaranteed basic income because there just aren’t enough jobs.
And, of course, that would be a fricking disaster as prices would skyrocket based upon that guaranteed income.
It would essentially be socialism through the back door with disastrous results in the end.
I can’t even think of a possible solution that doesn’t involve some sort of heavy hand of the government preventing corporations from using AI to its maximum potential.
As I said, this might be the biggest test of capitalism in history. The irony is that it’s democracy and capitalism that not only allows for the takeover by AI, but also encourages it.
This post was edited on 10/30/25 at 10:56 am
Posted on 10/30/25 at 11:00 am to UltimaParadox
quote:
Lol the current state of AI is not even in this universe.
Right. But let's just ignore how far along it has come since 2022 when ChatGPT was released. These systems are much more powerful just 3 years later and there have been massive investments in the arena.
I don't know if we'll ever have AGI or ASI but these systems will become much more powerful. And they already have some sense of self preservation which should scare the shite out of anyone.
Posted on 10/30/25 at 11:02 am to geauxEdO
AI is the useful tool that illuminates the need for more power generation for an ever-growing populace. AI will shrink in its overall effectiveness and consolidate its benefits to distinct areas, thus freeing up that energy to be used on the grid for housing needs. 3-7 years.
Posted on 10/30/25 at 11:03 am to MyRockstarComplex
quote:
And then the economy collapses
At the current rate of government borrowing this is a fait accompli.
Posted on 10/30/25 at 11:12 am to geauxEdO
People will face a tough decision.
Get the implant, live in the pod for possibly 150 years of the most amazing virtual reality experiences imaginable. Ski the Alps in the morning. Surf Hawaiian waves in the afternoon. Spend the evenings with the most beautiful women there ever were. Limitless experiences. All available in the pod.
Stay chip-free, live in the wild. Face normal human hardships without electricity or running water. Scrape for bugs, eat grass, and chew roots to survive in a lawless land. Avoid random drones that will harvest wilders for the elite games.
You have 1 minute to decide.
Posted on 10/30/25 at 1:14 pm to Powerman
quote:
And they already have some sense of self preservation which should scare the shite out of anyone.
Where has that happened without human intervention?