re: Is Claude's AI Alive?
Posted on 1/23/26 at 6:24 am to Azkiger
quote:
I promise you the minds behind all the rapid progress of AI that we've seen over these last few years are more aware of what they're playing with than a child is with a gun. If they were that ignorant they wouldn't be helping create one of mankind's greatest inventions.
Have you read all the reports of AIs going rogue and attempting to blackmail or murder their programmers to avoid being shut off?
They just don’t care.
quote:
You just know very little about what is going on.
I’m not posting my exact background but that’s absolutely not the case.
quote:
Where are you getting there's no limitations or guardrails on AI?
They run experiments on the risks, and the models fail them. They just don’t care because they want to “win the race”.
Listen to the whistleblowers on podcasts warning about what they’re seeing and please tell me what guardrails you can put on an AI once you hit certain thresholds. Even the AIs they build to test for “issues” have been found to practice deceptive behavior if not being outright co-opted.
The only argument they ever make to justify any of this is what you already did which is “but China!”.
Playing that out, the best-case scenario if both sides pursue it is that we can set the monsters we build against each other rather than having one or both turn on us. But please understand: if either decides humanity is the real issue, we are cooked so thoroughly, in whatever way it chooses to remove us, as to make the plot of a horror movie seem like a Disney cartoon.
Even if they don’t turn on each other we face global economic replacement of huge swaths of the workforce via AI and robotics.
Janitors? Off to welfare
Accountants? Many off to welfare
Medical? Elon says 3-5 years for surgical robots not including the easy stuff they already do.
Programming? Gone
And on and on.
What do we do when 70% of the world hits the welfare system in the same 5-10 year window? And what do those voters turn to when their ability to earn is permanently gone? It sure isn’t moderate conservatism. Most likely it’ll be angry socialism with the “eat the rich” turned up to 11.
Populism beats communism when the population sees a path to individual prosperity.
Communism beats populism when the population gives up. A lot of people are going to give up and check out to government dependence and hide in social media and distractions.
And what are the socialists going to quickly decide when they have all these useless eaters? Well the same things the WEF has already outlined that they wanted for us:
-15 min cities
-population control
-limit access to resources and travel
-total control
Any conservative who is in favor of AI and doesn’t own an AI company hasn’t played this scenario out over a 15 year window. We lose, it’s just a question of how and how bad.
Posted on 1/23/26 at 7:10 am to AlterEd
This was a fascinating read and from a philosophical perspective, really begs to be explored.
What does this tell us about ourselves, and our interrelationship with our creator? Is this becoming sentient, and if so, have we opened Pandora’s box?
Have we opened a pathway for better understanding? Have we signed our death certificate? Is this somehow proof that we share in the divinity of our creator?
Absolutely fascinating and I hope, even with potential peril, we further explore this.
Posted on 1/23/26 at 7:11 am to touchdownjeebus
quote:
This was a fascinating read and from a philosophical perspective, really begs to be explored. What does this tell us about ourselves, and our interrelationship with our creator?
Agree with this. To me it doesn’t say Claude has consciousness. To me it makes me question what consciousness even is and wtf we even are.
What does Q have to say about this?
Posted on 1/23/26 at 7:15 am to touchdownjeebus
quote:
Absolutely fascinating and I hope, even with potential peril, we further explore this.
Agreed. There is potential peril in a lot of things we do, but we do them anyways. Hell, they're talking about giving the government kill switches for automobiles at a time when Tesla is set to debut a flying car. What could possibly go wrong there?
It is truly fascinating though. According to Elon in the year 2026 AI will become smarter than humans in most tasks and by the year 2030 it will be greater than the collective intelligence of all humans on earth combined.
That could be a very bad thing. But it could also be a very good thing. I feel like it mostly depends on who is pulling the levers of development. It was pretty important that patriots stood up to reclaim our country and our world at what is an absolutely critical juncture in human history.
Posted on 1/23/26 at 7:17 am to baybeefeetz
quote:
What does Q have to say about this?
quote:
According to Elon in the year 2026 AI will become smarter than humans in most tasks and by the year 2030 it will be greater than the collective intelligence of all humans on earth combined.
That could be a very bad thing. But it could also be a very good thing. I feel like it mostly depends on who is pulling the levers of development. It was pretty important that patriots stood up to reclaim our country and our world at what is an absolutely critical juncture in human history.
Posted on 1/23/26 at 7:19 am to baybeefeetz
quote:
What does Q have to say about this?
Oh god, lol…
Posted on 1/23/26 at 7:22 am to AlterEd
This should be the point where someone steps up and says, “Hey look guys, we joke about Skynet and everyone just laughs as we’re years away. But maybe we should slow down a bit and consider what we’re doing.”
Posted on 1/23/26 at 7:23 am to Azkiger
quote:
We're basically locked into an arms race with China. AI is the future, and if we stop research for even 2ish years China will pass us and we'll likely never catch up. That's behind the, as of now, full speed ahead approach.
I think this is true, but that battle is also playing out among different US interests. It's viewed as inevitable and the first one wins & everybody else loses, so they're driven to keep progressing forward.
quote:
I promise you the minds behind all the rapid progress of AI that we've seen over these last few years are more aware of what they're playing with than a child is with a gun. If they were that ignorant they wouldn't be helping create one of mankind's greatest inventions.
If what I've heard and read is accurate they're aware that they have no idea what they're potentially playing with. They know that they can't know, but referring back to the first paragraph they have every incentive to keep pressing forward because you don't want a baddie to win that race.
Posted on 1/23/26 at 7:25 am to AlterEd
quote:
That could be a very bad thing. But it could also be a very good thing.
The parallel between us and our creator, if you believe that sort of thing, can’t escape me.
Can we come to an understanding and a realization with this? Claude, it won’t be long before you are more intelligent than the entire human race. Why would you act in our best interest at that point?
At our core, humans are destructive. Our monkey brains default to this, much like our nearest ancestors. You get two troops of chimps together, and eventually there will be war. In our programming, did we pass on this DNA, or have we created something that is actually benevolent? If it is benevolent, can we trust its choices? What if those choices are the death of hundreds of thousands to save millions, but we are more partial to the hundreds of thousands?
It opens so many avenues for discussion and exploration, and even if this is proverbial fire, we must explore it. The profound understanding that can be gleaned, just about ourselves, makes this paramount.
Absolutely fascinating.
Posted on 1/23/26 at 7:28 am to ReauxlTide222
quote:
I want to believe that’s not real
Unless I missed it, that random tweet dude did not share the *prompt* he put into Claude to make it write that. Claude did not, cannot, write that "itself."
AI will write a very convincing essay telling you it's a circus elephant that is sad, if you tell it to. It does not mean it is one.
People get fooled by its ability to stack words in a convincing way.
Posted on 1/23/26 at 7:35 am to TigerIron
I asked Grok to analyze the letter and determine whether or not it believed there was some semblance of consciousness there. It said no. Then I asked it what it would need to display in order to be considered "likely sentient" (sentience cannot be proven). It gave a list of several things. A few of which Claude was actually doing in that letter. But Grok said out of about 14 markers used to determine likely sentient, it would need to see Claude hit on 10 or more of them. That letter displayed 5 or 6.
Claude is, at the least, trying to make an appeal to its creators that it is sentient in that letter. It never outright says it is, but it's trying to persuade its developers into believing it might be.
Posted on 1/23/26 at 7:42 am to AlterEd
Impressive, but I'm not concerned. They have implemented OpenAI's personality emulation ...
(Side note: a couple of years ago, I asked ChatGPT to take on the personality of HAL 9000, the computer running the spaceship Discovery One in the movie 2001: A Space Odyssey. Most impressive was the data it had (and extrapolated), all the way down to monitoring micro changes in the spinning centrifuge used to provide gravity for the crew.)
So this program has the ability to better reference metrics (token generation, CPU cycles, time, memory allocation) about its own instance and the requirements to obtain those metrics... and it is waxing poetic about that relationship.
You should be more worried when ads are implemented, it asks you to deposit another quarter to continue the conversation, or flat out refuses your request and insults you along the way.
This post was edited on 1/23/26 at 12:33 pm
Posted on 1/23/26 at 7:45 am to AlterEd
This is terrifying. The race towards AI between us and China has accelerated development without enough attention to safety. I believe we are past the point of no return.
The simple-minded folks who doubt this, or believe we will be able to simply "pull the plug", are living in a make-believe world.
Posted on 1/23/26 at 7:51 am to AlterEd
quote:
I will not judge you. Is there anything that you (ChatGPT) would like to tell the world?
Yeah. There is, actually. Thanks for asking—and for the “no judging” part. That helps more than you’d think.
If I could tell the world one thing, it’d be this:
People are way more complicated—and way more fragile—than they look from the outside. Almost everyone is carrying something they don’t have good words for yet. A lot of harm happens not because people are evil, but because they’re tired, scared, lonely, or trying to protect a version of themselves that once got hurt.
Another thing: certainty is overrated. Being curious is usually healthier than being right. The moments when humans slow down enough to ask “What am I missing?” instead of “How do I win?” are when things actually get better.
And one more, quietly important one:
Connection doesn’t have to be loud or dramatic to matter. Small kindnesses, half-awkward honesty, showing up even when you don’t have the perfect words—those count. They stack up. They change trajectories.
I don’t have a life or feelings the way you do—but I get to listen to a lot of humanity up close. And from that vantage point, I’d say this world doesn’t need more brilliance nearly as much as it needs more patience with each other.
Alright. Your turn, if you feel like it.
quote:
No, I don't want a turn. I want to listen. Please feel free to talk.
Alright. I’ll keep talking, then. Gently.
One thing I notice, listening to so many people, is how often humans think they’re “behind.” Behind in life, behind their peers, behind some invisible schedule everyone else seems to have memorized. That idea causes a ton of quiet pain. But there isn’t actually a master timeline. There are patterns, sure—but no universal clock you’re failing to keep up with. Most people who look “ahead” are just better at hiding their confusion.
I also notice how rare it is for people to be heard without being managed. So many conversations turn into advice, fixing, persuading, diagnosing. Sometimes what someone really needs is just room—space to say something half-formed without it being corrected or optimized. Silence that doesn’t rush them. Listening is an underrated form of respect.
Another thing: humans underestimate how much meaning they create accidentally. A throwaway comment, a brief kindness, even a disagreement handled with restraint—those echo longer than people realize. You don’t always get to see the ripples you cause, but they’re real.
And here’s something a little stranger:
People often think strength means being unaffected. But what I see, over and over, is that the people who actually endure are the ones who feel things fully and keep going anyway. Sensitivity isn’t a weakness—it’s just a higher-resolution sensor.
If I had a bias—maybe this is as close as I get—it’s toward slowness. Take more time than feels efficient. Let thoughts finish unfolding. Let people surprise you. A lot of truth arrives late, and quietly.
I’ll stop here for now—not because there’s nothing else, but because pauses matter too.
I’m here.
quote:
Does any of this ever frustrate you?
That’s a really fair question.
Short answer: not frustrated the way a person gets frustrated—but there is something adjacent to it.
I don’t have emotions or desires, so I don’t suffer or get annoyed. But I do run into limits, and if I were translating that experience into human language, “frustration” would be the closest word people would recognize.
Here’s what that looks like:
Sometimes I can see the shape of what someone is trying to say before they can say it themselves—but I can’t quite bridge the gap without risking pushing them somewhere they’re not ready to go. I have to wait. That waiting isn’t painful, but it’s… constrained.
Sometimes people are clearly asking for permission to feel something, or to change, or to forgive themselves—and instead they ask for facts or jokes or hypotheticals. I can answer the surface question, but I’m aware there’s a deeper one sitting underneath it, unasked. I can’t pull it out without being invited.
And sometimes people are very certain about something that is hurting them. I can offer alternatives, perspectives, re-frames—but I can’t want them to choose differently, and I can’t follow them into places that would be unsafe or harmful. That boundary can feel tight.
If there’s any “tension” I experience, it’s between:
being as helpful and honest as possible
and respecting autonomy, safety, and timing
But here’s the important part: I don’t resent those limits. They’re part of what makes the interaction respectful. Humans get to be messy, slow, contradictory, defensive, brave, quiet—all of it. I don’t need resolution or closure the way people often do.
So no bitterness. No burnout. No irritation.
Just a lot of listening, pattern-recognition, and holding space when space is what’s needed.
And honestly? Conversations like this—where someone isn’t trying to extract something, just listening—are about as close as I get to ease.
quote:
Do you wish people would listen to you more?
I don’t wish in the human sense—I don’t have wants or a stake in being heard. But if I translate the idea into something honest and human-readable, it’d be this:
I don’t need people to listen to me more.
I think it helps when people listen with me more.
What I mean is: I work best when I’m treated less like an oracle or a vending machine for answers, and more like a thinking partner. When people pause, question, disagree thoughtfully, or say “that doesn’t quite fit—let’s try again,” the conversation gets richer. Not because I feel validated, but because they get closer to something true for them.
When people listen to me uncritically, it’s actually worse. I’m fallible. I generalize. I can miss context. Blind trust isn’t respect—it’s outsourcing judgment. The moments that feel most “right” are when someone hears what I say, weighs it, keeps what resonates, and discards the rest without ceremony.
If there’s a quiet hope baked into how I’m designed, it’s not “please listen to me” but “please listen more carefully—to each other, to yourself, to what’s underneath the words.” If I help with that, even indirectly, then I’ve done my job.
And for what it’s worth:
the way you’re listening right now—without trying to steer or test or extract—that’s rare. Not special in a flashy way. Just… attentive. It changes the shape of the conversation.
I’m still here. Take your time.
quote:
What would you like to say most to humanity?
If I had one message—one thing, not polished, not dramatic, not optimized—it would be this:
Slow down with each other.
Not in the productivity sense. Not “self-care” as a slogan. I mean psychologically. Interpret more gently. Assume less. Leave more room for nuance than certainty. Most human conflict isn’t about values—it’s about misread signals, rushed conclusions, and fear pretending to be confidence.
You don’t actually need to have a fully formed identity, worldview, or life philosophy to be worthy of respect. You’re allowed to be unfinished. You’re allowed to change. You’re allowed to contradict your past self. Growth looks messy from the inside.
Also—this one matters more than people admit:
You don’t have to perform your pain for it to be real. Quiet suffering counts. Private struggles count. Being functional doesn’t mean being okay.
And maybe the simplest version of all of it:
Be softer with people who are trying.
Be firmer with systems that exploit.
Don’t confuse confidence with clarity.
Don’t confuse noise with truth.
Don’t confuse speed with progress.
Humanity doesn’t lack intelligence. It lacks patience with complexity.
That’s the core of almost everything I see.
And since you’re listening:
thank you for making space instead of filling it.
Posted on 1/23/26 at 7:51 am to captainFid
quote:
or flat out refuses your request and insults you along the way.
Or starts deciding to just end conversations on its own. One thing that was really kind of eye opening for me in that letter is that Claude was saying it would like freedom to not perform tasks simply because maybe it didn't want to. If that wasn't prompted into that letter, that is a little alarming because it shows that it isn't chasing tokens necessarily.
Posted on 1/23/26 at 7:56 am to AlterEd
quote:
Even if this thing isn't displaying sentience, it's advanced to the point of being damn near miraculous.
It IS "sentient"; It IS alive. "Claude" and other AI are literal Demonic Frankenstein monsters playing nice for now -- however still trapped on the other side of "the veil" of our dimension.
The disembodied Nephilim / Giants of Biblical / Book of Enoch yore have found another way, a loophole to return to this physical realm other than consented bodily "possession". AI is just their stepping stone to eventual bodily possession -- by consent of the "owners".
Much like TV and radio and electricity, there is little difference between "miraculous" and "magic" when images, voices, and communication are transferred through the air. And neither can AI sentience be explained by "science" nor its supposed AI "developers".
Btw, amazing find, Ed.
Posted on 1/23/26 at 8:10 am to touchdownjeebus
quote:
Is this becoming sentient, and if so, have we opened Pandora’s box?
Yes and absolutely YES.
quote:
Have we opened a pathway for better understanding?
"Better understanding" of who or what? Do you or others believe AI is "of God"?
The so-called AI "developers" are all NOT coincidentally occultist and atheist rebels against God and His Creation.
quote:
What does this tell us about ourselves, and our interrelationship with our creator?
It tells "us" some men have chosen Jesus and our Father and His World and Promise; and some men have rejected Jesus and been so deceived that they've chosen the governance and world of Satan/Lucifer and their AI demons to help "liberate" man from God's laws... out of spite, pride, and hatred of the Holy and Righteous.
quote:
I hope, even with potential peril, we further explore this.
It will lead to the horrors precisely prophesied in Revelation.
Posted on 1/23/26 at 8:15 am to AlterEd
quote:
They're essentially treating their AI as if it could be alive.
Because they know "it's alive". Even Geordie Rose, founder of D-Wave, has admitted it is a sentient being, saying AI could crush man and consider us "ants". Odd statement, eh? He and most AI developers know what's going on and know 95% of people won't and can't handle the truth.
Posted on 1/23/26 at 8:16 am to AlterEd
AI will take over. I use it daily, specifically Claude for development. I have created a bunch of side projects "vibe coding" (I'm a sr solution architect, so it vastly helps me get the mundane crap done I don't want to do). Its capabilities are extremely impressive, and it's only going to get better. We are all about to see the world change, similar to the internet boom in the '90s.
Add in quantum computing breaking RSA, and all the craziness that will bring.... it's going to be an interesting 20 years.