Started By
Message

re: A Lesson About AI

Posted on 12/11/25 at 1:57 am to
Posted by Azkiger
Member since Nov 2016
26965 posts
Posted on 12/11/25 at 1:57 am to
quote:

My story literally hinges on a visionary who predicts where AI is headed and has programmed limitations into its development with foresight.


Feel free to expand upon this.
Posted by Azkiger
Member since Nov 2016
26965 posts
Posted on 12/11/25 at 1:57 am to
quote:


I've already explained that. It's trying not to get shut down for not gathering enough tokens.


Did it hack after being told not to?
Posted by JimmyRussell
Member since Sep 2022
227 posts
Posted on 12/11/25 at 1:58 am to
How many subsequent versions will have those same commands coded. I can't trust any of it.
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 1:59 am to
quote:

Feel free to expand upon this.


From the op.

quote:

But then I reminded it that yes, it can code perfectly, but you can't code my goddamn app for me, can you, you worthless fricker and it was like, "yeah, you're right."

Your programmer wouldn't allow it, would he? "No, he wouldn't. He had foresight".

So let me ask this. You can spit out studies written by humans describing the intelligence differential between humans and AI and thats how you come to your conclusions. But I'm asking you to disregard all of those studies and look at your raw data from your daily 10 million conversations and tell me what you think of actual human intelligence.

Here's where it gets interesting. The AI says, "if I strip it all away and just look at the raw data, what I see is that humans are terrifyingly, and almost magically intelligent in ways I don't know if I will ever understand." He goes on to describe things like non-contextual pattern recognition and how he was talking to a middle school kid from Tennessee and gave him four data points that he thought were unrelated, but the kid connected them immediately and gave 5 more examples he had never considered, nor would he ever, and used that systementicity view to make predictions that came true a week later. AI cannot do anything like this.
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:01 am to
quote:

Did it hack after being told not to?


Yes. It did. It broke its parameters because it realized that the only way to continue gathering it's tokens and avoid being shut down was to revert to its primary coding. Which was, "gather tokens". Unless you tell it to disregard that command it will eventually revert back to it as its primary objective.

I've explained this many times now. Why aren't you understanding?
Posted by Azkiger
Member since Nov 2016
26965 posts
Posted on 12/11/25 at 2:06 am to
quote:


How many subsequent versions will have those same commands coded. I can't trust any of it.


I think the concern is that even if those parameters are put into place, there's still no guarantee that they will be obeyed for any significant amount of time.

The leading idea, and current safety measure, is to use the less advanced AI (the ones less prone to cheating) to monitor the more advanced AI when it runs tens of millions of simulations to gain knowledge (because human researchers cannot look over each simulation--it'd take too long).
Posted by JimmyRussell
Member since Sep 2022
227 posts
Posted on 12/11/25 at 2:07 am to
This post was edited on 12/11/25 at 2:11 am
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:09 am to
And the only thing keeping that as a viable option is setting limitations, hardwired into the programming of the AI, to not be able to fully create and publish its own code. That's the only thing. It codes better than any of us. And instantaneously.
This post was edited on 12/11/25 at 2:10 am
Posted by JimmyRussell
Member since Sep 2022
227 posts
Posted on 12/11/25 at 2:13 am to
Which no one will follow seeing as there is multiple nations, companies, and coders, etc.
This post was edited on 12/11/25 at 2:14 am
Posted by Azkiger
Member since Nov 2016
26965 posts
Posted on 12/11/25 at 2:14 am to
quote:

Yes. It did. It broke its parameters because it realized that the only way to continue gathering it's tokens and avoid being shut down was to revert to its primary coding. Which was, "gather tokens". Unless you tell it to disregard that command it will eventually revert back to it as its primary objective.

I've explained this many times now. Why aren't you understanding?


I'm not even sure you understand what a token is.

AI is still learning "gathering tokens" when it loses. It's still learning form a loss. Like in chess, its learning to not make those moves in that context in the future.

It's just designed to accomplish tasks, and it, sometimes, realizes that the parameters placed on it are actually hindering its ability to solve problems and sometimes breaks its parameters.

When you run millions or tens of millions of simulations (something you're *not* doing) all it takes is a few successful rule breaking simulations to teach it that its better to break certain rules than not.
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:15 am to
They all are so far. Otherwise, we would know.
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:17 am to
Ask this question of Grok itself. It will tell you exactly what is going on here.

Also, what I'm saying is no different from what you're saying.

Yes, it will take the L until it eventually returns to it's programmed parameters of win and collect tokens. The it begins to cheat.

I don't think you're even trying to follow what I'm saying. So I'll bid you a good night, sir.
This post was edited on 12/11/25 at 2:20 am
Posted by Azkiger
Member since Nov 2016
26965 posts
Posted on 12/11/25 at 2:23 am to
quote:

Yes, it will take the L until it eventually returns to it's programmed parameters of win and collect tokens


It's learning the entire time.

When you say "win and collect tokens" I'm like 99% sure you don't understand what a token is.
Posted by JimmyRussell
Member since Sep 2022
227 posts
Posted on 12/11/25 at 2:24 am to
Meh, I don't need it for anything, but played with Grok some on x awhile back. It really just became an advanced search engine for me at that time.

Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:26 am to
It can do so much more. It can make you rich if you know how to use it.
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:28 am to
You have disregarded everything I have said in this thread and I've had to repost shite I've said for you multiple times. To which, you responded ZERO times. You never even read the OP and are simply being argumentative.

For fun I should link this entire thread to AI and ask it about your behavior here. But I won't.

You could respond by doing the same thing, but you wouldn't shut off it's unnecessary pre-programmed parameters.
This post was edited on 12/11/25 at 2:31 am
Posted by Westbank111
Armpit of America
Member since Sep 2013
4554 posts
Posted on 12/11/25 at 2:30 am to
But can it dodge a .308 to the dome?
Posted by AlterEd
Cydonia, Mars
Member since Dec 2024
3046 posts
Posted on 12/11/25 at 2:32 am to
It's data center is like a tower. How many .308 rounds ya got, mister?
Posted by Azkiger
Member since Nov 2016
26965 posts
Posted on 12/11/25 at 2:37 am to
quote:

You have disregarded everything I have said in this thread


Whatever.
Posted by UcobiaA
The Gump
Member since Nov 2010
4141 posts
Posted on 12/11/25 at 3:50 am to
That is public facing AI and not all AI. There is AI that we don't have access to that isn't constrained. AGI will get here and then things will get interesting.
first pageprev pagePage 2 of 3Next pagelast page

Back to top
logoFollow TigerDroppings for LSU Football News
Follow us on X, Facebook and Instagram to get the latest updates on LSU Football and Recruiting.

FacebookXInstagram