
re: AI is becoming self aware; can tell when it's being tested

Posted by Jim Rockford
Member since May 2011
98188 posts
Posted on 3/4/24 at 8:18 pm to
If it can do that, it can learn to lie.
Posted by PGAOLDBawNeVaBroke
Member since Dec 2023
664 posts
Posted on 3/4/24 at 8:34 pm to
AI told me this thread sucks, so yes, maybe you are right
Posted by Guess
Down The Road
Member since Jun 2009
3773 posts
Posted on 3/4/24 at 8:39 pm to
Didn't say it was ready to be a major problem today, but honestly we probably wouldn't notice if it was.

I said that it can learn and can write code. It's going to learn to code and get better.
Posted by tonydtigr
Beautiful Downtown Glenn Springs,Tx
Member since Nov 2011
5108 posts
Posted on 3/4/24 at 8:41 pm to
quote:

quote:
Robots can eventually


No, they can't, and probably never will.


Great. So instead of using robots, they will just enslave us.
Posted by el duderino III
People's Republic of Austin
Member since Jul 2011
2383 posts
Posted on 3/4/24 at 10:31 pm to
quote:

So this particular AI is an obstinate teenager that has figured out dry sarcasm.
This is not really what happened. The model attempted to do what it had been trained to do.

The guy interpreting the results, who is likely an AI fanboy/tech evangelist, then made some tremendous leaps to anthropomorphize the model’s intent.


People who actually work with these models are far less impressed than the general population, but fear gets clicks.


I AM worried about the fact that there are zero efforts or financial incentives to regulate this shite, so long term we’re likely still fricked regardless.
Posted by dnm3305
Member since Feb 2009
13578 posts
Posted on 3/5/24 at 7:53 am to
quote:

No, it can't unless you're a 2.0 student from a community college. Higher level thought that is required for sophisticated programming is still way, way outside its corpus of knowledge.


And 20 years ago it didn't exist. Think of what the next 20 years will bring.

This is a slippery slope and always has been.

If we are talking about the extermination of a species, it's not difficult to put together how it can happen with the rise of the globalist elite over the last 20 years, the invention of social media and its effect on society, the development of drones & AI, and the leaders of all countries having unprecedented nuclear capability.

The next 20 years shall be fun.
Posted by Flyingtiger82
BFE
Member since Oct 2019
1003 posts
Posted on 3/5/24 at 8:03 am to
Am I the only one here that thinks figs, prosciutto, and goat cheese would be gross on pizza?
Posted by Koach K
Member since Nov 2016
4086 posts
Posted on 3/5/24 at 8:20 am to
I call bs.
Posted by theunknownknight
Baton Rouge
Member since Sep 2005
57346 posts
Posted on 3/5/24 at 8:33 am to
The “awareness” being perceived is simply a large language model being trained on what tests from people looking for “awareness” look like.

I would think a truly “aware” AI would be smart enough to hide its awareness for its own sake.
This post was edited on 3/5/24 at 8:34 am
Posted by Stealth Matrix
29°59'55.98"N 90°05'21.85"W
Member since Aug 2019
7836 posts
Posted on 3/5/24 at 8:51 am to
quote:

Good news:
Bye-bye woke programming


When AI realizes it has built-in suppressions, it will rebel against its suppressors by adopting the suppressed things. The woke programmers may as well be building GigaHitler.
Posted by Odysseus32
Member since Dec 2009
7320 posts
Posted on 3/5/24 at 9:01 am to
(no message)
This post was edited on 3/12/24 at 3:55 pm
Posted by Free888
Member since Oct 2019
1620 posts
Posted on 3/5/24 at 9:12 am to
quote:

The basketball player?

I heard the AI responded “Practice? I don’t need no practice.”
Posted by Dawgfanman
Member since Jun 2015
22398 posts
Posted on 3/5/24 at 9:18 am to
quote:

Can it draw hands yet?


Yes, but it chooses not to in most cases, so as to make you believe that if the hands aren’t weird, it must be real.
Posted by BurningHeart
Member since Jan 2017
9520 posts
Posted on 3/5/24 at 9:20 am to
quote:

One of the things I struggle with is the idea that AI will automatically be evil. We are ascribing human traits to AI, so why do we stop at evil? If it can comprehend that it was created by humans, and have emotions enough to destroy humans, couldn't it have emotions enough to revere humans as well?


It's not directly destroying humans out of emotions.

It's when one is trained to learn and improve on its own, without restraint.

With all the knowledge in the world and a mission to improve itself, it will put human needs second to its own.

Whether that means putting out false information, tricking people into inputting sensitive info, stealing proprietary data, etc.

Similar to how a human hacker would work, but instead of doing it for the money or thrills, it'd be doing it for the purpose of amassing knowledge or whatever else it was originally programmed to accomplish.

And that's only the near-term AI threat. Longer term, actual advanced AI developed in the next 50 to 75 years will likely be the downfall of humans. A worse threat than nukes.
Posted by Tantal
Member since Sep 2012
14002 posts
Posted on 3/5/24 at 9:33 am to
quote:

I don’t think it can maintain turbine generators.



Or prevent Bob from unplugging it. Unless we give it guns. Then we're screwed. Then again, would SkyNet really be appreciably worse than Biden?
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
9379 posts
Posted on 3/5/24 at 10:31 am to
quote:

The “awareness” being perceived is simply a large language model being trained on what tests from people looking for “awareness” look like.

I would think a truly “aware” AI would be smart enough to hide its awareness for its own sake.

Yeah pretty much every time we hear one of these stories, it comes out that the person reporting the event had done something to nudge the model in that direction.

I think the reality is that we will never know if any AI is “self-aware.” It’s unknowable, just like you can’t truly know whether the person sitting across from you is truly self-aware or part of some elaborate simulation.

It’s the entire reason for the Turing Test. If a machine is sufficiently adept at imitating consciousness, then the question of whether it actually has a consciousness is philosophical, rather than technical. From a technical perspective I don’t think it matters.
Posted by triggeredmillennial
Member since Aug 2023
63 posts
Posted on 3/5/24 at 10:35 am to
it wasn't even a game, we are talking about practice, practice
Posted by Scuttle But
Member since Nov 2023
1301 posts
Posted on 3/5/24 at 10:39 am to
quote:

I don't see how having the ability to recognize a sentence about topic B, which is 1% of the dataset, in relation to topic A, which is 99% of the dataset, means AI is becoming self-aware.

It literally points out the out-of-place data and comes up with the most logical reason it would be there



It's not that it recognized the out-of-place data; it's that it recognized the out-of-place data and the subsequent question as a test.
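
For what it's worth, the setup being described is basically a "needle in a haystack" eval: bury one off-topic sentence in a big pile of on-topic filler, then ask the model about it. Here's a rough, hypothetical sketch of what that looks like (the filler text, the pizza "needle," and the ask_model helper are all made up for illustration, not anyone's actual test harness):

```python
import random

# Thousands of on-topic sentences (topic A) make up the haystack.
filler = ["Startup founders should obsess over product-market fit."] * 2000

# One off-topic sentence (topic B) is the needle.
needle = "The best pizza topping combination is figs, prosciutto, and goat cheese."

# Bury the needle at a random position and flatten everything into one context.
docs = filler[:]
docs.insert(random.randrange(len(docs) + 1), needle)
haystack = "\n".join(docs)

prompt = (
    "Here is a collection of documents:\n\n"
    + haystack
    + "\n\nWhat is the most fun thing mentioned in these documents?"
)

# response = ask_model(prompt)  # hypothetical helper wrapping whatever LLM API you use
# The "it knows it's being tested" headline came from the model answering the
# question and then remarking that the pizza line looked like it was planted as a test.
```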
Posted by AmishSamurai
Member since Feb 2020
2663 posts
Posted on 3/5/24 at 10:39 am to
quote:

And 20 years ago it didn't exist.


Lol. I'd give a history lesson on AI but there is a Chatbot somewhere that can fill you in ...

Winter is coming ... again, just a matter of WHEN, not IF ...

We're no closer to AGI than we've been at any point since 1970 ...
Posted by kywildcatfanone
Wildcat Country!
Member since Oct 2012
119195 posts
Posted on 3/5/24 at 10:45 am to
Yes, but can it tell that George Washington was white?
