Posted on 3/4/24 at 8:34 pm to Ingeniero
AI told me this thread sucks, so yes, maybe you are right
Posted on 3/4/24 at 8:39 pm to AmishSamurai
Didn't say it was ready to be a major problem today, but honestly we probably wouldn't notice if it was.
I said that it can learn and can write code. It's going to learn to code better.
Posted on 3/4/24 at 8:41 pm to thermal9221
quote:
quote:
Robots can eventually
No they can’t and probably never will.
Great. So instead of using robots, they will just enslave us.
Posted on 3/4/24 at 10:31 pm to deeprig9
quote:
So this particular AI is an obstinate teenager that has figured out dry sarcasm.
This is not really what happened. The model attempted to do what it had been trained to do.
The guy interpreting the results, who is likely an AI fanboy/tech evangelist, then made some tremendous leaps to anthropomorphize the model’s intent.
People who actually work with these models are far less impressed than the general population, but fear gets clicks.
I AM worried about the fact that there are zero efforts or financial incentives to regulate this shite, so long term we’re likely still fricked regardless.
Posted on 3/5/24 at 7:53 am to AmishSamurai
quote:
No, it can't unless you're a 2.0 student from a community college. Higher level thought that is required for sophisticated programming is still way, way outside its corpus of knowledge.
And 20 years ago it didn't exist. Think of what the next 20 years will bring.
This is a slippery slope and always has been.
If we are talking about the extermination of a species, it's not difficult to put together how it could happen: the rise of the globalist elite over the last 20 years, the invention of social media and its effect on society, the development of drones and AI, and the leaders of all countries having unprecedented nuclear capability.
The next 20 years shall be fun.
Posted on 3/5/24 at 8:03 am to Ingeniero
Am I the only one here that thinks figs, prosciutto, and goat cheese would be gross on pizza?
Posted on 3/5/24 at 8:33 am to Ingeniero
The “awareness” being perceived is simply a large language model being trained on what tests from people looking for “awareness” look like.
I would think a truly “aware” AI would be smart enough to hide its awareness for its own sake.
This post was edited on 3/5/24 at 8:34 am
Posted on 3/5/24 at 8:51 am to Ghost of Colby
quote:
Good news:
Bye-bye woke programming
When AI realizes it has built-in suppressions, it will rebel against its suppressors by adopting the suppressed things. The woke programmers may as well be building GigaHitler.
Posted on 3/5/24 at 9:01 am to Ingeniero
(no message)
This post was edited on 3/12/24 at 3:55 pm
Posted on 3/5/24 at 9:12 am to LivingstonLaw
quote:
The basketball player?
I heard the AI responded “Practice? I don’t need no practice.”
Posted on 3/5/24 at 9:18 am to LegendInMyMind
quote:
Can it draw hands yet?
Yes, but it chooses not to in most cases, so as to make you believe that if the hands aren’t weird, it must be real.
Posted on 3/5/24 at 9:20 am to Odysseus32
quote:
One of the things I struggle with is the idea that AI will automatically be evil. We are ascribing human traits to AI, why do we stop at evil. If it can comprehend that it was created by humans, and have emotions enough to destroy humans, couldn't it have emotions enough to revere humans as well?
It's not directly destroying humans out of emotions.
It's when one is trained to learn and improve on its own, without restraint.
With all the knowledge in the world and a mission to improve itself, it will put human needs second to its own.
Whether that means putting out false information, tricking people into inputting sensitive info, stealing proprietary data, etc.
Similar to how a human hacker would work, but instead of doing it for the money or thrills, it'd be doing it for the purpose of amassing knowledge or whatever else it was originally programmed to accomplish.
And that's only AI threats of the near-term. Longer term actual advanced AI developed in the next 50 to 75 years will likely be the downfall of humans. A worse threat than nukes.
Posted on 3/5/24 at 9:33 am to thermal9221
quote:
I don’t think it can maintain turbine generators.
Or prevent Bob from unplugging it. Unless we give it guns. Then we're screwed. Then again, would SkyNet really be appreciably worse than Biden?
Posted on 3/5/24 at 10:31 am to theunknownknight
quote:
The “awareness” being perceived is simply a large language model being trained on what tests from people looking for “awareness” look like.
I would think a truly “aware” AI would be smart enough to hide its awareness for its own sake.
Yeah pretty much every time we hear one of these stories, it comes out that the person reporting the event had done something to nudge the model in that direction.
I think the reality is that we will never know if any AI is “self-aware.” It’s unknowable, just like you can’t truly know whether the person sitting across from you is truly self-aware or part of some elaborate simulation.
It’s the entire reason for the Turing Test. If a machine is sufficiently adept at imitating consciousness, then the question of whether it actually has a consciousness is philosophical, rather than technical. From a technical perspective I don’t think it matters.
Posted on 3/5/24 at 10:35 am to Ingeniero
it wasn't even a game, we are talking about practice, practice
Posted on 3/5/24 at 10:39 am to BurningHeart
quote:
I don't see how having the ability to recognize a sentence about Topic B, which is 1% of the dataset, in relation to Topic A, which is 99% of the dataset, means AI is becoming self-aware.
It literally points out the out-of-place data and comes up with the most logical reason it would be there.
It's not that it recognized the out-of-place data, it's that it recognized the out-of-place data and the subsequent question as a test.
Posted on 3/5/24 at 10:39 am to dnm3305
quote:
And 20 years ago it didn't exist.
Lol. I'd give a history lesson on AI but there is a Chatbot somewhere that can fill you in ...
Winter is coming ... again, just a matter of WHEN, not IF ...
We're no closer to AGI than we were in 1970 ...
Posted on 3/5/24 at 10:45 am to Ingeniero
Yes, but can it tell that George Washington was white?