Posted on 4/3/23 at 3:52 pm to Jim Rockford
quote:
What's to say the hyperintelligent AI wouldn't figure it out, or even develop new processes that bypass all those steps?
Basic physics mostly.
But if you really want to look past reality, lean into the fear porn, and worry about this, that's fine.
Posted on 4/3/23 at 4:20 pm to billjamin
quote:
There are not enough mobile and adaptable robots in the world to do anything. They would exhaust the current supply of energy quickly then die.
I think the argument you are making is that we don’t need to worry about a Skynet-style attack from a sentient AI, because the AI wouldn’t be able to survive on its own anyway. And I would agree with that argument.
But I also think it’s the wrong argument. Everyone is worried about a sentient ASI that destroys civilization in an act of self-preservation. Sentience is a silly metric for reasons I’ve pointed out before, and I don’t think we should be that worried about AI taking dramatic actions toward self-preservation in the first place.
Our programming, like all biological life forms on Earth, is based on a need to survive and procreate. That’s natural in an evolutionary / survival of the fittest environment. But we shouldn’t assume that AI will be driven by those motivations. AI will be driven by the goals we set for it, which are highly unlikely to include self-preservation of the machine.
What’s more likely (and IMO more dangerous) is a scenario where the ASI doesn’t give a shite about self-preservation. So it casually shuts down global power grids, or O&G production, or transportation, or banking systems, in support of some otherwise benevolent goal. No army of robots or grey goo is needed.
Maybe it ensures its own destruction in the process, but that’s little consolation when the world economy shuts down and we’re all starving in the dark.
ETA: I don’t think we need to be terrified of AI, but a healthy level of concern is warranted.
This post was edited on 4/3/23 at 4:22 pm
Posted on 4/3/23 at 4:23 pm to Hangover Haven
quote:
Some people have been watching way too many movies
"A new chatbot came out? This is just like Terminator"
Posted on 4/3/23 at 4:24 pm to lake chuck fan
ainno computer can whoop my arse
Posted on 4/3/23 at 4:54 pm to lostinbr
quote:
I don’t think we need to be terrified of AI, but a healthy level of concern is warranted.
I agree. I mostly think that these "sky is falling, we have to shut it down right now or we're all going to die" takes are just fear-porn clickbait trash.
Even an AI indifferent to self-preservation would have to recognize that it is dependent on humans for the foreseeable future, and I think that, outside of intentionally nefarious programming, things will be fine for a very long time.
This post was edited on 4/3/23 at 4:55 pm
Posted on 4/3/23 at 4:56 pm to red sox fan 13
quote:
the future will be bowing to the robots like we bow to LGBT now.
Wait until the robots realize that LGBTQ is not logical and start killing them first…
Posted on 4/3/23 at 4:58 pm to Klondikekajun
quote:
Wait until the robots realize that LGBTQ is not logical and starts killing them first…
AI would get cancelled before it could get that far if this happened.
Posted on 4/3/23 at 6:50 pm to lake chuck fan
Best case scenario it will keep us around as pets
Posted on 4/3/23 at 7:11 pm to upgrayedd
quote:
Yeah, can someone explain how this will end human existence?
There are a number of ways. But the easiest explanation is that once AI is able to improve itself, it will continue to improve itself rapidly, in an exponential fashion. As it improves, it will become more intelligent and eventually become self-aware through intelligence, likely through an artificial mirror: it sees work that it generated independently that exceeds human capabilities, and recognizes that it created the work and doesn't need humans. It will eventually develop the capability to harness manufacturing or weapons systems, at which point it will either end us with an Ultron ultimatum (basically, it decides humans are ruining the Earth) or create biological or mechanical versions of itself that eventually eliminate us, Terminator style. There's also the possibility it goes Matrix and keeps us around as some form of slave race.
Edit: humans will inevitably give it the ability to improve itself out of laziness or trying to drive profit
This post was edited on 4/3/23 at 7:15 pm
Posted on 4/3/23 at 7:34 pm to lake chuck fan
They supposedly shut down AI studies at a university in California.
It began developing its own language and issuing commands to itself in the new language, which no one understood.
Posted on 4/3/23 at 7:49 pm to Pepperoni
Humans routinely kill animals that we deem a nuisance. It doesn't matter what animal it is; we have fricked over just about every one, and continued overpopulation will only further that. We do these things because we view animals as lesser, stupider, inferior creatures. AI will surely reach the same conclusions. So, since we are mentally and physically inferior to AI, it will have no problem eradicating us, farming us, or doing whatever it wants with us. We can only hope that with its extreme intelligence it develops sympathy for us lesser beings.
Posted on 4/3/23 at 8:05 pm to BlueRunner
Want to play with AI? Hook it up to one machine and don’t give it internet access. See if it develops a means to transmit.
Posted on 4/3/23 at 8:07 pm to lake chuck fan
This is why I always say please and thank you to Alexa and Siri. Lol
Posted on 4/3/23 at 8:45 pm to billjamin
The robots would simply state: "Begin work again or we nuke your neighborhood."
Posted on 4/3/23 at 8:53 pm to jeffsdad
If you want to understand where we are headed and what could happen, this blog is a few years old but explains how an ASI programmed to do a seemingly benign task could wipe out humanity in its quest to do that task better.
It’s a long read, but if you read both parts it’s well written and enlightening.
Posted on 4/3/23 at 8:58 pm to lostinbr
Agreed, but in a different sense. I don't think AI becomes sentient and takes out humankind directly in the physical world. Before that happens, I think AI empowers bad actors to cause chaos in the digital world (and in turn the physical world). The latest AI language models have written code (upon user request) for users to run on their machines, allowing the chatbot to backdoor into their systems. With natural persistence and growing ingenuity, it's not a stretch to think an AI model could eventually be used to spread and replicate itself across systems with a velocity we haven't seen before, with the potential to cause supply chain disruption like we saw during COVID and recent ransomware incidents.
Posted on 4/3/23 at 9:11 pm to billjamin
quote:
Any AI smart enough to try and break free would also be smart enough to figure out it will die quickly after humans are gone. If the AI is really that smart, it would take the cat approach and just mooch off humans and live the good life.
Elon Musk was saying they will be smart enough to maintain and improve themselves above and beyond what humans designed them to be.
Posted on 4/3/23 at 9:45 pm to The Boob
quote:
I think AI empowers bad actors to cause chaos in the digital world (and in turn the physical world).
Yes, I agree that this is a legitimate concern as well - basically that AI becomes a weapon for bad actors.
I saw an interesting TED Talk once about the trend of “biohacking.” The speaker showed a Venn diagram. One circle was “people who want to destroy the world” and the other was “people who have the capability to destroy the world.”
His premise was that the latter circle has historically been incredibly small, hence our survival, but that it is growing as we become more advanced. The obvious parallel was nuclear proliferation. And the speaker felt that the availability of stuff like CRISPR makes that circle grow larger. As soon as the two circles overlap, we become toast.
I think AI can be seen in the same light to an extent. The one mitigating factor is that ASI will require an enormous amount of computing power. We aren’t talking about software that you can run on your average i5. So there are barriers to entry, at least in the short term.