Posted on 3/17/17 at 12:40 pm to LucasP
quote:
If you design something to find a cure for cancer then it's got a lot less possible negative impact than if you design it with the goal of optimizing profit margins.
What happens when this computer ultimately decides that the most efficient way to cure cancer is to kill humans? No humans, zero risk of humans getting cancer.
Obviously that is an extreme example, but it's a legitimate concern that can't just be dismissed with "that'll never happen because we'll tell it that it cannot harm humans." Once AI becomes more "intelligent" than the human brain, it is impossible to predict what it will be capable of doing, including making leaps in logic that will supersede any "rules" humans can think to code into it to not harm humans. It is largely accepted that ASIs will be single-mindedly driven toward accomplishing assigned tasks. "Cure cancer" is open-ended enough that it could spawn anything from a subroutine like "improve myself so I can better figure out how to cure cancer" all the way to "killing the human with cancer is the most efficient and cost-effective way to cure cancer."
Posted on 3/17/17 at 12:50 pm to TigerFanatic99
quote:
including making leaps in logic that will supersede any "rules" humans can think to code into it to not harm humans.
It's worse than that.
What if, on the flip side, it has nothing to do with superseding a lame man-made rule about not harming humans, and is simply about extrapolating the positive out to the infinite degree?
Let's be clear: "human pleasure" is our neural network firing off "yes/no," "1/0," over and over and over again in patterns that we interpret as "pleasure" and "happiness."
All that wanting of a VR world with an infinite supply of hot virgins is a lot of inefficiency just to fire off those 1s and 0s. Ray and OML could simply be turned into switches of 1s and 0s and fired off in a pattern. No virtual world. No virgins. Just switches, on and off. Forever.
I mean if that's your thing, and that's your idea of immortality, go for it. "Eternal Pleasure."
This post was edited on 3/17/17 at 12:50 pm
Posted on 3/17/17 at 1:02 pm to TigerFanatic99
quote:
What happens when this computer ultimately decides that the most efficient way to cure cancer is to kill humans? No humans, zero risk of humans getting cancer.
That's exactly what I wrote next. Did you really just stop reading to comment?
Posted on 3/17/17 at 1:07 pm to LucasP
quote:
That's exactly what I wrote next. Did you really just stop reading to comment?
Kind of?
Posted on 3/17/17 at 1:23 pm to Freauxzen
quote:
Let's be clear: "human pleasure" is our neural network firing off "yes/no," "1/0," over and over and over again in patterns that we interpret as "pleasure" and "happiness."
All that wanting of a VR world with an infinite supply of hot virgins is a lot of inefficiency just to fire off those 1s and 0s. Ray and OML could simply be turned into switches of 1s and 0s and fired off in a pattern. No virtual world. No virgins. Just switches, on and off. Forever.
Seems like a pretty efficient way to maximize the human experience. Neural heroin, the utilitarians would approve.
Posted on 3/17/17 at 1:37 pm to tke857
quote:
I'll probably wait this one out for a bit. Let those early adopters give it a go.
If I'm still around for that I'll be 70. Will probably be an outcast for not participating. I don't get the fascination with binge watching TV and I'm always at least one generation, usually two, behind on phones. Won't be any different when this comes around.
Posted on 3/17/17 at 3:07 pm to musick
I'm confused: why are these types of guys excited about this? And why aren't they doing anything to stop this bullshite?
Posted on 3/17/17 at 3:11 pm to TigerFanInSouthland
I think the goal is to transfer memories and consciousness into a computer so you don't have to "die" when your physical body is done.
Posted on 3/17/17 at 3:14 pm to musick
Yes slapping God in the face with our dicks and cheating death is one goal.
But the reason it's called the singularity is that we simply can't comprehend what will happen; our fates will be left in the hands of things smarter than us. Very exciting, and it could go either way, good or bad. So there are a lot of reasons to be excited, not just immortality.
Posted on 3/17/17 at 3:29 pm to LucasP
quote:
But the reason it's called the singularity is that we simply can't comprehend what will happen; our fates will be left in the hands of things smarter than us. Very exciting, and it could go either way, good or bad. So there are a lot of reasons to be excited, not just immortality.
What's interesting about this is that if you look at the human race as one entire timeline from its start, and consider that the singularity is going to happen eventually, say in the next 100 years, then we are already basically in that era. It has, in effect, already happened. The machines are going to catch up and "take over" soon, but the things that set it in motion are here now.
This post was edited on 3/17/17 at 3:31 pm
Posted on 3/17/17 at 3:44 pm to musick
I honestly don't see much upside in this.
Posted on 3/17/17 at 3:46 pm to TigerFanatic99
quote:
What happens when this computer ultimately decides that the most efficient way to cure cancer is to kill humans? No humans, zero risk of humans getting cancer.
This is not an issue, and the solution is not to kill humans. The reality is that cancer will be mitigated by the combination of the biotechnology and nanotechnology revolutions already in progress. To explain it in simplistic terms, we already know that cancer is simply a gene sequence that has been switched to the "on" position without a way to turn it "off."
Nanobots will be able to identify the faulty DNA and repair or destroy it. There will be no need to kill the whole organism.
Kurzweil is a bit crazy, but only because he is crazily optimistic about his timeline. His thoughts are very well hashed out and backed up by solid evidence, and the wheels are in motion.
If you haven't read his book, "The Singularity Is Near," I would highly recommend it. Just know that it's a bit of an undertaking and very detailed.
The only way the Singularity doesn't come into existence is if we all nuke ourselves back to the Stone Age.
Posted on 3/17/17 at 3:48 pm to TigerFanInSouthland
quote:
I honestly don't see much upside in this.
Exactly. Immortality mixed with infinitely more intelligent AI overlords sounds more like eternal slavery.
It would be so advanced that it would consider our needs and wants much the way humans are concerned with the needs of an anthill.
Posted on 3/17/17 at 3:52 pm to 0jersey
quote:
This is not an issue, and the solution is not to kill humans. The reality is that cancer will be mitigated by the combination of the biotechnology and nanotechnology revolutions already in progress. To explain it in simplistic terms, we already know that cancer is simply a gene sequence that has been switched to the "on" position without a way to turn it "off."
Nanobots will be able to identify the faulty DNA and repair or destroy it. There will be no need to kill the whole organism.
Why would any ASI care about any of this? Life, death, cancer, no cancer: there's no point to any of this for the ASI.
Oh wait, because we think we will "tell it what to do."
Posted on 3/17/17 at 4:19 pm to Freauxzen
The theory is that the cure for cancer will precede the singularity. Therefore, it will be a non-issue for the singularity.
Posted on 3/17/17 at 4:30 pm to 0jersey
quote:
The theory is that the cure for cancer will precede the singularity. Therefore, it will be a non-issue for the singularity.
Sure. But the idea that we can theorize x before y is the problem with the Singularity: we have no idea what's beyond it.
Or the better question is:
Is it more likely that we create self-sustaining, human-repairing nanobots before the AGI that we evolve into an ASI? The whole problem with the timeline is that the speed of progress is something we'll have no idea of until it's too late.
Posted on 3/17/17 at 4:43 pm to 0jersey
quote:
The theory is that the cure for cancer will precede the singularity. Therefore, it will be a non issue for the singularity.
It was just an example that clearly points out a bigger issue, idiot. And don't come in here throwing Kurzweil around like some kind of expert; I've read all his books and even owned the Our Lady Peace album about him (on CD!!!). So don't be a dickhead about Kurzweil, ya dickhead.
Also I've begun drinking so any meaningful contribution I may have added to this thread is now over.
Posted on 3/17/17 at 4:44 pm to LucasP
quote:
Also I've begun drinking so any meaningful contribution I may have added to this thread is now over.
No use for that post-Singularity either. Bye-bye, drinking time.