Let's talk singularity.....

Posted by white perch
the bright, happy side of hell
Member since Apr 2012
7144 posts
Posted on 5/29/14 at 8:41 am
.....I'm not a tech guy, but it seems we are getting closer and closer. The human-machine interface is getting more advanced with neurorobotics and brain chip implants. Any of y'all that are more current than I care to share some inside info?
Posted by SidewalkDawg
Chair
Member since Nov 2012
9820 posts
Posted on 5/29/14 at 8:51 am to
Ray Kurzweil is considered by some to be a nut. It's been a while since I viewed any of his material, but if I remember correctly he made some pretty accurate predictions about technology. He seems to think the Technological Singularity will occur around 2040.

My only fear and hesitation about a singularity is the assumption that AI (if this is the route that brings us to it) would be benevolent enough to allow us the chance to assimilate. Chances are, through sheer logic, a supreme intelligence would just wipe us out.
Posted by BaddestAndvari
That Overweight Racist State
Member since Mar 2011
18300 posts
Posted on 5/29/14 at 8:56 am to
quote:

He seems to think the Technological Singularity will occur around 2040


I for one welcome it, because:

quote:

would be benevolent enough to allow us the chance to assimilate.


and didn't:

quote:

just wipe us out.


a singularity could so quickly come up with so many different cures and fixes for the human genome; it's pretty exciting
Posted by SidewalkDawg
Chair
Member since Nov 2012
9820 posts
Posted on 5/29/14 at 9:05 am to
quote:

a singularity could so quickly come up with so many different cures and fixes for the human genome; it's pretty exciting


It's not just about cures and fixes. The Singularity shares its name with black hole physics: like an event horizon, the edge of technological progression is a boundary beyond which we have zero idea what to expect or what we could become.

We're speaking of an intelligence that can improve upon its own intelligence at an exponential rate.
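
To put toy numbers on that compounding (all rates invented, just to show the shape of the curve), a quick Python sketch:

# Toy model of compounding self-improvement (numbers are made up).
capability = 1.0   # arbitrary units; call 1.0 "human level"
rate = 0.10        # assumed 10% improvement per cycle

for cycle in range(1, 101):
    capability *= 1 + rate           # gains compound like interest
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: {capability:10,.1f}x human level")

# After 100 cycles that's ~13,780x the starting point, with the rate
# held fixed. If smarter systems also improve faster, the curve bends
# up even harder than a plain exponential.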
Posted by whodatfan
Member since Mar 2008
21341 posts
Posted on 5/29/14 at 9:33 am to
Posted by JawjaTigah
Bizarro World
Member since Sep 2003
22504 posts
Posted on 5/29/14 at 9:38 am to
This is the face of that "benevolent" AI. An earlier one was Frankenstein. Neither is all that benevolent toward its original makers. We should also remember the image from Independence Day (the movie) of the whacked-out people on top of that L.A. building, shouting and screaming their welcomes to the aliens in the craft hovering above their heads... until it released its death-ray payload and everything exploded. What we don't know CAN hurt us. Beware of what you welcome.
Posted by TigerinATL
Member since Feb 2005
61564 posts
Posted on 5/29/14 at 9:49 am to
I'm only peripherally familiar with Kurzweil's views, but my understanding is he thinks that everything will change once machines become as smart as humans. At that point things we had never considered before become possible, and you could see ridiculous advancements. While this may technically be possible, I don't think it takes politics into account. Yesterday in the self-driving car thread it was pointed out how many powerful lobbies (law enforcement, attorneys, and insurance) have a vested interest in not letting safer self-driving cars become the norm.

I think there will quite possibly be a backlash against enhancing humans and creating obviously smart machines. Will it be something that takes just a generation or two to get over the new and unfamiliar, or will it be something like on Star Trek: DS9, where the genetic enhancements Dr. Bashir had were outlawed?

Just because we have the capacity to better ourselves doesn't mean we will. People in power do their damnedest to maintain the status quo, and if a technology is disruptive enough that it's seen as a threat to their status quo, they will try to demonize and squash the technology.
Posted by JawjaTigah
Bizarro World
Member since Sep 2003
22504 posts
Posted on 5/29/14 at 10:11 am to
quote:

Just because we have the capacity to better ourselves doesn't mean we will.
You make some good points. A question arises for me right away: are all human/technological advances (as per Kurzweil) always going to make us "better", even if we/they have the capacity to change us in some way? I mean, "better" than what? And how might we know unforeseen consequences in advance?
This post was edited on 5/29/14 at 10:12 am
Posted by Korkstand
Member since Nov 2003
28710 posts
Posted on 5/29/14 at 10:18 am to
quote:

Ray Kurzweil is considered by some to be a nut.
He is kind of nutty, but I think that's just because he lives in his own futuristic world. Also, Google hired him, so he can't be too insane.
quote:

It's been a while since I viewed any of his material, but if I remember correctly he made some pretty accurate predictions about technology.
Yeah, he has a long list of predictions, many of them surprisingly accurate.
Posted by BottomlandBrew
Member since Aug 2010
27140 posts
Posted on 5/29/14 at 10:48 am to
Singularity scares the shite out of me. Maybe I'm just scared of the unknown. I don't know.
Posted by Ye_Olde_Tiger
Member since Oct 2004
1200 posts
Posted on 5/29/14 at 11:06 am to
Even with a supreme intelligence, how does a single machine/being go about taking over the world?

Even if it were able to walk around, I just can't see a single entity being that powerful.

I'll be concerned once we see 100% fully automated industry pipelines from concept to product rolling out of a factory. I'm talking buildings/factories/processes being designed and constructed by machines.

If and when we reach that point, I could see serious concerns about the continued existence of humans. But one individual machine, super smart or not, can be destroyed pretty easily.

But that's just, like, my opinion, man...
Posted by Bagger Joe
Baton Rouge
Member since Mar 2014
853 posts
Posted on 5/29/14 at 11:15 am to
Slightly off topic here, but why does the Terminator have teeth?
Posted by Korkstand
Member since Nov 2003
28710 posts
Posted on 5/29/14 at 11:25 am to
quote:

Slightly off topic here, but why does the Terminator have teeth?

So that he won't look like a meth addict with his skin on.
Posted by SidewalkDawg
Chair
Member since Nov 2012
9820 posts
Posted on 5/29/14 at 11:32 am to
quote:

Even with a supreme intelligence, how does a single machine/being go about taking over the world?


Well, a being of supreme intelligence, capable of improving upon itself at a rate which humans could never attain, would certainly think of a way to enslave an entire race or wipe it out.

It's hard to speculate past the singularity because of its very nature. But think about the world and how interconnected it is via some form of network. A being of supreme intelligence could assume control over our most critical infrastructure. Not to mention self-replication via von Neumann-type universal constructors.
This post was edited on 5/29/14 at 11:33 am
Posted by TigerinATL
Member since Feb 2005
61564 posts
Posted on 5/29/14 at 11:50 am to
quote:

Well, a being of supreme intelligence, capable of improving upon itself at a rate which humans could never attain, would certainly think of a way to enslave an entire race or wipe it out.


Formulating a plan and implementing said plan are two entirely different things.

The supreme intelligence would have to play the long con game and wait for us to turn over significant amounts of our infrastructure to it before it could act.
Posted by SidewalkDawg
Chair
Member since Nov 2012
9820 posts
Posted on 5/29/14 at 11:58 am to
And how easy would it be for a being of supreme intelligence to con a mere mortal?

About as easy as me convincing my toddler that I did indeed steal her nose.

Again, I'm not saying this is going to happen. But I do have to stress the exponential gains made by an AI during the Singularity.

By definition, the Singularity occurs when advancement in technology and computing power reaches a point where it is so absurdly fast that we can no longer reliably predict the rate.

Feasibly, an intelligence could jump from human-level intelligence to omniscience in an afternoon. What human systems could we design to keep such a being in check?
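
To put rough numbers on "an afternoon" (a pure hypothetical, not a prediction), here's the accelerating-doublings arithmetic in Python:

# Hypothetical: each capability doubling takes half as long as the one
# before it. The total time is a geometric series that converges, so
# arbitrarily large gains arrive in finite time.
interval_hours = 2.0   # assume the first doubling takes 2 hours
elapsed = 0.0
doublings = 0

while interval_hours > 1e-6:   # stop once intervals are negligible
    elapsed += interval_hours
    doublings += 1
    interval_hours /= 2

print(f"{doublings} doublings (~2**{doublings}x) in {elapsed:.2f} hours")
# Prints: 21 doublings (over a 2-million-fold gain) in 4.00 hours.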

Posted by TigerinATL
Member since Feb 2005
61564 posts
Posted on 5/29/14 at 12:21 pm to
quote:

And how easy would it be for a being of supreme intelligence to con a mere mortal?


But how would it know it needs to con us without the experience of attacking us, failing, and us still trusting it? Intelligence =/= Wisdom.

Also, why would it want to attack us? Humans are driven to fight each other over finite resources because we want things. What would a completely logical intelligence want? Maybe we give it some instructions where elimination of humans is the obvious answer, but other than that, most creatures in the universe want to do what they are good at, and this intelligence would be good at serving humanity.
Posted by white perch
the bright, happy side of hell
Member since Apr 2012
7144 posts
Posted on 5/29/14 at 12:26 pm to
quote:

The supreme intelligence would have to play the long con game and wait for us to turn over significant amounts of our infrastructure to it before it could act.


Like we're doing now?
Posted by Korkstand
Member since Nov 2003
28710 posts
Posted on 5/29/14 at 12:31 pm to
quote:

Also, why would it want to attack us? Humans are driven to fight each other over finite resources because we want things. What would a completely logical intelligence want? Maybe we give it some instructions where elimination of humans is the obvious answer, but other than that, most creatures in the universe want to do what they are good at, and this intelligence would be good at serving humanity.

I don't know, I think a true AI that has some form of consciousness would be concerned with the same thing most other creatures are concerned with: self-preservation. If humans do things that threaten an AI's survival, I'm pretty sure it would try to kill us off.

That said, I'm not sure it's possible to create a machine that is conscious or capable of actual thought.
Posted by SidewalkDawg
Chair
Member since Nov 2012
9820 posts
Posted on 5/29/14 at 12:33 pm to
quote:

But how would it know it needs to con us without the experience of attacking us, failing, and us still trusting it? Intelligence =/= Wisdom. Also, why would it want to attack us? Humans are driven to fight each other over finite resources because we want things. What would a completely logical intelligence want? Maybe we give it some instructions where elimination of humans is the obvious answer, but other than that, most creatures in the universe want to do what they are good at, and this intelligence would be good at serving humanity.


We are diving deep into the speculative end of the pool, but to entertain the idea...

Intelligence =/= Wisdom in any human sense of experience.

But we aren't dealing with human intelligence/wisdom. We are dealing with an intelligent being after it has attained the ability to self-improve. Self-improvement denotes independence.

Why would it attack? Using your hypothetical, let's start with the fact that we've tried to control and harness it to serve humanity instead of its own interests, whatever those may be. Independent, intelligent beings don't much like being enslaved.