
re: Google’s DeepMind pits AI against AI to see if they fight or cooperate

Posted by TheBoo
South to Louisiana
Member since Aug 2012
5386 posts
Posted on 2/9/17 at 4:48 pm to
quote:

We are getting ever closer to a world where AI will make most of your decisions for you.


Sad, but good because people already can't make decisions for themselves.
Posted by rbWarEagle
Member since Nov 2009
49999 posts
Posted on 2/9/17 at 4:58 pm to
I mean, a bit of a sensationalist title, but cool. AI alters behavior based on the rules. Cool article, but not very scary.
Posted by SherluckHomey
Member since Jan 2017
177 posts
Posted on 2/9/17 at 5:03 pm to
A serious question to ask is how will the government try to co-opt the AI into being a super intelligent war machine?

Talk about the human race being fricked.
Posted by TigerstuckinMS
Member since Nov 2005
33687 posts
Posted on 2/9/17 at 5:20 pm to
quote:

A serious question to ask is how will the government try to co-opt the AI into being a super intelligent war machine?


This post was edited on 2/9/17 at 5:21 pm
Posted by foshizzle
Washington DC metro
Member since Mar 2008
40599 posts
Posted on 2/9/17 at 5:45 pm to
quote:

Super intelligent AI has to be controlled so that it has the same values as the human race.


The human race is incapable of having values. Individuals do that.
Posted by foshizzle
Washington DC metro
Member since Mar 2008
40599 posts
Posted on 2/9/17 at 5:46 pm to
quote:

when a more computationally-powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.



I am sometimes tempted to just shoot certain dumbasses myself.
Posted by Freauxzen
Washington
Member since Feb 2006
38437 posts
Posted on 2/9/17 at 5:55 pm to
quote:

but one that illustrates the point quite vividly. Super intelligent AI has to be controlled so that it has the same values as the human race.


The problem is that the idea of "values" isn't reproducible without supremely esoteric ideas about humanity, goodness, wisdom, truth, beauty, devotion, etc.

Even if we can approximate those ideas in code, they would still essentially be code, and therefore beholden to the simple concept of 1's and 0's. The problem with that: all of these situations (the cancer example and the handwriting example) are about computers trying to get from 0 to 1 or 1 to 0. This is why the examples become so absolute in the end. This is how computers operate.

They can't have "values," nor will they have the same "values."
Posted by saint tiger225
San Diego
Member since Jan 2011
46385 posts
Posted on 2/9/17 at 6:15 pm to
quote:

Eventually, the earth is reduced to nothing but a giant stack of almost perfectly handwritten notes and the AI builds spaceships and begins sending itself out into the cosmos with the never-ending goal of producing the perfect handwritten note. 
Posted by Iron Lion
Romulus
Member since Nov 2014
13725 posts
Posted on 2/9/17 at 6:41 pm to
Damn AI you scary
Posted by DirtyMikeandtheBoys
Member since May 2011
19467 posts
Posted on 2/9/17 at 6:45 pm to
quote:

Reminds me of this scary arse article from Wait But Why.


AI will either end humanity or make us immortal....
Posted by OysterPoBoy
City of St. George
Member since Jul 2013
43073 posts
Posted on 2/9/17 at 6:48 pm to
I, for one, welcome our new robot overlords.
Posted by DirtyMikeandtheBoys
Member since May 2011
19467 posts
Posted on 2/9/17 at 6:51 pm to
quote:

but one that illustrates the point quite vividly. Super intelligent AI has to be controlled so that it has the same values as the human race.



The major flaw in this logic is this: say we program the AI to "preserve humanity."

Well, one day the AI decides that humanity has changed, or is changing, from when the goal was presented to it. The AI decides that in order to preserve humanity it must kill every person on earth to prevent humanity from changing any more.
Posted by Jim Rockford
Member since May 2011
104359 posts
Posted on 2/9/17 at 6:51 pm to
quote:

Whelp, we're fricked. Pack it up, it was a good run humanity.


We get to see the Fermi Paradox in action.
Posted by SherluckHomey
Member since Jan 2017
177 posts
Posted on 2/9/17 at 6:53 pm to
quote:

They can't have "values," nor will they have the same "values."


So are you saying it is impossible to develop AI that won't act on its own motivations?
Posted by Jim Rockford
Member since May 2011
104359 posts
Posted on 2/9/17 at 6:55 pm to
Asimov figured it out

quote:

A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


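The quoted laws are a strict priority ordering: the First Law outranks the Second, which outranks the Third. A toy sketch (hypothetical, not from Asimov or the article) shows one naive way to encode that ordering, scoring each candidate action by which laws it violates and comparing the scores lexicographically:

```python
# Toy sketch of the Three Laws as a lexicographic priority. The action
# dicts and their flag names are hypothetical illustrations.

def law_violations(action):
    """Score an action as a (First, Second, Third) Law violation tuple."""
    return (
        action.get("harms_human", False),     # First Law outranks everything
        action.get("disobeys_order", False),  # Second Law
        action.get("risks_self", False),      # Third Law
    )

def choose(actions):
    # Tuples compare element by element, so harming a human always loses
    # to merely disobeying an order, which loses to merely risking the robot.
    return min(actions, key=law_violations)

options = [
    {"name": "shove human aside", "harms_human": True},
    {"name": "refuse the order", "disobeys_order": True},
    {"name": "step into the crusher", "risks_self": True},
]
assert choose(options)["name"] == "step into the crusher"
```

Of course, the stories themselves are about exactly how encodings like this break down once the flags get ambiguous.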
Posted by SherluckHomey
Member since Jan 2017
177 posts
Posted on 2/9/17 at 6:56 pm to
quote:

Asimov figured it out


All of the robot stories dealt with the failures of the three laws of robotics.

I love those stories. You had to figure out where the logic breakdown happened to make the robots behave in unpredictable fashion.
This post was edited on 2/9/17 at 6:58 pm
Posted by Freauxzen
Washington
Member since Feb 2006
38437 posts
Posted on 2/9/17 at 7:17 pm to
quote:

So are you saying it is impossible to develop AI that won't act on its own motivations?




No.

I'm saying the AI's motivations are not some fabric of interpretation nor are they filled with a "soul," whatever that may mean to people. They are 1. And 0.

Every "value" we have, no matter how intricate, is going to have to be measured in this fashion for an AI. Therefore, all AI will have absolutism built into them and will act in absolute fashions. Hence, worlds filled with perfect handwritten notes.
Posted by cgrand
HAMMOND
Member since Oct 2009
46667 posts
Posted on 2/9/17 at 7:24 pm to
if/when an ASI is achieved there will never be another one...it will be smart enough to know that its survival is directly tied to its uniqueness
Posted by Random LSU Hero
2014 NFL Survivor Champion (17-0)
Member since Aug 2011
9546 posts
Posted on 2/9/17 at 7:27 pm to
quote:

This is a rather generic term for situations in which individuals can profit from being selfish — but where everyone loses if everyone is selfish.


Game Theory 101
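Right — the quoted definition is the classic prisoner's-dilemma structure. A minimal sketch (payoff values are hypothetical, not from the article) shows why "individuals can profit from being selfish, but everyone loses if everyone is selfish":

```python
# Hypothetical payoff table for a two-player social dilemma:
# each player Cooperates ("C") or Defects ("D").
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def best_response(their_move):
    """Pick the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFFS[(my, their_move)])

# Defecting pays more no matter what the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...yet if both players follow that logic, each ends up worse off
# than if both had cooperated:
assert PAYOFFS[("D", "D")] < PAYOFFS[("C", "C")]
```

That gap between the individually rational move and the jointly good outcome is exactly what the DeepMind agents were rediscovering.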
Posted by cgrand
HAMMOND
Member since Oct 2009
46667 posts
Posted on 2/9/17 at 7:34 pm to
the problem with that in the OP's article is that each AI (the first two) was "created" in the game at the exact same time (when the game started). The chance of that happening in the real world is zero. AI research is going on everywhere, all the time

whichever one reaches ASI first will be the last, and will be permanent