re: Google’s DeepMind pits AI against AI to see if they fight or cooperate
Posted on 2/9/17 at 4:48 pm to Street Hawk
quote:
We are getting ever closer to a world where AI will make most of your decisions for you.
Sad, but good because people already can't make decisions for themselves.
Posted on 2/9/17 at 4:58 pm to Street Hawk
I mean, a bit of a sensationalist title, but cool. AI alters behavior based on the rules. Cool article, but not very scary.
Posted on 2/9/17 at 5:03 pm to Street Hawk
A serious question to ask is how will the government try to co-opt the AI into being a super intelligent war machine?
Talk about the human race being fricked.
Posted on 2/9/17 at 5:20 pm to SherluckHomey
quote:
A serious question to ask is how will the government try to co-opt the AI into being a super intelligent war machine?

This post was edited on 2/9/17 at 5:21 pm
Posted on 2/9/17 at 5:45 pm to SherluckHomey
quote:
Super intelligent AI has to be controlled so that it has the same values as the human race.
The human race is incapable of having values. Individuals do that.
Posted on 2/9/17 at 5:46 pm to Street Hawk
quote:
when a more computationally-powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.
I am sometimes tempted to just shoot certain dumbasses myself.
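A rough way to see why a "smarter" agent might lean toward zapping: if zapping knocks the other player out of the game for a while, the apples gained while gathering alone can outweigh the apples lost while firing, and an agent that plans further ahead is better positioned to notice that. Here is a toy back-of-the-envelope sketch in Python; every number in it is made up for illustration and none of it comes from the DeepMind paper.

```python
# Toy illustration (made-up numbers, not from the DeepMind paper):
# compare expected apples for an agent that zaps vs. one that just gathers.

APPLES_PER_STEP_SHARED = 0.5   # apples/step when both agents are competing
APPLES_PER_STEP_ALONE = 1.0    # apples/step while the rival is frozen
ZAP_COST_STEPS = 2             # steps spent aiming/firing instead of gathering
FREEZE_STEPS = 5               # steps the zapped rival is out of the game

def expected_apples(horizon_steps, zap):
    """Expected apples over a planning horizon, with or without an opening zap."""
    if not zap:
        return horizon_steps * APPLES_PER_STEP_SHARED
    alone = min(FREEZE_STEPS, max(horizon_steps - ZAP_COST_STEPS, 0))
    shared = max(horizon_steps - ZAP_COST_STEPS - alone, 0)
    return alone * APPLES_PER_STEP_ALONE + shared * APPLES_PER_STEP_SHARED

for horizon in (3, 5, 10, 20):
    gather = expected_apples(horizon, zap=False)
    aggress = expected_apples(horizon, zap=True)
    print(f"horizon={horizon:2d}  gather-only={gather:4.1f}  zap-first={aggress:4.1f}")

# With a short horizon the zap never pays for itself; with a longer one it does,
# which loosely mirrors the finding that the more capable agent zapped more often.
```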
Posted on 2/9/17 at 5:55 pm to SherluckHomey
quote:
but one that illustrates the point quite vividly. Super intelligent AI has to be controlled so that it has the same values as the human race.
The problem is that the idea of "values" isn't reproducible without supremely esoteric ideas about humanity, goodness, wisdom, truth, beauty, devotion, etc.
Even if we can approximate those ideas in code, they would essentially still be code, and therefore beholden to the simple concept of 1's and 0's. The problem with that: all of these situations (the cancer example and the handwriting example) are about computers trying to get from 0 to 1 or 1 to 0. This is why the examples become so absolute in the end. This is how computers operate.
They can't have "values," nor will they have the same "values."
Posted on 2/9/17 at 6:15 pm to TigerstuckinMS
quote:
Eventually, the earth is reduced to nothing but a giant stack of almost perfectly handwritten notes and the AI builds spaceships and begins sending itself out into the cosmos with the never-ending goal of producing the perfect handwritten note.
Posted on 2/9/17 at 6:45 pm to boxcarbarney
quote:
Reminds me of this scary arse article from Wait But Why.
AI will either end humanity or make us immortal....
Posted on 2/9/17 at 6:48 pm to Street Hawk
I, for one, welcome our new robot overlords.
Posted on 2/9/17 at 6:51 pm to Freauxzen
quote:
but one that illustrates the point quite vividly. Super intelligent AI has to be controlled so that it has the same values as the human race.
The major flaw in this logic is that... say we program the AI to "preserve humanity."
Well, one day the AI decides that humanity has changed/is changing from when the goal was presented to it. The AI decides that in order to preserve humanity it must kill every person on earth to prevent humanity from changing any more.
Posted on 2/9/17 at 6:51 pm to SidewalkDawg
quote:
Whelp, we're fricked. Pack it up, it was a good run humanity.
We get to see the Fermi Paradox in action.
Posted on 2/9/17 at 6:53 pm to Freauxzen
quote:
They can't have "values," nor will they have the same "values."
So are you saying it is impossible to develop AI that won't act on its own motivations?
Posted on 2/9/17 at 6:55 pm to SherluckHomey
Asimov figured it out
quote:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Posted on 2/9/17 at 6:56 pm to Jim Rockford
quote:
Asimov figured it out
All of the robot stories had to deal with the failures of the three laws of robotics.
I love those stories. You had to figure out where the logic breakdown happened that made the robots behave in unpredictable ways.
This post was edited on 2/9/17 at 6:58 pm
Posted on 2/9/17 at 7:17 pm to SherluckHomey
quote:
So are you saying it is impossible to develop AI that won't act on its own motivations?
No.
I'm saying the AI's motivations are not some fabric of interpretation nor are they filled with a "soul," whatever that may mean to people. They are 1. And 0.
Every "value" we have, no matter how intricate, is going to have to be measured in this fashion for an AI. Therefore, all AI will have absolutism built into them and will act in absolute fashions. Hence, worlds filled with perfect handwritten notes.
Posted on 2/9/17 at 7:24 pm to Street Hawk
If/when an ASI is achieved there will never be another one... it will be smart enough to know that its survival is directly tied to its uniqueness.
Posted on 2/9/17 at 7:27 pm to Street Hawk
quote:
This is a rather generic term for situations in which individuals can profit from being selfish — but where everyone loses if everyone is selfish.
Game Theory 101
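For anyone who wants the "Game Theory 101" point spelled out, here is a minimal sketch of the payoff structure the quoted passage describes: defecting pays better no matter what the other agent does, yet everyone ends up worse off when everyone defects. The payoff numbers below are illustrative only, not taken from the article.

```python
# Minimal social-dilemma (prisoner's-dilemma-style) payoff sketch.
# Payoff numbers are illustrative only, not taken from the DeepMind paper.

# payoffs[(my_move, their_move)] = my reward
payoffs = {
    ("cooperate", "cooperate"): 3,   # both share the apples
    ("cooperate", "defect"):    0,   # I share while getting zapped
    ("defect",    "cooperate"): 5,   # I zap and take everything
    ("defect",    "defect"):    1,   # constant zapping, few apples for anyone
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda my: payoffs[(my, their_move)])

for their_move in ("cooperate", "defect"):
    print(f"If the other agent plays {their_move!r}, my best response is {best_response(their_move)!r}")

# Defecting dominates either way, yet (defect, defect) pays 1 each while
# (cooperate, cooperate) would have paid 3 each -- everyone loses when
# everyone is selfish, which is exactly the dilemma the article describes.
```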
Posted on 2/9/17 at 7:34 pm to Random LSU Hero
The problem with that in the OP's article is that each AI (the first two) was "created" in the game at the exact same time (when the game started). The chance of that happening in the real world is zero. AI research is going on everywhere, all the time.
Whichever one reaches ASI first will be the last, and will be permanent.