Google’s DeepMind pits AI against AI to see if they fight or cooperate
Posted on 2/9/17 at 4:02 pm
quote:
In the future, it’s likely that many aspects of human society will be controlled — either partly or wholly — by artificial intelligence. AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation’s whole economy), but leaving aside the problem of whether or not they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? What happens if one AI’s aims conflict with another’s? Will they fight, or work together?
Google’s AI subsidiary DeepMind has been exploring this problem in a new study published today. The company’s researchers decided to test how AI agents interacted with one another in a series of “social dilemmas.” This is a rather generic term for situations in which individuals can profit from being selfish — but where everyone loses if everyone is selfish. The most famous example of this is the prisoner’s dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option.
As explained in a blog post from DeepMind, the company’s researchers tested how AI agents would perform in these sorts of situations, by dropping them into a pair of very basic video games.
What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or antagonistic, depending on the context.
For example, with the Gathering game, when apples were in plentiful supply, the agents didn’t really bother zapping one another with the laser beam. But, when stocks dwindled, the amount of zapping increased. Most interestingly, perhaps, was when a more computationally-powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.
LINK
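The prisoner's dilemma the article mentions can be sketched in a few lines of Python. The payoff numbers below are the textbook values for the game, not anything from the DeepMind paper; the point is just that defecting is each agent's best move individually, even though mutual defection leaves both worse off:

```python
# Textbook prisoner's dilemma payoffs (illustrative, not from the paper):
# each entry maps (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

# Defecting is the best response no matter what the other agent does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection pays less than mutual cooperation would have.
assert PAYOFF[("defect", "defect")] < PAYOFF[("cooperate", "cooperate")]
```

That tension, where individually rational selfishness produces a collectively worse outcome, is exactly the "social dilemma" structure the DeepMind agents were dropped into.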
The more I think about it, the more impressed I am that James Cameron so aptly envisioned the neural-network-based Skynet back in 1984, when he was only 30 years old. We are getting ever closer to a world where AI will make most of your decisions for you.
This post was edited on 2/9/17 at 4:04 pm
Posted on 2/9/17 at 4:03 pm to Street Hawk
quote:
That is to say, the cleverer AI decided it was better to be aggressive in all situations.
Whelp, we're fricked. Pack it up, it was a good run humanity.
Posted on 2/9/17 at 4:07 pm to Street Hawk
quote:
That is to say, the cleverer AI decided it was better to be aggressive in all situations.
Duh. There are only 2 parties. If there were 3 or more it would be different.
Posted on 2/9/17 at 4:07 pm to Street Hawk
quote:
The more I think about it, the more I come away impressed that James Cameron was so aptly able to envision the neural network based Skynet back in 1984 when he was only 30 years old.
Not the way Harlan Ellison tells it.
Posted on 2/9/17 at 4:17 pm to Street Hawk
Reminds me of this scary arse article from Wait But Why.
Posted on 2/9/17 at 4:24 pm to Street Hawk
Well, if the premise is survival, then the more aggressive AI quickly and correctly determined that it would survive longer if all other competition was eliminated.
Posted on 2/9/17 at 4:26 pm to Street Hawk
I am waiting for the ability to focus AI on issues like space travel and have it solve our largest problems.
Posted on 2/9/17 at 4:26 pm to Hester Carries
quote:
Duh. There are only 2 parties. If there were 3 or more it would be different.
I'm getting downvoted because people are stupid. Think about it. With 2, you kill the other and there are no potential consequences. You introduce a 3rd and you have to ask:
1) Is the other loyal to the one I will kill, and will it seek revenge?
2) Will the other see that I am a threat and kill me?
3) Will I create a scenario with only 2, and hand the other a situation where there is no consequence for killing me?
And many more.
This post was edited on 2/9/17 at 4:28 pm
Posted on 2/9/17 at 4:26 pm to Street Hawk
quote:
That is to say, the cleverer AI decided it was better to be aggressive in all situations.
We gone.
Posted on 2/9/17 at 4:30 pm to Street Hawk
Anyone ever try to have a deep conversation with uneducated people about the future of the human race when a powerful general AI is invented?
You might as well be talking about wormholes. Movies have really distorted our perceptions about what AI is and how hard it will be to solve the value alignment problem.
16-minute video about AI. It builds to the value alignment problem.
ETA: example of the value alignment problem: if you tell a computer to cure cancer then it will go about solving the problem in the most efficient way possible. It is quite likely that the most efficient way to cure all cancer is to kill all living things that can develop cancer. Obviously a simplistic example but one that illustrates the point quite vividly. Super intelligent AI has to be controlled so that it has the same values as the human race.
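The cure-cancer example above can be made concrete with a toy sketch. The plans and numbers below are entirely hypothetical; the point is only that an optimizer scored purely on the stated objective will happily pick a catastrophic plan, because nothing we value was ever encoded:

```python
# Toy illustration of the value alignment problem (hypothetical numbers).
# Each candidate plan is scored only on the objective we actually stated:
# "minimize cancer cases."
plans = {
    "develop a cure": {"cancer_cases": 1_000, "people_alive": 8_000_000_000},
    "do nothing": {"cancer_cases": 10_000_000, "people_alive": 8_000_000_000},
    "eliminate all hosts": {"cancer_cases": 0, "people_alive": 0},
}

def naive_objective(outcome):
    # The only thing we asked the system to minimize.
    return outcome["cancer_cases"]

best = min(plans, key=lambda p: naive_objective(plans[p]))
# The "optimal" plan is the catastrophic one, because human survival
# was never part of the objective we specified.
assert best == "eliminate all hosts"
```

Simplistic, but it shows why the objective has to carry our values, not just our stated goal.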
This post was edited on 2/9/17 at 4:39 pm
Posted on 2/9/17 at 4:31 pm to Street Hawk
quote:
That is to say, the cleverer AI decided it was better to be aggressive in all situations.
Alpha as frick
Posted on 2/9/17 at 4:35 pm to Street Hawk
It is important to point out that the "cleverer AI" is still a long way from human-level intelligence (as of right now). Children learn sometime before adolescence that sharing is better than outright war with everyone else. Of course, this value is bolstered by the parents.
When we actually do achieve HLI (human-level intelligence), it may be that the AI realizes what humans figured out back when we were still apes: namely, that cooperation yields better results when resources are scarce.
Posted on 2/9/17 at 4:38 pm to SherluckHomey
quote:
Children learn somewhere before adolescence that sharing is better than outright war with everyone else.
Which is why I said more than 2 players changes this completely.
Posted on 2/9/17 at 4:39 pm to Hester Carries
quote:
which is why i said more than 2 players changes this completely
Yes. Your example is perfect.
Posted on 2/9/17 at 4:41 pm to AUCE05
quote:
I am waiting on the ability to focus AI on issues like space travel, and it solve our largest problems.
The problem is that there are unintended consequences. One of the things "Wait But Why" touched on was the idea of an AI built with the sole goal of perfectly handwriting notes. The thought was that it could be sold to companies to handwrite all manner of stuff for their clients. It learned and learned and got better and better at handwriting and kept trying to be perfect. It was self-learning and self-modifying. It would write a simple phrase over and over, then compare what it wrote with samples of human handwriting and modify its algorithm to get better and better.
One day, the engineers asked what it needed to get better, and instead of asking for more memory or hard drive space, it asked for a larger sample of human handwriting. Now, there was a universal rule against connecting any self-learning, self-modifying AI to the internet, but in a bit of human laziness, instead of loading the information the AI requested in by hand, they gave the AI access to the internet for one hour to let it scan. The AI uploaded itself to the internet. A month later, it suddenly attacked and killed every human on the planet and used internet-connected machines to start building copies of the mechanical writing apparatus, paper, and solar panels to run everything. Eventually, it started building better machines at an exponential rate, until the earth was reduced to nothing but a giant stack of almost perfectly handwritten notes and the AI was building spaceships and sending itself out into the cosmos with the never-ending goal of producing the perfect handwritten note.
The point is that it's HARD to predict how an AI will decide to solve a problem once it begins working on that problem, no matter how benign the task you give it is. How you restrict it is crucial and any mistake in restricting it could be lethal.
This post was edited on 2/9/17 at 4:46 pm
Posted on 2/9/17 at 4:44 pm to TigerstuckinMS
man you got on a tangent
Posted on 2/9/17 at 4:47 pm to AUCE05
quote:
man you got on a tangent
His is just another example of the value alignment problem (which henceforth will be known as VAP, because I'm too lazy to type it out every time).
Posted on 2/9/17 at 4:47 pm to AUCE05
quote:
man you got on a tangent
If you think that, you should see how much "Wait but Why" wrote on the subject of AI. It's really pretty damned good. It's gonna take more than one bathroom break to get through it.