Regarding AI and Driverless Cars...
Posted on 10/3/17 at 7:30 pm
I was listening to a Radiolab podcast yesterday and they went into a problem that could occur with driverless cars and AI.
They started it off with two philosophical questions, I guess you'd call them.
1) Let's say there's a train barreling down the track toward five guys standing directly in its path, but you can pull a lever and the train will veer onto another set of tracks where only one person is standing. Do you pull the lever or not?
Now, let's say the same scenario is occurring, except instead of the lever and the separate set of tracks, there's a fat man on a bridge with you. You can push the man off the bridge to stop the train from hitting the five people. Do you push the fat man?
2) Like the last episode of MASH, you and a group of people are hiding out from an enemy who is certain to kill you if they find you. You have a sick baby, and you know that if anybody makes a sound, the enemy will find you and kill you all. Do you smother your sick child to save the lives of the people around you?
I ask this because they paralleled these two scenarios with driverless cars and AI. Let's say you're riding in your car and a wreck is about to happen in which either you or the people you wreck into will die. How is the AI in the driverless car supposed to decide for you who dies and who doesn't?
I can't quite remember every facet of the arguments that were made, but I do believe that this (and the road we're going down with AI as a whole) is a very, very dangerous slippery slope.
Radiolab Episode I Referenced
This post was edited on 10/4/17 at 1:27 am
Posted on 10/3/17 at 7:44 pm to TigerFanInSouthland
Ultimately, AI would prevent this from ever being an issue. If everyone drives autonomous cars, assuming all things are perfect, there would be no traffic collisions. A hell of a lot safer than today's world. In this scenario I'm not wearing my tinfoil hat.
Posted on 10/3/17 at 7:46 pm to TigerFanInSouthland
This situation occurs so rarely I don't know why we'd even think of trying to address it. shite is still going to happen, whether with AI or left in human hands. Let's try not to make it more complicated.
Posted on 10/3/17 at 7:48 pm to TigerFanInSouthland
The autonomous cars will be programmed by humans to react in certain ways, but initially I expect each individual car or truck will do its best, within the time and capability of the vehicle, to safeguard its passengers, much like most human drivers react, but with a better chance of a positive outcome.
In a John Stuart Mill world, the cars would be interlinked, each occupant would be given a societal "value," and the cars would work together to reduce the loss to society as a whole. Good news for the 30-year-old genius surgeon, bad news for the 65-year-old drug addict who has spent much of his adult life in prison.
In the end, when autonomous cars are ready for primetime on a wholesale level and are implemented en masse, injury and death by vehicle accident will drop significantly. Productivity and/or leisure time will increase for most people, with shorter commutes and the ability to use commute time for work or some forms of leisure. Insurance rates will drop, and the cost of getting goods from ports and factories to stores will drop.
Certainly, there are issues to be carefully considered with the upper limits of AI, and ethical issues with the programming of autonomous cars, but particularly with the latter, I don't think it will be hard to produce a significant net positive for society.
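As a rough sketch of what that "interlinked cars reducing loss to society" idea might look like (all maneuver names and loss numbers here are invented for illustration, not taken from any real system):

```python
# Illustrative sketch: interlinked cars jointly pick the combination of
# maneuvers that minimizes total expected "societal loss".
from itertools import product

def total_expected_loss(joint_maneuvers, loss_table):
    """Sum the expected loss of one joint choice of maneuvers."""
    return sum(loss_table[m] for m in joint_maneuvers)

def coordinate(cars, options, loss_table):
    """Pick the per-car maneuver combination with the lowest total loss."""
    best = min(product(options, repeat=len(cars)),
               key=lambda joint: total_expected_loss(joint, loss_table))
    return dict(zip(cars, best))

# Hypothetical numbers: expected loss of each maneuver
loss_table = {"brake": 2.0, "swerve_left": 5.0, "swerve_right": 1.0}
print(coordinate(["car_a", "car_b"], list(loss_table), loss_table))
```

Obviously a real system would weigh far more than three options, but the structure of the utilitarian calculation is the same: enumerate joint outcomes, score them, minimize.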
Posted on 10/3/17 at 7:53 pm to TigerFanInSouthland
Are there any realistic scenarios related to #1?
Posted on 10/3/17 at 10:22 pm to TigerFanInSouthland
In fly-by-wire aircraft, where computers are making decisions that control the aircraft, the big question is: do you let the flight computers protect the aircraft from damage and try to keep it flying no matter what the pilot does, or does the pilot have the last word and get to override the computers? Boeing lets the pilot be the ultimate decision maker, but Airbus has the flight control computers protect the aircraft no matter what the pilot does.
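The two philosophies boil down to whether the envelope limit is a hard clamp or just advice. A toy sketch (the limit value and function names are made up for illustration, not real avionics):

```python
# Toy sketch of the two fly-by-wire philosophies described above.
BANK_LIMIT_DEG = 67  # hypothetical envelope-protection limit

def apply_control(pilot_bank_deg, hard_protection):
    """hard_protection=True  ~ Airbus-style: the computer clamps the command.
    hard_protection=False ~ Boeing-style: the limit is advisory, pilot wins."""
    if hard_protection:
        return max(-BANK_LIMIT_DEG, min(BANK_LIMIT_DEG, pilot_bank_deg))
    return pilot_bank_deg  # pilot has the last word

print(apply_control(80, hard_protection=True))   # clamped to 67
print(apply_control(80, hard_protection=False))  # 80, even past the limit
```

Driverless cars will face the same design question: does the human behind the wheel get a hard override, or does the computer always have the final say?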
Posted on 10/3/17 at 10:39 pm to TigerFanInSouthland
quote:
I ask this because they paralleled these two scenarios with driverless cars and AI and let's say you were riding in your car and a wreck was about to happen in which either you or the people you wreck into will die, how is the AI in the driverless car supposed to decide for you who gets to die and who doesn't?
I can't quite remember every facet of the arguments that were made, but I do believe that (and AI as a whole with the road we're going down in regards to AI) is a very very dangerous slippery slope.
It's really a mucked-up trolley problem, one that leaves you with only two choices. Humans are limited by reaction time, and if they have no obligation to face that dilemma (whichever choice they make is defensible), the same would have to be true for any AI-powered vehicle.
It's not an ethical problem if we're talking about machine learning. It's a risk management problem, and an easily solved one too.
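The risk-management framing is just "pick the action with the lowest expected harm," the same thing a human driver does by braking. A minimal sketch, with probabilities and severity numbers invented purely for illustration:

```python
# Minimal sketch of the "risk management, not ethics" framing:
# score each candidate action by expected harm and take the minimum.

def expected_harm(action):
    """Expected harm = probability of collision x severity if it happens."""
    return action["p"] * action["severity"]

actions = [
    {"name": "brake_hard",  "p": 0.3, "severity": 2},
    {"name": "swerve",      "p": 0.6, "severity": 5},
    {"name": "hold_course", "p": 0.9, "severity": 8},
]

best = min(actions, key=expected_harm)
print(best["name"])  # brake_hard: 0.6 beats 3.0 and 7.2
```

No moral weighting of who is in which car, just minimizing expected damage, which is how this framing sidesteps the trolley problem entirely.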
This post was edited on 10/3/17 at 10:40 pm
Posted on 10/4/17 at 12:23 am to TigerFanInSouthland
Those are not original scenarios. Those two examples have been talked about a ton in other contexts.
Posted on 10/4/17 at 12:54 am to TigerFanInSouthland
Autopilots on commercial airliners make those kinds of decisions, and they've rarely caused issues. In fact, I can think of only one incident where a crash was caused by the autopilot taking control from a pilot and making the wrong decision.
Air France Flight 296
As I recall, that was ultimately attributed to the pilot putting the jet into a situation it would never normally be in (a low-speed, low-altitude flyby), and the flight computers thought the actual pilot was fricking something up and tried to "land" in the trees.
Posted on 10/4/17 at 1:59 am to TigerFanInSouthland
I think the better question is: will human-driven cars be outlawed?
Posted on 10/4/17 at 2:12 am to TigerFanInSouthland
quote:
1) Lets say there's a train coming wildly down the track and there's five guys who are standing directly in the train's path, but you can pull a lever and the train will veer away to another set of tracks and on the other set of tracks, there's only one person. Do you pull the lever or not?
Simple decision. You pull the lever. Better to kill one than five.
quote:
Now, let's say the same scenario is occurring. Except, instead of the lever and separate set of tracks, there's a fat man on a bridge with you. You can push the man off the bridge to stop the train from hitting the five people. Do you push the fat man?
You absolutely push the fat man. You only have two options. Neither is good, but you always take the less risky approach.
quote:
2) Like the last episode of MASH, you and a group of people are hiding out from an enemy who is certain to kill you if they find you. You have a baby who is sick and you know if anybody makes a sound, the enemy will certainly find you and kill you. Do you kill your sick child in order to save the lives of the people around you by smothering the child?
See, this is where it gets difficult. I wouldn't want to live without my child. My decision ultimately affects everyone involved. But it would never come to that. We all die, because everyone in the bunker knows my child has to die. They would attack, and I would defend. We all die because of the ruckus.
quote:
I ask this because they paralleled these two scenarios with driverless cars and AI and let's say you were riding in your car and a wreck was about to happen in which either you or the people you wreck into will die, how is the AI in the driverless car supposed to decide for you who gets to die and who doesn't?
How can the AI determine who will die in a car wreck? I have yet to see any argument made on preventing death that is inevitable. Like the LV shooting: no AI in the world could have predicted, in split seconds, which person lives and which dies. The car will never know, and neither will the passengers.
Posted on 10/4/17 at 9:30 am to TigerFanInSouthland
The answer to this question is obvious.
The cars should be programmed to take out the uglier of the two groups. Ugly people don't matter in the long run.