
Regarding AI and Driverless Cars...

Posted on 10/3/17 at 7:30 pm
Posted by TigerFanInSouthland
Louisiana
Member since Aug 2012
28065 posts
I was listening to a Radiolab podcast yesterday and they went into a problem that could occur with driverless cars and AI.

They started it off with two, I guess, philosophical questions.

1) Let's say there's a train hurtling down the track toward five guys standing directly in its path. You can pull a lever and the train will veer onto another set of tracks, where there's only one person. Do you pull the lever or not?

Now, let's say the same scenario is occurring, except instead of the lever and the separate set of tracks, there's a fat man on a bridge with you. You can push the man off the bridge to stop the train before it hits the five people. Do you push the fat man?

2) Like in the last episode of MASH, you and a group of people are hiding from an enemy who is certain to kill you if they find you. You have a sick baby, and you know that if anybody makes a sound, the enemy will find you and kill you all. Do you smother your sick child to save the lives of the people around you?

I ask this because they paralleled these two scenarios with driverless cars and AI. Let's say you were riding in your car and a wreck was about to happen in which either you or the people you wreck into will die. How is the AI in the driverless car supposed to decide for you who gets to die and who doesn't?

I can't quite remember every facet of the arguments that were made, but I do believe that this (and the road we're going down with AI as a whole) is a very, very dangerous slippery slope.

Radiolab Episode I Referenced
This post was edited on 10/4/17 at 1:27 am
Posted by Paul Redeker
Member since Jan 2013
219 posts
Posted on 10/3/17 at 7:44 pm to
Ultimately AI would prevent this from ever being an issue. If everyone drives autonomous cars, assuming all things perfect, there would be no traffic collisions. A hell of a lot safer than today's world. In this scenario I'm not wearing my tinfoil hat.
Posted by C
Houston
Member since Dec 2007
27830 posts
Posted on 10/3/17 at 7:46 pm to
This situation occurs so rarely I don't know why we'd even think of trying to address it. shite is still going to happen, whether with AI or in human hands. Let's try not to make it more complicated.
Posted by Obtuse1
Westside Bodymore Yo
Member since Sep 2016
25819 posts
Posted on 10/3/17 at 7:48 pm to
The autonomous cars will be programmed by humans to react in a certain way, but initially I expect each individual car/truck will do its best, within the time and capability of the vehicle, to safeguard its passengers. That's much like how most human drivers react, but with a better chance of a positive outcome.

In a John Stuart Mill world, the cars would be interlinked, each occupant given a societal "value," and the cars would work together to reduce the loss to society as a whole. Good news for the 30-year-old genius surgeon, bad news for the 65-year-old drug addict who has spent much of his adult life in prison.
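Just to make the idea concrete, here's a toy sketch of that interlinked utilitarian dispatch. Every name, maneuver, and "value" score below is invented for illustration; real systems would look nothing like this.

```python
# Toy sketch of interlinked cars coordinating to minimize "societal loss."
# All names and value scores are made up for the example.

def societal_loss(outcome):
    """Sum the assigned 'value' of everyone lost in an outcome."""
    return sum(person["value"] for person in outcome["casualties"])

def choose_maneuver(outcomes):
    """Pick the joint maneuver that minimizes total loss to society."""
    return min(outcomes, key=societal_loss)

# Two joint maneuvers the linked cars could coordinate on:
outcomes = [
    {"name": "swerve_left", "casualties": [{"who": "bystander", "value": 9}]},
    {"name": "brake_hard",  "casualties": [{"who": "pedestrian A", "value": 5},
                                           {"who": "pedestrian B", "value": 5}]},
]

best = choose_maneuver(outcomes)
```

The uncomfortable part, of course, is whoever gets to write the value scores.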

In the end, when autonomous cars are ready for primetime on a wholesale level and are implemented en masse, injury and death by vehicle accident will drop significantly. Productivity and/or leisure time will increase for most people, with shorter commutes and the ability to use commute time for work or some forms of leisure. Insurance rates will drop, and the cost of getting goods from ports/factories to stores will drop.

Certainly there are issues to be carefully considered with the upper limits of AI, and ethical issues with the programming of autonomous cars, but particularly with the latter I don't think it will be hard to produce a significant net positive for society.
Posted by airfernando
Member since Oct 2015
15248 posts
Posted on 10/3/17 at 7:53 pm to
Are there any realistic scenarios related to #1?
Posted by EA6B
TX
Member since Dec 2012
14754 posts
Posted on 10/3/17 at 10:22 pm to
In fly-by-wire aircraft, where computers are making decisions that control the aircraft, the big question is: do you let the flight computers protect the aircraft from damage and try to keep it flying no matter what the pilot does, or does the pilot have the last word and get to override the computers? Boeing allows the pilot to be the ultimate decision maker, but Airbus has the flight control computers protect the aircraft no matter what the pilot does.
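Those two philosophies boil down to whether the computer clips the pilot's command or just passes it through. A toy sketch, with an invented g-load envelope (real flight control laws are far richer than this):

```python
# Toy contrast of the two fly-by-wire philosophies described above.
# The envelope limits are invented for the example.

ENVELOPE = (-1.0, 2.5)  # allowed g-load range (made up)

def envelope_protection(pilot_command_g):
    """Airbus-style: the computer clips the command to the envelope."""
    lo, hi = ENVELOPE
    return max(lo, min(hi, pilot_command_g))

def pilot_authority(pilot_command_g):
    """Boeing-style: warn if out of bounds, but let the command through."""
    lo, hi = ENVELOPE
    if not lo <= pilot_command_g <= hi:
        print("warning: command outside envelope")
    return pilot_command_g
```

Same input, very different outcomes at the edge of the envelope.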
Posted by bmy
Nashville
Member since Oct 2007
48203 posts
Posted on 10/3/17 at 10:39 pm to
quote:

I ask this because they paralleled these two scenarios with driverless cars and AI and let's say you were riding in your car and a wreck was about to happen in which either you or the people you wreck into will die, how is the AI in the driverless car supposed to decide for you who gets to die and who doesn't?

I can't quite remember every facet of the arguments that were made, but I do believe that (and AI as a whole with the road we're going down in regards to AI) is a very very dangerous slippery slope.
It's really a mucked up trolley problem... which left you with only two choices. Humans are limited by reaction time, and if they have no obligation to face that dilemma (whichever choice they make is defensible), the same would have to be true for any AI-powered vehicle.

It's not an ethical problem if we're talking about machine learning. It's a risk management problem, and an easily solved one too.
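In risk-management terms, the car just scores each available maneuver by expected harm (probability times severity) and takes the minimum. A minimal sketch, where every probability and severity number is invented:

```python
# "Risk management, not ethics": score maneuvers by expected harm
# and pick the minimum. All risk numbers are made up for illustration.

def expected_harm(maneuver):
    """Sum probability * severity over a maneuver's possible bad outcomes."""
    return sum(p * severity for p, severity in maneuver["risks"])

maneuvers = [
    {"name": "brake_hard", "risks": [(0.2, 3.0)]},              # small chance of a moderate crash
    {"name": "swerve",     "risks": [(0.5, 2.0), (0.1, 8.0)]},  # barrier hit or oncoming lane
]

safest = min(maneuvers, key=expected_harm)
```

No value judgments about *who* is in the way, just arithmetic over outcomes.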

This post was edited on 10/3/17 at 10:40 pm
Posted by athenslife101
Member since Feb 2013
18591 posts
Posted on 10/4/17 at 12:23 am to
Those are not original scenarios. Those two examples have been talked about a ton in other contexts.
Posted by SlapahoeTribe
Tiger Nation
Member since Jul 2012
12120 posts
Posted on 10/4/17 at 12:54 am to
Autopilots on commercial airliners make those decisions and they've rarely caused issues. In fact, I can think of only one incident where the crash was caused by the autopilot taking control from a pilot and making the wrong decision.

Air France Flight 296

As I recall, that was ultimately attributed to the pilot putting the jet into a situation it would never normally be in, a low-speed, low-altitude flyby, and the autopilot thought the actual pilot was fricking something up and tried to "land" in the trees.
Posted by Masterag
'Round Dallas
Member since Sep 2014
18811 posts
Posted on 10/4/17 at 1:59 am to
I think the better question is: will human-driven cars be outlawed?
Posted by saintsfan1977
West Monroe, from Cajun country
Member since Jun 2010
7771 posts
Posted on 10/4/17 at 2:12 am to
quote:

1) Lets say there's a train coming wildly down the track and there's five guys who are standing directly in the train's path, but you can pull a lever and the train will veer away to another set of tracks and on the other set of tracks, there's only one person. Do you pull the lever or not?

Simple decision. You pull the lever. Better to kill one than five.

quote:

Now, let's say the same scenario is occurring. Except, instead of the lever and separate set of tracks, there's a fat man on a bridge with you. You can push the man off the bridge to stop the train from hitting the five people. Do you push the fat man?

You absolutely push the fat man. You only have two options. Neither is good, but you always take the less risky approach.

quote:

2) Like the last episode of MASH, you and a group of people are hiding out from an enemy who is certain to kill you if they find you. You have a baby who is sick and you know if anybody makes a sound, the enemy will certainly find you and kill you. Do you kill your sick child in order to save the lives of the people around you by smothering the child?

See, this is where it gets difficult. I wouldn't want to live without my child. My decision ultimately affects everyone involved. But it would never come to that. We'd all die, because everyone in the bunker knows my child has to die. They would attack and I would defend. We all die because of the ruckus.

quote:

I ask this because they paralleled these two scenarios with driverless cars and AI and let's say you were riding in your car and a wreck was about to happen in which either you or the people you wreck into will die, how is the AI in the driverless car supposed to decide for you who gets to die and who doesn't?

How can the AI determine who will die in a car wreck? I have yet to see any argument made about preventing death that is inevitable. Like the LV shooting: there is no AI in the world that could have predicted which person lives and which dies in split seconds. The car will never know, and neither will the passengers.
Posted by SidewalkDawg
Chair
Member since Nov 2012
9820 posts
Posted on 10/4/17 at 9:30 am to
The answer to this question is obvious.

The cars should be programmed to take out the uglier of the two groups. Ugly people don't matter in the long run.