Will self-driving cars be programmed to kill you if it means saving more lives?

Posted by Street Hawk
Member since Nov 2014
3460 posts
Posted on 6/16/15 at 5:59 pm
LINK
quote:

Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?

This ethical puzzler is commonly known as the Trolley Problem. It's a standard topic in philosophy and ethics classes, because your answer says a lot about how you view the world. But in a very 21st-century take, several writers have adapted the scenario to a modern obsession: autonomous vehicles. Google's self-driving cars have already driven 1.7 million miles on American roads, and have never been the cause of an accident during that time, the company says. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that he can have current-model Teslas ready to take the wheel on "major roads" by this summer.

Who watches the watchers?

The technology may have arrived, but are we ready?

Google's cars can already handle real-world hazards, such as cars' suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google's cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation -- a blown tire, perhaps -- where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm -- even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
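None of the companies have said how, or whether, their software weighs these outcomes, but the tradeoff the article describes can be made concrete with a toy example. Everything below is invented for illustration (the option names, the risk numbers, the scoring); it is not anyone's actual control software.

# Toy illustration only: this does not reflect how Google, Volvo, or anyone
# else programs a vehicle. It just shows how "least harm" versus "protect
# the owner" becomes a one-line policy choice once the outcomes are scored.

def choose_maneuver(options, policy):
    if policy == "least_harm":
        # utilitarian: minimize expected casualties across everyone involved
        return min(options, key=lambda o: o["occupant_risk"] + o["others_at_risk"])
    # "protect_owner": minimize risk to the car's own occupants only
    return min(options, key=lambda o: o["occupant_risk"])

options = [
    {"name": "steer into retaining wall", "occupant_risk": 0.9, "others_at_risk": 0.0},
    {"name": "swerve toward oncoming bus", "occupant_risk": 0.4, "others_at_risk": 20.0},
]

print(choose_maneuver(options, "least_harm")["name"])     # steer into retaining wall
print(choose_maneuver(options, "protect_owner")["name"])  # swerve toward oncoming bus

The whole debate in the article is about which of those two policies society, the manufacturer, or the owner gets to pick.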

Posted by SabiDojo
Open to any suggestions.
Member since Nov 2010
83929 posts
Posted on 6/16/15 at 6:01 pm to
That would suck.
Posted by wildtigercat93
Member since Jul 2011
112312 posts
Posted on 6/16/15 at 6:02 pm to
Ain't no Zordons gonna be drivin my car
Posted by ruzil
Baton Rouge
Member since Feb 2012
16897 posts
Posted on 6/16/15 at 6:03 pm to
This is exactly the kind of paranoia I can get my head around.

That is, if I can leave my tin foil hat on.
This post was edited on 6/16/15 at 6:04 pm
Posted by Broseph Barksdale
Member since Sep 2010
10571 posts
Posted on 6/16/15 at 6:04 pm to
Who does Morris Bart sue if someone's self-driving car rear-ends you? The owner? Google? The programmer?
This post was edited on 6/16/15 at 6:05 pm
Posted by PrivatePublic
Member since Nov 2012
17848 posts
Posted on 6/16/15 at 6:06 pm to
I fail to see how slamming into a school bus is a better option for the smart-car passenger than hitting a retaining wall.
Posted by theenemy
Member since Oct 2006
13078 posts
Posted on 6/16/15 at 6:07 pm to
Your chauffeur has arrived.


Posted by Jim Rockford
Member since May 2011
98180 posts
Posted on 6/16/15 at 6:07 pm to
TBH, the best choice is to slam into the stationary object instead of the one hurtling toward you. Now, where it gets interesting is when the choice is between oncoming traffic and plunging off a hundred-foot embankment.
Posted by Zoltan
NOLA
Member since May 2010
1395 posts
Posted on 6/16/15 at 6:07 pm to
quote:

Who does Morris Bart sue if someone's self-driving car rear-ends you? The owner? Google? The programmer?


That one line could almost be an entire law school exam question, give or take a few facts.
Posted by TigernMS12
Member since Jan 2013
5530 posts
Posted on 6/16/15 at 6:09 pm to
I know this is selfish, but I don't give a damn. If I pay however many thousands of dollars a self-driving car will cost, it better damn well have my best interest in mind.
Posted by TigernMS12
Member since Jan 2013
5530 posts
Posted on 6/16/15 at 6:11 pm to
quote:

That one line could almost be an entire law school exam question, give or take a few facts.


Yes it could. I had some of the most obscene fact patterns imaginable on my torts exam.
Posted by makinskrilla
Lafayette, LA
Member since Jun 2009
9727 posts
Posted on 6/16/15 at 6:14 pm to
Love questions like this. Also, who gets sued when something goes wrong?
Posted by whit
Baton Rouge
Member since Sep 2010
10998 posts
Posted on 6/16/15 at 6:15 pm to
Posted by goofball
Member since Mar 2015
16859 posts
Posted on 6/16/15 at 6:18 pm to
I saw this on Top Gear.

I'm not sure I like the idea of self-driving cars except on straight, level highways where traffic already moves at a constant speed.
Posted by jeff5891
Member since Aug 2011
15761 posts
Posted on 6/16/15 at 6:20 pm to
Well, if every car is self-driving this won't be a problem.


The other cars would communicate with the out-of-control vehicle and simply move out of the way.
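Roughly, the idea might look like this, as a made-up toy sketch rather than any real vehicle-to-vehicle protocol (nothing here is DSRC, C-V2X, or anything a manufacturer actually ships):

# Toy sketch of the "all cars talk to each other" idea above; every name
# and number here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Car:
    name: str
    lane: int

def broadcast_emergency(blocked_lane, nearby):
    # The failing car announces which lane it is about to occupy;
    # every car currently in that lane plans a move to the next lane over.
    return [f"{car.name}: shift to lane {car.lane + 1}" for car in nearby if car.lane == blocked_lane]

print(broadcast_emergency(2, [Car("A", 2), Car("B", 1), Car("C", 2)]))
# ['A: shift to lane 3', 'C: shift to lane 3']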
This post was edited on 6/16/15 at 6:22 pm
Posted by lsumatt
Austin
Member since Feb 2005
12812 posts
Posted on 6/16/15 at 6:21 pm to
Either way I am not worried, because they will be much, much safer than the status quo, especially if everyone (or most people) has one.
Posted by TigernMS12
Member since Jan 2013
5530 posts
Posted on 6/16/15 at 6:21 pm to
I don't think I would have a problem with self-driving cars if all cars were self-driven. The worst period would be the years of transition between self-driven cars and human-driven cars.
Posted by TigerBandTuba
Member since Sep 2006
2541 posts
Posted on 6/16/15 at 7:01 pm to
It's going to have to do whatever is in the best interest of the vehicle's occupants. When has vehicle safety equipment ever been designed to protect someone other than the occupants? The car is going to brake as hard as possible but still plow through little Timmy when he decides to run into the road, instead of swerving into a retaining wall.
Posted by Chad504boy
4 posts
Member since Feb 2005
166246 posts
Posted on 6/16/15 at 7:09 pm to
I don't kill my own kid willingly.
Posted by Street Hawk
Member since Nov 2014
3460 posts
Posted on 6/16/15 at 7:35 pm to
quote:

It's going to have to do whatever is in the best interest of the vehicle's occupants. When has vehicle safety equipment ever been designed to protect someone other than the occupants? The car is going to brake as hard as possible but still plow through little Timmy when he decides to run into the road, instead of swerving into a retaining wall.


From the article:

quote:

Death in the driver's seat

So should your self-driving car be programmed to kill you in order to save others? There are two philosophical approaches to this type of question, Barghi says. "Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people," he explained. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.

Deontology, on the other hand, argues that "some values are simply categorically always true," Barghi continued. "For example, murder is always wrong, and we should never do it." Going back to the trolley problem, "even if shifting the trolley will save five lives, we shouldn't do it because we would be actively killing one," Barghi said. And, despite the odds, a self-driving car shouldn't be programmed to choose to sacrifice its driver to keep others out of harm's way.

Every variation of the trolley problem -- and there are many: What if the one person is your child? Your only child? What if the five people are murderers? -- simply "asks the user to pick whether he has chosen to stick with deontology or utilitarianism," Barghi continued. If the answer is utilitarianism, then there is another decision to be made, Barghi adds: rule or act utilitarianism.

"Rule utilitarianism says that we must always pick the most utilitarian action regardless of the circumstances -- so this would make the choice easy for each version of the trolley problem," Barghi said: Count up the individuals involved and go with the option that benefits the majority.

But act utilitarianism, he continued, "says that we must consider each individual act as a separate subset action." That means that there are no hard-and-fast rules; each situation is a special case. So how can a computer be programmed to handle them all?

"A computer cannot be programmed to handle them all," said Gregory Pence, Ph.D., chair of the UAB College of Arts and Sciences Department of Philosophy

This could be a great PhD thesis topic for a Philosophy major.
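To make Barghi's distinction concrete: rule utilitarianism is the only one of these views that reduces to something a programmer could literally write down, namely count heads and take the option that spares the most. A toy illustration (my code, not anything from the article):

def rule_utilitarian_choice(actions):
    # actions maps each available action to the number of people it is
    # expected to save; rule utilitarianism always takes the maximum,
    # with no exceptions for who those people happen to be.
    return max(actions, key=actions.get)

trolley = {"throw the switch": 5, "do nothing": 1}
print(rule_utilitarian_choice(trolley))  # "throw the switch"

Act utilitarianism and deontology are exactly the views that refuse to be compressed into a rule like that, which is Pence's point about why a computer can't be programmed to handle every case.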