[–] MaxAncap 0 points 12 points (+12|-0) ago 

wtf. That's some dumbshit design! It should prioritize the driver's life.

[–] Thegreatstoneddragon 0 points 22 points (+22|-0) ago  (edited ago)

It probably should. But let's look at the companies making self-driving cars: Google, Uber, Audi. Do you really expect these cuck companies to value the alpha male behind the wheel over the black female trans lesbian who's crossing the street? Even if there's only one person, it's a crap shoot.

[–] Laurentius_the_pyro 0 points 9 points (+9|-0) ago  (edited ago)

I'm half expecting them to have face recognition technology so it knows to prioritize killing the driver over any "gorillas" it happens to see.

[–] HidiotKojima 0 points 2 points (+2|-0) ago 

"alpha male" "(driving) autonomous cars"

hum

[–] buncha_cunts 1 points 4 points (+5|-1) ago 

This is actually a huge ethical problem in AI design when it comes to driverless cars. It's essentially the trolley problem.

Do you prioritize the life of the driver, or the lives of other drivers, pedestrians, etc.? If a terminally ill grandma jaywalks in front of your car at the last second, is it better to run her over or to run your family of 4 off the road?
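
To make that concrete: one naive approach is to score each available maneuver by expected harm and pick the minimum. A toy sketch in Python — every name, weight, and probability here is made up for illustration, not anyone's actual policy:

```python
# Toy sketch: pick the maneuver that minimizes expected fatalities.
# All numbers and names are hypothetical, purely to illustrate the dilemma.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_fatal_occupants: float    # estimated chance the occupants die
    n_occupants: int
    p_fatal_pedestrians: float  # estimated chance the pedestrians die
    n_pedestrians: int

def expected_fatalities(m: Maneuver) -> float:
    return (m.p_fatal_occupants * m.n_occupants
            + m.p_fatal_pedestrians * m.n_pedestrians)

options = [
    Maneuver("brake_straight", 0.01, 4, 0.90, 1),   # hit the grandma
    Maneuver("swerve_off_road", 0.30, 4, 0.00, 0),  # risk the family of 4
]

best = min(options, key=expected_fatalities)
print(best.name)  # "brake_straight": 0.94 expected deaths vs 1.20
```

Of course, a raw body count dodges everything that actually makes this hard: age, fault, and whose life the person who bought the car expects it to protect.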

[–] shmuklidooha 0 points 2 points (+2|-0) ago 

Yes, the car should have access to medical databases, just to be sure.

[–] HillBoulder 0 points 1 points (+1|-0) ago 

Depends. If it's her or going off a cliff, she's dead. If it's just going into the ditch or something else that won't kill my whole family, then I'd probably take it. Really, I could apply this to anyone, not just dying old people.

How are you supposed to know if she's terminally ill anyway?

[–] blackguard19 0 points 1 points (+1|-0) ago 

This is why the "self driving car" discussion will lead into the transition to "smart cities", where the roads have barriers around them and even foot traffic is figuratively on rails.

[–] MaxAncap 0 points 1 points (+1|-0) ago  (edited ago)

Run her over. It's so obvious. I mean, it depends on the road, but if the alternative means ending up in a tree, then run over the damned grandma.

If you can safely go off-road, then go off-road. Actually, you should let the driver choose the parameter (see the sketch below); it shouldn't be for any of us to decide. After all, we already let human drivers decide in the moment.

It's not that complicated. Really, most of the time humans have a god complex, thinking their minds are special souls or some shit. Whatever a brain can do, an AI can do.
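
If that parameter really were exposed to the driver, the simplest version is a single self-preservation weight folded into the same kind of harm score. Purely a hypothetical sketch; no shipping car exposes a knob like this:

```python
# Hypothetical driver-set "self-preservation" weight. 1.0 counts every
# life equally; values below 1.0 mean the driver volunteers to absorb
# more of the risk; values above 1.0 bias the car toward its occupants.
def weighted_harm(option: dict, self_preservation: float = 1.0) -> float:
    occupant_risk = option["p_fatal_occupants"] * option["n_occupants"]
    other_risk = option["p_fatal_others"] * option["n_others"]
    return occupant_risk * self_preservation + other_risk

brake  = {"p_fatal_occupants": 0.01, "n_occupants": 4,
          "p_fatal_others": 0.90, "n_others": 1}   # run over the grandma
swerve = {"p_fatal_occupants": 0.30, "n_occupants": 4,
          "p_fatal_others": 0.00, "n_others": 0}   # into the tree

for w in (1.0, 0.1):
    choice = min((brake, swerve), key=lambda o: weighted_harm(o, w))
    print(w, "brake" if choice is brake else "swerve")
# 1.0 -> brake (grandma dies); 0.1 -> swerve (driver takes the tree)
```

Whether regulators would ever allow a user-facing setting like that is a separate fight.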

[–] NoisyCricket 0 points 2 points (+2|-0) ago  (edited ago)

Liability basically requires otherwise. By riding in such a conveyance, you are the one who accepted the risk of transport. Otherwise, plowing into bystanders means the company is liable to the general public, and as such it will be sued out of existence.

The only way to prevent this is to pass federal standards on AI behavior modes and models and/or limit liabilities.

I actually prefer limited liability, but that has the side effect of encouraging unethical behavior by manufacturers: the cost of liability simply becomes a fixed cost of the product, without any regard for the ethical issues.

Ultimately this is an ethics issue, one which is surprisingly complex and on which it is difficult to find consensus amongst humans.

If you've not done so, I encourage you to read Asimov's Robots series. Contrary to popular myth, the books actually spend considerable time on the robot laws and why even the best of laws will ultimately fail. That point, while painfully obvious, seems to be completely missed by most readers. Despite those laws, the robots are no equivalent of human intellect, save for one possible exception; for that exception, see the Foundation series.

[–] MaxAncap 0 points 2 points (+2|-0) ago  (edited ago)

I read Asimov, but I think the white-lies-to-spare-emotional-pain thing was dumb. It shouldn't classify hurt feelings as actual "harming humans"; that should be an easy fix. If a human can understand the distinction, an AI can. The other stories I forgot.

As for Foundation, I've only read the middle part of the series.

[–] buncha_cunts 0 points 2 points (+2|-0) ago  (edited ago)

It's not a new concept in ethics either; it's essentially the trolley problem. When left with only two options, is it better to kill one person or multiple people? What about 3 elderly people vs 3 children? 2 children and 1 elderly person vs 1 child and 2 elderly? What about a non-lethal injury to a young child vs a lethal injury to an elderly person? A fat ass vs an Olympic athlete? You start having to make decisions like a human, and very difficult decisions at that. Oftentimes we would be selfish and aim for self-preservation in most scenarios... and it's definitely dangerous territory to allow machines to make decisions "for the good of humanity". They might decide it's best to just kill us all.

MIT actually developed a way for humans to sort of "teach" these AIs what we would do in these scenarios. It's called the Moral Machine, and it covers a lot of the scenarios I mentioned above. Check it out: http://moralmachine.mit.edu/
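
For the curious, the site boils down to forced two-way choices, and the aggregate picks become data about what people prefer. A toy version of the tallying idea, with invented votes; the real study fit statistical models over millions of responses rather than a raw count like this:

```python
# Toy tally of Moral Machine-style judgments. Each vote records which
# group the respondent chose to spare in a forced two-way scenario.
# The data here is made up for illustration.
from collections import Counter

votes = [
    ("spare_pedestrians", "spare_occupants"),  # (chosen, rejected)
    ("spare_children", "spare_elderly"),
    ("spare_pedestrians", "spare_occupants"),
    ("spare_elderly", "spare_children"),
    ("spare_children", "spare_elderly"),
]

tally = Counter(chosen for chosen, _ in votes)
for outcome, count in tally.most_common():
    print(f"{outcome}: {count} votes")
```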

[–] spherical_cube 0 points 1 points (+1|-0) ago