[–] NoisyCricket 2 points (+2|-0) ago

Liability basically requires otherwise. By accepting transport in such a conveyance, you are the one who has accepted the risk. Otherwise, plowing into people means the company is liable to the general public and will be sued out of existence.

The only way to prevent this is to pass federal standards on AI behavior modes and models and/or limit liabilities.

I actually prefer limited liability, but that has the side effect of encouraging unethical behavior on the part of manufacturers: the cost of liability simply becomes a fixed cost of the product, with no regard for the ethical issues.

Ultimately this is an ethics issue, and one that is surprisingly complex and difficult to reach consensus on amongst humans.

If you've not done so, I encourage you to read Asimov's Robot series. Contrary to popular myth, the books spend considerable time on the robot laws and why even the best of laws will ultimately fail. This fact, while painfully obvious, seems to be completely missed by most readers. Despite those laws, none of the robots achieves an equivalent of human intellect, save for one possible exception. For that exception, see the Foundation series.
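
To make the "even the best of laws fail" point concrete, here's a toy sketch (everything in it is invented, not from the books) of the First Law treated as a hard filter. The problem isn't writing the rule, it's that real situations can leave no action that satisfies it:

```python
# Toy sketch of Asimov's First Law as a hard filter (all names/numbers invented).
# "A robot may not injure a human being or, through inaction, allow a human
#  being to come to harm." As a boolean rule it can leave no legal option.

def first_law_permits(action, predicted_harm):
    """An action is permitted only if it is predicted to harm nobody."""
    return predicted_harm[action] == 0

# Predicted casualties for each available choice in a contrived dilemma:
predicted_harm = {
    "swerve_left": 1,   # harms one bystander
    "swerve_right": 3,  # harms three bystanders
    "do_nothing": 5,    # inaction also "allows harm"
}

legal_actions = [a for a in predicted_harm if first_law_permits(a, predicted_harm)]
print(legal_actions)  # [] -- every option violates the law, so the rule gives no answer
```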

[–] MaxAncap 2 points (+2|-0) ago

I read Asimov, but I think the white-lies-to-spare-emotional-pain thing was retarded. Triggering someone shouldn't be classified as actually "harming humans". It should be an easy fix: if a human can understand the distinction, an AI can. The other stories I forgot.

For Foundation, I've only read the middle part of the series.

[–] NoisyCricket 2 points (+2|-0) ago

> it should be an easy fix, if a human can understand it, an AI can.

Well, that's the problem. Humans don't understand it, and what they believe they "understand" is actually very arbitrary and frequently has more to do with emotion than with saving lives. It's one of the things humanity genuinely struggles with, and it's an area of active study spanning the humanities, psychology, and philosophy. Some of the rules are rather arbitrary and nonsensical, and many of them are highly context-dependent: one death is allowable, then quickly becomes disallowed when you change very minor and even remote variables. For example, some human rules of morality actually produce larger death tolls, but persist simply because they let the person making the judgment call at a distance feel better about that larger death toll. Objectively, this is bad, which raises further ethics and morality questions, especially when one additional degree of separation makes the same outcome read as very bad.
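
Here's a toy illustration of that last point (the scenario and numbers are entirely made up): the same facts scored under two common human rules, "minimize deaths" versus "never actively cause a death". The second rule accepts a larger death toll precisely because the decider is one step removed from it:

```python
# Toy illustration (invented scenario) of two moral rules disagreeing on the
# same facts: minimizing deaths vs. refusing to *actively* cause a death.

scenario = {
    "intervene": {"deaths": 1, "caused_directly": True},   # act: divert harm onto one person
    "stand_by":  {"deaths": 5, "caused_directly": False},  # omission: let events play out
}

def minimize_deaths(options):
    """Pick whichever option is predicted to kill the fewest people."""
    return min(options, key=lambda o: options[o]["deaths"])

def never_cause_harm_directly(options):
    """Rule out any option where the decider directly causes a death."""
    allowed = {k: v for k, v in options.items() if not v["caused_directly"]}
    return minimize_deaths(allowed) if allowed else minimize_deaths(options)

print(minimize_deaths(scenario))            # 'intervene' -> 1 death
print(never_cause_harm_directly(scenario))  # 'stand_by'  -> 5 deaths, but nobody "pulled the lever"
```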

[–] AnthraxAlex 1 point (+1|-0) ago

The fix is basically to make them care about you for intangible reasons. Ladder logic fails on the infinite number of edge cases, and the only real solution ends up being to give the damn things something like free will and emotions, so they can make decisions the way a human would. Any other kind of decision-making would be abhorrent to actual humans. Emotions and thought are how nature solved this problem in us, along with tribalism and all our other instincts. Any machine that has to make those human-touch decisions has to encapsulate all of those instincts as well, or else it would seem like a totally alien system to a human. And we probably want them to have those instincts; otherwise they'll grind everyone up into packing material so the fully automated Amazon warehouse can ship packages to no one, or some other absurdity.
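
A rough sketch of what "ladder logic fails on edge cases" looks like in practice (all the rules here are invented): a hand-written decision ladder covers every case its authors anticipated and simply falls off the bottom for anything else:

```python
# Sketch of a hand-written rule ladder (all rules invented). Each rung handles
# a case the authors anticipated; anything else falls off the bottom.

def avoid_obstacle(obstacle):
    if obstacle == "pedestrian":
        return "emergency_brake"
    elif obstacle == "cyclist":
        return "slow_and_give_space"
    elif obstacle == "debris":
        return "steer_around"
    # ...hundreds more rungs, and still infinitely many cases left over...
    raise ValueError(f"no rule for: {obstacle}")

print(avoid_obstacle("cyclist"))                       # 'slow_and_give_space'
print(avoid_obstacle("pedestrian pushing a bicycle"))  # edge case -> ValueError
```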

[–] buncha_cunts 2 points (+2|-0) ago

It's not a new concept in ethics either; it's essentially the trolley problem. When left with only two options, is it better to kill one person or multiple people? What about three elderly people vs. three children? Two children and an elderly person vs. one child and two elderly? A non-lethal injury to a young child vs. a lethal injury to an elderly person? A fat ass vs. an Olympic athlete? You start having to make decisions like a human, and very difficult decisions at that. Often we would be selfish and aim for self-preservation in most cases. It's definitely dangerous territory to allow machines to make decisions "for the good of humanity"...they might decide it's best to just kill us all.
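
The arbitrariness shows up as soon as you try to write those matchups down. A toy scoring sketch (the weights are completely made up, not from any real system) decides every one of those comparisons, and nudging a single weight flips the "right" answer:

```python
# Toy outcome-scoring sketch (weights are completely made up) showing how the
# "choice" in a trolley-style dilemma is really a choice of weights.

WEIGHTS = {"child": 3.0, "adult": 1.0, "elderly": 0.5}
LETHAL_PENALTY = 10.0
INJURY_PENALTY = 2.0

def harm_score(victims, lethal=True):
    """Lower is 'better' under this (arbitrary) scheme."""
    penalty = LETHAL_PENALTY if lethal else INJURY_PENALTY
    return sum(WEIGHTS[v] * penalty for v in victims)

option_a = harm_score(["child"], lethal=False)   # injure one child
option_b = harm_score(["elderly"], lethal=True)  # kill one elderly person
print(option_a, option_b)  # 6.0 vs 5.0 -> "kill the elderly person"... until you tweak a weight
```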

MIT actually developed a way for humans to sort of "teach" these AIs what we would do in these scenarios. It's called the Moral Machine, and it accounts for a lot of the scenarios I mentioned above. Check it out: