
[–] BottomLine 0 points 3 points (+3|-0) ago 

It will require some kind of programming. They can't put an AI down and expect it to start running a nation by instinct. And even if they can't alter the programming, they can restrict the input so it makes decisions based on bad information. There will always be ways people can fuck with it. No system is 100% safe.


[–] jxfaith 0 points 2 points (+2|-0) ago  (edited ago)

Definitely. Such manipulations would be superficial to the neural network, however, and should be detectable in the source code if it were vetted by a third party.


[–] sleazyridr ago 

I would like to point out that the current system has 'detectable' flaws (campaign contributions, etc.), but no one seems able to do anything about them. So building a system where only a few people can understand the flaws won't help us any.


[–] BottomLine 0 points 2 points (+2|-0) ago  (edited ago)

That's the good part, I think. We can have it as open as possible, so the whole world (or at least the country) is able to monitor it.

If it were able to feed back its input, we could also check whether it's missing data. I'm sure there are plenty of ways to defend against tampering. Maybe the Minority Report method of having 3 AIs that have to agree (while keeping them as separate as possible), just so tampering becomes really difficult and easier to detect.
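Just to make the idea concrete, here's a minimal sketch of that 2-of-3 agreement rule. Everything here is hypothetical (the decisions are just placeholder strings, not real AI outputs); the point is only that a simple majority check can both pick a result and flag any dissent for auditing:

```python
# Sketch of the "3 AIs that have to agree" idea: independent systems
# vote, and any disagreement is flagged so humans can look for tampering.
from collections import Counter

def majority_decision(decisions):
    """Return (winner, disagreement_flag) for a list of independent decisions.

    If no two systems agree, return (None, True): halt and audit.
    If there's a majority but also a dissenter, still flag it for review.
    """
    counts = Counter(decisions)
    winner, votes = counts.most_common(1)[0]
    if votes < 2:
        return None, True  # total disagreement -> no action, audit everything
    return winner, votes < len(decisions)  # flag partial disagreement

# One dissenting system doesn't block the decision, but it does get flagged:
result, flagged = majority_decision(["approve", "approve", "reject"])
```

The separation matters as much as the vote: if all three systems share the same code or the same (possibly restricted) input feed, an attacker only has to compromise one thing to fool all of them.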

All in all, even if we never reach perfection, I still believe this could become a much more trustworthy system than the one we have in place now. All we have now are emotional speeches. They can promise anything and do the complete opposite without us knowing or agreeing (hell, the TPP proves they do).