[–] jxfaith 2 points (+2|-0) ago (edited ago)
Definitely. Such manipulations would be superficial, though, layered on top of the neural network rather than buried inside it, so they should be detectable in the source code if it were vetted by a third party.
[–] BottomLine 2 points (+2|-0) ago (edited ago)
That's the good part, I think. We can make it as open as possible, so the whole world (or at least the country) is able to monitor it.
If it were able to feed back its input, we could also check that it isn't missing data. I'm sure there are plenty of ways to defend against tampering. Maybe the Minority Report method: three AIs that have to agree (while keeping them as separate as possible), so tampering becomes really difficult and more easily detectable.
All in all, even if we never reach perfection, I still believe this could become a much more trustworthy system than the one we have in place now. All we have now are emotional speeches: politicians can promise anything and do the complete opposite without us knowing or agreeing (hell, the TPP proves they do).
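The "Minority Report" idea above could be sketched as a simple majority vote among independently built models, where any dissenting model is flagged for audit. This is only an illustration of the scheme; the models, their interface, and the example data are all invented here, not part of any real system.

```python
from collections import Counter

def decide_by_majority(models, case):
    """Run every model on the case; accept only a strict-majority verdict.

    Returns (decision, dissenters): dissenters lists the indices of
    models that disagreed with the majority, which is the signal that
    one copy may have been tampered with.
    """
    votes = [m(case) for m in models]
    decision, count = Counter(votes).most_common(1)[0]
    if count <= len(models) // 2:
        raise ValueError("no majority, escalate to human review")
    dissenters = [i for i, v in enumerate(votes) if v != decision]
    return decision, dissenters

# Three stand-in "AIs" with trivial decision rules; model_c plays the
# role of a tampered copy that always approves.
model_a = lambda case: case["spending"] <= case["budget"]
model_b = lambda case: case["spending"] <= case["budget"]
model_c = lambda case: True  # tampered copy

decision, dissenters = decide_by_majority(
    [model_a, model_b, model_c], {"spending": 120, "budget": 100}
)
# decision == False, dissenters == [2]: the tampered model stands out
```

Keeping the three models "as separate as possible" (different teams, different training data) is what makes a single dissenter meaningful: a coordinated change to all three is much harder than a change to one.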
[–] sleazyridr ago
I would like to make the point that the current system has 'detectable' flaws, campaign contributions etc., but no one seems to be able to do anything about it. So making a system where only a few people can understand the flaws will not help us any.