[–] chirogonemd 0 points 1 point (+1|-0) ago  (edited ago)

Very, very interesting. So basically, the vulnerability stems from not being able to discriminate between irrelevant data and what ought to be treated as code. The vulnerability therefore really comes from an inherent weakness in the computer itself: the inability to store its own truth criteria - which sort of touches on Gödel, it seems.

What is lacking fundamentally is an understanding of its own nature. The computer is basically storing code indirectly in the real world and cannot discern whether something is true or false in the human context because it cannot discriminate between substrates.

That is, a human recognizes that the message to do 85 is false, but the machine cannot.

This is all really philosophically interesting just in the ways it relates to mind in general.

Computer science or the philosophy thereof is not my forte. Correct me if I am saying anything out of order.

But I definitely catch your point that my issue was narrowly treating the camera in and of itself, without the bigger picture of the firmware. But what criteria would it be using to determine whether the picture is legitimate "code" or not? I guess what I mean is: does the "fix" for the camera/AI involve better edge detection?

[–] Deceneu 0 points 1 point (+1|-0) ago  (edited ago)

... which sort of touches on Gödel, it seems.

Computer science or the philosophy thereof is not my forte. Correct me if I am saying anything out of order.

It does, you're making perfect sense, and it's not my forte either. If you want to dig further, the following wiki entry sums up the trail to follow:

Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem.
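
If it helps, the diagonal trick all of those results lean on can be sketched in a few lines of Python. The decider halts() is only assumed for the sake of contradiction; the whole point is that it cannot actually exist:

```python
# Textbook sketch of the diagonal argument behind Turing's halting theorem.
# Assume, for contradiction, that halts(prog, arg) always correctly reports
# whether prog(arg) would eventually stop.

def halts(prog, arg):
    raise NotImplementedError("assumed decider; no such total function can exist")

def diagonal(prog):
    # Do the opposite of whatever the decider predicts about prog run on itself.
    if halts(prog, prog):
        while True:        # predicted to halt -> loop forever instead
            pass
    return                 # predicted to loop forever -> halt immediately

# Now ask what halts(diagonal, diagonal) should return: if True, diagonal(diagonal)
# loops forever; if False, it halts. Either answer contradicts the decider.
```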


The computer is basically storing code indirectly in the real world and cannot discern whether something is true or false in the human context because it cannot discriminate between substrates.

That is, a human recognizes that the message to do 85 is false, but the machine cannot.

Precisely the problem.
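
To make it concrete, here is a toy sketch (made-up names and numbers, nobody's actual firmware) of the difference between blindly acting on whatever the sign reader outputs and sanity-checking it against independent context:

```python
# Toy sketch: the naive pipeline treats the camera's reading as an instruction,
# which is exactly the data-vs-code confusion described above.

def naive_speed_limit(ocr_reading_mph):
    # Blindly trust the sign reader: a "35" tampered into "85" becomes policy.
    return ocr_reading_mph

def checked_speed_limit(ocr_reading_mph, map_limit_mph, road_class_max_mph):
    """Cross-check the reading against independent context before acting on it."""
    plausible = (
        abs(ocr_reading_mph - map_limit_mph) <= 10    # roughly agrees with map data
        and ocr_reading_mph <= road_class_max_mph     # plausible for this road type
        and ocr_reading_mph % 5 == 0                  # limits come in 5 mph steps
    )
    # Fall back to the independent source when the camera's claim is implausible.
    return ocr_reading_mph if plausible else map_limit_mph

# A residential street: map data says 35, nothing on this road class exceeds 45.
print(naive_speed_limit(85))               # 85 -- the spoofed sign wins
print(checked_speed_limit(85, 35, 45))     # 35 -- the implausible reading is rejected
```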

But what criteria would it be using to determine whether the picture is legitimate "code" or not? I guess what I mean is: does the "fix" for the camera/AI involve better edge detection?

I would say that for features that cannot put human life or property in danger, nor increase the legal liability of the driver, the current cameras are fine.

For the speed limit, however, which has the potential to increase the driver's legal liability, edge detection needs to be improved to the point where it detects the edges at least as reliably as a human would.
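
Something like a per-feature risk tier, say (entirely made-up thresholds, just to show the shape of the idea):

```python
# Hypothetical risk tiers: low-stakes features may act on a shaky reading,
# liability-relevant ones need near-human confidence or they do nothing.

RISK_THRESHOLDS = {
    "highbeam_assist":  0.60,   # worst case: mild annoyance
    "sign_display_hud": 0.75,   # shows the driver a possibly wrong sign
    "speed_limiter":    0.98,   # can create legal and safety exposure
}

def may_act_on_reading(feature, recognition_confidence):
    """Let a feature act on the camera's sign reading only above its risk threshold."""
    return recognition_confidence >= RISK_THRESHOLDS.get(feature, 1.0)

print(may_act_on_reading("sign_display_hud", 0.80))   # True  -- fine for a HUD hint
print(may_act_on_reading("speed_limiter", 0.80))      # False -- not enough to change speed
```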

In this example, the 3 was never fully formed into an 8. A human would have the opportunity to correctly read 35 from afar as well as from a closer distance. Maybe the 'OCR' determination was made when the car was too far from the sign. That determination needs to be repeated at multiple distances.
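
A rough sketch of that repeated determination (hypothetical function, made-up distances): accept a limit only when the sign reads the same at several distances, so a sticker that only 'works' up close never makes it through:

```python
from collections import Counter

def consensus_limit(readings_by_distance):
    """readings_by_distance maps distance-to-sign (meters) to the value the
    'OCR' produced at that distance; return a limit only on full agreement."""
    values = list(readings_by_distance.values())
    if len(values) < 3:
        return None                      # not enough looks at the sign yet
    value, count = Counter(values).most_common(1)[0]
    # Require unanimity across distances before acting on the reading.
    return value if count == len(values) else None

# Tampered sign: read correctly from afar, misread as 85 only when close up.
print(consensus_limit({80.0: 35, 40.0: 35, 15.0: 85}))   # None -> disagreement, don't act
print(consensus_limit({80.0: 35, 40.0: 35, 15.0: 35}))   # 35   -> consistent at every distance
```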

Then, the following legal question comes into play. What if the DOTr of a particular locale has the right, but also the custom, of temporarily 'correcting' signage on the spot by using tape? What if the DOTr agent is not a professional graphic designer? What if the legislation says that the driver needs to respect this temporary signage as if it were permanent? Who can tell what a human might or might not be able to distinguish using their fuzzy-matching biological apparatus? Is every driver equally able to distinguish those?

[–] chirogonemd 0 points 1 point (+1|-0) ago 

Fascinating stuff. Thank you for the back and forth.