[–] chirogonemd ago  (edited ago)

Even just a cursory glance made me think I might be using the wrong term here. Injection appears to mean the insertion of malicious code. But the Tesla guys didn't insert any code. They didn't change the way the computer was interpreting the world...they changed the world.

Imagine facial recognition software that permitted entry to only one guy, who had a moustache. If somebody without a moustache could trick the sensor by putting electrical tape above his lip, that wouldn't be hacking, would it? Granted, this would be very poor software, but it's just an example.

I just don't think every type of trick is also a type of hack.

The whole convo is interesting though. Definitely some philosophical/ethical implications in there.

[–] Deceneu 1 point (+1|-0) ago

> But the Tesla guys didn't insert any code.

Neither is that the case in SQL injection attacks.

The injected part is only a datum, just as a picture is.

An application vulnerable to SQL injection allows itself, under certain circumstances, to treat a datum as code. That is the equivalent of the ML/AI firmware in the camera module choosing, of its own volition, to interpret a particular datum (a picture) as code meaning "drive at 35 mph" or "drive at 85 mph".
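
A minimal Python sketch of that datum-versus-code distinction (the toy table, the attacker input, and the queries are all invented for illustration):

```python
import sqlite3

# Toy in-memory database, invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

attacker_input = "nobody' OR '1'='1"  # a datum that doubles as SQL code

# VULNERABLE: splicing the datum into the query string lets the SQL
# engine parse part of it as code, turning the WHERE clause into a tautology.
unsafe = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print(db.execute(unsafe).fetchall())  # [('alice',), ('bob',)] -- every row

# SAFE: a parameterized query keeps the datum a datum; the engine
# never re-interprets it as code.
safe = "SELECT name FROM users WHERE name = ?"
print(db.execute(safe, (attacker_input,)).fetchall())  # [] -- no such user
```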

In other words, treating the real world as indirect storage for your code was bound to be vulnerable to injection attacks.
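
By analogy, a deliberately naive toy sketch (entirely hypothetical, nothing like Tesla's actual pipeline) of what "the world as storage" means: the attacker edits the sign, not the software.

```python
# Hypothetical toy: a sign reader that trusts whatever digits its
# classifier extracts from the world and feeds them straight to the
# cruise controller as a command.
def set_speed_from_sign(perceived_digits: str) -> int:
    # The perceived datum is promoted directly to a control instruction.
    return int(perceived_digits)

print(set_speed_from_sign("35"))  # clean sign: cruise at 35 mph
# A strip of tape stretches the "3" so the classifier perceives "85".
# No code was inserted or modified -- the attacker edited the "storage"
# (the physical world) that the code reads its instructions from.
print(set_speed_from_sign("85"))  # tampered sign: cruise at 85 mph
```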

See my other comment as well: https://voat.co/v/technology/3666526/22591575

[–] chirogonemd 1 point (+1|-0) ago (edited ago)

Very, very interesting. So basically, the vulnerability stems from an inability to discriminate between incidental data and what ought to be code. The vulnerability really comes from an inherent weakness in the computer itself: the inability to store its own truth criteria, which sort of touches on Gödel, it seems.

What is fundamentally lacking is an understanding of its own nature. The computer is basically storing code indirectly in the real world, and it cannot discern whether something is true or false in the human context because it cannot discriminate between substrates.

That is, a human recognizes that the message to do 85 is false, but the machine cannot.

This is all really philosophically interesting just in the ways it relates to mind in general.

Computer science or the philosophy thereof is not my forte. Correct me if I am saying anything out of order.

But I definitely catch your point that my issue was treating the camera narrowly, in and of itself, without the bigger picture of the firmware. But what criteria would it use to determine whether the picture is legitimate "code" or not? I guess what I mean is: does the "fix" for the camera/AI involve better edge detection?