[–] CrazyInAnInsaneWorld 0 points 11 points (+11|-0) ago 

And by changing just a couple of pixels, you can change the hash of the picture while keeping the overall image intact. A computer doesn't look at images and see abstract things the way you and I do; all it sees is pixels. Switch those pixels up, and as far as the computer is concerned, it's looking at two different images.
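
Quick illustration (a Python sketch, assuming Pillow is installed; the filename is just a placeholder, not anything from the article):

```python
# Flip one channel of one pixel and the cryptographic hash no longer matches,
# even though the image looks identical to a person.
import hashlib
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
before = hashlib.sha256(img.tobytes()).hexdigest()

r, g, b = img.getpixel((0, 0))
img.putpixel((0, 0), ((r + 1) % 256, g, b))   # nudge one channel by one value

after = hashlib.sha256(img.tobytes()).hexdigest()
print(before == after)                        # False: two "different" images to the computer
```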

I see Oldfag Anon having an absolute field day with this...they could even start out with the images from The Fappening, just to add insult to injury.

[–] DickHertz 0 points 2 points (+2|-0) ago 

Using a hash would be very efficient, but as you say, change one pixel and that blows it. My guess is that they would use some kind of machine-learning system to flag similar photos, but it's still a pig in a poke.
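
If they went the "similar photos" route, it could look something like this (a rough sketch using the third-party imagehash package's perceptual hash; this is my guess, not whatever Facebook actually runs):

```python
# Perceptual hashes change very little under small edits, unlike SHA-256/MD5,
# so similarity is measured by Hamming distance instead of exact equality.
# Assumes the third-party "imagehash" and Pillow packages; filenames are placeholders.
from PIL import Image
import imagehash

reference = imagehash.phash(Image.open("flagged.jpg"))
candidate = imagehash.phash(Image.open("upload.jpg"))

distance = reference - candidate   # Hamming distance between the two 64-bit hashes
if distance <= 8:                  # threshold is a made-up number; picking it is the hard part
    print("flag for human review")
```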

[–] HeavyBrain ago 

That's when we use very similar photos that have nothing to do with the original, until the algorithm is flagging most pics and people start to reeee.

[–] FairDinkum 0 points 1 point (+1|-0) ago 

How do I swap pixels in a photo?

[–] CrazyInAnInsaneWorld ago 

See here.

[–] HeavyBrain 0 points 1 point (+1|-0) ago 

Yup, that's how I get around the double-post wall on /pol/.

Literally open it up in Paint, add one dot of the same color, and bam.
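
Same trick without Paint, if anyone wants to script it (a Python sketch assuming Pillow; filenames are placeholders):

```python
# Copy the image and nudge one corner pixel by one value: looks identical, hashes differently.
from PIL import Image

img = Image.open("original.png").convert("RGB")
r, g, b = img.getpixel((0, 0))
img.putpixel((0, 0), ((r + 1) % 256, g, b))
img.save("repost.png")   # saving as PNG keeps the change exact; re-encoding a JPEG shifts pixels anyway
```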

[–] InDifferent 1 point 1 point (+2|-1) ago 

Haven't read the article yet, but I'm sure it's not going to miss flagging a picture just because someone changed a few pixels. I would assume it figures out a percentage of similarity between pictures by looking at every pixel, comparing it to the original, and returning the percentage of pixels that are the same or close enough. It also wouldn't surprise me if it weighted any flesh-colored pixels as more important, to keep people from just changing the background of the image.
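
Roughly what I'm picturing, as a toy sketch (assumes Pillow and NumPy and same-sized images; the tolerance and the "flesh-colored" rule are made-up numbers for illustration, not anything from the article):

```python
# Toy pixel-by-pixel similarity: the fraction of pixels that are "close enough" to the original.
import numpy as np
from PIL import Image

a = np.asarray(Image.open("original.jpg").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("candidate.jpg").convert("RGB"), dtype=np.int16)

close = np.abs(a - b).max(axis=2) <= 10        # every channel within 10 values of the original
print(f"{close.mean() * 100:.1f}% of pixels match or nearly match")

# A crude "weight skin tones more" tweak: only score pixels that pass a rough RGB skin test.
skin = (a[..., 0] > 95) & (a[..., 1] > 40) & (a[..., 2] > 20) & (a[..., 0] > a[..., 2])
if skin.any():
    print(f"{close[skin].mean() * 100:.1f}% match within skin-toned regions")
```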

[–] CrazyInAnInsaneWorld 0 points 5 points (+5|-0) ago 

The problem with that is: how do you determine what counts as a "close enough match" to ban illicit pic-sharers without the collateral damage of banning false positives? Where do we set the threshold, 51% similarity = permaban? Better get ready for a fuckton of false positives and a LOT of pissed-off users going to other platforms. Also, if they change the shade of the colors, that in itself would throw off the values the database has for the picture.

Remember, computers don't see abstract concepts. They see colors, pixels, and other technical properties. They don't go, "Oh, that's a pair of tits"; they say, "This is a .jpg image, 250 KB, dimensions 1000 x 900, with a collection of pixels assigned certain hex color values." That's why they have to rely on community flagging/policing for "offensive content": abstract concepts such as "offensive" don't mean anything to a computer. Even if they made the database self-teaching and self-evolving to keep up with the pics the community flags, that doesn't safeguard against new mutations/iterations of the same content, and it opens up one more attack vector against the system via false-flagging.
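
To put a number on the shade point: a uniform brightness shift leaves the picture looking basically the same to a person, but a naive per-pixel comparison like the one sketched above barely matches anything anymore (same Pillow/NumPy assumptions, same made-up 10-value tolerance):

```python
# Shift every pixel up by 15 values: a minor shade change visually, but the
# "each channel within 10 of the original" check now fails almost everywhere.
import numpy as np
from PIL import Image

a = np.asarray(Image.open("original.jpg").convert("RGB"), dtype=np.int16)
b = np.clip(a + 15, 0, 255)                    # uniform shade/brightness shift

close = np.abs(a - b).max(axis=2) <= 10
print(f"{close.mean() * 100:.1f}% of pixels still 'match'")   # near 0% unless the photo is mostly blown-out whites
```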

My point being, there are ways aplenty around automated systems. This won't stop people determined to bypass them (just look at the number of people from these parts bypassing Reddit's shadowban system on a daily basis), and regular users, the same users who provide Facebook with its revenue, will be caught in the crossfire of this informational arms race. Zuck the Cuck is basically shooting up the barn with an AK to kill a rat...while the horses are all stabled.

Facebook forgets what happened to MySpace and disregards the possibility that it can easily become MySpace 2.0 as soon as a viable competitor arises. Stunts like this are only going to speed its demise once sites like https://minds.com and http://gab.ai get rolling at full speed. Much like Reddit and Digg, they've let their hubris get the best of them.