[–] MadWorld 0 points 6 points (+6|-0) ago 

Yes, using negative CCP as feedback to detect spamming is way too slow and undesirable, since it can get mixed up with unpopular opinions. I have thought about using a neural network or plagiarism detection. It sounds interesting at first, but it's really just a cat-and-mouse game: sooner or later the spammers will always find new ways to cheat the system. Human judgement will remain the best tool.

@PuttItOut, I would propose something like this for dealing with the spammers (a rough sketch of the flow follows the list):

  1. Use a threshold on the number of spam reports filed against a potential spammer.
  2. When that threshold is triggered, automatically generate a /v/ReportSpammers (or relevant subverse) submission along with the relevant info.
  3. Like in any other subverse, the users there vote, discuss, and decide whether the submitted report is accurate.
  4. If the report is correct (true), warn the user and ban persistent abusers.
  5. If the report is a lie (false), keep an abuse score on the users who abused the spam report. Keep the thread IDs/content if possible.
    1. When the threshold for this abuse score (spam-report abuse) is triggered, automatically generate another submission to the subverse for review, just like for any other spammer.
    2. If a user is confirmed to have repeatedly filed false spam reports, restrict his account like any other confirmed spammer.
    3. Relax this user's restriction when he stops filing false reports over a certain time period.
  6. If the report covers ambiguous content (uncertainty, the grey area) or the behaviour is not obvious enough to be classified as spamming, we should let it slide without any side effect.
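A minimal sketch of that flow in Python, just to make the branches concrete. Every name and threshold below (SPAM_REPORT_THRESHOLD, create_review_submission, and so on) is hypothetical and only illustrates the steps above; none of it is anything Voat actually has:

```python
# Rough sketch of the report -> review -> verdict flow described above.
# All names and thresholds are made up for illustration; this is not Voat code.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

SPAM_REPORT_THRESHOLD = 5            # reports before a review post is auto-generated
ABUSE_SCORE_THRESHOLD = 3            # confirmed false reports before the reporter is reviewed
ABUSE_COOLDOWN = timedelta(days=30)  # quiet period before a restriction is relaxed

@dataclass
class UserRecord:
    name: str
    spam_reports_received: int = 0
    abuse_score: int = 0                         # confirmed false reports filed by this user
    last_false_report: Optional[datetime] = None
    restricted: bool = False

def create_review_submission(user: UserRecord) -> None:
    # Step 2: auto-generate a /v/ReportSpammers post with the relevant info attached.
    print(f"[auto-post] community review requested for {user.name}")

def warn_or_ban(user: UserRecord) -> None:
    # Step 4: warn the user; persistent abusers end up restricted/banned.
    user.restricted = True
    print(f"[action] {user.name} warned/restricted")

def on_spam_report(target: UserRecord) -> None:
    """Steps 1-2: count reports and auto-post for review once the threshold is hit."""
    target.spam_reports_received += 1
    if target.spam_reports_received == SPAM_REPORT_THRESHOLD:
        create_review_submission(target)

def on_community_verdict(target: UserRecord, reporters: list, verdict: str) -> None:
    """Steps 3-6: act on whatever the subverse decides."""
    if verdict == "spam":          # report was correct (true)
        warn_or_ban(target)
    elif verdict == "false":       # report was a lie (false)
        for reporter in reporters:
            reporter.abuse_score += 1
            reporter.last_false_report = datetime.utcnow()
            if reporter.abuse_score >= ABUSE_SCORE_THRESHOLD:
                create_review_submission(reporter)   # step 5.1: review the abuser too
    # verdict == "ambiguous": the grey area of step 6 -- no side effect at all

def maybe_relax_restriction(user: UserRecord) -> None:
    """Step 5.3: lift the restriction after a quiet period with no false reports."""
    if user.restricted and user.last_false_report is not None \
            and datetime.utcnow() - user.last_false_report > ABUSE_COOLDOWN:
        user.restricted = False
        user.abuse_score = 0
```

The point is only that the true/false/ambiguous outcomes are each handled explicitly, and that the false-report path mirrors the spam path with its own threshold and community review.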

Something optional to keep users motivated, though I suspect voaters might not care much since they love Voat so much:

  • Reward the users who report spammers accurately, for their dedication and hard work. They can become the Protectors of Voat.

[–] PuttItOut [S] 0 points 7 points (+7|-0) ago 

I really like the idea of automatically making a post in v/reportspammers whenever a trigger level is hit. This is a very transparent way of verifying the accuracy of the code.

If we move to any sort of reporting system, we have already decided we will have to build a confidence interval for users. Done right, the system would be able to flag spam based on reports very quickly, depending on who is reporting the content and their history of reports vs. outcomes.
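To make the "who is reporting and what is their track record" part concrete: one common way to turn a reporter's history into a per-user confidence figure is the Wilson score lower bound on their accurate-report ratio, then weighting new reports by it. This is just an illustration of the idea, not anything Voat has built or committed to:

```python
# Illustration only: weight each spam report by the reporter's track record,
# using the Wilson score lower bound so brand-new accounts carry ~zero weight.
from math import sqrt

def reporter_confidence(accurate: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval on the reporter's accuracy."""
    if total == 0:
        return 0.0
    p = accurate / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

def spam_score(reporters: list) -> float:
    """Sum of reporter confidences for one item; flag it once this crosses a threshold."""
    return sum(reporter_confidence(acc, tot) for acc, tot in reporters)

# A veteran with 95 accurate reports out of 100 counts far more than
# any number of fresh accounts with no history at all.
print(round(reporter_confidence(95, 100), 2))   # ~0.89
print(reporter_confidence(0, 0))                # 0.0
print(round(spam_score([(95, 100), (0, 0), (0, 0)]), 2))  # still ~0.89
```

A scheme like this can still be gamed by farming a clean history first, which is exactly the accountability caveat below.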

This can also be gamed, so we will still have to have accountability and not trust the system fully.

[–] guinness2 0 points 3 points (+3|-0) ago 

But how would this solution cope with shit like:

  • bots that create a single new account just so they can maliciously downvote a random post in a sub or on the front page;

  • bots that create a single new account just to make random false reports to /v/ReportSpammers, so the mods are too busy dealing with fake reports to keep up with the real ones;

@MadWorld

[–] 10249524 1 points 0 points (+1|-1) ago 

That's why I mentioned some sort of flagging system, like what Cynabuns does now. Of course, if this is to replace downvotes as the basis for restrictions, Cynabuns won't be able to manage it alone.

At first I was thinking you could just give more people Cynabuns' janitor abilities in /v/ReportSpammers, but then I considered that we could use a certain upcoming feature to let the community decide on reports. That probably wouldn't be fast enough, though. I definitely think that auto-restricting based on a certain number of spam reports, without some kind of confirmation, would just result in the same issues we have now.