[–] PuttItOut [S] 0 points 7 points (+7|-0) ago 

I really like the idea of automatically making a post in v/reportspammers when any trigger level has been detected. This is a very transparent way of verifying the accuracy of the code.

If we move to any sort of reporting system, we have already decided we will have to build a confidence score for users. If done right, the system would be able to flag spam based on reports very quickly, depending on who is reporting the content and their history of reports versus outcomes.
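Roughly, a per-reporter confidence score of that kind could be computed something like the sketch below; the field names, the smoothing constants, and the example numbers are illustrative assumptions on my part, not anything from the actual codebase.

```python
# Illustrative sketch only: the fields (confirmed, total) and the smoothing
# constants are assumptions, not Voat's actual schema.

def reporter_confidence(confirmed: int, total: int, prior_weight: float = 10.0) -> float:
    """How much weight a user's spam reports should carry.

    A smoothed ratio of past reports that were confirmed as spam, so a
    brand-new reporter starts near a neutral 0.5 instead of 0 or 1.
    """
    baseline = 0.5
    return (confirmed + prior_weight * baseline) / (total + prior_weight)

def weighted_report_score(reporters: list[tuple[int, int]]) -> float:
    """Sum the confidence of everyone who reported a given item."""
    return sum(reporter_confidence(c, t) for c, t in reporters)

# Example: two reporters with good track records and one brand-new account
# flag the same post; the new account contributes much less to the score.
score = weighted_report_score([(40, 42), (15, 20), (0, 0)])
```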

This can also be gamed, so we will still have to maintain accountability and not trust the system fully.

[–] guinness2 0 points 3 points (+3|-0) ago 

But how would this solution cope with shit like:

  • bots that create a single new account just so they can maliciously downvote a random post in a sub or on the front page;

  • bots that create a single new account just to make random false reports to /v/ReportSpammers, so the mods are too busy dealing with fake reports to keep up with the real ones?

@MadWorld

[–] MadWorld 0 points 2 points (+2|-0) ago 

The new code base will have Vote Identity 2.7/2.8 built in to restrict the number of alt accounts that can vote on submissions and comments, assuming the bots have acquired a minimum of 100 ccp. With a few exceptions, I believe it won't be possible to simply keep creating new accounts to get around the barrier. When fake reports are identified, the bot accounts will be restricted or banned. This ban, in combination with Vote Identity 2.7/2.8, could be used to prevent bots from creating new alt accounts. Note that both spammers and false accusers can be restricted or banned.

In the case of upvoting/downvoting alone, which doesn't leave any trace of spamming, how would a bot acquire enough ccp to perform the downvote? It cannot earn enough ccp without making meaningful comments, and without that ccp it isn't permitted to downvote at all. If it is smart enough to do that, possibly using AI, it would be on the borderline between a legitimate user and a spammer.
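To make that concrete, here is a rough illustration of how the ccp gate and a per-identity vote limit might fit together. The identity key, the one-counted-vote-per-identity limit, and the function names are my assumptions; the actual Vote Identity 2.7/2.8 internals aren't spelled out in this thread.

```python
# Rough illustration only: Vote Identity 2.7/2.8 internals aren't public in
# this thread, so the identity key and the per-identity vote limit below are
# assumptions made for the sake of the example.
from collections import defaultdict

MIN_CCP_TO_DOWNVOTE = 100        # the ccp floor mentioned above
MAX_VOTES_PER_IDENTITY = 1       # assumed: alts sharing one identity count once

_votes: dict[tuple[str, int], int] = defaultdict(int)

def count_vote(identity_key: str, item_id: int, user_ccp: int, is_downvote: bool) -> bool:
    """Return True if the vote is counted, False if it is discarded."""
    if is_downvote and user_ccp < MIN_CCP_TO_DOWNVOTE:
        return False                              # not enough ccp to downvote
    key = (identity_key, item_id)
    if _votes[key] >= MAX_VOTES_PER_IDENTITY:
        return False                              # collapse votes from linked alts
    _votes[key] += 1
    return True
```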

[–] 10249524? 1 points 0 points (+1|-1) ago 

That's why I mentioned some sort of flagging system like what Cynabuns does now. Of course, if this is to replace downvotes as the basis for restrictions, Cynabuns won't be able to manage it alone.

At first I was thinking that you could just give more people Cynabuns' janitor abilities in /v/ReportSpammers, but then I considered that we could use a certain upcoming feature to let the community decide on reports. That probably wouldn't be fast enough, though. I definitely think that auto-restricting based on a certain number of spam reports, without some kind of confirmation, would just result in the same issues we have now.

[–] MadWorld 0 points 2 points (+2|-0) ago  (edited ago)

I didn't suggest auto-restricting based on the number of spam reports. I suggested generating a submission to the /v/ReportSpammers subverse for review when the spam reports pass a certain threshold. Automated restriction is fine when the confidence level (as Putt called it) is high, but human review will be the most accurate, and the logs will keep users accountable should they decide to abuse or corrupt the review process.
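As a sketch of the flow I mean (the threshold, the confidence bar, and the helper functions here are placeholders I made up, not real Voat internals):

```python
# Sketch of the review flow described above; the numbers and the helper
# functions are hypothetical placeholders, not actual Voat internals.

REPORT_THRESHOLD = 5              # assumed trigger level
AUTO_RESTRICT_CONFIDENCE = 0.95   # assumed bar for acting without a human

def create_submission(subverse: str, title: str, body: str) -> None:
    """Placeholder: a real system would post this to the subverse."""
    print(f"/v/{subverse}: {title}\n{body}")

def restrict_account(username: str, reason: str) -> None:
    """Placeholder: a real system would apply the restriction here."""
    print(f"restricting {username}: {reason}")

def handle_reports(username: str, report_count: int, confidence: float) -> None:
    if report_count < REPORT_THRESHOLD:
        return                                    # below the trigger level, do nothing
    # Always generate a public submission so every trigger is logged and reviewable.
    create_submission("ReportSpammers",
                      f"Automated report: {username}",
                      f"{report_count} reports, confidence {confidence:.2f}")
    if confidence >= AUTO_RESTRICT_CONFIDENCE:
        restrict_account(username, "high-confidence automated spam detection")
    # Otherwise leave the account alone and let human reviewers decide.
```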