[–] 10246470? 27 points (+30|-3) ago

:D

Spam is an issue and we don't want it overrunning the website. But at the same time you're right, these restrictions have been inhibiting people who have done nothing wrong but share too many unpopular opinions, and it isn't in the spirit of Voat.

We should consider what tools we have available. The /v/ReportSpammers community is hard-working and dedicated to keeping Voat free of spam, and it is a community very capable of growing. Spam is against Voat's rules; accounts that spam get permanently banned from the website. We determine that accounts are spamming by responding to user reports against specific accounts, evaluating their comments/submissions, and then deciding whether they have indeed spammed. If they have, you eventually ban them. I think that's the basic process.

Waiting for a spammer to accrue negative CCP is actually relatively slow. What we could do instead is this: if an account receives spam reports, and one of the trusted community members in /v/ReportSpammers marks the report as actual spam, then upon that marking the account could be restricted until such time as you or someone else is able to review the reports and ban the guilty users.

As far as I am aware this follows the same process as right now, except it will not restrict any account's commenting ability based on CCP, only on confirmed spam reports. As I understand it, this should restrict guilty accounts much faster than negative CCP would have, without restricting non-spam accounts. All we require is a sufficiently large and trusted group of report markers within the community, plus enough awareness in the Voat community at large to file spam reports instead of just downvoting in the first place.
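
To make that concrete, here is a minimal sketch of the flow I have in mind, assuming trusted markers and an admin review queue. Every name below (`TRUSTED_MARKERS`, `on_report_marked_as_spam`, and so on) is invented for illustration; none of it is Voat's actual code:

```python
# Hypothetical sketch of the report -> trusted-marker -> restriction flow.
# Every name here is a stand-in; none of this is Voat's actual code.

TRUSTED_MARKERS = {"Cynabuns", "NeedleStack"}   # community-elected markers

admin_review_queue = []

class Account:
    def __init__(self, name):
        self.name = name
        self.restricted = False
        self.banned = False

def on_report_marked_as_spam(account, marker):
    """A trusted /v/ReportSpammers member marked a report as genuine spam."""
    if marker not in TRUSTED_MARKERS:
        return                          # ordinary marks restrict no one
    account.restricted = True           # takes effect immediately, no CCP involved
    admin_review_queue.append(account)  # an admin confirms the ban or lifts this

def on_admin_review(account, is_spammer):
    if is_spammer:
        account.banned = True           # permanent ban, same as current policy
    else:
        account.restricted = False      # innocent accounts lose nothing permanent
```

The key point of this design is that restriction keys off confirmed spam reports rather than CCP, so a downvoted-but-honest account never enters this path at all.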

The community at large can vote on who they want/trust to mark reports as actual spam, and we can keep those who have been doing a perfect job already (namely @Cynabuns; I'm sure @NeedleStack would do well also).

I can adjust anything I've written above for feasibility reasons, but I think some interpretation of this will work well for Voat without punishing the innocent.

[–] Crensch 11 points (+14|-3) ago

Spam is an issue and we don't want it overrunning the website. But at the same time you're right, these restrictions have been inhibiting people who have done nothing wrong but share too many unpopular opinions, and it isn't in the spirit of Voat.

The problem is that some of these "unpopular opinions" are actually paid-for opinions.

I know people with unpopular opinions who don't garner downvotes. I see it happen all the time, actually.

The ones with downvotes were rude, or expected everyone to agree with them without supporting their position. Or they were MSM narratives that are very obviously manufactured and being espoused by suspicious usernames.

[–] 10248263? 6 points (+7|-1) ago

I don't disagree with any of that, but at the same time we cannot say with certainty that a bunch of people behaving like autists, or aggressively, and espousing unpopular opinions are necessarily paid shills. Are we not, who possess free speech, strong enough to refute their baseless claims without limiting the number of claims they can make per day? If they spam their paid viewpoints they will get banned for spam; if they manipulate votes so that they can downvote they will get banned for manipulation. But if they are just commenting as much as any other user and they happen to get downvoted for it, what justification do we really have for restricting their speech? We are stronger than that, and they are too weak to be worth restricting.

[–] Jixijenga -5 points (+0|-5) ago

paid for

Prove that. I want you to prove every single allegation of this nature to be, without a doubt, completely true.

I can't begin to count the number of times I was accused of being CTR and then ShareBlue simply because I'm not a far-right nutjob and didn't agree with the thread's circlejerk. A bunch of downvoats because I didn't accept the tired narrative pushed by morons parroting an old post on Stormfront.

These unpopular opinions are often fabled to be the work of whatever bullshit bogeyman morons cook up, but I rarely see any evidence supporting that.

[–] MadWorld 6 points (+6|-0) ago

Yes, relying on negative CCP to determine spamming is way too slow and undesirable, since it can be mixed up with unpopular opinions. I have thought about using a neural network or plagiarism detection. It sounds interesting at first, but it's really just a cat-and-mouse game: sooner or later, spammers will always find new ways to cheat the system. The human element will remain the best judge.

@PuttItOut, I would propose something like this for handling spammers (a rough code sketch follows the list):

  1. Use a threshold on the number of spam reports filed against a potential spammer.
  2. When that threshold is triggered, automatically generate a /v/ReportSpammers (or relevant subverse) submission, along with the relevant info.
  3. As in any other subverse, the users vote, discuss, and decide the accuracy of the submitted report.
  4. If the report is correct (true), warn the user and ban persistent abusers.
  5. If the report is a lie (false), keep an abuse score on the users who abused the spam report. Keep the thread IDs/content if possible.
    1. When the threshold for this abuse score is triggered, automatically generate another submission to the subverse for review, just like for any other spammer.
    2. If a user who falsely and repeatedly reported with the spam button is identified, restrict his account, like any other confirmed spammer.
    3. Relax this user's restriction when he stops filing false reports over a certain time period.
  6. If the reported content is ambiguous (uncertain, the grey area) or not obvious enough to classify as spam, we should let it slide without any side effect.
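
Here is a rough sketch of those steps. All thresholds and names are invented placeholders, just to show the shape of the logic:

```python
# Rough sketch of the numbered steps; all thresholds and names are invented.

SPAM_REPORT_THRESHOLD = 10   # step 1: reports before an auto-submission
ABUSE_SCORE_THRESHOLD = 5    # step 5.1: false reports before reviewing reporter

spam_reports = {}            # reported user -> list of (reporter, thread_id)
abuse_scores = {}            # reporter -> count of confirmed-false reports

def file_spam_report(reporter, reported_user, thread_id):
    reports = spam_reports.setdefault(reported_user, [])
    reports.append((reporter, thread_id))                  # keep thread ids
    if len(reports) >= SPAM_REPORT_THRESHOLD:
        create_submission("ReportSpammers", reported_user, reports)  # step 2

def on_community_verdict(reported_user, verdict):
    """verdict is 'true' (spam), 'false' (a lie), or 'ambiguous' (grey area)."""
    reports = spam_reports.pop(reported_user, [])
    if verdict == "true":
        warn_or_ban(reported_user)                         # step 4
    elif verdict == "false":
        for reporter, thread_id in reports:                # step 5
            abuse_scores[reporter] = abuse_scores.get(reporter, 0) + 1
            if abuse_scores[reporter] >= ABUSE_SCORE_THRESHOLD:
                create_submission("ReportSpammers", reporter, reports)  # 5.1
    # 'ambiguous' falls through with no side effect        # step 6

def create_submission(subverse, user, evidence):
    print(f"auto-post to /v/{subverse}: review {user}, evidence: {evidence}")

def warn_or_ban(user):
    print(f"warn {user}; ban if a persistent abuser")
```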

Something optional to keep users motivated, though I suspect voaters might not care much, since they love Voat so much:

  • Reward the users who report spammers accurately, for their dedication and hard work. They can become the Protectors of Voat.

[–] PuttItOut [S] 7 points (+7|-0) ago

I really like the idea of automatically making a post in v/reportspammers when any trigger level has been detected. This is a very transparent way of verifying the accuracy of the code.

If we move to any sort of reporting system, we have already decided we will have to build a confidence score for users. Done right, the system would be able to flag spam based on reports very quickly, depending on who is reporting the content and their history of reports vs. outcomes.
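
Roughly what I'm picturing, though nothing is decided; every number and name below is made up for illustration:

```python
# Sketch only: weight each spam report by the reporter's track record,
# i.e. their history of reports vs. outcomes. All numbers are illustrative.

def reporter_confidence(confirmed, total):
    """Fraction of a user's past reports that were confirmed as real spam."""
    if total == 0:
        return 0.05        # brand-new reporters carry very little weight
    return confirmed / total

def should_auto_flag(report_histories, threshold=1.5):
    """Auto-flag content once the summed confidence of its reporters is enough."""
    return sum(reporter_confidence(c, t) for c, t in report_histories) >= threshold

# Two reporters with near-perfect histories can flag content almost instantly:
print(should_auto_flag([(98, 100), (45, 46)]))   # True  (~0.98 + ~0.98)
# Twenty brand-new accounts brigading the button cannot:
print(should_auto_flag([(0, 0)] * 20))           # False (20 * 0.05 = 1.0)
```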

This can also be gamed, so we will still have to have accountability and not trust the system fully.

[–] SexMachine 6 points (+7|-1) ago

I like this idea; I was actually about to suggest something similar: give someone trusted a limited admin account to restrict accounts that have been reviewed and confirmed to be posting spam.

Maybe even have an automatic limiter in place that flags new accounts posting 20+ comments/links within an hour for review as well.
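
Something like this, maybe (the one-hour window is from my suggestion above, but the 30-day cutoff and every name in the sketch are just guesses at reasonable values):

```python
# Invented numbers: flag new accounts posting 20+ comments/links in an hour.
import time

POSTS_PER_HOUR_LIMIT = 20
NEW_ACCOUNT_AGE_SECONDS = 30 * 24 * 3600    # "new" = under ~30 days old, say

recent_posts = {}    # account name -> timestamps of posts in the last hour

def on_post(account_name, account_created_at, now=None):
    now = now if now is not None else time.time()
    timestamps = recent_posts.setdefault(account_name, [])
    timestamps.append(now)
    # keep only the last hour of activity
    recent_posts[account_name] = [t for t in timestamps if now - t < 3600]
    is_new = (now - account_created_at) < NEW_ACCOUNT_AGE_SECONDS
    if is_new and len(recent_posts[account_name]) >= POSTS_PER_HOUR_LIMIT:
        flag_for_review(account_name)   # queue for human review, not auto-ban

def flag_for_review(account_name):
    print(f"{account_name}: 20+ posts in an hour from a new account")
```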

[–] 10246743? 8 points (+10|-2) ago

The best part about this suggestion is that whoever is doing the marking will still be accountable the way that all Voat mods are. Every action they take will be logged publicly for the community to see. If ever there is a "Spam Flagger" who steps out of line, remove them and replace them with someone else. Very simple, and no one innocent has to suffer.

[–] absurdlyobfuscated 3 points (+3|-0) ago

Maybe even have an automatic limiter in place that flags new accounts posting 20+ comments/links within an hour for review as well.

Yes please!

And a possible alternative for dealing with spam flooding that would be friendlier to new users could involve having the system automatically report sudden surges of posts or comments. I'd envision something that detects lots of posts from the same domain, user, or IP address, or any other metric that identifies a common source. All those reports would then end up in a queue along with the stuff that gets lots of user reports, and a human could review the domain/user/etc. and filter or ban as appropriate. The important part is that certain actions should set off alarms so they can be dealt with; flooding and a high number or percentage of spam reports should both be easy to detect.
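
Something along these lines, maybe. A sliding window per source; every threshold and name below is mine, not anything that actually exists:

```python
# Sketch: detect surges of posts from the same domain, user, or IP address.
# All thresholds and names are invented for illustration.
from collections import defaultdict, deque
import time

SURGE_WINDOW_SECONDS = 600   # look at the last 10 minutes
SURGE_THRESHOLD = 15         # posts from one source within that window

events = defaultdict(deque)  # source key -> timestamps of recent posts
review_queue = []            # same queue the user-filed reports feed into

def record_post(domain, user, ip, now=None):
    now = now if now is not None else time.time()
    for source in (f"domain:{domain}", f"user:{user}", f"ip:{ip}"):
        q = events[source]
        q.append(now)
        while q and now - q[0] > SURGE_WINDOW_SECONDS:
            q.popleft()                    # drop events outside the window
        if len(q) >= SURGE_THRESHOLD:
            review_queue.append(source)    # a human filters or bans from here
```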

If an automatic system is set up so that some kind of rate limit is imposed when an account starts flooding, then when a report goes to the spam review process the reviewer should also have the option to flag that source as not spam and at least temporarily remove the restrictions on it. That way you aren't stuck with reddit-like restrictions and don't end up with new accounts begging for karma to get around the filters. I really like ideas like PeaceSeeker's that don't negatively impact normal users, and I think we can build something that deals with the spam problem while not being all that obtrusive or putting up too many barriers.
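
So the review action would carry both verdicts, something like this (invented names again, just the shape of the idea):

```python
# Sketch: the reviewer's verdict can lift an automatic rate limit.
# Invented names; this only shows the shape of the idea.

rate_limited = set()    # sources currently throttled by the flood detector
cleared = set()         # sources a human reviewer has marked as not spam

def review_flagged_source(source, is_spam):
    if is_spam:
        ban_or_filter(source)
    else:
        rate_limited.discard(source)   # lift the temporary restriction
        cleared.add(source)            # don't keep re-flagging the same surge

def ban_or_filter(source):
    print(f"filter or ban {source}")
```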

[–] Andalusian1 3 points (+5|-2) ago

Wonderfully put