[–] KingoftheMolePeople 9 points 198 points (+207|-9) ago  (edited ago)

Remove restrictions from negative accounts. Put in place a Spam button. Once an account has X number of Spam button reports, account restrictions go into effect. To prevent abuse, if the restrictions are refuted ("I am not spamming"), an investigation follows, and anyone found to be abusing the Spam button faces consequences, from restrictions of their own up to a full site ban.
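
That threshold-plus-refutation flow could be sketched like this (the threshold value and function names are illustrative assumptions, not an actual implementation):

```python
# Hypothetical sketch of "X reports -> restrictions, unless refuted and cleared".
SPAM_REPORT_THRESHOLD = 10  # the "X" in "X number of Spam button reports"

def should_restrict(report_count, cleared_by_investigation):
    """Restrict once reports hit the threshold, unless an investigation cleared the user."""
    if cleared_by_investigation:
        return False  # the reports were refuted and found to be abusive
    return report_count >= SPAM_REPORT_THRESHOLD
```

The interesting design question is what happens to the reporters when `cleared_by_investigation` turns out to be true, which the rest of the thread digs into.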

[–] PuttItOut [S] 0 points 83 points (+83|-0) ago 

We should discuss this option in detail.

[–] KosherHiveKicker 0 points 45 points (+45|-0) ago 

Put in place a Spam button. Once an account has X number of Spam button reports, acct restrictions go into effect

Such an option could easily be exploited by people known to have multiple alt/burner accounts, which could be used to hit "the X number of Spam button reports" with little effort.

[–] MagicalCentaurBeans 0 points 24 points (+24|-0) ago 

I suspect the cracks will show when whoever is tasked with investigating is swamped with "I didn't do it" refutations.

[–] KingoftheMolePeople 0 points 11 points (+11|-0) ago 

It's a totally spur-of-the-moment idea. There are certainly things I haven't considered and problems I've overlooked.

[–] Liber 1 points 5 points (+6|-1) ago 

Have a site-wide ‘potential spammer’ flair for people who have been reported for spam more than X times. Upon evaluation, you could remove this flair if the user is a victim of report-button abuse.

[–] SuperConductiveRabbi 0 points 36 points (+36|-0) ago 

Sounds like a big workload for the admins. And if users are placed into these positions, they can become like Reddit powerusers.

[–] [deleted] 0 points 20 points (+20|-0) ago 


[–] Zanbato 0 points 1 points (+1|-0) ago 

It doesn't have to be work for admins. There could be a volunteer subvoat section where people could review cases.

[–] fortyfiveacp 0 points 1 points (+1|-0) ago 

Either way, it takes a lot of "eyes" to make something like this happen. Maybe there can be a formula for judging content that makes it easier. For instance, comments that are primarily emotional in nature have less inherent value than objective statements.

[–] blipblipbeep 0 points 23 points (+23|-0) ago 

If this becomes a thing, there should be a record of who pressed the spam button.


[–] KingoftheMolePeople 0 points 25 points (+25|-0) ago 

A public log would be pretty cool. I think that alone would help squash abuse.

[–] TheDaoReveals 0 points 3 points (+3|-0) ago 

Seems like a double edged sword.

[–] PeaceSeeker 3 points 12 points (+15|-3) ago 

I have suggested this in the past, and I think it is close to the best solution. However, as you've identified, people can abuse the button, meaning innocent people can very easily be shut down if ten or so people cooperate to "report them for spam". Notice how you've said "if the restrictions are refuted" -- well, who refutes them? A team of trusted community members, surely -- Putt can't do it himself.

Well, if we're going to have a team on Voat dedicated to flagging spam reports as real or not (we have this already, by the way, with /v/ReportSpammers, and they do fantastically), then we might as well alter their approach. Instead of applying restrictions after X number of reports, apply restrictions after the "refutation team" has flagged the report as legitimate spam. That way no innocent account will be wrongly restricted (unless the refutation team messes up, but they will be accountable for that, it will be easier to keep track of, and historically they've been good at not messing up as far as I can tell).

[–] heygeorge 0 points 14 points (+14|-0) ago 

The problem with "trusted users" is that every trusted/prolific user here becomes subject to FUD attacks.

[–] 9347723491 5 points 13 points (+18|-5) ago 

A team of trusted community members, surely

ABSOLUTELY FUCKING NOT, are you fucking high?

[–] KingoftheMolePeople 0 points 6 points (+6|-0) ago 

Notice how you've said "if the restrictions are refuted" -- well who refutes them?

What I meant was that if I get tagged as spam, I refute that I am spamming. The team then needs to verify what is true. But other than that yeah.

My idea just shifts the work from being done pre-restrictions (as now) to post-restrictions. Right now, ReportSpammers looks, verifies, and bans the spammer; under my idea, verification happens after automatic restrictions.

[–] glennvtx 0 points 2 points (+2|-0) ago 

We can avoid "trusted users" by creating a spam button that can only be used X number of times in a period; this period could increase or decrease based on a number of factors. When a post is marked as spam, it could be minimized for a specified period, and its spam button could indicate that the community has marked it as spam, possibly allowing you to "unspam".

[–] Rainy-Day-Dream 1 points 60 points (+61|-1) ago 

can you give the users more control of the site? like being able to voat out power mods?

[–] PuttItOut [S] 0 points 77 points (+77|-0) ago 

Soon. (and this time I mean it) Soon soon.

[–] Slayfire122 0 points 25 points (+25|-0) ago 

Soon soon tm

[–] Citizen 1 points 11 points (+12|-1) ago 

I've thought about this a great deal. In my opinion, the best solution would be a declaration that a subverse belongs to its users and not to its mods. On That Other Site, subs belong to the mods to do with as they please. By stating that subs belong to the users, abuse of mod powers becomes something that can be acted upon.

With that said, I'm not sure how to handle a sudden influx of users who have the intent of disrupting a community. For example, let's say /v/powerboats has an active community that loves all things nautical, including sailboats, and not just powerboats. Let's say /v/motorboats mass subscribes to /v/powerboats and tries to oust the existing community. I'd say the community that previously existed should have priority, even if the new community has numerical superiority.

Although I would imagine this would be a lot less of a problem than a single rogue mod saying, "I own you."

[–] inthetimeofnick 2 points 5 points (+7|-2) ago 

Can we vote out admins as well? :p

[–] BezM8_5o 1 points 5 points (+6|-1) ago 

This is an excellent thing!

[–] SuperConductiveRabbi 0 points 27 points (+27|-0) ago 

Pure democracy is easily subverted by factions. We should trust only the admins with the ability to decide whether or not a mod is acting against the interests of his community.

[–] Tsilent_Tsunami 0 points 12 points (+12|-0) ago 

Suppose a positive vote result mandated admin investigation and action on the issue, instead of an outright user-generated ban.

[–] 10247645 0 points 10 points (+10|-0) ago 

This. I support Voat as a site for FREE speech, not hive-minded echo chamber speech from a single specific ideology that currently makes up the majority of the userbase due to Reddit refugees.

[–] cultural_dissenter 0 points 3 points (+3|-0) ago 

There is something to be said for segregated spaces.

[–] Rainy-Day-Dream 0 points 0 points (+0|-0) ago 

Then make it a republic and place restrictions on who can vote in the system.

[–] HarveyKlinger 1 points 15 points (+16|-1) ago  (edited ago)

^^^^^^ THIS GUY ^^^^^^ gets it. So many subs got cucked and there's nothing we can do about it; v/Chicago comes to mind. We already know that transferring subs is almost impossible here, so there needs to be a way to un-cuck a sub. Hell, I applied to take over a dead sub months ago, and nothing ever came of it.

[–] cultural_dissenter 0 points 10 points (+10|-0) ago  (edited ago)

That's an interesting concept.

Allow subs with a certain number of active individuals to have confidence voats, where the users who meet certain criteria (subscriber for X time, made Y posts, have at least Z CCP) are permitted to vote.

If 51% of the users vote in favour of the current moderators, nothing changes, and another confidence voat can't be called for three months. If the confidence voat fails, the sub loses all its moderators and becomes available for request. When a potential mod comes forward, they get a couple of months, and then another confidence voat is held.

I think /u/PuttItOut should leave spamming to the mods, and if the mods cause problems, let the users deal with them.
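
The confidence-voat mechanism above could be sketched roughly as follows (the X/Y/Z eligibility defaults here are made-up placeholders, not proposed values):

```python
def eligible_voter(days_subscribed, posts, ccp,
                   min_days=30, min_posts=5, min_ccp=50):
    """The 'subscriber for X time, made Y posts, at least Z CCP' test.
    The default thresholds are illustrative only."""
    return days_subscribed >= min_days and posts >= min_posts and ccp >= min_ccp

def mods_survive(votes_for, votes_against):
    """Mods keep the sub if at least 51% of eligible voters express confidence."""
    total = votes_for + votes_against
    return total > 0 and votes_for / total >= 0.51
```

Counting only eligible voters is what keeps a sudden mass-subscription raid from flipping a sub, which ties back to the /v/powerboats concern above.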

[–] Rainy-Day-Dream 1 points 0 points (+1|-1) ago 

I think as long as we restrict its use to active participants of the sub in question, it'd be a very nice thing to have.

[–] ExpertShitposter 4 points 4 points (+8|-4) ago 

I want to voat out @system from v/whatever. I need that sub for my CSS experiments.

[–] Trigglypuff 4 points 2 points (+6|-4) ago 

I second this motion! @system's reign of terror is over!

[–] 123_456 2 points 35 points (+37|-2) ago  (edited ago)


I've got an idea that could work, but I don't know if you can implement it. Please patent this for our website so Reddit can't use it. Ha-ha.

This is how it goes:

1 - Comments, and submissions have a report spam button. People click that button.

2 - The report goes to the sidebar, or another area of the website, and it asks RANDOM users if it's spam. They can voluntarily click to verify if it's spam. This way the power to judge something isn't in the hands of a few people.

3 - If a user's account gets too many spam verifications, they will be banned. However, if they want to, they can appeal and ask to be unbanned.

4 - Of course, what will motivate random users to moderate? Users will get points and badges for identifying spam.

So this is almost like the system we have now, except moderation is in the hands of many users. It's not necessarily bulletproof, but it leads us away from certain groups, and from a few powerful people ganging up on an individual.
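
Steps 2 and 3 of this proposal boil down to random sampling plus a confirmation count. A minimal sketch, with made-up names and thresholds:

```python
import random

def pick_reviewers(active_users, k=5, seed=None):
    """Step 2: choose k random users so no fixed clique controls the verdict."""
    rng = random.Random(seed)
    return rng.sample(list(active_users), k)

def verdict(confirmations, needed=3):
    """Step 3: ban (pending appeal) once enough independent reviewers confirm spam."""
    return sum(confirmations) >= needed
```

The random draw is the whole point: an attacker with a handful of alts can't guarantee their accounts land in the review pool.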

[–] PuttItOut [S] 0 points 18 points (+18|-0) ago 

We have the beginning stages of publicly viewable reports here: https://voat.co/v/all/about/reports

[–] dooob 1 points 2 points (+3|-1) ago 

What do you think about making votes public? We could click on the vote count and see who voted up and who voted down.

[–] TrumpTheGodEmperor 0 points 7 points (+7|-0) ago 

I like the idea of decentralized moderation for simple things like spam verification.

[–] euthanizethepoors 0 points 3 points (+3|-0) ago  (edited ago)

Look, I can write a bot that creates a new account every time the spam limit is exceeded and I receive notification that the spamming account has been stopped. The barrier to overcoming this type of system is very low. You're going to have to shadowban these accounts, without notifying the user, if you want this to work out even in the short term.

[–] nadrewod 0 points 0 points (+0|-0) ago 

The problems with that system become "how do we keep it out of the hands of power-hungry moderators who would use it against anyone they disagree with?" and "how do we make sure that normal users are almost entirely unable to be banned in this way?".

No system is ever going to be 100% perfect for 100% of the cases. Voat is trying to focus on Freedom of Speech (as a direct response to sites like Reddit, Twitter, and Facebook focusing on Censoring "Hate Speech"/Wrongthink), so we're willing to deal with a little more spam and unfriendly behavior if it means that we are more likely to have free and open discussions on any topic we need to have a discussion about. If we try to focus more and more on dealing with the spam, we'll slowly move towards site-wide censorship and start an arms race between the spam-bot creators and our anti-spam software that will only result in either "the site getting overwhelmed with spam via high-quality spambot accounts", "the company going bankrupt after spending all of their money on new anti-spam R&D rather than their main product", or "the company offering ways for advertisers to directly pay the company in order to let the advertisers run shill bots on the site as they pleased".

[–] ForgotMyName 0 points 1 points (+1|-0) ago 

A lot of spam is cut & paste. I think we can do a lot to combat spam simply by factoring in a user's history when a report comes through.

[–] nadrewod 0 points 0 points (+0|-0) ago 

Voat already checks to see if comments you made were literally just "Cut and pasted" into the comment box, particularly if you've done it shortly after a previous comment.
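
A naive version of that duplicate check might look like this (a sketch only; names and the window size are assumptions, and a real check would also normalize whitespace, links, and so on):

```python
def looks_like_paste_spam(new_comment, recent_comments, window=5):
    """Flag a comment that exactly duplicates one of the user's last few comments."""
    recent = recent_comments[-window:]  # only compare against the recent window
    return new_comment.strip() in {c.strip() for c in recent}
```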

[–] AnTi90d 0 points 32 points (+32|-0) ago  (edited ago)

I 100% do not support the removal of restrictions from negative CCP accounts.

The first thing that will happen is the CTR/ShareBlue cunts come back and endlessly post their outright lies, again. If you let those people have fully enabled accounts, you're just asking them to act in concert and upvote each other's posts. (They all work in one office. It's easy for them to organize behind the scenes.)

The next thing that opens Voat up to is hostile takeovers, like the one r/The_Donald has outright said they will attempt again, or r/SRS, who never really gave up.

Taking away negative-CCP restrictions sounds great and altruistic on paper, but that's one of the main things that has protected Voat from organized groups of subversive cunts who aim to control this place and change it to their own whim. You aren't helping new users; you are only helping our enemies attack us and attempt to gain control of the site to change the entire culture of Voat.

I like this place, as it is, and I'm not even a Christian or a Republican. I like being able to stand up and say, "NIGGER-FAGGOT," at my whim. The people that this change will help most are the people that fully intend on downvoting every instance of what they consider, "offensive speech." First, they're allowed to exist and post as much as they want, then they organize a large group to upvote eachother, then they use that large group of now positive CCP accounts to impose their own brand of cultural Marxism on this site, just as they have done, everywhere else.

This is going to be your greatest mistake, @PuttItOut. This could potentially be the beginning of the end of Voat as far as your current userbase is concerned. If you want to make life easier for new users, that's one thing, but unrestricting negative-CCP accounts only serves to empower Voat's enemies.

[–] PuttItOut [S] 0 points 24 points (+24|-0) ago 

I think my track record is pretty good when it comes to not ruining Voat. I don't plan on starting now.

I've read your comment and I agree with many things you say. I fully understand the forces at work here and I'm not planning on giving them a win.

If we can't develop a better solution, we leave it as is. I seek only improvements, and this is an area I believe we can improve.

You'll notice that I'm soliciting feedback and not acting like a dictator. If I had malicious intent, I wouldn't come to the community and get feedback well before making any decisions.

[–] AnTi90d 0 points 6 points (+6|-0) ago 

In no way am I trying to imply that you have any malicious intent. I place immeasurable value on you and your time helping to keep Voat alive, and I thank you for that. I am also grateful that you voice your future plans with us instead of just changing things and telling us what you did after the fact.

I just think that the removal of restrictions from negative CCP comes from a place of hopeful, good-natured optimism, and that is the kind of sentiment that has come back to bite many a nation.

[–] Laurentius_the_pyro 0 points 5 points (+5|-0) ago 

I second this, negative CCP restrictions are the only thing preserving voat culture in the face of the horde of "squeak" /r/the_donald and other degenerates.

[–] PeaceSeeker 2 points 5 points (+7|-2) ago 

The first thing that will happen is the CTR/ShareBlue cunts come back and endlessly post their outright lies, again. If you let those people have fully enabled accounts, you're just asking them to act in concert and upvote each other's posts. (They all work in one office. It's easy for them to organize behind the scenes.)

Upvote manipulation, especially with alt accounts, is going to be limited on the new Voat and it is also a bannable offense. Also lying is not grounds for censorship.

[–] Crensch 1 points 11 points (+12|-1) ago 

The first thing that will happen is the CTR/ShareBlue cunts come back and endlessly post their outright lies, again.

Also lying is not grounds for censorship.

No, but saturating this place with obvious lies, and manufacturing popular support will reduce the amount of real estate for legitimate users. If I have to scroll through 3 paid-for 'opinions' for each decent comment I want to read, it won't be long before I find the juice isn't worth the squeeze.

For every paid-for opinion, the amount of visible real estate for legitimate users shrinks, and it will continue to shrink until the non-paid-for opinions stop showing up, because it's not worth fighting for the small spaces left.

[–] SuperConductiveRabbi 1 points 32 points (+33|-1) ago  (edited ago)

Will Amalek, SGIS, and his/their alts still be banned?

The way I see it, if negative CCP no longer rate limits accounts, the only thing keeping subverses clear will be moderator action. We've seen what overactive mods did to Reddit.

Even in this thread one of his alts (-448, and he's earned a net negative of CCP in the thousands) is celebrating the cessation of limitations. Consider too that people who wish to detract from a free speech forum are most likely going to have time and resources that casual users will not; they can operate dozens of accounts (as we've seen in Amalek's case), either because they're multiple individuals or because he's a spergelord who goes through manic episodes.

If one earns a punishment, is that really an undeserved limitation?

[–] PuttItOut [S] 1 points 25 points (+26|-1) ago 

I don't want to ban anyone. I'd rather have a universal system in place that governs all users equally.

Voat will continue banning spamming accounts and spammed domains.

[–] adhdferret 0 points 3 points (+3|-0) ago 

That right there is why you are the boss.

You have principles, and I must say you don't compromise them.

[–] Caesarkid1 0 points 2 points (+2|-0) ago 

There is an ignore/block function that could also be used to cut down on the spam. There are multiple ways it could be used as such. There could be a "default" ignored group, consisting of individuals who have been ignored/blocked by the vast majority of regular users, which could be toggled off in the account settings if one so chooses.

There could also be "ignore" sets that could be "subscribed" to that could be curated by mods/admins.

I think the best option is allowing the individual user to have the maximum control over the content they wish to see without having to censor the content from others who just may wish to see it.

This would be a hands-off method of dealing with spam which allows for the most personal freedom and keeps us away from the whole danger of authoritarian censorship.

[–] teatime 0 points 0 points (+0|-0) ago 

The only way to do that is to exclude human bias from the process. I see nothing wrong with a system that checks for specific keywords, CAPS, bold, large font, frequency of posts, and frequency of comments that determines if someone is spamming.

People make too many irrational decisions but machines don't care who you are or what your sob story is. A fair system doesn't include people.
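
A crude version of that mechanical check could combine the signals mentioned (keywords, CAPS, posting frequency) into a score. The keywords and thresholds below are made up for illustration, not actual rules:

```python
def spam_score(text, posts_last_hour, keywords=("buy now", "click here")):
    """Heuristic spam score from keyword hits, CAPS ratio, and posting frequency."""
    score = 0
    lowered = text.lower()
    score += sum(kw in lowered for kw in keywords)  # one point per keyword hit
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.7:
        score += 1  # shouting in mostly CAPS
    if posts_last_hour > 20:
        score += 1  # unusually high posting frequency
    return score
```

The appeal of this approach is exactly what the comment says: the same rules apply to everyone, with no human bias in the loop.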

[–] glennvtx 0 points 0 points (+0|-0) ago 

In addition to a "spam button" we could enable sub owners to have the option of +V people, akin to IRC. Only users "voiced" could post in that particular sub.

[–] Andalusian1 2 points 6 points (+8|-2) ago 

Mods can be held accountable for banning/removing comments that are not spam. We wouldn't need this if people only downvoted spam and not people they disagreed with.

[–] PuttItOut [S] 0 points 10 points (+10|-0) ago 

This brings up a good point.

We have a report feature now that is underutilized.

[–] VieBleu 0 points 1 points (+1|-0) ago  (edited ago)

It takes a lot of work to hold mods accountable, often with negligible results. Work to the point of unreasonable burden. At least at the PG forum, this has been the experience.

EDIT to add - Here is a thread that overwhelmingly shows the community calling for a minimum of 100 points to comment, due to the level of shill attacks on the forum. https://voat.co/v/pizzagatewhatever/2030995

The mods are fully aware of this vote, yet take no steps to implement change or even address it. Just a massive shoulder shrug and a return to their pizza party.

[–] Tor1 0 points 0 points (+0|-0) ago 

SCR you are bringing up a very important issue.

There are worse things on Voat than spammers.

Spammers have been around forever, and there are all kinds of tried-and-true ways to deal with them. The only decision is one of cost versus benefit: how much spam are you willing to tolerate?

The majority of the internet sees us as an enemy. If policies are changed so they can come here and attack Voat more easily, it won't be long before they do so in droves. Current Voat users who value this forum could quickly find themselves in the minority here.

If the defensive tools in place are subject to some kind of gun control and confiscated in the name of free speech idealism, who knows what would happen.

Another user who is more damaging than spammers is the Amalek Sane Goat spergelord.

The important issue isn't whether there's any kind of punishment, but whether your peaceful mom-and-pop users are free to navigate the subverse paths of Voat.

Already, leftist mob behavior of all sorts erupts from time to time.

Don't repost that. Don't link to the New York Times. Don't use Imgur. Don't mention Reddit. Don't use YouTube; use HookTube. Use archives; don't directly link to certain sites. Don't post a girl with boobs spilling out of her swimsuit without an NSFW tag. And on and on.

These are all valid arguments, but the way they are constantly shoved in everyone's faces is not warranted. For most users, it degrades the utility and enjoyment of the site.

Look at how every time Putt comments he hears about Discord, or sob stories about free speech being limited. Maybe Putt just wanted to use the site to talk about his golf game or his struggles with the site.

Does it make sense that no one can have a moment's peace without some spergelord getting hysterical about whatever issue he sees as being of vital, DEFCON-level, immediate-action, sitewide concern?

I don't see why anyone needs to be banned right now. It seems like things are pretty good for the most part. The only problem I see is there isn't as much good content in v/all as there could be. But isn't that always the case?

[–] Schlomo_Kikenburger 9 points -2 points (+7|-9) ago 

Everyone in the negatives isn't the same person, nigger. You leave this emotional comment with no proof, simply because you don't like anyone who doesn't agree with you.

And speaking of sperging, that is a lot of projection. I can screenshot the multiple daily submission and comment pings from SBBH users and kevdude and hecho that have been going on for a while now. I don't even respond, but they still do it.

[–] PeaceSeeker 3 points 27 points (+30|-3) ago 


Spam is an issue and we don't want it overrunning the website. But at the same time you're right, these restrictions have been inhibiting people who have done nothing wrong but share too many unpopular opinions, and it isn't in the spirit of Voat.

We should consider what tools we have available. The /v/ReportSpammers community is very hard-working and dedicated to keeping Voat free of spam, and it is a community very capable of growing. Spam is against Voat's rules; accounts that spam get permanently banned from the website. We determine that accounts are spamming by responding to user reports against specific accounts, evaluating their comments/submissions, and then deciding if they have indeed spammed. If they have, they eventually get banned. I think that's the basic process.

Waiting for a spammer to accrue negative CCP is actually relatively slow. What we could do instead is this: if an account receives spam reports, and one of the trusted community members in /v/ReportSpammers marks the report as actual spam, then upon that marking the account could be restricted until such time as you or someone else is able to review the reports and ban the guilty users.

As far as I am aware this follows the same process as right now, except it will not restrict any account's commenting ability based on CCP, only on confirmed spam reports. As I understand it this should restrict guilty accounts much faster than negative CCP would have, without restricting non-spam accounts. All we require is a sufficiently large and trusted report marker section of the community, and then the awareness of the Voat community at large to place spam reports instead of downvotes in the first place.

The community at large can vote on who they want / trust to mark reports as actual spam, and we can keep those who have been doing a perfect job already (@Cynabuns namely. I'm sure @NeedleStack would do well also).

I can adjust anything I've written above for feasibility reasons but I think some interpretation of this will work for Voat well without punishing the innocent.

[–] Crensch 3 points 11 points (+14|-3) ago 

Spam is an issue and we don't want it overrunning the website. But at the same time you're right, these restrictions have been inhibiting people who have done nothing wrong but share too many unpopular opinions, and it isn't in the spirit of Voat.

The problem is that some of these "unpopular opinions" are actually paid-for opinions.

I know of people with unpopular opinions that don't garner downvotes. I've seen it happen all the time, actually.

The ones with downvotes were rude, or expected everyone to agree with them without supporting their position. Or they were MSM narratives that are very obviously manufactured and being espoused by suspicious usernames.

[–] PeaceSeeker 1 points 6 points (+7|-1) ago 

I don't disagree with any of that, but at the same time we cannot say with certainty that a bunch of people behaving autistically or aggressively and espousing unpopular opinions are necessarily paid shills. Are we not, who possess free speech, strong enough to refute their baseless claims without limiting the number of claims they can make per day? If they spam their paid viewpoints, they will get banned for spam; if they manipulate votes so that they can downvote, they will get banned for manipulation -- but if they are just commenting as much as any other user and happen to get downvoted for it, what justification do we really have for restricting their speech? We are stronger than that, and they are too weak for us to need to restrict them.

[–] MadWorld 0 points 6 points (+6|-0) ago 

Yes, feedback based on negative CCP is way too slow for determining spam, and undesirable since it gets mixed up with unpopular opinions. I have thought about using neural networks or plagiarism detection. It sounds interesting at first, but it's really just a cat-and-mouse game: sooner or later, the spammers will always find new ways to cheat the system. Human judgment will remain the best defense.

@PuttItOut, I would propose something like this toward the spammers:

  1. Use a threshold on the number of spam reports against a potential spam user.
  2. Automatically generate a /v/ReportSpammers (or relevant subverse) submission, along with relevant info, when that threshold is triggered.
  3. Like in any other subverse, the subverse users vote, discuss, and decide the accuracy of the report being submitted.
  4. If the report is correct (true), give warnings to the user and ban persistent abusers.
  5. If the report is a lie (false), keep an abuse score on users who abused the spam report. Keep the thread IDs/content if possible.
    1. When the threshold for this abuse score (spam-report abuse) is triggered, we can automatically generate another submission to the subverse for review, just like for any other spammer.
    2. If a user who falsely and repeatedly reported with the spam button is identified, restrict his account, like any other confirmed spammer's.
    3. Relax this user's restriction when the user stops abusing false reporting over a certain time period.
  6. If the report concerns ambiguous content (uncertainty, the grey area) or content not obvious enough to be classified as spam, we should let it slide without any side effects.

Something optional to keep the users motivated, though I suspect voaters might not care much, since they love Voat so much.

  • Reward the users who reported the spammers accurately. Reward them for their dedication and hard work. They can become the Protectors of Voat.
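
The symmetric half of this proposal (steps 1-2 and 5) fits in a few lines. A sketch with placeholder thresholds; nothing here is Voat's real logic:

```python
REPORT_THRESHOLD = 5   # step 1: reports needed before auto-posting for review
ABUSE_THRESHOLD = 3    # step 5.1: false reports tolerated before the reporter is flagged

def maybe_open_review(report_count, already_open):
    """Steps 1-2: auto-generate a /v/ReportSpammers submission at the threshold."""
    return report_count >= REPORT_THRESHOLD and not already_open

def record_false_report(abuse_scores, reporter):
    """Step 5: bump the reporter's abuse score when their report was judged false;
    return True once they should themselves be reviewed like a spammer."""
    abuse_scores[reporter] = abuse_scores.get(reporter, 0) + 1
    return abuse_scores[reporter] >= ABUSE_THRESHOLD
```

The key property is that the same review pipeline handles both spammers and report abusers, which is what makes the system self-policing.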

[–] PuttItOut [S] 0 points 7 points (+7|-0) ago 

I really like the idea of automatically making a post in v/reportspammers when any trigger level has been detected. This is a very transparent way of verifying the accuracy of the code.

If we move into any sort of reporting system, we have already decided we will have to build a confidence interval for users. If done right the system would be able to flag spam based on reports very quickly depending on who is reporting content and their history of reports vs outcomes.

This can also be gamed, so we will still have to have accountability and not trust the system fully.
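
One way to read "confidence interval for users... history of reports vs outcomes" is a smoothed accuracy estimate per reporter. A sketch under that assumption (the prior and weights are invented for illustration):

```python
def reporter_confidence(confirmed, total, prior=0.5, prior_weight=4):
    """Smoothed accuracy estimate from a reporter's confirmed vs total reports.
    New reporters start near the neutral prior; history dominates as it grows."""
    return (confirmed + prior * prior_weight) / (total + prior_weight)

def weighted_report_score(reports):
    """Sum reporter confidences, so content flagged by trusted reporters
    crosses a review threshold faster than content flagged by unknowns."""
    return sum(reporter_confidence(c, t) for c, t in reports)
```

This also captures the caveat in the comment: a reporter who games the system by filing bad reports sees their weight decay toward zero, but the scores still need human accountability behind them.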

[–] SexMachine 1 points 6 points (+7|-1) ago 

I like this idea; I was actually about to suggest something similar: give someone trusted a limited admin account to restrict accounts that have been reviewed and found to be posting spam.

Maybe even have an automatic limiter in place for new accounts posting 20+ comments/links within an hour, flagging them for review as well.

[–] PeaceSeeker 2 points 8 points (+10|-2) ago 

The best part about this suggestion is that whoever is doing the marking will still be accountable the way that all Voat mods are. Every action they take will be logged publicly for the community to see. If ever there is a "Spam Flagger" who steps out of line, remove them and replace them with someone else. Very simple, and no one innocent has to suffer.

[–] absurdlyobfuscated 0 points 3 points (+3|-0) ago 

Maybe even have an automatic limiter in place for new accounts that are posting 20+ comments/links within an hour to be reviewed as well.

Yes please!

And a possible alternative for dealing with spam flooding that would be friendlier to new users could involve having the system automatically report sudden surges of posts or comments. I'd envision something that would detect lots of posts from the same domain, user, or IP address -- or, if other metrics exist to detect the same source, it could use those. All those reports would then end up in a queue along with the stuff that gets lots of user reports, and a human could review the domain/user/etc. and filter or ban as appropriate. The important part is that certain actions should set off alarms so they can be dealt with; both flooding and a high number/percentage of spam reports should be easy to detect.

If an automatic system is set up so some kind of rate limit is imposed when an account starts flooding, then when a report goes to the spam review process it should also have the option to flag that source as not spam and at least temporarily remove the restrictions on it. That way you aren't stuck with reddit-like restrictions and wouldn't end up with new accounts begging for karma to get around the filters. I really like ideas like PeaceSeeker's that don't negatively impact normal users, and I think we can build something that can deal with the spam problem and at the same time not be all that obtrusive or put up too many barriers.
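
The surge detection described here is essentially a sliding-window rate counter per source. A minimal sketch (class and parameter names are hypothetical):

```python
from collections import deque

class FloodDetector:
    """Flag any source (user, domain, or IP) that exceeds a rate inside a time window."""

    def __init__(self, max_events, window_seconds):
        self.max_events = max_events
        self.window = window_seconds
        self.events = {}  # source -> deque of recent event timestamps

    def record(self, source, timestamp):
        """Return True when this event pushes the source over the limit,
        i.e. it should be queued for human review."""
        q = self.events.setdefault(source, deque())
        q.append(timestamp)
        while q and q[0] <= timestamp - self.window:
            q.popleft()  # drop events that fell out of the window
        return len(q) > self.max_events
```

Because the flag only queues the source for review (rather than banning outright), the "mark as not spam and lift restrictions" escape hatch described above fits naturally on top.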

[–] Andalusian1 2 points 3 points (+5|-2) ago 

Wonderfully put