This has been trying to coalesce in my mind for at least a year. I think it's close enough -- and the societal environment seems ripe enough for it -- that it's time to lay out my thoughts.
First off, a quote to provide a conceptual starting point:
"The Internet regards censorship as damage and routes around it."
--Andrew Anglin, 2016
That statement struck me very deeply. I think Anglin really put his finger on something of profound importance, and it leads us toward the solution. But the solution to what? I'll answer that briefly before getting to my idea(s).
All of my life, I've seen the major mass media and the government in joint control of the views of the masses. Trying to overcome their dominance has always been so difficult that few try it and fewer ever succeed. We've grown up conditioned into "learned helplessness". This is not a new observation, by a long shot; many have made it, and it's been discussed and memed into the ground for decades now ("sheeple" being one buzzword, in semi-popular use for many years now, born of exactly this observation).
With this control "matrix" in place, the people were led around by the nose by those in positions of power and influence -- and that sucked, in terms of individual freedom, self-determination, and other (meaningless?) principles and ideals. But it did have one sort-of-good side: society had some cohesion provided by this control matrix; we all "knew" (even if we were all deceived) what "the news" was, and thus "what was happening" that was worth knowing about.
The Internet has busted that arrangement up fairly well, at this point. Yay!!! (It's not entirely complete, obviously, but the mass media are increasingly seen as "billionaires' blogs" and no longer as anything authoritative, and they are gradually dying off as a result.) However, in ditching the censorship and gatekeeping, we are also leaving behind the sense of intellectual cohesion which the Establishment provided. So now we are forming a bunch of "info-tribes", and it is in this environment that the recent Establishment meme-weapon of the phrase "fake news" was launched.
While the strategic wisdom of that move remains to be seen (I believe it will fail and eventually bite them HARD on the ass, but it's having some effect, at least for now), everyone now hears the phrase continually. Why is that? Is it purely because the big media have pushed it so hard? I think that's one reason, but there's a second one: there actually is a need which the virtual "Wild West" has not yet met. That need is as follows:
Joe and Jane Everyperson need some way to get through their day/life without having to dedicate half their waking hours to sifting through endless barrages of information, all of unknown reliability. "The news" filled that role, in the past. Now they can see that "the news" is not reliable after all. But who/what can/will play the role of "truth quality control" in a post-censorship society?
We need an answer to that question. Since we don't yet have one, the Establishment's dire warnings about the dangers of "fake news" have some degree of legitimate traction with people. Our lack of any effective tool(s) to meet this need leaves a weakness the Establishment is trying to exploit. Their answer ("trust us, and give us money!") is nonsense, of course, but we don't have a better one... yet. So people are forced to choose among four bad options:
(a) go back to the "loving" embrace of Big Brother Media,
(b) get randomly snookered by an endless stream of dubious info from unknown sources,
(c) join one info-tribe or another and hope they tell the truth, or
(d) move to a shack and become a hermit.
I propose creation of another option. I think -- or at least hope -- that all the technical underpinnings are available now, but nobody AFAIK has put them all together yet.
Here are some components we already see in operation:
- Voting systems, such as here on Voat, and also on IMDb, Reddit, YouTube, Gab, etc.
- Trust/identity vouching, like with Web of Trust
- Trend identification, like on Reddit, Gab, and Twitter
- Blockchain technology, like with Bitcoin
- Open source tech (for just about everything/anything)
- Universal ability to contribute analysis of any page, like with Disqus
I don't personally understand blockchain tech very well at all, but I believe it offers a lot of decentralization and resistance to tampering, both of which are going to be important for my idea.
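To make the tamper-resistance part concrete: the core trick behind blockchains can be sketched in a few lines. This is a deliberately minimal hash chain of my own devising (no consensus, no decentralization, not a real blockchain), just to show how altering any past vote becomes detectable:

```python
import hashlib

def chain_votes(votes):
    """Build a tamper-evident log: each entry's hash covers the previous
    entry's hash, so altering any past vote changes every hash after it."""
    entries = []
    prev_hash = "0" * 64  # placeholder "genesis" hash
    for vote in votes:
        digest = hashlib.sha256((prev_hash + vote).encode()).hexdigest()
        entries.append({"vote": vote, "prev": prev_hash, "hash": digest})
        prev_hash = digest
    return entries

def verify(entries):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for e in entries:
        expected = hashlib.sha256((prev_hash + e["vote"]).encode()).hexdigest()
        if e["prev"] != prev_hash or e["hash"] != expected:
            return False
        prev_hash = e["hash"]
    return True
```

Anyone holding a copy of the log can re-run `verify` and catch an edit to any earlier vote, which is the property that matters here.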
So, keeping the above tech list in mind, here is my idea.
Picture a widget or add-on for a browser. The widget is small in terms of on-screen size. Mousing over it pops up a larger interface with some options for interaction, which I'll get to shortly.
What does the widget show the user? Conceptually, it shows the user how much confidence others have in the page/site being shown. Graphically, this could be handled in various ways, and I don't know the best one. Maybe a green light and a red light, each one flashing at a rate corresponding with how many "reliable" or "unreliable" votes the page/site has received? Maybe a numerical rating? Maybe both, or something else?
This probably sounds just like yet another riggable voting system, right? So far, but here is where we can start really making things interesting.
Instead of just one overall aggregate rating, I propose that this system would track each user's votes per that user's account and make those votes visible to everyone else -- provided that the user allows that, naturally.
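As a sketch of what such a per-user, visibility-controlled vote record might look like (the names and fields here are my own assumptions, not a spec):

```python
from dataclasses import dataclass

@dataclass
class Vote:
    user: str     # the account that cast the vote
    url: str      # the page/site being rated
    score: int    # +1 = reliable, -1 = unreliable
    public: bool  # the user decides whether others may see this vote

def visible_votes(votes, url):
    """Only the votes on a page that their casters chose to publish."""
    return [v for v in votes if v.url == url and v.public]
```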
Hmm... does that just sound like Big Brother starting to peep in? Well, what about this though:
Each user can designate which other users they trust, and how much weight to assign to that other user's ratings. Further, the system can be told to go beyond those immediately trusted, in a "degrees of separation" scheme, to produce additional ratings derived from much larger sets of votes.
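A rough sketch of how "weighted trust with degrees of separation" could be computed (the function name, the per-hop decay scheme, and the data shapes are all my own assumptions, not a settled design):

```python
def weighted_rating(me, trust, votes, url, max_hops=2, decay=0.5):
    """Rate a page from one user's point of view.

    trust: {user: {trusted_user: weight in 0..1}}  -- each user's trust list
    votes: {user: {url: +1 or -1}}                 -- reliability votes

    Direct trustees count at their assigned weight; users reached through
    further hops are discounted by `decay` per extra hop.
    """
    weights = {}             # best effective weight found per user
    frontier = {me: 1.0}
    for hop in range(max_hops):
        hop_decay = 1.0 if hop == 0 else decay
        next_frontier = {}
        for user, w in frontier.items():
            for other, edge in trust.get(user, {}).items():
                if other == me:
                    continue  # don't loop back to ourselves
                eff = w * edge * hop_decay
                if eff > weights.get(other, 0.0):
                    weights[other] = eff
                    next_frontier[other] = eff
        frontier = next_frontier
    total = score = 0.0
    for user, w in weights.items():
        vote = votes.get(user, {}).get(url)
        if vote is not None:
            score += w * vote
            total += w
    return score / total if total else None  # None = nobody I reach has voted
```

Raising `max_hops` pulls in the larger derived vote sets described above, while `decay` keeps distant strangers from outweighing the people you trust directly.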
Well... I don't know if anybody has done that yet. It seems like a new idea to me. Is it crazy, or brilliant?
Further, user lists could be created and passed around. For example, celebrities' lists would interest many people; if you follow, say, current events in Russia, wouldn't you love to know who's on Putin's list? Or Donald Trump's, or Bill Clinton's, or your favorite analyst's? Of course, you might want to see such a list while still distrusting it yourself. No problem: the system can expose these lists' ratings via the mouse-over menu rather than automatically incorporating them into your main rating scheme. So they stay quarantined, but visible. Then if you find them to be reliable, you can always add them in (or remove them again) at any time.
Perhaps this system could link into Disqus in case there are comments there from users on one's lists. Maybe highlight a list containing a user who has also posted a Disqus comment on the current page. Maybe that's too much; just brainstorming.
As for the rating activity itself, I propose that a simple "trust/distrust" rating would be of value, but a more nuanced system would be of more value, provided that users actually used it. (We have to weigh ease of use against the value of the information gathered; ungathered information may have great potential value, but it's useless until collected, and people are lazy, so make it easy.)
So what will we see when we mouse over the widget? Some things could be:
- various kinds of ratings we could put in, like interesting, timely, useful, offensive, potentially criminal, etc. -- many attributes possible
- an option for extending the "degrees of separation" (I assume the default would be few to reduce processing/data load)
- user lists options, like searching for new ones, adding/removing lists
- an option to issue an anonymous/alt-account rating (useful for controversial/sensitive material)
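Pulling the menu ideas above into one sketch, a single rating record might carry the simple trust score plus optional attribute tags and an anonymity flag. The tag vocabulary and field names here are purely illustrative:

```python
from dataclasses import dataclass, field

# illustrative attribute vocabulary -- entirely hypothetical
TAGS = {"interesting", "timely", "useful", "offensive", "potentially_criminal"}

@dataclass
class Rating:
    url: str
    score: int                              # +1 trust / -1 distrust: the simple core
    tags: set = field(default_factory=set)  # optional nuance, kept cheap to skip
    anonymous: bool = False                 # cast via an alt identity instead

    def __post_init__(self):
        unknown = self.tags - TAGS
        if unknown:
            raise ValueError(f"unknown tags: {unknown}")
```

The point of the design is that the one-click trust/distrust vote stays mandatory and easy, while everything else is optional, so lazy users still feed the system.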
What really makes this exciting to me is that we have an absolutely overwhelming ace up our sleeves, if we can just figure out how to play it. That ace is the collective value of each and every one of us poring over all the web content we see every day. At present, the impressions derived from doing that are almost all WASTED! The loss of potential value is stunning. If we can find a way to harvest that value and make it available to us all, I believe we will have solved the problem I identified at the start of this posting.
This does not, I think, automatically solve the "info-tribes" issue. What it does do is prepare us all to fight that battle with some fog-cutting goggles on. It allows us to start identifying the real sources of "fake news", both free-range and embedded within larger entities, and effectively neutralize their ability to trick busy people by simply appearing authoritative. Simultaneously, those individuals within those larger entities who really do try to tell the truth about things will (I expect, anyway) be recognized within such a merit-oriented social system.
TL;DR: we are the solution to the problem we're facing (of life in a post-gatekeeper information age). We just have to build the tool(s) needed to tap into ourselves effectively.