[–] a_fucking_dude 1 points 3 points (+4|-1) ago 

Actually, there are much better solutions to this problem and others. It might take a while (like a year) to lay it out clearly and formally, but I will eventually explain it to everyone who will listen. We can cut everyone but bandwidth providers out of the picture, and keep even the bandwidth providers in the dark and in their place: as servants, not masters.

[–] karaz 0 points 1 points (+1|-0) ago 

I was pondering this very thing the other day. You got any pre-alpha explanation of your idea?

[–] a_fucking_dude 0 points 1 points (+1|-0) ago 

I want to refine the concept to the point that I can give a 2-3 paragraph exposition on it before I talk about it at all. Otherwise, I'm afraid I'll either make myself sound like a delusional teenager, or I'll hand the concept to someone who will misuse it.

[–] obvious-throwaway- [S] 1 points 0 points (+1|-1) ago  (edited ago)

The bandwidth provider in my area is publicly owned and receives federal funding. I also live in a very red area where a majority of people believe in the First Amendment and free speech. This publicly owned, federally funded entity wouldn't have a leg to stand on if it attempted to subvert the First Amendment by restricting free speech.

Sure, there may come a day when Wi-Fi signals are strong enough that we could create our own user-distributed network connecting communities and cities through a distributed radio system, but the process I describe above could be implemented in a matter of months if enough people put some effort into it. I'm not saying it would be a worldwide phenomenon with millions of websites in a week, but even if a few hosts and creators got together to build a simple video upload site and a news site, it would be a good test of the model.

[–] ShinyVoater 0 points 2 points (+2|-0) ago 

A big problem with your idea is that you can't just replicate dynamic websites willy-nilly; there needs to be some sort of 'true' server for the database. Static websites are a more-or-less solved problem, but that's not much different from classical filesharing.

[–] obvious-throwaway- [S] 0 points 0 points (+0|-0) ago  (edited ago)

You are correct, but even if this entire process were at first only workable for static websites, it would still allow us to create a free distribution network of thousands of static websites. The options for a static website could range from a one-time sync to a scheduled sync to a real-time push sync (meaning it re-syncs every time the content is updated).

Even if such a system were created and shared across replication servers, it could easily be used for video, image, news, and blog hosting. Instead of linking to videos on YouTube that later get taken down, people would upload those videos to these distributed networks. A parsing tool could be added to websites to link to content hosted on these distributed services.

For example, let's say you want to link to a video: you submit the URL that references it, such as https://youtube.com/video/10003. These new links would be set up by redistribution servers and would look like an assortment of IP addresses, like so: 138.32.0.3.50.5.75.10.38.223.36.13.1.74.96.32.63.133.231.55.9.0.2.44... and so on. The parsing tool Voat uses would simply choose one of the IPs from the list at random, and if that one was unavailable, it would attempt to connect to another, and another, and so on. The only way to stop that link from working would be to take down every single distribution server. Even then, a distribution network could have thousands of replicators while any given link includes only 50 of them; every generated link would be a different random assortment of 50. It would take incredible effort for DDoS attackers or governments to take them all down simultaneously, especially if people started setting up distribution servers all over the world.
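A minimal sketch of that random-fallback link parser, in Python. The mirror list and the `fetch` callback are illustrative assumptions, not part of any existing tool:

```python
import random

def fetch_from_mirrors(mirrors, fetch):
    """Try the mirrors in random order; return the first successful result.

    `fetch` is a caller-supplied function that returns the content for a
    given IP, or raises OSError when that mirror is unreachable.
    """
    for ip in random.sample(mirrors, len(mirrors)):
        try:
            return fetch(ip)
        except OSError:
            continue  # this mirror is down; try the next one
    raise ConnectionError("all mirrors unreachable")
```

The point is the property described above: the link only stops working if every mirror in the list is down at once.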

With all that said, it still wouldn't be impossible to run a service like Voat in such a system. Here's how:

First, yes, it would take a single website/database host to provide near real-time commenting. A person wanting to host a site like Voat would need the money, the hardware resources, and to live in an area with high-speed Internet that allows website hosting. Say I need six servers: two to distribute content, two to receive content, and two to store the database, two of each for redundancy. Next, I would purchase four gigabit fiber connections, one for each incoming and outgoing server, again for redundancy. The two outgoing servers would constantly deliver updated content to the redistribution servers, the two incoming servers would constantly receive data from all of the redistribution servers, and the two database servers would process and store all the data.

The trick would be to split your IPs between distributors in order to hide your public IP addresses: give one set of distributors one IP address and another set a second. That way, if a honeypot were set up to disclose the actual public IP for a DDoS attack, it would hopefully only expose one of the IPs and not the other. Also, the idea of distribution is that you don't push to every server personally; you push to a few, those replicate to others, and so on. It's a chain of servers passing information along, with only a handful of servers actually talking to the four public-facing servers running Voat.

In many ways it would still be susceptible to being shut down, but it would be much more difficult than it is today. Right now, anyone in the chain can shut down a website: the webhost, the domain registrar, Google (by delisting), the SSL certificate authority, or even a DDoS-protection provider, as happened to at least one controversial site that was dropped by its protection service.

I'm not saying it would be easy, or that trial and error wouldn't be required for long-term use; attackers would obviously try to overload the system by flooding it with messages. But anti-spam tools could be created, even distributed lists of IP addresses used in spamming attacks. None of these problems are much different from the ones we face today, and they are more about getting better at detecting spam than about building a free, distributed, uncensored world wide web.
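One way a distributed blocklist of spam IPs could work, as a hedged Python sketch. The merge-by-union policy and function names are my assumptions, not an established protocol:

```python
def merge_blocklists(*peer_lists):
    """Union the blocklists published by several peers into one local set."""
    blocked = set()
    for peer_list in peer_lists:
        blocked.update(peer_list)
    return blocked

def is_allowed(ip, blocked):
    """Accept traffic only from IPs absent from the merged blocklist."""
    return ip not in blocked
```

Each replication node would publish the attacker IPs it has seen, and its peers would fold those into their own local sets before accepting traffic.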

[–] ShinyVoater 0 points 0 points (+0|-0) ago 

On static websites, you're overcomplicating it: Freenet is a mature tool that handles them pretty damn well (assuming you don't mind if it takes longer to load than it takes to get home during rush hour). The way it works, every page has a unique ID that your node uses to ask its peers to serve the page; if they don't have it in their data store, they ask their peers, and so on, until the file is found and forwarded to you or the maximum number of hops is exceeded. In addition, each site has a separate ID that can be used to locate the latest version.
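The hop-limited lookup described above can be modeled in a few lines of Python. This is a toy model, not Freenet's actual protocol; the node structure and the cache-on-the-way-back behavior are assumptions for illustration:

```python
class Node:
    """Toy node: a local data store plus a list of peers."""

    def __init__(self, peers=None):
        self.store = {}
        self.peers = peers or []

    def request(self, key, hops_left=5):
        """Return the page for `key`, asking peers recursively if needed."""
        if key in self.store:
            return self.store[key]
        if hops_left <= 0:
            return None  # maximum number of hops exceeded
        for peer in self.peers:
            page = peer.request(key, hops_left - 1)
            if page is not None:
                self.store[key] = page  # cache on the way back
                return page
        return None
```

Because each forwarding node caches what it returns, popular pages migrate toward the nodes that request them, which is also roughly how load spreads in such networks.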

On delisting, there's no way around it: any directory or search engine can refuse to list or show any site it disagrees with (Freenet's big indexes, for instance, refuse to show any CP sites).

And getting to your vision of dynamic sites: what you're describing is called load balancing, a well-established concept. While it provides resilience against DDoS attacks, it doesn't hide anything, and you still need a unified backend to host the actual resources.

[–] SpottyMatt 0 points 2 points (+2|-0) ago 

This exists and has been in the works for years. Just start using IPFS.

[–] obvious-throwaway- [S] 0 points 1 points (+1|-0) ago  (edited ago)

Definitely the right direction, but if I were going to show off the power of distributed public web hosting, I wouldn't do it from a traditionally hosted website using a YouTube video. His website should have been a link to a site using the technology he is showcasing, and it should have featured hundreds of links to people using the system. Otherwise, it's just a website showcasing a theoretical product that no one, including IPFS, seems to be actively using.

The process of setting up the initial website is going to have to be orders of magnitude easier for the 9-to-5 factory worker who just wants to bitch about the government. Ideally, the best solution would be to modify the Tomato router firmware so that a router can become an optional web hosting device. Almost all modern routers already have a web server built in, as that's the main way of accessing and controlling them, so it's not a stretch by any means. There are also cheap and simple products designed to host websites from home, such as Synology.

Something as simple as this: a person goes to a website and reads the instructions for flashing the Tomato firmware onto their router. Once updated, they log into their router's web page and click a link that says "Create website". A few more options appear, like "Blog", "News", or "Photo Gallery". They pick one, set up a simple website, and plug a USB flash drive into the router for extra storage. Once the website is set up, a new link appears on the router's web page: "Publish Website". They click it, and the router scans for known online replication directories. The user then chooses which directories they want to submit their website to, and whether they want a one-time sync, a push sync (on content change), or a live sync. They then wait for the people running those directories to review and either accept or deny their website. If accepted, the user can see which servers agreed to replicate the website and go view the content themselves.
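The "push sync (on content change)" option could be implemented by comparing content hashes, so only changed files get re-sent to the replicators. A minimal sketch, assuming the site is represented as a mapping of paths to file bytes (the function names are illustrative):

```python
import hashlib

def site_manifest(files):
    """Map each file path to a SHA-256 digest of its content."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def changed_paths(old_manifest, new_manifest):
    """Paths that are new, or whose content changed since the last sync."""
    return sorted(path for path, digest in new_manifest.items()
                  if old_manifest.get(path) != digest)
```

The router would keep the manifest from the last successful sync and, whenever the user edits the site, push only the paths that `changed_paths` reports.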

The replication process could be kept simple as well. A person downloads a custom image with all the necessary replication software and installs it on a server. They set up and register their beacon with other directories and wait for requests to come in from people who want their content replicated.

[–] Genr8r 0 points 0 points (+0|-0) ago 

The IPFS homepage is itself deployed over IPFS.

The fact that you couldn't tell the difference speaks to its simplicity and robustness for end users: no plugins or extra software needed, and any browser will do. From a developer's perspective, things are still pretty straightforward and simple.

The source code is readily available via github... https://github.com/ipfs/website

The main repo for the project has over 14,000 stars and has been forked nearly 900 times, so I would dig a little further before deciding no one is using it. As another quick real-world example, https://d.tube is a YouTube replacement that uses IPFS to manage video assets.

Finally, a quick look through the docs turns up some simple, ready-to-use code as starters for some basic projects:

https://docs.ipfs.io/guides/examples/
