Where was the free and open source distributed search engine?
"Oh, we can't support any protocol or any other technology! We don't want to be promoting anything! Unless we get paid, of course!"
Where was p2p video? Why no BitTorrent? Not too long ago, hosting video on a server was crazy expensive, but at that time people still visited small websites.
It should have been as simple as:
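A hypothetical sketch of what that could have looked like (no browser ever shipped native BitTorrent sources; the magnet URI and fallback URL here are invented placeholders):

```html
<video controls>
  <!-- Hypothetical p2p source: every viewer also seeds the file -->
  <source src="magnet:?xt=urn:btih:..." type="application/x-bittorrent">
  <!-- Plain HTTP fallback for clients without p2p support -->
  <source src="https://example.com/video.mp4" type="video/mp4">
</video>
```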
But instead we got YouTube, which is short for: adware, censorship, spying, complete unreliability, and a crazy resource hog to the point of not working on hardware older than five years.
And then there is privatized law making & privatized law enforcement.
If I put stuff out there from my own computer, from the comfort of my house, I'm subject to plenty of laws and regulations created by my democratically elected government.
There are no multibillion-dollar robots shooting from the hip; there is no false-positive punishment without dialog. If I break the law, the situation is carefully examined, and I get fined or even imprisoned. It's not a perfect system, but it's better than persecution by robot.
The Google constitution (TOS) is something like: We may terminate you at any time without explanation.
That's not someone you want to enter into a relationship with.
Something modeled after YaCy should be the default search engine. It's wonderful: you have it crawl the pages you frequent and it finds everything; other users crawl other things. It's something Google could never do.
Instead, if I type "1+1" in the search box, you send me to Google to solve the problem.
(1) Everyone pays for the infrastructure. That is, you pay for storing and serving the videos in a p2p video service. You pay for crawling and forwarding the search results to users of a p2p search engine. It may be unnoticeable if you run a beefy desktop / server on a 100 Mbit connection from a major metropolis. It becomes worse on your laptop or mobile on a metered connection, or on a rural DSL.
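To put a rough number on point (1). Every figure below is an assumption for illustration, not a measurement:

```python
# Back-of-envelope cost of seeding back what you watch on a p2p video
# service. All numbers are assumptions, not measurements.
watched_gb_per_day = 2      # assumed: roughly an hour of HD video daily
seed_ratio = 1.0            # assumed: upload as much as you download
days_per_month = 30

upload_gb_per_month = watched_gb_per_day * seed_ratio * days_per_month

overage_price_per_gb = 5.0  # assumed: USD/GB overage on a metered mobile plan
extra_cost = upload_gb_per_month * overage_price_per_gb

print(f"{upload_gb_per_month:.0f} GB/month uploaded, "
      f"≈ ${extra_cost:.0f}/month on a metered plan")
```

Invisible on an unmetered 100 Mbit line; a dealbreaker on a metered mobile plan.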
(2) The speed and reliability of a distributed system depends on all its nodes. SREs running a bunch of servers at many nines is one thing. You and me running goodwill p2p nodes is another. To achieve comparable reliability, many more nodes would be needed. Also, with enough nodes, p75 latency may be okay, but p99 is going to be poor. Compare to torrents, where last month's content may arrive in seconds, and obscure content from 5 years ago may take days.
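Point (2), the p75-versus-p99 gap, can be illustrated with a toy simulation. The latency distribution is an invented assumption (90% of goodwill nodes answer in 20–200 ms, 10% are flaky and take 1–30 s), not real data:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def node_latency_ms():
    # Assumed model: most goodwill nodes answer quickly, but a tail of
    # nodes sits on flaky home links or is overloaded.
    if random.random() < 0.9:
        return random.uniform(20, 200)        # healthy node
    return random.uniform(1_000, 30_000)      # flaky or overloaded node

samples = sorted(node_latency_ms() for _ in range(100_000))
p75 = samples[int(0.75 * len(samples))]
p99 = samples[int(0.99 * len(samples))]
print(f"p75 ≈ {p75:.0f} ms, p99 ≈ {p99:.0f} ms")
```

Under these assumptions the median experience looks acceptable while one query in a hundred stalls for tens of seconds, which is exactly the torrent pattern described above.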
The sad thing is that most people are not ready to shell out the amount that marketing companies pay for the ads they see. Even less are they ready to accept an experience that is slower and requires more technical chops.
For the few who care, there's PeerTube, IPFS, distributed search, etc.
If Mozilla only pursues goals that have a chance of global success, I'm fine with that. Let them keep Firefox and Rust in good shape; that's more than enough for me.
(1) DigitalOcean, Vultr, and Linode boxes have a 300 Mbps port, under polite use, on every instance type. Search engines like Manticore, or anything besides memory-eating Java things like Elasticsearch, can run clustered in constrained environments and, when backed by SSDs, are really damn fast. Then there is Cloudflare, whose services, as long as you don't abuse them, can help with massive distribution. Not to mention all of the really, really cheap storage backends that exist: Backblaze, Wasabi; hell, with some ingenuity you could probably use the Dropbox APIs for long-term unlimited storage. This is your backbone.
(2) For extra redundancy and better distribution you can add in p2p, which, used correctly, can propagate new data to users within reasonable time frames, easily less than a minute in streaming environments, and make data storage more resilient.
Also, while IPFS is nice, it doesn't have the resiliency necessary for the future. Mozilla won't do this, but neither will any other browser-making company.
But people who have at least a vague idea of how to configure their own server are a small minority.
I hoped they'd be part of the subresource integrity spec, but a simpler system was chosen. That said, you could still piggyback on <img src="whocares" integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC" /> to deliver protocol-agnostic content.
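That integrity value is just a base64-encoded SHA-384 digest of the resource, so anyone can generate one. A minimal sketch using only the Python standard library (the sample payload is made up):

```python
import base64
import hashlib

def sri_value(data: bytes, algo: str = "sha384") -> str:
    """Build a Subresource Integrity string: '<algo>-<base64 digest>'."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Made-up payload; in practice this would be the bytes of the resource.
print(sri_value(b"console.log('protocol-agnostic content');"))
```

The browser only checks that whatever bytes arrive hash to the declared value, which is what makes the content protocol-agnostic in the first place.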
I had to uninstall it as I started seeing captchas everywhere. The reCAPTCHA v3 test page gave me a score of 0.1; after uninstalling, it jumped to 0.9.
Wow, I had no idea this was the cause. I thought it was just Google straight up ranking Firefox lower (whether intentionally or due to bugs) because they don't care about fixing the experience for Firefox users.
Stuff like this really disgusts me, if it's true. The saddest part is that non-technical consumers will never care about walled gardens or the gradual centralization of the internet, because the truth is, the more centralized we get, the more convenient everything becomes.
Google also did the same thing with Internet Explorer/Edge, where they put a transparent div on top of YouTube videos that was "designed to give focus to videos" but broke YouTube videos for all IE users.
In real life they did it accidentally, and then kept it when they realised it worked against Microsoft. They also kept the plausible deniability? :)
Google does not give a f... about users or the web. Like any other company, they care about making money, because without money they cannot survive; it's as simple as that.
If the roles were inverted... Mozilla would have done exactly the same thing...
All they need is programmers to write the code for the browser and someone to manage the servers where the code, bug tracker, wiki, and website are hosted; anything else is an unnecessary and frivolous waste of resources. Get rid of the parasites.
(Disclaimer: I work on Firefox; I've seen the public financial statements.)
As soon as Mozilla started relying on Google's money, they were doomed. Now they are staffed by lots of well-paid US developers, and have to continue taking Google's deals to pay the wages.
List of good Web tech killed in W3C by Mozilla:
2. WebSQL (in favor of the far worse and more cumbersome IndexedDB)
3. HTML5 storage / FileSystem API
The grudge seems equally bad on Mozilla's side.
In favor of WebAssembly, which is a much better tech in terms of actually being able to achieve interop, no?
I contributed to Chrome's implementation of PNaCl and wasm a few years ago. Wasm is still nowhere close to the capabilities of PNaCl and has all the signs of a design-by-committee specification. I will be (pleasantly) surprised if wasm gains more traction.