
Is there any chance of a hard fork? What about, let's say, a web 1.1 where we intentionally remove all the fancy new web APIs and mostly revert back to what we had in the late 90s? Sure, things like video support can remain but all the crazy stuff for building web apps would go away. Let the current web rot away under its corporate overlords and then, maybe, we can have the fork go back into being a fun way of publishing and sharing information.



Have you tried the dark web? For the sake of anonymity, everyone has Javascript disabled when browsing Tor hidden sites — so such sites must be designed to conform to web 1.1 principles.

It's actually a very interesting frontend platform to design for, because you don't get any Javascript support, but you get full modern CSS support.


That's the original vision of hypermedia: cross-linkable book pages that can express metadata about their content while maintaining flexibility in their visual output.

I never thought I'd say this, but HTML4, with all its billions of warts, was a pinnacle of this vision. Later developments swung too hard towards an excessively content-focused vision first (XHTML, the "semantic web"), and then swung all the way in the opposite direction, turning web protocols into dumb pipes for the general-purpose VM runtime that modern JS/HTML engines have become.

Unfortunately, the industry always, always wanted this runtime. Plugins, applets, ActiveX -- people just refused to accept basic form-based interaction. Dark-web properties accept it only because doing otherwise would be too dangerous, in the same way people don't wear jewelry when going through a ghetto.


Rant: If people using Tor have JS disabled, then why does the Tor Project keep pushing "Tor Browser" as a one-size-fits-all program for all Tor users? The program is enormous and seems like overkill for something lightweight like text retrieval. (IMO, it should be anticipated that people might use Tor for lightweight tasks on account of the latency.)

I've been experimenting with DuckDuckGo's .onion site. Below is an example of how to search the light web over Tor without Tor Browser.

I'm curious about .onion sites because it sounds like .onion solves the reachability problem. Anyone could have a website. No requirement for a reachable IP address from an ISP. No requirement for domain-name or hosting subscriptions. No commercial middlemen. (Assuming Tor network operators are true volunteers.) Not every website has to be commercial or reach large scale.

pts/1

    # forward local 127.0.0.42:443 to the onion service via Tor's SOCKS proxy on 9050
    x=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
    socat -d -d -d tcp4-l:443,fork,reuseaddr,bind=127.0.0.42 socks4a:127.0.0.1:$x:443,socksport=9050
pts/2

     # Usage: echo query | $0 > 1.htm;
     #        links -no-connect 1.htm;
     #        firefox ./1.htm;

     #!/bin/sh
     h=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
     read x;
     # POST the query to /lite/, resolving the onion hostname to the local socat listener
     curl -v -0 -d q="$x" --resolve $h:443:127.0.0.42 https://$h/lite/
Using socat instead of curl

     #!/bin/sh
     # Same query via a hand-built HTTP request piped to socat instead of curl.
     # yy046 and yy025 are the author's own helpers (apparently: URL-encode the
     # query; turn a URL plus the exported variables into HTTP request headers).
     h=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
     read x;
     x=q=$(echo "$x"|yy046);
     export httpMethod=POST;
     export Content_Type=application/x-www-form-urlencoded;
     export Content_Length=${#x};
     export httpVersion=1.0;
     export Connection=close;
     # emit the generated headers (copied to stderr by sed), then the POST body,
     # and send the whole request to the socat listener from pts/1
     echo https://$h/lite/|yy025|if sed w/dev/stderr;then
      echo $x;echo $x >&2;fi \
     |socat stdio,ignoreeof ssl:127.0.0.42:443,verify=0
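
A minimal alternative sketch, for comparison: if you don't need the socat listener, curl can talk to the Tor SOCKS proxy directly. This assumes the Tor daemon is on its default 127.0.0.1:9050; --socks5-hostname hands the .onion name to the proxy instead of resolving it locally.

     #!/bin/sh
     # sketch only: query DDG's onion through Tor's SOCKS proxy, no socat needed
     h=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
     read x;
     curl --socks5-hostname 127.0.0.1:9050 -d q="$x" https://$h/lite/

Usage is the same as above: echo query | ./script > 1.htm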


There is a spectrum of privacy-related use cases that Tor Browser solves; not all of them require that you have untraceable anonymity from the perspective of the server you're talking to.

For example, when using Tor Browser to connect to regular public websites, your goal is usually just to obscure your connecting location, hide your traffic from your ISP, and punch through any firewalls between you and the destination, without first being required to establish a relationship with some specific proprietary VPN company (where doing so might not even be possible if you're in some countries.) So people doing this tend to be willing to enable JS, at least on a per-site basis.

It's only really when you're in the kind of trouble where state actors are trying to correlate your IP address through inadvertent connections you might make to state-owned honeypot Tor hidden sites that the full no-JS paranoia is warranted.

But to safeguard the anonymity of those people, it's better for anti-fingerprinting if the people using Tor Browser for more mundane things also have JS disabled. So Tor Browser disables JS by default.

Which means that it's pretty much a given that if you're doing web design specifically for a Tor hidden site, then you have to assume that people accessing your site will have JS disabled. (And that you can't just ask them to enable it — they'll say "nice try FBI" and close the tab.)


Tor Browser defaults to JS enabled.


> Is there any chance of a hard fork?

We'll build our own internet! With blackjack and hookers!

More seriously, I see echoes of the gentrification cycle. At the end of the cycle nobody wants to live in the soulless corporate hellscape they've helped create, so they follow the cool kids to the next up-and-coming neighbourhood. It works for social media sites, so why not for an entire protocol?

If you can figure out a protocol where ads don't work, I'm in.


> If you can figure out a protocol where ads don't work, I'm in.

Is it even theoretically possible to create such a protocol? Preventing tracking is feasible, but telling ads apart from real content does not sound solvable at the protocol level.


I don't believe so. But I also believe that's the root of the problems with the current web.

I believe that wherever we go, marketers will follow us. But wouldn't it be great if I was wrong...


> Is there any chance of a hard fork?

The only hope is anti-trust breakup of Google. Chrome has to be pried forcefully from their hands.

We should launch massive campaigns not just in the US, but also Europe and other critical markets.

We shouldn't back down even if they abandon WEI. They'll just keep trying, as they have with AMP, the deprecation of Manifest v2, WHATWG [1], etc.

Google can never be allowed to build browser tech so long as they control search.

The web must remain open.

[1] WHATWG took unilateral control over the HTML spec. They abandoned the Semantic Web, which had led to RSS, Atom, etc., and would have allowed documents to expose information you could scrape and index without Google Search. Google wanted documents to remain forgiving and easy to author (but messy, without standard semantics, and hard to scrape info from).


The thing that I always wonder about when we talk about anti-trust action to break up big companies like this... how will that actually fix things? If Chrome, Inc. was its own independent company, it would still be incentivized to do things that Google Ads, Inc. wants.

Mozilla is barely able to fund itself, and a big chunk of that funding comes from Google. Surely Google is careful to avoid any overt impropriety with that relationship, since they don't want to come under more regulatory scrutiny. But why would a Chrome, Inc. care about that sort of thing? They'd take the same money from Google Search, Inc. that Mozilla does to keep Google as its default search engine. They'd still happily implement WEI and other garbage that Google Ads, Inc. wants, and Google Productivity, Inc. would still ensure that Docs, Sheets, Drive, etc. are all super-compatible with Chrome (and work with the Chrome team to ensure that's the case), and not care so much about other browsers.

I have no doubt that things would be better if we were to break up big conglomerated companies like this, but I'm not entirely sure that breaking up Google would achieve the goal of helping the web remain open.


There are multiple platforms trying to provide this (neocities most prominently, mmm.page most recently, various others that occasionally get posted to HN). Of course, we don't need a platform; we need a culture, and infrastructure, and protocols, and some balance of organization and search. And have it all not sitting on Amazon's servers. And a way to pay for the parts people can't or won't provide for free.

I want to see it; I don't know the path there.


> Is there any chance of a hard fork? What about, let's say, a web 1.1 where we intentionally remove all the fancy new web APIs and mostly revert back to what we had in the late 90s?

Sure. It's really just a matter of mass appeal. We could fork the existing browser base and eliminate the new attestation API. Some projects are already doing this from what I understand.

What will keep attestation from being used is that websites will lose business if their customers can't access the site. We went through this with user-agent string checking in the '90s/'00s, when IE and Netscape/Mozilla were at war and every site had a very strong opinion on which browsers they would support. Even today you occasionally see sites that will hit you with "unsupported browser" errors if you aren't running a specific version of something.

The solution was that everyone realized they were throwing money away by excluding a large portion of their customer base. At the time no single browser really dominated market share, so it was easy to see that an IE-only site was losing 33% of internet traffic. These days everything is basically Chrome-based, so this hasn't been as much of an issue.

So in the future we'll see this same thing. Non-attestable browsers will be locked out of attested sites and it will be a numbers game to see if sites want to risk losing these customers/viewers.

At the end of the day, you have to remember that everything on the web is just a TCP socket and some HTTP, which is a flexible text protocol. We can build pretty much anything we want, but it takes inertia to keep it going.
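
To make that concrete, here's a minimal sketch of a complete HTTP exchange done by hand; example.com is just a placeholder host, and it assumes socat is installed:

     # write a raw HTTP/1.0 request to a TCP socket and print whatever comes back
     printf 'GET / HTTP/1.0\r\nHost: example.com\r\nConnection: close\r\n\r\n' |
       socat - tcp:example.com:80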


> non-attestable browsers will be locked out of attested sites and it will be a numbers game to see if sites want to risk losing these customers/viewers.

Rather than being completely blocked, I think non-attestable browsers will be subject to more CAPTCHAs and other annoyances, similar to what Tor users see today from anti-DDoS services. Perhaps ad-supported services using large amounts of bandwidth will decide not to support non-attested users, since the ad revenue from those users isn't enough to pay for the bandwidth.

What I would like to see is a solution for anonymous microtransactions, so web sites have a monetary incentive to serve users who want to use a non-attestable browser (and don't want to see ads).


>Is there any chance of a hard fork?

I would like to think so, but as someone who's tried to hack on the Chromium codebase, I'd say it's easier to make a new browser from scratch than to figure out how to make meaningful changes to Chromium.


Nothing is stopping you from just not using new features in your website.


The problem is convincing everyone else to do the same, especially against Google's propaganda and the accompanying mob of rabid trendchasing web developers.


This is an even lamer version of the "If you don't like it, just don't buy products from <brand engaging in antisocial behavior>" rhetoric.

Anything that allows companies to exert more control will be used to exert more control.


I'm down. Sign me up.



Gemini is a joke. The main proponents, like Drew DeVault, chuck a tantrum when browsers allow users to optionally show favicons: https://github.com/makew0rld/amfora/issues/199


Gemini is what happens when people are so traumatized by years of the web platform being horrifying that they swing to the other extreme and build something so bare-bones (with a policy of not allowing any kind of extension) that it will never become a general-purpose solution that most people will adopt.

And that's fine; if some people want their own sandbox so they can avoid the horrors of the WWW as much as possible, more power to them.

But let's not pretend this is going to become a widely used platform, one where people will be able to do only a small fraction of the things they can do on the web. Again: that's fine! But it can't and won't replace the web.


This is fantastic! Just downloaded an iOS client and really having fun going down the rabbit hole.


"http get but we chopped off the low order byte of the return code" is not sufficient or necessary to implementing a non-Googlized web.


A wasm-only web perhaps?


I think you're going in the opposite direction, my friend.



