The reckless, infinite scope of web browsers (drewdevault.com)
379 points by jlelse 7 months ago | 234 comments



There's one thing I'd like to mention to everyone who says it is impossible to build a new browser: while that may be true today, I would just like to point out that the ancestor of the dominant browser engines of today, Blink and WebKit, is KHTML from the Konqueror web browser.

While KDE has had some contributions from large corporations, it would be a gross exaggeration to call KDE (or any Linux desktop) a popular success. It more closely resembles a hobbyist project than a commercial juggernaut.

While it may be true that KHTML/Konqueror was not as polished as WebKit/Safari or Blink/Chrome before those forks poured massive amounts of corporate development effort into those browsers, the fact is that it was a solid enough foundation for those later corporate contributors to leverage into world domination.

In other words: Today's small scrappy upstart browser that only hobbyists use may be the embryo of tomorrow's near-monopoly. Please don't abandon your ideas for new browsers+engines, give them a shot. Even if you do not directly benefit from its future success, at least you may give the web a gift that everyone will someday enjoy. The new boss Chrome is an unqualified improvement for the web over Internet Explorer, if only in standards support and open source code. I only hope that someday we can replace it with something better than a new corporate overlord.


I see these sorts of appeals a lot on HN but never really understood the point. Sure, software had a good run of small devs making big splashes, but it's pretty obvious to anyone watching that the garage days are over as far as big projects are concerned. The only real exception is the European software devs who, from a market perspective, exist mainly to sell the most promising projects to China or the US since they can deploy those projects in a way that drives growth. Why shame people for making smart moves?


Is asking people to keep hope alive the same thing as shaming them for feeling hopeless? I hope my words don't come out that way, since it's certainly not what I intend.

Also, one point I'm making here is that a small dev may not be able to make a world-dominating project by themselves, but it may serve as the foundation of something larger later. Is that really surprising or controversial?

Perhaps a more important point I'd like to make is that neither Apple nor Google were quixotic enough to start a browser engine from scratch, for all of their expertise and cash. Without KHTML, we might still be suffering under IE or something even worse. It may take someone willing to tilt at windmills to dethrone Blink/Chrome, I don't think the suits will save us.


KHTML/Konqueror, WebKit, Safari and Chrome didn't really play an important role in IE6's destruction.

It was Firefox. Firefox 1's slogan was literally "take back the web".

It was only Firefox that took IE6 down.

(I'm a KDE contributor and I may even have a couple of patches on Konqueror)


There are amazing almost turn-key FOSS solutions just an "npm install" away. (Plus 3D printing, plus dropshipping from Shenzhen, plus all the other stuff I'm not even aware of, but startups can and will use.)

Sure, garage scale nowadays means a few million USD of angel/seed money, compared to the 100K USD of a decade (?) ago, but it doesn't mean that there are no niches waiting to be occupied, or that no new markets will ever emerge in the future.


The thing is, back when KHTML was made, it supported pretty much everything you needed to browse the web. Even the version in KDE1 (which is only a handful of KLOCs) supported almost everything you'd see online.


KDE actually looks halfway decent though. Unity (if that's what Ubuntu's still calling its UI?), though, most definitely has "hobby project" written all over it. I tried Ubuntu for the first time recently and kept hitting little catches and speedbumps - but they weren't "oh, that isn't implemented" or "oh, this doesn't work that way", they were all bugs. The alt-tab subwindow thumbnailer lets you select multiple application windows at once. The screen locking system not only gets extremely confused by fullscreen windows, but once said fullscreen apps exit, the system belatedly picks that exact moment to go to sleep. Notification toasts cover the type-to-search box, which is fun when said toasts decide to get stuck on the screen. The hotkey system eats ctrl+alt+(left|right) even though the system is configured to use vertically-aligned desktops, so I can't switch between desktops in my openbox VNC session unless it's fullscreen. The system doesn't feel holistically tested. I'm sure there is actual testing done, but it feels like <10% (<5%?) of what Windows/Office/etc gets.

Sure, perhaps things will be in a different place in 5 years. (Especially if The Dispatchers Of Wisdom™ don't decide to give up and start again with a new UI and applications. Cynically, I don't give the current effort a high chance of surviving...)

OK, my point/question: I think one of the prerequisites for hobby projects making it big is playing in a space that's largely uninteresting or invisible to the majority of people who would notice (and understand, and feel threatened by, and respond to) the new thing, before it has reached critical mass. Perceived risk being somewhat irrational and all that.

KDE and Unity feel like hobby projects because "desktop Linux" is forever the thing that never really existed, and the only people who pursue the idea are people who don't get that. But their work gives enterprises some free source code with APIs that let them make application windows with buttons in them! Nice! But there's no point for those enterprises to contribute functionality back to the UI engines, because that doesn't further the enterprises' application-specific focus.

Web browsers definitely aren't an undiscovered thing by this point. I'm sure more people than might reasonably be expected check out where NetSurf is up to every 6-12 months or so, for example. (I don't mean hobby-project users here, I mean would-be commercial users with enough domain savvy to be aware of the entire ecosystem. Total speculation on my part, but I do wonder if I'm right.)

"A new kind of web browser" is a fun idea, but impractical, as it's pretty much the polar opposite of an unknown idea now. A new web browser would be subject to an Atlas-holding-up-the-entire-world level of passive and active social scrutiny if it looked like it was taking itself seriously. (Servo is a good reference example there. It wouldn't survive if it didn't have Firefox and the indefatigable Rust team behind it, and even as it is, things are still struggling.)

By the way, serious question: what does source-level access to Chrome mean, at the end of the day, for the vast majority of internet users? The only real quantitative (not qualitative) answer I have is the small, high-reward collection of bug bounties that have been awarded over the years to external contributors. The fact that you can build Chromium and ship it [as part of Linux distributions] does feel a bit like an implementation detail that fulfills Chrome's goal as a technology moat (and competitor to Firefox).


Ubuntu dropped Unity, no? They are nowadays just shipping GNOME. (Maybe with custom plugins/themes/extensions/config.)

Servo is not a browser; it's a research project, the proving ground for Rust and Firefox's Rust-ified components.


While I agree that a lot of the W3C standards are silly and not worth implementing, I'm still determined to build a new browser from scratch.[1][2]

I don't believe that it's impossible. It will take a lot of time and effort, sure. But not impossible. :)

1. https://github.com/SerenityOS/serenity/tree/master/Libraries...

2. https://github.com/SerenityOS/serenity/tree/master/Libraries...


>it will take a lot of time and effort, sure. But not impossible.

“Taking a lot of time” is what’s implied by impossible.

By the time you are done building whatever-it-took-a-long-time-to-build, a new generation of users has emerged.

Either a) your product is obsolete to the youngest users, or b) you, as the product builder, have lost the frame of reference of the youngest generation.

From the blog:

>I conclude that it is impossible to build a new web browser.

And a new OS. The entire world communicates now by two OSs only: Android and iOS.


> The entire world is now communicating by two OSs only: Android and iOS.

That's silly and reductionist. The world uses those OSs, but not only those. Almost every person who sits at a desk for work also uses something else. Every student in the US is expected to use a computer with a keyboard for writing papers. If they don't have their own computer to do this, they are provided access to one.


A significant portion of the computing for iOS/Android is also powered by offloaded cloud computing, which runs a vastly more diverse array of OS environments (though to what extent the myriad varieties of Linux distros are diverse from one another is semantic/subjective).


When I was at college a decade* ago, there were quite a few students carrying around an iPad with a keyboard cover, and doing all of their schoolwork on that. Heck, I understood the draw the moment I did the same with the (doomed, but then-novel) Surface RT. That thing was lightweight and lasted all day without my needing to worry about finding an outlet, which is a big deal when you're running to a different lecture hall every couple of hours.

For some disciplines, especially anything involving CAD, I think a desktop computer is still necessary. That said, I think you'd be surprised just how much of a typical college education can be done on nothing more than a basic tablet and a handful of productivity apps.

* (...wow, it really was a decade. How the time flies.)


> I think you'd be surprised just how much of a typical college education can be done on nothing more than a basic tablet and a handful of productivity apps.

I think upon reflection this is less surprising than you might initially find it. After all, college predates computers.


>product

>users

Hi maram! It's not that kind of project. I'm building a new OS and browser for myself. There is no "product" and the only "user" I care about is me. :)


I'll give a digital standing ovation for that. Imagine knowing exactly what the user thinks and cares about. :-)


Good luck! Ping me if you want users to test your browser ;)


"Taking a lot of time" is not synonymous with "taking forever".

From the way he phrased his comment, even I could tell this was a project with personal, non-commercial ends.

On this basis, his project will ultimately either be accomplished, or not accomplished but will have enriched him a ton by mere virtue of the pursuit.

The entire world is not only communicating via Android and iOS. Those are the predominant OSs; however, we've all heard of Linux, haven't we?


> "Taking a lot of time" is not synonymous with "taking forever".

It can be, when you have a moving target.


The Linux is ready for production.


I don't fully understand the argument that a long piece of work will leave you behind when you're done. What if you've qualitatively moved the field forward while the "competition" simply made incremental progress? Aren't you ahead then?


You can do that! But, looking at history, it's a rare thing to happen.

Regis McKenna made a point at 4:30 ("The span from the first transistor to the first microprocessor is going to be about a third") when he was discussing product innovation. https://www.youtube.com/watch?v=5Z13NI0SuyA&t=2912s

---------

“There will be certain points of time when everything collides together and reaches critical mass around a new concept or a new thing that ends up being hugely relevant to a high % of people or businesses. But it’s hard to predict those. I don’t believe anyone can” -Marc Andreessen


A new OS is entirely buildable, precisely because it isn't beholden to hundreds of millions of words in standards that you'd be required to implement.

You could keep it simple, too, if you didn't need to support existing bloatware.. such as browsers.


You have me wondering what one could make by that formula. No one else thought it was interesting [hah], but I had a good time pondering an environment that forces the user into a productive pattern. These two ideas seem compatible.

Of course most people imagine themselves to be motivated and able to focus on goals. Then, after x hours of winning, they log onto Twitter or Facebook, get calls and messages, or lose half a day on HN.

Make something like a game console, only the exact opposite. Imagine phone calls and work messages popping up in the middle of your Xbox session, and work being one click away at all times. I think that would be just as inappropriate and undesirable as doing it the other way around.

It can't be just me: as soon as I start the browser, the entire internet circus is there, all of the clowns, balloons all over. This is where I'm supposed to get work done?


I support this idea. Other operating systems existed when Linus started on Linux, and it turned out not to be wasted effort at all.


Linux is a kernel, not a full system like OSX.


It is almost universally understood in the vernacular that when someone says "OS like Linux", they mean "a non-specific, probably-GNU/Linux distro." The only space where that understanding doesn't hold is when discussing actual kernels.

Much like the vernacular meaning of "a computer" usually includes mouse, keyboard, and monitor, even though those parts are completely interchangeable and you can run a computer headless with remote access (or no access at all, the computer simply churning through whatever program it was pre-loaded with and outputting to some exotic peripheral, or maybe even simply generating heat in the corner with no human-discernible output at all).


GP's argument is that Linux (in the sense of what was actually written by a relatively small team in the 1990s) is a kernel - not a full-blown OS, of which there were many at the time.

Applying this to the topic of browsers, it's as if you were planning to build a skeleton of a browser, without the equivalents of the GNU tools that complete a distro starting with Linus's kernel. Linus was able to get parts from the GNU project - but where would you get the necessary parts to complete the browser you're writing?

I'm hyperbolizing. However, I believe the point of the article is that it's actually very hard to build a usable, full-blown browser - not a proof of concept or Unix-kernel-equivalent. A toy for playing with, maybe; production quality that somebody would use day to day, unlikely.

I'm more interested in where we go from here. Should we start by fixing the W3C standards so that they permit a simpler implementation? Or are there other ideas?


I'm not sure there's a path forward with fixing W3C standards to be simpler. Once a standard exists and has buy-in from multiple parties (in this case, browser implementers and website developers), it has lock-in; if a simpler standard isn't backwards-compatible with what's already out there, the implementation of the simpler standard isn't compatible with content on the web, and if that content is, say, Facebook, nobody will care about the simpler standard.

OpenGL ended up doing something like this with the death of the fixed-function pipeline (first via OpenGL ES omitting it from the parent standard, and then via OpenGL's main standard marking it deprecated and killing it for all cards that don't mark themselves backwards-compatible). But the consequence of this is that some 3D apps simply won't work with some graphics cards (a situation users are accustomed to in the brutal Wild West of high-performance gaming, so those incompatible cards aren't crippled in the marketplace for being unable to run older games and such).

Perhaps one could describe a simpler W3C standard that is an orthogonal set of "core" features, and then implement handlers atop that standard for all the legacy crap that browsers can do today? It'd be a hell of a project to even start identifying what the core feature set and architectural layer would look like.


There is a path forward: Just dropping support.

The web as a platform has done this before for things like ActiveX, Java applets, and most recently Flash. Collectively, developers decided they were overcomplicated rubbish and abandoned them.


Thanks. So, practically, developers can move capabilities deemed non-essential into plugins, which are more problematic to install, so end users would apply pressure to site authors to avoid certain technologies. Hm. History says it can work.


That fails to work if what you drop support for breaks Facebook. Newgrounds never had the clout of Facebook, but dropping Flash gutted them. In a world where Newgrounds had become Facebook, Apple would have invested in Flash instead of killing it.


Cynically, the users who need such websites shouldn't use that browser. Fewer users, but potentially more technologically literate ones. Sounds like a win-win to me.


IIUC, the goal has generally been to make browsers standards-compliant, not balkanize the space into multiple incompatible browsers.


I don't think the internet is well served by a monoculture of user agents all speaking HTTPS (and only HTTPS) and trying to cram everything under the sun into HTML/CSS/JS.

My personal preference would be for more specialized agents speaking protocols designed for their use case: email over SMTP/IMAP (or JMAP!), newsreaders with RSS, chat on XMPP, etc, content browsers with some limited markup language, maybe games and fat apps loaded as binaries to a sandbox VM.

Maybe this is unrealistic and that time has passed, but the internet used to work more like what I've described. Today, more and more things are just a JS webapp that only runs properly in Chrome.


> It'd be a hell of a project to even start identifying what the core feature set and architectural layer would look like.

If browsers are still "browsers" and not "kitchen sinks in disguise", I'd still identify 5 major parts: 1) networking, 2) parsing (HTML/CSS/JavaScript - others?), 3) DOM rendering (images, video, audio too), 4) the JavaScript engine, and 5) the controlling UI on top of all that.

Maybe it's too simplistic.
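As a toy illustration of the parsing and rendering parts (2 and 3) of that split, here's a sketch using only Python's stdlib tolerant parser; the class names are made up, and networking, CSS, JS, and real layout are of course where all the hard work lives:

```python
# Minimal "browser" pipeline: parse HTML into a tree, then "render" it
# as plain text. Stdlib only; no networking, CSS, or JS.
from html.parser import HTMLParser

class Node:
    """An element; children holds child Nodes and raw text, interleaved."""
    def __init__(self, tag, parent=None):
        self.tag, self.parent, self.children = tag, parent, []

class TreeBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.root = self.cur = Node("document")

    def handle_starttag(self, tag, attrs):
        node = Node(tag, self.cur)
        self.cur.children.append(node)
        self.cur = node          # descend into the new element

    def handle_endtag(self, tag):
        if self.cur.parent is not None:
            self.cur = self.cur.parent  # pop back to the parent

    def handle_data(self, data):
        self.cur.children.append(data)  # text node

def render(node):
    """Flatten the tree into visible text, in document order."""
    out = []
    for child in node.children:
        if isinstance(child, str):
            if child.strip():
                out.append(child.strip())
        else:
            out.extend(render(child))
    return out

builder = TreeBuilder()
builder.feed("<html><body><h1>Hi</h1><p>Hello, <b>web</b>!</p></body></html>")
print(render(builder.root))  # ['Hi', 'Hello,', 'web', '!']
```

Even this toy hints at why layout is the hard part: turning that flat text list into boxes with sizes and positions is where the complexity explodes.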


Thinking out loud: what about a transpiler project that goes from backwards-compatible current standards to stricter but more implementable standards. The idea is that you run everything you receive through this before your browser starts parsing it, so that you can support a larger part of the web while implementing a much smaller set of standards. Like Babel in reverse.

Of course, there's still a monoculture in transpilers, but at least there could be more browser competition after that point.
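A trivial sketch of what that front-end pass could look like, assuming the "stricter" target just means well-formed markup (lowercase tags, every element explicitly closed). It leans on Python's stdlib tolerant parser to re-emit clean output; real legacy-web normalization would be enormously harder, this only shows the shape of the idea:

```python
# "Babel in reverse" toy: feed loose HTML through a tolerant parser and
# re-emit strict, well-formed markup, so a simpler engine behind this
# pass only ever sees closed, lowercase tags.
from html.parser import HTMLParser

VOID = {"br", "img", "hr", "meta", "link", "input"}

class Normalizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.stack = []   # currently open (non-void) tags

    def handle_starttag(self, tag, attrs):
        attr_s = "".join(f' {k}="{v or ""}"' for k, v in attrs)
        if tag in VOID:
            self.out.append(f"<{tag}{attr_s}/>")
        else:
            self.out.append(f"<{tag}{attr_s}>")
            self.stack.append(tag)

    def handle_endtag(self, tag):
        # Close any unclosed inner tags first (e.g. <b> left open).
        while self.stack and self.stack[-1] != tag:
            self.out.append(f"</{self.stack.pop()}>")
        if self.stack:
            self.stack.pop()
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

    def result(self):
        # Close whatever is still dangling at end of input.
        return "".join(self.out) + "".join(f"</{t}>" for t in reversed(self.stack))

n = Normalizer()
n.feed("<P>Hello <B>world</P><I>hi")
print(n.result())  # <p>Hello <b>world</b></p><i>hi</i>
```

The monoculture concern from the next paragraph applies directly: whatever error-recovery rules this pass bakes in effectively become the new standard.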


The statement holds; although it was arguably a bit unusual that Linus only built a kernel, it was just one more Unix-like kernel in a world where that wasn't exactly rare. (Which, granted, worked out well with GNU's efforts to build a whole OS: they did a far better userland and kinda dropped the ball on the kernel.)


Since we've worked on the same browser engine before, I should ask the obvious question: what do you plan to do differently that will make it better than existing engines?


Hi Cameron! I'm not trying to compete with anyone. I'm just building a browser (and operating system) for myself to use and have fun with. :)

Regardless, the main things I aim to do differently:

1. No JS JIT, only interpreter. I don’t care that much about JS performance as a user and the difference in complexity is astounding.

2. I prefer the web as a platform for smart documents, not for applications. I don’t want to relinquish any control of my computer to the web, and don’t want most desktop integrations like notifications, etc.

3. In that same vein, very light on animations/transitions, that type of stuff.

4. Way less aggressive memory caching. I don’t like the sight of 100MB+ web content processes that’s so common these days.


> No JS JIT, only interpreter. I don’t care that much about JS performance as a user and the difference in complexity is astounding.

One of the cool techniques we used in earlier versions of JSC (although someone told me it's been dialed back since then) was caching property accesses / method calls in the bytecode interpreter. This technique ended up getting published by someone else in a different context:

https://web.cs.upb.de/cp/download.py?key=ecoop10

I think there's more to be explored along these lines.


Definitely! There's a lot of interesting stuff that can be done with interpreters if you decide to focus on them instead of JITs :)


>I don’t care that much about JS performance as a user and the difference in complexity is astounding.

>I prefer the web as a platform for smart documents, not for applications. I don’t want to relinquish any control of my computer to the web, and don’t want most desktop integrations like notifications, etc.

I wish more people shared the same opinion. We haven't even perfected the Smart Document yet, but browsers have abandoned that work and moved to Web Apps instead.


Notifications don't have much to do with whether JS runs fast or not. I have them turned off and use Twitter and Gmail and Slack and so on.

And along the 'control' axis, stuff running in the browser probably has less of that than the same stuff running outside of that sandbox.


I agree many of them are not worth implementing, but I do also think a new browser written from scratch is worth wanting, because the existing ones are no good. (Although it may sometimes be suitable to reuse parts of code from other programs, including web browsers.)

A lot of features should be omitted, and also some more things added to provide better user control.

In addition to being too complicated, I also think web browsers are overused. Other programs often work better. Switch discussion forums (including this one) to NNTP, text-based interactive services to telnet or SSH, email to SMTP, chat to IRC, many kinds of calculations to local programs which do not use the internet at all, etc.


I've been toying with the idea of writing my own browser for a while, too, because I can't find a modern browser that makes me happy.

> A lot of features should be omitted

Since I actively don't want the vast majority of functionality that is included in modern browsers, I think that this is a very doable project. Very large, certainly, but I don't see a showstopper yet.


I agree. I am willing to set up an NNTP server to discuss it, in case you want to discuss it on the same NNTP that I can discuss it on, and then other people can also use this NNTP for the discussion of this same thing too, I suppose.


What are the chances that these components will be portable to other unix-likes?


Shouldn't be too hard! There's already an ongoing port of the SerenityOS userspace to OpenBSD: https://twitter.com/jcs/status/1224205573656322048 :)


Ah man! I was kind of excited to see that a whole new OS was emerging. Is it now going to be swallowed up by a more popular OS?


Nah, the OpenBSD port is just a fun side-project by jcs (he is an OpenBSD developer.) I just gave it as an example of how portable the Serenity userspace components are :)


The DOM is the easy part IMO, it’s layout and hooking it up to a JavaScript vm that’s hard.


I'm surprised no one pointed this out yet, but there aren't 1,217 W3C specifications. There are 1,217 published documents... which includes drafts, old versions, and notes saying "we're not developing anything anymore."

Filter by Recommendation status and latest version, and only 295 specifications remain. Even then, there's still some duplication (e.g., HTML 5.1 and HTML 5.2 count as separate documents), and scrolling through the list, well under half are actually relevant for a web browser.


My take was that one needs to be "at least aware" of all mentioned documents, which, while not as bad, still isn't ideal.


The increasing complexity of web browsers has less to do with companies using browsers to shoulder competition out (after all, why would they bother to make their engines open source?) and more to do with the web becoming the default application platform. And this has its roots in companies... trying to shoulder out competition with native UI frameworks. Microsoft just two days ago introduced WinUI, another proprietary UI framework that's tied into their ecosystem, bringing Microsoft's total of non-standard UI frameworks to seven (or is it eight?). Developers want cross-compatibility, so they naturally turn to the web, usually in the form of Electron or the native web. And if they do choose to go native, iOS and Android are at the top of the list nowadays.

So developers expect native-OS-level API access from browsers. This is the true source of the bloat. And to be honest, we're converging on an open-standard, cross-platform API. This is great.


> and more to do with the web becoming the default application platform.

I don't even see any trend to suggest that is the case, even if you count Electron. Most apps are not web based, and don't ever intend to be.

>So developers expect native OS level API access from browsers.

Exactly. It was a certain group of developers that wanted the Web to be the default application platform, starting with Mozilla's Firefox OS and later Chrome OS.

I think we need to differentiate a Web Page Engine and a Web App Engine. Although I would agree that even a Web Page Engine today is far too complex.


> and more to do with the web becoming the default application platform.

>> I dont even see any trend to suggest that is the case. Even if you count Electron as being one. Most Apps are not web based, and dont ever intends to be.

- Email

- Chat

- Music

- Calendar

- Word Processing

- Spreadsheets

- Social

- News

- TV

A shorter list would be what isn't on the web platform:

- Video games

- Graphics programs


> A shorter list would be what isn't on the web platform?

> - Video games

Video games (not talking about Flash/HTML5 games) are on the web platform. https://github.com/emscripten-core/emscripten/wiki/Porting-E...

If you count streaming, AAA games have been playable from the browser for a while now. I'm not a Google Stadia user, but I participated in Project Stream, which streamed Assassin's Creed Odyssey to browsers, and from my brief experience it was pretty impressive.


Nothing on the first list needs to be, or is even what I would call, an App (apart from Spreadsheets); they could be "Smart Documents", or HTML with JS sprinkled on top, or interactive pages.

And the usage of Web Spreadsheets is not mainstream at all.


Huh, never heard of WinUI. It looks "open source" (whatever that means for MS), and somehow it aims to support cross-platformness via React? :o


The "somehow" is that React Native on Windows is built on top of WinUI, just as React Native on Android or iOS are built on top of those platforms' native UI frameworks.


What I think is the takeaway from this is that there is a real need for a platform-agnostic application layer above the native OS. It's a peculiarity of history that HTML evolved into it, and not Java, Flash, Silverlight, Adobe Air, etc.. Probably because the timing was right for applications to exist at least partially remotely.

It's fun to dream about a more deliberate replacement, with the same local/remote transparency of a web app, but with well-thought-out abstractions for local hardware and permissions. Like, binaries are LLVM IR, JIT'd to the local architecture; audio is OpenAL; graphics are Vulkan; local devices are [no-current-equivalent-standard-API], with fine-grained security on all features. It could be run locally, or with resources streamed over the network. Opening an application could just be a 'URL'..

A little like a progressive web app, but not a weird outgrowth of an old page layout standard.


I suspect what will doom any such project is that it won't be designed for the primary use case that made HTML/JS succeed: teenagers with nothing more than Notepad.exe and a search engine. It will be made super elegant and efficient and no one will use it, because it will take more than three lines to make text appear, and a typo will result in a bunch of errors. HTML/JS are garbage languages and web standards are cobbled-together trash, but their strength is in their utter simplicity and in how forgiving browsers are in interpreting them. If you don't have that, you've already lost.


I think the effect of teenagers with Notepad is greatly exaggerated. IMO, the reason why HTML/JS succeeded is because it was (and still is) literally the only option available if you want universal discovery and security. Software distribution logistics used to be a nightmare compared to an application that only required the user to visit a website. It also eliminated an entire class of UX issues with regard to application stability, OS compatibility, and over-the-air updates. Additionally, as the general population became more educated about the risks of executing random binaries downloaded from the internet, and the features/performance available to browsers simultaneously exploded, HTML/JS became the de facto application platform of contemporary computing.

The eventual advent of the iPhone and the subsequent explosion of the app store also demonstrates that ease of entry is unimportant. The iOS platform was and still is the public platform with the highest barrier to entry, yet none of that matters because getting secure software distributed easily to users is the ultimate killer feature.


> It also eliminated an entire class of UX issues with regard to application stability, OS compatibility, and over the air updates.

Did it? Web applications are not inherently more (or less) stable than native applications, and OS compatibility issues were replaced with browser compatibility issues.


> Web applications are not inherently more (or less) stable than native applications

Absolutely. The browsers are extremely stable and generally don't crash. The individual applications may still be bug prone but those bugs generally don't take down the system.

> OS compatibility issues were replaced with browser compatibility issues

Sure, but it's a much less severe problem (the potential for bugs rather than binary incompatibility), and one that developers can easily remedy based on visitor statistics, without customers having to download, update, or take any action. This problem also becomes less of a problem every day as the browsers converge on common standards.


> This problem also becomes much less of a problem every day as the browsers converge on common standards.

Only Firefox and Chromium are able to keep up. It's an improvement over native apps working only on Mac OS and Windows as they are mostly open source, but still only two platforms.


It doesn't matter because when a webapp crashes, it doesn't bring down your whole machine (or these days, even your whole browser).

The killer feature is effective per-app sandboxing.


What? Native app crashes stopped bringing down the machine decades ago. Sure, there's the occasional bug in the OS (= kernel and system services), but browsers have bugs too.


Yeah the barrier to entry with HTML/JS is so low for anyone to make a thing that the whole world can see and frankly ... that's kinda awesome.

From there you can just be a simple site or scale up to some really impressive applications with a TON of free resources available on the internet. That's pretty amazing IMO.

As a webdev who has been dipping my toes into C#... I fire up a new command-line application in VS, and the fans on my laptop take off, and there's just a lot of boilerplate ... come on, man.


But your C# application is debuggable, unlike those "impressive applications" written in JavaScript with thousands of async callbacks, where you have no idea where the originating problematic call came from, and even if you find it you still have to deal with minified code that is unreadable.

What boilerplate are you talking about in C#? If you think that's bad, you should try something like C++ (although it's got far better in recent years), and I say this as a fan of C++.

C# is compiled, JavaScript is not. Your C# application will involve high CPU to compile ONCE but forevermore require half the resources of interpreted JavaScript in webpages, which need to be interpreted again and again and again.

This is an issue that is worth bearing in mind for energy usage, and has a significant impact on our future if we just defer to "easy to write, expensive to run" web languages.

I'd say C# is easier to write than JavaScript and is easier to debug for the most part.


I do agree that debugging async JS code is tricky depending on how your calls are structured. However, JavaScript is definitely debuggable. Stepping through minified code is a solved problem thanks to sourcemaps.


I get the differences, I'm not making an argument for any single language to be dominant or anything.

I'm not learning C# because I think it is bad ;)

As for debuggable, I'm not sure I know enough about C# to comment on that but I find JavaScript ... "debuggable".


Some things that are going to impress you down the line with C# debugging: the parallel stacks view for debugging multithreaded apps (which would otherwise be significantly harder); remote debugging (sometimes you simply can't debug on your dev machine, e.g. when your app interfaces with hardware that is expensive or impossible to deploy locally); the watch window, which lets you evaluate complex LINQ queries and even expression trees on the fly; and debugging external assemblies by attaching the debugger. VS is heavy on resources, but it delivers value, unlike a gazillion JS apps.


Now with Quarkus on the JVM (and to some degree also ASP.NET), compiled languages have become so lightweight that I feel the last reasons for using problematic languages like PHP, JavaScript, and Python are disappearing.

They'll hopefully be restricted to small niches where they won't do much harm :-)

I realize this might be annoying but I'm not trying to annoy anyone, I'm trying to get a point through.

(And yes, I do have some experience here; I was significantly better and more productive with JS, PHP, and Python before I really learned Java.)


Yep. The fact that XHTML lost to HTML is all the proof I ever needed of this. XHTML wasn't perfect, but it sure as hell was easier to process than HTML. I guess this is the new Embrace, Extend, Extinguish: Embrace, Expand, Monopolize. Basically make it too complex for anyone to compete.
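To make the "easier to process" point concrete, here's a minimal sketch using Python's stdlib parsers (the sample markup is invented for illustration): an XML/XHTML parser may reject malformed input outright, while an HTML parser has to silently recover, and specifying that recovery is where much of the complexity lives.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

doc = "<ul><li>one<li>two</ul>"  # unclosed <li> tags: legal HTML, invalid XHTML

# An XML (XHTML) parser rejects the malformed input outright...
try:
    ET.fromstring(doc)
    xml_ok = True
except ET.ParseError:
    xml_ok = False

# ...while an HTML parser must carry on and recover.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
p.feed(doc)

print(xml_ok)   # False: XML parsing failed
print(p.tags)   # ['ul', 'li', 'li']: HTML parser recovered both list items
```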

An example would be Chrome: why would anyone choose to run the All-Seeing Eye edition from Google when it's F/LOSS and de-Googled alternatives exist? Because modifying it is complex and expensive; testing it is complex and expensive; keeping up with changes is complex and expensive (Microsoft basically owned the desktop by introducing changes so fast that competing desktop applications were always spending a significant share of their budget just supporting the latest change); distributing modified versions quickly and reliably is expensive and complex; getting the community to agree on which anti-features to remove is impossible; and marketing it to end users is expensive. Thus we end up with obviously user-hostile open source software, which would be impossible with simpler software.


> Like, binaries are LLVM IR, Jit'd to the local architecture.

LLVM IR is not platform-independent; it's really just a bunch of (closely related, of course) platform-specific IRs. What's wrong with WASM? Everything else seems broadly OK, of course. (Also, WASM directly supports "applets", via Canvas.)


Then something like WASM, but avoiding the 1.5x or so slowdown over native. Perhaps WASM is the answer, but with improved runtime.

What I'm driving at is - if we can dream, why not aim for maximal hardware performance and minimal overhead? Even if it comes down to just pre-compiling native code for common architectures like Android apps do. Right now, the browser is an idiosyncratic HAL burdened by its past... what would a really nice next generation be?

Interesting thing to know about the LLVM IR - I'm only casually familiar.


Wasm comes to mind for a new browser architecture because in theory it could be a userland replacement for traditional platform features like the UI. You could have a wasm module for HTML, for instance, which could be chosen for legacy sites but replaced with a different approach (VR UI, a cloud service which doesn't need one, etc.). This would help minimize the browser core and make the execution environments swappable and reusable across browser cores.

That said, I agree that performance may be a dealbreaker on that idea. HTML/DOM is also tightly integrated into a lot of browser behaviors, so I have no idea whether it's feasible to extract it into a swappable module. EDIT: It's still an idea that gets me pretty excited to try.



A working compiler from Wasm to native using LLVM: https://innative.dev/


> Then something like WASM, but avoiding the 1.5x or so slowdown over native. Perhaps WASM is the answer, but with improved runtime.

Once we are done with WASM we will look at it, compare it with Java Applets and say: Look, we already had that!


The difference is that Java bytecode is fundamentally insecure. Wasm seems to take inspiration from the research that went into NaCl/pNaCl.


The worst problem with the Java sandbox was not the bytecode (though it did have its share of bytecode validation bugs). The worst problem was the vast API surface with its "blacklist" security model: all methods which should not be callable from the sandbox have to explicitly call AccessController.checkPermission() or similar. Miss one, and you have a hole in the sandbox. WASM instead follows a "whitelist" approach, which is much more robust.
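A toy sketch of the two models, in Python rather than Java or WASM (all names here are invented for illustration, not real APIs):

```python
# Blacklist model (Java-sandbox style): every sensitive method must remember
# to consult the security manager itself; forget one call and the sandbox leaks.
class SecurityError(Exception):
    pass

def check_permission():
    raise SecurityError("denied inside sandbox")

def read_file_checked(path):
    check_permission()           # correctly guarded: raises before the read
    return "secret contents"

def read_file_forgotten(path):
    return "secret contents"     # the missing check IS the sandbox hole

# Whitelist model (WASM-style): the guest can only call what the host
# explicitly hands it at instantiation; anything else simply doesn't exist.
host_imports = {"log": print}    # note: file access was never imported

def call_import(imports, name, *args):
    if name not in imports:
        raise SecurityError("no such import: " + name)
    return imports[name](*args)
```

The point: in the whitelist model the dangerous capability can't be forgotten open, because it was never put in the import table to begin with.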


How is Java bytecode fundamentally insecure? (Example?)


> What's wrong with WASM?

Wake me when it can free memory.


The difference with those other platforms is that they were all still single vendor with no open standard behind them. I’m skeptical of the ability to create a new system because the surface area is so large that you need the massive resourcing of large corporations, but those companies will not willingly create new open standards that would put others on equal footing, at least not in today’s economic culture.


I don't disagree. It would take a concerted shared will in an industry that still views, perhaps rightly, platform lock-in as a competitive advantage. Open source initiatives are great, but I feel that they're weaker than a bigcorp (benevolent?) dictatorship would be at picking a direction and moving in it.

When the headline comes out that DirectX13 will be a layer on top of Vulkan, then maybe it's a sign that the world is ready :)


> there is a real need for a platform-agnostic application layer above the native OS.

It is rather focused on robotics, but that is essentially what the Robot Operating System (ROS)[1] is, although it is less for users and more for autonomous systems. It allows an operating system to be distributed across many machines and multiple entities to collaborate. There are many extensions available, but ROS core is just the bare minimum abstractions needed to have a distributed operating system to build applications on.

[1]: https://www.ros.org/about-ros/


ROS is a publish/subscribe framework for distributed computing. It can’t really solve this problem. Heck, it barely solves the robotics problem.


I don't see how ROS solves 'a real need for a platform-agnostic application layer above the native OS.'

It's not platform agnostic (works best with Ubuntu Linux) and is hardly a layer above the native OS, just applications communicating via standardized means.


> It's not platform agnostic

ROS2 supports Windows, OS X, and Linux. I expect the support will only get better as ROS2 matures.

> just applications communicating via standardized means

I can't imagine any way to make such a system work without some kind of middleware for applications, which thankfully is now DDS for ROS2.



The web is essentially becoming what you propose, with WebAssembly binaries, WebGPU graphics, WebAudio API.

Currently you need an HTML and JavaScript shim to use those, but it's easy to remove it and execute WebAssembly directly.


> Like, binaries are LLVM IR, Jit'd to the local architecture.

You've basically described WASM, sans the LLVM part.


Could also just bring back applets. Its what the web is doing already, slowly reinventing applets but worse.


I don't think I'd call it worse. The original implementation treated the DOM as, at most, an undesired wrapper environment and mandated Java as its implementation architecture. I think wasm's approach improves on both those issues.


We can call it Qt5.


This post… probably doesn’t really mean anything.

Firstly, judging a web browser's complexity by the size of its spec catalogue is already unfair. The catalogue is basically a dump of everything related to the web, for example the spec for JSON-LD (which probably has almost no relationship to implementing web browsers, since that's just a data format based on JSON).

Also, word count doesn't correlate with complexity. That's like… saying that a movie will be more entertaining than another one because its runtime is longer. Web-related specs are much, much more detailed than POSIX specs because of their cross-platform nature: we've already seen what happens when web-related specs look like POSIX. Anybody remember trying to make web pages that work in IE, Safari, and Firefox in the early 2000s? (Or just try to make a shell script that works on macOS, FreeBSD, Ubuntu, and Fedora without trying it out on all four OSes. Can you make one with confidence?)

Really, it’s just tiring to hear the complaints about web browsers on HN, especially the ones about ‘It wasn’t like it in the 90s, why is every site bloated and complex? Do we really need SPAs?’ Things are there for a reason, and while I agree that not every web-API is useful (and some are harmful), one should not dismiss everything as ‘bloat’ or ‘useless complexity’.


>Firstly, judging a web browser's complexity by the size of its spec catalogue is already unfair. The catalogue is basically a dump of everything related to the web, for example the spec for JSON-LD (which probably has almost no relationship to implementing web browsers, since that's just a data format based on JSON).

https://paste.sr.ht/~sircmpwn/13c1951014a256e9f551296a129bf6...

Specs like JSON-LD are included but do not meaningfully change the numbers. Also, the word count I used in the article is less than half of the real word count I ended up with, just to put to rest any doubts like these about the margin of error.

>Also, word count doesn't correlate with complexity. That's like… saying that a movie will be more entertaining than another one because its runtime is longer. Web-related specs are much, much more detailed than POSIX specs because of their cross-platform nature: we've already seen what happens when web-related specs look like POSIX. Anybody remember trying to make web pages that work in IE, Safari, and Firefox in the early 2000s? (Or just try to make a shell script that works on macOS, FreeBSD, Ubuntu, and Fedora without trying it out on all four OSes. Can you make one with confidence?)

This doesn't seem right at all. How do you figure that POSIX is less specified than the web? Have you read either standard?

Can you make a website with confidence which works on all browsers? When was the last time you found a browser-specific issue? Mine was three days ago.


> Specs like JSON-LD are included but do not meaningfully change the numbers.

JSON-LD was a simple example that I just used because it was on the front page. That JSON-LD doesn't meaningfully change the numbers doesn't change the fact that the catalogue is a dump of things related to the web, quite unlike the POSIX, C, or C++ specs, each of which has a narrow scope. For example, the link dump has 104 links that mention CSS in the slug: it would probably make sense to write one comprehensive CSS spec (which would probably reduce the size, since it would remove a lot of repetitive parts), but the web standards don't work like that.

> This doesn't seem right at all. How do you figure that POSIX is less specified than the web? Have you read either standard?

That's based on some @chubot's oilshell blog posts[0][1][2][3] about shell. Excerpts from blog post:

> POSIX Uses Brute Force: In theoretical terms, a language is described by a grammar, and a grammar accepts or rejects strings of infinite length. But POSIX apparently specifies no such thing. Only the "unspecified" cases are allowed to use a grammar!

> Over the last few years of implementing shell, I've found many times that a careful reading of the POSIX spec isn't sufficient.

> The POSIX shell spec says that shell arithmetic is C arithmetic, So it's natural to wonder what the creators of C used. They didn't use grammars to specify their language. The code came first and grammars came later.

> Discovery: all shells are highly POSIX compliant, for the areas of the language that POSIX specifies. But POSIX only covers a small portion of say dash, let alone bash.

There are probably many more posts about this, but I think this is sufficient to show that POSIX is underspecified.

> Can you make a website with confidence which works on all browsers?

I can make a website that works reasonably well on all browsers with confidence, at least at the level where one can develop a full SPA on one browser and then try and test it on the others. Most browser-specific issues simply don't come up, and when they do, the fix is usually a one-liner.

> When was the last time you found a browser-specific issue? Mine was three days ago.

That's interesting, what was it? (Genuinely interested in that one.) In my experience, it doesn't surface that much unless you're trying to use WASM or some new/non-standard APIs.

[0] https://www.oilshell.org/blog/2017/08/31.html#posix-uses-bru...

[1] https://www.oilshell.org/blog/2020/01/alias-and-prompt.html#...

[2] https://www.oilshell.org/blog/2017/04/22.html

[3] https://www.oilshell.org/blog/2019/01/18.html#criteria-for-t...


As someone who has also worked on implementing a POSIX shell, and read and implemented large swaths of POSIX besides, in my experience the main thing it lacks in is edge cases. These would normally be corrected with as much as one sentence, and by my intuition I would expect no more than 100,000 additional words would be necessary to fully specify these edge cases. It doesn't meaningfully move the dial on any of these numbers.

>I can make a website that reasonably works well on all browsers with confidence. At least on the level where one can develop a full SPA based on one browser and then try & test it on others. Most browser-specific issues don't exist and when they do, it's usually a one-liner.

On Chrome and Firefox, maybe. How about IE or Safari? How about Netsurf or Lynx or w3m? SourceHut (of which I am the founder and lead developer) works in all of those browsers, by the way.

>That's interesting, what was it?

Believe it or not, it was in how they interpreted margins. This was the fix:

https://git.sr.ht/~sircmpwn/core.sr.ht/commit/a4a290fbeea23c...

todo.sr.ht rendered differently on Firefox and Chrome before this change. I didn't dig into it enough to make any bug reports, so connecting the dots is up to you if you're interested.

I hit another browser bug a while ago because Chrome arbitrarily decided that the maximum number of elements in a CSS grid is 1,000.


It's a bit too late to reply, but...

> These would normally be corrected with as much as one sentence, and by my intuition I would expect no more than 100,000 additional words would be necessary to fully specify these edge cases. It doesn't meaningfully move the dial on any of these numbers.

I expect it would take much more than that to specify exactly what bash, dash, and a lot of other shells should do in a cross-platform manner, consolidate the language into one, and preserve backwards compatibility (much like how the web APIs grew out of consolidating IE, Netscape, and a bunch of other browsers in the 90s).

> On Chrome and Firefox, maybe. How about IE or Safari? How about Netsurf or Lynx or w3m? SourceHut (of which I am the founder and lead developer) works in all of those browsers, by the way.

Chrome, Firefox, and Safari are pretty easy to target all at once, as they have the usual modern features.

Considering Lynx, Netsurf, or w3m as modern web browsers invalidates your point, since I'm pretty sure they don't implement a lot of the standards; does w3m implement flexbox or CSS grid, for example?

I can't say this with confidence, but I think it wouldn't be that hard (not that it's easy) to implement a web browser with the complexity of w3m.

> Believe it or not, it was in how they interpreted margins. This was the fix:

Hmm, my intuition says that's an SCSS-compiling problem where they mixed up the rules... but if it wasn't, that would definitely be at least one browser bug.


I think there's some good points in there, but I'm not sure about this one:

"Firefox is filling up with ads, tracking, and mandatory plugins."

I feel like I follow these issues pretty closely, and I don't think that's accurate. Did I miss something? I know they've made at least a few bad moves, but in general they do the right thing, and they have walked back some of the bad ones. It's entirely possible I'm missing something, though.


Ads:

https://www.ghacks.net/2018/12/31/firefox-with-ads-on-new-ta...

https://www.zdnet.com/article/firefox-60-will-show-sponsored...

Tracking:

https://gist.github.com/0XDE57/fbd302cef7693e62c769

https://www.zdnet.com/article/firefox-tests-cliqz-engine-whi... (ads, too)

Mandatory plugins:

https://news.ycombinator.com/item?id=9667809

There are more cases of each, but these are the ones I thought of off-hand. Setting up Firefox today still requires you to manually go to about:config and turn off a whole bunch of crap. A stock install of Firefox has ads and sends telemetry, searches, and more to both third- and first-party network services.


These are total bullshit, blown out of proportion. These were trials that never went live, and/or weren't even nearly as bad as the uproar made them out to be.

Come on, the Pocket hysteria? It's a bit of JS and one button you can turn off with two clicks. Pocket is now owned by Mozilla. It's a Firefox feature now, and no more of a "mandatory plugin" than Sync or the Add-on Store are. I thought you'd at least be flipping out about EME, which is third-party code and actually a plug-in.

Your central point is valid. There's no need to embellish it with clickbait backed by sources that are clickbait themselves.


[flagged]


So... then disable them using the instructions on that github gist that you already linked? What exactly is the gripe here? There is no need for a fork when you've already disseminated the information needed to rectify the problem. I don't know what else anyone would consider adequate -- web advertising and tracking is not going to magically go away or diminish just because browsers have less features. Focus on one thing at a time, please.


> I don't know what else anyone would consider adequate

I'd consider opt-in rather than opt-out to be adequate.


This is splitting hairs. You can easily use (or make) a downstream distribution of Firefox where that is the case. No forking required, just ship a default config.

This still would be a reasonable course of action even if your real concern is what is adequate for the wider community, not just for yourself. Threatening to switch to another browser isn't going to help because anything you switch to at this point is going to be a fork of Chrome or Firefox, probably with additional ads and tracking added from the other downstream vendor. Like it or not this is the funding model of the web. Edit: I haven't checked in a while but I believe there are actually already several privacy-focused forks of these browsers that ship with the default configs changed to remove ads and tracking, among other things.


> Setting up Firefox today still requires you to manually go to about:config and turn off a whole bunch of crap.

An alternative is to use packages provided by Debian (and possibly other distributions), which turn most of this stuff off by default


But but but how do I do this on my Mac and Windows machines??

Time to go back to Linux. Hopefully the desktop has improved since the abandonment of 30-year-old UI paradigms for "cleanliness and focus".


Lost me there too. FF has had a few foibles but has course corrected quickly. It's my daily driver and certainly haven't seen anything like that happening.


The only negative thing that comes to mind is pocket. DoH is also controversial due to the implementation, but at least I understand the incentive they had.

I just hope Firefox keeps being a real counterpart to Chrome and doesn't try to mirror it. Funding is a problem, as Mozilla is far too dependent on Google, but I don't know a solution to that.


I already posted this on Lobsters, but the "1,217 specifications totalling 114 million words" figure is pretty off-base. To copy my comment from there:

This calculation is wrong. For example searching for HTML[1] reveals different versions of the same document, various informative notes ("HTML5 Differences from HTML4", "HTML/XML Task Force Report"), things no one uses like XForms, and other documents that really shouldn't be counted.

I looked at the full URL list[2] and it includes things like the HTML 3.2 specification from 1997[3]. A quick spot-check reveals many URLs that shouldn't be counted.

I'm reminded of the time in high school when I mixed up some numbers in a calculation and ended up with a doorbell drawing 10A of current. The teacher, quite rightfully, berated me for blindly trusting the result of my calculations without judging whether it was vaguely in the right ballpark. This is the same: 114 million words is a ridiculously large result, and the author should have known this and investigated further to ensure the number was correct (it's not) before writing about it.

I wouldn't be surprised if the actual word count is two orders of magnitude smaller; perhaps more.

[1]: https://www.w3.org/TR/?title=html

[2]: https://paste.sr.ht/~sircmpwn/fd74cf95eb6c1740f4af3aaaf2a0f4...

[3]: https://www.w3.org/TR/2018/SPSD-html32-20180315/


Those specifications from 1997 are still relevant. That's why we end up with things like quirks mode:

https://quirks.spec.whatwg.org/

And on the subject of WHATWG, all of them were excluded from the word count. And, the JavaScript spec, and nearly all of the JavaScript APIs browsers are implementing. Things omitted include WebGL, Web Bluetooth and Web USB, the native filesystem API, WebXR, Speech APIs... and, the informative notes you mentioned are (1) a rounding error when compared to the specs, and (2) are also included in the word counts for POSIX, C11, and so on.

And the word count I gave in the article is half of the real count I ended up with, and I didn't even finish downloading all of the specs to consider.

My full write-up on the methodology is here:

https://paste.sr.ht/~sircmpwn/13c1951014a256e9f551296a129bf6...

Anyone who thinks that the web isn't hundreds or thousands of times more complicated than almost anything else out there is lying to themselves.


I just poked through 50+ of the first ~4000 things in that list. Of them, every single one was either

- Unrelated to an actual web standard (such as a guide for authors of web pages, www.w3.org/TR/html5-author/dimension-attributes.html, or a guide on how to create a PDF for a W3 event, https://www.w3.org/TR/2016/NOTE-WCAG20-TECHS-20160317/pdf_no...)

- a raw xml file (www.w3.org/TR/2012/WD-its20-20121023/examples/xml/EX-locale-filter-selector-2.xml)

- a diff (www.w3.org/TR/prov-dm/diff.html)

- an error (www.w3.org/TR/unicode-xml/index.html)

None were actual signal that relates to the web's specifications.


>such as a guide for authors of web pages

I explained why I included these in my methodology doc. They felt this necessary to document, so I included it. The same is true of other specs I compared against, such as POSIX.

>a raw xml file

This XML file is 18 words according to my measurement. The total words I claim in my article are 113 million. Do you really think that this changes anything?

>a diff

Okay, I should have caught that. There are ~700 of these and I am computing the difference these make to the word count now. I expect it will be within the >100M word margin I left on these figures. [Edit: 28M words from diffs, which eats up about 25% of the 100M word budget I allocated for errors]

>an error

123 words. See my XML comment.

Out of curiosity, is it your intention to also look for flaws in my approach to word-counting the non-web specs I compared against?


I don't think you fully appreciated my comment. I've looked at now 100+ documents from that list. Not a single one has had actual content related to the web standard.

I was finally able to find one, by looking elsewhere: https://www.w3.org/TR/css-grid-1/. You include 8 copies of the css-grid-1 standard in your count. So of the small fraction of documents that are actually web standards, you're miscounting by an order of magnitude. In other words, I expect that the actual count here is off by 2 orders of magnitude and that the real size of the "relevant" web standard is 1-2 million words, and the rest is just bad measurement.

> Out of curiosity, is it your intention to also look for flaws in my approach to word-counting the non-web specs I compared against?

No, I think pointing out a 2-3 order of magnitude mistake in your methodology speaks for itself.

> They felt this necessary to document, so I included it. The same is true of other specs I compared against, such as POSIX.

The POSIX spec includes examples and docs, yes. But so do the actual web specs (see again the CSS grid spec doc). What the POSIX spec doesn't include is a parallel version of the docs meant entirely for POSIX users, which is wholly irrelevant to people who are building a POSIX shell. Again, you're including, in an analysis of web standards, a guide to which PDF readers to use when testing the accessibility of a PDF you're writing.

Edit:

For an even more egregious example, https://www.w3.org/TR/2013/CR-xpath-datamodel-30-20130108/ is one of eighty versions of the xpath datamodel spec that you count, and xpath isn't even an officially supported browser thing.
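The deduplication being argued for here could be sketched as follows; this is a toy illustration, and the URL patterns, status prefixes, and sample URLs are my assumptions about the W3C /TR/ layout, not anyone's actual methodology:

```python
import re

# Hypothetical sample of catalogue URLs: dated snapshots of the same spec
# should arguably collapse to one entry before any word counting.
urls = [
    "https://www.w3.org/TR/2013/CR-xpath-datamodel-30-20130108/",
    "https://www.w3.org/TR/2014/REC-xpath-datamodel-30-20140408/",
    "https://www.w3.org/TR/css-grid-1/",
    "https://www.w3.org/TR/2017/CR-css-grid-1-20170209/",
]

def spec_key(url):
    """Strip the maturity prefix (WD/CR/PR/REC/...) and trailing date stamp
    so every snapshot of a spec maps to the same key."""
    slug = url.rstrip("/").rsplit("/", 1)[-1]
    slug = re.sub(r"^(WD|CR|PR|REC|NOTE|SPSD)-", "", slug)
    slug = re.sub(r"-\d{8}$", "", slug)
    return slug

unique = {spec_key(u) for u in urls}
print(sorted(unique))   # ['css-grid-1', 'xpath-datamodel-30']
```

Four catalogue entries collapse to two actual specs; whether that changes the total by one or two orders of magnitude is exactly the empirical question under dispute.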


I think I was extremely generous with my margins and went to lengths to be selective with my inclusion criteria; I didn't even catalogue everything under those criteria, and I omitted huge swaths of web standards on the basis that (1) it was more forgiving to the W3C and (2) they would be difficult to compare on the same terms. At most you've given a credible suggestion that it might be an order of magnitude off, but even if it were, that changes the conclusions very little. I explained all of that and more in my methodology document, and I stand by it. If you want to take the pains to come up with an objective measure yourself and provide a similar level of justification, I'm prepared to defer to your results, but not when all you have is anecdotes from vaguely scanning through my dataset looking for problems to cherry-pick.


No, I've given credible reasons for two orders of magnitude:

1. The majority of the documents you are including are not reasonably considered web standards

2. Of those that are, you are counting each one 5-50 times.

That's two orders of magnitude.

All your analysis has proven is that it's (ironically) difficult to machine-parse the W3C data, and that you did so in a way that justifies your preconceptions.


Web browsers make more sense when you think of them as emulators for a novel architecture / OS built atop an existing OS.

It is "impossible" to build a new browser in the sense that it's "impossible" to build a new Windows 10. Of course, a key difference is that the web is specified atop an open standard so someone can at least try.


I don't think anyone will try. It's 'impossible' in the sense that both Microsoft and Opera threw in the towel and turned their browsers into Chromium skins. If even Microsoft can't pull it off, it seems unlikely anyone else is going to seriously try starting afresh.

Mozilla have been successful in swapping out large chunks of their browser for superior replacements (for instance replacing the JIT, several times iirc), but that's not quite the same thing.

Another point: why would a company bother? Amazon's 'Silk' is just another Chromium-based browser, but I imagine that works just fine for Amazon. What's the downside of taking Google's hard work and putting your branding on it?


Exactly. It's much like asking "Why would a company bother writing a new OS from scratch when they could instead become a new Linux distro?"


Like Google's Fuchsia, oh wait...


Fuchsia seems to be an exception that proves the rule.

And it doesn't find the rule wanting. Three years since it stealth-launched, and forward momentum on Fuchsia seems... Questionable?


Fuchsia looks pretty active as far as commits go:

https://fuchsia.googlesource.com/fuchsia/+log

It also seems I regularly see new posts about it on HN, with a generally positive reception and optimism among its users.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


The only piece of Fuchsia that's excited me so far is the Xi Editor, which seems to be developing rather slowly if not stalled out completely. https://github.com/xi-editor/xi-editor

(Yes, it's not part of Fuchsia anymore, but so far I haven't heard anything about the remaining parts of Fuchsia that makes me inclined to care about it until Google forces me to care.)


Looks ambitious, and happily there's some recent GitHub activity, but I already have Vim and Notepad++. What does Xi have that those two don't?


It's just intended to be way faster than existing editors: "Incredibly high performance. All editing operations should commit and paint in under 16ms. The editor should never make you wait for anything."

If it succeeded in that, I think it would pave the way for faster, less-bloated software in general, so I'm rooting for the project to succeed.


I like lightweight software, but performance isn't a problem in any modern text editor. Notepad++ and Vim are already light enough.

Vim is 'real-time' responsive even over SSH. I don't see much room for improvement.


Google has their search, ads, and pending browser monopoly to fund their endeavors. Most companies don't.


Why is it impossible to build a new Windows 10? Shouldn't that be an area of great interest to Microsoft? What other endeavor would give Microsoft as great a leap forward as a ground up rebuild of the operating system?


It's not impossible, but it will take an enormous amount of effort to replicate. Consider the ReactOS project, which strives for compatibility with Windows Server 2003. I use it occasionally in a VM and it is roughly at par with Windows 2000/XP, but this project has been going on for over two decades (granted, the target has moved over the years; originally it targeted Windows NT 4.0 if I remember correctly). It will take years of work for ReactOS to reach compatibility with Windows 10.


They tried for 9 years and eventually (rightly or wrongly) gave up: http://joeduffyblog.com/2015/11/03/blogging-about-midori/


Microsoft doesn't need to build Windows 10 from the ground up; they have a Windows 10.


There are a lot of valid critiques to be made about the web as a development platform. But the reality of the situation is there was a need for an OS-agnostic, installation-free, reasonably sandboxed application platform. A generation of developers turned a document viewer into that application platform, and that's the platform that has stuck with non-technical users. I think developer energy is better spent improving performance, keeping the web free, and ensuring Google doesn't own the platform.


>Firefox is filling up with ads, tracking, and mandatory plugins.

What's the basis of this claim? I use FF as my default browser, and I have not experienced anything like this?


Agreed. I have no idea what the author is referring to here. Every version of FF seems to have better tracking prevention, though Mozilla do advertise the services they provide. Which I don’t find unreasonable - as those services are otherwise not discoverable.

Regarding ‘mandatory plugins’ I presume this is referring to the OpenH264 codec and Widevine CDM?


> What's the basis of this claim? I use FF as my default browser, and I have not experienced anything like this?

Not @op, but my thoughts on the matter (for what it's worth, I also use Firefox):

Pocket. Just try disabling it. Good luck getting it all.

Then there's the advertisements displayed on the default home page.

Firefox account and Sync. Non-starter for me. I have zero interest in storing credentials in my browser (eg, passwords, autofill, etc) and even less interest in storing anything browser-related outside of my machine; I have no interest in synchronizing tabs across devices using the browser's own functionality.

Then there's the bits about recommending extensions or features as you browse.


I don't think FF is awesome (it's the least worst practical choice) and it sucks that advertising is enabled by default, but your comment is not fair to FF.

Pocket can be disabled with about:config => pocket.enabled set to false (non-obvious but easy).

FF accounts & sync do nothing if you don't use them. The data is e2e encrypted. It is extremely useful for a huge number of people while doing nothing bad to those who don't use it. FF would be a worse browser without it.

Extension & feature recommendations are done on browser with no personal data sent to Mozilla. You can search the web for details, iirc it was also mentioned on HN.


> your comment is not fair to FF.

I disagree. I could have been a hell of a lot more vindictive.

> Pocket can be disabled with about:config => pocket.enabled set to false (non-obvious but easy).

That doesn't remove the feature from the right-click menu. And, there's a ton of other configs in about:config which mention Pocket, some of them include URLs. Does setting pocket.enabled=false disable retrieval of those URLs?

FF accounts & sync do nothing if you don't use them... except recommend to me that I use them. The data might be e2e encrypted but that's still more attack surface than _not_ having the data there in the first place. Yes, it's extremely useful for a huge number of people but nonetheless it's still a privacy issue for me.

Extension & feature recommendations being done in browser isn't the point. The point is that they're _done_ in the first place. Which means that not only does Firefox parse and render content, but it inspects it and tries to determine what I'm doing and what extensions "might" help. Thanks but that falls right into creepy territory for me. It's only one step away from sending that already-data-mined data to some marketing company.


Right. If your tagline is privacy first, the defaults should be set to maximum privacy (unless it might break sites, in which case it may be acceptable to use a less strict config). Any feature that is a compromise between privacy and convenience should be easily enabled/disabled through the UI (not a bloody config file). If they can build a UI for everything else, I don't think it's too much to ask for a simple switch for privacy-related options.

The prompts to save my passwords are particularly annoying until I disable them.


I think it contains recommended articles, i.e. ads, by default on the home page. They also launched an ad campaign for Booking.com: https://www.cyberciti.biz/web-developer/firefox-is-now-placi...

Telemetry = tracking.

Not sure about what he means regarding plugins.


Telemetry is not tracking.


I assume that was a reference to pocket (which was an extension that they shipped as a baked-in feature around the acquisition).


pocket support is implemented as a kind of extension, to my understanding.

for example the about:config setting wording is `extensions.pocket.RelevantFeatureHere`
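
For what it's worth, prefs in that namespace can be pinned from a `user.js` file in the Firefox profile directory. The pref name below is from memory, so verify it against your own about:config first:

```javascript
// user.js -- read by Firefox at startup; pref names may vary by version.
user_pref("extensions.pocket.enabled", false); // hide Pocket UI and integration
```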


One built into the browser that can only be disabled by going to a hidden configuration page that pops up a big warning not to change anything.

I'm really glad it can be turned off, but I also wish it wasn't there. I have trouble wholeheartedly recommending Firefox to others, because the default experience (without about:config tweaks) feels messy.


I recently had similar thoughts on Unicode: http://replicated.cc/concepts/unicode Most of its complexity comes from its least-useful features.

We must find some way to roll back that spiralling complexity. Or maybe, like in the old days, burn it all and start anew?


Keep in mind that the 16-bit limitation relied on Han unification, an incredibly controversial project in which related scripts from three Asian cultures got butchered in the process. It was culturally insensitive and limited the range of Unicode, and is a major reason Unicode is actually rare in these locales: China still largely uses GB encodings, Japan Shift-JIS, and Taiwan Big5.

The experience was one of the big reasons Unicode 2.0 raised the limit, as they never wanted to go through that process again.


16-bit is also a non-starter if you want to include historical or rare CJK characters, historical/ancient scripts, symbol sets, etc. So a 32-bit encoding does make a lot of sense.
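
To make the size problem concrete, here's a tiny Node.js sketch (U+20000 is the first CJK Extension B ideograph):

```javascript
// Code points above U+FFFF cannot fit in a single 16-bit unit, so
// UTF-16 spends two code units (a surrogate pair) on each of them.
const ch = String.fromCodePoint(0x20000); // a rare historical CJK ideograph

console.log(ch.length);                  // 2 -- two UTF-16 code units
console.log(ch.codePointAt(0) > 0xFFFF); // true -- outside the 16-bit range
```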


I don't know that I've ever heard a good reason why we need to have historical/ancient scripts, symbol sets, etc. in something like Unicode.


Scholarly publications on the web.


Don't buy it. That's a huge amount of complication to add for such a niche use case that could just as easily use a custom font or straight-up images. Besides, Klingon apparently doesn't get a mere 40 slots for its characters and is probably used more in text communication than some of these ancient languages.


The only extinct language used in scholarly publication is Latin, AFAIK.


Ah, so nobody is doing any research on older non-european cultures then? No archeological or linguistic or other kinds of research? Nothing on african, asian, american cultures?


I mean, a corpus of researchers writing articles in an extinct language, reading them like that, googling in the language? Vatican is the only example I can think of.


No. They are writing about those languages and need to cite the original. Show phrases in that language and then comment on them in English.


Not an expert on languages, but I really think Unicode should leave CJK alone and spin emoji off as a separate project.


I think it is just mistaken in general to use one character set for all purposes. If you try that (as is the case with Unicode), the result is usable for most purposes but equally bad at all of them, rather than good at any.


It's because its creators were well aware of the shortcomings of the previous half-solutions and wanted to future-proof the standard.


I can't help but wonder how they concluded that they should add arbitrary emojis to Unicode. Are emojis the future of language?


If not text, what even are emojis? How is a smiley face any different from "!" or "?"? It carries meaning in the text itself. As a programmer, I want my users to type text including emojis so I can store it as a string, not a messy mix of strings and images.
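
A small Node.js illustration of that point: an emoji is just another code point in the string, not a separate kind of value.

```javascript
// Emoji travel through strings like any other character; no special
// image handling is needed to store or count them.
const msg = "thanks! 😀";

console.log([...msg].length);                  // 9 -- code points; the emoji counts as one
console.log("😀".codePointAt(0).toString(16)); // "1f600" -- i.e. U+1F600, an ordinary code point
```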


I've started advocating for a subset of html5 that

- drops all legacy features that aren't needed to provide documents anymore

- provides sane defaults

- offers a choice between

- 1) no JS

- 2) JS based on a small number of vetted packages for stuff like autocomplete etc

No DRM. No JS frameworks. Just plain old HTML goodness plus more modern goodness (autocomplete) on top.

In my view there's even room for ads as long as they are injected server side and don't report back anything.

Hopefully this could be so much faster that it could become a hit in tech and academic circles.


The problem I have whenever I see arguments along these lines is that nothing is stopping anyone from writing simple HTML using that "subset" if they want to, so a completely separate simpler HTML isn't necessary. If you don't want JS, don't use JS. If you want to vet packages, start a service that vets Node packages. That last part would actually probably be really useful.


Almost. The idea is to make our browser much, much faster than the rest of them at rendering such pages.

This should hopefully get people to switch to our new browser and force other browsers to start competing.


Most people don't want the modern web replaced with an extremely minimalist version only good for publishing whitepapers, and they wouldn't want to use a browser that enforces such a restriction. Pages with minimalist HTML already render extremely fast in modern browsers, that's not a problem anyone actually has.

So why would other browsers even bother to compete with you? Do they bother competing with Lynx?


No, it doesn't need to be minimalist.

Styling is totally OK. And the browser could even be Firefox, just with a separate rendering path if it detects the page is html5core or whatever.

Old pages work just as before. New, html5core-compliant pages load blazingly fast and display just as nicely though.


But as far as I can tell, 'html5core' is just a subset of HTML meant for static documents, meaning it's just HTML.

There's no reason for that to have its own separate rendering path, or even to be its own thing. The process of validating it as 'html5core' would probably take more time than just rendering it as is.


I think you misunderstand.

HTML and a limited subset of CSS are OK. I'd try to remove unused constructs but, more importantly for speed, as much as possible of what stands in the way of efficient rendering.

In the second model there's even room for limited JavaScript, think autocomplete and reloading parts of a page, etc; the only thing is it will work more like components with parameters instead of a Turing-complete language.

It will definitely not take more time to decide: you add a marker to the start of the document, and the browser then sends it down the fast path.
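
A minimal sketch of how cheap that decision could be; the marker string and names below are invented for illustration:

```javascript
// Hypothetical dispatch: if a document starts with an "html5core"
// marker, route it to a stripped-down fast renderer; otherwise fall
// back to the full engine. One string comparison, before any parsing.
const FAST_PATH_MARKER = "<!doctype html5core>";

function pickRenderingPath(documentText) {
  const head = documentText.slice(0, FAST_PATH_MARKER.length).toLowerCase();
  return head === FAST_PATH_MARKER ? "fast" : "full";
}

console.log(pickRenderingPath("<!DOCTYPE html5core>\n<h1>Hello</h1>")); // "fast"
console.log(pickRenderingPath("<!DOCTYPE html>\n<h1>Hello</h1>"));      // "full"
```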


That sounds backwards to me. It's corporations which are generating the vast majority of the HTML in the world. Individuals can stop writing JS (and for content-first websites, I have), but that won't stop us from having to see it.

Turning off JS entirely will kill a lot of common webpages, so that's not really an option any more. Instead, everybody runs a content blocker, to try to battle with corporations over what dumb junk we're exposed to, or exposed by.

Corporations are also writing most of the popular web browsers. This isn't a fair fight.


I do disable scripts. I find that some web pages will still work if you delete or hide an element that covers up everything else (and the script would presumably hide or delete), or unhide an element that is normally hidden (that the script would normally unhide). This doesn't always work, but sometimes it does.


> Individuals can stop writing JS (and for content-first websites, I have), but that won't stop us from having to see it.

And that's the rub.

None of this is actually about wanting to improve the web with a sane minimalist subset of HTML, or separating "documents" from "apps", it's about wishing the web never left universities, never got mainstream, and never got complicated by modernity or tainted by capitalism.

But you can't put that genie back into the bottle and, honestly, I don't think it should be done were it possible. The modern web enables a great deal of creative freedom and cultural expression which simply wouldn't be possible if it were only plain hypertext documents.


Actually, this "JS based on a small number of vetted packages" is similar to my <widget> idea, so that could be how it is implemented. However, another feature I want (which you did not mention) is better user customization, and it would allow that, too.

As an example:

  <script widgettype="http://example.org/ns/widget/unitconv" src="unitconv.js"></script>
  ...more stuff in between...
  <li><widget widgettype="http://example.org/ns/widget/unitconv" a-unit="cups:236.5882365mL">4 cups</widget> of water
(Although not used above, since it's unneeded in this case, the widgettype attribute is also allowed for <noscript>.)

It has the following benefits:

- You can host all of the files yourself if you want to; no need for a separate package manager.

- Full backward compatibility.

- Works with browsers that don't implement <script> or <widget>, as well as browsers that implement one or the other, or both. (However, it needs to implement either <script> or the "unitconv" widget, and one or the other has to be enabled, in order for automatic unit conversion to work.)

- The user can disable features they don't want, or customize them.

- If <widget> is implemented and enabled, then the script need not be downloaded, even if <script> is implemented and enabled.

There are other possibilities too. For example, autocomplete could work by wrapping an <input> inside of a <widget>.


I have a project I would like to build one day called web 3.0.

It's HTML only. No JS or CSS. All styling is done browser side under user control. All interactions are done using simple GET or forms and POST.

There are no cookies, and not even hidden form fields. All data you send with a request is visible to the user.


I also had similar ideas. Note, however, that doing styling on the browser side under user control is already possible (although due to how CSS works, it doesn't work in the ideal way). Furthermore, what you mention is a subset of existing specifications and can be done easily if you are writing the web page. (I generally do this too when writing my own web pages, although often I will just write plain text instead anyways.)

However, there are some features missing from HTML unfortunately; I had a different idea for how icons should have been displayed instead (I posted before on Hacker News), and that there should be a special tag for footnotes, but there isn't. Still, it is probably good enough.


I often thought of the same. Or, even more radical, something that is better than HTML and CSS at presentation but compiles to a (good) subset of modern HTML and CSS.

Move more functions into browsers without the use of JavaScript. At least standardise a subset of something like TinyMCE, and Ajax/Pjax.

Basically we need to pick all the good parts of HTML, CSS, and JavaScript without the "Apps" baggage.


People like to talk about how web applications are more secure than running something locally but that is only because web applications, for now, are significantly less powerful and have less hardware access than a native application running on your OS.

But it's obvious to anyone that that is rapidly changing. As people try to do more and more with web apps, the corporations now in charge of web standards implement more raw access to system hardware, and all the benefits of being in a browser go away while none of the downsides do.


I think it's more than that. Web applications run atop a user-agent-enforced security model that has been bought with decades of educational experiences.

Building one's own native app that direct-binds to the networking layer, passes user credentials around without the paid-for-in-blood browser API, etc., is going to re-invent the wheel on a lot of security problems that browsers had to solve (and history has shown developers love to ignore given the opportunity). It isn't just whether the web app will be a better user experience; it's also whether the app will allow, say, exploits delivered via the server-side state (introduced to your web app via user-modifiable content on your site) to sniff the user's password out of the pastebuffer or some such nonsense.


> Because of the monopoly created by the insurmountable task of building a competitive alternative

And if you go for a competitive alternative based on a more sane stack than HTML/CSS/JS/..., then you cannot compete with the content. Unless your new tech stack is really really really really better for current usage.

It probably means designing it for mobile first: small footprint, low energy consumption and a lot of resilience wrt network hiccups. Plus an integrated monetization scheme that doesn't waste half of the bandwidth on ads or half the battery capacity trying to circumvent ad blockers - all that without selling the color of the user's pants to anyone wanting to buy it.

I know, utopia, utopia.


The over-complication is recursive too. CSS was already too complicated when it was initially written, and the main specification got so complicated itself that it has now been split up into a dozen internally-overcomplicated submodules. JavaScript went from a language that could be implemented in a weekend to a C++-like multiparadigm monstrosity, somehow without fixing most of the serious "WTF happened here" problems.


> The over-complication is recursive

The phrase "fractal of bad design" comes to mind.. Although it was originally used for PHP, this anti-design-pattern seems to rear its head almost inevitably with popular and long-lived languages/systems (which I suppose includes C++, though I'm not qualified to judge).

I've heard people pinning hopes on WebAssembly. It does seem possible that eventually it could allow new languages to replace CSS and JS, maybe even support the development of new kinds of applications that replace (the current paradigm of) the browser as a cross-platform VM.


JavaScript is getting simpler in a lot of ways. Things that are confusing to developers of other languages, e.g. function scope, can be avoided by telling people to use let instead.
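
For example, the classic loop-closure gotcha disappears with let:

```javascript
// With var, every closure would share one function-scoped i and all
// return 3; let gives each loop iteration its own binding of i.
function counters() {
  const fns = [];
  for (let i = 0; i < 3; i++) {
    fns.push(() => i);
  }
  return fns.map(f => f());
}

console.log(counters()); // [0, 1, 2]
```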

Also a lot of the things I wanted 5-10 years ago that I had to work around are now being included. For example there is a decent-ish http request API.

I don't like web assembly. I think it is actually worse in a lot of ways than the huge JS frameworks that people think that it will replace.


> I conclude that it is impossible to build a new web browser. The complexity of the web is obscene. The creation of a new web browser would be comparable in effort to the Apollo program or the Manhattan project.

True. But if the web did not adopt those features, then you would have to install more native applications to fill the void.

If you make a simpler web browser, one that is doable by a newcomer, with the subset of features that you deem appropriate for the web, nothing is stopping you. You could still visit most websites that are content only. But for some of the web applications you would have to install the native version of those applications to fill the void.

Either way you have the same outcome. You have a simple web browser that can be made by newcomers, but you have this other application platform (Windows, Mac, Android, Linux) where "the complexity is obscene, the creation of a new operating system would be comparable to the Apollo program or the Manhattan project."


Agree very strongly. The amount of innovation we’ve lost because of the high barrier to entry is also staggering to try to conceive.


Can you give an example of what innovation we've lost?


The first thing that comes to my mind is accessibility. If browsers were simpler to create and the web were simpler, people could much more easily develop accessibility tools.

I think the privacy and tracker-blocking crowd would have a lot more options. You could also have browsers designed for low-power or weak-CPU environments. Better automatic limits on what code runs and when.

Next category I picture are browsers with really customized interfaces and displays. Different approaches to navigation, hotkeys. Tiling sites across the screen.

Browser functionality could be built into more devices and contexts (e.g. inside another application).

I know none of these is "staggering" but I'm one person brainstorming for five minutes. If enthusiasts and hobbyists all over the world could more easily tinker and try things out, who knows what ideas would pop up.


So curtailing the complexity of sites so that individual users can make heterogeneous deep content.

That doesn't actually solve the problem. Excluding webgl so that someone can write an accessibility layer doesn't mean we have an accessible 3D site; it just means we can't do 3D sites.


That's a tradeoff worth discussing. I think there should be an entirely different cross-platform layer for interactive and intense SPAs like Google Docs and 3D sites. But making it possible for 0.00001% of sites to have 3D comes at a huge cost.


Making it possible for 3% of users to access a site via accessibility APIs comes at a huge cost, but it's done anyway because it's a good idea.

The thing about cutting features because they aren't ones that fit one's use case is that there's always someone who does need that use case. "Not a lot of people use it so we should cut it" isn't a great criterion for a platform.


They probably tried, but it was staggering to try to conceive.


Of all the specifications, words, and complexity this article mentions there are only 3 really costly areas that the web browsers devoted a good portion of their net worth to solve:

* Presentation (generally and CSS specifically)

* JavaScript JIT (and other performance pipelines, DOM efficiency, caching architecture, and so forth)

* and accessibility

I know there is much more to the web than that, and notice that I did not mention security. Those 3 things took years and lots of research and failure to get to today's status quo. Everything else is comparatively trivial on the basis of man-hours and money spent. If you can really nail those three quickly and cheaply, then everything else is trivial from a perspective of money and time.


I can see a 'hipster' web browser with a reduced feature set coming along at some point, and all these heavy monstrous JS apps masquerading as web pages will fall out of style.


Highly doubt that. The vast majority of users do not notice, or care.


Plenty of users notice that the web is slow, and complain about it. Unfortunately, they have no idea why so they blame their mobile or broadband provider.


Wasn't that Chrome upon its introduction? Compared to the familiar IE and FF: no title bar, no menu bar, no row of buttons, a single Omnibar instead of address bar + search bar, a separate memory process per tab.


Sometimes I dream of a Gopher type of web where you're allowed to make simple input fields, SSL is allowed. That's it. No JavaScript, just basic input / output. I could imagine HN being on this simple web. Maybe some sort of markup to support the upvoting capabilities of HN but that's about it.

I would even allow for ads since they would at best be banner images, not pop ups, not things that track you around the internet, and what have you. As for page style? Let the client decide, which ultimately means: let the end-user decide what they're most comfortable with.


Luckily Gopher is still alive, and in fact on the Gopher mailing list there was a thread earlier this week about `gopher://` over TLS [1]; in fact you can still use `gopher://` either natively (by using `lynx` or a number of Firefox plugins), or through a HTTP gateway like [2] [3]. (There are countless servers and clients out there.)

Also there is a newly developed Gopher alternative called Gemini [4] that uses TLS by default, and supports natively as an alternative to HTML a Markdown-based simplified format. (Given that Gemini and Gopher are much alike, most of the content is also available over `gopher://`, and via an HTTP gateway it can be accessed from a plain browser, like for example [5].)

[1] https://lists.debian.org/gopher-project/2020/03/msg00005.htm...

[2] https://gopher.floodgap.com/gopher/

[3] https://gopher.floodgap.com/gopher/gw

[4] https://gemini.circumlunar.space/

[5] https://gopher.floodgap.com/gopher/gw?gopher://zaibatsu.circ...

----

However if Gopher or Gemini is "just too much", one could be active in pushing web publishers (and thus browser providers) into simplifying their web presence by: disabling JavaScript and thus background fetches and workers, cookies, fonts (thus including "icon fonts"), etc., forcing HTTPS and caching, disabling anything except `GET` methods.

I've tried some of these myself, and unfortunately the experience is one of two extremes:

* either everything just works, and it works flawlessly: the page loads instantly, no pesky cookie consent popups, no ads, and it's just a pleasure to browse that site; (most "small" blogs fall into this category;)

* or nothing works, and most of the time I get a "blank" page or just a paragraph "JavaScript is required to run this application"; (how hard is it to provide basic HTML?) (and unfortunately even documentation sites fall into this category, like for example Go's own documentation site...) (most of the time I just close that site, or if I really want to access it, I open it in a "normal" Firefox / Chrome profile;)

So, like with any civic right, everything is gained through participation: if we want a simpler web, we need to actively start consuming a simpler web. :)


I'd happily do away with web browsers and use only native programs for each site I usually visit. For instance, when I still used reddit, I did so through a fantastic command-line program called rtv [0]. I'd most definitely use something similar for HN, or any other platform--Wikipedia, for example, or ArtWiki, or MUBI. We certainly don't need a web browser as it is conceived today.

[0] https://github.com/michael-lazar/rtv. No longer maintained.


So you'd happily do away with web browsers and use shell apps instead.

rtv is still a hosted app; it's hosted in a command-line interface with command-line rules instead of an HTML-rendering interface with w3c standardized rules. Take away the shell, and rtv won't run any more than a web page "runs" without a browser to interpret it. But these things aren't quite so different as one may imagine (the capability set of one is much broader; the compatibility set of the other is probably much broader).


You might be interested in weboob then. I am not using it, but I know they provide lots of small apps to access different websites. https://weboob.org/


Modern web browsers are the single best thing for security. I'm not going to install hundreds of apps from hundreds of sources.


I feel really torn about this issue. When I was a child, the web was much simpler and similar to the ideal the author proposes. I enjoyed it a lot and today appreciate a website (like HN!) that isn't an exploding mass of APIs and functionality. I worry about users with browsers that are less feature-complete or have less computing resources than I do. If computing resources ever become a scarcity, I want someone to be able to use the internet with a 15-year-old computer if they need to. That barely even covers accessibility issues as well.

At the same time, I love the vibrant opportunities to be creative with the modern web. I recently made a submission[1] for the 7-Day Roguelike competition and used WebGL, WebAudio, and other various APIs to make something I'm really proud of. It's incredible that someone with little formal training can create art and easily share it with others over the web.

Is there a balance here? I almost want webpages to have a "simple web" view and a "rich web" option that browsers both support.

[1] https://danbolt.itch.io/nayr-odyssey


Completely agree

We can add to that the churn of websites trying to remain in sync with browser 'features'


> it is impossible to build a new web browser.

Mission accomplished. The W3C constituents have created facts with their own browsers, and then burned the ladder behind them: the classic move to stop others from rising to power the same way you did.


Or the complexity is irreducible and making something on-par with browsers that have had decades of development will require real effort.

It isn't "ladder burning" that countries already have electrical grids so it's hard to build a competing electrical grid.


If I'm allowed to innovate here: how about a DNS server only for protocols?

Since the list is short at the moment, the browser or OS can periodically download the index and register each known protocol to a dummy handler that simply displays what one can do with it.

Monetization (if needed) can be done by auctioning commercial slots in the list. (Marking clearly which are free and open source and which are commercial efforts to support that protocol.) Each entry can have an explanatory link to a World Wide Web document, PDF, text file, DOCX, XLSX, PPTX, etc. (go nuts) explaining these new and exciting times.

Ideally the software to work with the protocol can also be installed from or by the dummy at whim. If multiple [say] video:// handlers are installed the dummy can be configured to pick a default or present a menu every time thereafter.

Then we can end the days of circular finger-pointing, where not explaining a protocol is always someone else's fault and we silently agree to show error pages (or worse, search results) if one tries to open a link like ipfs:// gopher:// news:// nntp:// etc.

If all members of the collective of IT nerds know exactly what the user is trying to do or what desired behavior looks like, we can't hide behind "I dunno!?" type error pages. It's just too embarrassing. It is our responsibility to teach grandma how to use magnet URIs if she needs it.
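
To sketch what the dummy handler could say (every scheme entry and name below is made up for illustration):

```javascript
// A small downloadable index mapping URI schemes to a description and
// candidate handlers. An unregistered protocol link then gets an
// explanation instead of a blank error page.
const protocolIndex = {
  gopher: { desc: "a menu-driven document protocol", handlers: ["lynx"] },
  news:   { desc: "Usenet newsgroups over NNTP",     handlers: ["a newsreader"] },
  ipfs:   { desc: "content-addressed file sharing",  handlers: ["an IPFS client"] },
};

function explainLink(uri) {
  const scheme = uri.split(":")[0];
  const entry = protocolIndex[scheme];
  if (!entry) return `No entry for "${scheme}://" in the index.`;
  return `${scheme}:// is ${entry.desc}; you could install: ${entry.handlers.join(", ")}`;
}

console.log(explainLink("gopher://gopher.floodgap.com/"));
// "gopher:// is a menu-driven document protocol; you could install: lynx"
```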


What platform would the protocol handlers be built for? Would they have full access to the system like most native programs, or run through a sandbox?

I think it would be ideal if the handlers were able to run on a platform that was defined by open standards and well-sandboxed by default... the web is a good example of this.


yes


I have to admit I was being facetious; my point was that anything solving this problem is going to have the same scope of web browsers and is more directly solved by them. If you imagine your setup was in-place, and pretty much everyone defined their own protocol with their own app, then that's roughly equivalent to the web as it is now, except also that the web has already done the hard work of establishing the open standards and being cross-platform. It's not clear to me what benefits introducing the things in your post would add to the web.


Web browsers have an ideological position that isn't compatible. Mozilla doesn't want to promote anything.

While we can create a system that supports n protocols, we really only need a handful of new things at a time.

The benefit is fetching stuff over ipfs, onion, gopher, freenet, zeronet, dat, blockstack, news and even irc

Until we can visit ipfs://example.com after a clean install, the whole project borders on a pipe dream.

I argue that if you can't see the benefits, we've done a terrible job explaining them to you. It is sort of a chicken-and-egg problem. Why would I put a news:// link on my website if you can't do anything with it?

The expected behavior is a prompt asking if you want to install a news reader and register with a news server.

https://en.wikipedia.org/wiki/List_of_Usenet_newsreaders#Fre...

https://www.eternal-september.org

Don't expect grandma to figure that stuff out by herself. I might as well not post news:// links. She would just be confused.
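The "system that supports n protocols" idea above can be sketched as a simple scheme-to-handler registry. This is only an illustration in Python with hypothetical names (`register`, `open_link`, `HANDLERS`), not any real browser API:

```python
from urllib.parse import urlsplit

# Hypothetical registry mapping URI schemes to handler callables.
HANDLERS = {}

def register(scheme, handler):
    """Install a handler for a scheme, e.g. a newsreader for news:// links."""
    HANDLERS[scheme] = handler

def open_link(url):
    scheme = urlsplit(url).scheme
    handler = HANDLERS.get(scheme)
    if handler is None:
        # No more "I dunno!?" pages: the system can name exactly what is
        # missing and prompt the user to install a handler for it.
        return f"No handler installed for {scheme}:// -- offer to install one?"
    return handler(url)

register("news", lambda url: f"opening {url} in a newsreader")
```

With that in place, `open_link("news://eternal-september.org")` dispatches to the newsreader, while `open_link("gopher://example.org")` yields an actionable prompt instead of a dead-end error page.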


There may be a market for a "microkernel" redesign of browsers in the next 5 years. OP's point is right that the current design is impossible for new entrants to implement. That alone probably won't create a market for something new (in no small part because the 2 major engines are FOSS) but if we hit an intersection of new technologies along with a need to rapidly adapt the Web to new computing platforms (eg AR/VR) then it could certainly happen.


The space seems ripe for someone to come along and re-implement the standard W3C model as an app running atop a more flexible stratum (possibly wasm?).


I sometimes browse with Dillo ( https://www.dillo.org/ ) and although there are a lot of sites that are broken, there are many that render just fine. (E.g. http://www.mathsinstruments.me.uk/page66.html )

Dillo finishes starting before I finish clicking on its icon.

(As in, it loads and displays itself all in the moment between the button-pressed and button-released events.)

- - - -

Check out Effbot's "Generating Tkinter User Interfaces from XML" https://www.effbot.org/zone/element-tkinter.htm

There's a gulf between that and XUL, eh? ( https://en.wikipedia.org/wiki/XUL )

"The Wheel of Reincarnation" has been turning for a long time now: http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...

X Windows!? NeWS? Postscript and PDF? (PDF has JS in it now!) I'm sure we could fill a book just with languages and frameworks for GUIs, eh? VPRI's STEPS program created Nile, etc...

Display problem, Y U NO solved yet?

Why hasn't something (better than HTML/CSS/JS soups) congealed in this area? Is it really so hard? Are we just running in circles?


The history of the web browser looks intriguingly like the history of the terminal, although I'm not sure what lessons we can take from that.


The shiny new macs being advertised today will ship with a terminal that is compatible with standards established 45 years ago. That's quite a humbling thing to realise.

I'm sure the various character sets, error correction and parity, control codes etc constituted standards hell for terminals back in the day. But a solid set of conventions seems to have survived.

I'm not sure how much of HTML will survive in 45 years, and with what historical 'depth of field'. Is it more likely to contain <marquee> or web components?


Yes, funny fact: when debugging a problem with a new UI library, I found that the Cocoa API for handling keyboard shortcuts does not support Cmd-. because that was the original signal to terminate the process (like Ctrl-C). The workaround involves calling a bunch of APIs just to listen for the signal.


Here's a feature (technology) I wish more programming languages/ecosystems had: Gaoling (sandboxing)

The unique thing about JS is the fact that it's locked down, and further permissions need user auth. Android API has a permissions model too.

Any browser needs to run some kind of code, but in a safe manner, so we need watertight code gaols. Outside the above-mentioned ecosystems, it's not so easy.
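As an illustration of why watertight gaols are hard outside purpose-built runtimes, here is a sketch in Python of a naive sandbox that strips builtins, plus the classic introspection trick that escapes it (the function name `naive_sandbox` is hypothetical):

```python
def naive_sandbox(src):
    # Attempt a "gaol" by evaluating untrusted code with no builtins available.
    return eval(src, {"__builtins__": {}}, {})

# Harmless code runs as expected:
two = naive_sandbox("1 + 1")

# But the gaol leaks: attribute introspection on a bare tuple climbs back
# up to `object` and from there reaches every class in the interpreter.
escape = naive_sandbox("().__class__.__base__.__subclasses__()")
```

From that subclass list, untrusted code can typically dig out file or process primitives again, which is one reason real sandboxes (JS engines, OS-level sandboxing, wasm) enforce isolation at the runtime or OS boundary rather than by filtering the language.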


Windows has a similar permissions model (in AppContainer/UWP)


A rendering engine is not a browser.

Plenty of people are building browsers, yours truly included, but they're not building their own rendering engine, indeed for the reasons outlined in this article.


I'm optimistic.

I think browsers will continue adding features until they become the inevitable end result, a portable operating system. Once browsers reach this point it will probably slow down and the next goal will be pulling it to pieces and making it modular so we can deploy whatever subset of browser we want for any given use case. And we will probably all be using some "distro" of chromium.


I agree with the author's assessment of the state of the web and browsers today. Both are far, far too expansive and risky for my comfort.


Why is there no major Firefox (Gecko) based browsers? Microsoft, Opera, Brave, etc. have all gone Chromium.


Ever since I tried to modify and build Firefox on Windows, I'm not surprised anymore that it isn't bigger. It's a mess to work with, and we quickly gave up on it.


Chromium engine is fairly easy to embed and integrate with new code. Firefox's isn't.


Even worse, we trust web browsers to manage passwords now too. This is insanity icing on top of the complicated cake that no one can understand.

I feel as if we are hurtling faster and faster to our ultimate doom. And all we seem to do is mash the gas pedal harder.


We already trust our auth cookies and the actual content of the connections to our browsers. Having them be more in charge of the auth process makes a lot of sense. Especially given that modern browsers have a lot more security hardening (sandboxed multi-process architecture) and professional review than other applications.


Silly rant. If everyone who didn't like the world changing got their way, we'd be back in agricultural times. There's no weight to this argument other than that building a web browser is hard. It's been hard for a long time, well before CSS3!


Can we agree that your imagined non-reckless, bounded alternative must be able to render the content of a document, where the content of a document includes both wrapping text and a button-triggered DOS version of Prince of Persia?

https://archive.org/details/msdos_Prince_of_Persia_1990

If we agree on that, then I'm very interested to hear about non-reckless alternatives to our current bloated web.

If we don't agree, then I am not at all interested in hearing about Amish web.


"I've got 12 browser windows with 50 tabs each. And one of the tabs is playing a YouTube video while another is playing a movie on Netflix. And I've got a Facebook tab or two. Bunch of social media. Then there are the browser extensions. Gmail. A chat client or three. Basically running whatever quality of code and image optimization the sites I visit happen to ship.

"Hey! Why is my browser using up 10GB of RAM all by itself? It's so bloated!"

tl;dr: don't just blame the browser


What’s wrong with WebVR?


The premise of this article is false. It's possible to build a new browser; it's just not worth it. If you can simply fork Chromium, like Microsoft recently did, why would any time-constrained project not go that route?

You can remove the google integrations fairly easily and still get patches from the main project for core browser work.

My main point is that Microsoft could have written a new browser instead but they saw it was more efficient to just fork one.


Is this a fundamental problem? I mean, all the code required to make your own browser is Open Source - Microsoft can customize the Chromium engine as much as they like, adding in their own patches and removing parts they dislike.

I mean, would you write your own networking stack? Probably not. You'd take an existing tool and, if you really want, make it your own.

Most of what humanity does is built on the work of those who made our lives easier with the tools they created, tools that changed the manufacturing process altogether.


Another reason why this is not a fundamental problem is that a user agent made for the user would have to violate and ignore most of the standards anyway in order to honor the user's interests. Reimplementing all those corporate- and ad-tech-driven standards is simply a waste of engineering; if anything, it will only hold back anyone attempting it and won't let them compete.


What specific standards would a user-respecting browser not implement? I can't really think of anything besides maybe EME and third-party cookies, though it's not like those are really core features.



