
The reckless, infinite scope of web browsers
https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html
======
skyfaller
There's one thing I'd like to mention to everyone who says it is impossible to
build a new browser: while that may be true today, I would just like to point
out that the ancestor of the dominant browser engines of today, Blink and
WebKit, is KHTML from the Konqueror web browser.

While KDE has had some contributions from large corporations, it would be a
gross exaggeration to call KDE (or any Linux desktop) a popular success. It
more closely resembles a hobbyist project than a commercial juggernaut.

While it may be true that KHTML/Konqueror was not as polished as WebKit/Safari
or Blink/Chrome before those forks poured massive amounts of corporate
development effort into those browsers, the fact is that it was a solid enough
foundation for those later corporate contributors to leverage into world
domination.

In other words: today's small, scrappy upstart browser that only hobbyists use
may be the embryo of tomorrow's near-monopoly. Please don't abandon your ideas
for new browsers and engines; give them a shot. Even if you do not directly
benefit from its future success, you may at least give the web a gift that
everyone will someday enjoy. The new boss, Chrome, is an unqualified
improvement for the web over Internet Explorer, if only in standards support
and open source code. I only hope that someday we can replace it with
something better than a new corporate overlord.

~~~
udueebdhdu
I see these sorts of appeals a lot on HN but never really understood the
point. Sure, software had a good run of small devs making big splashes, but
it's pretty obvious to anyone watching that the garage days are over as far as
big projects are concerned. The only real exception is the European software
devs who, from a market perspective, exist mainly to sell the most promising
projects to China or the US since they can deploy those projects in a way that
drives growth. Why shame people for making smart moves?

~~~
skyfaller
Is asking people to keep hope alive the same thing as shaming them for feeling
hopeless? I hope my words don't come out that way, since it's certainly not
what I intend.

Also, one point I'm making here is that a small dev may not be able to make a
world-dominating project by themselves, but it may serve as the foundation of
something larger later. Is that really surprising or controversial?

Perhaps a more important point I'd like to make is that neither Apple nor
Google was quixotic enough to start a browser engine from scratch, for all of
their expertise and cash. Without KHTML, we might still be suffering under IE
or something even worse. It may take someone willing to tilt at windmills to
dethrone Blink/Chrome; I don't think the suits will save us.

~~~
emilsedgh
KHTML/Konqueror, Webkit, Safari and Chrome didn't really play an important
role in IE6's destruction.

It was Firefox. Firefox 1's slogan was literally "take back the web".

It was only Firefox that took IE6 down.

(I'm a KDE contributor and I may even have a couple of patches on Konqueror)

------
akling
While I agree that a lot of the W3C standards are silly and not worth
implementing, I'm still determined to build a new browser from scratch.[1][2]

I don't believe that it's impossible. It will take a lot of time and effort,
sure. But not impossible. :)

1\.
[https://github.com/SerenityOS/serenity/tree/master/Libraries...](https://github.com/SerenityOS/serenity/tree/master/Libraries/LibJS)

2\.
[https://github.com/SerenityOS/serenity/tree/master/Libraries...](https://github.com/SerenityOS/serenity/tree/master/Libraries/LibWeb)

~~~
maram
>it will take a lot of time and effort, sure. But not impossible.

“Taking a lot of time” is what’s implied by impossible.

By the time you are done building whatever it took a long time to build, a new
generation of users has emerged.

Either a) your product is obsolete to the youngest users, or b) you, as the
product builder, have lost the point of reference of the youngest generation.

From the blog:

>I conclude that it is impossible to build a new web browser.

And a new OS. The entire world now communicates through only two OSs: Android
and iOS.

~~~
kbenson
> The entire world is now communicating through only two OSs: Android and iOS.

That's silly and reductionist. The world uses those OSs, but not _only_ those.
Almost every person that sits at a desk for work also uses something else.
Every student in the US is expected to use a computer with a keyboard for
writing papers. If they don't have their own computer to do this, they are
provided access to one.

~~~
zeta0134
When I was at college a decade* ago, there were quite a few students carrying
around an iPad with a keyboard cover and doing all of their schoolwork on
that. Heck, I understood the draw the moment I did the same with the (doomed,
but then-novel) Surface RT. That thing was lightweight and lasted _all day_
without needing to worry about finding an outlet, which is a big deal when
you're running to a different lecture hall every couple of hours.

For some disciplines, especially anything involving CAD, I think a desktop
computer is still necessary. That said, I think you'd be surprised just how
much of a typical college education can be done on nothing more than a basic
tablet and a handful of productivity apps.

* (...wow, it really was a decade. How the time flies.)

~~~
smichel17
> I think you'd be surprised just how much of a typical college education can
> be done on nothing more than a basic tablet and a handful of productivity
> apps.

I think upon reflection this is less surprising than you might initially find
it. After all, college predates computers.

------
jcranmer
I'm surprised no one pointed this out yet, but there aren't 1,217 W3C
specifications. There are 1,217 published documents... which _includes_
drafts, old versions, and notes saying "we're not developing anything
anymore."

Filter to Recommendation-status documents in their latest versions, and only
295 specifications remain. Even then, there's still some duplication (e.g.,
HTML 5.1 and HTML 5.2 count separately), and scrolling through the list, well
under half are actually relevant for a web browser.

~~~
IanSanders
My take was that one needs to be "at least aware" of all mentioned documents,
which, while not as bad, still isn't ideal.

------
pippy
The increasing complexity of web browsers has less to do with companies using
browsers to shoulder competition out (after all, why would they bother to make
their engines open source?) and more to do with the web becoming the default
application platform. And this has its roots in companies... trying to
shoulder out competition with native UI frameworks. Microsoft just two days
ago introduced WinUI, another proprietary UI framework tied into their
ecosystem, bringing Microsoft's total of non-standard UI frameworks to seven
(or is it eight?). Developers want cross-compatibility, so they naturally turn
to the web, usually in the form of Electron or native web apps. And if they do
choose to go native, iOS and Android are at the top of the list nowadays.

So developers expect native OS-level API access from browsers. This is the
true source of the bloat. And to be honest, we're converging on an open
standard cross-platform API. This is great.

~~~
ksec
> and more to do with the web becoming the default application platform.

I don't even see any trend to suggest that is the case, even if you count
Electron as being one. Most apps are not web-based, and don't ever intend to
be.

>So developers expect native OS level API access from browsers.

Exactly. It was a certain group of developers that wanted the Web to be the
default application platform, starting with Mozilla's Firefox OS and later
Chrome OS.

I think we need to differentiate between a Web Page Engine and a Web App
Engine, although I would agree that even a Web Page Engine today is far too
complex.

~~~
tylerjwilk00
> and more to do with the web becoming the default application platform.

>> I don't even see any trend to suggest that is the case, even if you count
Electron as being one. Most apps are not web-based, and don't ever intend to
be.

\- Email

\- Chat

\- Music

\- Calendar

\- Word Processing

\- Spreadsheets

\- Social

\- News

\- TV

A shorter list would be what isn't on the web platform?

\- Video games

\- Graphics programs

~~~
oefrha
> A shorter list would be what isn't on the web platform?

> \- Video games

Video games (not talking about Flash/HTML5 games) are on the web platform.
[https://github.com/emscripten-core/emscripten/wiki/Porting-E...](https://github.com/emscripten-core/emscripten/wiki/Porting-Examples-and-Demos)

If you count streaming, AAA games have been playable from the browser for a
while now. I'm not a Google Stadia user, but I participated in Project Stream,
which streamed Assassin's Creed Odyssey to browsers, and from my brief
experience it was pretty impressive.

------
Nihilartikel
I think the takeaway from this is that there is a real need for a
platform-agnostic application layer above the native OS. It's a peculiarity of
history that HTML evolved into it, and not Java, Flash, Silverlight, Adobe
Air, etc., probably because the timing was right for applications to exist at
least partially remotely.

It's fun to dream about a more deliberate replacement, with the same
local/remote transparency of a web app but with well-thought-out abstractions
for local hardware and permissions. Like: binaries are LLVM IR, JITed to the
local architecture; audio is OpenAL; graphics are Vulkan; local devices are
[no-current-equivalent-standard-API], with fine-grained security on all
features. It could be run locally, or with resources streamed over the
network. Opening an application could just be a 'URL'..

A little like a progressive web app, but not a weird outgrowth of an old page
layout standard.

~~~
coldpie
I suspect what will doom any such project is that it won't be designed for the
primary use case that made HTML/JS succeed: teenagers with nothing more than
Notepad.exe and a search engine. It will be made super elegant and efficient,
and no one will use it, because it will take more than three lines to make
text appear and a typo will result in a bunch of errors. HTML/JS are garbage
languages and web standards are cobbled-together trash, but their strength is
in their utter simplicity and in how forgiving browsers are in interpreting
them. If you don't have that, you've already lost.

~~~
duxup
Yeah the barrier to entry with HTML/JS is so low for anyone to make a thing
that the whole world can see and frankly ... that's kinda awesome.

From there you can just be a simple site or scale up to some really impressive
applications with a TON of free resources available on the internet. That's
pretty amazing IMO.

As a webdev who has been dipping my toes into C#... I fire up a new command
line application in VS and the fans on my laptop take off and there's just a
lot of boilerplate ... come on man.

~~~
72deluxe
But your C# application is debuggable, unlike those "impressive applications"
written in JavaScript with thousands of async callbacks, where you have no
idea which originating call caused the problem in your minified JavaScript;
and even if you find it, you still have to deal with minified code that is
unreadable.

What boilerplate are you talking about in C#? If you think that's bad, you
should try something like C++ (although it has gotten far better in recent
years), and I say this as a fan of C++.

C# is compiled, JavaScript is not. Your C# application will burn CPU compiling
ONCE but forevermore require half the resources of the interpreted JavaScript
in webpages, which needs to be interpreted again and again and again.

This is an issue that is worth bearing in mind for energy usage, and has a
significant impact on our future if we just defer to "easy to write, expensive
to run" web languages.

I'd say C# is easier to write than JavaScript and is easier to debug for the
most part.

~~~
duxup
I get the differences, I'm not making an argument for any single language to
be dominant or anything.

I'm not learning C# because I think it is bad ;)

As for debuggable, I'm not sure I know enough about C# to comment on that but
I find JavaScript ... "debuggable".

~~~
kovac
Some things that are going to impress you down the line with C# debugging: the
parallel stacks view for debugging multi-threaded apps (without it, this can
be significantly harder); remote debugging (sometimes you simply can't debug
locally for practical reasons, e.g. your app interfaces with something
expensive or impossible to deploy on your dev machine); the watch feature in
the debugger, which lets you evaluate complex LINQ queries and even expression
trees on the fly; and debugging external assemblies by attaching the debugger.
VS is heavy on resources, but it delivers value, unlike a gazillion JS apps.

------
pcr910303
This post… probably doesn’t really mean anything.

Firstly, judging a web browser’s complexity by the size of the spec catalogue
is already unfair. The catalogue is basically a dump of everything related to
the web, for example the specs of JSON-LD (which probably has almost no
relationship to implementing web browsers, since it’s just a data format on
top of JSON).

Also, word count doesn’t correlate with complexity. That’s like saying that
one movie will be more entertaining than another because its runtime is
longer. Web-related specs are much, much more detailed than POSIX specs
because of their cross-platform nature: we’ve already seen what happens when
web-related specs look like POSIX. Anybody remember trying to make web pages
that work in IE, Safari, and Firefox in the early 2000s? (Or, just try to
make a shell script that works on macOS, FreeBSD, Ubuntu, and Fedora without
trying it out on all four OSes. Can you make one with confidence?)
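
One concrete instance of the portability trap hinted at above (a hedged
sketch, not from the original comment): POSIX leaves `echo`'s handling of
backslash escapes implementation-defined, so dash and bash disagree out of
the box, while `printf`'s behavior is fully specified.

```shell
# echo's treatment of backslash escapes is unspecified by POSIX:
# dash expands '\t' to a tab, while bash (by default) prints it literally.
echo 'a\tb'       # output differs between shells

# printf IS specified by POSIX, so it is the portable choice:
printf 'a\tb\n'   # always prints 'a', a tab, then 'b'
```

This is exactly the kind of divergence you only discover by testing on every
target system, unless the spec pins the behavior down.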

Really, it’s just tiring to hear the complaints about web browsers on HN,
especially the ones about ‘It wasn’t like it in the 90s, why is every site
bloated and complex? Do we really need SPAs?’ Things are there for a reason,
and while I agree that not every web-API is useful (and some are harmful), one
should not dismiss everything as ‘bloat’ or ‘useless complexity’.

~~~
ddevault
>Firstly, judging a web browser’s complexity by judging the spec catalogue is
already unfair. The catalogue is basically a dump of things related to the
web, for example the specs of JSON-LD. (Which probably has almost no
relationship to implementing web browsers, since that’s just a data format of
JSON.)

[https://paste.sr.ht/~sircmpwn/13c1951014a256e9f551296a129bf6...](https://paste.sr.ht/~sircmpwn/13c1951014a256e9f551296a129bf6d10e9303dc)

Specs like JSON-LD are included but do not meaningfully change the numbers.
Also, the word count I used in the article is _less than half_ of the real
word count I ended up with, just to put to rest any doubts like these about
the margin of error.

>Also, word count doesn’t correlate with complexity. That’s like…. saying that
a movie will be more entertaining than another one because it’s runtime is
longer. Web-related specs are much, much more detailed than POSIX-specs
because of their cross-platform nature: we’ve already seen what happens if
web-related specs look like POSIX: anybody remember trying to make web pages
that work both in IE, Safari, Firefox in the early 2000s? (Or, just try to
make a shell script that works on both macOS, FreeBSD, Ubuntu, and Fedora
without trying out on all four OSes. Can you make one with confidence?)

This doesn't seem right at all. How do you figure that POSIX is less specified
than the web? Have you read either standard?

Can you make a website with confidence which works on all browsers? When was
the last time you found a browser-specific issue? Mine was three days ago.

~~~
pcr910303
> Specs like JSON-LD are included but do not meaningfully change the numbers.

JSON-LD was a simple example that I used because it was on the front page.
That JSON-LD doesn't meaningfully change the numbers doesn't change the fact
that the catalogue is a dump of things related to the web, unlike each of the
specs you compared against (POSIX, C, C++, etc.), which has a narrow scope.
For example, the link dump has 104 links that mention CSS in the slug: it
would probably make sense to produce one comprehensive CSS spec (which would
probably reduce the size, since it would remove a lot of repetitive parts),
but the web standards don't work like that.
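
The slug-counting exercise described here is easy to reproduce with standard
tools. A hedged sketch, using a tiny stand-in `urls.txt` rather than the
actual link dump from the article:

```shell
# Stand-in for the real W3C TR link dump (the actual file is much larger).
cat > urls.txt <<'EOF'
https://www.w3.org/TR/css-grid-1/
https://www.w3.org/TR/css-flexbox-1/
https://www.w3.org/TR/html52/
https://www.w3.org/TR/json-ld11/
EOF

# Count how many URLs mention "css" in the slug.
grep -c 'css' urls.txt   # -> 2 for this sample file
```

Running the same `grep -c` against the real dump is presumably how the 104
figure was obtained.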

> This doesn't seem right at all. How do you figure that POSIX is less
> specified than the web? Have you read either standard?

That's based on some of @chubot's oilshell blog posts[0][1][2][3] about shell.
Excerpts from the blog posts:

> POSIX Uses Brute Force: In theoretical terms, a language is described by a
> grammar, and a grammar accepts or rejects strings of infinite length. But
> POSIX apparently specifies no such thing. Only the "unspecified" cases are
> allowed to use a grammar!

> Over the last few years of implementing shell, I've found many times that a
> careful reading of the POSIX spec isn't sufficient.

> The POSIX shell spec says that shell arithmetic is C arithmetic, so it's
> natural to wonder what the creators of C used. They didn't use grammars to
> specify their language. The code came first and grammars came later.

> Discovery: all shells are highly POSIX compliant, for the areas of the
> language that POSIX specifies. But POSIX only covers a small portion of say
> dash, let alone bash.

There are probably many more posts about this, but I think this is sufficient
to show that POSIX is underspecified.
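
A small illustration of the "POSIX only covers a small portion" point quoted
above (my sketch, not from the blog posts): `local` is implemented by
essentially every shell used as `/bin/sh` (dash, bash, ash, ...), yet it
appears nowhere in POSIX.

```shell
# 'local' is a de facto standard: dash, bash, and ash all implement it,
# but POSIX does not specify it at all.
f() {
    local x=inside   # not POSIX, yet works in every common /bin/sh
    echo "$x"
}
x=outside
f                    # prints: inside
echo "$x"            # prints: outside (the function did not clobber x)
```

So a "highly POSIX compliant" shell can still rely on large amounts of
behavior that exists only by convention, outside the spec.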

> Can you make a website with confidence which works on all browsers?

I can make a website that works reasonably well on all browsers with
confidence. At least on the level where one can develop a full SPA against one
browser and then try and test it on the others. Most browser-specific issues
simply don't come up, and when they do, the fix is usually a one-liner.

> When was the last time you found a browser-specific issue? Mine was three
> days ago.

That's interesting, what was it? (Genuinely interested in that one.) In my
experience, it doesn't surface that much unless you're trying to use WASM or
some new/non-standard APIs.

[0] [https://www.oilshell.org/blog/2017/08/31.html#posix-uses-bru...](https://www.oilshell.org/blog/2017/08/31.html#posix-uses-brute-force)

[1] [https://www.oilshell.org/blog/2020/01/alias-and-prompt.html#...](https://www.oilshell.org/blog/2020/01/alias-and-prompt.html#the-alias-problem)

[2]
[https://www.oilshell.org/blog/2017/04/22.html](https://www.oilshell.org/blog/2017/04/22.html)

[3] [https://www.oilshell.org/blog/2019/01/18.html#criteria-for-t...](https://www.oilshell.org/blog/2019/01/18.html#criteria-for-the-osh-language-definition-analogy-to-posix)

~~~
ddevault
As someone who has also worked on implementing a POSIX shell, and has read and
implemented large swaths of POSIX besides, in my experience the main thing it
lacks is edge cases. Each of these could normally be corrected with as little
as one sentence, and by my intuition no more than 100,000 additional words
would be necessary to fully specify them. That doesn't meaningfully move the
dial on any of these numbers.

>I can make a website that reasonably works well on all browsers with
confidence. At least on the level where one can develop a full SPA based on
one browser and then try & test it on others. Most browser-specific issues
don't exist and when they do, it's usually a one-liner.

On Chrome and Firefox, maybe. How about IE or Safari? How about Netsurf or
Lynx or w3m? SourceHut (of which I am the founder and lead developer) works in
all of those browsers, by the way.

>That's interesting, what was it?

Believe it or not, it was in how they interpreted margins. This was the fix:

[https://git.sr.ht/~sircmpwn/core.sr.ht/commit/a4a290fbeea23c...](https://git.sr.ht/~sircmpwn/core.sr.ht/commit/a4a290fbeea23cf76ea6bb58798c6763d7bf6b9a)

todo.sr.ht rendered differently on Firefox and Chrome before this change. I
didn't dig into it enough to make any bug reports, so connecting the dots is
up to you if you're interested.

I hit another browser bug a while ago because Chrome arbitrarily decided that
the maximum number of elements in a CSS grid is 1,000.

~~~
pcr910303
It's a bit too late to reply, but...

> These would normally be corrected with as much as one sentence, and by my
> intuition I would expect no more than 100,000 additional words would be
> necessary to fully specify these edge cases. It doesn't meaningfully move
> the dial on any of these numbers.

I expect it would take much more than that to specify exactly what bash, dash,
and a lot of other shells should do in a cross-platform manner, consolidate
the language into one, and allow backwards compatibility (just like how the
web APIs are based on the consolidation efforts of IE, Netscape, and a bunch
of other browsers in the 90s).

> On Chrome and Firefox, maybe. How about IE or Safari? How about Netsurf or
> Lynx or w3m? SourceHut (of which I am the founder and lead developer) works
> in all of those browsers, by the way.

Chrome, Firefox, and Safari are pretty easy to target all at once, as they
have the usual modern features.

Considering Lynx, Netsurf, or w3m as modern web browsers invalidates your
point, since I'm pretty sure they don't implement a lot of the standards; does
w3m implement flexbox or CSS grid, for example?

I can't say this with confidence, but I think it wouldn't be that hard (not
that it's easy) to implement a web browser with the complexity of w3m.

> Believe it or not, it was in how they interpreted margins. This was the fix:

Hmm, my intuition says that's an SCSS-compilation problem where the rules got
mixed up... but if it wasn't, that would definitely be at least one browser
bug.

------
blakesterz
I think there's some good points in there, but I'm not sure about this one:

"Firefox is filling up with ads, tracking, and mandatory plugins."

I feel like I follow these issues pretty closely, and I don't think that's
accurate. Did I miss something? I know they've made at least a few bad moves,
but in general they do the right thing, and they have walked back some of the
bad ones. It's entirely possible I'm missing something, though.

~~~
ddevault
Ads:

[https://www.ghacks.net/2018/12/31/firefox-with-ads-on-new-ta...](https://www.ghacks.net/2018/12/31/firefox-with-ads-on-new-tab-page/)

[https://www.zdnet.com/article/firefox-60-will-show-sponsored...](https://www.zdnet.com/article/firefox-60-will-show-sponsored-stories-but-you-can-disable-them-says-mozilla/)

Tracking:

[https://gist.github.com/0XDE57/fbd302cef7693e62c769](https://gist.github.com/0XDE57/fbd302cef7693e62c769)

[https://www.zdnet.com/article/firefox-tests-cliqz-engine-whi...](https://www.zdnet.com/article/firefox-tests-cliqz-engine-which-slurps-user-browsing-data/) (ads, too)

Mandatory plugins:

[https://news.ycombinator.com/item?id=9667809](https://news.ycombinator.com/item?id=9667809)

There are more cases of each, but these are the ones I thought of off-hand.
Setting up Firefox today still requires you to manually go to about:config and
turn off a whole bunch of crap. A stock install of Firefox has ads and sends
telemetry, searches, and more to both third- and first-party network services.

~~~
pornel
These are totally bullshit things blown out of proportion: trials that never
went live, and/or that weren't nearly as bad as the uproar made them out to
be.

Come on, the Pocket hysteria? It's a bit of JS and one button you can turn off
with two clicks. Pocket is now owned by Mozilla. It's a Firefox feature now,
and no more of a "mandatory plugin" than Sync or the Add-on Store are. I
thought you would at least be flipping out about EME, which is third-party
code and actually a plug-in.

Your central point is valid. There's no need to embellish it with clickbait
backed by sources that are clickbait themselves.

------
Carpetsmoker
I already posted this on Lobsters, but the "1,217 specifications totalling 114
million words" is pretty off-base. To copy my comment from there:

This calculation is wrong. For example, searching for HTML[1] reveals
different versions of the same document, various informative notes ("HTML5
Differences from HTML4", "HTML/XML Task Force Report"), things no one uses
like XForms, and other documents that really shouldn't be counted.

I looked at the full URL list[2] and it includes things like the HTML 3.2
specification from 1997[3]. A quick spot-check reveals many URLs that
shouldn't be counted.

I'm reminded of the time in high school when I mixed up some numbers in a
calculation and ended up with a doorbell drawing 10A. The teacher, quite
rightfully, berated me for blindly trusting the result of my calculations
without looking at the result and judging whether it was vaguely in the right
ballpark. This is the same: 114 million words is a ridiculously large result,
and the author should have known this and investigated further to ensure the
number was correct (it's not) before writing about it.

I wouldn't be surprised if the actual word count is two orders of magnitude
smaller; perhaps more.

[1]: [https://www.w3.org/TR/?title=html](https://www.w3.org/TR/?title=html)

[2]:
[https://paste.sr.ht/~sircmpwn/fd74cf95eb6c1740f4af3aaaf2a0f4...](https://paste.sr.ht/~sircmpwn/fd74cf95eb6c1740f4af3aaaf2a0f4d22ce22719)

[3]: [https://www.w3.org/TR/2018/SPSD-html32-20180315/](https://www.w3.org/TR/2018/SPSD-html32-20180315/)

~~~
ddevault
Those specifications from 1997 are still relevant. That's why we end up with
things like quirks mode:

[https://quirks.spec.whatwg.org/](https://quirks.spec.whatwg.org/)

And on the subject of WHATWG, all of their specs were excluded from the word
count, as were the JavaScript spec and nearly all of the JavaScript APIs
browsers are implementing. Things omitted include WebGL, Web Bluetooth and Web
USB, the native filesystem API, WebXR, Speech APIs... and the informative
notes you mentioned are (1) a rounding error when compared to the specs, and
(2) also included in the word counts for POSIX, C11, and so on.

 _And_ the word count I gave in the article is half of the real count I ended
up with, and I hadn't even finished downloading all of the specs under
consideration.

My full write-up on the methodology is here:

[https://paste.sr.ht/~sircmpwn/13c1951014a256e9f551296a129bf6...](https://paste.sr.ht/~sircmpwn/13c1951014a256e9f551296a129bf6d10e9303dc)

Anyone who thinks that the web isn't hundreds or thousands of times more
complicated than almost anything else out there is lying to themselves.

~~~
joshuamorton
I just poked through 50+ of the first ~4000 things in that list. Of them,
every single one was either

\- Unrelated to an actual web standard (such as a guide for authors of web
pages (www.w3.org/TR/html5-author/dimension-attributes.html), or a guide on
how to create a PDF for a W3C event ([https://www.w3.org/TR/2016/NOTE-WCAG20-TECHS-20160317/pdf_no...](https://www.w3.org/TR/2016/NOTE-WCAG20-TECHS-20160317/pdf_notes.html)))

\- a raw XML file (www.w3.org/TR/2012/WD-its20-20121023/examples/xml/EX-locale-filter-selector-2.xml)

\- a diff (www.w3.org/TR/prov-dm/diff.html)

\- an error (www.w3.org/TR/unicode-xml/index.html)

None were actual signal relating to the web's specifications.

~~~
ddevault
>such as a guide for authors of web pages

I explained why I included these in my methodology doc. They felt this
necessary to document, so I included it. The same is true of other specs I
compared against, such as POSIX.

>a raw xml file

This XML file is 18 words according to my measurement. The total words I claim
in my article are 113 _million_. Do you really think that this changes
anything?

>a diff

Okay, I should have caught that. There are ~700 of these and I am computing
the difference these make to the word count now. I expect it will be within
the >100M word margin I left on these figures. [Edit: 28M words from diffs,
which eats up about 25% of the 100M word budget I allocated for errors]

>an error

123 words. See my XML comment.

Out of curiosity, is it your intention to also look for flaws in my approach
to word-counting the non-web specs I compared against?

~~~
joshuamorton
I don't think you fully appreciated my comment. I've now looked at 100+
documents from that list. Not a single one has had actual content related to a
web standard.

I was finally able to find one, by looking elsewhere:
[https://www.w3.org/TR/css-grid-1/](https://www.w3.org/TR/css-grid-1/). You
include 8 copies of the css-grid-1 standard in your count. So of the small
fraction of documents that are _actually_ web standards, you're miscounting by
an order of magnitude. In other words, I expect that the actual count here is
off by 2 orders of magnitude and that the real size of the "relevant" web
standard is 1-2 million words, and the rest is just bad measurement.

> Out of curiosity, is it your intention to also look for flaws in my approach
> to word-counting the non-web specs I compared against?

No, I think pointing out a 2-3 order of magnitude mistake in your methodology
speaks for itself.

> They felt this necessary to document, so I included it. The same is true of
> other specs I compared against, such as POSIX.

The POSIX spec includes examples and docs, yes. But so do the actual web specs
(see again the CSS grid spec). What the POSIX spec doesn't include is a
parallel version of the docs meant entirely for POSIX users, wholly irrelevant
to people building a POSIX shell. Again, you're including _an analysis of
which PDF readers to use to test the accessibility of the PDF you're writing_
in an analysis of web standards.

Edit:

For an even more egregious example, [https://www.w3.org/TR/2013/CR-xpath-datamodel-30-20130108/](https://www.w3.org/TR/2013/CR-xpath-datamodel-30-20130108/)
is one of _eighty_ versions of the xpath datamodel spec that you count, and
xpath _isn't_ even an officially supported browser thing.

~~~
ddevault
I think I was extremely generous with my margins and went to lengths to be
selective with my inclusion criteria; I didn't even catalogue everything under
those criteria, and I omitted huge swaths of web standards on the basis that
(1) it was more forgiving to the W3C and (2) they would be difficult to
compare on the same terms. At most you've given a credible suggestion that the
figure might be off by _an_ order of magnitude, but even if it were, it
changes the conclusions very little. I explained all of that and more in my
methodology document, and I stand by it. If you want to take the pains to come
up with an objective measure yourself and provide a similar level of
justification, I'm prepared to defer to your results, but not when all you
have is anecdotes from vaguely scanning through my dataset looking for
problems to cherry-pick.

~~~
joshuamorton
No, I've given credible reasons for two orders of magnitude:

1\. The majority of the documents you are including _are not_ reasonably
considered web standards.

2\. Of those that are, you are counting each one 5-50 times.

That's two orders of magnitude.

All your analysis has proven is that it's (ironically) difficult to machine-
parse the W3C data, and that you did so in a way that justified your
preconceptions.

------
shadowgovt
Web browsers make more sense when you think of them as emulators for a novel
architecture / OS built atop an existing OS.

It is "impossible" to build a new browser in the sense that it's "impossible"
to build a new Windows 10. Of course, a key difference is that the web is
specified as an open standard, so someone can at least try.

~~~
angleofrepose
Why is it impossible to build a new Windows 10? Shouldn't that be an area of
great interest to Microsoft? What other endeavor would give Microsoft as great
a leap forward as a ground up rebuild of the operating system?

~~~
linguae
It's not impossible, but it will take an enormous amount of effort to
replicate. Consider the ReactOS project, which strives for compatibility with
Windows Server 2003. I use it occasionally in a VM and it is roughly on par
with Windows 2000/XP, but this project has been going on for over two decades
(granted, the target has moved over the years; originally it targeted Windows
NT 4.0 if I remember correctly). It will take years of work for ReactOS to
reach compatibility with Windows 10.

------
nepeckman
There are a lot of valid critiques to be made about the web as a development
platform. But the reality of the situation is there was a need for an OS
agnostic, installation free, reasonably sandboxed application platform. A
generation of developers turned a document viewer into that application
platform, and that's the platform that has stuck with non technical users. I
think developer energy is better spent improving performance, keeping the web
free, and ensuring Google doesn't own the platform.

------
stevage
>Firefox is filling up with ads, tracking, and mandatory plugins.

What's the basis of this claim? I use FF as my default browser, and I have not
experienced anything like this?

~~~
jsjddbbwj
I think it contains recommended articles, i.e. ads, by default on the home
page. They also ran a campaign for Booking.com: [https://www.cyberciti.biz/web-
developer/firefox-is-now-placi...](https://www.cyberciti.biz/web-
developer/firefox-is-now-placing-ads-and-here-is-how-to-disable-it/)

Telemetry = tracking.

Not sure about what he means regarding plugins.

~~~
SanchoPanda
Pocket support is implemented as a kind of extension, to my understanding.

for example the about:config setting wording is
`extensions.pocket.RelevantFeatureHere`

~~~
Wowfunhappy
One built into the browser that can only be disabled by going to a hidden
configuration page that pops up a big warning not to change anything.

I'm really glad it can be turned off, but I also wish it wasn't there. I have
trouble wholeheartedly recommending Firefox to others, because the default
experience (without about:config tweaks) feels messy.

------
gritzko
I recently had similar thoughts on Unicode:
[http://replicated.cc/concepts/unicode](http://replicated.cc/concepts/unicode)
Most of its complexity comes from its least-useful features.

We must find some way to roll back that spiralling complexity. Or maybe, like
in the old days, burn it all and start anew?

~~~
Jasper_
Keep in mind that the 16-bit limitation relied on the Han unification, an
incredibly controversial project where related languages from three Asian
cultures all got butchered in the process. It was culturally insensitive and
limited the range of Unicode, and is a major reason Unicode is actually rare
in these locales: China still majorly uses GB, and Japan Shift-JIS, and Taiwan
Big5.

The experience was one of the big reasons Unicode 2.0 raised the limit, as
they never wanted to go through that process again.

~~~
zozbot234
16-bit is also a non-starter if you want to include historical or rare CJK
characters, historical/ancient scripts, symbol sets, etc. So a 32-bit encoding
does make a lot of sense.

~~~
AnIdiotOnTheNet
I don't know that I've ever heard a good reason _why_ we need to have
historical/ancient scripts, symbol sets, etc. in something like Unicode.

~~~
Tomte
Scholarly publications on the web.

~~~
gritzko
The only extinct language used in scholarly publication is Latin, AFAIK.

~~~
SahAssar
Ah, so nobody is doing any research on older non-european cultures then? No
archeological or linguistic or other kinds of research? Nothing on African,
Asian, American cultures?

~~~
gritzko
I mean, a corpus of researchers writing articles in an extinct language,
reading them like that, googling in the language? The Vatican is the only
example
I can think of.

~~~
Tomte
No. They are writing about those languages and need to cite the original. Show
phrases in that language and then comment on them in English.

------
eitland
I've started advocating for a subset of html5 that

\- drops all legacy features that aren't needed to provide documents anymore

\- provide sane defaults

\- a choice between

\- 1) no JS

\- 2) JS based on a small number of vetted packages for stuff like
autocomplete etc

No DRM. No JS _frameworks_. Just plain old HTML goodness plus more modern
goodness (autocomplete) on top.

In my view there's even room for ads as long as they are injected server side
and don't report back anything.

Hopefully this could be so much faster that it could become a hit in tech and
academic circles.
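For the autocomplete case in option (2), simple uses may not even need vetted JS: standard HTML already ships a native autocomplete dropdown via `<datalist>`. A minimal sketch (field and option names are illustrative):

```html
<!-- Native autocomplete with zero JavaScript: the browser filters the
     option list as the user types. -->
<label for="lang">Language:</label>
<input id="lang" name="lang" list="lang-options">
<datalist id="lang-options">
  <option value="C"></option>
  <option value="Rust"></option>
  <option value="JavaScript"></option>
</datalist>
```

This covers the static case only; fetching suggestions from a server is where the small vetted JS package would come in.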

~~~
krapp
The problem I have whenever I see arguments along these lines is that nothing
is stopping anyone from _writing simple HTML_ using that "subset" if they want
to, so a completely separate simpler HTML isn't necessary. If you don't want
JS, don't use JS. If you want to vet packages, start a service that vets Node
packages. That last part would actually probably be really useful.

~~~
ken
That sounds backwards to me. It's _corporations_ which are generating the vast
majority of the HTML in the world. Individuals can stop writing JS (and for
content-first websites, I have), but that won't stop us from having to see it.

Turning off JS entirely will kill a lot of common webpages, so that's not
really an option any more. Instead, everybody runs a content blocker, to try
to battle with corporations over what dumb junk we're exposed to, or exposed
by.

Corporations are also writing most of the popular web browsers. This isn't a
fair fight.

~~~
zzo38computer
I do disable scripts. I find that some web pages will still work if you delete
or hide an element that covers up everything else (and the script would
presumably hide or delete), or unhide an element that is normally hidden (that
the script would normally unhide). This doesn't always work, but sometimes it
does.
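That manual trick amounts to a heuristic: elements that are `position: fixed` and cover most of the viewport are probably overlays. A toy sketch of the heuristic over plain element descriptors (in a real page you would read `getComputedStyle(el)` and set `el.style.display`; all names here are illustrative):

```javascript
// Treat any fixed-position element covering most of the viewport as a
// likely overlay and mark it hidden. Elements are plain descriptors here;
// in a browser you would inspect and mutate real DOM nodes instead.
function hideLikelyOverlays(elements, viewport) {
  const viewportArea = viewport.width * viewport.height;
  const hidden = [];
  for (const el of elements) {
    const coverage = (el.width * el.height) / viewportArea;
    if (el.position === "fixed" && coverage > 0.8) {
      el.display = "none"; // real DOM: el.style.display = "none"
      hidden.push(el);
    }
  }
  return hidden;
}
```

A small nav bar is also `fixed` but covers little of the viewport, so the coverage threshold leaves it alone; that is why the heuristic "sometimes works" rather than always.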

------
superkuh
People like to talk about how web applications are more secure than running
something locally but that is only because web applications, for now, are
significantly less powerful and have less hardware access than a native
application running on your OS.

But it's obvious to anyone that that is rapidly changing. As people try to do
more and more with web apps the corporations now in charge of web standards
implement more raw access to system hardware and all the benefits of being in
a browser go away while none of the downsides do.

~~~
shadowgovt
I think it's more than that. Web applications run atop a user-agent-enforced
security model that has been bought with decades of educational experiences.

Building one's own native app that direct-binds to the networking layer,
passes user credentials around without the paid-for-in-blood browser API,
etc., is going to re-invent the wheel on a lot of security problems that
browsers had to solve (and history has shown developers love to _ignore_ given
the opportunity). It isn't just whether the web app will be a better user
experience; it's also whether the app will allow, say, exploits delivered via
the server-side state (introduced to your web app via user-modifiable content
on your site) to sniff the user's password out of the pastebuffer or some such
nonsense.

------
astrobe_
> Because of the monopoly created by the insurmountable task of building a
> competitive alternative

And if you go for a competitive alternative based on a more sane stack than
HTML/CSS/JS/..., then you cannot compete with the content. Unless your new
tech stack is really really really really better for current usage.

It probably means designing it for mobile first: small footprint, low energy
consumption and a lot of resilience to network hiccups. Plus an integrated
monetization scheme that doesn't waste half of the bandwidth for ads or half
the battery capacity to try to circumvent ad blockers - all that without
selling the color of the user's pants to anyone wanting to buy it.

I know, utopia, utopia.

------
gok
The over-complication is recursive too. CSS was already too complicated when
it was initially written, and the main specification got so complicated itself
that it has now been split up into a dozen internally-overcomplicated
submodules. JavaScript went from a language that could be implemented in a
weekend to a C++-like multiparadigm monstrosity, somehow without fixing most
of the serious "WTF happened here" problems.

~~~
lioeters
> The over-complication is recursive

The phrase "fractal of bad design" comes to mind. Although it was originally
used for PHP, this anti-design-pattern seems to rear its head almost
inevitably with popular and long-lived languages/systems (which I suppose
includes C++, though I'm not qualified to judge).

I've heard people pinning hopes on WebAssembly. It does seem possible that
eventually it could allow new languages to replace CSS and JS, maybe even
support the development of new kinds of applications that replace (the current
paradigm of) the browser as a cross-platform VM.

~~~
uk_programmer
JavaScript is getting simpler in a lot of ways. Things that are confusing to
developers of other languages, e.g. function scope, can be avoided by telling
people to use let instead.

Also a lot of the things I wanted 5-10 years ago that I had to work around are
now being included. For example, there is a decent-ish HTTP request API.
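The function-scope point is the classic closure-in-a-loop pitfall; a small sketch of how `let` sidesteps it:

```javascript
// `var` is function-scoped, so every callback captures the same variable
// and sees its final value; `let` creates a fresh binding per iteration.
function withVar() {
  const fns = [];
  for (var i = 0; i < 3; i++) fns.push(() => i);
  return fns.map((f) => f());
}

function withLet() {
  const fns = [];
  for (let i = 0; i < 3; i++) fns.push(() => i);
  return fns.map((f) => f());
}
// withVar() → [3, 3, 3]; withLet() → [0, 1, 2]
```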

I don't like web assembly. I think it is actually worse in a lot of ways than
the huge JS frameworks that people think that it will replace.

------
combatentropy
> I conclude that it is impossible to build a new web browser. The complexity
> of the web is obscene. The creation of a new web browser would be comparable
> in effort to the Apollo program or the Manhattan project.

True. But if the web did not adopt those features, then you would have to
install more native applications to fill the void.

If you make a simpler web browser, one that is doable by a newcomer, with the
subset of features that you deem appropriate for the web, nothing is stopping
you. You could still visit most websites that are content only. But for some
of the web applications you would have to install the native version of those
applications to fill the void.

Either way you have the same outcome. You have a simple web browser that can
be made by newcomers, but you have this other application platform (Windows,
Mac, Android, Linux) where "the complexity is obscene, the creation of a new
operating system would be comparable to the Apollo program or the Manhattan
project."

------
bo1024
Agree very strongly. The amount of innovation we’ve lost because of the high
barrier to entry is also staggering to try to conceive.

~~~
shadowgovt
Can you give an example of what innovation we've lost?

~~~
bo1024
The first thing that comes to my mind is accessibility. If browsers were
simpler to create and the web were simpler, people could much more easily
develop accessibility tools.

I think the privacy and tracker-blocking crowd would have a lot more options.
You could also have browsers designed for low-power or weak-CPU environments.
Better automatic limits on what code runs and when.

Next category I picture are browsers with really customized interfaces and
displays. Different approaches to navigation, hotkeys. Tiling sites across the
screen.

Browser functionality could be built into more devices and contexts (e.g.
inside another application).

I know none of these is "staggering" but I'm one person brainstorming for five
minutes. If enthusiasts and hobbyists all over the world could more easily
tinker and try things out, who knows what ideas would pop up.

~~~
shadowgovt
So curtailing the complexity of sites so that individual users can make
heterogeneous deep content.

That doesn't actually solve the problem. Excluding webgl so that someone can
write an accessibility layer doesn't mean we have an accessible 3D site; it
just means we can't do 3D sites.

~~~
bo1024
That's a tradeoff worth discussing. I think there should be an entirely
different cross-platform layer for interactive and intense SPAs like Google
Docs and 3D sites. But making it possible for 0.00001% of sites to have 3D
comes at a huge cost.

~~~
shadowgovt
Making it possible for 3% of users to access a site via accessibility APIs
comes at a huge cost, but it's done anyway because it's a good idea.

The thing about cutting features because they aren't ones that fit one's use
case is that there's always someone who doesn't need that use case. "Not a lot
of people use it so we should cut it" isn't a great criterion for a platform.

------
austincheney
Of all the specifications, words, and complexity this article mentions there
are only 3 really costly areas that the web browsers devoted a good portion of
their net worth to solve:

* Presentation (generally and CSS specifically)

* JavaScript JIT (and other performance pipelines, DOM efficiency, caching architecture, and so forth)

* and accessibility

I know there is much more to the web than that and notice that I did not
mention security. Those 3 things took years and lots of research and failure
to get to today's status quo. Everything else is comparatively trivial on the
basis of man-hours and money spent. If you can really nail
those three quickly and cheaply then everything else is trivial from a
perspective of money and time.

------
theandrewbailey
I can see a 'hipster' web browser with a reduced feature set coming along at
some point, and all these heavy monstrous JS apps masquerading as web pages
will fall out of style.

~~~
phailhaus
Highly doubt that. The vast majority of users do not notice, or care.

~~~
buckminster
Plenty of users notice that the web is slow, and complain about it.
Unfortunately, they have no idea why so they blame their mobile or broadband
provider.

------
giancarlostoro
Sometimes I dream of a Gopher type of web where you're allowed to make simple
input fields and SSL is allowed. That's it. No JavaScript, just basic input /
output. I could imagine HN being on this simple web. Maybe some sort of markup
to support the upvoting capabilities of HN but that's about it.

I would even allow for ads since they would at best be banner images, not pop
ups, not things that track you around the internet, and what have you. As for
page style? Let the client decide, which ultimately means: let the end-user
decide what they're most comfortable with.

~~~
ciprian_craciun
Luckily Gopher is still alive, and in fact on the Gopher mailing list there
was a thread earlier this week about `gopher://` over TLS [1]; in fact you can
still use `gopher://` either natively (by using `lynx` or a number of Firefox
plugins), or through a HTTP gateway like [2] [3]. (There are countless servers
and clients out there.)

Also there is a newly developed Gopher alternative called Gemini [4] that uses
TLS by default, and supports natively as an alternative to HTML a Markdown-
based simplified format. (Given that Gemini and Gopher are much alike, most of
the content is also available over `gopher://`, and via an HTTP gateway it can
be accessed from a plain browser, like for example [5].)

[1] [https://lists.debian.org/gopher-
project/2020/03/msg00005.htm...](https://lists.debian.org/gopher-
project/2020/03/msg00005.html)

[2] [https://gopher.floodgap.com/gopher/](https://gopher.floodgap.com/gopher/)

[3]
[https://gopher.floodgap.com/gopher/gw](https://gopher.floodgap.com/gopher/gw)

[4] [https://gemini.circumlunar.space/](https://gemini.circumlunar.space/)

[5]
[https://gopher.floodgap.com/gopher/gw?gopher://zaibatsu.circ...](https://gopher.floodgap.com/gopher/gw?gopher://zaibatsu.circumlunar.space:70/1/~solderpunk/gemini/)

----

However if Gopher or Gemini is "just too much", one could be active in pushing
web publishers (and thus browser providers) into simplifying their web
presence by: disabling JavaScript and thus background fetches and workers,
cookies, fonts (thus including "icon fonts"), etc., forcing HTTPS and caching,
disabling anything except `GET` methods.

I've tried some of these myself, and unfortunately the experience is one of
two extremes:

* either everything just works, and it works flawlessly: the page loads instantly, no pesky cookie consent popups, no ads, and it's just a pleasure to browse that site; (most "small" blogs fall into this category;)

* or nothing works, and most of the time I get a "blank" page or just a paragraph "JavaScript is required to run this application"; (how hard is it to provide basic HTML?) (and unfortunately even documentation sites fall into this category, like for example Golang's own documentation site...) (most of the time I just close that site, or if I really want to access it, I open it in a "normal" Firefox / Chrome profile;)

So, like with any civic right, everything is gained through participation: if
we want a simpler web, we need to actively start consuming a simpler web. :)
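For Firefox specifically, several of those knobs are plain about:config prefs and can be pinned in a `user.js` file in the profile directory. A sketch (these pref names exist in recent Firefox releases, but verify them against your version before relying on this):

```javascript
// user.js -- Firefox applies each pref at startup.
user_pref("javascript.enabled", false);              // no scripts at all
user_pref("network.cookie.cookieBehavior", 2);       // block all cookies
user_pref("browser.display.use_document_fonts", 0);  // ignore site fonts, incl. "icon fonts"
user_pref("dom.security.https_only_mode", true);     // force HTTPS
```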

------
Funes-
I'd happily do away with web browsers and use only native programs for each
site I usually visit. For instance, when I still used reddit, I did so through
a
fantastic command-line program called rtv [0]. I'd most definitely use
something similar for HN, or any other platform--Wikipedia, for example, or
ArtWiki, or MUBI. We certainly don't need a web browser as it is conceived
today.

[0] [https://github.com/michael-lazar/rtv](https://github.com/michael-
lazar/rtv). No longer maintained.

~~~
shadowgovt
So you'd happily do away with web browsers and use shell apps instead.

rtv is still a hosted app; it's hosted in a command-line interface with
command-line rules instead of an HTML-rendering interface with w3c
standardized rules. Take away the shell, and rtv won't run any more than a web
page "runs" without a browser to interpret it. But these things aren't quite
so different as one may imagine (the capability set of one is much broader;
the compatibility set of the other is probably much broader).

------
danbolt
I feel really torn about this issue. When I was a child, the web was much
simpler and similar to the ideal the author proposes. I enjoyed it a lot and
today appreciate a website (like HN!) that isn't an exploding mass of APIs and
functionality. I worry about users with browsers that are less feature-
complete or have less computing resources than I do. If computing resources
ever become a scarcity, I want someone to be able to use the internet with a
15-year-old computer if they need to. And that barely even touches on
accessibility issues.

At the same time, I love the vibrant opportunities to be creative with the
modern web. I recently made a submission[1] for the 7-Day Roguelike
competition and used WebGL, WebAudio, and other various APIs to make something
I'm really proud of. It's incredible that someone with little formal training
can create art and easily share it with others over the web.

Is there a balance here? I almost want webpages to have a "simple web" view
and a "rich web" option that browsers both support.

[1] [https://danbolt.itch.io/nayr-odyssey](https://danbolt.itch.io/nayr-
odyssey)

------
tsegratis
Completely agree

We can add to that the churn of websites trying to remain in sync with browser
'features'

------
classified
> it is impossible to build a new web browser.

Mission accomplished. The W3C constituents have created facts on the ground
with their own browsers, and then burned the ladder behind them. The classic
move to stop others from gaining power the same way you did.

~~~
shadowgovt
Or the complexity is irreducible and making something on-par with browsers
that have had decades of development will require real effort.

It isn't "ladder burning" that countries already have electrical grids so it's
hard to build a competing electrical grid.

------
6510
If I'm allowed to innovate here: how about a DNS server only for protocols?

Since the list is short atm, the browser or OS can periodically download the
index and register each known protocol to a dummy handler that simply displays
what one can do with it.

Monetization (if needed) can be done by auctioning commercial slots in the
list. (Marking clearly which are free and open source and which are commercial
efforts to support that protocol.) Each entry can have an explanatory link to
a world wide web document, PDF, text file, DOCX, XLSX, PPTX, etc (go nuts)
explaining these new and exciting times.

Ideally the software to work with the protocol can also be installed from or
by the dummy on a whim. If multiple [say] video:// handlers are installed, the
dummy can be configured to pick a default or present a menu every time
thereafter.

Then we can end the days of circular finger pointing where not explaining a
protocol is always someone else's fault and we silently agree to have error
pages (or worse search results) if one tries to open a link like ipfs://
gopher:// news:// nntp:// etc

If all members of the collective of IT nerds know exactly what the user is
trying to do or what desired behavior looks like, we can't be hiding behind
_"I dunno!?"_ type error pages. It's just too embarrassing. It is our
responsibility to teach grandma how to use magnet URIs if she needs them.

~~~
AgentME
What platform would the protocol handlers be built for? Would they have full
access to the system like most native programs, or run through a sandbox?

I think it would be ideal if the handlers were able to run on a platform that
was defined by open standards and well-sandboxed by default... the web is a
good example of this.

~~~
6510
yes

~~~
AgentME
I have to admit I was being facetious; my point was that anything solving this
problem is going to have the same scope as web browsers and is more directly
solved by them. If you imagine your setup was in-place, and pretty much
everyone defined their own protocol with their own app, then that's roughly
equivalent to the web as it is now, except also that the web has already done
the hard work of establishing the open standards and being cross-platform.
It's not clear to me what benefits introducing the things in your post would
add to the web.

~~~
6510
Web browsers have an ideological position that isn't compatible. Mozilla
doesn't want to promote anything.

While we can create a system that supports n protocols, we really only need a
handful of new things at a time.

The benefit is fetching stuff over ipfs, onion, gopher, freenet, zeronet, dat,
blockstack, news and even irc

Until we can visit ipfs://example.com after a clean install, the whole project
borders on a pipe dream.

I argue that if you can't see the benefits, we've done a terrible job
explaining them to you. It is sort of a chicken-and-egg problem. Why would I
put a news:// link on my website if you can't do anything with it?

The expected behavior is a prompt asking if you want to install a news reader
and register with a news server.

[https://en.wikipedia.org/wiki/List_of_Usenet_newsreaders#Fre...](https://en.wikipedia.org/wiki/List_of_Usenet_newsreaders#Free/Open-
source_software)

[https://www.eternal-september.org](https://www.eternal-september.org)

Don't expect grandma to figure out that stuff by herself. I might as well not
post news:// links. She would just be confused.
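Part of this expected behavior already exists: `navigator.registerProtocolHandler` lets a site claim schemes from a small browser safelist (which includes `news:`, `nntp:`, `irc:`, and `magnet:`), substituting the clicked URI into a `%s` template. A sketch (the reader host is made up; the substitution step is factored out so it can run outside a browser):

```javascript
// Expand a registerProtocolHandler-style "%s" template with a clicked URI --
// the same substitution the browser performs when routing a news:// link to
// a registered web handler. (reader.example is a made-up host.)
function handlerUrlFor(template, uri) {
  return template.replace("%s", encodeURIComponent(uri));
}

// In a browser, a newsreader site would claim the scheme like this:
//   navigator.registerProtocolHandler("news", "https://reader.example/open?uri=%s");
// After that, clicking a news:// link navigates to:
handlerUrlFor("https://reader.example/open?uri=%s",
              "news://news.eternal-september.org/comp.lang.c");
```

The gap 6510 describes remains, though: registration only happens after the user has already found and visited the handler's site, rather than being offered on first contact with an unknown scheme.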

------
pfraze
There may be a market for a "microkernel" redesign of browsers in the next 5
years. OP's point is right that the current design is impossible for new
entrants to implement. That alone probably won't create a market for something
new (in no small part because the 2 major engines are FOSS) but if we hit an
intersection of new technologies along with a need to rapidly adapt the Web to
new computing platforms (eg AR/VR) then it could certainly happen.

~~~
shadowgovt
The space seems ripe for someone to come along and re-implement the standard
w3c model as an app running atop a more flexible stratum (possibly wasm?).

------
carapace
I sometimes browse with Dillo (
[https://www.dillo.org/](https://www.dillo.org/) ) and although there are a
lot of sites that are broken, there are many that render just fine. (E.g.
[http://www.mathsinstruments.me.uk/page66.html](http://www.mathsinstruments.me.uk/page66.html)
)

Dillo finishes starting before I finish clicking on its icon.

(As in, it loads and displays itself all in the moment between the button-
pressed and button-released events.)

\- - - -

Check out Effbot's "Generating Tkinter User Interfaces from XML"
[https://www.effbot.org/zone/element-
tkinter.htm](https://www.effbot.org/zone/element-tkinter.htm)

There's a gulf between that and XUL, eh? (
[https://en.wikipedia.org/wiki/XUL](https://en.wikipedia.org/wiki/XUL) )

"The Wheel of Reincarnation" has been turning for a long time now:
[http://cva.stanford.edu/classes/cs99s/papers/myer-
sutherland...](http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland-
design-of-display-processors.pdf)

X Windows!? NeWS? PostScript and PDF? (PDF has JS in it now!) I'm sure we could
fill a book just with languages and frameworks for GUI, eh? VPRI STEPS program
created Nile, etc...

Display problem, Y U NO solved yet?

Why hasn't something ( _better than HTML /CSS/JS soups_) congealed in this
area? Is it really so hard? Are we just running in circles?

------
dllthomas
The history of the web browser looks intriguingly like the history of the
terminal, although I'm not sure what lessons we can take from that.

~~~
afandian
The shiny new macs being advertised today will ship with a terminal that is
compatible with standards established 45 years ago. That's quite a humbling
thing to realise.

I'm sure the various character sets, error correction and parity, control
codes etc constituted standards hell for terminals back in the day. But a
solid set of conventions seems to have survived.

I'm not sure how much of HTML will survive in 45 years, and with what
historical 'depth of field'. Is it more likely to contain <marquee> or web
components?

~~~
wrnr
Yes, funny fact: when debugging a problem with a new UI library, I found the
Cocoa API for handling keyboard shortcuts does not support Cmd-. because that
was the original signal to terminate the process (like Ctrl-C). The solution
involves calling a bunch of APIs just to listen for the signal.

------
Chris2048
Here's a feature (technology) I wish more programming languages/ecosystems
had: Gaoling (sandboxing)

The unique thing about JS is the fact that it's locked down, and further
permissions need user auth. Android API has a permissions model too.

Any browser needs to run _some_ kind of code, but in a safe manner, so we need
watertight code gaols. Outside the above-mentioned, it's not so easy.

~~~
contextfree
Windows has a similar permissions model (in AppContainer/UWP)

------
kilian
A rendering engine is not a browser.

Plenty of people are building browsers, yours truly included, but they're not
building their own rendering engine, indeed for the reasons outlined in this
article.

------
PinkMilkshake
I'm optimistic.

I think browsers will continue adding features until they become the
inevitable end result, a portable operating system. Once browsers reach this
point it will probably slow down and the next goal will be pulling it to
pieces and making it modular so we can deploy whatever subset of browser we
want for any given use case. And we will probably all be using some "distro"
of chromium.

------
JohnFen
I agree with the author's assessment of the state of the web and browsers
today. Both are far, far too expansive and risky for my comfort.

------
surround
Why are there no major Firefox (Gecko) based browsers? Microsoft, Opera,
Brave,
etc. have all gone Chromium.

~~~
mschuetz
Ever since I tried to modify and build Firefox on Windows, I'm not surprised
anymore that it isn't bigger. It's a mess to work with and we quickly gave up
on it.

------
_wldu
Even worse, we trust web browsers to manage passwords now too. This is
insanity icing on top of the complicated cake that no one can understand.

I feel as if we are hurtling faster and faster to our ultimate doom. And all
we seem to do is mash the gas pedal harder.

~~~
AgentME
We already trust our auth cookies and the actual content of the connections to
our browsers. Having them be more in charge of the auth process makes a lot of
sense. Especially given that modern browsers have a lot more security
hardening (sandboxed multi-process architecture) and professional review than
other applications.

------
azinman2
Silly rant. If everyone who didn’t like the world changing got their way,
we’d be back in agricultural times. There’s no weight to this argument other
than building a web browser is hard. It’s been hard for a long time well
before CSS3!

------
jancsika
Can we agree that your imagined non-reckless, bounded alternative must be able
to render the content of a document, where the content of a document includes
both wrapping text and a button-triggered DOS version of Prince of Persia?

[https://archive.org/details/msdos_Prince_of_Persia_1990](https://archive.org/details/msdos_Prince_of_Persia_1990)

If we agree on that then I'm very interested to hear about non-reckless
alternatives to our current bloated web.

If we don't agree, then I am not at all interested in hearing about Amish web.

------
miles-po
"I've got 12 browser windows with 50 tabs each. And one of the tabs is playing
a YouTube video while another is playing a movie on Netflix. And I've got a
Facebook tab or two. Bunch of social media. Then there are the browser
extensions. Gmail. A chat client or three. Basically running whatever quality
of code and image optimization the sites I visit happen to ship.

"Hey! Why is my browser using up 10GB of RAM all by itself? It's so bloated!"

tl;dr: don't just blame the browser

------
SimeVidas
What’s wrong with WebVR?

------
haburka
The premise of this article is false. It’s possible to build a new browser.
It’s just not worth it. If you can simply fork Chromium like Microsoft
recently did, why would any time-restricted project not go that route?

You can remove the google integrations fairly easily and still get patches
from the main project for core browser work.

My main point is that Microsoft could have written a new browser instead but
they saw it was more efficient to just fork one.

------
tcd
Is this a fundamental problem? I mean, all the code required to make your own
browser is Open Source - Microsoft can customize the Chromium engine as much
as they like, adding in their own patches and removing parts they dislike.

I mean, would you write your own networking stack? Probably not. You'd take an
existing tool and, if you really want, make it your own.

Most of humanity and what we do is based on those who made our lives easier
with the tools they built, and that changed the manufacturing process
altogether.

~~~
zzzcpan
Another reason why this is not a fundamental problem is that a _user_ agent
made for the user would have to violate and ignore most of the standards
anyway in order to honor user's interests. Reimplementing all those corporate
and ad tech driven standards is simply a waste of engineering, if anything it
will only hold back anyone attempting to do it and won't let them compete.

~~~
AgentME
What specific standards would a user-respecting browser not implement? I can't
really think of anything besides maybe EME and third-party cookies, though
it's not like those are really core features.

