So you want to build a browser engine (ocallahan.org)
225 points by zmodem 10 months ago | 129 comments



Engine diversity is really important for the ecosystem and continued innovation. While building something competitive with the big three engines is a monumental task, there's still a lot of value in building alternative engines to try new ideas, even if getting "the whole web" implemented is basically impossible.

For example:

- Servo vs Blink vs Cobalt do selector matching and style resolution very differently (see the sketch just below this list).

- WebKit has a selector JIT; no one else does.

- WebKit and Blink do layout super differently (with LayoutNG shipped).

- Firefox's HTML parser is written in Java and converted to C++ at build time.
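As a rough illustration of how much room there is to vary on the first point, here is a minimal sketch of the classic right-to-left approach to matching a descendant selector against an element (invented types, not Servo's, Blink's or Cobalt's actual code):

    #include <string>
    #include <vector>

    // Simplified element and selector types, purely for illustration.
    struct Element {
        std::string tag;
        std::vector<std::string> classes;
        Element* parent = nullptr;
        bool HasClass(const std::string& c) const {
            for (const auto& cls : classes)
                if (cls == c) return true;
            return false;
        }
    };

    struct SimpleSelector {
        std::string tag;         // empty means "any tag"
        std::string class_name;  // empty means "no class requirement"
    };

    bool MatchesCompound(const Element& e, const SimpleSelector& s) {
        if (!s.tag.empty() && e.tag != s.tag) return false;
        if (!s.class_name.empty() && !e.HasClass(s.class_name)) return false;
        return true;
    }

    // Matches a descendant selector like "article .warning em", stored
    // rightmost-first: start at the candidate element, then walk up the
    // ancestor chain looking for the remaining compounds.
    bool MatchesDescendantSelector(const Element& target,
                                   const std::vector<SimpleSelector>& rightmost_first) {
        if (rightmost_first.empty() || !MatchesCompound(target, rightmost_first[0]))
            return false;
        const Element* ancestor = target.parent;
        size_t i = 1;
        while (i < rightmost_first.size() && ancestor != nullptr) {
            if (MatchesCompound(*ancestor, rightmost_first[i])) ++i;
            ancestor = ancestor->parent;
        }
        return i == rightmost_first.size();
    }

Engines differ in how they batch this work, share computed styles, parallelize it (Servo) or JIT-compile it (WebKit), which is exactly the kind of divergence the list above is pointing at.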

The big value in folks trying to write new engines is finding these alternate paths. We can't depend on the big three for all the innovation.

That's what's great about Ladybird. Sure, it'll need a lot of sophisticated sandbox improvements to get us to a big-four situation, but its value is more likely to be in finding another road the other engines didn't travel across the huge universe of specs, content and features.


But none of those differences are especially interesting. They certainly don't matter much for the ecosystem, and have no impact on web developers. All the major engines have comparable performance. And the differences are dwarfed by what they must do the same, because the way web tech works basically only allows one implementation architecture.

If you wanted to innovate in browser tech you'd really need to leave the specs behind. Sciter is an example of such an engine. It exposes APIs that are useful for app devs but that regular browsers don't have; for example, it provides a native implementation of the virtual-DOM diffing approach React uses.
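To make that concrete, here is roughly what virtual-DOM reconciliation boils down to, as a minimal sketch (invented VNode type, not Sciter's or React's actual API):

    #include <algorithm>
    #include <string>
    #include <vector>

    struct VNode {
        std::string tag;
        std::string text;
        std::vector<VNode> children;
    };

    enum class PatchOp { Keep, UpdateText, ReplaceSubtree };

    // Core idea: the same element type at the same position gets patched in
    // place; a different type replaces the whole subtree. Real
    // implementations also handle keyed moves, insertions and removals.
    void Diff(const VNode& old_node, const VNode& new_node,
              std::vector<PatchOp>& patches) {
        if (old_node.tag != new_node.tag) {
            patches.push_back(PatchOp::ReplaceSubtree);
            return;
        }
        patches.push_back(old_node.text == new_node.text ? PatchOp::Keep
                                                         : PatchOp::UpdateText);
        const size_t common =
            std::min(old_node.children.size(), new_node.children.size());
        for (size_t i = 0; i < common; ++i)
            Diff(old_node.children[i], new_node.children[i], patches);
    }

Offering that natively, below the JS layer, is the kind of API a strictly spec-bound browser doesn't expose.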

Even better if your browser can go beyond HTML and enable fully different ways to do apps and documents. That's the only place where design decisions can start to have a big impact.


> If you wanted to innovate in browser tech you'd really need to leave the specs behind.

This is pretty much what the author is suggesting by the end of the article: "Instead of being a Web browser, you might want to initially try just being a faster, lighter or lower-power Electron or WebView. Then the Web compatibility barrier would be much less of an issue."


"- Firefox's html parser is written in Java and converted to C++ at build time."

This is surprising. Is it really true in the sense that the code that parses the HTML in a regular Firefox install was autoconverted when it was built?



Huh, I was curious about how that was done, and it looks like it's a specialized Java to C++ converter: https://github.com/validator/htmlparser/blob/master/translat....

Looking at the source, it seems pretty easy to find Java programs that it wouldn't translate correctly, but of course a fully general Java-to-C++ converter would be a huge undertaking. I guess with OK test coverage of the generated C++ code (which Firefox certainly has), editing the Java code remains doable.


Doesn't GWT transpile to JavaScript?


Yes, that sentence is stating two different things:

> The parser core compiles on Google Web Toolkit *and* can be automatically translated into C++


Then I don't understand why GWT was mentioned. Does compiling on GWT have anything to do with the finished C++ code?


Diversity alone is great but not quite enough. The alternative browsers need to actually see significant usage. Otherwise the developers and corporations will go "this browser has 99% market share so I can just ignore the others", giving the dominant browser makers enormous leverage over the others.


I'm writing one for fun: https://sr.ht/~bptato/chawan/

"Fixed-width text to a grid" makes things easier (sometimes), but I think it still qualifies.

On the article itself: it might be better to start with more... basic things than the optimizations it talks about:

* Cascading & layout must be cached and optimized - far from trivial (a minimal caching sketch follows this list). Layout is one of the hardest parts to get right at all; making it fast as well is another level... and I'm just talking caching, not even multi-threading.

* The "web platform" contains a very large amount of poorly thought out features harmful for user privacy and/or security. You can choose between convenience (opt out) and privacy (opt in), or try to balance the two as major browsers do. Both is often impossible.

* "Serialize the entire application state" sounds like the most complex thing you can come up with as a distinguishing feature. Much more low hanging fruit exists if you give up on writing a Chromium clone. e.g. a fun side project I've had is making my browser "protocol-agnostic", or adding a bunch small QOL stuff for myself. You can probably find new ideas easily once you start out.

* Older browsers are useful for inspiration too. The coolest features in mine are often just carbon copies of the same things from w3m, just a bit more polished. Reading about what historical engines did - even IE! - is often enlightening too. Sometimes it's just a smaller scale of what engines do now, and easier to implement. Other times it's a whole different approach.

* It's easy to get lost in the details - careful with perfectionism. As you start getting more components interacting, it will slowly become clear when the naive approach is wrong and how to improve it.
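Regarding the first bullet above, this is the kind of minimal caching scheme I mean (invented types, nobody's real engine): cached geometry plus dirty bits that propagate up the tree, so a change only forces re-layout of the affected subtrees.

    #include <vector>

    struct LayoutNode {
        std::vector<LayoutNode*> children;
        LayoutNode* parent = nullptr;
        bool needs_layout = true;
        float cached_width = 0;
        float cached_height = 0;

        // Called when style or content changes; marks ancestors so the next
        // layout pass descends into this subtree instead of skipping it.
        void MarkDirty() {
            for (LayoutNode* n = this; n != nullptr && !n->needs_layout; n = n->parent)
                n->needs_layout = true;
        }

        // Toy block layout: children stack vertically. Clean nodes laid out
        // at the same available width are reused from cache.
        void Layout(float available_width) {
            if (!needs_layout && available_width == cached_width) return;
            float height = 0;
            for (LayoutNode* child : children) {
                child->Layout(available_width);
                height += child->cached_height;
            }
            cached_width = available_width;
            cached_height = height;
            needs_layout = false;
        }
    };

Even this toy version hints at the hard part: knowing exactly which changes may keep which caches, once floats, percentages and intrinsic sizes get involved.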

Overall, it takes time, but is also fun and rewarding - you will learn a lot, even without replacing Chromium.[0] That said, if you want to learn how "modern" browsers work, getting involved with a "modern" engine itself is probably much more productive.

[0]: To write a Chromium replacement, you may have to reincarnate as awesomekling ;)


> Much more low hanging fruit exists if you give up on writing a Chromium clone.

One idea that’s crossed my mind (may never get around to trying it) is writing an engine that completely ignores everything that’s not modern HTML5 and CSS3. That’s still a lot but it seems like it’d cut down on the scope significantly. This engine wouldn’t be very useful as a browser but it’d probably be enough for displaying bespoke web content in an app or something like that.


It's not really clear what this even means. HTML5 and CSS3 aren't new versions of HTML and CSS that obsolete the prior stuff; they are extensions to what already existed.

So, for example, as far as I know, every web browser uses the HTML5 parsing algorithm for parsing HTML. This algorithm is very complicated, because it describes what parse tree to produce for any possible document. There's not, like, a separate HTML4 parsing algorithm; the HTML5 parsing algorithm replicates a lot of the complexities of pre-HTML5 parsing, but standardized.
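To give a feel for what "very complicated" means, here is a toy fragment in the spirit of the spec's tokenizer state machine: just two of its dozens of states, heavily simplified (the real algorithm is far more detailed, and the tree builder on top of it is where most of the complexity lives):

    #include <cctype>

    enum class State { Data, TagOpen, TagName /* ...dozens more in the spec */ };

    struct Tokenizer {
        State state = State::Data;

        void Consume(char c) {
            switch (state) {
            case State::Data:
                if (c == '<')
                    state = State::TagOpen;
                else
                    EmitCharacter(c);
                break;
            case State::TagOpen:
                if (std::isalpha(static_cast<unsigned char>(c))) {
                    StartTagToken(c);
                    state = State::TagName;
                } else if (c == '!') {
                    // Markup declaration open: comments, DOCTYPE, CDATA...
                } else {
                    // A parse error, but never a fatal one: emit '<' as text
                    // and reprocess the current character in the Data state.
                    EmitCharacter('<');
                    state = State::Data;
                    Consume(c);
                }
                break;
            case State::TagName:
                // ...and so on; the tree builder above this adds insertion
                // modes, foster parenting, the adoption agency algorithm, etc.
                break;
            }
        }

        void EmitCharacter(char) { /* hand off to the tree builder */ }
        void StartTagToken(char) { /* begin accumulating a tag token */ }
    };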

Similarly, in CSS 2 the biggest complexities are things like floats, inline layout, margin collapsing, and stacking contexts. (My PhD was most of a standards-compliant CSS 2.1 layout engine.) There's not a CSS inline layout algorithm (the thing that puts words into lines) other than the CSS 2 one, though CSS 3 does add a lot more features.
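The core of that inline algorithm, stripped of everything that makes it hard (bidi, floats, breaking rules, font metrics), is a greedy "fill the line, then wrap" loop; a minimal sketch with invented types:

    #include <string>
    #include <vector>

    struct Word { std::string text; float width = 0; };
    struct Line { std::vector<Word> words; float used = 0; };

    std::vector<Line> BreakIntoLines(const std::vector<Word>& words,
                                     float available_width, float space_width) {
        std::vector<Line> lines(1);
        for (const Word& w : words) {
            Line& line = lines.back();
            // Width this word would add, counting the space before it.
            float needed = line.words.empty() ? w.width : space_width + w.width;
            if (!line.words.empty() && line.used + needed > available_width) {
                lines.emplace_back();              // doesn't fit: start a new line
                lines.back().words.push_back(w);
                lines.back().used = w.width;
            } else {
                line.words.push_back(w);
                line.used += needed;
            }
        }
        return lines;
    }

Almost everything CSS 2.1 layers on top of this (floats shortening line boxes, vertical-align, white-space handling) is where the real work is.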

In other words: the browser doesn't have a separate code path to handle old content. Instead, CSS 3 contains layout algorithms that extend CSS 2 but they don't replace them.[1] Similarly HTML 5. There are obsolete features you could probably dump (obsolete image formats, obsolete codecs) or rarely-used features (eval, document.write) or edge cases that probably don't matter that much (margin collapsing) or features where maybe your user base doesn't need it (writing direction? floats?) but this is really not so different from what the article talks about: a WebView/Electron replacement, where you commit to supporting a known universe of pages instead of the whole web.

[1] Granted some features like floats have become a lot less necessary, since a lot of their use cases can now be replaced by flex-box.


It's a semi-noob illusion that focusing only on modern standards and practices would surely result in a leaner implementation. This goes all the way back to the big push for web standards in 1998—e.g. you can find Slashdot Q&As from the early 2000s where people bring up the idea of how much smaller the browser could be without quirks mode and IE compatibility and then get corrected about how much of the code base this stuff actually takes up.


The Ladybird folks frequently claim that directly implementing the modern versions of standards is a huge benefit.


It's a lot easier to implement the HTML5 parsing algorithm from the spec than trying to reverse engineer it yourself. That's a completely separate matter from the confused belief that "ignor[ing] everything that's not modern HTML5 and CSS3" would somehow "cut down on the scope significantly".


It's not a huge benefit because they are simpler; they are usually equivalent to the old standard. The benefit comes from newer standards being a lot more precise.

e.g. HTML4 did not specify what to do with invalid markup, which makes writing a conformant parser easier. In practice, many websites weren't valid HTML4, so you had to reverse engineer whatever the other parsers did with invalid markup.

HTML5 doesn't really have a formal grammar, it's specified as an imperative tokenizer and parser. It actually takes somewhat longer to implement than HTML4, but it doesn't suffer from compatibility issues.

OTOH there are new problems with the "modern" standards that old ones did not have:

* It's unversioned, updated pretty much daily; insert walking on water quote[0]. Random example: I added support for the (ancient) document.write API to my browser a few months ago. Recently I looked at the standard again, and it turns out my implementation is outdated, because there's a new type of object in the standard that must be special cased. Many similar cases; this particular one I don't mind, but it shows how hard it is to just stay fully compliant.

* It's gigantic and bloated, full of things nobody ever uses. If something gets into the standard, it typically won't ever get removed, and WHATWG has operated under this policy for more than a decade. So implementing it from start to end takes way too long, and the best strategy to get something useful is to "just" implement features that websites you use will need.

* Above is the WHATWG model, which applies for HTML and DOM, but not CSS. The W3C model (used in CSS) has versioning, but they broke it in a different way: there is no comprehensive "CSS 3 standard", just a bunch of modules haphazardly building on top of each other and CSS 2. Plus it's much less precise than the HTML standard, with basic parts left entirely unspecified for decades. See the table module[1], or things like this[2].

[0]: "Walking on water and developing software from a specification are easy if both are frozen." - Edward V Berard

[1]: https://drafts.csswg.org/css-tables-3/ still in "not ready for implementation" limbo after years.

[2]: https://github.com/w3c/csswg-drafts/issues/2452 - "resolved" by an unclear IRC log(?), but never specified to my knowledge. It's not an irrelevant edge case either, Wikipedia breaks if you get it wrong.


It worked for Wayland... sort of.


The underlying baked-in assumption is that Chromium as a rendering engine is better than Safari's. How is Chromium's rendering engine better than Safari's? Quality-wise they are about the same, and Safari actually consumes way less memory.


I could be wrong but AFAIK Safari still doesn't have site isolation, so security-wise it's considerably weaker.


I guess one of my points is that layout algorithms are not really part of the "most basic" decisions anymore. Replacing layout algorithms is actually a lot less disruptive to the engine architecture than switching to site isolation, say.


Fair; re-reading TFA, now I realize you explicitly instructed me to stop reading in the first paragraph :)

Trying to redeem myself with an on-topic question: isn't what you want more of a "refactoring of Blink" than "building a browser engine"? I would be surprised if a complete rewrite was really necessary for the features you want, since "saving state" already happens to some extent in all engines (even if it's just reloading from the cache) and I've seen reports about Gecko integrating multi-core cascade from Servo. What makes it hard to incrementally improve upon the current engines?


Indeed you can incrementally improve existing engines, and certainly that's what you would try to do if you wanted one of those specific features. But I didn't write the post because I want those features, I wrote it in the hope that it might be helpful to people who are already planning to write a competitive browser engine from scratch.

Yes, for almost every conceivable feature you'd be much better off adding it to an existing engine. Maybe you won't get buy-in from the core maintainers, so you'll have to maintain an out-of-tree branch, but that's still going to be much less work than doing your own engine.


I’m not sure that performance needs to constrain web engine design the way it traditionally has. The world has changed since the browser perf war started.

Just try disabling the JIT in a modern browser and you’ll find your UX is not really any worse.

Hardware has gotten faster since the current browser perf wars kicked off. So it might be that to have a meaningfully useful browser engine alternative, perf won’t be the top concern. Compatibility will though, just because of the sheer number of features a modern browser engine supports.

And if you don’t have JIT, then it’s not clear if Spectre is even a meaningful concern.


Those are interesting points, but disabling the JIT doesn't really change anything unless it means you can forgo site isolation, and that would be a very risky bet.

It may be that disabling the JIT is fine for users most of the time. However, that is also a tough call --- so easy for your competitors to beat you up with benchmarks, and there will be real use-cases (e.g. emulators) where it matters to some users.

And of course there's a lot more to perf than just the JIT. I barely mentioned JIT in my blog post.


>and there will be real use-cases (e.g. emulators)

Those are so niche a concern that they might as well not be taken into account at all when doing a new browser engine.


Yeah but you mentioned Spectre. It’s not clear if you can do a Spectre attack on an interpreter. Maybe you can, but it seems super hard and not practical.

And without Spectre, the argument for site isolation is much weaker.


This makes writing a compiler or writing an OS kernel look like child's play.


I'd say a browser is an OS. It is, of course, higher level than a kernel like Linux that can run on the bare metal, but it has most of the aspects of a full OS.

It manages memory, processes and security, and it can run user-supplied arbitrary code. It has an API, not unlike system calls, that allows programs to access the underlying hardware. It doesn't do things like writing on the address bus directly, but it does have the equivalent of drivers that are proxies to the underlying OS services for display, audio, video, etc., which it abstracts away. Like many OSes it comes with a compiler and a shell.

Sure, it doesn't access the hardware directly through the address and data buses, like one may think an OS should do, but now that we have stuff like hypervisors, even what are unquestionably true OSes like Windows and Linux may not access the hardware directly either.


Any application (or library, or framework) that can dynamically load files that lead to causal executable behavior meets your description above in substantial part.


Windows, Linux, MacOS, et al. are all just bootloaders for Chrome.

I'm joking. Maybe. Probably. It seems like we've come full circle with MS-DOS and Windows 3 in the 90s.


"We can cause any problem by introducing an extra level of indirection."

- The Fundamental Theorem of Bloatware Engineering


>This makes writing a compiler or writing an OS kernel look like child's play.

Indeed. The modern web browser is the single most advanced, complex, and highly developed piece of software that 99% of people will ever interact with, and we treat it as a given. Having a highly performant, fully sandboxed VM that runs on every system imaginable, in billions of devices used by nearly every human on earth, is the single greatest triumph of software engineering ever, perhaps second only to the Linux kernel.


> used by nearly every human on earth

Roughly 60% of earth's population used the internet in 2023. So not quite nearly every human.


25% of the Earth's population is younger than 15. Assuming only 20% of them use the internet, the TAM (excluding the under-15s who don't) is about 80% of the world's population, and the 60% of humanity that's online works out to 75% of that TAM.

3 out of 4, or 75%, seems like nearly every human, I'd say (though it is a matter of opinion I guess.)


Indeed it is a matter of opinion.

Much like the cliché "the internet is the sum of all human knowledge".

If you have deep specialist knowledge in a field that has existed for more than thirty years, it can become quite obvious that the internet really is nowhere near "the sum" of our knowledge.


If we exclude the over-75s with no interest in using it (and no use case for it in their environment/culture), and the under-10s, we're pretty close to covering all the rest.

And even the 60% is some hand wavy estimate about internet reach, about people using the internet frequently. More people use the internet than that, some transparently through some smartphone app or basic feature phone, which almost everybody has even in the poorest places in Africa.


Lol at under 10 and over 75 having no interest. I know avid users … content creators… in both categories.


Lol at you missing the qualifier "and no use case for it in their environment/culture".

Everybody knows avid users at 75+ or 10-, you haven't discovered something new. But there are whole countries/areas/cultures where e.g. the majority of 75+ have no internet use, and no interest in getting any. You know, in the world, not in the US.


I think "nearly every human" is a bit more hand wavy than the 60% statistic.


A browser engine is a compiler. Or, more properly, it's at least two compilers (HTML + CSS -- you can outsource JS to V8 or whatever).


The DOM stuff, JS, WASM, WebGL, WebGPU... at least five compilers, with JS and WASM needing two distinct frontends (baseline/optimizing) and at least two backends (x86/ARM), and WebGL and WebGPU needing three backends each (D3D/VK/Metal).


Isn't WebGL's GLSL directly delegated to the driver, just like normal OpenGL? Also, one could easily write a lot of frontends and a single massive centralised backend with multiple processor targets and optimisation profiles. Think about V8, which works for both JavaScript and WebAssembly. This would create a much simpler codebase, and if you're going to use a parser generator it could very well be a breeze.


> Isn't webGL's GLSL directly delegated to the driver just like normal OpenGL?

Perhaps you could in an MVP implementation, but in practice no, none of the serious implementations do that by default. First because native OpenGL drivers are generally a mess, so browsers actually implement WebGL on top of DirectX, Vulkan or Metal wherever they can, and even when those aren't available the browsers still parse, validate and reconstitute the GLSL rather than passing it straight through to OpenGL as a layer of insulation against driver bugs. Chrome and Firefox do have hidden feature flags which bypass that behavior and call the native GL implementation directly, but you probably shouldn't enable those unless you're really into ShaderToy and want it to compile faster.
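A compilable stub of that pipeline, just to show its shape (every type and function name here is a hypothetical stand-in, not ANGLE's or any browser's real API):

    #include <optional>
    #include <string>

    struct ShaderAst {};  // stand-in for a parsed ESSL syntax tree
    enum class Backend { D3D11, Vulkan, Metal, SanitizedGLSL };

    ShaderAst ParseEssl(const std::string&) { return {}; }                // stub
    bool ValidateAgainstWebGLLimits(const ShaderAst&) { return true; }    // stub
    std::string EmitForBackend(const ShaderAst&, Backend) { return ""; }  // stub

    // The author's GLSL is parsed and validated in the browser, then re-emitted
    // for whatever backend is in use; the driver never sees the original text.
    std::optional<std::string> PrepareShaderForDriver(const std::string& webgl_glsl,
                                                      Backend backend) {
        ShaderAst ast = ParseEssl(webgl_glsl);
        if (!ValidateAgainstWebGLLimits(ast))
            return std::nullopt;  // reject before it ever reaches the driver
        return EmitForBackend(ast, backend);
    }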


Wow, that was something I wasn't expecting. Well, I guess it does kinda make sense that running untrusted code on a GPU wouldn't be the best idea, but I seriously thought that browsers just passed their GLSL directly to the GPU. Also, since Linux doesn't have its own graphics API, I'm afraid that WebGL support would introduce a lot of complexity: you can definitely go via OpenGL, but Vulkan could also be an option.

Thank you for the tip!


https://github.com/google/angle

ANGLE is the de-facto standard library that all of the big browsers use to implement WebGL on top of other graphics APIs, if you want to read up on it.


Thank you, I'll definitely take a look at it.


It's basically an OS too. Application isolation, UI toolkit, 3D graphics APIs, storage, network stack. Netscape was right — browsers reduce the underlying OS to (a buggy set of) device drivers.


Your use of the term "compiler" to mean "something which transforms an input into a (different kind of) output" seems a little broad here.

We do not typically call TeX a "compiler". We don't consider XSLT engines to be "compilers". dot(1) is not normally considered to be a "compiler". And so on.


There are at least two garbage collectors in there as well.


A compiler by itself is surprisingly easy, especially if you read Abdulaziz Ghuloum's or Jeremy Siek's tutorials. Making it competitive with state-of-the-art compilers like Clang or HotSpot is difficult, but this is true for every kind of software.


A compiler for a toy language is easy. 99% of professional programmers probably could not implement a compiler for C.


A compiler for what? Some languages are easier than others.


Has anyone had the idea of escaping from HTML/CSS? These specs are too complicated and not friendly for web developers either. Maybe we could re-invent a browser engine without conforming to the HTML/CSS specs?

An (early) alternative spec/engine would be a Figma-compatible vector graphics spec[2] and its rendering engine[3]. It is called VeryGoodGraphics[1].

[1] https://verygoodgraphics.com/, the website is built with its own technology (wasm version).

[2] https://docs.verygoodgraphics.com/specs/overview

[3] https://github.com/verygoodgraphics/vgg_runtime


VeryGoodGraphics has no accessibility tree and no results when searching the documentation for “accessibility”, which makes it broadly immoral (or if you want to disagree with that, at least illegal to use it to build production systems in many locations). If you can’t get that right from the start, or even have plans for it, then you’re obsolete.


Even if you don't care about that (and you should!), "you can't highlight text [without doing additional work that nobody will do because it wasn't an explicit KPI for them]" is itself really disappointing and bad.

We escaped Flash. We shouldn't clamor to go back.


The reasons to escape Flash are the performance and power-consumption issues, rather than accessibility.

If you take browser as a document viewer then accessibility is critical. However if you take browser as a universal application platform, then accessibility is not necessary, right?


Sorry, I was just raising the question. In my country most app makers don't care about accessibility. Now I know.


No? Why shouldn’t apps be accessible?


> However if you take browser as a universal application platform, then accessibility is not necessary, right?

Are you really suggesting that people with disabilities shouldn't be able to use web apps?


If there is a VGG-native browser then accessibility is not so hard to implement. The awkward problem is that current VeryGoodGraphics is just a canvas node in HTML (using WebAssembly + WebGL). So adding accessibility support will be a nightmare technically.


Lively Kernel was one such idea.

https://youtu.be/gGw09RZjQf8?feature=shared



Lock Casey Muratori in a room until he designs the right API? He definitely believes the current one is the wrong API :-)


Relevant XKCD: https://xkcd.com/927/


My opinion is that the rendering engine should be WASM specified in a header. This way the site provider can choose whatever engine they want, including possibly not even using HTML.


Yeah - there really is an opportunity now to rethink browsers as just sandboxed rendering windows using WebAssembly + WebGPU.

You could still have typical DOM rendering handled with WebAssembly delivered by the websites (ideally cached). The challenge, though, is still having standards and accessibility options. That VeryGoodGraphics example allows for no text selection - and doesn't handle zooming at all. Still, it'd be a good bottom-up way for a new browser to disrupt Chrome.


How would ad blocking work in this world? A browser without ad blocking is useless.


How would ads work in this world? The advertising ecosystem relies on adding a 1-2 line JavaScript blurb to the page, and then the ads are added at display time.


Seems a good place to mention https://sciter.com/

It's been on HN loads of times.

A "browser" engine but very narrow scope. Works a treat for LOB type apps.


I was sad when the creator tried to raise enough funds to open-source Sciter and there wasn't much interest. If 1% of the people who complain about Electron had pledged something, we would have it as a good alternative today.


Open sourcing software to which you hold copyright doesn’t cost anything.


False. This is very naive. If this were true, we wouldn't get the regular posts on HN about OSS drama and how some people gave up on OSS. Opening the source up is more than showing us the source.


This post starts with a false dichotomy: You're either making a toy browser for fun, or you're trying to make the next Chrome. There are many points in-between on that spectrum. (Look how long Microsoft IE existed, despite being highly inferior.)


Not at all: at the very end it suggests some in-betweens, like building an alternative to Electron or embedded WebViews.

There are lots of browser engines like that still around. It'd be really interesting to see one that supports the vast majority of web standards but without trying to do everything else a full browser does.


That's only because it was the "Chrome" (i.e. the dominant browser). Look at what happened when Microsoft eventually tried to catch up with Chrome - they gave up.


> Look at what happened when Microsoft eventually tried to catch up with Chrome - they gave up.

That wasn't because they weren't up to the job. It's because Google was using Microsoft's playbook against them. https://news.ycombinator.com/item?id=18697824

> I very recently worked on the Edge team, and one of the reasons we decided to end EdgeHTML was because Google kept making changes to its sites that broke other browsers, and we couldn't keep up. For example, they recently added a hidden empty div over YouTube videos that causes our hardware acceleration fast-path to bail (should now be fixed in Win10 Oct update). Prior to that, our fairly state-of-the-art video acceleration put us well ahead of Chrome on video playback time on battery, but almost the instant they broke things on YouTube, they started advertising Chrome's dominance over Edge on video-watching battery life. What makes it so sad, is that their claimed dominance was not due to ingenious optimization work by Chrome, but due to a failure of YouTube. On the whole, they only made the web slower.

(While this may seem like poetic justice, Google's also been doing this against everyone else, so we shouldn't cheer. Also, unlike when Microsoft pulled these stunts, this might not even be deliberate on the part of the Google webdevs.)


YouTube did this to Firefox too - for a while they were serving a special degraded version of YT to Firefox users (polyfilled web components, instead of native web components or their classic non-web-components version) even though they had versions that would perform better. "Oops".

If you used user-agent tricks you could get them to serve a good version.


Now I wonder if this is a thing that still exists, and I wonder if I should be spoofing my user-agent to Chrome just as a general rule in Firefox.


It was fixed years ago, though it's possible they still do similar things in other scenarios.

UA spoofing in firefox will get you blocked by recaptcha and cloudflare.


IE/Edge was behind the competition at standards adoption since IE7. Just look at the history of Acid3, HTML5Test or Caniuse.

https://en.wikipedia.org/wiki/Acid3

https://html5test.co/results/desktop.html

https://caniuse.com/ciu/comparison


I hate that they did it but I have a hard time sympathizing with Edge/MS


It's a travesty that Google hasn't been slapped harder for that kind of behavior.


What a weak excuse. To blame an empty div for the demise of your browser engine just seems desperate.


There will always be a case that kicks you out of the fast path, but shouldn't (or a case where you take the fast path incorrectly, resulting in incorrect behaviour). This is a corollary of Rice's theorem.


More like a tautology: “if it ain’t Chrome/Safari/Firefox etc. it must be a toy”


If you actually want to build one I cannot recommend https://browser.engineering/ enough.


I've wanted to make my own browser by forking Chromium. But then I got confronted with the reality of hacking on a Google-scale C++ project.

I've tried doing it by embedding WebKit, which is much more doable, but I've found that many sites I use don't work in my embedded WebKit.

So now I think if I want to make my own browser it would be better to start making my own versions of those sites first and make sure those are good and popular. At which point I'll just make native clients for them and forget about HTML/JS/CSS.


Interesting how oldpersonintx's comment ("thinly-veiled passive-aggressive swipe at Ladybird") was dead'd here, when it accurately reflects the author's comments (as roc, Robert O'Callahan) on LWN.net: https://lwn.net/Articles/977625/


Firefox is also competitive with Chrome when it comes to telemetry and other anti-features.

Really being competitive with Chrome is not enough, because that still doesn't give anyone any reason to use your browser over Chrome. You need to actually provide some end-user benefit that Chrome doesn't (and ideally won't), and that is more important than being competitive with Chrome on the features Google wants to push. This is where Mozilla really dropped the ball. 15 years of Chrome lighting a fire under their ass and zero innovation in the way we browse the web. Well, less than zero when you consider all the useful extensions broken by browser changes over the years.

Meanwhile Mozilla keeps gaslighting us about how they care about privacy while trying numerous ways to integrate ads into the browser.

... yeah I see why someone would rather create a browser from scratch than support Mozilla.


I was going to suggest:

> initially try just being a faster, lighter or lower-power Electron or WebView.

But he mentioned it himself, though maybe someone might want to try this with no intention to become a full browser. Can you skip any of the tricky security requirements if it'll be bundled into an app? Or is that just asking for trouble?


I think sooner or later you're going to want to load lower-trust content --- IFRAMEs of third-party Web content, or sandboxed extensions, or something like that. Building your entire architecture on the assumption you'll never have to do that is very risky.


You could use the system webview for embedded third-party web content while using your own framework for trusted content.


> Or is that just asking for trouble?

The average Node project pulls in hundreds of dependencies. While you'd hope these would have some security vetting because of the Many Eyes theory, you have no fucking idea what your project is doing. Even a trivial Electron app is running a ridiculous amount of unreviewed third party code.

Just one module able to exercise some local exploit in your engine because you didn't fix Security Footgun #8176 screws over all of your users.

A browser engine that's been developed with a billion dollars of person hours runs that same untrusted third party code but has security guardrails everywhere.


Aren't those dependencies trusted anyway? If they want to do something evil, they can just do it, they don't need to look for a zero-day in the engine they're running on.


The LCE doesn't need to be in the engine, the engine just needs to lack protections for the code to run something locally. As for Node dependencies being trusted, they are trusted but that's largely unearned trust.


So basically what Sciter does?


> Or is that just asking for trouble?

With the interactions an Electron-like app might be doing with external services and the ton of third-party JS libraries it could use, I think it would indeed be risky.


None of the security mitigations described in the post (nor any of those implemented in any browser engine) are aimed at protecting developers against themselves when they run an agglomeration of third-party modules as a single bundle under the same policy.


CSPs and mechanisms against cross site scripting are such protections. They would block a script from calling home or executing arbitrary scripts or displaying images that could exploit vulnerabilities.
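For example, a response header along these lines (illustrative values only) blocks inline scripts and restricts where the page can load resources from or connect to:

    Content-Security-Policy: default-src 'self'; script-src 'self'; img-src 'self' https://images.example.com; connect-src 'self'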

So browser engines definitely protect developers against themselves a bit.

Although I agree with you that there's only so much you can do for the devs bundling crap themselves, I was wrong on this indeed.

Still, I would not be overly confident about web code running in a browser whose security has not been well studied, if it has any network capability. Especially if the app displays any external content in something like an iframe.


Tauri basically?


No. Tauri is not a web browser. It uses the existing platform browser.

This would be more like Servo which I believe is focusing on embedded use cases. It makes sense because for Electron/embedded you don't need it to work for every site (really really hard), you only need it to work for one site. (Or a few hundred/thousand if you count all your users.) That is several orders of magnitude easier.


How important is process separation if the browser engine is programmed with a memory safe language like Rust? I am under the impression that things like site separation are to bandage memory safety issues. Is that right, or are there other issues at play?


Rust does not give you protection against speculative execution attacks. Entirely different beast than memory safety errors.


I honestly think (I've been thinking about this for a few years now) that eventually an OS will be nothing more than a browser.


Isn't that what ChromeOS is?


that was the idea of FirefoxOS. Come help https://capyloon.org if you believe in that!


Great read :)


Who is this post intended for?

Nobody builds their own browser engines. And I mean nobody. I didn’t get the feeling that he wrote this for pet projects either.

Andreas over at Ladybird is probably the only one (and Servo?) who is really doing it the way that this post describes.

Still, the last couple of paragraphs made me think that this is more of a reflection of his own time over at Mozilla: could have / would have.


I was inspired to write it by reading about Ladybird, so I wrote it for Andreas Kling basically :-). But there is also Flow: https://www.ekioh.com/flow-browser/ and Servo. Maybe in the future someone else will try. Also I think it's fun to think about these things even if you're not building a browser engine. It only took about 90 minutes to write.


And I mean nobody.

One such project has pretty regular HN chonkthreads, including last week.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Yeah that is what I meant. To my knowledge, Andreas is the only person crazy enough to do it and also do it in the open.

You have to watch some of his YouTube videos to appreciate the insanity that goes into getting things to work, but he at least has some help from the OSS community: building automated tests, scripts, etc.

As I was reading the post I thought he would at least shout him out too!


Ah that's 'statistically nobody' vs 'nobody'.


The word directly under "Nobody" is "Andreas."


> Nobody builds their own browser engines

Anymore.

https://eylenburg.github.io/browser_engines.htm


Most people don’t need to write their own OS or compiler. Knowing the details of how things work is part of gaining expertise.


Flow is another example I've seen.

https://www.ekioh.com/flow-browser/


There's a reason they sometimes ask in interviews "what happens when I fill in a URL and hit enter" - because almost no one knows. It's good to know how your daily tools work.


As someone who worked with roc for many years at Mozilla, and with 25 years of Mozilla and Gecko experience myself, I think it serves as a solid warning.


For general web browsing it's true there's basically just Gecko, WebKit and Blink that are broadly used and compatible with the whole ecosystem.

There's a fair number of other engines though like Sciter, Flow, Cobalt and Servo.


> So You Want To Build A Browser Engine

> [bunch of things that are only relevant to an application platform]

Yeah you really don't need any of this for a web browser to be usable for its intended purpose.


I mean, yeah, you can ignore all that and have a functioning web browser.

But it would be riddled with security issues.


> So You Want To Build A Browser Engine

The only correct answer is, "don't".

I mean, if you want to build a toy browser engine for a CS class or fun or something, then sure.

But the idea that "you want to build an engine that’s competitive with Chromium" is, quite simply, nonsensical.

If you want your own browser engine, you're going to fork Chromium or Gecko (Firefox). I mean, even Microsoft gave up on maintaining its own independent engine and switched to Chromium.

I literally don't understand who the author thinks this post is supposed to be addressed to.

Building an independent browser engine could have made sense in 1998. These days it would cost hundreds of millions of dollars in dev time to catch up to existing engines... just to duplicate something that you already have two open-source versions of?


HTML and CSS are pretty terrible languages. Basic things like form interaction, centering, printing a page of something, overwriting every default UI, etc. often take an odd amount of code to accomplish.

We're collectively getting better via following the same spec, which is great, but challenging the browsers and even the spec can shed light on potentially better implementations. Let's not pigeonhole ourselves to a couple vendors just because it's easier and works _good enough_.

My favorite web design is that '90s style where text went from very left to very right, barely any CSS, and Javascript (somehow my favorite language) was for fart buttons.

I'm currently most excited about keeping up with WebRTC. And yeah, I have no interest in writing a new browser engine.


Even Chromium started with WebKit which itself was a fork. This doesn't mean you shouldn't be interested in browser dev but you also don't have to do a totally clean sheet implementation.


Looking back, it was kind of weird how both Apple and Google used WebKit for separate proprietary browsers for so long.


But that's exactly my point.

This article appears to be entirely about an implementation from scratch.

It makes very clear that it is not about forking Chromium.

But even Google was smart enough not to start from scratch.

So again, I don't know who the intended audience for this article is. It's advice on something no sane person or organization would ever do.


And yet a few people are doing it. So yeah, it's advice to insane people.


I have succumbed to the temptation to implement various aspects of this. I have also tweaked existing implementations for my day job. I even had the assignment to pitch an implementation to a client. So I am sure that some would question my sanity NOW. TFA suggests that I might not have been sane to start with.

I admire any team-of-one that takes on this endeavor and publishes their work.



