While the modern web is complicated, there are a few things working in Ladybird's favor.

Web Platform Tests (1) make it significantly easier to test your compliance with W3C standards. You don't have to reverse engineer what other engines are doing all the time.

The standards documents themselves have improved over time, and are relatively comprehensive at this point. Again, you don't have to reverse engineer what other engines are doing - the spec tells you how things should behave.

Ladybird has chosen not to add a JIT compiler for JS and Wasm, reducing the complexity of the JS engine. They've already matched (or exceeded) other JS engines on the ECMAScript test suite, Test262 (2).

There's a big difference in the level of investment between Chromium and the other engines - in part because Chrome / Chromium are often doing the R&D to build out new specifications, which is more work than implementing a completed specification. There's also a large amount of work that goes into security for all three major engines - which (for now) is less of a concern for Ladybird.

I'm confident that the Ladybird team will hit their goal of Summer 2026 for a first Alpha version on Linux and macOS. They'll cut a release with whatever they have at that point - it's already able to render a large swathe of the modern web, and continues to improve month-on-month.

(1) https://web-platform-tests.org/ (2) https://test262.fyi/


The Chromium codebase also implements requirements that you may not need to take on for just a web browser, e.g. all of the infrastructure to turn it into ChromeOS, including being a Wayland compositor and a lot of other stuff. Comparing the two projects is somewhat apples to oranges.

Ladybird does have another slight advantage in that it only has an interpreter for JS and wasm, instead of maintaining multiple tiers of JIT compilation for both. That choice materially reduces the surface area for exploits.

For Ladybird - Andreas Kling called out that the vast majority of "easy tests" are already passing, and each additional passing test is going to be harder to come by going forward.

https://www.youtube.com/watch?v=-l8epGysffQ (from 1:00 to 4:00)


There are a handful of git features which work significantly better with a clean history on main. If `git blame` points at a well-crafted commit, it can bring additional context to the line in question. In addition, `git log -S<string>` can be used to find the commit that introduced (or removed) a given piece of code.
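
For instance (the file name and search string here are made up):

    git blame -L 100,120 src/layout.c              # which commit last touched these lines
    git log -S"BlockFormattingContext" --oneline   # commits that added or removed that string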

Neither feature is very useful when it points at a "wip" or similar commit message.

By all means push lots of little commits to your branch while you're figuring stuff out, but squash and rewrite history into logical commits (usually just one) before landing the change on main.
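
One way to do that, assuming the branch was cut from main:

    git rebase -i main    # mark the extra commits as "squash"/"fixup", then reword the survivor

    # or collapse the whole branch into a single commit:
    git reset --soft main && git commit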


I learned about the --first-parent flag of git blame recently. It allows git blame to work well in repos that use merge commits.
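
i.e. something along the lines of (path made up):

    git blame --first-parent src/layout.c

With --first-parent, lines brought in by a merge get attributed to the merge commit on main rather than to the individual commits on the merged branch.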


There is also `git rebase -r` (`--rebase-merges`), which keeps merge commits during a rebase and lets you move them around.
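
e.g.:

    git rebase -i -r main    # the todo list keeps "merge" lines, which you can reorder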


When discussing security it's important to keep in mind the threat model.

We're mostly concerned with being able to visit a malicious site and execute wasm from that site, without that wasm escaping the sandbox to run arbitrary code on the host and install malware. You say the only benefit is that access to the OS is isolated, but that's the big benefit.

Having said that, WebAssembly has some design decisions that make exploitation significantly more difficult in practice. The call stack is separate from WebAssembly memory and effectively invisible to the running WebAssembly program, so return-oriented programming exploits should be impossible. Also, WebAssembly bytecode is kept separate from WebAssembly memory, making it impossible to inject bytecode via a buffer overflow and then execute it.
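
As a rough sketch of why, assuming a C toolchain targeting wasm32 (the names and the exact memory layout are made up / toolchain-dependent):

    #include <stdio.h>
    #include <string.h>

    void vulnerable(const char *input) {
        char neighbor[16] = "still intact?";
        char buf[8];
        /* Classic overflow: on wasm32, `buf` sits on the compiler's "shadow
         * stack" in linear memory, so the extra bytes land somewhere nearby
         * in linear memory (exactly where depends on the stack layout). */
        strcpy(buf, input);
        /* The overflow may corrupt `neighbor`, but it can never reach a
         * return address: those live on the engine-managed call stack,
         * outside linear memory entirely. */
        printf("%s\n", neighbor);
    }

    int main(void) {
        vulnerable("AAAAAAAAAAAAAAAAAAAAAAAA");
        return 0;
    }

However the overflow lands, it can only corrupt the program's own data; it can't overwrite a return address to redirect control flow, which is what native stack-smashing exploits rely on.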

If you want to generate WebAssembly code at runtime, link it in as a new function, and execute it, you need participation from the host, e.g. https://wingolog.org/archives/2022/08/18/just-in-time-code-g...


The downside of WASM programs not being able to see the call stack is that it's impossible to port software that uses stackful coroutines (fibers, or whatever you want to call them) to WASM, since that functionality works by switching stacks within the same thread.


> But (Safari is) the only option here that’s unsupported outside one manufacturer’s hardware.

You can use WebKit on Linux via GNOME Web (WebKitGTK), which is maintained by Igalia. That won't get you cross-device syncing, though, which is the main reason people want support on alternate hardware.


I think a big reason developers don't choose WebKit is that the Windows port requires significant work, and most new browsers want to support Windows.

In the linked thread there was a rough estimate of $1M - $2M USD to do that work. It's probably not far off the mark.

https://orionfeedback.org/d/2321-orion-for-windows-android-l...


Building a browser engine from scratch is a great exercise for validating both the specifications and the web platform tests.

For example, here are some bugs Andreas Kling raised against the HTML spec while building Ladybird:

https://github.com/whatwg/html/issues?q=is%3Aissue+author%3A...


Yep, embedding the Servo engine into the final binary. This work is sponsored by another (separate) NLnet grant:

https://nlnet.nl/project/Tauri-Servo/


Legacy Layout refers to the original system, Layout 2013. A second system, Layout 2020, was started to address challenges in implementing parts of the CSS spec that didn't map cleanly onto Layout 2013's architecture.

There's a good report in the Servo wiki from this year (authored by a group of Igalians) summarizing the differences between the two and why the decision was made to move forward with Layout 2020.

https://github.com/servo/servo/wiki/Servo-Layout-Engines-Rep...

