Ladybird: A new cross-platform browser project (awesomekling.github.io)
1341 points by eatonphil on Sept 12, 2022 | 473 comments



Given that he's already written an entire OS, I don't think he's being too ambitious. It already passes Acid3, which is more than can be said for Dillo and NetSurf, the two other prominent "alternative browser" projects. Those aren't even capable of passing Acid2.

The Ladybird browser came to life on July 4th

Coincidence or not, that's a great date.


> Given that he's already written an entire OS, I don't think he's being too ambitious.

Also, he actually did work on web browsers previously:

> Until that point, my career had been focused on web browsers (WebKit at Apple & Nokia).

So he knows exactly what he's doing.

Based on those two datapoints, I'd say he (and other contributors) have a good shot at it.


I haven't used it, but it seems that Flow[0] is also a plausible modern alternative browser which passes the Acid tests[1], unless you disqualify it for being closed source and borrowing its JS engine from Firefox.

[0] https://www.ekioh.com/flow-browser/ [1] https://www.ekioh.com/devblog/acid/


> unless you disqualify it for being closed source

Yes, I do disqualify it for that reason.


>> The Ladybird browser came to life on July 4th

> Coincidence or not, that's a great date.

Especially considering that St. Ulrich is also the patron saint of the dying.


Was thinking more along the lines of "browser independence" (from Big Tech) day.


Ladybird doesn't pass Acid 2 either. It does pass Acid 3.


what's the failure? I _think_ some stuff got changed in spec land which makes acid2 no longer "correct"


That's surprising. I'd expect Acid3 to require far more effort to pass than Acid2.


Why? Acid3 just requires basic CSS support and a somewhat ES5-conformant JS implementation.

Acid2 was specifically built to call IE out and to put browser builders on notice. It tests the most nuanced little quirks of some specific specs; and, because of some changes in the modern standard, doesn’t actually conform anymore.


Firefox passes acid2.


Safari on iOS only scores 98/100 on Acid 3!


my local Firefox is only 97/100, which surprises me...


Strangely I get 99/100 and it is totally smooth but pauses for just a second at item 68. Red text also appears in the very upper left that says " YOU SHOULD NOT SEE THIS AT ALL "

This is with all extensions off using Firefox ESR

https://imgur.com/a/ExBMzlx



That's also "only" 97% in Firefox.


I was also seeing only 97% in Firefox.

Disabling the LastPass add-on brought it to 100%. I'm guessing you also use LastPass?


That's a very fingerprintable observation.


Most rendering oddities are not directly detectable by JS on the page, with the notable exception of the <canvas> element.

Though I suppose rendering differences that affect z-order or visibility could be detected indirectly, by listening for pointer events on elements that are "supposed to be" visible/hidden.


Isn't one of the main uses of JS to get the contents of elements on the page? That "97%" vs "100%" would be clearly evident.


You are of course correct. I should not try to think about software at 4am! :)


You're right; disabling LastPass gets me to 100%... Strange.


I get 100% using Firefox 104.0.2 (64-bit) on Windows 10. Maybe it's related to the platform or addons? Could it be uMatrix or something blocking some kind of behavior that would be questionable on a normal webpage?


I get 100% also, using Librewolf fork of FF (104.0.2). RFP is on and uBlock Origin enabled.

edit: MacOS and Windows.


Also 100% on Firefox Mobile 104.2.0


100% on FF 104.0 for Ubuntu


Try disabling addons. 10ten reader, for example, is known to alter pages' style and to break Acid3. There may be other addons doing the same.


100/100 on my Firefox, and it's fairly customized with a bunch of extensions and non-default settings to the extent some sites refuse to work normally and I have to either ignore them or open them in an "alternative" "clean" browser.


Huh. I get 100% with 104.0.2 on Ubuntu.


100% - Firefox 103.0.2 on Mac OS


99% :-/


Try “https”?


Strangely I get 99% with https and 100% with http

On FF for Android it hits 100% but the image is incorrect (there is red text in the top-left).


> Strangely I get 99% with https and 100% with http

Same (FF 104.0.2 on Mac, with uBO).

The failing test is:

    Test 64 failed: object.data isn't absolute


That test is actually assuming the url is http, and doesn't work on https.


Try clicking on the A for more info. In stable I get 100/100 (97/100 in beta, but with privacy.trackingprotection.enabled=true and privacy.resistFingerprinting).


Came to Hacker News expecting the top comment to be a critical one outlining the inherent problems with this project and why it was doomed to failure, so this whole thread has made for quite pleasant and exciting reading! Thank you!

Yes, I am also aware that HN has blind spots, e.g. with the Show HN Dropbox comment.


July 4th is only celebrated in the US of A. The author is Swedish.


I tried to find some July 4th where something Swedish happened:

> 1708 Battle of Holowczyn: Swedish King Charles XII defeats superior Russian force in surprising victory

That must have been it.


Given how that campaign ended for Charles, I hope not!


I'm still living in 1708 Sweden, don't spoil anything for me!


The whole world celebrates 4th July thanks to Jeff Goldblum's virus and Will Smith's flying skills /s


>July 4th is only celebrated in the US of A.

Not entirely!

https://en.wikipedia.org/wiki/Rebild_Festival


July 4th matters, even if you are not American. It represents the first non-imperialist great power.

I’m not American by the way.


“Non-imperialist”? What?


First in all of history? I highly doubt that.

Relatedly, I find that with regards to history, most people have quite a large recency bias. If I ask someone who the greatest actors of all time are, at least half of the answers will be from within the last several years. Same with this question: the first non-imperialist great power is likely some random Chinese or Mesopotamian or African kingdom.


Non-imperialist at the time maybe?


Surely some people have birthdays, anniversaries, etc. on July 4th outside the USA.


That must be why it's a coincidence, then, because it's a date in the year that surely some people must have birthdays, anniversaries, etc. on. I didn't think that was unusual for dates, though.


July 4th occurs worldwide.


which is probably why the person you're replying to didn't say "July 4th only occurs in the US of A" but instead said "July 4th is only celebrated in the US of A"


You'd be hard-pressed to find it occurring in Iran.


Do they just skip from the 3rd to the 5th in Iran and have some kind of un-leap year? This is news to me. I thought the calendar was the same all over the world.


They don’t have July. See https://en.wikipedia.org/wiki/Solar_Hijri_calendar.

https://en.m.wikipedia.org/wiki/Tir_(month)#Observances:

“Independence Day (United States) - 14 or 15 Tir”


Jokes aside, the calendar is in no way the same all over the world.


All European US vassals are at least very happy to remember this day.


Why is July 4th relevant?


Because of Lady Bird Johnson, former First Lady of the United States.


Acid tests are not strict standards test suites. You should not be looking to adhere to them.


They're better than strict standards test suites, because they exercise real-world useful stuff AND give a fun visual representation of the maturity. Wish we had such an "acid test" for later CSS features like Flexbox too...


No, they're immediately provably worse. If you started work on a CSS rasterizer today, there are zero test suites you can use for reference comparison.

I'm talking about easy stuff, too, like normal flow, margin, border, padding box alignment.

The existing official test suites are entirely manual.

Edit: What you're describing requires you to make the entire world first, then finally test something.

Here's an example: I want to test the white-space property. How do I test that? Oh also, font rasterizers are all different and standards don't dictate what happens after box layout. It's acceptable that glyphs render with different dimensions.

How do I test that? A casual HN reader isn't going to care. Someone actually writing this stuff, will.


Sure, but I think you have a misconception: that those are like compliance tests for developers or have that role.

Rather, those Acid tests exercise the rendering engine in multiple aspects at the same time, and are meant for end users: to give them a nice visual representation of the browser's progress, or lack thereof, and to push browser developers to fix issues that cause visual glitches (the features drawing the test image were selected to be ones layout writers would want) and race each other to pass them...


I don't have any misconceptions about the state of web technology testing, or how WaSP made an impact on the miserable state of the industry at the time. I work on browser technology.


> there are zero test suites you can use for reference comparison.

What about W3C's web-platform-tests suite [1]?

[1]: https://github.com/web-platform-tests/wpt/tree/master/css


To test white-space, a quick thought (I haven't tried it): render some text that hits a chunk of the differences, starting with default/normal - say, text with a lot of tabs - then measure offsetWidth from JS. Or maybe export the canvas to an image and see if the base64 is the same.

Do it again with different CSS settings.


No, but they're good signals for the capabilities of a browser engine. Passing Acid3 means you've got a browser that would have been pretty damn good in 2010 or so. It's a good mark of progress for any three-month-old browser engine.


two years and 3 months


> Q: Do you have a JavaScript JIT compiler?

> No, we have a traditional AST interpreter that is being replaced by a bytecode VM. You can track the LibJS test262 score for both backends here. I’m not convinced that the complexity and security burdens of a JavaScript JIT are reasonable, and given recent developments like Microsoft Edge’s Super Duper Secure Mode, I’m interested in pushing for best-effort JIT-less performance while keeping the codebase simple.

Always excited to see new JS engines. I'd be curious to see where LibJS's performance / code simplicity / memory balance tends towards over time.
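For anyone unfamiliar with the trade-off being discussed, here's a hedged sketch (a generic illustration, not LibJS's actual code; all names hypothetical) of why a bytecode VM tends to beat an AST interpreter without taking on JIT complexity: the program is flattened into instructions once, and execution becomes a tight dispatch loop instead of recursive tree walking.

    #include <cstdint>
    #include <vector>

    enum class Op : uint8_t { Push, Add, Halt };
    struct Insn { Op op; int64_t operand; };

    // A minimal stack machine: no machine code is ever generated, so the
    // attack surface of a JIT (writable-executable memory, compiler bugs
    // reachable from untrusted input) never appears.
    int64_t run(const std::vector<Insn>& code) {
        std::vector<int64_t> stack;
        for (size_t pc = 0; pc < code.size(); ++pc) {
            switch (code[pc].op) {
            case Op::Push:
                stack.push_back(code[pc].operand);
                break;
            case Op::Add: {
                int64_t b = stack.back(); stack.pop_back();
                stack.back() += b;
                break;
            }
            case Op::Halt:
                return stack.empty() ? 0 : stack.back();
            }
        }
        return 0;
    }

    // run({{Op::Push, 1}, {Op::Push, 2}, {Op::Add, 0}, {Op::Halt, 0}}) == 3

The cost of staying JIT-less is peak throughput, which is exactly the "best-effort performance while keeping the codebase simple" trade the FAQ describes.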


I wonder if a new implementation of JavaScript was needed. My first hunch would have been to try Fabrice Bellard's QuickJS: https://bellard.org/quickjs/


It definitely wasn't needed, but it goes against what I think makes SerenityOS so special.

That being that everything in SerenityOS and its several related projects is made from scratch.


> If you wish to make an apple pie from scratch, you must first invent the universe.

They are using Qt and C++. They have chosen where to draw a line, which is fine, and that line happens to be on the other side of "Javascript Engine".


They are not using Qt in Serenity, only on Linux in order to hook up painting and events.

Also, they are making a new programming language (Jakt) to replace a lot of the C++.


To be fair Qt is just to create the window on Linux, but all the browser internals are their own.


It's also being used for networking.


For now…

Everything is from scratch on SerenityOS. Ladybird takes as much of that as it can, but is not afraid to use existing tech for the bits of Serenity that cannot currently be used on Linux (the GUI framework and networking).


I think they'll replace Qt with their own GUI lib eventually - and they're already building their own language to replace C++ (for their uses I mean)...


> they're already building their own language to replace C++

Of course, every respectable software project sooner or later invents its own language!


I guess a precise definition of the line would be something like "no third party code dependencies"


Qt is not third party?


For their operating system (SerenityOS) they're not using any dependencies; they have their own windowing framework. For the cross-platform version of the browser, they use Qt instead. But on Serenity itself the browser does not use it.


fair enough


Well... not if you're using KDE as your window manager?


Evidently the line is a curve and did not include a third party JS engine, since the project doesn't include a third party JS engine.


I know. I think you did not understand my comment.


This actually makes a lot of sense, especially when you consider that WASM is now a thing.


It's worth noting, as I'm sure you already noticed, that they are supporting WASM in addition to JS. So maybe the right answer for "your simple, secure bytecode JS interpreter is too slow" really is "Just precompile that slow part to WASM".


I've been watching his YouTube videos on developing the LibJS and LibWeb components. They are interesting and show that this kind of thing can be done.

As a larger point, it is important to have multiple web stacks so that no one player can dictate what the web should look like, or gate access through that stack. Having this new stack grow to the point where it is becoming a viable alternative is great to see.


I've been watching his videos too for the last 6mo or so. If you like them, you should also watch the videos of his friend Linus Groh on SerenityOS [1]. He does a lot of stuff on the LibJS and LibWeb libraries. I think he is the lead developer for those, because every month when Andreas does the monthly summary video of changes in the OS, he defers to Linus to talk about the browser changes.

[1] https://www.youtube.com/channel/UC84u7JhM9EIAYzyjdf6cBbA


> As a larger point, it is important to have multiple web stacks so that no one player can dictate what the web should look like, or gate access through that stack.

That strikes me as a pretty naive view. If you've got a web stack with >90% market share, you effectively have one web stack which dictates what the web should look like.

Because most devs will only test for it (best ROI) and most users will switch to it when their ability to use the web on other stacks starts degrading.


This is what nearly happened with IE, except Firefox then later Chrome managed to displace it to some extent. Now it would be likely to happen with Chromium based browsers if not for Mobile Safari being an important target as well.


Chrome displaced it purely from the huge marketing power of Google, being the search engine the world over. By putting a "use chrome" message into Google Search, they got the word out in a way that no other browser vendor could ever hope to do.

I'm not hopeful that it can ever be replicated. That said, I will always continue to use browsers other than Chrome, so at least selfishly I'm excited about projects like this!


Pure marketing isn't the reason chrome took over. It was super speed and security.


Nope. Firefox got up to speed really fast, and security should never be used to describe a browser that is inherently spyware (_do you even have a clue about what the browser is sending home?[1]_).

Playing dirty [e.g. purposely changing YouTube to not work with the standards adhered to by FF], making promises that they never fulfilled, and lots of aggressive marketing are the only reasons that they achieved dominance.

[1]: https://github.com/ungoogled-software/ungoogled-chromium/blo... That's the list of unique divisions that Chrome sends to, but don't forget that they send which page you went to, how long you stayed on the page, your passwords in plaintext, etc.


> your passwords in plaintext

are you sure they do that? That's a big claim, and a huge security problem.


I saved a password in Chrome on Android.

Went home, logged into the same website from my desktop. Chrome offered to log me in.

My unprofessional conclusion - Google could not have encrypted my password in a way that would have prevented them from logging in as me on another device.

As an aside, I was logged into GMail on the desktop, but I couldn't recall having logged into Chrome. If correct, then once you have logged into a Google property, all is fair game on any other sites you visit.


It’s absolutely the reason it took over. Without it, that speed and security meant nothing.


> This is what nearly happened with IE, except Firefox then later Chrome managed to displace it to some extent.

It took MS sleeping on it for several years and the entire web taking off like a rocket for that to happen. Not to mention “full web” mobile expectations.

It took cataclysmic disruptions in the space to even have a chance of breaking IE's monopoly.


It's already happened

A website I use said I need to use their Chrome add-on to automate some info gathering stuff. But I use Firefox?


In all fairness, that's one example. There's probably some outdated stuff that still requires IE.


What does "no one can dictate" mean?

Google dictates it, and I've run into many, many sites that don't function with Firefox or anything other than Chrome.


Got any concrete examples? I hear this a lot, but have never experienced it. I’ve been using Firefox as my primary desktop browser for years now. I can’t think of any website that didn’t function. But maybe I encountered one and just thought the site was down?


In my case: several financial institutions that I have accounts with and lots of enterprise apps and gear with web UIs either break silently, have certain functions that don't work, or block with an error telling you to use a supported browser. (If they're nice, they even tell you which ones are supported.)

For a while, Slack didn't support all features on Firefox despite the technology being there, I don't know if this is still the case.


Please report this kind of bug on webcompat.com. It's often possible to get them fixed.


Shopify admin is one (but also one that's better to just not use if you have that option)


More than once I have spent precious time filling out some form or going through some wizard only to find that I cannot “submit” or that validation is failing on some field or other in Firefox. Switching to a Chromium-based browser works. I know it has happened to me at my bank. I cannot recall exactly where else. Reservation systems? A couple of travel approval sites I think (during COVID).


The Hebrew language Nespresso site pops up a browser dialogue "you have unsaved changes" every time one tries to browse to another page. The dev team is aware of it, but "the site only supports Google Chrome" so they don't fix it. https://www.nespresso.com/il/he/


The version of Unit 4 that we use at work only works in Chrome.

That said, lately, that particular thing has been the only one that I need to use that outright doesn't work.

I think a number of Google technologies are slower in Firefox on purpose, but I just avoid them and keep telling competition authorities about their abuse of power.

Sooner or later they will get caught.


GoDaddy complains with other browsers...

https://news.ycombinator.com/item?id=32093987


Reading that thread, it sounded like OP's issue was more with an extension blocking something than the browser, but I'll admit, I do not use GoDaddy, so I haven't experienced it first hand.


While common knowledge says that building a new browser is "too big, too hard", I'm excited that some intrepid people are tackling this problem and wish them success!


The SerenityOS project is managed with tremendous pragmatism and so I think they can get there.

If you watch his first Ladybird video after the project announcement, it is just him saying that he has spent the day getting Reddit to look right. He records the hour or so that it takes him to make CSS font emblems render properly. It is a priority because they appear on Reddit.

The video that brought SerenityOS the most fame was probably him porting Diablo ( well, DevilutionX ) to his OS. When he encounters a missing function from his C Standard Library, he simply adds it.

He is very driven by practical usability.

He wants an OS written from scratch with a full suite of software that he can use as his daily driver. So, he needs a web browser. He needs one that properly renders the kinds of sites he visits. Simple.

Compare this to projects like ReactOS that resist even adding support for a working browser.


Yes. I mostly agree with the common knowledge, but am glad someone is putting it to the test. We'll learn from either their success or failure.


When it comes to common knowledge in programming, it's more often than not incorrect. I still see people telling experts they shouldn't use global variables, that you can't beat the optimizer, that you should use a library (for something as trivial as left-pad), etc.


Andreas Kling is a legend. Having followed him since the early days, I am so happy that the whole ecosystem has come so far. What a refreshing change to get unadulterated insight into the mind of a true hacker, when the rest of the world is stuck grinding leetcode.


Isn't it wonderful what a social safety net brings us?

For context, SerenityOS was dreamed up during a stint at a state rehab facility. (This information is included in a link from TFA)


One has to wonder what Terry Davis could have made if he got care for his schizophrenia.


I sympathized with him, a brilliant and broken person. The U.S. is full of such homeless people, coast to coast, fallen through the social safety net like crazy diamonds on the streets.

His operating system was deeply related to his obsessive mystical thoughts. The computer was an "oracle", a medium through which he communicated with the Divine. If he had received the mental healthcare he needed, I wonder if he would have continued with the project. In a way, that project was all he had, when he had lost everything.


I've heard from multiple schizophrenic people that many of the meds given for schizophrenia also cause significant cognitive impairment or make them drowsy/low-energy. So hard to say.


Tangentially, but on the subject of alternate browser engines, how is Servo doing these days? Has anyone managed to keep that project alive after the Mozilla layoffs in 2020?


Poorly. It's still updated, but 3k open issues against just 30 PRs paints a dark picture.

Servo had trouble getting $10k per year in funding for just basic CI expenses.


Interesting that there is little to no mention of Servo here. The once-hyped browser engine 'Written in Rust™' was meant to be a revolutionary new alternative, to be used in Firefox for its fearless safety features. Yet the entire project headed for the scraps at Mozilla and is sitting somewhere on GitHub with no one using it.

When I see hype around a new browser engine from scratch I am immediately skeptical, unless those 'donations' turn into 'corporate sponsorships'.


Servo was never meant to be an alternative browser engine that would replace Gecko.

It was an R&D project and, if you look at the bits that made their way in to Firefox, a fairly successful one. Stylo, WebRender, various parsers etc...


(sorry to post twice on the same topic, but your first sentence is not accurate, see https://news.ycombinator.com/item?id=32813714)


I know better than to dispute Alon Zakai on Mozilla internals, but it remains true that Mozilla's external messaging unwaveringly insisted that Servo was not going to replace Gecko. Even in the later years it emphasized the strategy of gradual replacement of select components.


Interesting. I may be less aware of the external messaging than you. Do you have some examples of that perhaps?

I'm not doubting you, I'm just curious what was written. I'd expect stuff like "Servo is an experimental browser engine," or, "We don't currently have plans to replace Gecko with Servo" (which was obviously true at all times in history), but I'd be surprised to see something like "Servo will never replace Gecko." But maybe I just didn't see that messaging - could be!

It's also worth mentioning that "replacing Gecko" may be the wrong way to look at it. More reasonable is for Gecko and Servo to eventually come into alignment - somehow. That's what happened in practice, in a way that has a lot more of Gecko than Servo, but in theory it could have happened the other way, with a lot more Servo. Even in that situation, I wouldn't say Servo "replaced" Gecko - it's more complicated than that.

For Mozilla, Servo was an experiment. Experiments can end up as full products, or parts of them, or not at all. I think it's clear one possible outcome of the experiment was a full browser (I was personally rooting for that all along). It didn't end up that way, but when people say a full browser was never the intention, I think that's not accurate history.


> Do you have some examples of that perhaps?

Not off hand (and Mozilla's official marketing rarely goes so low-level as to mention the browser engine itself by name), but (as someone who has been hanging out in Rust spaces since 2011) I recall the Servo devs always being very careful to disclaim any intentions to wholesale replace Gecko in Firefox (though I did always wonder if they were instructed to do so, as a way to keep Gecko contributors from rebelling).


Yeah, I also didn't see a culture of saying "we're going to replace Gecko." Not because there was any instruction to speak carefully AFAIK, but because that's not the goal. The goal was for Servo to succeed, hopefully both technically and as a product.

Product success could happen through convergence with Gecko (as actually happened, but again, the convergence could have been tilted the other way), or through Servo powering something separate from Gecko (for an example of that, in later years Servo was meant to power Mixed Reality projects, which is a story in itself; but other ideas include Servo as an embedded engine, or powering distinct tabs in Firefox and other obvious ideas).

In some of those options Servo could have been a full browser engine or even a full browser. I think there was always a hope for that. I'm sad it didn't happen, but it's still a big success in several important ways. Anyhow, I just hope the history doesn't get rewritten as "the goal was always exactly what happened in the end."


> Anyhow, I just hope the history doesn't get rewritten as "the goal was always exactly what happened in the end."

Indeed, that's not my goal, I wish that Servo would have been developed even further and still hope that it might happen someday, somehow.


Me too!


Note that if you are running Firefox now, you are using Servo's graphics stack.


As well as its style system.


Servo wasn't funded by donations, it was funded by Mozilla. That single point of failure was their downfall. I'll take 100 monthly donations over one corporate sponsor any day.


> Servo wasn't funded by donations, it was funded by Mozilla.

All thanks to Google, which 'funds' more than 85% of Mozilla. Google not only ships a browser that directly competes with what Mozilla stands for; Mozilla also keeps falling behind on features that Google pushes into the web standards committees at the W3C, sitting there doing nothing but going along with them, late.

Mozilla failed because, for 14+ years, it was unable to make money for itself from any source other than Google, and it could not move away from them when they ate its lunch and quickened its decline.

> I'll take 100 monthly donations over one corporate sponsor any day.

I didn't say 'one corporate sponsor', but let's go with the '100 monthly donations'.

How many of those developers are active and full time like awesomekling? Less than 5?

A simple look at their Patreon / GitHub sponsorship pages shows that 100 people paying an average of $4 a month is hardly enough to support a single developer - roughly $1K monthly going to one person to build a browser - let alone fewer than 5 active developers. Realistically, corporate sponsors would make this more sustainable.


I'm not very knowledgeable about this, but did failing to find a new revenue source really lead to Mozilla's decline? From articles about their CEO's massive salary, I'm (perhaps naively) thinking mismanagement is a far greater issue.


[flagged]


Servo produced a very fast CSS rules matching engine now used in Firefox (see https://nolanlawson.com/2022/06/22/style-scoping-versus-shad...), and WebRender which underpins Gecko's rendering. That's not nothing.

About embedding Servo, I have some experience around that (https://github.com/fabricedesre/servonk/) and it's been easy to embed: you just need to provide a GL surface and hook up your input events into their event loop. Clearly there were not zero integration points, and some other examples exist (like https://github.com/paulrouget/servoshell).


Components of Servo were integrated just fine into Firefox. WebRender is the big one, and iirc many smaller ones too (the css parser maybe? memory hazy)

There's also non-web-related software that uses WebRender, such as Azul. https://azul.rs/

Maybe there aren't integration points at the layer you were hoping for, but that doesn't mean the entire project is "bullshit"


Servo supported the CEF (Chromium Embedded Framework) API for embedding. See https://github.com/paulrouget/servoshell and https://github.com/jdm/servo-embedding-example


[flagged]


Rust is actually good at parallelization. Good examples include ripgrep (multithreaded grep) and Stylo (multithreaded CSS selector matching), both are state of the art in their fields.

I agree with you that Rust is not best at implementing data structures and dynamic language runtimes and async networking. But Rust is good at parallelization.


Rust is good at all those things. Just not using the reference/ownership patterns that people are used to. There's a lot of mental overhead in managing lifetimes that becomes quite taxing if you start the wrong way.


Have you tried using Rust for anything?


Added to Arch Linux repos a few hours ago!

https://archlinux.org/packages/community/x86_64/ladybird/


The problem with browsers is that they conflate the execution environment, which should be as small, understandable, analyzable, obviously free of bugs as possible, with the common APIs, which as developers we want as rich as possible.

An interesting alternative was the Embassies project[1], which got WebKit running on an understandable computing base. The current-day equivalent would be to get WebKit running on top of, say, WebAssembly and the Canvas API. That would be immensely impactful, but isn't as sexy or cool as writing your own browser from scratch.

That being said, I applaud the Ladybird initiative. Scratching your own itch is the way to go when aiming for maximum enjoyment.

[1] https://www.usenix.org/conference/nsdi13/technical-sessions/...


I still think the problem underlying the complexity of browsers is inherent in using HTML/CSS as the base language.

Most complex languages get interpreted or compiled into a much simpler format, and the simpler format gets executed. This lets the execution environment be small and understandable, as you said.

Using HTML/CSS as primitives means the execution environment has to be complicated, because the execution environment itself needs to be aware of things like whether certain attributes get inherited and how they are inherited.

I have hope for WebAssembly + Canvas, because it's a simpler set of primitives. There's a clear boundary between the simple execution environment and the tools that translate complex instructions into simple instructions.


And then let's throw out all browser extensions and accessibility software as webpages become an unapproachable, unmodifiable <canvas> tag, a true corporate dream. People underestimate the importance of webpages being documents. It doesn't translate well to all use cases - see apps and games - but to throw it out and go for sealed binary formats would be a big step back.


That's a valid concern.

I think there could be middle-ground approaches, we just don't have them right now. E.g. React is basically entirely interpreted; we could create something like React that used a different set of primitives than the DOM. Browser extensions could act as middleware between React and those new primitives, as a plugin to the execution engine similar to how they currently work. Someone smarter than me probably has a better idea of how that would be implemented, but it seems possible.

A lot of webpages I see aren't even really documents anymore. They serve a very basic HTML page and use JS to flesh out the rest. It's not all that conceptually different from Qt to me, except that there's a standards-enforced way to serialize the objects out into text. We could create a middle-ground engine that used simple primitives and could be serialized out into a text format. Qt kind of does that with Qt Creator, from what I know. I believe it creates XML docs that specify the objects to create.


They also need to handle a lot of backwards compatibility as well as gracefully handling syntax errors
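To make "gracefully handling syntax errors" concrete, here's a toy sketch of the kind of recovery involved (my own illustration; the real HTML5 parser is a large, fully specified state machine): unclosed elements get implicitly closed rather than the input being rejected.

    #include <stack>
    #include <string>
    #include <vector>

    // Input: tag names like "p" (open) or "/div" (close).
    // Output: a balanced sequence, in the spirit of HTML error recovery.
    std::vector<std::string> recover(const std::vector<std::string>& tags) {
        std::vector<std::string> out;
        std::stack<std::string> open;
        for (const auto& t : tags) {
            if (t.empty()) continue;
            if (t[0] != '/') {                    // open tag
                open.push(t);
                out.push_back("<" + t + ">");
            } else {                              // close tag
                std::string name = t.substr(1);
                // Implicitly close anything left open above the match.
                while (!open.empty() && open.top() != name) {
                    out.push_back("</" + open.top() + ">");
                    open.pop();
                }
                if (!open.empty()) {
                    out.push_back("</" + name + ">");
                    open.pop();
                } // a stray close tag with no matching open is dropped
            }
        }
        while (!open.empty()) {                   // close what's still open
            out.push_back("</" + open.top() + ">");
            open.pop();
        }
        return out;
    }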


> as well as gracefully handling syntax errors

Do they? Maybe they don't? Maybe we're at a point where people can author html/css, and if there are syntax issues, they just 'break' until someone fixes it. 25 years ago, maybe we needed some laxness, and perhaps today, we don't?


They do, because web content can live for a very long time - that's a strength of the web. So today's and tomorrow's browsers need to be able to process these 25 years old documents, because they are still part of the web, syntax issues included and no one will come to fix them.


The counter-argument from the embassies point of view would be that, since you're running the browser on top of the simpler execution environment, you could ship the exact version of the browser that works with your content. That way, only the execution environment (i.e. the simpler APIs) would have to be backwards compatible. So content could theoretically live longer, as we'd ship the interpreter of the content along with the content.


You can just load that content in another browser.

If your browser handles all of the popular sites, people will use it. Nobody cares about whether Flash loads anymore, do they? Yet there's millions of Flash animations and games never ported to another platform


The web is made of links. It's quite annoying to switch browsers when following links.


I think the idea would be that when you click on a link to a 25 year old page, the browser brings up a little warning saying "This page is 25 years old and has some errors that may stop it displaying correctly. Press the Refresh button to re-render it using an older browser engine."

In the worst case, the browser would then have to download some additional code that contained all the support for invalid HTML/CSS code. In the best case, the 25 year old page would be served with a header containing a cryptographic proof that the site really had been around for 25 years, and it wasn't just some newly created attack site that was exploiting some weird behaviour in old renderers.


Again, nobody linked to Flash content in a while. You might still encounter it in the wild and it won't work. I don't hear anyone complaining.


I mean, I don't entirely disagree, but XHTML was not very popular, and HTML5 demands some wiggle room - e.g. many tags can be omitted as per the spec.


I've semi-seriously suggested the canvas road twice. I have little doubt we're going to make the same mistakes yet again but it might end up better overall.

Gonna need Vulkan, better tooling, and lots of front-end brain rewiring. Might happen before the end of the decade.


There is already WebKit.js: http://trevorlinton.github.io/


I didn't know about Super Duper Secure Mode. That's pretty interesting.

https://microsoftedge.github.io/edgevr/posts/Super-Duper-Sec...


GrapheneOS adds a permission to Chromium on Android so you can manually allow websites to have their JavaScript JIT'd. Most websites are plenty quick without the JIT, although SPAs like Tinder and Google Maps did struggle without it.


I use Bromite which does the same and the only issue I run into is when sites need WASM. (Chromium doesn't have a non-JIT WASM interpreter, though IIRC Edge does.)


Would this prevent me from looking at or modifying the Javascript code that is running, since it's already compiled?


This is a completely different thing.

The "super duper secure mode" is not at all about you accessing or modifying the JS code from a website you are visiting. What it is about is trying to protect against some piece of JS code bypassing the security of the browser sandbox and accessing stuff it shouldn't ever have access to.

:)

Edit: Also, in general, no, disabling JIT would not prevent any particular JS code or extension from running. At most, it might mean some code would run slower. But that's what they claim, that it isn't noticeable.


Thanks for the clarification.


Hmmm. Maybe I misunderstood the link or missed something. I thought all they did was disable the JIT and leave the JS interpreted.


I'm not knowledgeable with JS at all, so you're probably right. I do wonder what the "Code Integrity Guard" is going to mean - who has to sign the JS? Does this mean tampermonkey/adblock scripts would stop working? Genuinely asking, as like I said, I don't really know too much about JavaScript.


This seems to be referring to https://docs.microsoft.com/en-us/microsoft-365/security/defe... which would mean that they require the browser itself to be digitally signed, not the JS.


Congratulations and best wishes to Andreas and his team. His work is very inspiring.

My question as a systems programming newbie:

Could a compiled binary of this project with HTML and CSS be used to create cross platform apps like Electron?


The original LibHTML commit makes it sound like it was originally intended for embedding, so I don't see why not: https://github.com/SerenityOS/serenity/commit/a67e823838943b.... Electron has affordances beyond just being a web view, so you would have to add those (or work around not having them).


Probably. The Chromium browser engine first had to be turned into Electron, so there is no reason why you couldn't do the same here.


Yes, it should work. There are also cross-platform alternatives: Sciter and Ultralight.


> The browser and libraries are all written in C++. (While our own memory-safe Jakt language is in heavy development, it’s not yet ready for use in Ladybird.)

Have a look at Jakt; it looks like a really cool language that strikes a balance between performance and simplicity. And it has proper sum types!

https://github.com/SerenityOS/jakt


And memory-safe.


Unpopular Opinion: Ditch HTML/JS/etc. altogether and invent a new way to create, deliver, and execute ANY content natively on any platform.

Replace browsers -and- existing operating systems altogether.

——

As of now, in terms of end results for the user, what's missing from the Linux/Mac/Windows GUIs:

• Typing something (URL etc) and accessing the latest version of an app, with latest content, and knowing that everyone else in the world is getting the same thing.

• Linking to content within apps.

What's missing from browsers and the web: Too much to list, but mainly:

• Basic shit behaves differently on every website.

• Too many artificial restrictions: Can't scroll freely, can't select freely, can't even freely zoom or save images on the most popular image-sharing websites, fuck you Instagram.

• I can't save my own data on my own machine, or easily move it between web-apps. You have to jump through hoops to even access all of your data that the website's company has access to, if ever.

• Inefficient usage of your hardware.


HTML is OK. Get rid of JS. Rewrite the DOM stuff; render HTML with a more efficient graph mechanism (e.g. virtualized, able to handle 100k nodes).


> Typing something (URL etc) and accessing the latest version of an app, with latest content, and knowing that everyone else in the world is getting the same thing.

This is why I prefer managing the software I use myself. Upon first receiving a new version of a web app, I have never tried it or had a chance to review the changes to figure out how they affect my workflow. With software installed natively with my package manager, I have a chance to review everything and update only the software I want to update.

> • Linking to content within apps.

No, all major operating systems support scheme handlers.


> No, all major operating systems support scheme handlers.

No, you know that rarely works out in practice and is not what I meant.

Take this example: https://news.ycombinator.com/reply?id=32814887&goto=threads%...

Or: https://old.reddit.com/r/Gloomhaven/top/?t=month

But can we do something like this? file://Users/Me/Pictures/?filter=cats&sort=new

Or this? pixelmator.app/lastdocument/thumbnail


> No, you know that rarely works out in practice and is not what I meant.

No, as far as I know it works very well in practice and thus I don't know what you mean.

Take this example. Steam has registered as a handler for the steam scheme on my PC. Any other app can open e.g. steam://store/655480 to link to a game in Steam. It's up to the scheme handler how the URI works. Steam supports a bunch of different actions and forms of deep linking via the steam scheme.

Here's what Slack supports via its scheme handler, for example: https://api.slack.com/reference/deep-linking#client

It's exactly what you're asking for, so I have no idea what compelled you to immediately dismiss what I was saying.
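From native code, the same deep linking is a single call to the OS launcher. A hedged sketch (assumes a Linux desktop with xdg-utils installed; on macOS the equivalent command is `open`, on Windows `start`):

    #include <cstdlib>

    int main() {
        // The OS looks up whatever handler is registered for the
        // "steam" scheme and hands it the full URI.
        return std::system("xdg-open 'steam://store/655480'");
    }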


If you control the application/backend then I don't see why you would need HTTP(S) URLs to get the right content.

Android actually allows registering HTTP(S) URLs with certain domains and wildcards (https://developer.android.com/training/app-links). Not a big fan of this "validate a token through Google to skip the app choice dialogue" approach but luckily Firefox always prompts before opening a registered app link anyway. iOS supports a similar feature. It's up to browsers on other platforms to implement their own support if this is something one would want to add to the browser.



You can't "execute ANY content natively on any platform".

It is true that the things that you list are the actual problems, and there are many others, too. The modern WWW is too messy a design, and in addition to that, web pages and web browsers are also badly designed.

I have a partially written specification for "VM3", which will hopefully be an improvement for cross-platform application programs. There is no CSS, nor Unicode (well, technically you can include Unicode text fragments, but it is discouraged), and it is independent of protocols (so you can use it with local files, DVD, HTTP(S), Gemini, etc., but none of these are required, nor is it limited to any specific kind). All I/O must be done using extensions (identified by UUID), which require a specification; it also has a mechanism for polyfills which is (in my opinion) superior to that of the WWW. Furthermore, it is intended to be better for end-user controls/customizations. Also, I have made some of the internal design decisions differently, in ways I think are better than what is currently done (e.g. how linking works, and many features use a binary format, etc.). Extension declarations can also be linked with each other, too (note that this is linking the interface, not the implementation; the implementation is deliberately not specified by VM3).

Some VM designs are good for interpreted, and some are good for JIT, but I think that to make implementations competitive, and improve efficiency, it may be better for the design to be good for both, so that is what I have tried to do. For example, there are restrictions on how branch addresses can be derived (e.g. one feature is that you cannot store branch addresses in general registers).

VM3 cannot "execute ANY content natively on any platform" (and nor can anything else), but my intention is to make something better than existing systems.

Things similar to TerseNet, and other different things people will want to make, may be possible as specific subsets of VM3. An implementation of VM3 can then support specific subsets or can have full capabilities (preferably, configurable by the end user).


And we could make it rely on Postscript and call it "NeWS" :) https://en.wikipedia.org/wiki/NeWS


On the one hand, yes I like native apps. On the other, I don't want to have each site install its native app and have one more trojan/tracker to worry about.

I guess SPAs and WASM kind of give you that within the security of the browser's sandbox.


20 years ago I thought flash would take over. Shows you what I know.


> Ditch HTML/JS/etc. altogether and invent a new way to create, deliver, and execute ANY content natively on any platform

Sounds like you want WebAssembly.


That's just more turd polish.


Obligatory XKCD link: https://xkcd.com/927/

Variety is good. New browsers with new engines are good (though I think an announcement like this without any pre-built binaries for normies to download is a bit premature). I say this as a web developer who well remembers the IE6 struggles of yesteryear - but also fears the Chrome monoculture of tomorrow. UI differences between web pages are annoying, especially when they break things like scrolling and selecting, I agree, but the answer isn't to creatively flatten what the web can be, Big Brother-style.

As for Instagram, well, perhaps you should consider making a back-up of your own photos before you entrust them to a service owned by a company whose sole profit model is taking your content and putting ads next to it.


It's interesting to see Windows in the list of platforms to be supported.

LibJS itself was very interesting, but unfortunately it depends on a bunch of other Serenity libraries (AK, LibCore, etc.) which in turn heavily depend on POSIX.

I'm curious if that means they want to make LibWeb / LibJS actually portable C++ code and how they plan on achieving that (hopefully without introducing a POSIX compatibility layer on win32 like msys or whatever).


FAQ in there [1] states "Windows (WSL)" with no mention about plans for native Windows build.

[1] https://awesomekling.github.io/Ladybird-a-new-cross-platform...


That's just confusing, they shouldn't include Windows at all. WSL is a virtual machine running a Linux kernel.

It's akin to saying Microsoft Office runs on Linux; you just need to set up a VM running a Windows kernel first :)


Yes it's confusing and running on WSL isn't really the same as running on Windows.

I've just tried it, and it's early days - there are hundreds of thousands of edge cases to consider just to get it really "working" on Linux, so it's no easy task. On the other hand, implementing a POSIX compatibility layer for Windows covering just the things you need in the browser is relatively easy.


Theoretically it should be possible to build Ladybird for Windows using Cygwin. For now, you can run an AppImage in WSL: https://github.com/SerenityOS/ladybird/issues/33#issuecommen...


Happy to see new browsers appearing. Up to this day I'm still trying to find a good replacement for Conkeror(http://conkeror.org/), with little luck. webmacs was quite promising for a bit but it just petered off.


Check Nyxt (https://nyxt.atlas.engineer/). Based on WebKit (actually designed to be engine agnostic but WebKit is the only one supported) instead of Gecko, and scripted (also developed) in Common Lisp rather JavaScript.


And if you need something a bit more robust, based on up-to-date FF code, and maintained, I've been toying with LibreWolf (https://librewolf.net).


Acid3 100%, wow! Some fuzzing, then ready to replace Mozilla?

But it looks like the whole WWW ecosystem resembles the CVS -> Subversion situation, and L.T.-like comments start to look very sane...


Acid3 is not nearly 100% of features you would want to cover in a contemporary browser engine. The author agrees:

> we do pass the classic Acid3 standards test, which covers a bunch of basic CSS layout features, and various DOM/HTML APIs. However, the test does not cover many of the features used on the web today (like CSS flexbox, CSS grid, etc.)


Let's hope this gains some momentum. Also see http://www.netsurf-browser.org/ - another neat project working to create their own browser engine that has made good progress.


I use NetSurf on my Sony Vaio P; it's the best it can run.



To those who want to try NetSurf on Ubuntu, I recommend installing from Flatpak. The default Ubuntu package seems to be broken on HTTPS sites; or at least was when I checked it the last time.


The SerenityOS project builds everything from scratch. In addition to “fun” they have made a very interesting argument about this.

Ladybird implements its own HTML parser, CSS renderer, TLS layer, JavaScript engine, WASM VM, and even its own font rendering. As if building a browser from scratch were not hard enough, why not reuse some existing libraries, as some comments here suggest?

Andreas had countered that he finds it slower and more difficult to work with a typical Open Source project that uses many libraries authored independently. In SerenityOS, if you want a feature or find a bug, you just make the change wherever required and check it into Git. This is true even if you make changes to multiple sub-systems at once. You have access to everything. In a more typical project, you may have to work around bugs in layers you do not control. You may be completely blocked by missing features. The standard argument is that you can just add it yourself but, as he points out, you may have to wait months for your changes to be incorporated and they could even be rejected. You can fork stuff but having to maintain a foreign code base written for different use cases using different philosophies is no fun. If you have to maintain it, why not write it the way you want.

Anyway, his argument is that for large, complex projects, it may actually be easier to write things from scratch than to try to cobble together a basket of independent dependencies.

There is something to be said for this perspective in my view. His progress with SerenityOS seems to support the argument.

At any rate, I am hopeful they can succeed and certainly wish them luck.


This is great! SerenityOS comes up on HN quite a bit and I've always been super impressed. Last time, he almost offhandedly mentioned it had its own browser and I thought that was so notable in itself that I started sponsoring him on GitHub. I'm thrilled to hear his goal has become to make it cross platform now!


Haha writing a browser is easy! (No, no it’s not at all. Please don’t hurt me anymore.)

The worst part is the fucking patent lawyers: https://pdfpiw.uspto.gov/.piw?docid=09576068


No, that's definitely not the worst part. The worst part is there is no nice way to convert the abstract language from CSS 2.1, the part that everyone cares about right before JavaScript integration, into a standards-compliant algorithm. Why? Because no definition for it exists. It is left as an exercise to the reader.

WHATWG doesn't improve on this either, in fact, they completely leave it out, whereas at least W3C's original work makes it clear that it's descriptive and you need to figure out an algorithm that makes it work.

Edit: The section describing the processing model for CSS is non-normative. The authors provide an example flow, but a normative algorithm doesn't exist for CSS.[1]

[1]: https://www.w3.org/TR/CSS2/intro.html#processing-model


FWIW, I know two (partial, kinda) formal specifications of CSS normal flow and float layout, both of which are finished ie dead projects:

[1]: https://lmeyerov.github.io/projects/pbrowser/pubfiles/paper....

[2]: https://github.com/uwplse/cassius

(not counting the 1990s constraint CSS effort).

The first was merely part of a parallel compiler project and also covers table layout, whereas the second is a Racket (Scheme) program to formulate the HTML doc and CSS rules as a theory for submitting to z3 SMT to solve all kinds of decision problems (it can also produce a rendering).

Not sure that's very helpful; it would be cool if the W3C could invest some time into better specs (not just prose).


Sure, but how does this affect a browser that is not a product for sale?


Oh almost certainly not. That technique was obsolete ten years ago. I was just showing my battle scars from having had to do a ground-up browser in deference to the magnitude of the accomplishment.

Writing a browser was no walk in the park like 6 standards revs ago: I bet it’s a fucking nightmare now.


LAME is an LGPL MP3 encoder that you always had to download separately from whatever software used it because of MP3 patent fears. Even if it doesn't create direct financial liability for the creators it can have a chilling effect on adoption.


The support, both monetarily and in contributions, to help him work on this full time and stay sober is one of the stories in tech that always warms my soul. I will always follow the project, with a soft spot for it, because of this. At the end of the day, who cares about the software; a man has kicked his vices and is healing. Bravo.


Firefox is great; I've been using it for the past few months. It required some customization, yes, even to the source code, to get it to my liking. For example, you can't override CMD+P or use CMD+Shift+C to toggle the dev console, so I had to make a patch [1]. I also had to develop a couple of plugins that were missing [2]. It's a joy to use knowing you're giving the middle finger to Google, your data is not tracked and stolen from you by Google, and you're making way for a more free internet. Sure, sites like YouTube are slower to load (because Google makes them so if you're not on Chrome?) but those are minor inconveniences and there are not a lot of them. So I'm very happy with Firefox and wish more people would tolerate a slight initial inconvenience and make the switch.

[1] Also made a Mac OS launcher filesystem hook to auto apply the patch every time the browser updates.

[2] Yes, I packaged and published them too.


WebKit/Blink has a really extensive test suite for the HTML rendering parts...

I wonder if that test suite could be run against LibWeb to give all those test-driven-development developers a good target to aim their efforts at...?

As a side effect, multiple browser engines sharing test suites might help make the web more compatible...


I really hope this is successful. We need more web platforms outside of Chromium.


Previous discussion (56 points / 8 comments): https://news.ycombinator.com/item?id=32014061


I'm afraid the web is too far gone for new browsers. I'm already struggling to get through captchas on a Linux machine, and if I roll a non-Chrome-based browser then I essentially end up solving 2-3 captchas every few pages. TLS and JavaScript fingerprinting are making it so only a privileged few can browse the web without giving away hours of free labor.


My experience running Ladybird has been that websites generally tend to work fine, even behind Cloudflare. Only through shady VPNs and Tor do I really get bothered by captchas.


This is a pretty exciting time as we enter the 3rd phase of browser development. It seems like there are some very interesting paths forward. Arc (via The Browser Company) looks very promising. Brave is an interesting concept to browse the web through another person's servers... There are a lot of ideas, and one of these will no doubt become the next Chrome.


Kling's story in general, and this in particular chapter, is truly inspiring.

This led me to think that I'm not sure what it inspires me to do, and so made me wonder whether a sense of inspiration must always be inspiration to do something, or whether it is sometimes just a feeling (like being sad, happy, or angry).

Sorry for the somewhat off-topic thread.


What's the minimum Clang for this? I've been playing around with old versions of MacOS recently and most of the "Let's keep some old version of Firefox alive" projects are failing or bound to. Sooner or later this might be a good alternative.


I believe this relies on quite a few of the libraries in SerenityOS and according to one of the lines [1] in one of the build scripts for that it looks like Clang 13 will be required as a bare minimum. The Serenity project is also using the fairly recent C++20 standard according to their contributing document [2] and they additionally are working on a new programming language of their own called Jakt which I imagine will become a requirement in the future. For the time being, it looks like this will make it harder to support older environments which is a bit unfortunate because their browser project does look really interesting.

[1] https://github.com/SerenityOS/serenity/blob/master/Meta/sere...

[2] https://github.com/SerenityOS/serenity/blob/master/CONTRIBUT...
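
For what it's worth, enforcing a compiler floor like that is often just a couple of preprocessor lines. A minimal sketch of the idea (illustrative only; Serenity's actual check lives in its CMake/build scripts, not in a header like this):

    // Illustrative only -- not Serenity's actual mechanism.
    // Fail the build early, with a clear message, on a too-old toolchain.
    #if defined(__clang__) && __clang_major__ < 13
    #    error "Clang 13 or newer is required to build this project."
    #endif
    #if __cplusplus < 202002L
    #    error "A C++20-capable compiler is required (e.g. -std=c++20)."
    #endif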


This is awesome, and I would like to understand the layout code.

I wrote an extremely primitive layout engine a few days ago that I was intending to build a custom browser around, but it's a very challenging problem. I use the ORCSolver greedy algorithm for widths and heights. I plan to implement text support, but that's a hard problem really.

I used wxWidgets and wrote it in Python for simplicity and understandability.

I plan to implement branch and bound with intervals.
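
In case it helps anyone picture what a greedy sizing pass can look like, here's a minimal C++ sketch (my own illustration, not ORCSolver and not Ladybird's code): walk the children in document order, grant each its preferred width while space remains, and let later boxes absorb the shortfall.

    #include <algorithm>
    #include <vector>

    struct Box {
        int preferred = 0; // width the child asks for
        int assigned = 0;  // width the pass actually grants
    };

    // Greedy width pass: each box gets its preferred width if it still
    // fits, otherwise whatever space is left. A real engine would also
    // handle min/max constraints, percentages, and shrink-to-fit.
    void greedy_widths(std::vector<Box>& boxes, int available) {
        for (auto& box : boxes) {
            box.assigned = std::min(box.preferred, available);
            available -= box.assigned;
        }
    }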


That made my day. Go ladybird!


Maybe now they can make a better web browser, hopefully. I would have chosen to design some things a bit differently, including a better separation of parts to avoid binding them too closely together (to make it more possible to customize them in programs that use these libraries), if it does not already do so (I do not know how closely coupled they already are; I haven't read all of it). I would also keep the protocol interface separate from the implementation (so that other protocols can more easily be added). (I also would have preferred C instead of C++, but C++ might do OK anyways.)

It should also be helpful that independent programs can be written which combine these libraries and use them in their own ways, e.g. to change the connections between components, to deliberately exclude some features, and/or to add new protocols, URI schemes, file formats, JavaScript functions, HTML commands, CSS commands (I have an idea for how to do this with "meta-CSS", a feature to be used purely by the end user (document authors cannot use it)), character encodings, audio/video codecs, and other features, and/or to change some features. (I would hope to be able to write such extensions/programs in C, instead of having to use JavaScript.)

I also would have designed it to avoid converting to/from Unicode if possible, preferring to work in the document's text encoding directly, and to use fonts of those encodings directly when possible, with Unicode as a fallback when needed or if there aren't fonts etc. available for the specific encoding in use. (This is to work around many problematic features of Unicode.)

User options are also needed. For example, there should be an option to disable MIME sniffing if the user does not want it. (One way to support this and other options is to implement a "header overriding" menu that the end user can configure (both request headers and response headers); this can be used to implement many options that the end user can configure, including (but not necessarily limited to): DNT, language, cookies, HSTS, etc.) Also, a better implementation would not try to hide things from the user or believe they know better than what the user has specified.

I also think that some parts of the W3C specifications aren't so good, and a browser needs to work around that somehow in some cases.

Nevertheless, it seems a good idea that they are making a new one which is hopefully better than the other ones (which, in time, we may be able to see).


I believe I speak for all web developers when I say "noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!"


I looked at one file only, but isn't it an out-of-bounds memory access if domain_string has only one char?

https://github.com/SerenityOS/ladybird/blob/master/CookieJar...


Why post here and not a PR if you think there's a bug?


Interesting, from libHTML:

    class Node {
      Vector<Node*> m_children;
      Node* m_next_sibling { nullptr };
      Node* m_previous_sibling { nullptr };
    };
Either m_children or m_next_sibling/m_previous_sibling is clearly superfluous.

m_children + m_parent are enough for tree representation.


You usually keep next/previous sibling to speed up iteration

See: http://aosabook.org/en/posa/parsing-xml-at-the-speed-of-ligh...


You can store the index of the node in the parent's m_children collection.

Iteration over an array is faster (cache locality) than doubly-linked list traversal. And there are issues when you traverse the tree while changing it.

In any case I've found that storing child nodes as an m_children collection is more convenient in many cases. At least in Sciter (https://sciter.com).

    struct node {
      weak_ref<element> m_parent;
      uint              m_node_index;
    };

    struct element : node {
      vector<ref<node>> m_children; // strict ownership, sic!
    };


Unless sibling is sideways (same level of the tree) and children is downward (one level closer to the leaves).

I haven't looked at the code but that's my first impression given the nomenclature.


No, you didn't get it.

You may have children stored as a doubly-linked list, in which case each Node needs m_next / m_previous references, plus the parent node needs an m_first_child reference (if the list is circular), plus an m_parent (Node) reference.

So you need 4 pointers for each non-terminal node, or 3 pointers for terminals.

Otherwise (vector of children):

You need only an m_parent (Node) reference in terminals, and in container nodes an additional vector<NodePtr> m_children to store child references.


You don’t need parent because you can use a breadcrumb stack to track parents.

You want sibling nodes; otherwise you will sometimes have to traverse back toward the root node to find a sibling.
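
A minimal sketch of the breadcrumb idea, assuming a children-only node type (my own illustration, not anyone's actual engine code):

    #include <vector>

    struct Node {
        std::vector<Node*> m_children;
    };

    // Depth-first walk that can answer "who are my ancestors?" without
    // storing any parent pointers: the breadcrumb stack plays that role.
    // breadcrumbs.back() is the current node's parent, when non-empty.
    void walk(Node& node, std::vector<Node*>& breadcrumbs) {
        // ... visit node here, with full ancestor context available ...
        breadcrumbs.push_back(&node);
        for (Node* child : node.m_children)
            walk(*child, breadcrumbs);
        breadcrumbs.pop_back();
    }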


> You don’t need parent

You do need parent, check this: https://developer.mozilla.org/en-US/docs/Web/API/Node/parent...

> You want sibling nodes; otherwise you will sometimes have to traverse back toward the root node to find a sibling.

The only need for this is in the Node.nextSibling implementation: https://developer.mozilla.org/en-US/docs/Web/API/Node/nextSi...

Where a linear search over the vector (std::find) is pretty sufficient.

But in reality (at least in my Sciter) the node stores its index in the parent's m_children, so it is an O(1) operation.
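
To make that concrete, a rough self-contained sketch of the scheme (my simplification; not Sciter's or LibWeb's actual code):

    #include <cstddef>
    #include <vector>

    struct Node {
        Node* m_parent = nullptr;
        std::size_t m_node_index = 0;  // position in parent's m_children
        std::vector<Node*> m_children; // a real engine would use owning refs

        // O(1): one bounds check plus one vector lookup, with no list
        // traversal and no linear search through the siblings.
        Node* next_sibling() const {
            if (!m_parent || m_node_index + 1 >= m_parent->m_children.size())
                return nullptr;
            return m_parent->m_children[m_node_index + 1];
        }
    };

The trade-off lands on mutation: inserting or removing a child in the middle means renumbering m_node_index for every later sibling, so it's O(n) in the number of children.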


Is there source code available online illustrating your more efficient DOM data structure?


How about the + operator in CSS?


And what is wrong with it?

And the ~ combinator and the :nth-child(n) selector, for that matter.


I'm responding to the claim that you only need the ability to find a sibling for the nextSibling method.


Planning on being Widevine et al compatible? It's the only part I see as having an outside constraint.


kling for king! ... the good people of the world love and cheer for you. onwards, while there is life. :)


Anyone else having trouble building?

    ladybird-git/src/ladybird/ConsoleGlobalObject.cpp:10:10: fatal error: LibWeb/Bindings/NodeWrapper.h: No such file or directory
   10 | #include <LibWeb/Bindings/NodeWrapper.h>


Pull the latest from GitHub rather than the AUR package. It's out of date. All things Serenity-related tend to get out of date quickly :)


I also can't build it (on Alpine), but I get a different error. I've tried to build Ladybird 3 or 4 times over the course of as many months and had no success.


Did you manage to do it? I (also) failed miserably.

It would be great if they added a GitHub Actions script to perform automated builds on each commit or PR merge.


Just curious, is there a strong need out there for alternative web browsers? It seems like there are already a lot of alternatives to Chrome and Firefox: e.g. Opera, Brave. What's the goal of yet another web browser?


There's really only Firefox and WebKit derivatives (everything else you listed), for the most part. I'm increasingly of the opinion that duopolies don't result in good progress, so I fully support new entrants in this space, even if I think it's unlikely they'll get near the popularity needed to compete at a high level.


Andreas gets asked these questions weekly (probably daily), and answers them in his FAQ. It's not to compete, or to be a business.

https://awesomekling.github.io/faq/


Opera, Brave, and most other browsers besides Firefox are based on Google’s Chromium.


Safari and Orion are based on WebKit. WebKit and Chromium are quite different these days.


Someone could have said the same about media players: there's Windows Media Player, QuickTime, etc. Thankfully VLC didn't listen, and we have a marvel today to thank for it.


Also, some would say MPC is a great improvement over VLC, even though no one really thought they needed one at the time.


I wonder, and this is speaking from virtually no experience in the area, if it would be possible to use machine learning to infer all the myriad layout rules (etc.), instead of actually writing to spec.


I'm fairly familiar with ML, and I'd say that's a definitive no.

Implementing a layout spec is exactly the kind of thing that is "easy for computers, hard for humans". ML is for things that are "hard for computers, easy for humans" (like, telling dogs apart from cats, or transcribing speech, etc).


My favorite example is differentiating between blueberry muffins and chihuahuas. :)


You could probably get something that would work pretty well for most common things but wouldn't have the same behavior in more complex edge cases.


I'd bet even that would work only with some or all of: (1) a LOT of training data, (2) a LOT of preprocessing, and (3) less famous architectures (perhaps recursive neural nets).

I say so because (among other reasons) current popular ML architectures (like transformers) in general count processing hierarchical data as one of their weaknesses. For example, there's a theoretical paper proving that self-attention blocks (which are central to transformers) cannot solve arbitrarily nested negation (i.e., resolve not(not(not(true))) to true). In practice as well, we often see that language models have trouble dealing with naturally occurring double negation in language, etc. But CSS/HTML is, I think, very hierarchical.


if you want to completely give up any agency over actually being able to fix concrete rendering problems


There was a paper on HN recently about auto-inferring semantics of x86 instructions — although it used an SMT solver (I think) rather than ML.

https://cs.stanford.edu/people/eschkufz/docs/pldi_16.pdf


No


I would be happy to use a new browser as long as it supports ad blocking.


It does, here is an old video of Andreas building it: https://www.youtube.com/watch?v=Jc22wPqpaBQ

The standard Serenity install even comes with a basic filter list: https://github.com/SerenityOS/serenity/blob/master/Base/home...


I know very little about web engines, but I would be interested to know how this project is architected, perhaps to supplant WebKit for alternative browsers such as Vivaldi.


Hell. Yeah. Screw the naysayers. This is awesome!


I am excited for this. However note, builds are not available for Windows, and it is not currently possible to even compile for Windows:

https://github.com/SerenityOS/ladybird/issues/33


If he was going to build everything from scratch, why didn't he make a new markup and styling language that would fix everything that's wrong with the web, instead of reinventing a jaggy wheel? That would've been a revolution.


There is something like it called Project Gemini: https://gemini.circumlunar.space


Then you couldn’t use it to browse any of the existing web…


Practically, would anyone use it to browse the web anyway?

The HTML/CSS/JS combo has been pretty much as-is, only with some better tooling, for the past 25 years, and we could use fresh innovation.

It's usually a small team that initiates an innovation, as big companies don't like niche ideas, but if it gets enough dedication to gain traction, it can fly.


> Practically, would anyone use it to browse the web anyway?

Yes, I think that's the point of the browser: to allow users of SerenityOS to browse the web.


My favorite part:

> Q: Why bother? You can’t make a new browser engine without billions of dollars and hundreds of staff.

Sure you can. Don’t listen to armchair defeatists who never worked on a browser.


My thoughts exactly. If Linux could, I don't see why a new browser can't.

If we start slowly and focus not on delivering a working product ASAP but on a nice codebase that's accessible for other developers to read, with carefully split modules so each part of the browser is a mini-project by itself, with clearly defined business logic and outward connections (GUI, APIs, drivers, etc.), it can be done.

Yes, it will be a lot of work. But with a tidy codebase and a welcoming community (at least for a programmer), that "a lot of work" will instead become a playground for hacking on whatever component one desires.

The problem is that people don't want another Linux-like hobby project, they want another Firefox


If you want to build another Chromium you need billions, since that's how Chromium was built. But maybe you don't even need to build another Chromium.

The end user couldn't care less what if-else nightmare logic you ran in your "application engine" in order to reach the current framebuffer; they care only about the framebuffer. Following this logic, that tidy codebase should perhaps also be tiny: in the extreme case, just thinking on a whim right now, it would consist of only one "module": a stable diffusion-like algorithm which ingests the HTML/CSS/JS specification documents and the requested website response and "simply" outputs the adequate framebuffer, with which the end user interacts accordingly. This kind of approach probably reduces the cost from billions to only a few millions for the GPU training time. Sort of a "generative browser", a "genser", if you will.


No offense, but I think that's the problem with our approach to browsers. People want the shortest route to deliver an open-source Chrome that's not Chrome. And Chrome won. That's it. Even Firefox, an amazing feat of engineering by itself, couldn't come close to the userbase that Chrome has.

The if-else nightmare logic is exactly what drives developers away, and a surefire way to kill a project before it's even born.

Since the success of Ubuntu as a distro, people have started to think of the FOSS ecosystem as a gratis competitor to commercial software. And IMHO that's the wrong approach to open source.

We should go back to the roots, forget about competing, focus not on what's best for the end users but on what's fun for the FOSS developers, and bring the hobby back to open source projects. Maybe someday this project will become huge. Maybe not. Dunno.


This. And I just hope that the license doesn't kill this project - there is a reason why Linux (GPL) is still here while Konqueror/KHTML (MIT) is nowhere to be seen, or worse, was (ab)used by Google to get us to this mess.


KHTML is LGPL not MIT, as is WebKit (which was made by Apple, not Google - Google forked WebKit to make Blink, also LGPL).


Thank you, it is LGPL - I remembered wrong. No difference though; Blink is open source, but Chrome is not.


Almost all of Chrome is open source. You're confusing it with Safari, where the engine is open source but the browser UI isn't.


It depends what we're after. Almost all of the code in LoC percent terms might be open-source, which helps people who want to launch derivative browsers or learn from the code. But on the other hand, almost all of the Chrome instances running today are proprietary builds with mystery special sauce added (I'm guessing desktop Linux users are the only significant userbase to run vanilla Chromium), which doesn't help us on privacy, neutrality, etc...


Almost. There is no confusion, rest assured. They are both proprietary, but one of them likes to act as if it were open source (see also: Android), and clearly it hides it well.


Android as far as I know uses a source-drop model. For both WebKit and Chromium, all code reviews are as far as I know done publicly.


AOSP is open source, but Android itself is far from it - just ask Huawei. More and more code is in proprietary bits of Google code. And Chromium is not Chrome, most people use Chrome builds which are proprietary.


Sure, I was merely thinking out loud (a bad move, it seems, judging from the child comments) about how to solve the "need billions, got maybe millions" issue, and since I can't get Stable Diffusion out of my mind, as everyone recently, I just saw it as a possible light at the end of the tunnel: why not let the machine implement the specification.

I am all for going back to the roots, but at the root of all the roots is the end user: the machine must do something useful, otherwise it's a niche postmodernist art contraption (not that there's anything wrong with that).


I think the "if-else nightmare logic" caused some non-friendly answers over there but I get your idea.

I don't entirely agree with the end user being the root of all, at least in FOSS. While they shouldn't be alienated, the focus should be on having a good approach to the project, one that is friendly to the idea of having to code after a full-time job, like most open source developers do.

Keep in mind that this isn't a job; nobody will fire you, and most complaints from end users can be ignored with no consequences. If the project is not fun and engaging, then what's the incentive to keep going?


My point was that a machine is always intended to do some useful work, and even for the sole developer of a private project, that project still has one end user, the developer themselves, since the project exists to achieve meaningful computation one way or another, even if that computation is performed only in the test suite; hence the end user as root. However, as a project organizer, sure, you want to strive to welcome others: make it easy to embark, one CLI command to set up, interesting to engage and persevere in development, and so forth.


> We should go back to the roots, forget about competing, focus not on what's best for the end users but on what's fun for the FOSS developers, and bring the hobby back to open source projects.

I prefer Blender's approach of heavily prioritizing users over developers [0]. It's been a huge success.

[0] https://youtu.be/qJEWOTZnFeg?t=1612


Firefox was more popular before Chrome existed. It was winning ground from MSIE. Chrome ate marketshare from both, and Safari took the mobile share (Mozilla Mobile / Fennec, though in development before the iPhone's release, never got popular).


> If you want to build another Chromium you need billions, since that's how Chromium was built.

Chromium isn't successful because of the billions thrown at it. It's successful because it was significantly better than anything else during its rise:

It was significantly faster than IE and FF. Much better memory management and crash handling via separate processes. Strict adherence to web standards. Seamless auto-updates that required no user intervention - ever. No admin privileges needed for updates. Clean UI that stays out of your way. Top-tier developer tools built-in. Very secure. Has there ever been a widespread instance where users were infected with malware from a Chrome exploit? I haven't heard of one yet.

Any competitor to Chromium needs to be significantly better than it for genuine reasons. Unfortunately, "not Google" and "privacy" aren't going to cut it for most average users to switch over. Unlike Meta, Google's reputation isn't in the trash so people still trust them.


> Chromium isn't successful because of the billions thrown at it. It's successful because it was significantly better than anything else during its rise

I think you're dramatically underestimating the importance that advertising and bundling had in the rise of Chrome. Every non-techie I've talked to about this basically uses Chrome because Google told them it was the best, and Google is the first internet page they go to whenever they go to the internet.


Google pushed Chrome hard for years on its sites, it's true. It used to be suggested in the footer of every YouTube results page ~10 years ago, and at the top when you visited Google search, Gmail, Translate, etc. It'd pester Firefox users by saying it's better, and, ironically, even users of the Chromium forks it's based upon.

IIRC some Firefox devs also accused them of tweaking sites like Youtube in particular ways that only affected competing browsers and made performance worse comparatively.


I wouldn't say it's the advertising and bundling so much as other factors I didn't mention. For example, Chromebooks in schools drew a lot of people to use Chrome and its ecosystem. Google Apps for Workplace/Education as well.

We've seen Microsoft throw money and bundling with IE and Edge, and it still hasn't done much. Even with being able to bundle Edge as the OS default - the biggest advantage anyone could ask for.


IE 3 and 4, compared to Netscape Navigator in 1998, were also superior and (relatively) bug-free. It didn't make a dent.

Then Microsoft bundled IE4, and killed Netscape (it took a while, but the unstoppable momentum was built with bundling/embedding).

Chrome is “bundled” with Google. Every Google search recommends it, and everyone uses Google. Same for YouTube which to this day works better and faster on Chrome. Android (80% user base at the time it happened) also pushed Chrome.

Chrome wouldn’t have become popular if it wasn’t good. But the market dominance did not come from being good - it was just a necessary condition for market dominance in a market already dominated by incumbents.


Google is not your OS, and a vast majority of users don't see the Chrome ad nor would know what to make of it. Many people still use Edge.


> Has there ever been a widespread instance where users were infected with malware from a Chrome exploit? I haven't heard of one yet.

Maybe not widespread attacks, but the Chrome team regularly see 0days being actively exploited in the wild, I imagine as rare and isolated incidents instead of mass pwning billions of users. For some exploits to work you have to visit a carefully crafted page that has the payload in it. And that's not easy to do at scale. You'd have to cajole millions (billions?) of people into navigating to a specific URL. Also I imagine you don't hear about these exploits because the actors would have good op-sec and keep all their data gathering secret.


Of course. But my point is that the old days, where you could visit a very reputable website and be infected with a drive-by exploit delivered via an ad, seem to finally be over.


He’s explicitly refuting that you need billions. Just because they did it that way doesn’t mean the next browser/chromium has to be that way.


> If you want to build another Chromium you need billions, since that's how Chromium was built.

Not even Google started from scratch. They started from WebKit and used their billions to overhaul, maintain and change everything from the engine to the renderer.

> But maybe you don't even need to build another Chromium.

That's what Servo said years ago. I don't see the progress or hype around that anymore.

Maybe Ladybird is different, but overall I'm very skeptical; an alternative browser that is not Firefox, Chrome or Safari needs a large, sustained multi-person contribution for it to work.


> Not even Google started from scratch. They started from WebKit and used their billions to overhaul

And WebKit didn’t start from scratch either, it started from khtml.

A few years ago there used to be 4 active browser lineages, now there are just 2.


It's a pity Opera just died. Something something patents was given as the reason they didn't publish the source code under some Free license.


> That's what Servo said years ago. I don't see the progress or hype around that anymore.

The Mozilla team didn’t intend Servo to be a fully functional new browser. The project was amazing as a Rust-only implementation of the basics of a browser. But ultimately, the utility of Servo was that it served as a place to rewrite modules into Rust for use in Firefox.

Incidentally, there were some good talks by the Firefox team about how they chose to select which modules to rewrite in Rust (since a complete rewrite with feature parity is prohibitively expensive).


> The Mozilla team didn’t intend Servo to be a fully functional new browser.

This is not accurate. Some parts of Mozilla thought that way, other parts didn't. And thoughts changed over time. For years Servo was officially an experiment, with many possible futures.

Had it succeeded in rendering the modern Web well enough, I think it could have become a fully functional new browser. Sadly, that didn't happen.

(source: I was at Mozilla at the time, and adjacent to the Servo people, who I talked to a lot)


Have all the innovative parts of Servo been incorporated into Firefox now? Or are there some modern GPU-based implementations or something like that which give it the potential to be faster than Firefox?


> If you want to build another Chromium

I don't want to build another Chromium. I want to build a web browser. I don't think every browser is required to be like or support as many websites as Chromium does. Note that competing with the popular browsers of today has never been stated as one of Ladybird's goals.


I would argue that rather than 'supporting many websites' Chromium has actually warped web development around its own goals. This is one of the effects that really turns me off of it. Google has pushed to make Chrome the OS of the internet.


> I want to build a web browser.

Mild nitpick but "web browser" and "rendering engine" and "JavaScript" are different.

Anybody can take WebKit and do some cool stuff on top in the UI and call it a "web browser".

At the end of the day, without patches to the HTML/CSS rendering engine, it's going to perform like every other WebKit-based browser, right?

https://en.wikipedia.org/wiki/Comparison_of_browser_engines

I know Chromium uses Blink and not WebKit now.

Looks like Ladybird is based on https://en.wikipedia.org/wiki/SerenityOS "LibWeb"

That's all just rendering. You need to hook a JavaScript engine up to the rendering engine as well (with glue to the DOM from what I know?)

Chromium is Blink + V8, right?

Does Ladybird SerenityOS Libweb also do JavaScript?

> Browser with JavaScript, WebAssembly, and more (check the spec compliance for JS, CSS, and WASM)

It does. https://github.com/SerenityOS/serenity https://github.com/SerenityOS/serenity/tree/master/Userland/...
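
For anyone wondering what that "glue" amounts to: the engine exposes native DOM objects to script through wrapper objects whose property reads call back into C++. A self-contained toy sketch of the idea (hypothetical names throughout; as I understand it, LibWeb's real bindings are generated from Web IDL and look nothing this simple):

    #include <functional>
    #include <map>
    #include <string>

    // Toy DOM node -- a stand-in for the engine's real node type.
    struct DomNode {
        std::string tag;
        DomNode* next_sibling = nullptr;
    };

    // The "glue": a wrapper the JS engine hands to scripts, whose property
    // reads dispatch to native getters on the underlying DOM node.
    struct NodeWrapper {
        DomNode* node = nullptr;
        std::map<std::string, std::function<DomNode*(DomNode&)>> getters {
            { "nextSibling", [](DomNode& n) { return n.next_sibling; } },
        };
        DomNode* get(const std::string& property) {
            auto it = getters.find(property);
            return it != getters.end() ? it->second(*node) : nullptr;
        }
    };

    // A script evaluating `element.nextSibling` ends up in code like:
    //   NodeWrapper wrapper { &some_node };
    //   DomNode* result = wrapper.get("nextSibling");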


Ladybird, as part of the larger SerenityOS project, is building the browser UI, layout engine and JavaScript engine all on its own. That's what I meant by "web browser".


"Listen man, that's great and all, but all I wanna know is, how slow are you gonna be with patching them there vulns?!"


> If you want to build another Chromium you need billions, since that's how Chromium was built.

This is akin to saying that if you want to visit the US from Europe you must build a wooden tall ship, secure a crew of sailors, pack several barrels of limes, and spend weeks sailing across the ocean.

That's how they got there the first time but the choices they made then were determined more by the information they had at the time (or lack thereof) and the technology they had (or lack thereof).

Re-implementing a browser, starting today, is a fundamentally different process from building one starting over a decade ago while the web was constantly evolving.


Hence the second sentence "[b]ut maybe you don't even need to build another Chromium." and then the second paragraph "cost-reducing" billions to millions day-dreaming of a specification-based pixel generator.

Of course, the n-th time is cheaper, easier, faster. Case in point: I implemented 'deon', a notation format for structured data [1], using your amazing "Crafting Interpreters", for which I paid nothing since I was reading the web version as you were writing it. I never had the chance to say thank you; somewhere in my drafts there is an email of appreciation. Reading your book and applying it chapter by chapter, crafting a final, useful artifact, has been a beautiful experience. Thank you very much for all your writing; I am a longtime reader of your technical and other texts.

[1] https://github.com/plurid/deon


> Hence the second sentence "[b]ut maybe you don't even need to build another Chromium."

Sorry, yes. I didn't intend to disagree with your entire comment, but just to remark on the first sentence of it.

I'm glad you enjoyed the book! :D


> If you want to build another Chromium you need billions, since that's how Chromium was built.

Reminds me of: "According to all known laws of aviation, there is no way that a bee should be able to fly. Its wings are too small to get its fat little body off the ground. But the bee doesn't know that, so it flies anyway."

I guess Kling & co. didn't get the memo :)


> a stable diffusion-like algorithm which ingests the HTML/CSS/JS specification documents and the requested website response and "simply" outputs the adequate framebuffer

haa, that's a fun idea! setting aside efficiency, though, neural networks aren't usually Turing-complete, so arbitrary JS isn't going to work. but, I could imagine building a very strict, minimalist browser engine (think XHTML and Scheme rather than HTML and JS), and learning a transformation between the two.

and for perf to be attainable, rather than an NN you could learn a bunch of syntax transformation rules between the two.


Precisely, fun is the operative word. As I was writing, I was thinking that perhaps you would still need to run client-side non-layout JS; however, "attention is all you need": it seems that there are some architectures for Turing-complete neural networks [1].

I wouldn't worry about performance: Nvidia breaks world records with H100 [2], Intel is going for 6 GHz processors [3], for performance you just have to be patient.

[1] Jorge Pérez et al., 2019, On the Turing Completeness of Modern Neural Network Architectures, https://arxiv.org/pdf/1901.03429.pdf

[2] https://blogs.nvidia.com/blog/2022/09/08/hopper-mlperf-infer...

[3] https://www.tomshardware.com/news/intel-teases-8-ghz-raptor-...


The neural network architectures in use are all Turing-complete; it's just that halting is a precondition of meeting their reward function.

It would be interesting to see if a search engine's worth of raw data, and enough training on Chrome's output, could build a JS interpreter. I'm skeptical but don't see why not in principle.


KHTML / Konqueror is the foundation of Blink / Chromium. It wasn't built in one day by Google. They forked an existing project, utilizing the (L)GPL. GNOME never had its own browser engine built from the ground up. Mozilla Firefox (or Phoenix/Firebird) was Gtk+ back in the day, Konqueror Qt. Later on GNOME got a native browser, but it was some silly fork anyway. Chrome benefits from the network effect and integration with other popular Google products such as Google Search, Gmail, and YouTube.


> If Linux could, I don't see why a new browser can't.

The comparison doesn't make sense.

Linux was not released in 2022, competing against 2022's Windows; it was released in 1991, competing with the technology at the time, and has since evolved and grown progressively, as the competition evolved and grew.

Releasing a new browser in 2022, competing against 2022's Chrome and Firefox, is a way bigger task (not that it's their goal).


I think their goal is to make a browser as usable as SerenityOS itself - that is, not for everything, not for the long tail, but for a big enough chunk of things to be fun to drive and hack on.


> competing with the technology at the time

The competition at the time was also cutting edge, the result of billions of dollars of investment over decades, and Linux was nothing. There's no reason your argument couldn't have been made in 1991. Why, specifically, would it have been a bad argument in 1991, since we now know it would have been?


> The competition at the time was also cutting edge

I don't deny that, but the feature gap was a lot smaller than it is now.

We're talking about a time where video games were made by teams of 5 people, not 500.

If the cutting edge technology is using a rock on a stick and you're using a rock, it's easier to catch up than if you're using a rock and they already have tree cutters working on nuclear energy.


Rocks and sticks are things that people find on the ground. Computing and operating system design was in 1991 a deep science/craft/industry that had been developed and implemented over many decades by thousands of people employing billions of dollars worth of resources.

Related: are you trying to make an argument that a small team or a single person shouldn't try to write a video game in 2022?


> The problem is that people don't want another Linux-like hobby project, they want another Firefox

I don't want that at all. Firefox lost its spine long ago, and at this point it isn't a Chrome clone, but it's pretty close.


Love it. Definitely following!


> If we start slowly and focus not on delivering a working product ASAP but on a nice codebase that's accessible for other developers to read, with carefully split modules so each part of the browser is a mini-project by itself, with clearly defined business logic and outward connections (GUI, APIs, drivers, etc.), it can be done.

My understanding is that this is exactly the opposite of how Linux was started. It started off as a barely working product on 386 only and went from there.


That's what I said.

It was a barely working product, but it was accessible enough for other developers to develop on and extend.


Right; the first part contradicts the second part and I’m not really sure what you’re advocating for as a result.


Your comment made me go and read the article and the history behind it all, which was a fun read. To the naysayers: we're Hacker News, FFS. If we don't encourage moonshots, who else will?


Armchair defeatist here: Browsers are huge, complicated beasts, the standards are a constant moving target, and websites won't uphold standards.

I wouldn't be surprised if 80% of the work spent on current web browsers is spent on maintaining compatibility and edge cases.


I wouldn't be surprised if 80% of the work spent on current web browsers is spent on maintaining compatibility and edge cases.

I don't really see anything wrong with a browser that doesn't fully support every website and every weird example of HTML and CSS that a developer decided to hack together. If a browser supports the standards well and does a pretty good job of rendering modern websites then I'd happily trade failing to render some ancient HTML for a web browser that was truly open and free and doesn't send swathes of data about me back to some corporation.

Web browsers are compatible with old HTML because we choose to make them that way. That is a choice. If the cost of having a free and open browser is a requirement to drop some backwards compatibility with old HTML then I think that's actually quite a simple choice. After all, if people want to view an old website then they can still use Chromium or something. It's not like people have to pick a single browser and stick with it.


> I don't really see anything wrong with a browser that doesn't fully support every website and every weird example of HTML and CSS that a developer decided to hack together.

Consider from the perspective of the user. They're using chrome and browsing the web - all their favorite sites work. They switch to a new browser and some of the sites stop working correctly. Is it the fault of the browser? or is it the fault of the site?

For most users, if site X works using browser A and breaks using browser B - then it is browser B that is at fault.

This is a lesson that Microsoft learned. https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost... There's no refund involved, but switching back to the old browser is certainly possible.


Serenity and Ladybird aren't written from the perspective of the user, though. They're written from the perspective of the developers, who write the features they'd like and that they'd like to learn more about. Or, as the Serenity website states:

> This is a system by us, for us, based on the things we like.


And funnily enough, this is exactly how Linux was born. It was literally started as "just a hobby, won't be big and professional."[1]

Nowadays it runs in almost every home at least in the developed world.

[1]: https://www.cs.cmu.edu/~awb/linux.history.html


Linux was inevitable, as there was no GNU-licensed kernel and there was a need for one. If not Linux, something else would have emerged within months.

A new browser is not inevitable, as the need for a free-ish browser is already satisfied.


Hurd was in development, and it still is, I guess. Hurd predates Linux.


Hurd stalled precisely because Linux appeared: there was an urgent need for a GNU kernel, but there was no urgent need for another GNU kernel.


This is correct, and it's why most open-source software will never have much in the way of users:

> They're written from the perspective of the developers

And I get it. A few years back I had an open-source project [1] get users and it was terrible. What had previously been a fun technical exercise became a pain in the ass that felt a lot like actual work. I was relieved when my hardware broke and I had an excuse to archive the project.

But that does create a huge gap that mostly gets filled by commercial interests.

[1] https://github.com/wpietri/sucks


They're using chrome and browsing the web - all their favorite sites work. They switch to a new browser and some of the sites stop working correctly.

There are a couple of implicit assumptions here that I think should be unpacked;

First, you're assuming that users have favorite websites, and that those sites wouldn't work. As I said in my post, so long as the browser worked with standards-compliant modern websites I think that would be fine ... and that would cover 99.9% of users' favorite websites. People don't carry on using old, unmaintained (or even maintained) sites for particularly long, and I don't believe people return to old websites that haven't been maintained very much. I don't think many users would even notice if their browser stopped supporting things that browser developers consider edge cases that eat up dev time.

Secondly, you're assuming that users who choose a browser like Ladybird wouldn't understand the choice they're making. Of course they would. They could still make that choice though. The idea of trading out-of-standards compatibility for increased privacy is actually really appealing to a lot of people. Hell, a lot of people see out-of-standards compatibility as a bug rather than a feature.


> First, you're assuming that users have favorite websites, and that those sites wouldn't work.

Both are almost true in my case. Browsers based on older Firefox versions, such as Pale Moon and SeaMonkey, don't work with GitLab and GitHub - which are not my "favorite" websites but rather are hard to avoid.

> I don't think many users would even notice if their browser stopped supporting things that browser developers consider edge cases that eat up dev time

That's not the issue. The issue is that some websites use the shiny new features - probably through some new shiny framework - that older browsers don't support.

And this is a loser's game for browsers built from scratch, because by the time they implement one missing feature, two others have been standardized. That will probably kill Firefox too.


> First, you're assuming that users have favorite websites, and that those sites wouldn't work. As I said in my post, so long as the browser worked with standards-compliant modern websites I think that would be fine ... and that would cover 99.9% of users' favorite websites.

I wonder then why developers test on multiple browsers... Is it because website developers code not to the standards per se, but rather to how their top two browsers behave?

That being said, I think this is a cool project and since the goal is personal satisfaction and usability, there isn't the pull to satisfy hundreds of thousands of users.


> I wonder then why developers test on multiple browsers...

Do they still? I find myself using only Firefox and I haven't had a Chrome-related bug for a loooong time. I do, however, avoid using the latest features and wait until the support is good enough.

I agree with GP, seeing a new hobbyist browser enter the arena is awesome, I need to check it out. I for one don't care if some pages don't display, I can still open them in FF. I just hope that corporation doesn't "happen to it" too soon...


There are certainly "favorite" websites. You could define those as "sites that I use for professional work and that track the latest features": GitLab, GitHub, Gmail, and so on.

There is also "Mom's cooking blog" running an ancient copy of WordPress, where Mom still doesn't understand that <b><i>This is important</b></i> is not nested correctly. Some browsers are able to handle this, but if you browse it with a "this browser only follows standards-compliant HTML" browser, you may get the entire page in italics or bold.

The article even acknowledges this - https://awesomekling.github.io/Ladybird-a-new-cross-platform...

> Fidelity of modern websites in Ladybird is steadily improving, but you’ll often see lots of layout and compatibility issues. For example, here’s Reddit right now:

> (mangled layout)

We could put Reddit in that list of favorite sites that need to work. And so, is it the fault of Reddit or the browser that the page isn't rendering right? As Reddit works on other browsers, surely it's the browser's fault and not something janky that Reddit is doing that the other browsers happen to work with.

And thus my post - if a page works correctly in a different browser but incorrectly on Ladybird, to a user it is always the browser's fault.


> so long as the browser worked with standards-compliant modern websites

Define which subset of standards would qualify as "standards compliant". Don't forget that quite a few standards (for example, HN darlings like Bluetooth) are barely drafts.


Then let that user use Chrome and be happy with it.

If the average user wants to lean on a corporation and use its product because of the reliability it offers, then that's fine. While the privacy concerns are there, I don't think it's wrong for a consumer to reach for a company's product to fulfill their needs.

Product mentality can be a problem in FOSS. People forget that all of this started as projects made by volunteers because it was fun for them.


There would need to be a compelling reason to use the browser other than it being "different." For instance, the user experience you describe is exactly what it's like to use Firefox. Many websites don't support Firefox, even big ones like Vanguard. And yet, I continue to use Firefox when possible because I know uBlock Origin will continue to work in the future and I value privacy.


Users often prefer it when sites don't work as the developer intended. Users like browsers that strip away adverts and that show undecorated content. If you make a browser that does this, you stand a chance against a browser that complies to the whims of corporations.


You probably mean the average user. The average user would not download this browser, so the user of Ladybird might be willing to live with the shortcomings. That being said, Firefox still exists and is a very good cross-platform OSS browser.


I get that it’s a chicken and egg problem but this strikes me as, at least preliminarily, being more about developers than users in a world where devs often no longer even test for compatibility with Firefox.


People focus too much on Google Chrome, and when they read "Open Source Browser" they instantly think of competing with Chrome.

Google Chrome is a product. But a new browser doesn't have to be a product. And doesn't have to compete with Chrome.

Linux as a project wasn't created to compete with other Unixes. And it wasn't deemed a failure when it lacked features compared to them 20 years ago


of course linux was created to compete with other unixes. it's literally baked into the name. competition transcends 'product' as you seem to be using it, which seems to be something akin to 'formal market offering'. linus meant to create an OS for a segment that he thought was underserved (OS hobbyists). it competed from the beginning with other OSes for mindshare and usage (but not necessarily for end-use). so you're really just distinguishing the segment served, not competition (other than to say it likely won't be competitive outside its niche).

more than that, i don't think this framing does you any good with the segment this browser is targeting, as developers still want others to recognize their (good) work. but that's fine, as there's plenty of time to develop the brand (and yes, brand also exists outside of being a formal product).


HTML isn't the complexity driver and is relatively static; CSS, with its myriad microsyntaxes, layout models, and insufficient specification, is, plus JS of course.


If the goal is to compete with Chrome/Edge/Safari directly then yeah some spunky team of ragtag volunteers isn't going to be able to match what the megacorps can bring to the table. Even Firefox struggles here.

If the goal is to have something that's usable in limited cases for a self-selected set of technically competent enthusiasts, then this is well within reason and not without value.

What I find particularly interesting is that there's an intent to support WebAssembly. That may turn out to be overly ambitious, but if they succeed there's a lot of potential upside to having another WebAssembly engine that's independent from the major browsers.


What's ambitious about supporting WebAssembly? It's very well-specified and minimalistic compared to almost anything in the web space.


Yes, they are. And even if the project fails (for any definition of failure), putting hundreds of people in contact with production-quality core browser code is worth it.

Also, lightweight browsers have been a dying breed for a while, if ladybird ends up as a more modern alternative to surf and Dillo, I'm already satisfied.


Surf[1] is merely a Webkit wrapper. If you mean NetSurf[2], it's actually doing OK in resource constrained environments, although it's been a while since a release, which is a bit concerning.

[1] https://surf.suckless.org [2] http://www.netsurf-browser.org


Seen it way too late to edit it seems, but you are correct, I meant NetSurf.


I'm fairly certain Andreas knows, considering he worked on Safari before starting Serenity.


Yeah the argument is just about the definition of "browser". Something that can pass Acid3 and render simple sites fine, or even complex specific sites? Totally doable.

Something that will work reliably on every site and support the gazillions of APIs that Chrome provides? Hell no. Even Microsoft threw in the towel on that. Mozilla barely manages it.

You can make a browser that people can use if they really really want to make a point of using your browser. You can't make one that the average person would choose.


What we need isn't to make one "that the average person would choose", since they already have that.

What we do need is to make one that you may choose if you want to be able to fully control the system (with fine customization, and interaction with other programs, including native code in C), one that is designed for advanced users who have read all of the documentation, rather than trying to hide things from the user or believing it knows better than what the end user specified, and also one that avoids all of the bad stuff that Google put in.


None of that stuff is about the browser engine. If you want all that you'd just use an existing engine, which is what people do.


It's insane that we let Google run the web and be the literal gatekeeper of it. Chrome is literally the gate you use to enter the web, and it's owned by Google. The solution is surely to make that monopoly illegal. After all, governments constantly block mergers that seem much more benign and less impactful (see the Amazon/iRobot acquisition) just because they involve money. This is far worse.


> Chrome is literally the gate you use to enter the web and its owned by Google.

I don't even have it installed and yet I'm typing this into a website.


Three quarters of the internet uses Chrome.


Firefox can't even keep up; it's not tested by developers. We sometimes use Roll20 to game, but things like blank screens make it unusable. I've had to just use Chromium and not worry about it.


Interesting, could it be plugins/extensions causing the issue? I've been using Roll20 with Firefox and with Edge for a while without any major issues.


Our last game was about 2 weeks ago... two of us hit the same issue with the screen not "repainting" and being black. No issue with the chrome browsers, so it's easier to just switch rather than bog the game down with debugging/diagnosis.

It's usually "ok" and I can deal with performance issues; but these little niggles just makes you want to not deal with it.


Did you have an ad blocker or other privacy add-ons installed? Many "website doesn't work in Firefox" issues are related to ad blockers.

OTOH, if the page is black and not repainting, that might be a graphics issue (in Firefox or the GPU driver). Firefox uses the GPU more than Chrome for rendering and video decoding, which can be great for performance but unfortunately means users can be exposed to GPU driver bugs.


>Did you have an ad blocker or other privacy add-ons installed?

How can you possibly use any web browser without an ad blocker installed?


It seems like there could be a nice middleman software project: something that translated the non-standards-compliant code on the web into something more standards-compliant, and then you could have many different browsers, each doing different things with the standard HTML it output. I wonder if such a tool is extractable from current projects? Disclaimer: I am not a browser scientist.


I would say that HTML5 changed the game a lot. There are serious standards now that go into deep detail about how something should work, leaving relatively little on the margins.

There are still loads of cross compatibility issues and performance differences based on certain features of course! But I would say the set of "sites that require weirdness" is going down, not up, every day


It is true, but it does not change the fact that it is a worthwhile effort just to force some level of differentiation on the browser scene.


I was happy to see them state that influence over their browser cannot be bought. Mozilla taking $400 million+ a year from Google is the elephant in the room when it comes to browser competition. Not to mix metaphors, but this would give us a large competitor not beholden to Google, the gorilla in the room.


Much of it is the other way around: browser compatibility drives the development of sites. Everybody developing a site needs to test on the major browsers before releasing. It may take a while for Ladybird to make the list of browsers to be tested before release, but it can get there.


> Armchair defeatist here: Browsers are huge, complicated beasts, the standards are a constant moving target, and websites won't uphold standards.

I think that's the irony here: the standards are pretty much set in stone and just grow incrementally. Some old cruft is beyond obsolete, though. On the other hand, nobody uses most of the bleeding-edge APIs. But JS (and the part that is actually in use) is indeed growing at a solid pace.


One nice thing about html/css/javascript is that a lot of it works even when some of it doesn't. It's not an all (or even almost all) or nothing proposition.


> the Ladybird browser, based on the LibWeb and LibJS engines from SerenityOS.

So, the project does _not_ cover rendering web pages and tracking the HTML, CSS and JS standards. Those are separate projects.

What remains, then? I guess that would be the user interface and user experience.


> So, the project does _not_ cover rendering web pages and tracking the HTML, CSS and JS standards. Those are separate projects.

> What remains, then? I guess that would be the user interface and user experience.

To be clear, it uses a novel rendering engine (LibWeb) that the creators of Ladybird develop; it's just that the rendering engine has its own name and exists as a separate library.

This is not just another skin around webkit or chromium.


They're "separate" projects only in name; Ladybird is under the SerenityOS organization on GitHub.


But SerenityOS was started 4 years ago...


Doesn't Ladybird let you work on LibWeb and LibJS from a Linux angle though?


Yes. LibWeb and LibJS were written for SerenityOS. As mentioned in the article, some work was done to make them buildable and usable on Linux in a headless environment. The Ladybird project is then taking that work and adding a Qt GUI on top of it.


Exactly. This isn't a commercial product that must be shipped yesterday because of competition and so as not to anger investors; just go ahead and do things; the journey is a big part of the reward. The guy is amazing! Should we some day have an analog of the Nobel Prize for software development that benefits people, he should be among the first 10 to receive it.


Look at the Turing Award; it is known as the "Nobel of computing."


The ACM Turing Award is for computer science, not building software. The ACM Software System award is more relevant here.

https://awards.acm.org/software-system/award-winners

For example, Richard Stallman (GNU) and Chris Lattner (LLVM) have both won the software system award, but it seems unlikely for them to be considered for a Turing award.


WebKit and Blink trace their roots back to KDE 2.0, after all.

The web was a bit simpler back then, but still...


Too many people forget WebKit was KHTML initially... no, Apple didn't start it.


This is encouraging because I too plan to make my own browser engine.


As someone who has been and still is working on one of my own for many years now (and at this point it's far behind even his efforts), I think having the time to do it is the biggest challenge.


Yes. With me it's motivation and energy. I'm not forcing it, though. This is for me, and it's currently a solo project, so I don't feel pressure to ever finish it. It's still worth it because I really enjoy doing it.


Maybe you could continue developing Kosmonaut? https://github.com/twilco/kosmonaut


I mean, there is a degree of regulatory capture that goes with standards bodies. The big boys keep making the ladder taller and that ultimately makes it harder for new people to join because they don’t follow the standard.

I am still sad that XHTML Basic failed, as it was an attempt to create a legacy-free spec that might be feasible on embedded devices and phones. It got some adoption, and then the iPhone ran full Safari, and that was the end of that.


You can make "a browser", but it's impossible to make a modern one that supports all sites without those human resources. Even Fabrice Bellard said it's too difficult.

I know it isn't a pretty thing to hear, but someone in this comment section has to say it.


> it's impossible to make a modern one that supports all sites without those human resources.

Then let's not - death to irregular HTML, and let the era of lean browsers begin!

Yes, I know, Martian Headsets (https://www.joelonsoftware.com/2008/03/17/martian-headsets/) - but considering how much of a contemporary web browser is about supporting irregular markup and other grandfathered hacks, deliberately scoping them out is an exciting approach to a possible non-RFC793-compliant future. It won't fly with the mass market, but it's worth exploring by people who don't care about the mass market - and the marketing will come later, once someone with no money and a device to ship figures out how fast such a browser can be with ridiculously low resources!


For sure. There's also a big question of what "support" means.

If rendering is usable but not pixel-perfect, then maybe that's fine. Especially when what's being rendered is in categories like "visual fluff" and "needed for advertising".


As much as I love personal projects, if you want something to survive it needs industry support; that's how Linux and Python are still going.

Who's going to support a browser compatible with stuff from 2011? There are companies like Ekioh that make niche solutions for embedded stuff, but there's nothing open source.


The specs are pretty good though in that most of the handling of mangled HTML is actually defined in the spec. So while the specs are enormous, simply implementing exactly what they say to implement can get you most of the way there.
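
For example, the mis-nested markup from the "Mom's cooking blog" example upthread has a single, spec-mandated recovery (via the adoption agency algorithm), so every conforming parser must build the same tree. Roughly, as I understand the tree-construction rules:

    <!-- input -->
    <b><i>This is important</b></i>

    <!-- the recovered tree, serialized: the stray </b> closes both
         elements in the right order, and the trailing </i> is ignored -->
    <b><i>This is important</i></b>

(If text followed the stray </b>, the parser would even reopen the <i> around it.) That used to be classic undefined-behavior territory where browsers diverged; HTML5 nailed it down.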


> You can make "a browser", but it's impossible to make a modern one that supports all sites

So what? Interested people can have multiple browsers installed.

Not even Firefox is supported by all sites, in which case I dust off the Chromium I keep installed for just such situations.


I spent a few years trying to build my own browser engine ~15 years ago, when engines were much, much simpler.

You cannot make a usable, modern engine without such huge investments.

But it’s a cool hobby project.


You should quote the second line too. Otherwise it sounds like that's something you're adding and the author disagrees with your sentiment.


Came here to post that exact quote. Truly inspirational.


Honest question, why not contribute to Firefox instead?


Because it’s fun and rewarding to write it yourself, and it means you have experience with and control over it.


It's a cool pet project, but not realistic. Even Mozilla can't compete. There isn't anything "new" to be done that will give a browser an edge over Chromium.

And the elephant in the room: regardless of adoption, it will never be on iPhone.


> realistic

They are doing it though; it's "real." Perhaps you are setting goals for them that they are not themselves setting?

> can't compete

You actually can read about their goals on the linked page, though. They are not attempting to "compete" with Chromium.

> iphone

So don't use iPhone, and use a phone that allows you to use the browser engine you want to!


> They are doing it though; it's "real."

Obviously the grandparent meant "realistic" as "not a toy implementation". As in, non-computer-literate people can use it without complaining about broken sites.

The first 80% will take 10% of the time; the last 20% will take 90% of the time.


That’s irrelevant, as it’s not a goal of the project.


Ok, but then why get excited about it? I mean, it's neat, and Andreas is a great programmer who can make stuff at amazing speed.

There are lots of tiny quasi browsers lying around.


It’s exciting because Andreas is very inspiring and motivating, and also because the entire SerenityOS codebase is very understandable and usable, written in what is basically a nicer dialect of C++. Just a browser would not be interesting, but a browser combined with a full, exciting monorepo OS is.


> Even Mozilla can't compete.

Apparently they can't, but they still do. Typing this from a Mozilla browser.

> realistic

Having looked at the project and followed a few SerenityOS update videos, if there is one word I would use to describe Andreas Kling and his contributors, it is that they are realistic about what they do and do not do.


> Typing this from a mozilla browser.

Same, but I doubt I'll continue to do so in the far future unless something major changes. Mozilla seems to be inching closer and closer to being just another Chromium.

Like Google being forced to remove Chrome integration.


> Mozilla seems to be inching closer and closer to just another Chromium.

Not sure I understand this. I mean, Chrome drives the feature set and performance, but it's not like Mozilla is switching to Blink/V8, AFAIK. Competition is good. Even the proliferation of WebKit/Blink browsers is fine by me.

I run into the occasional site that does the block-by-user-agent of ye olde IE days, and the odd niche commercial product used by my company that only works in Chrome or Safari. And honestly, I hit more of the latter kind that only work in IE, where I have to RDP to an old Windows Server host to use them.

I like the Firefox UI better, and it doesn't get as sluggish as fill-in-your-chrome on my Mac. Now, that's "feel" and not measurable. Safari's a lot snappier, but too sparse for me. To me, browsers and OSes are like shoes. Pick what's comfortable to you. There's not a "right" answer.

Addressing the thread more than you: This project never states it's trying to slay the giant. The whole argument is weird. I'll play with it once I can without compiling (I'm lazier than I used to be). It'll be fun. Probably not a daily driver, but interesting.

And who knows? KHTML was a broken (and oft-neglected) toy and now dominates the world via Blink/WebKit. We may all bemoan Ladybird in 20 years.


> but it's not like Mozilla's switching to Blink/v8 AFAIK

For now. If they are willing to fire their Servo team and MDN staff on a whim, why not fire the engine team and outsource it to Chromium/Blink?

> Pick what's comfortable to you.

That's the issue. I like Firefox, but I fear they'll soon replace their shoes with a model that is two sizes too small for me.

> And who knows? KHTML was a broken (and oft neglected) toy and now dominates with world via Blink/WebKit. We may all bemoan Ladybird in 20 years.

Only after two big corps (Apple and Google) injected huge amounts of cash and developer man-hours into it.


That's a problem of internal governance, not technical impossibility.


It's not just internal governance, though that definitely didn't help.

Google has been caught sabotaging browsers based on user agent.

In addition to its strong marketing machine.


Well, what is it trying to be "realistic" about? Does the announcement mention anything about taking over other browsers?

I suggest reading https://justforfunnoreally.dev/ .


Mozilla competes just fine IMO

- a developer who uses firefox 99% of the time


Mozilla's market share and lack of income beg to differ. They will be broke in less than 10 years, and will either dissolve or reskin Chromium. Browsers are now mobile tools; that's where the users and the money are.


Even if we assume that is true, Mozilla already has a mobile version that not only works very well but allows for things that mobile Chrome doesn't; the idea that a reskinned Chromium would work better than Firefox itself is... weird. There are two relevant phone OSes, owned by Google and Apple, and both companies ship their own browsers, so Firefox will never get wide default usage on phones regardless of the quality of its mobile browser, which I already believe to be extremely competitive.

There is no winning for Mozilla going down this road, other than maybe going the Firefox phone route, and we all saw how that went.


Or maybe the truth in this is that Firefox is a sinking ship and Mozilla is kept on life support, unable to make money outside of Google (a direct competitor), which is eating their lunch.

I'm afraid there is no route to sustainability for Mozilla at this point.


The armchair defeatist appears


> Even Mozilla can't compete.

Maybe if Mozilla tried to compete, instead of all the irrelevant activities they waste their energy on.


realistic? compete? new? iphone?!????

you are lost.


We need another web browser. I just hope it really solves problems rather than being just another alternative, like OpenOffice. More power to you.


It's frustrating to see so much nationalism in this day and age.


Do you consider any instance of taking pride in your country - including small remarks pointing out trivial coincidences - to be nationalism? Do you disagree with any example of recognizing the existence of a nation-state and its relationship to individuals?

I tried to phrase these questions without snark, but your comment genuinely surprised and confused me. Maybe I'm misunderstanding. I do see how OP's comment is a bit nonsensical in light of the author's Swedish origins, but I don't think this justifies your invocation of a term with such negative connotations.


Pointlessly injecting a remark relating the birthday of a project to the "great day" of the 4th of July is absolutely America-centric and presumptive. By extension, I view it as a definite case of nationalism.


You are correct. Pride should not be confused with nationalism.


I think the "backlash" is more for the presumptive and dismissive America-centric view of the world.


> presumptive and dismissive

You’re injecting dismissiveness, again a word with very negative connotations, into a simple mistake.

I would be more inclined to take your point if it weren’t a cheerful, good-natured attempt at humor - nothing more.


We detached this subthread from https://news.ycombinator.com/item?id=32809994.


There's nothing wrong with that. In fact you could say it was nationalism which motivated the country's greatest achievements.


[flagged]


I would very much use this browser for everything if I could, just to be different.

That said, I'd prefer if they had used plain C, which is even more portable.


Wait until you realize how much of the bank’s software is written in low level languages like C or C++…


Which is horrible, but at least their software is behind a firewall and not used to execute random code from the Internet all day.


* "lower-level" / "system-level" / "middle-level"


People have done web/mobile banking on Java/Dalvik and ObjC for years. Why would this be an exception?

First, blaming a tool is lazy. While there are problems with C/C++, much of this is due to practices and code complexity, and there are tools to help us humans with both of those things. Additionally, do you trust CPython? Why is that (or any other language implemented in C) better than C? Have you audited that code yourself?

Second, if the developers are as diligent about testing this new browser as they have been with everything else they've done, I would trust it as much as I do Google's Chrome (where I don't trust Google at all) or Apple's Safari (I feel them to be only slightly better than Google).

Third, and perhaps most importantly, if it doesn't work for you that is completely fine. It's not your computer or your data. Why denigrate the project and those who use it or work on it?


yeah, that's why I only surf the internet using the ??? browser


Chrome and Firefox are browsers where literal billions of dollars and hundreds of developers work to ensure that the worst fuck-ups that follow from being written in C++ are mitigated.


How much of that is due to clinging to old standards and legacy ways of doing things? If I look at my own code, which tracks recent C++ standards quite aggressively, between C++11 and C++20 some parts have enjoyed more than a 10x reduction in size thanks to the better abstractions, which definitely reduces the number of bugs and the development investment needed. I also don't remember the last time I had a memory leak/error that my IDE didn't show me inline in the code as I was typing, thanks to clang-analyzer. Ladybird starts from these 2022 best practices - I'd wager it can reach its goals with much less time and engineering than it would take Chrome or FF with their (inescapable; no one wants to let good code go) legacy baggage, both social and technical.
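
To make that concrete, here's a toy sketch of the same filter-and-sort intent written both ways. The Node type and function names are made up for illustration (nothing from any real engine), and a few lines obviously can't demonstrate a 10x reduction - only the flavor of the newer abstractions:

    #include <algorithm>
    #include <ranges>
    #include <string>
    #include <vector>

    struct Node { std::string name; bool visible; };

    // C++11-era style: hand-rolled loop, then a separate sort step.
    std::vector<std::string> visible_names_cxx11(std::vector<Node> const& nodes) {
        std::vector<std::string> out;
        for (auto const& node : nodes) {
            if (node.visible)
                out.push_back(node.name);
        }
        std::sort(out.begin(), out.end());
        return out;
    }

    // C++20 style: the filtering reads as a declarative pipeline.
    std::vector<std::string> visible_names_cxx20(std::vector<Node> const& nodes) {
        std::vector<std::string> out;
        for (auto const& node : nodes | std::views::filter(&Node::visible))
            out.push_back(node.name);
        std::ranges::sort(out);
        return out;
    }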


While this is certainly true, two browsers being written by megacorps doesn't exactly make me eager to use them. I would much, much rather use software made by people who are passionate about the project than by people who are just clocking in and clocking out.

For those who think Mozilla isn't about money:

https://en.wikipedia.org/wiki/Mozilla_Corporation


And this is a hobby project that isn't ready for real-world usage yet, and they're planning to port it to a memory-safe language long term. It's not really a fair comparison, and it's not like this is a project with commercial ambitions anyway.


I wonder if it'll stay C++ because of momentum or switch to the aforementioned home grown memory safe Jakt like much of the rest of Serenity is expected to.


This is something I want to know too. C++ certainly has some issues, like pretty much every programming language, but in the original article Andreas mentions that he wants to make a cross-platform browser, and cross-platform, portable applications are one of the areas where C++ excels. If they end up switching to Jakt, I fear this will greatly limit the platforms that Ladybird can run on.


Jakt transpiles to C++


It's kind of interesting that it currently transpiles to C++. I suspect, though, that using C++ as an intermediate language will often result in lower performance, higher memory usage, more complicated debugging, etc. than compiling directly to a lower-level target like assembly or LLVM IR. If I'm right about that, those wouldn't be very good characteristics for a programming language you intend to build a complex web browser in.

In zamadatix's reply to your message, he said that C++ is only being used to bootstrap the language and that it's certainly not the plan to require it forever. I imagine that if Jakt thrives, he will be right, and Jakt will probably start compiling to a lower-level target instead. If it does, though, I imagine Jakt will be less portable than C++, which has been around much longer and is more established.


Only to bootstrap the language; it's certainly not the plan to require that forever.


So what language do you propose they should use?


Sigh... same trash UX/UI as every browser out there. :(

Get a good UX guy to gain an edge over the current offerings.


It follows SerenityOS' philosophy of using the best ideas from 90s interfaces so I wouldn't expect them to make changes to the UI.


You could submit some defects or documentation to the open project. It is 100% community driven.


Grey on black. Some people never learn.


It's great that this browser is now developed independently of SerenityOS. But I'm not a huge fan of the Qt dependency. You might as well use Qt WebEngine Widgets at this point.


It doesn't look too hard to remove that dependency, but then you'd have to write your own cross-platform GUI software, which might be out of scope. Might be a fun exercise though.


Why does it have to be cross-platform? At this point Ladybird does not compile on Windows anyway. Also, the target audience is probably mostly using Linux.


And what windowing toolkit do you think it should use on Linux? GTK? Meh. Qt is fine, and using QtWidgets is pretty far from depending on QtWebEngine.


None? Talking to bare X11 is actually pretty simple.
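
For a sense of scale: a complete Xlib client that opens a window and pumps events is about twenty lines. A minimal sketch, assuming only libX11 (build with c++ demo.cpp -lX11):

    #include <X11/Xlib.h>
    #include <cstdio>

    int main() {
        Display* dpy = XOpenDisplay(nullptr);   // connect to the X server
        if (!dpy) {
            std::fprintf(stderr, "cannot open display\n");
            return 1;
        }
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 800, 600, 0, 0, 0xffffff);
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);                   // make the window visible

        for (;;) {                              // the blocking event loop
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                // a browser would repaint its viewport here
            } else if (ev.type == KeyPress) {
                break;                          // quit on any key press
            }
        }
        XCloseDisplay(dpy);
        return 0;
    }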


Don't forget that you're also expected to support Wayland nowadays on Linux.


Who is expecting Wayland support besides Red Hat and Collabora?

And yes, talking to Wayland directly is extremely cumbersome and unnecessarily complex. That's why nobody does it.


Talking to bare X11 is one of the worst ways to conform to the XDG desktop spec.


Well, the goal from the article is obviously to make it cross-platform; it's even in the title ;). To achieve that, they just wrapped it in Qt. To replace that, you'd have to achieve parity for the (admittedly smallish) subset of Qt that is being used. Linux would be a good place to start, but I'd assume the eventual goal is to expand to other platforms as well.


It seems to compile on macOS and Android already, so not being on Windows (yet) doesn't really sidestep the need for a cross-platform GUI/events library.


Yep. Not sure where Qt stands now, technically. But that's because when you go to download it, you have to click through five screens threatening that you shouldn't use the LGPL version and should pay instead. I don't like being threatened, so I don't check out Qt.

It's gotten to the point that it's banned at my job, because we're too small to keep lawyers on retainer to protect us if we use the LGPL version, and we do too few GUI jobs to justify the licensing model of the commercial version...


If your framework of choice can implement a network request manager and a timer, and has an event loop, it's quite trivial to make Ladybird run within that framework, especially after the recent changes made by Andreas in this video: https://www.youtube.com/watch?v=S8lXroxngYo
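
As a rough sketch of what that porting surface might look like - the names here are hypothetical, not the actual LibWeb/LibCore API - it boils down to something this small:

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical host-integration surface: an event loop, timers,
    // and a network request manager. A Qt port could implement these
    // on top of QEventLoop/QTimer/QNetworkAccessManager, a GTK port
    // on top of GLib's main loop, and so on.
    struct HostServices {
        virtual ~HostServices() = default;

        // Run one iteration of the host's event loop.
        virtual void pump_events() = 0;

        // Invoke `callback` once, after `milliseconds` have elapsed.
        virtual void start_timer(int milliseconds,
                                 std::function<void()> callback) = 0;

        // Fetch `url` asynchronously and hand the body to `on_done`.
        virtual void request(std::string url,
                             std::function<void(std::vector<std::uint8_t>)> on_done) = 0;
    };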


I was at first excited: a new browser that isn't Firefox or Chrome. But then it said Qt, and that made me lose interest immediately.


They only use Qt to display onto the screen and to get user input (and also for network requests, but that's subject to change).


As a web developer who has to test websites across multiple browsers, having to support another browser sounds expensive.

While it's fine to create new languages and platforms, attention should be paid to the costs this imposes on the developers who now have to support the new stuff as well as the old.

One example I bring up is Gradle. The jump from Ant to Maven was huge; Maven was so much better than Ant that Ant basically disappeared from my purview. But when Gradle came out, it was not so much better than Maven that it caused a sea change in the Java community. Now I see a split: some projects use Maven, some use Gradle, and some post their instructions in both. I have to become an expert in both, and I don't see this changing any time soon. So now the Java community has to pay this tax going forward. It's no one's fault - the intentions of the folks who created Gradle, or who adopted it in their projects, were good, and projects do need competitors to keep them motivated. I offer no solutions here, only gripes.


>> As a web developer who has to test websites across multiple browsers, having to support another browser sounds expensive.

The problem is testing M browsers against N websites - M*N complexity, with M tests being done by each website. It should be M browsers tested against a single conformance test, plus some means of testing N websites against the same conformance test. Browser compatibility should not be your problem. Once people start accommodating non-conformists, there is NO incentive for them to conform.


This is the biggest failure of the web as a platform. If it had been extremely strict from the beginning - "display an error and nothing else when the website is broken" - then people would actually care about standards.


This implies we should still be using the web like we did in the '90s. Evolution happens in stages and is never perfect nor free from error.


But web browsers are big and bloated because they have to render older websites too, like government websites that haven't been updated since the '90s.


Bad start: C++. We already have Blink/Gecko and WebKit in C++; there is no disruption here. To make a real difference, this new web engine should not force people to use those grotesquely complex and massive C++ compilers.

To bring something really interesting: plain and simple C (not the Linux kernel's GCC C dialect) with, if required, some assembly that does not abuse its macro preprocessor.


>The browser and libraries are all written in C++. (While our own memory-safe Jakt language is in heavy development, it’s not yet ready for use in Ladybird.)


So it is like Servo, written in Rust, at Mozilla...

Not good omens.

Hopefully, Jakt is written in plain and simple C...


Jakt is bootstrapped with Rust, and they're working on making it self-hosted.


They passed the self-hosting milestone a few months back, IIRC.


So, I need a Rust compiler to compile Jakt, which will compile their web engine, which does not really exist currently - only forks of C++ code. To bootstrap the Rust compiler, itself written in Rust, I would need static binaries of that compiler.

The first words that come to my mind are "convoluted and expensive SDK".

That said, I have not checked on Rust syntax lately to see whether it became insane like C++. Last time I had a look it was not, but that was a long time ago.


As other commenters mentioned, the Jakt language has been self-hosted as of a few months ago. If you look at the project's README and source code, you can see that it currently transpiles to C++. It's also only been around since May of this year.

LibWeb very much exists and is not a fork of anything. It lives in the SerenityOS project and is entirely green-field C++.


So in the far future, Jakt will be compiled to machine code by the Rust-written Rust compiler (or a bootstrapped Jakt-written Jakt compiler) and will generate native code for a LibWeb and LibJS that can browse YouTube and Disney+, all with an SDK that does not pull in a grotesquely and absurdly complex and massive C++ compiler.

I think I got the picture now. Thx.


I don't understand why choosing Rust or C++ would be a bad omen for a project. Not everyone prefers C.



