The Ladybird browser project (ladybird.dev)
625 points by defied 10 months ago | 284 comments



It's been so inspiring to see him and his crew of hackers build a new, independent browser from scratch. I must admit I didn't think it was possible on this small scale in terms of man hours and funding.

However, the thought has also crossed my mind if we're finally seeing fruits of browsers being better standardized on "95%"+ of the popular features -- and if writing a browser today is in fact easier than both writing AND maintaining a browser a decade back. While the web is of course still evolving, it feels more "settled in" than 10-15 years ago.

There's also the factor that past developers didn't have the more complete roadmap set when they initially planned browser design, but now we have huge amounts of web standards already there AND also know how popular they got over time, i.e. what to prioritize to support a modern web. One might superficially think there's simply more of everything, but I also think there are ideas that can be discarded. Just imagine that Internet Explorer had XSLT support, and FTP was common once upon a time!

It would be interesting to hear more about their own thoughts on these topics!

Edit: My bad; XSLT is still supported by all major browsers, but it's a rarely used feature stuck in limbo at XSLT 1.0. So it's probably among those things that can be safely omitted for quite some time.


> It's been so inspiring to see him and his crew of hackers build a new, independent browser from scratch. I must admit I didn't think it was possible on this small scale in terms of man hours and funding.

Thanks jug! I'm super proud of all the folks who have worked on it with me :^)

> However, the thought has also crossed my mind if we're finally seeing fruits of browsers being better standardized on "95%"+ of the popular features -- and if writing a browser today is in fact easier than both writing AND maintaining a browser a decade back. While the web is of course still evolving, it feels more "settled in" than 10-15 years ago

This is definitely true! I've worked on browsers on and off since 2006, and it's a very different landscape today. Specs are better than ever and there's a treasure trove of tests available.


It is not any easier, because we still have a monopoly running the show, only it's not called Microsoft anymore.

If anyone threatens Google's position, they can literally throw money at the problem, invent some overcomplicated standard, implement it in Blink, and have the competition chase them. It doesn't need to go through the W3C either: if it works in Chrome, all web developers will adopt it, and any smaller engine will necessarily have to support it or risk losing whatever little market share they have left.

Having control of the internet now is of greater strategic importance than it was 20-30 years ago when Microsoft was king of the hill.


This is where Apple's grip on the iOS browser engine choice paradoxically comes in clutch.

It is conceptually despicable, especially for devs, but it prevents Google from completely running the show.

Now the European Union is coming after Apple without trying to rein in Google's influence... This seems short-sighted.


> European Union is coming after apple.... seems short sighted

I think you have a fundamental misunderstanding of the goals of the EU in this matter. The objective is not to keep both companies on even keel, but it is merely about enforcing existing anti-monopoly laws. If this results in Google gaining a de-facto browser monopoly then those same laws can be used to break up that monopoly when we get to it.

What would be the alternative in your opinion? Allow Apple to break the monopoly laws in hope that they will be able to rein in the growth of the Google browser? What good is a law if it will not be enforced?


> What would be the alternative in your opinion?

Force Apple to allow alternative rendering engines, but only the ones with <50% market share. This would promote diversity in rendering engines without giving what is already by far the most dominant rendering engine the opportunity to get more of a stranglehold over the market.

People already mistake Blink-only APIs like Web USB, Web Bluetooth, Web MIDI, etc. for web standards. The market is already dangerously close to where it was 20 years ago, where a vast number of web developers treat the most popular rendering engine as if it’s synonymous with the web. Handing more opportunities for market share to Blink is playing with fire.
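
As a concrete illustration (a minimal TypeScript sketch, not tied to any particular site), the responsible way to use these APIs is to feature-detect them rather than assume they exist everywhere; the navigator properties below are the ones the WebUSB, Web Bluetooth and Web MIDI proposals define:

    // Minimal sketch: feature-detect device APIs instead of treating them
    // as guaranteed web standards.
    function reportDeviceApiSupport(): void {
      const hasWebUsb = "usb" in navigator;                // WebUSB
      const hasWebBluetooth = "bluetooth" in navigator;    // Web Bluetooth
      const hasWebMidi = "requestMIDIAccess" in navigator; // Web MIDI
      if (!hasWebUsb || !hasWebBluetooth || !hasWebMidi) {
        // Gecko/WebKit users land here for at least some of these APIs;
        // offer a fallback instead of a broken page.
        console.log({ hasWebUsb, hasWebBluetooth, hasWebMidi });
      }
    }

Sites that skip this kind of guard are exactly the ones that end up "Chrome-only" by accident.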

You want Apple to allow Gecko or Ladybird? Sounds great! But the monopoly that threatens the web at the moment is Blink.


Another thing the EU could do which would help is to prohibit cross-promotion of browsers, as Google has quite aggressively done since Chrome’s inception and Microsoft is returning to doing with Edge.

That means no more “download Chrome” prompts on Google search and YouTube, no pestering people to install Chrome when tapping links in Google iOS apps, no bundling of Chrome in installers of unrelated software (very common on Windows), etc.

Some sort of rule against favoring one’s own browser in web apps (as has happened with GSuite and YouTube on multiple occasions) would also be nice but unfortunately strikes me as unlikely.


The main issue with the IE monopoly was that it was an absolute trash browser, and MS wouldn't update it. Chrome is actually good and gets updates regularly.

It would be nice if there were more alternatives, but I can't really expect anyone to spend the resources necessary. I don't think crippling chrome for some kind of sense of "fairness" is a productive solution.


> Chrome is actually good

I'd like to take issue with this statement... it's spyware, and it's atrocious to modify. You can't slim down a Chrome browser, and it can force you to do whatever it wants to...

It's actually really, really bad...


How is that a law?

That's just playing sides.


The rule is clearly based upon the ability to influence the market that the players have, which is exactly what is appropriate for an act to improve the state of the market. The rule is not based upon the identity of the players - nor is it just a proxy for that - so it’s not “playing sides”.


Playing sides for the small guy is fair.


Blink isn’t dangerous, Chrome is. You can have a privacy focused Blink browser.


You fail to get it. YES, Blink indeed is dangerous, because it's controlled by Google: they dictate what they allow and what the next things in the browser will be, and they basically force everyone else to follow up with features they implement. They have way too much control over browsers with the large market share they have. It has very little to do with privacy. I use Brave myself. Using Blink as a rendering option is cool, and it's cool that it is open source, but it does not change the control Google has over browsers. People could fork it if they disagree with the things Google does, but unless they are as big as Apple, the fork would fade into meaninglessness and would never even come close to influencing web standards...


This is far worse than the "you can just re-skin Safari" on iOS nonsense. Blink is just another tool to force cash cow $$updates$$ onto users who didn't want or need them.


> it is merely about enforcing existing anti-monopoly laws

It's never ever that simple in politics, and this is about a new law (Digital Markets Act). Ask yourself:

- How were all the terms in the Digital Markets Act determined?

- Which players or initiatives were able to scoot by unnoticed (e.g. Chrome's grip on web standards)?

- What is the interpretation of what constitutes a gatekeeper? (note: it's actually 'gatekeeper', not monopoly, we're talking about).

- Who will decide whether Apple's changes are in compliance?

It can be true that their aims are generally to have more fairness in the market, but there's always going to be other factors at play in this kind of legislation. Note that the DMA was written in a way that allowed Europe-based Spotify and Booking.com to avoid being labeled gatekeepers.


Spotify has been losing money from day 1. It also would not even make the US top 50 for tech companies, by revenue.

Booking.com is registered in the US...


Ah, you're right; though Booking.com started in and is headquartered in the Netherlands, it's part of Booking Holdings, based in the US.

Spotify is definitely no money maker. That said, its market share alone would/should put it in the crosshairs of gatekeeping laws.


>Now the European Union is coming after Apple without trying to rein in Google's influence... This seems short-sighted.

At the very least, if the EU really wants to limit the tech giants' grip on the web, it needs to fund independent open source web engine development handsomely; their pockets are more than deep enough and projects like ladybird and servo can use the extra resources.


> if the EU really wants to limit the tech giants' grip on the web

You're misunderstanding the goals of the EU. The EU only cares about limiting their grip as a result of their main goal. The main goal of the EU is to protect the rights of their citizens, which they do by enforcing existing anti-monopoly laws.

The EU has in principle no problem with a company gaining giant market share by providing a successful service or product. There would be no problem if Google or Apple would gain 99% market share by everybody voluntarily choosing their product. But it is a problem if they then use that marketshare to make it harder for competitors to compete.

I.e.: you're wrong to say that they're trying to minimise the 'grip' they have. They are merely trying to prevent companies abusing that grip.

That being said, I'm pretty sure that the EU does actually fund a lot of open source development.


The Digital Markets Act is a new law

https://en.wikipedia.org/wiki/Digital_Markets_Act


If only Mozilla hadn't run Firefox into the ground (effectively), we wouldn't need to rely on Apple.


As I still use Firefox, I have to agree with this.

The performance improvements are nice, but trying to make Firefox another Chromium skin irks me. Plus I still miss the official compact mode.


> without trying to rein in Google's influence

That's the problem. They should go after Google too. Honestly all of these megacorporations should be broken up.


The current dispute between the EU and Apple has nothing to do with Safari though and is about the Apple store, not sure how that's relevant?


No, they're also forcing Apple to allow other browser engines on iOS, which were previously banned. Other third-party iOS browsers still used the Safari WebKit engine.

> In addition, apps that use alternative browser engines — other than Apple’s WebKit — may negatively affect the user experience, including impacts to system performance and battery life.

https://www.apple.com/newsroom/2024/01/apple-announces-chang...


Every app may negatively impact system performance and battery life. A better webview could positively impact system performance and battery life. This statement from Apple was made in bad faith, and it didn't fool the regulators. It definitely shouldn't fool technologists.


It could improve performance, but let's not kid ourselves that there are many companies that care about the minutiae of battery life as much as Apple. I mean, have you used a Windows laptop recently? It's -so- much worse than a MacBook that I refuse to believe it's all about the m* magic chips. Sleep, power cycling, prioritisation just all seem to be better implemented.


I haven't, but I have used a Linux system recently, and the experience is far better than a Mac. Even the regulators can see that Apple doesn't care about battery life and performance so much as it cares about the billions it extracts from Google for the search engine deal. Allowing better browsers means that fewer people will be stuck on Apple's inferior browser, and Google will pay correspondingly less to access them.


Is that really true? I’ve tried a number of laptops with Linux and the sleep and power management always still seemed terrible. What do you have? I borrowed a Lenovo x something, and a Dell. Does it matter which brand because of drivers and firmware etc?


I use Chromebook laptops. Good touch screen for productivity, good power management, and good application support. I remote into beefier machines for computationally intensive tasks if I'm not at my desk. On desktop workstations, I use Debian.


It is my understanding that Apple losing here would prevent them from enforcing WebKit as the sole iOS browser engine. I may be mistaken.


It'd be nice if Apple allowed Firefox to use their own engine, while not allowing Google's engine. But I can't imagine Apple being that nice.


> if it works in Chrome, all web developers will adopt it

This is why we, tech nerds who understand the problem, must resist monopolies: object to using such APIs. Chrome wouldn't be in quite this position if, instead of embracing the monopolist, more techies had warned their non-techy friends and family away from it, like they did with IE.


"Tech nerds" built web sites that only worked in IE back then and "tech nerds" are building websites now that only work in Chrome. Didn't have a clue back then, and don't have a clue now. Forget warning "non-techy" people and clean your own house first.


Ethics go out the window so long as the flow of shiny new features remains unimpeded. The only reason devs turned against IE is because Microsoft got complacent and essentially abandoned it, letting Gecko and WebKit steal the spotlight… a move that Google is smart enough to know to not repeat.


We warned people that the government was snooping on everything you transmitted or received.

They didn't listen or care.


Many took tech jobs to help further that cause. Perhaps unknowingly?


I have been rather loud about this, and while I don't think my voice has meant much, I think I have been part of a growing group of people who - together - are making a dent in this monopoly.

Becoming part of it isn't hard either; if you test software, whether as a SW engineer on a team or as a full-time tester, it might actually save you some time:

- If you use Mac, test in Safari first. On any other platform, install Firefox (or the Debian version, or Librewolf) and use it as the first tool to test applications.

- If it doesn't work (and the customer hasn't very explicitly said they absolutely only care about Chrome or IE^CEdge) report it as a bug.

I mean, seriously, who would have accepted a feature that only worked in IE 6?

---------

OK, some people might say: but IE 6 was an old and outdated browser, you cannot compare IE 6, or any version of IE for that matter, to Chrome.

Or one might say: Chrome has already won, your idealism is appreciated, but you are too late.

Well, here is the thing: IE was at one point in almost the exact same position as Chrome is now:

- biggest browser by far

- endorsed (or even enforced) by IT

- lots of features only worked in IE. (I remember one particular customer who seemed to be obsessed with security to the point where we had to keep a VM with Windows XP and IE 8 around, with both ActiveX and Java Applets enabled, to sign into them. This was around 2014..! Yes, if you find this notion of security absolutely ridiculous then we agree.)

---------

OK, one key difference:

Back in 2006 when I started fighting IE we had Mozilla on our side. Firefox was innovating like crazy. We had extensions that let us embed IE in a tab to render certain web sites. We could automatically archive a full website for offline access (full rewrite of links so they worked on our copy was included). Full developer tools that everyone knows from every browser these days started out as just an extension to Firefox, named Firebug IIRC.

Today, while I understand that the extension API had to be reined in before a disaster happened, it went way too far, and we cannot even get a function in the API to programmatically remove the top tab bar when we add a tab bar on the side. And not only that, but if someone asks about that particular issue, someone will come and hush and hide the comment.

So godspeed to Ladybird devs and Orion devs, Librewolf devs and actually even Safari devs and everyone else who challenges the current monopoly!


Laziness and inertia are the reasons monopolies exist, yet when you inquire individually, everybody has their own inane reasons to keep the status quo alive.

Elsewhere in this comment section someone literally said they use Chrome because Reddit does not work in Firefox, which is utter nonsense.


This is a tragedy of the commons, rather than simple laziness. Choice of messaging apps is the same. I may know that the world would be a better place if everyone ditched WhatsApp and used Signal, but I want to schedule tennis matches today, and the value of an open source future is distant and heavily discounted by the knowledge that it is low probability, even if I do my part.


> object to using such APIs

Doesn't work since the corporation can just fire you and replace you with someone who has no such objections.


The problem ultimately isn't Google pushing new "standards", since plain HTML and other existing ways of doing things that work in many more browsers still work; it's the trendchasing web developers who somehow feel the need to make sites that only work in Chrome, and the propaganda Google puts out to encourage that behaviour.

It's almost as if backwards-compatibility is seen as something to be avoided in certain web dev communities. Lots of "drop support" and "moving forward", zero consideration for simplicity and interoperability.


Counterpoint: A lot of these newer features are only used on a small fraction of websites; things like WebGL are only used for games and tech demos, and most websites work just fine without it.


Google's control of the internet is exaggerated. Amazon is not dependent on any web browser to reach clients; they can just launch their own app and be done. And yes, they can do it on all platforms.

How is Instagram dependent on whatever Google decides? How are Facebook, TikTok and Snapchat? How are streaming platforms dependent on Chrome? Big businesses with a ton of users easily launch their own apps to sidestep Chrome. And if Chrome makes fundamental breaks of HTML and CSS, then most of the internet is going to be broken in Chrome.

If your bank website stops working in Chrome, they're not going to change their website. They're going to ask you to install another browser.


The biggest companies might be immune, but anyone else's incentive is quite high to use Chromium (Electron) to build that app on.


I can guarantee that if a bank website stops working in Chrome they will be instantly working around the clock to fix the website!


Or not? I remember when banks required you to have IE long after nobody was using IE. I remember when banks showed an alert against using Safari long after it became the most common mobile browser. Banks also have their own apps and are not dependent on Chrome.


That was in a time when expectations were low and all the banks were pretty equally crap. There are better choices now, and even my mum would be pissed off if her bank website didn't work in her current browser.


Would she be pissed off at A) The bank? B) The browser? C) The computer? D) Her internet provider?

People can change browser more easily than changing banks. They can even have multiple different browsers on their machine, and one for only doing banking.


For many people, the icon on the desktop they click isn't important. They just know when they click that icon, it brings them to "the internet". If she's one of those people, my guess is she blames the bank.


> if we're finally seeing fruits of browsers being better standardized on "95%"+ of the popular features -- and if writing a browser today is in fact easier than both writing AND maintaining a browser a decade back.

A decade back, maybe... but decades ago the number of things you had to support was just so much smaller even if you only looked at HTML! Consider https://www.ietf.org/rfc/rfc1866.txt vs https://html.spec.whatwg.org/multipage/

Writing a web browser was hard in the old days for a lot of reasons, but trying to write a full-featured one today is a huge undertaking, and we're still adding a bunch of new features all the time and expecting browsers to support them.


> Consider https://www.ietf.org/rfc/rfc1866.txt vs https://html.spec.whatwg.org/multipage/

I thought, oh, that's not so bad. Then I realized what I was looking at was a 10 page index.


> if writing a browser today is in fact easier than both writing AND maintaining a browser a decade back.

Probably not. Yeah we have web standards and some idea of how to architect it, but the total set of APIs and HTML/CSS features a browser supports is probably changing faster than the Ladybird team can actively implement it. The API surface is just impossibly large compared to 10 or 15 years ago. Look at all of these: https://developer.mozilla.org/en-US/docs/Web/API

And that doesn't include the updates to Javascript, MathML, SVG, HTTP-based security features, encryption or media support.


It's a lot of work but most of it is very doable for several reasons:

- Standards are really detailed at this point, and they are a large reason why the three remaining browser engines (Chromium, Safari, and Firefox) largely do exactly the same things.

- There are a lot of open source components. It's not necessary to start from scratch on things like wasm and JavaScript interpreters, for example. There are some nice low level graphics libraries out there as well. And of course things like Rust are now pretty mature, and there's a lot of Rust code out there that does stuff that a browser would need.

That being said, it's one hell of a hobby project to take on, and I don't see much economic value in an independent implementation of something provided for free by three independent browsers already, two of which are open source.

Which begs the question: why?!? Is there a qualitative argument here for doing the exact same thing but somehow better?


In the case of Ladybird they build everything from scratch (intentionally) so existing open source code cannot be used.

The main argument for doing it is because it is fun, just as with SerenityOS. Having alternative implementations is never a bad thing for web diversity though.


Of course, alternatives are almost never a bad thing, and devs should feel free to work on whatever floats their boat when they're volunteers, but developer resources are scarce, and there are other things in the FOSS realm that could use some attention where there really aren't great alternatives. But again, if these people prefer to work on this, that's OK. Just because the world could use a better X doesn't mean these devs have enough interest in X to be effective at building such a thing in a volunteer capacity: in my experience, having personal interest in a project makes you much more productive than working on something you really don't care about.


Not to mention the number of possible vulnerabilities that will need to be pentested and fixed.


The web as a platform keeps getting better. As someone who's been developing for it for a living since 1998, I'm delighted to see and to share things like "Interop 2024"^1 and Web Platform Tests ^2, which are improving the adoption pace and reliability of key platform features:

1. https://www.webkit.org/blog/14633/get-ready-for-interop-2024...

2. https://wpt.fyi


It also helps that there are tests

https://web-platform-tests.org/


Indeed. These may be even more important...

https://github.com/tc39/test262


IMO most of the complexity of modern browsers is in all those strictly optional, app-like features. PWAs, service workers, JS JIT and all that stuff. If you want to build just a hypertext viewer, not a full-fledged OS/application environment, just leave them out and nothing of value will break. Things also become substantially simpler if you aren't looking to make your JS execution as performant as theoretically possible (and you don't really need to with how most websites actually use JS).


It would be a bummer IMO to see XSLT abandoned. It's still a really interesting approach to the web and an alternative to today's JS-heavy client rendering.

The idea of being able to ship an XML template with basic logic like for loops alongside the actual XML data source is really unique today. If people cared to use XML at all, that really does cover quite a few use cases for which we currently reach for JSON + client JS.
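
For anyone who hasn't seen that model in action, here's a rough sketch (the book-list XML and stylesheet are invented for the example; XSLTProcessor is the standard DOM API that browsers still ship). In practice you could skip the script entirely and point the XML at the stylesheet with an xml-stylesheet processing instruction:

    // Sketch: render XML data with an XSLT 1.0 template (note the for-each
    // loop) entirely client-side, using the browser's built-in XSLTProcessor.
    const xml = `<books>
      <book><title>SerenityOS Internals</title></book>
      <book><title>Browsers from Scratch</title></book>
    </books>`;

    const xsl = `<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/books">
        <ul>
          <xsl:for-each select="book">
            <li><xsl:value-of select="title"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>`;

    const parser = new DOMParser();
    const processor = new XSLTProcessor();
    processor.importStylesheet(parser.parseFromString(xsl, "application/xml"));
    const fragment = processor.transformToFragment(
      parser.parseFromString(xml, "application/xml"),
      document,
    );
    document.body.appendChild(fragment); // a plain <ul> of titles

The server ships data, the template loops over it, and no JSON endpoint or client framework is involved.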


It's really quite incredible that one guy basically started a project to create a whole operating system from scratch for fun and to give himself something interesting to do, and then accidentally created one of the most viable new browser engines in a decade or two...

I've been watching the development videos for a year or two, and the speed that this has progressed in such a short time is unbelievable. Now they have multiple volunteers and enough sponsorship to pay more than one developer, it's pretty exciting what could happen here!


He is a world expert on Web rendering, and an extremely capable C++ developer. One of their success recipes is to code up the various specifications directly, which is - today - the best way to go about this. They are also heavily test-driven.

He did not even use the C++ standard library, when he says "from scratch" it includes his own string class, for better or worse, which is fine since it's "just for fun", "to learn" etc. And just when you think a library and an OS are crazy, he announced a browser and a JavaScript engine on top. Then a JIT compiler, then Jakt, their own novel programming language, because neither C++ nor Rust is what makes him perfectly happy.

More than his expertise I admire his modesty and kindness - unlike Linus etc. he is not full of himself, and each of his videos gives lots of credit name by name of who did what. A perfect role model for open source.


I think their C++ library is one of the reasons they can create capable software so quickly, actually. They have jettisoned just a tonne of C++ nonsense and added some really nice modern features, such as how they handle memory and errors.

Also, you would think that having to implement EVERYTHING themselves (they are making their own image decoders, as an example -- including SVG) would slow them down. However, as it is written in a mono-repo from soup to nuts, this allows them to very rapidly add support throughout the stack. They do not have any of the endless conversation and arguing that happens between components in Open Source. They do not have to work around things missing upstream. If they need something, they add it.


This project reminds me of the approach at Xerox PARC in the 1970s. Alan Kay wrote [1]:

(At PARC we avoided) putting any externally controlled system, in-house or out, on one's critical path. ... Thus, virtually all the PARC hardware ... and software ... were completely built inhouse by these few dozen researchers.

This sounds disastrous, (because) in programming there is a widespread first order theory that one shouldn't build one's own tools, languages, and especially operating systems. This is true --- an incredible amount of time and energy has gone down these ratholes. On the second hand, if you can build your own tools, languages, and operating systems you absolutely should because the leverage that can be obtained (and often the time not wasted in trying to fix other people's not quite right tools) can be incredible.

1. http://www.vpri.org/pdf/m2004001_power.pdf


When would you (or anyone else) say it's best to consider doing everything yourself from scratch? For example, I want to build a little ARM server. I'm realistically going to use a Linux server or similar, as I don't want to make my own OS. But if I'm undertaking something on a microcontroller, there's definitely a point where bare metal starts winning. How do you find that point?


I would say: when the existing offerings completely prevent you from doing what you want to do, or require ugly workarounds that are not consistent with your goals, or when future changes in those dependencies might compel you to do extra work just to keep your own system running.

One more reason to start from scratch: to get the functionality you want from an existing offering, you would also have to include a lot of other stuff you don't need, resulting in unnecessary complexity and resource consumption.


I think you misconceive how open source projects need to work. Some projects (especially those in the web-dev niche) might view their relationship with upstream the way you do. But others do not.

For Ardour, we feel entirely free to just bring an upstream library into our source tree if we need to. And we also have our dependency stack builder that configures and occasionally patches upstream libraries to be just the way we need them. We do not wait for upstream adoption of our patches.

Most recently, for example, we became aware of impending moves by various Linux distros to remove GTK2, which we rely on. Even though we don't support distro builds, we want Linux maintainers to still be able to build the software, so we just merged GTK2 into our source tree.

This idea that using a 3rd party library becomes some sort of constraint is very, very far from reflecting universal truth. If we need to hack a 3rd party lib to make it do what we need, we just do it. Meanwhile, we get all the benefits of that lib. Ardour depends on about 86 libraries - we would be insane to rewrite all that functionality from scratch.


> they are making their own image decoders as an example -- inclding SVG

Considering the vast amount of exploits that continually comes out of media decoders everywhere, this basically guarantees I will never ever use this browser.


Have you looked at how it's implemented? The image decoder is completely separate from the main browser and is in a sandboxed process (with restricted syscall and filesystem access). If the image decoder is exploited, there's nothing the attacker can do.


Where can I find more details on how this sandboxing works?

Edit: Seems like it's using OpenBSD's pledge API? https://www.youtube.com/watch?v=bpRw6KQnY0k&t=8107s


Look at how many of the past big exploit chains on iPhones, Chromium etc involved media decoding at some point in that chain.

It’s like crypto, you have to be very deliberate with your choices, and it’s generally ill-advised to roll your own.


That advice has context. Do not roll your own if the feature is not your core product offering. So don't roll crypto if you're not selling crypto. If it is your core offering (and media decoding is absolutely a core offering of a web browser), you should choose carefully whether to get it off the shelf or roll your own.

Otherwise how would new/better stuff ever get built?!


If Apple and Google can’t even find all the vulnerabilities in their libs, how on earth would a scrappy team of a few devs, especially since media decode isn’t the sole thing they’re focused on?

> Otherwise how would new/better stuff ever get built?!

The problem here is that people are salivating to use this as their daily driver. When WireGuard was still in development, everyone got told in very strong terms to not use it in any setting that required actual security.

Browsing the web at large is sort-of hostile by default.

Ladybird is a great project, and I hope it keeps developing, but any user that thinks their media decode libraries will be bulletproof libs free of vulnerabilities is nuts.


> If Apple and Google can't even find all the vulnerabilities in their libs, how on earth would a scrappy team of a few devs

Perhaps a few devs have nowhere near the required escape velocity to create vulnerabilities before they can be fixed, nor the pressure of PMs to ship substandard code?


> but any user that thinks their media decode libraries will be bulletproof libs free of vulnerabilities is nuts.

Sure. And it's a high bar to match, or better, the vulnerability profile that the established players have. But a "small scrappy team" which is capable of doing everything this team has done certainly garners a lot of confidence that the bar is reachable.


Apple and Google are big corpos, and those are legendary for their inability to make anything properly. It has been a while since they were small and could move fast... So no, I would not take them as a standard.


That's fine, I'm sure they weren't targeting only you when they developed it. So it will still have utility for the developers of the project and other users.


Let's be dismissive of bad security practices; I'm sure it'll work out fine.

There would be absolutely nothing sacrificed by using open source, well-tested libs for image decode.


I don't think that's a healthy mindset to have.

Just because something is widely used doesn't mean it's more secure (example: libwebp). The security issues tend to happen mostly when creating optimizations that bypass the "obviously secure" way to do things, but rely on an internal state that gets broken by another optimization down the line. This is way less frequent in "smaller" projects, just because they didn't need to go through that round of optimizations yet.

For this question specifically, tho, I think Ladybird is extremely interesting in terms of creating a security-focused C++ project. Between the constant fuzzing that each part of Ladybird already undergoes (courtesy of the OSS-Fuzz initiative), the first-class citizenship of process separation for the different features of the browser, the modern C++ used in Ladybird (that prevents a lot of the issues cropping up in commonly used media decoding libraries), the overall focus on accuracy over performance in library implementations, and the compiler-level sanitization utils (UBSAN, ASAN) that are enabled by default, I think it's less likely that a critical security-impacting bug would exist in Ladybird's .ico support than in WebKit's, for example.


If something is bad, we should be trying to rewrite it. Will you not touch WireGuard because OpenVPN is full of holes?


If massive companies like Google and Apple can’t even find all the vulnerabilities, how are you expecting a scrappy team to?

Don't get me wrong, how far they've gotten is very laudable, and as an educational exercise it is really cool, but it starts being a pretty massive risk if users start using this as a daily driver.


Google and Apple are just a bunch of scrappy teams trying to work together on insanely massive and bloated code bases. Numbers of bugs scale with lines of code.

A small scrappy team writing simple and concise code from scratch is likely to produce fewer bugs than enterprisey monstrosities.


>He did not even use the C++ standard library, when he says "from scratch" it includes his own string class, for better or worse

I can't imagine how it can be for worse; the standard C++ string library is awful. It makes perfect sense that a super-talented C++ dev would make something better if they have the energy and time.


The C++ library used in Serenity is much nicer, saner, and more modern than the STL


> then Jakt, their own novel programming language, because neither C++ nor Rust is what makes him perfectly happy.

TBH Jakt defaults to reference counting, which makes it compete more with Swift and Go rather than C/C++/Rust.


As far as I can tell Jakt's reference counting is not optional. So it may be closer to Swift.

That being said, I've seen a few people here suggest it's easier to use Rust's Rc and Box for everything and treat it like Haskell or Scala. So it might not be so different in practice.


Wait, what. Doesn't Go have a proper garbage collector, or do I need to get reeducated?


Yes, it does have garbage collection; I wasn't trying to imply it uses reference counting.

(Though to be fair reference counting can be considered a rather rudimentary way of doing garbage collection.)


> One of their success recipes is to code up the various specifications directly, which is - today - the best way to go about this.

Would love to read more details about this, or have a link to a video where he describes and follows the process.


Isn't this a recipe for disaster from security perspective?


Wasn't this guy working on the WebKit team at Apple for years before?

It’s still really impressive but he is not the new kid in town when we talk about developing browsers.


Yes, he was.


He was probably their top guy too. Sort of like how Lucifer was the top angel in Heaven before his drug problems brought about his fall from Apple and since then he's been a true light bringer working tirelessly to give us awesome software as he recovers from addition.


> as he recovers from addition.

Divide and conquer for the win!


I mean Lucifer writing software tracks


"It takes 20 years to make an overnight success."


I can’t tell from the link. What makes it so viable?


Compared to a lot of other new browser engines, this one actually renders a lot of web content decently. And if you follow their update videos, they improve their coverage really quickly.

Ladybird also comes with their LibJS runtime, which has good coverage of the JS standards and even manages to implement some new features before the big browsers all get to it.


Sorry, could you explain what this means "this one actually renders a lot of web content decently."?


When you open sites on other "new" browser engines you typically get a really butchered visual result, with layouts completely broken, elements missing, wrong colors, etc. For example, Servo didn't support floats until recently, and IIRC even simple sites like Hacker News look "wrong".

Ladybird's approach has been to start with a somewhat naive implementation of features, then choose popular websites and apps and just continuously iterate to make them gradually look better, by fixing the parts that stand out. This pragmatic approach means that their supported feature set, while nowhere near 100%, can decently render 90% of websites due to being aligned with the most commonly used features.


I can't relate / do not recognise these claims of incorrect rendering; is there a resource out there that shows images of how it's supposed to be vs what it looks like? I thought this was a problem of the past, IE compatibility with web standards kind of thing.


For example, here's the BBC homepage in Firefox, Servo, Ladybird, and NetSurf: https://i.imgur.com/kCReCPd.png

Here's Wikipedia: https://i.imgur.com/IshNWU2.png

Ladybird implements far more web technologies than more well-funded, longer-running alternative browser projects.


Which browser are you talking about that renders everything correctly? Are you using a Servo-based browser? Is there even a Servo-based browser that someone can easily download and use?

Servo themselves say they only pass 55.8% of tests[1]. This thread[2] says Servo didn't support SVG as of Nov 2022.

[1] https://wpt.servo.org/

[2] https://old.reddit.com/r/browsers/comments/z2d7pr/servo_base...


> I thought this was a problem of the past, IE compatibility with web standards kind of thing.

For the mainstream browser engines, yes, but if you're starting a browser from scratch the amount of stuff you have to implement is massive and cannot be implemented in the span of even a couple of years.


just download a no-name browser and see for yourself


I learned about the project from the co-recursive podcast episode. Fascinating story. https://corecursive.com/serenity-os-with-andreas-kling/


What could happen? Why is a new browser engine needed?


I have hopes it will become a daily usable browser. A new Web engine is great. I hope Servo succeeds at this too.

I would consider contributing but development is coordinated on Discord and I avoid proprietary software… [1]. It's a shame. Can't blame them though, they are doing it for fun.

[1] https://drewdevault.com/2022/03/29/free-software-free-infras...


Wow - that article was a tough read. I like a LOT of what Drew has to say, but this seems over the top.

He claims that authors promoting their open source software on channels like Twitter, Hacker News, LinkedIn or even GitHub is "selfish and unethical outright":

> Many projects choose to prioritize access to the established audience that large commercial platforms provide, in order to maximize their odds of becoming popular, and enjoying some of the knock-on effects of that popularity, such as more contributions.

> To me, this is selfish and unethical outright, though you may have different ethical standards.

I find zealotry like this tough; it promotes a definition of FOSS that feels hostile to those who want to simply build something cool and share it with the world (or, even more controversially, make money from FOSS).

Given such a strong view, it's really surprising that he then posts stuff like "Can I be on your podcast"[1] to try to promote Hare - his programming language.

He didn't ask for podcasts that aren't distributed on platforms like Spotify or Apple Podcasts. In fact - he's right there with several appearances promoting Hare.

That feels like hypocrisy.

[1]: https://drewdevault.com/2023/11/09/Can-I-be-on-your-podcast....


> or even more controversially, make money from FOSS

I doubt Drew is against making money from FOSS. He actually runs a business (businesses?) around FOSS.

I don't think it's controversial to make money from FOSS anymore. FOSDEM just happened, many companies making money from FOSS were there, and they are liked. Some specific ways of making money might be less appreciated, but not the whole concept.

People are not silly, they know money helps develop (free) software and also many would love to be paid to work on free software.

> That feels like hypocrisy.

No, that feels like living in an imperfect world and trying to make it better. To improve something, you generally need to be part of it and its imperfections.


> No, that feels like living in an imperfect world and trying to make it better. To improve something, you generally need to be part of it and its imperfections.

Right, but that's the hypocrisy, no? He's being rude about people who use GitHub, or post an article on HN, but surely most of those guys are doing just the above. When is it OK to use non-free software and when not? Maybe there's a dividing line you can draw about "platforms".


Are you talking about Drew or "those guys", whoever they are?

Let's focus on Drew. I've not seen him mention HN, which by the way doesn't require running non-free software, and he literally runs a free software competitor to GitHub. I've not seen him be rude to people using GitHub. He certainly strongly criticizes them.

I assume he uses GitHub to communicate with projects hosted there. If he does, I don't think he could be blamed for meeting people where they are. He is not arguing about this; he is arguing against hosting free software on proprietary infrastructure and strengthening it instead of helping strengthen the free software ecosystem. Which he doesn't do: he doesn't host his projects on GitHub.

He could boycott GitHub to make an even stronger point, but I believe that isn't practical at this time when you are part of the open source community. And running a whole GitHub competitor is way more than most people do for this cause. Accepting to reluctantly use GitHub (or Discord, or whatever) and spreading the word against its use is not contradictory.

Hypocrisy would be telling people not to use proprietary infrastructure to manage your free software project, and then hosting on GitHub.

Specifically about podcasts, because that's what people seem to take issue with here: podcasts are usually hosted somewhere else, in addition to Spotify. Historically, podcasts are handled with RSS feeds; there's nothing more standard and open than this. It would be wrong to force people to use Spotify to hear his podcast, but that's not the case. He should also accept being hosted on Spotify. When you are spreading ideas, you should want to reach people who are not yet as aware of your cause as you are. If you stay outside the world you criticize, you don't reach people inside it. And more importantly, he didn't mention Spotify at all; in particular, he didn't say "please host me on Spotify". Podcast ≠ Spotify.

I see hypocrisy nowhere.

Activism is hard, you know. You often need to make compromises for your activism to be effective. Nobody is perfect. Should you wait to be perfect before doing something for your cause?


I'm struggling to see the distinction here.

Sure, Podcast ≠ Spotify, just as Git ≠ Github. But people choose to distribute their code on Github for exactly the same reason the people choose to distribute their podcasts on Spotify - reach.

That exact reach is what Drew argues so articulately against - in fact he expressly calls out marketing on Twitter and Facebook as a "mistake", and damaging against the FOSS community. He goes on to encourage people to prefer open infrastructure with lesser reach, even if that comes at the expense of effectiveness:

> Such projects would prefer to exacerbate the network effects problem rather than risk some of its social capital on a less popular platform. To me, this is selfish and unethical outright.

It's hard to see how that same argument doesn't extend to promoting your software on podcasts which are primarily distributed via Spotify, Apple Podcasts, etc.

On the topic of Activism, I think I'd agree with you, if he was on Spotify podcasts promoting other "Free" podcast platforms.

But he's not - he's promoting a programming language.


I'll grant this to you.

My opinion is that he didn't explicitly ask for the podcasts to be on Spotify or Apple. That Spotify and Apple Podcasts are the main ways of consuming podcasts is not of his making. And maybe he requests podcasts not to be hosted on those platforms. Maybe not.

But I can see how you may find that there can be some contradiction here.

To me this would be a "you still need to be part of this imperfect world" thing, or an "imperfect activism" thing, but I would totally understand someone disagreeing with this / finding that it's not coherent.


You're right, he mentioned Reddit not HN. I was speaking loosely when I said "being rude" - he criticizes them.


Now, I wouldn't be shocked to read someone consider HN as a proprietary platform.

> I was speaking loosely when I said "being rude" - he criticizes them

Ok, we are on the same page. The distinction is important to me :-)


> No, that feels like living in an imperfect world and trying to make it better.

Fair call.


Podcasts are actually distributed via RSS. Proprietary platforms pick them up, but they're actually one of the few forms of media that the public widely consumes via open standards.


> you may have different ethical standards

Isn't this the exact opposite of zealotry? Zealotry is imposing your ethical standards on others.


You may be right, Zealot may not be the right word here.

But after reading that article, I was definitely left feeling judged because I choose to do exactly the things he's talking about, for exactly the reasons he's suggesting. Maybe that's on me, but I certainly felt his standards imposed on me, disclaimer or not.

Also, given the conviction with which he argues in the article, that disclaimer feels a little weak -- kinda like when someone says "No offence, but... <very offensive thing>".


It's at least a form of _casual_ zealotry. "You may have different ethical standards" is clearly the author's passive-aggressive way of saying "your ethics might not be as righteous as mine", as opposed to "reasonable minds may disagree".


I certainly read this sentence as your second option. It may depend on the tone you imagine for Drew's sentence.


> Zealotry is imposing your ethical standards on others

It’s even more specific than that though isn’t it? A fanatical belief in a single cause to the exclusion of all else.


Yes. "fanatical" is key here.

The commenter somewhat retracted their use of this word in a sibling comment, but it seems important to me that we don't confuse strong views with zealotry. Drew's views are certainly very strong.

Strong views can be rational and well thought out. I even believe they are often the ones that can push the world to a better place. Usually you can even argue with someone holding strong views if they are rational (unless the person is bad at communication / is an asshole, of course that's possible). Strong views can shake you up and are not always enjoyable.

Zealotry is just plain irrational and dangerous and there's no way you can have a constructive discussion with a zealot.


You can promote the software, as well as mirror the code of the software, on multiple services. (Unfortunately, while the code and documentation can be mirrored, the discussions usually won't be.)

Requiring the use of proprietary software to access and discuss it is a problem, and requiring complicated software is not that good either, but it is also possible to use open protocols with multiple software.

(In the specific case of GitHub, they had previously allowed viewing files without needing JavaScript; that has changed now, but the data is included as JSON data within the HTML file, so I was able to write my own much shorter script to substitute for theirs. Of course, that does not help much if you do not have that script, but you can still use the git protocol to download the files, or use the API (the form for creating a new repository has stopped working on my computer, but I have been able to do so by using the API).)

One thing they do not mention is NNTP, which I think can be a helpful alternative to mailing lists (although you can also have multiple interfaces for the same messages).


You might be interested in GitHub's cli tool, which is open source, if you want to access GitHub without running their proprietary JS code.

https://cli.github.com/


Promoting free software through unfree software IS selfish and hurtful to society.

Just because you don't necessarily have a solid counterargument to his convictions doesn't make anything he said "over the top." That's just a disingenuous dismissive attitude towards what is clearly a post on his personal website that builds on established and clearly communicated values (freedom of software).

There's absolutely nothing in that article that criticises making and sharing free software. It is clearly criticising using a certain type of medium to share free software. If that's zealotry, then any argument against doing anything is, too.

I wager that the hostility these views provoke is a projection of guilt, devoid of actual criticism of said views or values. In fact, I'd argue that having no opposition towards a certain ethos, then opposing it for frivolous reasons such as personal offence taken at a public blog post, is as close to hypocrisy as one can get.


> Promoting free software through unfree software IS selfish and hurtful to society.

Well, it's a tough call. I agree that communicating through them strengthens them because of the network effect. But if you never reach "unaware" people with your ideas where they are (on those platforms, that is), you are not really helping either.

So it's not clear using those platforms only hurts. It could be a net win, all effects taken in account.

In any case, I agree that you should not force people to use these platforms to follow you.


I think HN is proprietary.


Indeed. I'm fine with HN though. I'm not the one running the non-free code. One can browse it and participate in it without running any proprietary software. It works without JS, and the JS code is trivially small anyway. There are open source clients too. That's a pass for me. The day this changes, you won't see me here anymore.

That would correspond to the NonFreeNet antifeature in F-Droid [1].

They could update the code they released for good measure though [2].

Running Discord is on another level for me. I would consider accessing a Discord using a Matrix or IRC bridge.

[1] https://f-droid.org/docs/Anti-Features/#NonFreeNet

[2] https://github.com/wting/hackernews


So you'd be against running a modern commercial video game on your machine, but you'd play it if it was running in a Google datacenter and transmitted the rendered pixels to you via Google Stadia?


No. I would call this process "open source laundering".

Actually, that's kind of an issue I see with those bridges and why I'm not totally comfortable with them. Now, if the bridge is run by the people who set up the closed communication tool in the first place, that's a grey area. That makes them run an open protocol / standard with proprietary software, which is better than nothing. I'm okay with having to reach a proprietary network with some free software. Should I join an XMPP network run with a proprietary implementation, I probably wouldn't even know about it, but at least it has usable open source implementations. That's my take. I would be happier if we could just skip the Discord and use the real thing though.

But you know, I would be fine with you considering that I'm not completely coherent. I'm not, indeed. I have thresholds higher than those of RMS, which makes me less coherent than him on this topic.


Hmm... Would a Discord custom client be considered libre enough? They officially see them as a violation of ToS but do not interfere unless they use too much bandwidth. Or would private/semi-private non-encrypted communications place it in another category?


From the FAQ:

" Q: Why bother? You can’t make a new browser engine without billions of dollars and hundreds of staff.

Sure you can. Don’t listen to armchair defeatists who never worked on a browser. "

Nice take.



Great to see some competition still alive in browser engine development. See also Servo (previously part of Mozilla) https://servo.org/ - that and Ladybird are still very underdeveloped compared to everyday browsers.

It's a huge shame that there are no nightly builds of Ladybird to try out, but I assume that's because they just don't want the bug reports (if everything doesn't work, it's pointless getting random bugs filed).


Last I tried, Ladybird didn't take very long to build. This was admittedly sometime last year and it's likely slower now, but still. It's very far from a 9-hour Chromium build.



Don't forget WebKit. It leads to projects such as https://surf.suckless.org


WebKit is established and controlled by the richest company in the world. Most websites make sure they work on it because hardware (exclusively) running it is widespread. Why should someone interested in new players care? Anyone knowing about Servo and Ladybird has most likely heard of it anyway.

(agreed, it is a credible alternative to Blink's dominance)


Because it can be used to do cool things, has an interesting development history, and most importantly:

> it is a credible alternative to Blink's dominance


For me it is like:

WebKit -> Chrome

Bun -> Node/Deno

It is good to have competition in the ECMAScript landscape, even though it is currently a duopoly, but with the introduction of AWS LLRT and QuickJS, maybe small players can even have a say in this. It would be good if the big corps complied with the ECMAScript and Web API standards.


IIRC there are no builds because forcing people to compile it themselves ensures that users (and people who file issues) have a certain amount of technical competency. Keeps life easier for maintainers, but will probably change if/when the project matures.


I love these progress screenshots: https://serenityos.org/happy/1st/


Looking at the timestamps, the pace of progress with just him working alone in the beginning is nothing short of amazing. He seems to be a true jack of all trades when it comes to programming.


A thought experiment. What about a new kind of browser for a new kind of web? Much of CSS is obsolete. So doing a "modern" version (I'm thinking CSS grid and flex in particular) would provide the same functionality without the cruft. All that old stuff about the holy grail three-column layout.

And for me there is the question of canvas, threejs, react-three-fiber and react-drei. Is it possible - especially with mobile - that canvas could be used to provide a better user experience? Who writes games for mobile with HTML and CSS? Not saying it can't be done, but I wonder how many web sites require HTML & CSS instead of canvas?

A big barrier to browser competition is needing to implement obsolete and outdated technology. Why not just a minimum set of html and canvas.

Just thinking. Your thoughts?


I've worked a bit on browser engines, although it was quite some time ago.

I don't think it'd help much.

- There's been a Cambrian Explosion in the web API surface area. The modern stuff dwarfs the old stuff. Dropping support for older/less frequently used mechanisms does not shed as much code and complexity as you might think.

- Beyond mere surface area, the level of engineering required to implement a sort of "Restricted Core Profile" to a competitive degree (e.g. performance) is quite high, if you're talking true blank-canvas development.

- There's a long tail effect in full force, where even mostly-modern websites will use and rely on some cruft here and there, making very few pages work in your proposed browser.

That is to say, it's still a very large, tough project. But the FOSS community has achieved quite a few large, tough projects; it's not the same as saying that it's not possible, of course.


> Who writes games

Please stop following Google who is trying to turn the Web into an OS for their own ad-fueled, user-tracking profit.

If you want to make connected (or not) apps, there's already the Internet and OSes for that. And you don't have to make your interface worse by fighting with the browser about it! (Especially important for games and other "deep" software.)

The whole point of the Web is to be a hyperlinked collection of documents, sometimes multimedia, with maybe a little bit of interactivity from some forms and scripts sprinkled on top.

(As an example to how incongruous the current situation is, imagine a parallel universe where it was Adobe rather than Google that got humongous, and it was the JavaScript in PDFs that was (ab)used instead to make apps.)


What exactly is obsolete about css? There are still valid use cases for float and inline block. border-box also fixes most of the teeth-gnashing from the 00s. I think it's a nice idea but I don't see what would get cut. Tables are still best for actual tables of data, too.


I think there are modern frameworks which render everything in webgl/webgpu with the canvas


Related ongoing thread:

Interview with Andreas Kling of Serenity OS (2022) - https://news.ycombinator.com/item?id=39286638 - Feb 2024 (134 comments)

Related to OP:

Ladybird browser update (July 2023) [video] - https://news.ycombinator.com/item?id=36939402 - July 2023 (1 comment)

Chat with Andreas Kling about Ladybird and developing a browser engine - https://news.ycombinator.com/item?id=36620450 - July 2023 (65 comments)

Shopify Sponsored Ladybird Browser - https://news.ycombinator.com/item?id=36502583 - June 2023 (1 comment)

I have received a $100k sponsorship for Ladybird browser - https://news.ycombinator.com/item?id=36377805 - June 2023 (166 comments)

Early stages of Google Docs support in the Ladybird browser - https://news.ycombinator.com/item?id=33511831 - Nov 2022 (84 comments)

Github.com on Ladybird, new browser with JavaScript/CSS/SVG engines from scratch - https://news.ycombinator.com/item?id=33273785 - Oct 2022 (1 comment)

Ladybird: A new cross-platform browser project - https://news.ycombinator.com/item?id=32809126 - Sept 2022 (473 comments)

Ladybird: A truly new Web Browser comes to Linux - https://news.ycombinator.com/item?id=32014061 - July 2022 (8 comments)

Ladybird Web Browser - https://news.ycombinator.com/item?id=31987506 - July 2022 (2 comments)

Ladybird Web Browser – SerenityOS LibWeb Engine on Linux - https://news.ycombinator.com/item?id=31976579 - July 2022 (2 comments)


While Mozilla is re-selling a privacy service (see other news on HN), others are building a better browser. Without needing $6B.


Mozilla definitely has a management problem. But the product is good. Very good, even.


Unfortunately, Mozilla is not interested in providing an easily embeddable web engine. It makes sense: it goes against their interests as a company whose main product is a fully featured web browser.

Google, on the other hand, provides a web engine with a nice license and reasonable ergonomics that can be used for all sorts of projects. This allows them to execute an EEE strategy:

Embed (into all kinds of projects)

Enforce (moving-target standards of your own making)

Exhaust (any potential competitor/resources that needs to chase after them)

This is why I wish we could get an alternative, OSS, easy-to-embed engine soon.


Love Andreas Kling and the SerenityOS project. Hate that he's only on Twitter. Mastodon seems like the perfect fit for his audience.


I dunno, he's pretty committed to his positive outlook on things. Mastodon seems to be the far angrier place in comparison to Twitter. Twitter used to be a lot more like that, but it seems like most of the angriest people on Twitter moved to Mastodon when Elon took over.

It's (IMO) very pronounced and hard to avoid on Mastodon.


Not only that but the frontpage of Mastodon is more angry political stuff than anything technology related.


Frontpage of Mastodon? Just what exactly are you referring to? :o


Doesn't seem that way to me much anymore - definitely as time has gone on it feels like a lot of the behaviour police (who were everywhere scolding people for not using content warnings and stuff) got blocked by enough people that they've mostly given up, which is nice. Sure, in the Explore tab there's still a lot of "people being angry at politics", but if you are careful with who you follow you shouldn't see that in your feed.


He has a mirror on Mastodon: @awesomekling@bird.makeup


Shopify is a sponsor! I wonder how that came about.



We need more companies like Shopify, that actually share some of their revenue with open source projects they benefit from (or even, as in this case, projects that they don't directly benefit from)! But I'm afraid the current financial situation will lead to less, not more, of that...


Step 1: Register Company

Step 2: ????

Step 3: Profit.


How many days can Google resist the urge to charge 30% for payments on Chrome after Chrome gets all the market share and the last drops of Mozilla are milked by the CEO? It's both good PR and a hedge against an (unlikely) bad scenario in the future.


Are there any plans to rewrite the browser implementation in the Jakt language once that gets a bit more stable? Memory safety would be a unique advantage over other browsers (aside from Servo).


I am also looking forward to that, but I think the Serenity philosophy is to not make any long term plans and commitments.

If the language becomes mature enough, and there are people interested enough in doing that porting, it will likely happen.

I think their C++ code is also constrained enough due to the use of their custom standard library that it would be possible to write a transpiler from C++ to Jakt.


There are no concrete plans for anything with the Serenity ecosystem. But the main design goal of Jakt was originally for developing mainly GUI applications in SerenityOS. But if Jakt ever gets used in SerenityOS, a gradual rewrite of the libraries underlying the browser engine seems likely to me.

I will note though that development on the Jakt language has slowed down significantly. After extremely fast initial development, most Serenity developers no longer contribute to the language. Because everything is done exclusively when the contributors feel like it, development happens in bursts. And right now, Andreas is not actively working on Jakt, so very little improvement is happening there. The main developer working on the language is Ali Mohammadpur, but I don't think he is currently being paid to work on the project. So his contributions are also inconsistent.


I am fascinated by this project.

What are the chances that this could become a real-world usable replacement for Chrome or Firefox within the next couple of years?


"Usable" as a term is a bit of a wash in the browser world. Some people will argue Firefox or Chrome are unusable due to some minor annoyance and others will say filling is still usable. Trying to give an answer though I'd say most will say it works with most sites in a few years time but most would also not recommend it due to security concerns. It's not that they haven't thought about security, sometimes they even try newer more segmented approaches than current major browsers use, just that everything has been done from scratch and the work to make the code safe and battle tested as other browsers would be more than the work to make the browser to that point.


My gut feeling with how far they've come in so little time says it's definitely in the double digit percentage points.


I would love to use this as my daily driver, but a lot of popular sites don't even work with Firefox.

I hate what a small group of lazy front-end people have done to our world...


> but a lot of popular sites don't even work with Firefox

Which ones? I have always exclusively used Firefox and rarely have issues.


Count me in as well. I only use Google Chrome for Google Meet calls, as some features are not working on Firefox (I'm sure this has nothing to do with the fact that Google makes both Chrome and Meet ;)


Meet also does not work properly in Safari; each time I have a Meet call I need to use Chrome, so Google might be relying on non-standard web features.


Does it work in Ungoogled Chromium? I use that just on principle when nothing else works (which, in agreement with the previous posts, is rarer for me than people seem to claim)


My go-to example is roll20.net. During a gaming session some feature or thing didn't render. Switch to Chrome... worked perfectly.

The real problem is that Firefox gets tier-2 support, or not even that. It's a small percentage of users, so it's a cost/benefit call for these businesses.

A recent issue I had was buying tickets from Air India. You can't with Firefox... it'll hang at a certain point. Switch to Chrome... works perfectly.

The web is dead. It's basically client/server nowadays. Firefox is still my main browser, but I keep Chrome/Chromium around when I need it.


Thanks for trying to use Firefox first!

You can report websites that don’t work in Firefox on webcompat.com and Mozilla web developers will test and diagnose the problem. When possible, they attempt to reach web developers at the site (using personal contacts or referrals when official channels aren’t working) to share the bug report and a suggested fix.

In other cases, Firefox can include a site intervention script to patch the site or send a different browser User-Agent string to make it work.


Same here, I've been using Firefox for about five years now, and it seems like it opens absolutely all websites


I can't get Reddit to work in Firefox or Chromium. No idea why.


It's often an extension. Maybe you happen to use the same problematic extension(s) on both Firefox and Chromium. Maybe try with an empty profile :-)


Weird, I use it with Firefox every day.


Have you tried either in an incognito/private browsing session? If it works there then it could point to needing a cache/cookie clear. If not then the issue may lie outside your box (try a vpn?)


Both old and new UI? What problems are there?

I've never had issues loading/using reddit from any browser aside from their annoying "use our app" popups.


You may have added it to an adblock filter list by mistake. Try disabling them and trying again.


Nope, tried that.


A couple of things have helped me solve issues when trying to load sites in Firefox that will work in Chrome/Vivaldi, etc.:

1.) Refresh Firefox: Click the menu button with 3 lines -> "Help" -> "More troubleshooting information" -> "Refresh Firefox..."

2.) Check your Enhanced Tracking Protection settings from the Privacy & Security tab in the Settings menu. If it's set higher than Standard, it could be causing sites like reddit, sites that use Cloudflare for protection, etc., to load incorrectly or fail to load entirely.


Not my experience at all. Firefox works great for all the popular sites and 99+% of the unpopular ones. The trouble comes with websites that generally seem shoddy. I haven't had to install and delete Chrome for a long time!


Same. Actually I've had it the other way around a few times, when a site doesn't work in Chrome but works in Firefox. Perhaps though it was some caching issue because it worked OK in Chrome's Incognito mode. However, it was easier to just fire up Firefox than diagnose/debug Chrome.


In many many years of Firefox use, it has always loaded my sites with perhaps only one exception, but that was an odd graphics css treatment the signed out Patreon homepage used, probably fixed by now.


> I would love to use this as my daily driver, but a lot of popular sites don't even work with Firefox.

You mean minor aesthetic differences, or functionality? I just use Firefox, I don't even have Chrome, and everything works. And I use the mainstream web, nothing too niche.


A small group of lazy front-end people? I think you actually mean a small group of Chrome developers single-handedly deciding how the web should work, while having the vast majority of the market share to push those decisions.


Well, no, that is not how it works. Blink undoubtedly has an oversize influence on 'web standards' (which are more and more defined as 'whatever Blink does') but that would not be that much of a problem if web developers built and tested their sites against more than just Chrome and Edge (Blink) and Safari (Webkit). History is repeating itself since the same thing happened when Microsoft's Internet Explo[rd]er was the dominant browser and developers only tested against that, putting a 'Best viewed using Internet Explorer' badge on their sites.

https://en.wikipedia.org/wiki/Browser_wars


I am aware; I work as a web dev and my current project is only Blink-compatible. My employer does not want me to waste any time ensuring I support other browsers that are not going to be used.

The web should be viewable in any browser and render the same document.


I'm not particularly experienced with web browsers, but whenever I hear people who know what they're talking about, I always leave convinced that this is exactly the problem.

If there are sites that work on Chrome but not Firefox, it just seems to me that either:

- Chrome or Firefox must be breaking web standards

- Web standards must be underspecified for that use case

I have no idea the fix though, the web is so massively complex now, that I don't even know what specifying standards for every use case would involve.


Most web standards are codifying existing functionality, not the other way around.

Chrome/Blink has exclusive APIs, that often are not on track to be a standard.

This makes Safari (Webkit) and Firefox (Gecko) look bad, because they end up having to implement the same APIs, and then, maybe, it's standardized. Browser extension APIs come to mind.

I wish the situation was more neat and tidy, but it's not.


No, I blame it on front-end people.

I've seen them put all their effort into eye candy while ignoring that the page only works on a "retina" display, and then only in Safari.


Blame the UX and Product Managers, not the engineers.


As a web developer, the things that bother me the most are the small differences in edge cases between the rendering engines.

What happens when you put a percentage height on a row in a table. What happens when an element has a margin that doesn't fit in its parent. How does adding display: flex affect how text is laid out inside an element.

These are things that Gecko and WebKit/Blink handle differently. Some of them are defined in the spec and have tracking bugs, but some of them just aren’t addressed. I don’t think it’s maliciousness or laziness on anyone’s part, but the web is too complicated for there to be multiple perfectly compatible rendering engines.


I exclusively use Firefox, and I probably browse more websites than most. I very rarely run into websites that don't work on Firefox, and I can't recall the last time I ran into a page that didn't work on Firefox when serving a Chrome UserAgent (btw, if you're a web developer and you're accessing your user's UA, you're doing something horribly wrong. Stop).


Which sites don't work with Firefox? I'm a daily Firefox user for 15 years now and I can count on a few fingers the amount of sites that were "broken" in FF (and weren't a legacy IE issue).

This seems like hyperbole, frankly.


Are you sure they are lazy? Maybe they are overworked, exhausted and constantly pressured by management to output new features? Did you ever work as a front end developer?


Interesting. I use Safari for most sites, and in the rare case that a site does not work, I open Firefox and it just works.


What sites, bro? I see this kind of statement many times, yet they never give me any answer.



Is there collaboration between Serenity/LB and Igalia?

Seems like they're involved in many browser technologies, and other technologies.

https://en.wikipedia.org/wiki/Igalia

https://www.youtube.com/watch?v=9lkIX5ryZZ4


They were chatting together: https://www.igalia.com/chats/ladybird


I want to feel excited for Ladybird, but it's an incredible shame that such a promising and potentially very important project has settled on a pushover licence and Discord for their communication platform. The latter especially is an antithesis of freedom and openness, which I feel ought to be valued by people celebrating Ladybird's progress.


The weird thing is, a couple years ago when I contributed a little bit to Serenity OS, they were actually using an IRC channel


I may just be being dumb here, but what is a pushover license?


A licence which grants you rights to the software, but allows you to strip them away in derivative works. This is in contrast to (for example) copyleft, which requires you to share with your users the same code and rights that you got to use, when distributing derivative works.


Presumably a derogatory way of describing a permissive license, rather than a copyleft one.

It's a religious disagreement.


They used IRC but they switched because IRC is just too inconvenient. Discord is proprietary, but it works really well for them. They chose to be pragmatic.


I get that Discord is more practical, but it is at the cost of freedom, trustworthiness and privacy. Communication platforms are the last place you should compromise on this, since your choice is directly going to affect the choices of many others.

There are tools which are much more user-friendly than IRC (and even Discord, in some aspects), such as Matrix or Zulip. They could easily have been just as pragmatic by picking one of those instead.


It sounds cool, and it's nice that someone is disproving the myth. I use Qutebrowser daily; these projects are great, but not without their pain points: once you start using them in anger you'll quickly realise lots of basic features are missing. It would be really nice if the more common OSS libs had more work done on them to unbloat them.


> 2023-08-13: New sponsor: ohne-makler.net

> 2023-06-28: Welcoming Shopify as a Ladybird sponsor

Hmm, no new sponsor since August 2023. Not a good sign. I cheer for them to succeed though!


Is there a way to create a binary to use this browser in a normal-ish way? Looks like the docs recommend using a script to run it, but I'd like to be able to package it for my personal package repository.


There is an AUR package: https://aur.archlinux.org/packages/ladybird-git to use with one of many AUR helpers.
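For example, assuming you use an AUR helper like yay (other helpers work similarly; just a sketch):

    # build and install the AUR package with yay
    yay -S ladybird-git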


I just made a wrapper script that calls the script in the serenity repo (which I cloned into my home directory) and put the script in PATH, e.g. in /usr/bin/ladybird. The content of my script:

    #!/usr/bin/env bash
    cd ${HOME}/serenity && ./Meta/serenity.sh run lagom ladybird

I guess you could create a .desktop file that invokes the script, or just the "serenity.sh" script directly.
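If it helps, here is a minimal sketch of that .desktop idea, assuming the wrapper above is installed as /usr/bin/ladybird (untested; adjust names and paths to taste):

    # register a desktop entry that launches the wrapper script
    cat > ~/.local/share/applications/ladybird.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Ladybird
    Exec=/usr/bin/ladybird
    Terminal=false
    Categories=Network;WebBrowser;
    EOF

Most freedesktop-compliant environments should then pick it up in their application launcher.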


Aha thanks very much for the validation. My solution is nearly identical. Much appreciated.


Very cool project!

I also find it curious that they are sponsored by a real estate site (:


Ohne Makler sponsored the project with 10k USD to make their website render correctly in Ladybird; it was covered in one of the browser update videos last year[0].

[0]: https://youtu.be/xdVOdrWuzLQ?t=147


Ha! Didn’t know. Very cool IMHO.


I just needed a reason to throw money at him ;)


Haha, kudos!


Show me Ladybird in 10 years, then I'll decide if it's a promising browser/dev team.


This is a really cool project, but:

"Where are the ISO images?

There are no ISO images. This project does not cater to non-technical users."

This comes off as really abrasive. Wanting an ISO image to quickly test this out is not an indicator of someone's technical ability.

I'm sorry I don't want to boot up a linux vm, install a lot of development packages and then build my own boot image just to try this out.


> This comes off as really abrasive.

I think that’s often the point with OSS projects, especially those that have an ambitious long term vision. If you “don’t want to boot up a Linux VM” etc, they don’t want you. It’s a filter. It means their concern at the moment is the coherence of their community, not increasing their numbers. It’s the same reason projects like this often have absurdly ugly logos, and landing pages that don’t work on mobile. Fast growth is often seen as destructive when you already have a nice little community vibe. It’s essential to maintain that vibe carefully if you have a long term goal of building something important.


SerenityOS nightly builds: https://serenity-builds.halves.dev/


Counterpoint, it would be another release / packaging they would have to build and maintain, unless they find a volunteer that can do it without detracting from their core business, it's not worth the investment (to them).

Anyone can set up a pipeline to distribute ISO images though, it's open source.


Sure, if that's the reason, I completely understand. However that's not the reason they stated in their FAQ. It really comes off as gatekeeping


Gatekeeping is the right solution sometimes, no?

What if the goal is to keep the relevant communication channels populated exclusively with technical users?

I've seen F/OSS projects completely overrun with support requests from non-technical users. Is it wrong to want to avoid this from the start?


Is it just for fun or not? I think it's important to face this question, because users should not trust a just-for-fun browser with their security, and we should not look to Ladybird as a meaningful contribution towards competition in the browser space if it's just for fun.

If it's just for fun, we need to temper our expectations accordingly.


"Just for fun" is precisely how Linux started:

    Hello everybody out there using minix -

    I'm doing a (free) operating system (just a hobby, won't be big and
    professional like gnu) for 386(486) AT clones.


Similarly people shouldn't have used Linux for serious use cases back then. We are talking about the present.


They don't provide binaries, so I don't think there is a risk of having users.

Is it really important to answer this question? A lot of widely used software started as "just for fun", e.g. Linux or OpenSSL.

I think tempering our expectations should be the default for Open Source software.


I think it's a "just for fun" project that is getting a bit serious, and sponsored.

And that having expectations as an end user is still a bit premature.

You can expect the project to move somewhat fast.


Written in C++? Makes no sense to me. I have high hopes for Servo, but this seems like a waste if it's a C++ project.


I always mix SerenityOS and TempleOS :(


[flagged]


It's very tiring to see these kinds of comments, even as a bystander. I can't imagine how it is for the developers. People have spent a fair amount of time building this thing and deserve better than to have their efforts dismissed by the likes of you. Please, do better.

The current browser engine landscape is a monoculture, so this is a very welcome addition IMO.


The whole point of the project is to write a completely independent browser from scratch. The effort seems focused to me, write a complete engine. Not write a small part of a browser.

But why would reusing something be better? Reusing someone else's code wouldn't be competing with them, it'd be depending on them.


Because you can have something usable instead of dreams. If you want to have completely independent code eventually, then replace modules one by one while having a working product. Like Mozilla did with Servo: they had a C++ codebase they wanted to replace with a Rust codebase. They did not create an entire browser from scratch, but replaced C++ with Rust gradually. While one figures out what HTML and CSS layout should look like, there's no need to have no JavaScript engine and a basically unusable browser. Use an existing JavaScript engine, and replace it later when/if the project has enough resources.


Why is it "dreams" if they're making quantifiable progress toward their goal? The browser is steadily becoming more usable and performant. Proving that such a project is possible is one of Andreas (the project lead's) goals as stated in interviews.

I'd rather encourage promising "dreams" than have a web with only 2.5 usable browser engines.


This is Hacker News, not just Let's Bring Yet Another Product To Market news.

He is doing this for fun and to be creative, is incredibly inspiring for a lot of people, and actually gets things done on top.

The epitome of a hacker, I want to be more like him.


You have a very different definition than what this site has historically been about. :)


Have you ever wondered what ycombinator.com is in news.ycombinator.com?


Yeah yeah, I know. They could have chosen to call it Startup News but didn't.


Do you want what you're arguing for?


Andreas worked professionally on WebKit at Apple and succeeded in writing SerenityOS from scratch.

So he is in a good position to decide on the approach and has proven he can finish things. I’d just trust him on this, if my opinion was even relevant.

But as it’s a for fun project, he can write a browser in Fortran and abandon it half-way ;-)


They already do have something that is incredibly usable for how long the project has been going (this context is part of why you're being downvoted, as well as just the general principle). They have probably got something that looks more viable as a new browser than what the Servo project is doing now that it's been detached from Mozilla. It really is quite incredible what they've done, and the velocity at which it's improving is just mind-blowing.


Sometimes it's nice to dream. A nice bonus if other people respect your autonomy to use your time on Earth as you see fit.


It's very possible to have other goals than a usable and marketable product. Andreas Kling and the SerenityOS developers have explicitly stated other goals on numerous occasions, and other positive side effects have emerged from the project. These include having something to stay busy with long term to stay away from drug use, having fun, learning, proving that things are possible, and identifying and reporting mistakes in specifications that were caught thanks to the blank-slate implementation approach.

Maybe Ladybird is not usable right away, so what? It's getting impressively close to that point with barely 5 years of development, from scratch (and it goes much deeper than the HTML, CSS and JS engines; they also re-implemented the whole networking stack, image/audio/video codecs, fonts...) and by a small team of mostly volunteers. But more importantly, it's a positive project not only for the developers, but also for their audience and for web standardization at a greater scale.

I can hardly imagine a project such as the one you're describing getting as much traction as Ladybird / SerenityOS (in fact, there are many such projects, but I don't see nearly as much talk and interest around their development). The whole project isn't about the end product; it's explicitly about the process of getting there.


Dreams are what gets you out of bed in the morning.


> you can have something usable instead of dreams

Realising dreams is called progress.

The reason Chrome and Firefox exist is because people dreamt of having something better than the already "usable" Internet Explorer.


I strongly disagree. The point of the whole SerenityOS project is to do everything from scratch, both for fun and to bring a bit of innovation in a classic environment. I see this "rebuild everything from scratch" effort as beneficial in many areas of browser development, since there are technologies shared by most browsers that create a sort of sub-monopoly, independent from the actual browser marketshare.

Take for example the JavaScript engine. SpiderMonkey and V8 are much more prominent in browser engines, even the lesser known ones, compared to JavaScriptCore. SpiderMonkey, for example, is used in Firefox, Servo and Flow (Ekioh's browser engine), while V8 is used in every Chrome derivative and most JavaScript implementations outside of a browser. JavaScriptCore is literally used in only one browser engine, and while WebKit is rather popular, it doesn't change the fact that the browser-grade JavaScript engine market is dominated by only two engines.

Rebuilding everything from scratch is such a fresh breath of air from all of those technological monopolies. I only wish the best for the SerenityOS team and contributors.


Don't criticise people hacking for the fun of it, on Hacker News of all places.


> So, no motivation why not port Servo/WebKit, but write from scratch. One could at least use HTML/CSS layout manager or JavaScript engine. I mean, I am very pro competition and open web, but writing everything from scratch seems like waste of effort. I'd expect efforts to be focused.

Why write Servo from scratch instead of porting Blink/Webkit? One could at least use the HTML/CSS layout manager. Seems like a waste of effort.


https://imgur.com/dP2c3j6

Re-inventing the wheel is not the problem. It's copying the wheel that is the problem.


As years have gone by, I've realised that the whole "don't reinvent the wheel" is fundamentally an anti-innovation and anti-intellectualist idiom.

Like yeah, the idea is that the wheel already works pretty well, so you wouldn't need a better one, but really now? First of all, how would we know that we've explored all the possible ways wheels can be? How do we know whether what we have is actually the most optimal way that wheels can exist? You won't know unless you try! You might fail, but I'd say that the learning opportunity is worth it.

Also, when it comes to people who criticise projects like Ladybird here for "reinventing the wheel", I feel that comparing software to wheels is a bit disingenuous. Because while manufacturing wheels does take skillful craftsmanship and such, most software is probably just a tad more complex than just the wheel. Call it a hunch.

So, the ethos that one shouldn't "waste effort in writing this stuff from scratch, but that off-the-shelf components should be used" is the kind of ethos that can be applied to a lot of things. And usually when that kind of ethos is adopted, it tends to lead to stuff like stagnation and lack of innovation. And we know this, because we have examples of entire countries doing this, which has usually led to either reforming away from this sort of model, or the country just not existing anymore.

We should not let the web for example stagnate due to browser monoculture. We should have alternative implementations of stuff, actually.


> So, the ethos that one shouldn't "waste effort in writing this stuff from scratch, but that off-the-shelf components should be used" is the kind of ethos that can be applied to a lot of things. And usually when that kind of ethos is adopted, it tends to lead to stuff like stagnation and lack of innovation.

I think the advice still has its place, but only in certain contexts. You can get a lot of work done very quickly by making good use of work that others have already done for you. It's why modules and libraries are so popular.

When you've got the time to try and improve on existing code, or when you'd have to compromise too much on what you want by using something preexisting, or when you're just learning, then reinventing the wheel is invaluable.

There's a time and place for both.


I agree. Imagine a carpenter who is told "don't bother making tables, you can buy them from the shop". We tell software devs this all the time. The JavaScript ecosystem was the perfect example of this: it professed an ideology that most devs should only write glue code between libraries, and that only the arrogant devs we can't stop and/or FAANG should write the libraries.


Then you have not understood the motivation of the project.


There is no motivation on linked page.


The project initiator has written about this several times, for example:

https://awesomekling.substack.com/p/i-quit-my-job-to-focus-o...


That might be part of the reason you don't understand the motivation.


Why would there have to be one there?


They don't have to justify themselves, either.


When the alternative was using drugs, writing everything from scratch is way better.


It is a SerenityOS project. You can find the answer to that question in their primary project's FAQ[1].

1. https://github.com/SerenityOS/serenity/blob/master/Documenta...


This mindset is why we now have countless new browsers that all run on Chromium. What's the motivation for those?

Also calling the project a "waste of effort" seems a bit out of touch. Effort doesn't have to pay off in any way to be worthwhile. If the act of building is part of the motivation then taking shortcuts at building it defeats the purpose.


>What's the motivation for those?

To compete on the actual end user experience. Most users do not care at all about the code of the project. They don't care if your layout engine code is completely unique, they care that it does layout for the sites they use correctly. Having a good browser engine to work off of lets people start delivering value to users immediately instead of having to spend a ton of time recreating the parts of a browser that users do not care about.


Besides the actual answer - because they want to - I don't think it would even work, because of the dependencies. (SerenityOS is also written from scratch.) But since Kling used to work on the WebKit team, you can bet that they don't work in a vacuum and reinvent the wheel, but just reimplement in a way they think is best. Also, they are not doing it for you, but for them.


Why have any other browser engines at all? Why not just require everyone use chrome?

If you think you are "pro open web" but don't see the value in new browser engines (rather than just Safari, Firefox, and endless Chrome skins), then you're not actually "pro open web" you're pro status quo.

But also, you're completely missing that the whole point of serenityos was to do everything from scratch, and simply porting an existing engine like webkit (which Andreas is extremely familiar with) isn't an interesting task. Porting something like servo (that doesn't even benefit from significant web compatibility) is even more pointless.

At least building a new engine from scratch has the potential to introduce a new non-gecko, non-webkit derived browser, even if it takes a while (though despite the size of modern web specs, it's in many respects easier[1] than doing it 10 or 20 years ago).

[1] It's still not easy, but prior to the immense work from the major browser developers in the late 2000s and 2010s to actually make the specifications accurate and complete, you could not do what serenity/ladybird has been doing and just implement the spec. Firstly often times the spec simply did not exist, but then the W3C specs were largely incomplete, often times ambiguous, and frequently just did not match reality. Even TC39/ECMAScript, which was much more constrained, had absurd amounts of ambiguity and incompleteness. Nowadays, you can go to most specs and be fairly assured that if you implement the spec as written the behaviour will actually be correct (maybe inefficient or slow), and the difficulty is the much more tractable problem of there being more features to implement.


I think I understand what you mean: in theory, rewriting a browser in a 'ship of Theseus' way, focusing on one thing at a time, should be more manageable. But in practice the integration points between different parts are not standardized, and you'll have to deal with a huge C++ code base that takes hours to compile.


Taken from the FAQ of SerenityOS:

> Will SerenityOS support $THING?

> Maybe. Maybe not. There is no plan.


A big part of open source motivation is a creative expression.

Then there are raccoons digging through dumpsters for free code.

I find this mindset (why aren’t you working on something that benefits me, Bender) rather toxic.


It seems there is zero C++ in their web engine (I cannot clone their repo right now, and GitHub is spitting raw JSON while browsing their source code with noscript/basic (X)HTML browsers).

Is this true?


No, it's basically all C++


Sad, I wish for a modern web engine in plain and simple C99 (allowing ourselves some benign bits of C11).

I guess I'll stick to links and/or lynx and/or netsurf.


The code might be closer to your wish than you think.

So, they stay on the bleeding edge of C++ compilers. In terms of core language features, they are C++20-something.

But, everything is written from scratch, including the standard library.

As such, the code is a lot cleaner than a typical C++ codebase.


The problem is C++ itself, with its grotesquely and absurdly massive and complex compilers, mechanically a consequence of its syntax complexity. And yes, Rust is hardly any better.



