Hacker News
Maintaining an Independent Browser Is Expensive (ocallahan.org)
441 points by jonchang on Dec 3, 2017 | 300 comments



A reminder that, as the article points out, a healthy web needs multiple independent client implementations, and we're already down to fewer than a handful of major ones.

Ideally they should hold roughly equal usage share. Since usage is currently tilted heavily towards Google Chrome, the best way you can help Mozilla is to use their browser and get your friends and family to use it too, unless there's a deal-breaker reason they can't. Your vote (usage) counts, so use it!


...and ideally, we should also encourage the use of the "lesser" browsers like Dillo, NetSurf, and all the text-based ones. They can't really run "web apps" and the like, but will be fine for viewing the "long tail" of content-focused sites out there (including this one.)

What has become quite obvious to me over the past few years is that the whole "move the Web forward" effort seems really to be about inventing and implementing as many complex features as possible (and advocating for their use on new sites), making it ever harder for smaller independent browser efforts to produce anything useful.


> we should also encourage the use of the "lesser" browsers like Dillo, NetSurf, and all the text-based ones. They can't really run "web apps" and the like, but will be fine for viewing the "long tail" of content-focused sites out there

Perhaps this is a knee-jerk reaction on my part, but I'm against initiatives that resist technological progress, including those that rail against modern web applications.

Not only do these initiatives seem doomed to lose against the nearly unstoppable forward march of technology, but they are coming at the problem from the wrong direction. If it's too hard for independent efforts to succeed, we should not be fighting in vain to hold back progress. Instead we should be working on efforts to increase the abilities of independent contributors. We should be making it easier for anyone to build a modern browser. That's something that more technologists could actually get behind.

Further, while there's nothing wrong with a little nostalgia, let's not forget how far things have come. Many instances of modern web applications are certainly flawed, broken, misguided, or overkill. But the web as a platform is unfathomably better today than it was in the 90s. The applications built on top of it have massively empowered individuals and augmented our productivity. And I'm happy that I can build for the web instead of for Windows.

The fact that some websites do a poor job using the new tools available is a small price to pay.


> I'm against initiatives that resist technological progress

I dispute the notion that the evolution of the web counts as progress at all. It has nice things, but so far the main effects are ever-increasing bloat and centralisation. There are simpler ways to do this stuff; we're just too caught up in path dependence to see or invent possible alternatives.

> we should be working on efforts to increase the abilities of independent contributors

Unless you mean to make them less independent (say by providing common libraries and rendering engines), then the only way forward is to increase everyone's abilities. This is already an ongoing effort, just see how many programming languages and methodologies are popping up every year.

> the web as a platform today is unfathomably better today than it was in the 90s

The web wasn't even a platform in the 90's; of course it's a better platform now. However, it only became a platform by accident. This reminds me of C++'s accidentally Turing-complete templates, which later spawned template meta-programming madness.

If a platform were designed from the ground up as a platform, it would all be much simpler, and perhaps even more capable. But no, we had to piggyback on whatever the market favoured at the time.

---

Here's what I think: a web-ish platform written today wouldn't need more than 0.1% of Firefox's 35 million lines of code, or about 35 thousand lines. Or maybe 10 times that amount, to account for optimisation and long-tail functionality; that's still under 1%.

This has even been done. The STEPS project at http://vpri.org is only 20K lines, including the compiler collection and editing tools. It even runs at acceptable speeds on a laptop.

That would be progress. That would spur the independence of contributors. Too bad the market just isn't ready for this. It probably never will be.


Progress is very subjective. Yes, it's true that some "good" things have "bad" consequences. An example is that an average website from 2017 on an average computer and internet connection from 2007 is dog slow. But that doesn't mean these "good" things are not progress. Rather, I'd describe it as two steps forward, one step back. Take for example the smartphone/touchscreen phenomenon. It has its negative consequences: people are able to record anything (sound/video), and they're using these things in traffic. But that doesn't mean there are no positive consequences. The end result is a nuanced list of + and - effects, and there's no need to discount all the + or all the - with an extreme standpoint.


Let's weigh the good and the bad. First, what is possible today that wasn't then? I see only multimedia and games. Text forums like this one were totally possible in the 90's. They just weren't as pretty.

Then let's examine why stuff is so bloated now: is it because we have more features? For the most part we don't. It's just abstractions piled on top of each other, images, and ads.

I'm not sure the costs are even related to the benefits.


I think you're focusing on the costs and neglecting many of the benefits. For example, just think about how much easier it is to learn to code nowadays, thanks to the proliferation of helpful websites that take a variety of approaches to teaching, many of which were enabled by web technologies: GitHub, JavaScript 30, Khan Academy, RunKit, Flexbox Defense, Codecademy, just to name a few. And think about how many more of these sites exist today than would have existed in the past, because more people are building them and they're much easier to build (again, thanks to new web technologies).

And this is just learning to code.


> I think you're focusing on the costs and neglecting many of the benefits.

Which of those costs actually enabled the benefits? Very few, I'd wager. Many of them were avoidable. Or would have been, if the market weren't so short-sighted.

> GitHub, JavaScript 30, Khan Academy, RunKit, Flexbox Defense, Codecademy

I reckon some of this does require a scripting engine somewhere. But not all of it. I still believe your examples are possible without JavaScript (even without CSS). A REPL wouldn't need it, if you have the server do the computation (within limits, to prevent DoS attacks). It just wouldn't be as pretty.
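As a sketch of that idea: a REPL that works with nothing but a plain HTML form, where all evaluation happens server-side. Everything here is illustrative, not a real product; the tiny arithmetic-only evaluator and the length cap stand in for the "limits to prevent DoS attacks".

```python
# Script-free REPL sketch: the browser only submits a form; the server evaluates.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs
import ast, operator

# Only a tiny arithmetic subset is allowed, as a crude guard against abuse.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr, max_len=200):
    if len(expr) > max_len:                      # cheap DoS limit
        raise ValueError("expression too long")
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("disallowed syntax")
    return walk(ast.parse(expr, mode="eval"))

PAGE = ("<html><body><form method='post'>"
        "<input name='expr'><input type='submit' value='eval'></form>"
        "<pre>{result}</pre></body></html>")

class Repl(BaseHTTPRequestHandler):
    def _send(self, body):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def do_GET(self):
        self._send(PAGE.format(result=""))

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        expr = parse_qs(self.rfile.read(length).decode()).get("expr", [""])[0]
        try:
            result = str(safe_eval(expr))
        except Exception as e:
            result = "error: " + str(e)
        self._send(PAGE.format(result=result))

# To run: HTTPServer(("localhost", 8000), Repl).serve_forever()
```

Not pretty, but it works in Dillo, NetSurf, or lynx just as well as in Chrome.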

But that would be unwise. Many of those really are applications, and they deserve a proper application platform. Which the web isn't, despite herculean efforts to the contrary.

A proper application platform needs to give access to simple, relatively low level constructs. Something like web assembly, a raster viewport, input, and sound. Meaning, a virtual machine. Browsers are becoming such anyway. Also, no text handling, no style sheet, no programming language. Let the users implement a web browser on top of the application platform if they really want to.

Actually, I do hope we eventually turn this madness on its head, and implement browsers on top of web assembly engines. This would drastically reduce the effective attack surface (good for security), and do wonders for portability. Wouldn't solve the independence problem, though.


I'm sorry, but who says the web platforms of today are "progress"? It's the same re-heated cross-platform "write once, run anywhere" pie-in-the-sky as 20-30 years ago. It's just that these days it's wearing half-shaved heads and horn-rimmed glasses, skinny pants and plaid shirts instead of ponytails and threadbare t-shirts and jeans/shorts with sandals or sneakers.


I had bosses argue pretty much exactly this when I resisted proposals to remake entire websites in Flash... "It's _obviously_ the future!" they said...


> What has become quite obvious to me within the past few years is that the whole "move the Web forward" thing seems to be really about coming up with and implementing as many complex features as possible (and advocating for their use in new sites), making it harder over time for smaller efforts at creating independent browsers to produce anything useful.

Yeah, it's enough to be infuriating. I've recently been thinking about what it would take to implement a lynx-like browser that had a better understanding of layout and could deal with the modern web.

Hurdle number one is that there's no single "Web standard", but rather a great many interrelated standards. At a bare minimum, you need HTTP, SSL/TLS, HTML, XHTML, the DOM, CSS, and ECMAScript. Several of those are actually multiple standards, so you'll have to do some digging. That's the easiest hurdle to overcome. Number two is that a few of the standards (including some really fundamental ones like the DOM) are maintained as "living documents", making them moving targets; even the—uh—"dead" documents are subject to speedy revisions (for example, there have been 3 new ECMAScript revisions in the last 3 years). Number three is all the up-and-coming experimental stuff that isn't standardized yet, but if at least one browser supports something, some website somewhere is already using it. Between two and three, you'll need to keep a finger on the rapid pulse of the Web. And on top of all that, you have to support legacy bullshit, deal with malformed documents, and be ever hyper-aware of security threats.

Allowing for off-the-shelf components like libcurl and duktape, I'm sure it's still doable by a single person, but it's a lot of work (much of it frustrating).
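For scale, the text-only core of such a browser really is small. A toy version of the fetch-and-strip pipeline fits in a page (Python's stdlib standing in for libcurl; duktape only matters once you want JS):

```python
# Toy "lynx-like" pipeline: fetch HTML, drop non-content subtrees, emit text.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    SKIP = {"script", "style", "head"}   # subtrees a text browser ignores

    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        # Keep text only when we're not inside a skipped subtree.
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def dump_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

# Usage (needs network access):
#   print(dump_text(urlopen("https://example.com").read().decode()))
```

Of course, this handles none of the layout, CSS, living-standard churn, or malformed-document recovery described above; that gap is exactly where the work goes.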


The alternative to a "living document" is "we have a bunch of updates and errata, and you're going to have to piece together the current version yourself from many sources".

If you really need an unchanging version, download some version and use that.


No, the alternative is a versioned document.


These documents are versioned in VCS. The core issue is that either you update the master document every time you have a fix, or you don't.

If you do, it's a living standard.

If you don't, then people have to cobble together the latest version from various sources.


Or alternatively, follow a strict release cycle (say, once a year) and publish that. Have all standards follow this cycle.

The choices aren't "commit everything to master" or "commit everything into various versions".

A strict release cycle also prevents having some hype extension which only a few people actually use since it takes more effort and dedication to get stuff into the standards.


Why would you want your published version to exclude up to a year's worth of fixes for known issues?


Why would you want an only 3 week old version with an unknown number of untested and unproven features?


I think what you describe is a side effect of the "move the Web forward thing". I see the motivation for moving the Web forward as a cost-savings measure for software developing companies, which they believe will be achieved when "write once, run everywhere" is achieved. This time that particular Holy Grail is thought to be achievable through browser-based development. They essentially want the browser to become something very like a full VM running on a host OS. The natural consequence of this is increasing complexity.


The thing is that this, in turn, is somewhat of a reaction to people using native apps instead of the web, which creates lock-in for other ecosystems.


Sorry if I sound like a hardcore Apple fan, but if there is anything Steve Jobs taught us in lifting Apple from near bankruptcy to the world's most valuable company, it's that user experience matters.

Native apps provided better UX, easier access from the home screen, and are generally 10x faster. How the web responded, or in your words reacted to people using native apps, was to put up more ads, creepy ads, more JS tracking, more JS that many had no idea what it was doing, web notification pop-ups every time you visit a page, mandatory cookie notifications, and all sorts of other things that make people not want to consume anything on the Web anymore.

I will have to admit that even without those problems it would be a hard battle against native apps, but not only are they not improving the web, they are making it worse.

A Steve Jobs phrase constantly pops into my head when I see this: "The only problem with Microsoft is they just have no taste." It is true for the majority of the web as well.


A couple of things about native iOS apps. These are all a result of Apple controlling the walled garden and wanting to increase its revenue:

-Apple limits what your app can do, e.g. apps can't spawn processes (FU Apple, this is bad for security/privacy), can't do JIT compilation, can't download code, etc...

-Apple rejects apps that compete with their own.

-Apple removes apps that some governments want to have removed.

These are all things an open web avoids.

Now the fun thing is that nothing prevents Apple from turning the web into a walled garden, since they don't allow any rendering engine other than their own. The iOS ecosystem is really a beauty.


I really disagree with your argument -

> -Apple limits what your app can do, e.g. apps can't spawn processes (FU Apple, this is bad for security/pricacy),

What's the issue with that? Why would I want some random no-name developer randomly spawning processes on my device, killing my battery life? How is it exclusively better for privacy? What prevents them from executing privacy-invading code in a separate process?

> can't do JIT compilation,

What's the advantage when all the devices your apps are supposed to run on share the same OS, platform, ABI, etc.? What's the point when the hardware is already limited in scope compared to alternative platforms?

>can't download code, etc...

Again, why would you want some random developer to be able to download extra code? What prevents them from misusing it for nefarious purposes? Even assuming that the developer is not malevolent, what prevents a third party from compromising the developer's "update" servers and pushing malicious code?

>Apple rejects apps that compete with their own.

This is from a bygone era. For every native app on my iPhone, I can list an alternative -

  - Apple Photos -> Google Photos/Plex Photos
  - iMessage -> Whatsapp, Telegram
  - Facetime -> Skype, Allo, Duo
  - Mail -> Outlook, Spark, Airmail
  - Apple Music -> Spotify, SoundCloud, Wynk, Gaana
  - iBooks -> Kindle, Blinkist, Kobo
  - Files/iCloud Drive -> Dropbox, Tresorit, Google Drive
  - Apple Maps -> Google Maps, Waze
  - Keychain -> 1password, lastpass, minikeepass
  - Notes -> Evernote, Simplenote, Bear, Ulysses, Notion
  - Calendar -> Fantastical, Vantage
  - Clock/Timer -> Klok, Timeglass
  - Reminders -> Todoist, Due, Things 3
  - Video -> VLC
  - iTunes U -> Coursera, Udacity
  - Camera -> Halide, Camera Plus, Retrica
  - Weather -> Dark Sky, AccuWeather
  - Podcasts -> Overcast, Pocket Casts
  - iTunes Store -> I'm not sure there's a perfect alternative but I can purchase music and movies from non-apple services as long as they provide it (and give apple their cut)
  - Settings -> Do we even need an alternative?
  - Voice Memos -> Recordium, Just Press Record
  - Pages/Numbers/Keynote -> Word/Excel/Powerpoint, Google Sheets/Slides/Docs
  - Apple Health -> I haven't gone looking for alternative but a lot of apps maintain their own data sources. I'm not sure what really prevents them from sharing data across or maybe they just don't want to? In any case I, sure as hell, trust only Apple with a centralized data source when it comes to my health tracking
  - iMovie -> Videoshop 
  - Stocks -> Stock Tracker
Do I need to go on? Some apps which are indeed competitors to native apps have been rejected, but "is a competitor" doesn't appear to be the exclusive basis for rejection.

>Apple removes apps that some governments want to have removed.

True. Agreed

> they don't allow any other rendering engine other than their own

I am a developer, and as much joy as it would be to have Blink/V8 running on iOS, I'm totally fine with the decision Apple made. In the absence of strong disincentives, companies and developers would try to get away with as many shenanigans as possible. I would absolutely not be happy if Google pushed a battery-killing update.

Sure, Apple has its fair share of problems, but out of all the big tech companies they are, by far, the best when it comes to preserving privacy and security, and the least likely to pull shenanigans or gotchas on their users.


> Why would I want some random no-name devleoper randomly spawning processes on my device killing my battery life?

Why do you want Apple to be in control of your device though? Wouldn't you rather be able to let an app do something that it needs to, when you need it to? Or, would you honestly prefer Apple to make all of your decisions for you?

If you really want Apple to be in control of your devices and not you, that's fine... but can you understand why other people might not want that?

Are there any Apple decisions that you disagree with?

Also, how about a replacement for the most important apps?

    - Contacts
    - Phone Dialer
    - Messages
    - Safari
Can you even uninstall those? Are other apps even allowed to receive phone calls or SMS text? (To replace Safari - I'd require that it never get launched from a hyperlink in an SMS text message.)

I'd like to replace these because Apple only gives you the most bare-bones features for them and they strong-arm you into using Apple Maps and Safari from them. I'm guessing you're fine with that though.

> [Apple is the] least likely to pull shenanigans or gotcha's on their users.

Pffft. Do you know the history of Apple? Their whole business model is a shenanigan on the users. I look forward to when their time is over.


To be honest, I kinda like Apple's "walled garden" approach on some level. Not perfect, not ideal for power users, but for the general non-techy person, I think it's great. I don't have to worry about my kids installing some app that pretends to be WhatsApp while it's clearly not.

But yeah, I would like to be able to change default apps in iOS. That would be great.


>Why do you want Apple to be in control of your device though?

I don't, but I'm not hesitant to cede some control of my device to Apple in order not to be a technology janitor. I do not want absolute control of my device, which isn't even possible with any commercially available device anyway. (And no, Android does not give you absolute control of the device.)

>Wouldn't you rather be able to let an app do something that it needs to, when you need it to?

Does Apple's current model really hamper any app from doing something it needs to, when I need it to?

>Or, would you honestly prefer Apple to make all of your decisions for you?

Again, I am happy to meet Apple in a middle ground where it makes some decisions for me and I make some myself. I am happy to decide which apps get access to contacts, notifications, location, etc. What gives you the impression that you are in full control of decisions on an Android device? Remove Google Play Services and three quarters of the ecosystem falls flat on its face.

>If you really want Apple to be in control of your devices and not you, that's fine... but can you understand why other people might not want that?

I totally get that part. I never denied that.

>Are there any Apple decisions that you disagree with?

Plenty. The Touch Bar is useless. The removal of the headphone jack was rushed. Before iOS 8, I was practically in the Android camp because iOS devices were severely limited then compared to now. iOS's notification game is weak. There is no need for iOS to block the whole screen for an incoming cellular call. Devices can be very difficult to repair.

>Also, how about a replacement for the most important apps?

You moved the goalposts. My original reply was against the assertion that Apple does not permit certain apps because of "competition", which is not true; otherwise Spotify, Evernote, Todoist, etc. wouldn't exist. More than three quarters of native apps have proper substitutes available in the App Store. You expanded the argument to include apps whose functionality is not replaceable because of ideological grounds or design choices. I never posited that Apple permits all apps. This would be akin to me demanding an alternative to Google Play Services with comparable functionality. Anyways -

  - Contacts -> Full Contacts
  - Phone Dialer -> Simpler Dialer (Not a true replacement though)
  - Messages -> True, no alternative, as in you can't access SMS/iMessage
  - Safari -> Chrome/Firefox. Does an ordinary user care if they are running WebKit or Gecko? V8 or Rhino? As long as you are getting the features of your favourite browser (Chrome syncs your data between devices just like Safari would between Apple devices), why do you care if it's Chakra or V8? OTOH, I'm glad that Apple does not let Google roll its own JavaScript engine in iOS Chrome; otherwise surfing YouTube on iOS Chrome would likely wreck the battery life. This hits Android devices too, which do not have a VPx-capable hardware decoder.
>Can you even uninstall those? Are other apps even allowed to receive phone calls or SMS text? (To replace Safari - I'd require that it never get launched from a hyperlink in an SMS text message.)

No. You've got a valid point.

>I'd like to replace these because Apple only gives you the most bare-bones features for them and they strong-arm you into using Apple Maps and Safari from them. I'm guessing you're fine with that though.

I just sent a link over iMessage from my desktop to my iPhone. Clicking on the link opens the location in Google Maps, not Apple Maps. If there's a specific case that forces Apple Maps, I'd like to know. Every app I know and use (Outlook, food delivery, taxi services, online stores) opens Google Maps on my device, not Apple Maps. The only time I've actually opened Apple Maps in two years of owning an iPhone was to see whether navigation is available in my area. Practically, it doesn't exist for me. Even Apple's own Workflow app gives me Google Maps directions (I can choose between both when setting up the workflow).

>Pffft. Do you know the history of Apple? Their whole business model is a shenanigan on the users. I look forward to when their time is over.

Please give me concrete examples of "their whole business model is a shenanigan on the users." There are a few ups and downs, but overall, interacting with Apple is much more pleasant than dealing with other tech companies. That they are expensive is not a shenanigan; you know what you are in for when you're purchasing an Apple device. I've yet to see Apple engage in deceptive practices, misleading claims, or gotchas like Sony, which claimed full multimedia support on my TV, but it turns out only mp4 and mkv are supported, and only from an NTFS-formatted partition. And if you are using a hard drive, you can only use specific USB ports, because not all ports provide maximum current. Half of this info was buried in the manual, and half I had to figure out myself by taking a USB voltmeter to my TV. Or like Canon, which claimed "full compatibility across all operating systems", but it turns out duplex printing is not supported on macOS or Linux, which was confirmed to me in a forum post. Or like Dell, which claimed 2560x1440 @ 60Hz on my monitor but conveniently failed to mention, on the box or in the quick start guide, that it's supported only over DisplayPort, and if you have to use HDMI then you have to go out of your way to create custom profiles and whatnot. This was not even mentioned in the manual, and I had to scavenge for it in the Dell forums.

The closest I've come to a "gotcha" with Apple is the lack of DP MST support in macOS some years back, which was not readily confirmed in any Apple support article. The only other egregious issue I can remember is the refusal to acknowledge and fix the iPhone 6 bending issue. For all other issues, Apple generally came forth with an acceptable solution (like the free bumper case for the iPhone 4 cellular connectivity problems, free display replacements for the Staingate thing, free motherboard replacements for the 2011/2012 MacBook Pro graphics card issues, and free 6s battery replacements even after 3 years if you have a "defective" first-batch phone). Unlike, say, LG, which threw a tantrum over replacing the motherboard of my mother's Nexus 5X; or OnePlus, which quoted a repair bill higher than the cost of a new device; or HTC, which promised to repair a device in 3 days but delivered only after a fortnight; or Logitech, which claimed to process a refund within a week but took 45 days. OTOH, I have had exactly one interaction with Apple support over the past 5 years, and they replaced my phone in 24 hours (while quoting 7 days). And this is when Apple does not even sell directly to consumers in my country. I can only imagine what the support would be like with the middlemen removed. This is compounded by the fact that every other person in my close circle who has interacted with Apple support speaks highly of them, so my one interaction is not an outlier.

>I look forward to when their time is over.

I look forward to that time too, but alas, I'm pretty sure I'll be gone from this earth by then.

Apple has its fair share of issues (tax avoidance, being stingy with new product launches, etc.; I could jot down a whole list), but hands down the products are a joy to use and the company is pleasant to interact with, for an average consumer, compared to any other option on the market.


> What's the issue with that?

Processes (unlike threads) are for memory protection. There's no relation between the number of processes and battery life.

> What's the advantage when all the devices, your apps are supposed to be running on, share the same OS, platform, ABI etc? What's the point when the hardware is already limited in scope compared to alternative platforms

What if you want your users to be allowed to write code?

> Again, why would you want some random developer be able to download extra code?

Again, to allow users to share code (e.g. allow game mods).

> What prevents them from not misusing it for nefarious purposes?

The same thing that prevents them from having native code doing nefarious stuff: nothing.

> what prevents a third-party from compromising the developer's "update" servers and pushing malicious code?

Nothing, just like a native app can be compromised. That's why native apps run in different processes, and that's (one of the reasons) why native apps should be able to create their own processes (e.g. run a game mod in a separate process from the rest of the game).
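A minimal sketch of that separate-process idea, with a hypothetical host running untrusted "mod" callbacks: the mod runs in its own process, so a crash or hang is contained rather than taking down the host. (The Unix "fork" start method is used for brevity; a portable version would need a `__main__` guard for "spawn".)

```python
# Sketch: isolate an untrusted mod callback in a child process.
import multiprocessing as mp

ctx = mp.get_context("fork")  # Unix-only shortcut; see lead-in

def _worker(conn, fn, arg):
    # Runs inside the child; exceptions stay contained here.
    try:
        conn.send(("ok", fn(arg)))
    except Exception as e:
        conn.send(("err", repr(e)))

def run_mod(mod_fn, arg, timeout=2.0):
    """Run mod_fn(arg) in a child process; return None if it crashes or hangs."""
    parent, child = ctx.Pipe()
    p = ctx.Process(target=_worker, args=(child, mod_fn, arg))
    p.start()
    status, value = "timeout", None
    if parent.poll(timeout):          # wait up to `timeout` for a result
        status, value = parent.recv()
    p.terminate()                     # reap the child either way
    p.join()
    return value if status == "ok" else None

def double(x):   # a well-behaved "mod"
    return x * 2

def broken(x):   # a buggy "mod" that would crash its host if run in-process
    raise RuntimeError("mod bug")
```

The same structure works for real sandboxing once you add OS-level restrictions (dropped privileges, seccomp, etc.) on the child.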

> This is from a bygone era.

Okay, but this may happen again.


Ah, but now you're mixing browser makers with companies with websites. The ads, tracking, etc. are not the reason that browsers have become complex, although I agree that they're making the web (or at least that part of it) a less attractive proposition. (Although I'm personally still happy I can simply run an ad-blocker in my browser, which is more difficult for ads and tracking in native apps.)


That's true as far as it goes, but there are plenty of other reasons it's become so prevalent.

From a vendor's perspective:

- Web apps often have a reduced support burden relative to equivalent desktop apps because you have easier solutions for complete control over deployments: environments, live versions of software in play (often only one).

- Subscription payment models are convenient for vendors: they can help smooth out revenue but, critically, can also increase lifetime value of customers.

From the customer's perspective:

- One less software system to deploy, manage, and support,

- Managing backups is (should be!) taken care of for you,

- Lower on premise hardware costs,

- Your stuff is available from anywhere you have access to a browser,

- Can enable frictionless collaborative working.

With that said, it's arguable that a purely web-based SaaS model isn't necessarily a good fit:

- The problem of lock in is often worse. Even if you have options to export your data, is it in a format that makes it easy to import into a competing offering?

- Poor (or no) offline support, and unreliable operation with choppy connectivity.

- TCO can be higher, despite reduction in on premise support costs.

- Subscription payments can quickly mount up to death by a thousand cuts with respect to costs.

- Your data is not only offsite, but may not be stored in the same country or even on the same continent: a deal-breaker in some scenarios.

- Susceptibility to attack via DDoS, and other methods: at best service can be degraded or rendered unavailable, at worst your data can be compromised.

SaaS is like anything else: it can work really well (Office 365[1], GitHub, GMail), or it can be decidedly ropey (Google Docs), or anywhere in between.

[1] Within certain constraints - to me Office 365 still feels weak as a collaborative tool.


I agree with all of this, although I'd add that

> Poor (or no) offline support, and unreliable operation with choppy connectivity.

is also being solved - with additional complexity for browsers, of course.


Browsers like Dillo and NetSurf are also excellent for browsing Facebook, which maintains a remarkably functional HTML-only version at mbasic.facebook.com

The big advantage of such browsers is the zero risk of JavaScript doing sneaky things (like keylogging).


I wonder: who could even feasibly introduce another browser today? The task seems almost impossible.


Amazon and Facebook could both launch entirely new browsers, spotlighting them constantly on Amazon.com or across Facebook's properties, as with Chrome on Google.com.

Facebook already has far more cash and profit than they know what to do with. $38 billion currently, rapidly climbing. They'll add ~$20 billion to that in 2018. They could do a new browser just for kicks if they wanted to, and they have the vast global reach to generate some uptake.

Whether either could claw meaningful share away from Chrome is an entirely different question.

Samsung, Tencent and Alibaba all have the resources to launch entirely new browsers if it made sense.


Hiring talent to build browsers is not as easy as throwing money at the problem.

The bottleneck is engineers who have the know-how to build it.

Building your own browser could be a shitty business decision if you can't compete with Chrome for ad dollars.

A browser makes sense if you can bundle it with an OS.

Android: Chrome, iOS: Safari, Windows: Edge.

Facebook makes a shit ton of money from ads people click on in Chrome.


Alibaba already have their own browser of a sort (https://en.wikipedia.org/wiki/UC_Browser). As I understand it, it's some kind of thin mobile browser which renders on the server side.


There are multiple versions of UC browser. My understanding of it is:

- UC Mobile on Android now uses the Blink engine; it used to use WebKit.

- UC Mini (I think that's the name) has a server-side renderer, but its "UC engine" is a derivative of Gecko (not checked recently)


Samsung actually has a browser: http://www.samsung.com/global/galaxy/apps/samsung-internet/

Quite similar to Chrome, but has support for ad blockers etc.


Amazon sort of has their own browser called Silk. It's based on Chromium so it's not a built from the ground up endeavor but has been customized on their Fire devices to 'enhance' the browsing experience in a way that seems to benefit Amazon.


Samsung makes their own browser on Android.


It uses WebKit/Blink underneath, so that doesn't really count.


Mozilla Research is sort of doing it with Gecko.

But yeah, from scratch is hard... Google didn't even do that, they started from WebKit.


With Gecko? It's their original engine. You probably mean Servo (the experimental Rust engine).


I guess it's not wrong to call it their original engine, just remember it's a two-decades-old development that started at Netscape.


Well, arguably started at DigitalStyle.


Can you expand? I thought I had a good grasp of Mozilla’s history but this is the first time I’ve heard of them.



So my understanding is following Netscape's acquisition of DigitalStyle, the decision was made to replace Netscape's existing layout engine with something based on what DigitalStyle had in their editor (which, unlike Netscape's existing one, supported things like reflow), and it was this combination that was open-sourced. Notably, it was only really the layout engine that was replaced: the HTML parser and the JS VM survived unchanged, hence the name NGLayout for what became Gecko.


And webkit started from KHTML


I would first target a subset of HTML, for example static documents instead of web apps. You would have a specification like AMP which restricts what elements you can use. If a document used the full spec, I would fall back to another engine (e.g. WebKit). You could have full documents in iframes inside limited documents (but cross-frame scripting might be hard to get right in that case).

Also, I would try to make it as modular as possible: the layout engine, CSS engine, JS engine, etc. as libraries; use the system codecs instead of bundling your own codecs; and so on.

And it would help to have an extensive test suite, which you could take from other browsers.
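A minimal sketch of how that dispatch could look (all names hypothetical; a real implementation would need a proper HTML parser, not a regex scan):

```python
# Hypothetical sketch: route a document to a fast "lean" engine when it
# only uses a whitelisted subset of HTML, else fall back to a full engine.
import re

LEAN_TAGS = {"html", "head", "body", "title", "p", "a", "ul", "ol", "li",
             "h1", "h2", "h3", "table", "tr", "td", "img", "em", "strong"}

def tags_in(html):
    """Collect every tag name used in the document (naive regex scan)."""
    return {m.group(1).lower()
            for m in re.finditer(r"</?([a-zA-Z][a-zA-Z0-9]*)", html)}

def choose_engine(html):
    """Return 'lean' if the page sticks to the subset, else 'full'."""
    return "lean" if tags_in(html) <= LEAN_TAGS else "full"
```

The interesting engineering would of course be in the lean engine itself; the dispatch is the easy part.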


The difference between static web documents and web apps isn't the set of HTML they use, it's just that the latter are designed to deeply integrate javascript (and/or, to futureproof this comment, WebAssembly) into their UI, rather than do all of the logic on the server between requests.

The only way to ship a browser that doesn't render "web apps" would be to simply not support scripting of any kind. That would permanently break much of the web unless you offered a way to opt in to scripting the way current browser users who run script blockers can. But then you just have a regular browser with script blocking turned on by default.

Having different engines for "static" and "web app" documents, and defining a proprietary standard for the former as a restrictive subset of HTML would be unnecessarily complex and hostile to web developers.


There is a subset of web pages that only use a subset of features - Hacker News is an example, but better examples would be Wikipedia, many news sites, documentation of all kinds, and so on. You have basically static HTML, and use JavaScript for 1) progressive enhancement and decoration (what used to be called "DHTML"), 2) ads, and 3) more and more for client side rendering. You can't do much about 3 right now, but I would offer support for 1 and 2. I would definitely offer scripting, but just not all features.

You could autodetect a "lean html" site, or you could use a marker such as "<html lean>" like AMP does.
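To make the opt-in concrete, a lean-marked page might look like this (the `lean` attribute is invented here, by analogy with AMP's `<html amp>` marker):

```html
<!doctype html>
<html lean>
  <head><title>A lean document</title></head>
  <body>
    <h1>Plain content</h1>
    <p>Only the whitelisted subset of tags, with restricted scripting.</p>
  </body>
</html>
```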

I don't think this is hostile to developers at all, unless AMP is already hostile. You would have a subset of the web platform, yes, but if you only use that you are likely to have a faster and leaner website - even without a new engine. I would view it as a guideline or a best practice, rather than a restriction.

Side note: I think the co-mingling of documents and apps is the source of a lot of problems today. Why can ads inject arbitrary javascript into my website?! On mobile, I am regularly redirected to annoying ads that proclaim my phone is infected, and make my phone vibrate and make noise. A news article shouldn't be able to do that!


>I would view it as a guideline or a best practice, rather than a restriction.

>I would definitely offer scripting, but just not all features.

That's a restriction. If site authors have to abide by that or else not have their content render in your browser, you're restricting their freedom to publish the content they want or make their own decisions about what code they choose to write.

But if you still have a "backup engine" that will run it anyway, then what's the point?

>I don't think this is hostile to developers at all, unless AMP is already hostile.

I would think Chrome was hostile if it gave preference to AMP over standard HTML. Those aren't decisions that browsers should make.

>Why can ads inject arbitrary javascript into my website?!

Because that's what the site authors decided to publish. Like it or not, that's the way the web is intended to work - you request a URL, you get back whatever the server decides that URL points to. Changing that dynamic would change the web at a fundamental level.


I think you are misunderstanding me. I was just trying to answer "how could you conceivably write a web browser engine if you are not Apple/MS/Google/Mozilla". My answer is: focus first on the happy path. There are a ton of websites that could be rendered with a simpler browser engine - I believe also faster and with less memory. Fall back transparently to an established engine if necessary. This is similar to how a tracing JIT compiler makes assumptions about the types of variables, and if these are violated (because the language is dynamic), falls back to the legacy path, an interpreter.

I further think, if you sell it right, you could get people running sites like blogs, documentation, wikis, to opt in to "strict" or "lite" or "document" mode, but that is an orthogonal issue.


> > Why can ads inject arbitrary javascript into my website?!

> Because that's what the site authors decided to publish. Like it or not, that's the way the web is intended to work

I'd argue that the web wasn't intended to involve executing code, but rather it was originally envisioned to be a means of viewing documents.


>I'd argue that the web wasn't intended to involve executing code, but rather it was originally envisioned to be a means of viewing documents.

That's fair - but it doesn't imply that it was intended to not execute code, so much as that it wasn't feasible to consider at the time. The web wasn't originally intended to display images either, but it evolved over time. Certainly, in either case, javascript was never intended to carry the burdens it's been made to, but people found it useful to have a Turing capable language as part of the web, and here we are for better or worse.

And the web in the near future when WebAssembly takes off is going to execute binaries compiled from arbitrary languages, of which javascript will be only one option. That's going to be awesome in both senses of the word. Not what the web was originally intended for, and certainly not the best of all possible worlds, but I would argue that it's still better than static documents alone, in terms of the possibilities it provides to publishers and users.

And maybe the pendulum will swing the other way at some point. Maybe people will choose to write lighter pages and not use unnecessary code. But if they don't, that's still their choice to make, not yours or a browsers to make on their behalf. And unlike with C, C++, Java, etc, you can turn javascript off if you want and still use some of the web.


It’s true that we can look at most websites and generally agree on which ones are “light” (e.g. HN or Wikipedia) and which ones are “heavy” (e.g. Gmail or SoundCloud), but that doesn’t mean it would be easy to define a specification that would allow all and only light websites to be built.

An extremely light and content-focused spec like Gopher would exclude a lot of websites that we would consider light. Terms used to describe light subsets of web capabilities, like “DHTML,” do not appear to be well defined.

My initial attempt would be to exclude XHR/fetch, dynamic URL/history manipulation (window.history), and loading resources from other domains. But the latter would probably exclude things I don’t oppose (like CDNs) and things I don’t necessarily support but which would surely drastically impede adoption (like third party ads/analytics services).

Can you propose a better and more viable spec?
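For what it's worth, part of such a spec can already be approximated today with a Content-Security-Policy header rather than a new standard. Something like:

```http
Content-Security-Policy: default-src 'self'; connect-src 'none'
```

Here `connect-src 'none'` blocks XHR/fetch/EventSource/WebSocket, and `default-src 'self'` blocks loading resources from other domains (so CDNs and third-party ads/analytics too). CSP has no directive covering window.history manipulation, though, so it only gets you partway there.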


> I don't think this is hostile to developers at all, unless AMP is already hostile.

I think the prevailing opinion on Hacker News is exactly that.


My impression is that HN’s hostility toward AMP is caused less by the spec itself and more by Google’s leverage of its market power in search to incentivize AMP adoption, as well as its “web-breaking” behavior (like sending you to ugly and self-serving google.com/amp/whatever URLs).


I don't see how a subset of HTML could be hostile. AMP is not Google Search.


A subset of HTML isn't hostile. A browser that defaults to only rendering that subset is hostile, because it relegates any content which doesn't conform to that proprietary spec to second-class status.


That sounds like a much more graspable technical goal, but you would effectively be competing not with modern browsers but rather with the web as a whole, which is a much more difficult social/political goal.

If at the same time you could encourage widespread adoption of your HTML subset, then you may be onto something. This is essentially what Google did with AMP, their incentive being their caching network and apparent favoring in search results.


Not sure why you're downvoted, this is a good plan of attack.


Well you could also fork Chromium I guess.


Except that that doesn't really broaden the ecosystem, unfortunately.


Given enough time, it does. People used to talk about a “WebKit monoculture” on mobile back when both Safari and Chrome were built on WebKit (although, they enabled different features and used different JavaScript engines). After Google forked WebKit to create Blink, WebKit and Blink started to diverge with different development priorities resulting in different improvements being made and different features being implemented. So as time goes on, and code gets added, removed, and rewritten, much of the commonality eventually disappears over time.


And with Puppeteer/Devtools programmers can stay in text editors.


Dillo for static documentation is neat: you can have twenty tabs open and still use only 32MB total.


Yeah, "moving the web forward" sounds a lot like building barriers to entry. On the other hand, these browsers are open source software, so I'm not sure that's a great argument.


I switched back to Firefox 57 about a decade after abandoning it for Opera. I remember the advocacy movement around Firefox 1.0 which I was quite involved in. I thought (and still think) it was great.

I don't plan to go quite as far this time, but do plan to try and convince all my friends and family to get on board. It'll be a harder task for sure since Chrome is not IE6, but as the resident IT guy that everyone asks for help, I know I have some sway.

I'm just waiting for Quantum to land on Android, because I know enough of them will have issues with the current version and I want to skip over those problems to leave a good first impression.


Firefox on Android is so frustrating. Things like no swipe down to refresh, despite all other apps implementing that pattern.

On my phone, scrolling by swiping is slow; it scrolls such a short distance, meaning going down a long HN page takes many swipes. All other Android browsers seem to get it right. Fonts also seem "off".

It's these small things that make the experience frustrating.

I truly want to use it, but it seems the designers of the app don't use other browsers, or have no interest in making the switch and Android FF easier.


Things like no swipe down to refresh

This is the thing that made me switch to it from Chrome. Something as experience-breaking as refreshing the page isn't something that the user should encounter just through energetic scrolling.


I can see the flip side on this too though - the user action is pretty universal in apps, and it's disruptive when 9 out of 10 apps use this and you're trying to get fresh data out of app #10. There should be some consistency in user expectations across apps, or a clear way of performing the same action.

That being said, I agree with you and really like that FF doesn't have a motion for refresh. It's mildly annoying sometimes, but usually it's only on sites very clearly constructed around the mechanism which have no other way of refreshing the page without going somewhere else first. HN is convenient enough on mobile with FF because you can just tap on the HN Title to refresh the page. Other pages are not as kind.


I expect swiping down to take me to the top of the page. It's frustrating when it does more (like reloading the page). I guess I'm the wrong person to ask whether 9/10 apps do it; I usually find a replacement when they do.

Firefox, Pale Moon, and Chrome all have a refresh command in the "3 dots" menu. I usually use that.


Swiping down only refreshes when you are already at the top of the page, and it even gives some visual feedback to see that you have indeed reached the top and you are now initiating a refresh gesture.

I'm a quite energetic swiper and, to be honest, I have never found a situation where I wanted to return to the top and ended up refreshing due to too much swiping. But YMMV, I guess.


Personally I want to disable "swipe down to refresh" system wide and instead have a button at the top of those apps.

"on sites very clearly constructed around the mechanism"

Browsers have refresh buttons for that.


Yeah, I'm a heavy Firefox for Android user and I _really_ want to use it but the performance is really rough. I wish I had the sort of skills to help contribute to performance, but the whole thing is so huge I wouldn't even know where to begin.

At this point if Chrome implemented the "open tab in background" feature (when I click on a link in an e-mail I don't want to switch directly to the browser) I would probably switch back.


Mozilla don't really care about mobile. The (too) small mobile teams do care, but the company as a whole doesn't - which mostly explains why it's so rough.


Firefox Focus is my go-to browser on Android for 'just looking things up'. It's perfect for Wikipedia and IMDB (when someone points at the film you are watching and asks who the actor is). I only use another browser if I want to keep the page. No cookies, no bookmarks, no trackers


Sadly, Focus appears to just be another Chrome skin with some ui and privacy tweaks, and I think it's pretty telling that Mozilla did this rather than actually use FF mobile as the basis for this product. If you want to encourage browser diversity it doesn't really help.

I use FF Mobile because I want competition and I do like the features Firefox offers, but the performance and UX on Android are definitely lacking relative to Chrome.


Just go into Private Browsing in Android Firefox and you have essentially the same thing as Firefox Focus, except with Gecko underneath it and all the usual amenities of a full-featured browser.

Focus is meant for average users, who don't necessarily understand what Private Browsing does.

Having it as an Android Webview wrapper is nice, because it makes the APK really small. This way, users don't have to have two full browser APKs installed.

And well, it's not like it sways webpage owners one way or the other much, to have users use Firefox when they have all trackers and as a result essentially also all ads blocked. They don't make revenue off of those users, so why should they worry about making their webpage work for those users?


To me the UX is a major plus in FF on Android (albeit with plenty of rough edges in the more hidden features). But yeah, the lack of investment in mobile performance sucks (but hey, they spent hundreds of eng months on making desktop not suck, so yay I guess?).


And you take this wisdom from where? They tried to build an entire mobile operating system before. Clearly they do care.


The thing that annoys me the most is changing tabs: on Chrome/Brave we can just swipe the top bar to switch tab. On Firefox, we need to first tap the tab button, and then find the right tab by looking at a huge list of tabs. I find it much less practical, and that's what got me back to Chrome.


Ha yeah. Firefox seems to want to encourage people to use fewer tabs, by having the tab-bar on top be a scroll-bar, so it's super annoying to have many tabs open.


All you need to do is go to your awesome bar and type "%" as the first character, then space, then start typing the string you're looking for.


Yeah but on mobile it's annoying to type % .. it's so much faster to just swipe the address bar


Swipe down to refresh interferes with Google Maps and other canvas-based interactive content on Firefox Focus. I just opened a ticket.


Check out FF nightly on Android. It has the new features. I use it and it's great


You're not going to install a nightly build on your friend's or mom's phone though. Your parent is already sold on FF; just waiting for Quantum to be released on Android before pushing it.


I can think of only one crash in the best part of a year of using Firefox Nightly exclusively on my phone. (Firefox Nightly on my laptop has been far more heavily used for a number of years and has had a handful of crashes and a few other issues, but still fewer crashes at least than I get the impression production Chrome has.)


I've had a few crashes with it. But nothing serious or really different from Chrome. There's also FF developer edition, which is more stable.


Firefox on android kills my phone, gobbles tons of memory. Nexus 5x, no plugins.

I've given up on it now.


Have you tried Firefox Focus?


Focus is Chrome with a different skin and ads blocked. Literally: it's WebView, which is Chrome.


I started before it was called Firefox, and it's a similar story.

I abandoned Firefox last year for the Chromium-based Opera. So I had been biting my nails for years: Firefox was slow, I knew Chrome was better, and still I waited.

As the resident IT guy, I used to force Firefox as the default browser, first on over 150 computers, later growing to 250. But every year since the introduction of Chrome, I met more resistance: Chrome was faster and better, according to the employees who knew IT. Then, maybe 4 years ago, someone who is absolutely computer illiterate came and asked if she could use this Google thing (Chrome). I asked for a reason, because I was curious how she knew; it turned out someone had helped her using Chrome, and the speed was much better. She has loved using it ever since. And that wasn't the only case; there were many other similar examples, from friends and family as well.

It was at that time I gave up.

Before then, many within the Mozilla communities were begging for a better, more performant browser. That was right after Firefox 3: the goal of Firefox 4, the e10s project. We thought JS was the problem; we advocated for SpiderMonkey being faster in benchmarks, but not much for the UI or the general feel of the browser.

Mozilla then literally gave up on making a better browser and on its users, and went on to build Firefox OS. Their ideology was that JavaScript is the best: build an OS with JS. It was pure luck they had a small group of people crazy enough to work on Rust, and on Servo as an experiment, which later turned into what we have today: Quantum.

From the outside it was what I call a management and vision issue. And Mozilla left a very bad taste in my mouth. There used to be an article from Ars every month showing the decline of Firefox usage. I kept wondering when they would become worried, since less market share means less money from the search engine deal; I don't think they saw that as a problem until fairly recently.

It wasn't until a month ago that someone on HN posted a comment on Mozilla that helped me grieve. It was never their intention to have the best browser, or to care for user experience; that wasn't their main goal. Their goal, as it always has been, is an open Web; all else is secondary, merely a means to that first goal. They weren't here to compete, they were merely here to make sure the Web is open.

It is good they are now back to focusing on Firefox. With WebRender and more Quantum parts landing, Firefox's future is bright. I am using Firefox now, but I won't be advocating for it any more; I guess everyone can just choose what they want.

Sorry for the long rant, but that is the result of someone who has been following them since the Netscape era.


I understand that this is the view you had from the outside, but I think it's incorrect as someone who was at Mozilla for nearly 4 years.

> It was pure luck they had a small group of people crazy enough to work on Rust, and Servo as an experiment which later turned into what we had today, Quantum.

This is the crux of it. You see this as luck, but it wasn't luck at all! Someone had to say "we're going to fund this crazy work that might not pan out". They did so explicitly because they saw the same things you saw: Chrome was fast and it was getting harder to keep adding the features and standards required while being secure and fast.

Building Firefox OS wasn't completely at odds with wanting to build the best browser, because Firefox OS was fundamentally still a browser. There were other problems that caused the initiative to fail.

So here's the thing, Mozilla has had hits and misses. Rust and Servo were a long-term bet that is paying off. The decision to make the bet was not pure luck, but a real choice.


Quite so. You don't accidentally pay people to work a project for years, and I can assure everyone from first-hand knowledge that what has happened was one of the desired possible outcomes from the very beginning of Rust and Servo.


There is a third major independent option: Microsoft Edge.

I assume a lot of web developers (like me) aren't using Windows, but we can at least help people out by making sure our applications work fine on those browsers. Since IE 9, with a polyfill, most things just work unless you need some very recent HTML5 APIs.

Microsoft provides free VMs for testing different browsers here: https://developer.microsoft.com/en-us/microsoft-edge/tools/v...


The situation with Microsoft Edge is strange. Not only is Microsoft Edge not available for older versions of Windows, it is not supported on Windows 10 IoT Enterprise, Windows 10 LTSB, or, most importantly, Windows Server editions. Virtual desktop hosting (VDI/terminal servers) has to be based on Windows Server (licensing requirement). This means that Edge can't actually be the only supported Microsoft browser for Windows: if you do corporate work, you still have to deal with Internet Explorer 11.


This is pure speculation. Maybe they're doing something shady with Edge that would get them in more trouble (or get them found out) if they put it in enterprise systems, but they see the regular consumer market as easy targets who can't defend themselves. That might explain why they're pushing Edge so hard.


IoT Enterprise and LTSB are for embedded systems, they lack many of desktop applications and features (like Cortana) found in "regular" Windows. Windows 10 Enterprise is the desktop version, it has everything in Pro, and then some. There's nothing "shady" about Edge.

https://www.microsoft.com/en-us/windowsforbusiness/compare


> IoT Enterprise and LTSB are for embedded systems, they lack many of desktop applications and features (like Cortana) found in "regular" Windows.

I view that as an advantage. That's why I use LTSB whenever I set up a Windows desktop for anyone at work.


To be honest, I think that it hurts Microsoft more than anyone else. It means that corporate desktops using VDI, and presumably Web kiosks, now have to use a third-party browser. In some cases, Edge literally can't be an organization-wide standard, and Internet Explorer is now frozen, so people will have to move away from it over time. For example, every VDI Windows desktop provided by AWS Workspaces is actually pre-loaded with Firefox.


I think the reason it's not available on Server is that it's a Universal Windows app.

Generally though, its UI and feature set are minimal compared to Chrome and Firefox, but its rendering engine is decent.


Except Edge doesn't even support basic things like Server-Sent Events: https://developer.microsoft.com/en-us/microsoft-edge/platfor...

They also haven't actually replaced IE11 which still has plenty of usage out in the wild. MS had a good start but they continue to stumble in their efforts today. Firefox has done much better with 1/1000th of the resources which should say something.
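For reference, SSE demands very little of a browser: it's just a long-lived `text/event-stream` HTTP response with a line-based framing that `EventSource` parses. A sketch of that framing (the helper name is mine, usable from any server framework):

```python
def sse_event(data, event=None, id=None):
    """Format one Server-Sent Events message as it appears on the wire,
    consumed in the browser via `new EventSource(url)`."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if id is not None:
        lines.append(f"id: {id}")
    # Each line of the payload gets its own "data:" field.
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"  # blank line terminates the event
```

That's the whole protocol, plus automatic reconnection with `Last-Event-ID`, which is why the missing support felt so conspicuous.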


To be fair, SSE never got much adoption to begin with. Their WebSocket support is top notch.


Because Edge and IE11 exist, that's why. It's a fantastic standard that would make things much easier for many real-time apps. Even the dev tools in Edge are subpar compared to Chrome and Firefox.


I'm not convinced. There's not-terribly-bad polyfills for SSE that go down to IE8 or so. Admittedly they don't support all details of SSE but they support a good chunk. I've used SSE pretty extensively in a number of projects and I can't say IE/Edge is why I stopped using it. It's just.. there's better options out there :-)


If only MS would just make their software available cross-platform.


Wasn't it just released for Android and iOS? (I get that that might not be what you meant...)


The iOS version is a WebKit wrapper by necessity (Apple mandates that all alternative browsers use the system-bundled WebKit for user experience reasons), and while it’s possible they’ve ported Edge’s engine to Android I’d bet that Edge for Android either wraps WebKit or Blink/Chromium. The primary value proposition for Edge on mobile is syncing of tabs/bookmarks/history/etc, not its engine.


> (Apple mandates that all alternative browsers use the system-bundled WebKit for user experience reasons)

So far, this is the only thing that really drives me bonkers about iOS and has for a long time. I'd love to be able to use an actual third-party browser with a third-party rendering engine, even if it meant having to install a particular signing certificate or some such. Doubly so for using browser-specific extensions. Who knows, maybe iOS 15?

(While I'm wishing for ponies, the ability to change OS-level default applications would be nice.)


I wish someone would just port Firefox mobile to iOS. You don't have to put it on the app store (which is impossible). Just zip up the xcode project, let me swap in my developer key, and run it on my phone. The protections against JIT and so on are enforced by policy, not by technology.

I understand it would be a lot of effort for little gain. But if more developers would offer their stuff as self-signable "xcode projects" (maybe even just a .dylib and some wrapper source), it would be really cool.


Gecko already runs on iOS, I just don't know what the exact build process is. The trouble is, a browser is much more than an engine, and why would people sink effort into an unreleasable product.


Besides the obvious benefits to Apple, I believe the reason behind this is because allowing JIT execution would open up a security risk.


Yes, because they would need to allow write-executable memory pages. Disallowing those closes the risk of executing non-verified code.


Yes, the Android version is also just a Blink wrapper. It's almost impossible for Microsoft to port Trident/EdgeHTML to other operating systems, because it's so deeply integrated into Windows.


Way back when, there were versions of IE 4, 5, and 6 for non-Windows OSes such as HP-UX, AIX, Solaris, and Mac OS (Classic and X both). It is not unreasonable that they could do it again.


Note those releases were based on an entirely different codebase to the Windows version of IE.


Internet Explorer for Mac was a separate codebase from the Windows version.

The Unix workstation versions (HP-UX, AIX, etc.) were built with a commercial equivalent to Winelib: a library that implemented Win32 APIs on top of Unix and X11.

I have no idea how they managed to build the IE code on a compiler that wasn't MSVC.


When you say "entirely different" you mean they didn't even share the same rendering engine?


The sibling suggests that non-Mac ports shared the Trident rendering engine with IE/Win. IE/Mac used Tasman, which was totally different (and was the first implementation of many CSS 3 features); as far as I'm aware there was no code shared at all between IE/Win and IE/Mac.


Those were basically just independent pieces of software, and had their own set of features and bugs (IE5.5 on mac was notoriously buggy)


5.0 and 5.1 were pretty bad. 5.2 was surprisingly standards compliant for the time (apart from `clear` inheriting, that was painful). 5.5 was a Windows only version of Trident, never available on the Mac (the Mac’s rendering engine was Tasman).


> Apple mandates that all alternative browsers use the system-bundled WebKit for user experience reasons

Not entirely true: Every browser developer is free to use whatever engine they like on iOS. Mozilla could use Gecko, Google could use Blink. They just don’t do it, because what’s not allowed is JITs (because security), thus 3rd-party JS-engines would always be slow.

So the reason everybody uses WebKit on iOS is not that their own engines are forbidden, but only JITs are, so Apple's is faster.


I believe you're incorrect. First of all: https://developer.apple.com/app-store/review/guidelines/

2.5.6 Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript.

The JIT thing was true, but a few years back, they lifted this restriction, see "4.7 HTML5 Games, Bots, etc." for its current incarnation which also says "your app must use WebKit and JavaScript Core to run third party software".


Would it be possible to use WebKit for JavaScript execution, but still do rendering in a different engine?


I don't think you could plug directly into the JS engine to allow it to interact with your DOM.


> The iOS version is a WebKit wrapper by necessity (Apple mandates that all alternative browsers use the system-bundled WebKit for user experience reasons)

Which is why viewport values like maximum-scale, minimum-scale, and user-scalable do not work on iOS10+, no matter the browser.


Android version uses Webkit as well


I'm curious, why does the web need multiple independent client implementations?

Note I'm not trolling, I'm seriously asking the question. I think it's assumed to be true but not necessarily true. I'm probably wrong.

As a counter-example, AFAIK there is no Python standard that lots of teams make competing implementations of. There is basically one Python and a few incomplete copies, and those copies have little to no say in the spec.

Then we have C++ where there are independent implementations but every implementation embraces and extends with optional non-standard features you can enable.

Linux doesn't seem to have much of a standard. Is there a spec? Are there multiple implementations or are there just multiple distributions of the same implementation.

There's effectively no spec or standard for phones. There are two proprietary OSes, Android and iOS. Android happens to be open source, but there's no spec that multiple teams can implement AFAIK. There's just an API that Google decides on its own what to add/change. I guess you're free to make another implementation by copying the API, but there's no spec to follow; you'd just have to try to keep up.

Would Android and/or Linux be better if there was a standard, committees deciding what gets in it and multiple implementations?


The last time we had a web monoculture (IE6), it was not a pleasant experience. MS had decided it was over, they won, no more need to invest resources in that (and really, they would have preferred you write native Windows apps, but IE-only web pages, which still only ran on Windows, were a good second choice for them). It's also easy for a single dominant vendor implementation to impose terms on required periphery tech even if the primary standard is open (see the fate of Apache Harmony).

You could argue the Python/Linux case are safer because:

* They are themselves independent open source projects. While Chromium is open source, if Google pulled out, I'd expect the remaining development effort to be on par with e.g. Pale Moon

* You could argue there _are_ multiple implementations of Linux-as-OS, if you're not just looking at the kernel. See Ubuntu, Red Hat, Arch, Alpine, etc. There's no standards body (unless you count stuff like POSIX or LSB, but then they are only subsets of functionality compared to what users expect of a full OS) but still things become standard by community consensus (e.g. systemd)


> While Chromium is open source, if Google pulled out, I'd expect the remaining development effort to be on par with e.g. Pale Moon

I don't think that's quite true, for several reasons:

* The biggest external contributors (Opera, Samsung, and Intel) put in more development effort than Pale Moon has, by a wide margin. (I think non-Google contributors to Chromium outnumber those working on Opera's Presto at its peak, though less than Opera's desktop product and Presto at that time.)

* Last time we saw any browser vendor leave the browser engine business (i.e., Opera killing Presto), we saw plenty of people jumping ship to keep working on browser engines (and not just those let go), so if we were hypothetically in a Chromium monoculture it's unlikely that Google pulling out would stop every Googler currently contributing from continuing to do so.

Also, when it comes to the Linux kernel, remember that both Windows and FreeBSD implement the Linux kernel syscall ABI too, which you can argue means they are alternative implementations of the kernel.


Let me answer your first question with few examples:

When Chrome started taking off in 2010, they introduced a unique ID per browser user that's sent to their servers. This has nothing to do with the web standards, but it's a non-optional added feature.

When gmail took off in Canada, Gmail started requesting an actual phone number, to open an email account.

The above are examples of what happens when a company starts having an advantage.

When a company starts having a monopoly, expect stuff like mandatory sign-ons just to browse, and such.

In China, for example, you must register with real-life credentials if you want to post for the masses online.

TLDR: monocultures are bad, monopolies are bad.


As soon as the binaries start betraying their user, there needs to be an alternative. Open source goes a long way to prevent the people in defacto control from adding things like requiring unnecessary accounts, naggers, ads, telemetry, backdoors, excessive bloat, etc. But sometimes the maintainers can't be reasoned with beyond a tolerable point and a lot of software and people already rely on it heavily. If the thing is a standard then someone needs to fork (if open source with a proper license) or reimplement. If it's a closed, proprietary foundational piece of software (OS, driver, word processor, compiler, browser) then someone needs to create an alternative from scratch. Those alternatives may very well start betraying the user at some point in which case, goto 10. e.g. We don't need a competing Linux because Linus is currently on our side.


And in fact, a viable competitor alone is often enough to stop abusive behavior from the market leader. If the market leader knows that they will quickly lose market share if they start abusing the users, they will not abuse the users.

So a competitor like Linux or Firefox is helpful even if market share never gets very high.


Python has multiple implementations, like PyPy and others that I am not familiar with. Python not having a clear spec is a disadvantage, although it is a pretty stable language.

C++ has standards, all compilers implement them, and people stick to the standards; no idea what you are talking about when you say "embraces and extends with optional non-standard features".

Linux implements the OS interface standard called POSIX, and portable programs stick to it.

How do you know what you are writing against is going to work tomorrow, if there is no standard?


Python has multiple implementations such as CPython, Cython, PyPy, IronPython, Jython. However, because the de facto CPython has no spec, every other Python implementation has behavioral differences from CPython and there is no real way to resolve all the differences without heavy burdens.
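A concrete illustration (my own toy example, not from any spec or project): CPython's reference counting runs finalizers deterministically, a behavior that no language spec guarantees and that PyPy, for instance, does not reproduce:

```python
import platform

log = []

class Resource:
    """Toy object whose finalizer records that it ran."""
    def __del__(self):
        log.append("finalized")

r = Resource()
del r  # CPython: refcount hits zero, so __del__ runs right here

# On CPython, `log` is now ["finalized"]; on PyPy the finalizer may not
# run until a later GC cycle or interpreter shutdown. Code that relies
# on the CPython timing silently breaks on other implementations.
print(platform.python_implementation(), log)
```

Because CPython's behavior is the de facto standard, every other implementation has to either mimic details like this or break someone's code.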


Do you want one already very large company to be the only browser provider? That company could start to make its implementation less user friendly/less friendly to competition; but users would have no alternative but to use it. e.g. "Let's track everyone's usage, and prevent adblockers".

Not sure that applies to language implementations. Imagine if C++ started trying to send user data to some organisation.


Python's a very good example of the problems you get into with a single implementation defining the platform (by fiat, or de facto), even if it's controlled by a completely benign and incorruptible entity.

The problem is that then bugs in your implementation become part of the de facto standard that defines your platform. Downstream code is very likely to depend on those bugs. You can't fix those bugs without breaking things. Your platform accumulates warts at a higher rate than it otherwise would. Any attempt to make major changes to the platform implementation becomes incredibly difficult because of the need to maintain bug-for-bug compatibility.

All these things have clearly happened to Python. It's one reason why despite massive efforts over a long time, there is no fully compatible Python JIT today.

If you follow Chromium development you will quite often see developers consider whether it's going to be safe to fix some bug in Chromium's behavior. Quite often part of the justification that it is safe is "Firefox doesn't have this bug, so major Web sites probably don't depend on it".


You are comparing a global standard everyone has to use to individual projects. The web is a standard, the rest are individual projects that no one is forced to use and all have n number of alternatives. They are completely different things.

Here is the thing. Many people simply do not care about things like open source even though they may use it.

Even when Stallman and Linus started, many people didn't. Now the difference is that these guys are doers and helped create a uniquely rich ecosystem that gives real choice to every following generation and something to build on.

But open source typically works with a small team starting things and then growing. The out-of-control, sometimes gratuitous complexity is a real blocker, as you need large teams to implement right off the bat.

Since open source cannot pay anyone or afford large teams in the beginning, this effectively rules out open-source alternatives and choice. This is a pretty nasty outcome, but people have to care about these things just like Stallman and others did, and fight persistently with an eye on the consequences of decisions taken today and on undue complexity.

The web (in terms of browsers) and corporate-sponsored projects like systemd come to mind. Both need large teams to develop alternatives, i.e. other corporations, and thus de facto rule out open-source projects and choice.


iOS and Android mitigate some of the problems by cutting off backwards compatibility at some point. "Yeah, your app broke in the new iOS/Android ... fix it". Python's tried some of that, but because people can carry on supporting/using Python 2, the result has been a crippling fork.

Linux kernel devs spend a lot of energy trying to maintain compatibility with warts, and still break things once in a while. Also the Linux kernel interface, while complex, is nowhere near as complex as the platform-level API for iOS/Android/Web.


How would Stylo have been possible in the world where Chromium was the only browser engine?

Chromium has not, and will not, accept Rust code in the tree.


I don't think there is a need; however, market forces will ensure that competition exists. If the incumbent starts to betray the user, alternatives will automatically arise.


That's optimistic and naive. See Intel/AMD. The problem there is that the cost of attacking the incumbent is billions of dollars, and generally you see no ROI at all until you've spent multiple billion.

Market forces can ensure that competition will exist as long as barriers to entry are kept low enough that people can afford to compete. The larger the barriers are, the more aggressively an incumbent can be abusive without needing to fear competition.


But the use of an alternative is only possible if the alternative is at or near the same level of usability as the market-leading product. If that's not the case, then most users won't switch; there is also the "mental burden" of switching to a product you're not used to, so people stay in their preferred loop.


I tried to switch over. I switched from Chrome to Firefox completely for about two weeks after Quantum.

I got random high CPU usage from Firefox. Some random pages just spiked the CPU and everything slowed down. Same thing on both my workstation and laptop.

I also didn't find a way to make OpenSearch work. In Chrome I can just type part of a website's URL, press Tab, write a search query, and get a search on that website.

That is worked so deeply into my daily routine. It just became an annoyance and a dealbreaker. :(

I really, really want to switch. But Chrome works really well, and my browser is such an important tool for me that I really want to use the absolute best.


As far as I can tell, anything Javascript heavy performs significantly worse on Quantum than on Chrome.

The benchmarks I've looked at seem to indicate that this shouldn't be the case, but I'm thinking maybe Google has put in some "real life application" optimizations for certain configurations that can't be measured easily with benchmarks? Maybe they've learned some lessons from Angular? Just spitballin' here.

Kinda like how Apple managed for a long time to make everything a lot more "buttery" than the competition, even though they usually pushed hardware with specs that aren't at the tippy top.

Additionally, I think whatever version of Flash is bundled with Chrome is less CPU-inflaming than the actual current version of Flash, as Flash sites seem to perform worse on Quantum as well, and Quantum relies on your system-installed Flash.


There was an article some time ago by someone from Chrome/Google which said that they had optimized for benchmarks before, but benchmarks are not real-world usage, so they stopped and started to optimize for real-world usage instead. They did it because optimizing for some benchmarks had made average real-world usage less efficient. Benchmarks do not show real browser performance.


Ah, thanks for that, it seems you are spot on.

https://blog.chromium.org/2017/04/real-world-javascript-perf...

I actually just ran their "real-world" test (http://browserbench.org/Speedometer/), and the results were:

  Chrome 62: 39.6 ± 3.4 rpm
  Firefox 59: 27.0 ± 5.6 rpm
So there may well be something to this theory. It would be good to have a larger sample size, though.
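For what it's worth, the ± spread from a handful of runs is wide. A quick sketch of how one might aggregate more runs (the numbers below are made up for illustration, not real measurements):

```python
import statistics

# Hypothetical Speedometer scores (runs per minute) from repeated runs
chrome_runs = [39.1, 40.2, 38.7, 41.0, 39.5, 40.1]
firefox_runs = [26.2, 28.1, 25.9, 27.8, 26.5, 27.4]

def summarize(name, runs):
    mean = statistics.mean(runs)
    spread = statistics.stdev(runs)  # sample standard deviation
    print(f"{name}: {mean:.1f} ± {spread:.1f} rpm over {len(runs)} runs")

summarize("Chrome", chrome_runs)
summarize("Firefox", firefox_runs)
```

More runs shrink the standard error roughly with the square root of the sample size, so even a dozen runs per browser would make the comparison much firmer.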


Google Maps crashes Quantum for me on a regular basis. I am trying not to become a conspiracy theorist, but this is making it difficult.


Unfortunately same experience here. Firefox is just slower than Chrome for me on all devices and I also have some high CPU hiccups with FF.


Well, we have 4. That's more competition than we have in a lot of other software platforms.


>A reminder that, as article points out, a healthy web needs multiple independent client implementations

I don't think it does. It could just do (and do better) with a single client implementation, as long as it wasn't controlled by a single company.


Chrome does exactly what I want, the way I want it. Mozilla/Firefox OTOH has failed to respect me and my OS time and time again.

Honestly, I'd rather see Mozilla go away and stop making software completely because I don't like them, their politics, the way they turn people into ideologues who beg you to use their inferior product or the way they do just about anything.

If the pain of everybody using the same browser is real, people will get around to building a worthy competitor to Chrome. I suspect that the issue is a bit overblown though.


You're saying Mozilla is turning people into ideologues, without joking? Besides the fact that people independently choose to be active, half the Mozilla fans in this thread aren't shoving Firefox down anyone's throat.

Firefox Quantum is a worthy competitor to Chrome. So is Safari on Mac. Not sure about Edge as I haven't used it.


Mozilla obviously doesn't turn people into what they are already. It just attracts that certain type of person. My poor choice of words doesn't change the fact that Mozilla annoys me, but thanks for the correction...

> half the Mozilla fans in this thread aren't shoving Firefox down anyone's throat.

They don't have to when the top comment patronizes everybody by telling them what's "ideal". Gimme a break!

> Firefox Quantum is a worthy competitor to Chrome.

No it's not.


Why did you use "..."? You admit you had a poor choice of words. I can't mind-read, so I didn't know your intended meaning.

> No it's not.

How does saying that help or inform in any way? I didn't elaborate on Firefox 57 because others had in the thread, and the post itself is about the browser.


My hope with HTML5 was that complexity would be removed in the standard; reduce the platform to smaller generic components. Instead we have this ever-expanding feature set, a constant stream of new standards that have to be implemented. It's death by complexity.

This plays in the hands of Google; each new feature is another opportunity to fingerprint the user. If things continue like that, a new platform will have to emerge to replace the web.


About three years ago, I decided that I was going to build my own browser from scratch, and I thought it would be really easy to implement as long as I strictly followed the web standards.

Within a few minutes, I got stuck with the fact that I could not get any performance out of even a simple attempt at a CSS engine.

After about three days of mucking with that, I gave up on the project, and gained a lot more respect for Mozilla.


You had a running css engine in a few minutes?


Heh, bad wording; I should say that I had performance problems within a few minutes.

I did eventually get something that kind of worked, but that took about a day, and it still suffered performance issues, probably because I had no idea what the hell I was doing (and still don't, really).


You had a running CSS engine within a day?


Yeah, a really crappy one that parsed a CSS file using Bison, and could change text colors, background colors, and position things based on pixels (no em or %).

I never got around to implementing class support, but I did allow targeting an element via ID.

I did this in C++ for some reason, and used Xerces to parse the XML, and I used Allegro to render everything. If I were to try and make this now, I probably would not use a rendering system designed for games, but I was naive back then.

I'll have to look to see if I can find the old files on my NAS; if I can I'll put it on Github.
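To give a sense of scale for the "toy" end of this problem, here is a sketch in Python (my own illustrative code, not the original C++/Bison project) of the two pieces described above: parsing trivial rules and matching them against an element by ID:

```python
import re

def parse_css(text):
    """Parse 'selector { prop: value; ... }' rules into (selector, decls) pairs.
    Handles only the trivial grammar a toy engine needs: no nesting,
    no comments, no at-rules, no escapes."""
    rules = []
    for selector, body in re.findall(r"([^{}]+)\{([^}]*)\}", text):
        decls = {}
        for decl in body.split(";"):
            if ":" in decl:
                prop, value = decl.split(":", 1)
                decls[prop.strip()] = value.strip()
        rules.append((selector.strip(), decls))
    return rules

def match(rules, element_id):
    """Collect declarations whose selector targets #element_id.
    Later rules win, mimicking source order; no specificity, no cascade."""
    style = {}
    for selector, decls in rules:
        if selector == f"#{element_id}":
            style.update(decls)
    return style

sheet = "#title { color: red; left: 10px } #title { color: blue }"
print(match(parse_css(sheet), "title"))  # {'color': 'blue', 'left': '10px'}
```

Even this naive version re-scans every rule for every element; real engines build hash-based indexes by ID, class, and tag name precisely because this O(rules × elements) loop is the first performance wall you hit.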


These kinds of high-level overviews of prototype implementations are very useful; thank you for sharing.


Hiring good people is not cheap! "Mozilla's highest-paid official, chairperson Mitchell Baker, now enjoys a pay package that tops $1 million. In 2014, she got a $400,000 base salary, a $594,000 bonus and some other benefits that pushed total compensation to $1,035,114." (Nov 30, 2015)


I fail to see how north of $80k/month is a reasonable salary. Maybe things are more costly in America, with private insurance and whatnot, but where I live (Italy) few people earn more than €10k/month after taxes.


It's because there's so much competition for experienced people in Silicon Valley. If Mozilla wouldn't pay her that much, another company would.


Has anyone ever tried to hire someone asking for less than, say, $20k/month for such a job? I wonder what qualifications you find only in people asking for >$50k or even $100k/month.

I am from Europe, so I might be missing something that's different in the US. But $20k/month would make for a very good living here. OK, you can only afford one house with that, and no 1000sqm villa and no Ferrari... but I'd argue you don't need either.


$20k/month is in the range that Google/FB/Microsoft pay direct managers of ~10-person teams: basically, that is the level of credentials you would be competing against in that pay range.


Interesting info - but what does that mean in conclusion?

What proves that the skills necessary to lead bigger teams or whole companies are so scarce that they have to be paid orders of magnitude more?

Put another way: is it proven that pay needs to rise with the same (or indeed any) multiplier as the number of people you are "responsible for"? (And I have yet to understand what that responsibility means - I saw too many bad managers doing badly in terms of both people and business. Who takes responsibility for the thousands of failed and misconducted IT projects, their exploded costs and burned-out people, to speak only of my personal area of work?)

OK, maybe it's the money you are responsible for... if you decide about a certain amount of money, especially in expenses, you might be prone to bribery: somebody who wants to sell your company something for a few million could easily give you an "extra" of a few hundred thousand to buy from him, and this would let you "earn" many months' wages with just one signature.

But besides this? I see no proof that the skills to handle certain management tasks are really so scarce that those who get these jobs are paid hundreds of thousands, or in some cases even millions, a month for doing them. And they certainly don't work a hundred or a thousand times more than others - that would make for extremely long days.

It might sound like bashing, but to me these questions are real, and I want to know the answers.


That’s low for a manager. Many individual contributors make more than that at large companies.


What do they contribute that no one else could for this "low" amount?


It’s not that “no one else” could do their job, but there’s not an unlimited supply of people who can meet the hiring bar at Google/Facebook/Microsoft, and there are a number of employers with deep pockets trying to hire them. Google in particular is known to offer people with competing offers a lot of money to keep them from going to other companies.


If there are other people who are able to do the job, but not meet the hiring bar, what does that tell us about the bar?


Did I say that?

No interview process is perfect, but here's what would happen if e.g. Google attempted to use your strategy of "make the interviews easier and pay people less":

Many candidates would interview at both Google and Facebook (as well as other companies). The smartest, hardest-working candidates are all able to figure out how to game the interview process and get really good at solving algorithms problems on a whiteboard. Many of them get offers from both Facebook and Google. They all choose to work at Facebook because Google's only offering half as much money.

The candidates who are less smart and/or too lazy to study for their interviews get rejected from Facebook, but get an offer from Google since Google's decided that interviews are stupid anyway. They all take the Google offer since they got rejected from Facebook.

Now Google ends up with a pool of employees selected on the basis of being not smart enough and/or too lazy to get a job at Facebook. Do you see the problem here?


Haha.

You question that you said "It's not that 'no one else' could do their job, but there's not an unlimited supply of people who can meet the hiring bar"? Or what?

I didn’t propose a concrete strategy. I just doubt the things have to be as they are and pose a lot of questions why they are as they are.

Then you build exactly one fictional scenario that would maybe (not necessarily - there are options: maybe there are smart people who don't want to study for artificial interview scenarios, maybe they don't have a university degree) go badly for one employer, and imply the conclusion that everything has to be and stay as it is.

Do you see the problem here?


I don't understand your point of view. You work in tech, right? Do you want tech companies to pay their employees less?


We talk about the question if managers have to be that expensive.

And I’m open to a yes if there are valid points and my questions about it answered.

We don’t talk about what I want to get paid.

You're steering away from the topic, right after questioning something that you said exactly as cited, and don't answer straight questions. Do you want to take part in a discussion, or just confuse and avoid it?


I'm curious about this too. My impression is that studies show high-level CEO salaries don't correlate at all with company performance. Surely there are folks who have run small businesses with just as much stress, as many meetings, and as much communication and forethought as running a large business requires. There might be a bit of a learning curve, but there's that for anything. Basically we need a Moneyball approach to CEOs...


As if no other city or country in the world had any competent developer willing to work with them.

Frankly, whoever solves the remote work 'problem' will make a lot of money.


It does seem counterintuitive, but the justification is that cutting a top employee's salary in half frees up enough for only a sub-one-percent raise across all the lower-level positions.

I know about 20 years ago, Ben & Jerry's tried to hire a CEO for a reasonable salary and decided they couldn't make it work.

So maybe it really does make sense.
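The arithmetic behind that justification is easy to check with assumed round numbers (illustrative only, not Mozilla's actual figures):

```python
# Assumptions: a $1.0M top-executive package and 1,000 employees
# averaging $100k. Halving the package frees $500k.
exec_package = 1_000_000
employees = 1_000
avg_salary = 100_000

freed = exec_package / 2
raise_fraction = freed / (employees * avg_salary)
print(f"{raise_fraction:.1%} average raise")  # prints "0.5% average raise"
```

So even a drastic cut to the top salary barely moves rank-and-file pay, which is the usual defense of packages like these.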


That's what I was saying. It mostly seems to be a thing that happens in Silicon Valley and some big cities. There are good programmers in places with a lower cost of living who are cheaper. Even the high end of them would approach the low end of typical Silicon Valley salaries. One approach I'd explore is a mix of highly experienced people in the domain with large checks, plus good programmers in the cheap areas supporting their work. There are also a few companies in my area that will pay for smart folks' college if they sign a 3-4 year agreement with the company. If they already program and have good mentorship, they should start getting pretty good in the first year. The salary combined with the yearly college expenses is usually about half of a Silicon Valley programmer's. Multiply by several hundred people to get a lot of savings that could accelerate features or fund entry into new markets.

If I were Mozilla, I'd do the latter, buying small companies doing user- and privacy-focused variants of popular services. Then they could have a whole umbrella of offerings to differentiate themselves based on reputation. I'd at least try a paid offering in the enterprise space as well.


I wish the Mozilla store hadn't gone away. I've used Firefox for a very long time but it's been three or four laptops ago that I was able to adorn my machine with Firefox (or Thunderbird!) stickers. Yes, it's a small thing, and I tout Firefox whenever I can, but I'm also a sucker for something I can slap on something that I carry around.

(I suppose I could also just make my own or have someone print them up for me alongside giving money to the Mozilla Foundation.)


I totally forgot about the Firefox shirts I owned in college until I read this. I can't believe they closed it.


This raises the question: is it healthy that one of the most important technologies today (the web) is also very complicated (= expensive), effectively limiting the number of players?


I'd agree that the browser spec is incredibly complex, but a large part of it is due to not wanting to break compatibility with the web. Many of the problems tackled by the browser are just complicated, and there's no shortcuts.

If we look at other technologies, the same thing happens there. How expensive would it be to write a new OS from scratch? A new C++ compiler? A new UI toolkit? These technologies are all incredibly complex.

Heck, consider how many projects are just frontends for LLVM. The LLVM IR claims to be well documented [0], but AFAIK it lacks a formal spec as well as multiple independent implementations.

[0] http://llvm.org/docs/LangRef.html


It's not healthy, but unfortunately maintaining compatibility and staying abreast of modern platform features (so that apps and content don't fully migrate to the single-vendor platforms) just mandates a lot of complexity.


The reason it's expensive is that no HTML5 proposals take into account the cost of development and maintenance. "Anything goes" as long as it competes with the walled-garden platforms. Spending without budgeting or cost evaluation is like going to a casino and spending your life savings, thinking you're the lucky one.


Deciding just to cede applications and content to single-vendor platforms isn't that appealing for the open Web.


That implies a binary decision. The war is won in many battles. Deciding which ones to go into is the deciding factor. Bringing your men to each is a sure way to lose, and it’s what is currently happening with the modern web.


If individual browser vendors decide that some feature is not worth the effort, they can just not implement it and not support those apps fully. Maybe change that decision if the feature gets popular. If the feature never gets popular it can be removed. This actually happens all the time.

But vendors getting together and saying "no, we're just not going to compete with native platforms in this space" seldom happens. As a proponent of the open Web, I think that's a good thing.


Browser vendors usually do not have this choice. Either they proposed the change themselves, forcing other vendors to follow, or a prisoner's dilemma occurs where either all vendors act or none at all. These are well-studied problems in oligopolies, and the prime reason we need a certain amount of governance by (independent) standards bodies.


Browser vendors always have the choice not to implement, or to delay implementing, standards that are seeing little uptake. I know; I did this work for over a decade and made many such decisions.


All of the most important technologies today are very complicated. Anything that has to do with computers, cars, airplanes, trains, almost every industrial process. Being complicated is basically one of the prerequisites for being "high" tech.


I think parent meant a different kind of "complicated". More like "overcomplicated".

If we want to move forward as a society, we better structure our tech in the simplest possible way. The difference with cars, airplanes, etc is that in a capitalistic world, an overcomplicated browser spec actually helps big players to keep their monopoly. That's what's wrong here.


I'm confused: what should the move be, then? People want more features and capabilities from the products they use, websites included.

What I'm reading here is a preference for web technologies to be frozen as they were in 2000.


That's a difficult question for which there are probably many answers, so here's my attempt at it.

One first insight: we don't need the "semantic web" because Machine Learning can recover semantics. (This is also how Google's webcrawler views the web, so why not apply it to browsers too? People mix up semantics and formatting too often for the "semantic web" to be a useful concept).

Another thing to realize is that overcomplicated systems benefit from factorization, i.e. splitting functionalities into different modules with their own responsibilities. This doesn't just apply to code, but also specifications.

A better factorization will immediately solve the "monopoly" situation, because now many players can develop many modules, and browser vendors can simply combine module implementations at will.

What it will also solve is security issues. A modularized architecture is much easier to keep secure than a system where all functionality has been thrown together on one big heap without structure.

You can say that browsers are (if the developers are smart) already structured in a modularized way, but that is not the point, because the internal boundaries of the modules are not openly available in any kind of specification.

Personally, I long for a world where formatting a document or showing a video on a computer screen is not considered "high tech". And this might be the way towards it.

Note that an "extreme" interpretation of this would be to replace the browser by a simple virtual machine. In that case, every web-developer is their own browser vendor, and they can mix and match existing libraries at will. People will then probably object that the user loses power, and accessibility is lost. But we live in a world of Machine Learning now, so any "structureless" rendering can be easily restructured by an appropriate tool. E.g. an image containing text can be OCR'd and read aloud by a tool. This would also make these tools more robust against formatting hacks. And machine learning can do many more useful things, like removing ads.


> Personally, I long for a world where formatting a document or showing a video on a computer screen is not considered "high tech". And this might be the way towards it.

Okay, so here's the thing. Web pages do a lot more than just "formatting a document" or showing a video. Any serious talk about the web platform has to acknowledge the fact that it's an "application platform" that is more than just an .rtf file.


Yes, but do you consider launching an app "high tech"? If yes, then that must change, otherwise we'll never reach a higher level of enlightenment.

So what I wanted to say is that the "high tech" is currently not as much in the functionality, as it is in the software engineering effort to reach that functionality.

Also, the paragraph after the sentence that you quoted addresses web-apps, if that's what you are concerned about.


The semantic Web was always a W3C boondoggle that never became part of the Web as implemented by browsers. So that's a bad example in this context.

It's hard to see exactly what you're asking for. Web specs are already quite modular. The modularity could be improved, but it's not like nobody's trying.


It is not healthy, but it is not trivial to fix -- I think building a modern spec-compliant browser is expensive (= complicated).

One way would be to create a browser architecture from truly independent pieces that would allow one to pick implementations from different sources. That would split complexity and hopefully increase choice.

To get this off the ground one would need an architecture and a functional reference implementation, which is a lot of work. If you get that, others will improve it from an OK state to greatness, but it must be OK end-to-end to start with for people to notice.


Of course, it's not always healthy, but what are you gonna do about it?


I feel like there's a golden opportunity here to have things like Firefox be broken up into more independent parts.

We've already seen this in the JS engine space, though it's still pretty hard to integrate alternate JS engines into browsers. Having more and more of the app broken up makes it _much_ easier for contributors to come in and for work to be shared.

We don't all need to write our own implementations of CSS style sharing, even if higher level strategies are different.


The CSS engine just got replaced with a Rust implementation. Isn't that a sign that it is quite modular already?


The CSS style side got replaced with a Rust implementation (that matches selectors and computes the used CSS value of each property for each node). The layout side is still all C++ (and Servo's implementation has nowhere near parity with what Gecko already has).


The CSS engine does not have a very modular boundary, because abstractions would kill the performance. It would not be very easy to use it in another browser.

However, lots of pieces of the CSS engine are modular and published as their own crates, like rust-cssparser, rust-selectors, etc.

Other Servo projects are more modular; WebRender in particular just has an IPC boundary with specific messages.

Servo layout would probably be more like CSS than WebRender in terms of modularity. So it's not perfect, but we're trying very hard!


Agreed, I've been pounding the "embeddability" drum for FF for a while. I'd love to easily use pieces in my own projects.


I have so much admiration for Firefox. With the increasing market share of mobile I'd love to see a well-funded, independent, compatible variant of Android.


Supporting FF then is a kind of participating in mass movement to fight against capitalism.


If you're supporting Firefox with money, it's still capitalism even if Mozilla is a non-profit. You're paying an entity to continue maintaining a product, and attempting to bias market share in favor of that product over its competitors.


Yes it is, because what we keep referring to as a browser has, thanks to commercial pressure, become a OS in its own right.

And rather than stop and ask if this is in any way sensible, they keep pushing more stuff, like USB and Bluetooth, into it.

Whenever I run into a site that is either just giving me a blank page or an error when I have JS turned off, I wonder when the train jumped the rails. And I fear it may well go as far back as the browser war between Microsoft and Netscape.


I think this is just too many people. No matter how you spin it, it's just too many. Do Google or Microsoft have half as many people working on their browsers? I'd be very surprised if they do. It would also be interesting to look back and find out how many people Netscape employed at its peak. I would love to know the distribution of those 1.2k people (engineers, marketing, whatever). Of course, it's likely that I am too naive or clueless.


There isn't public information on this, but in the past I have heard that Chrome had a lot more engineers than Firefox (though it's a bit difficult to make direct comparisons because of scope differences). I suspect that's still true. I also heard the same thing about Edge at one point, but I suspect that may not be true now.

Comparisons with the past are meaningless because browsers are far more complex (and performant, and secure, and versatile, etc) than they used to be.

And sorry, but your gut estimates are not likely to be that accurate if you don't work in this area.


> There isn't public information on this, but in the past I have heard that Chrome had a lot more engineers than Firefox (though it's a bit difficult to make direct comparisons because of scope differences). I suspect that's still true. I also heard the same thing about Edge at one point, but I suspect that may not be true now.

Scope differences make it incredibly hard.

Plenty of people on the Blink team work on things like Skia, which neither the WebKit nor EdgeHTML teams have equivalents of (because they just leverage OS-specific APIs), or on their HTTP/TLS stack (again, neither WebKit nor EdgeHTML teams have equivalents of), etc.

Gecko is somewhere in a middle-ground; they rely on third party libraries (including Skia—does that make that part of the Chrome team Blink/Gecko developers?) but not quite as much as WebKit and EdgeHTML do.

Core web stuff? The Blink team is larger, yes, but by how much is a hard question.

FWIW, I believe the Gecko team is nowadays larger than the EdgeHTML team, but they're certainly close in size. (And close to the size of the old Presto team, which admittedly had a scope closer to that of the Blink team.)


Gecko also has its own infra/productivity/etc folks, whereas I bet these folks are from a much larger shared team at the big companies


The Chrome team has a lot of its own infra due to the policy that a lot of Chromium stuff be public (its bug tracker, now that code.google.com is dead, and its CI infra are all run by the Chrome infra team).

I imagine to some limited degree it's true for WebKit (though I suspect they don't have anyone whose sole job is infra) given Apple don't have any general public bugtracker or CI system, and totally untrue for Edge.


I never worked on a browser, though I did implement the HTML5 spec, twice, and I wrote a JS engine (compiler, runtime). I know that it was a lot easier because I could count on information and other advances that were available to me, and anyone else really, at the time; the kind of resources and information I probably wouldn't have had access to ("what's JavaScript?") 20+ years earlier.

Browsers have evolved because standards did, and expectations along with them, but in the past they too were quite complex (see also Netscape's play: browser, email, usenet client, etc. in one suite), and it was harder then than it is nowadays to build such systems.


Which part of the HTML5 spec, though? A parser? Or the whole thing, including DOM APIs and all the features like media elements and so on?

Even an HTML5 parser is harder than it looks to implement to the level needed for a browser. You have to build a proper DOM, you have to be secure against all kinds of fuzzing, you have to implement off-main-thread parsing and speculative readahead for performance, you have to integrate it correctly with the HTML5 event loop for document.write() etc., you have to support innerHTML and so on.

Likewise building a JS engine is impressive, but I do not believe a Web-compatible competitive-performance JS engine can be built by any single person.


I feel like there's a tendency to put browser developers on a pedestal. Browsers are large and complicated, and no, a single person cannot write a browser competitive with Chrome from scratch. On the other hand, a single person can probably write a browser competitive with Dillo, if they are sufficiently motivated. Browser developers are just as competent and incompetent as other professional native programmers. There are several HTML5 parsers out there. JS engines are also relatively abundant, especially if you will settle for an interpreter. You can build such things yourself, too, if you have the time and want to hone your skills.


There's a huge amount of fairly boring work to write a DOM implementation (if you want half the APIs the web expects to work with the right semantics, you can't use an existing library), and layout is incredibly hard to get right (especially in a web compatible way). I think it's totally doable for a single person to do HTML + CSS2.1, with no scripting support, especially if you're happy to leverage existing libraries.

The big difference with most existing browser developers is the majority know a lot of the web platform inside out—but that's mostly a matter of having worked on an implementation for a long time and having to worry about edge-cases and interactions no web developer would ever think of. There's certainly a few who have achieved a huge amount for individual developers—but that's not a trait unique to the browser sphere.


The parser. Once for properly parsing HTML content, for a search engine, and the other for a framework used for saving pages (for an Instapaper-like service). So it wasn't a big deal, but in both cases, a DOM was constructed and operations were executed against the DOM.

Nothing about that or the JS engine is impressive really. That was my point, more or less. Of course, there's a difference between building something that works and something that works exceptionally well (all the stuff on top of that), but all told, I still don't think building a browser justifies assembling such a huge org, even if there is no reliance on third-party technologies.


Implementing an HTML parser is not hard—the spec is a book of instructions on what to do.

Implementing a fast HTML parser that will produce a fast DOM is a little harder.

Tying it in with a fast CSS engine is harder still.

Tying it in with a JavaScript engine is a bit more work as well.

Implementing layout is moderately fiendishly difficult to do and get right.

And it just gets harder the more of the web platform you add, and the more it has to be fast.
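The easy first rung of that ladder can be sketched in a few lines. Below is a toy tokenizer that only splits input into tag and text tokens; all names are invented, and it ignores attributes, comments, error recovery, and everything else the HTML5 spec's tokenizer and tree-construction stages actually require:

```rust
// Toy HTML tokenizer: splits input into Tag and Text tokens.
// This covers only the trivial surface of parsing; a real HTML5
// parser handles attributes, entities, comments, error recovery,
// script integration, and tree construction on top of this.

#[derive(Debug, PartialEq)]
enum Token {
    Tag(String),  // contents between '<' and '>'
    Text(String), // character data between tags
}

fn tokenize(input: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut rest = input;
    while !rest.is_empty() {
        if let Some(stripped) = rest.strip_prefix('<') {
            // A tag runs until the next '>'; real parsers do far more here.
            match stripped.find('>') {
                Some(end) => {
                    tokens.push(Token::Tag(stripped[..end].to_string()));
                    rest = &stripped[end + 1..];
                }
                None => break, // unterminated tag: a real parser must recover
            }
        } else {
            let end = rest.find('<').unwrap_or(rest.len());
            tokens.push(Token::Text(rest[..end].to_string()));
            rest = &rest[end..];
        }
    }
    tokens
}

fn main() {
    println!("{:?}", tokenize("<p>hi</p>"));
    // [Tag("p"), Text("hi"), Tag("/p")]
}
```

The gap between this and a browser-grade parser is exactly the point being made: each rung (fast DOM, CSS integration, JS integration, layout) multiplies the difficulty.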


HTML5 parsing is maybe 1% of the HTML5 spec.


However many people Netscape employed is irrelevant, browsers now are very different in complexity and security than browsers in the Netscape era.


On the flip-side though, the tools, expertise/skills and means for building browsers some 25+ years ago don't match what's available today.


How do Firefox and Chrome compare in respect of security?


With Firefox 57, they're now essentially equivalent in terms of the security architecture.

There's a small architectural difference with how tabs are sandboxed against one another. That is, Chrome starts a new process for every tab, unless it's on the same domain as another tab.

Firefox by default only uses as many processes for tabs as you have processor cores and then round-robins tabs across those. This achieves essentially equivalent parallelism with much lower resource usage, so overall better performance. If you'd rather have the security than the performance, you can set dom.ipc.processCount in about:config to a higher number. This is the maximum number of processes that it will use for tabs, so setting it to something like 1000 will essentially make it start a new process for each tab.
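The round-robin scheme described above can be modeled in a few lines. This is an illustrative sketch, not actual Gecko code; the struct and method names are invented, and the pool size stands in for dom.ipc.processCount:

```rust
// Illustrative model of round-robin tab-to-content-process assignment:
// tabs are distributed in turn over a fixed pool of processes, so a
// pool of N processes caps resource usage while still isolating tabs
// from one another. Not actual Firefox internals.

struct TabScheduler {
    process_count: usize, // stands in for dom.ipc.processCount
    next: usize,          // index of the next tab to be assigned
}

impl TabScheduler {
    fn new(process_count: usize) -> Self {
        TabScheduler { process_count, next: 0 }
    }

    /// Returns the index of the content process the next new tab lands in.
    fn assign_tab(&mut self) -> usize {
        let proc = self.next % self.process_count;
        self.next += 1;
        proc
    }
}

fn main() {
    // With 4 content processes, 8 tabs land on processes 0,1,2,3,0,1,2,3.
    let mut sched = TabScheduler::new(4);
    let assignments: Vec<usize> = (0..8).map(|_| sched.assign_tab()).collect();
    println!("{:?}", assignments); // [0, 1, 2, 3, 0, 1, 2, 3]
}
```

Setting the pool size very high (e.g. 1000) degenerates into one process per tab, which is the trade-off described above.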

Then there's something to be said about add-ons. Mozilla does manual code reviews of most extensions and extension updates. (They have some criteria for when they don't do a manual code review and instead just an automated check, based on how damaging the extension is able to be with the things that it accesses.)

As a result, AMO has essentially no problems with malware, whereas on the Chrome Store you hear every other month of some trojan or adware spreading.

Mozilla also has a policy that disallows telemetry without user opt-in as a whole, so recognizing adware or a trojan is just a matter of observing network traffic where it shouldn't necessarily be going.

Lastly, Firefox Sync is end-to-end-encrypted by default, with only one password needed. Chrome Sync is not: you need to set up a second password for it to be end-to-end-encrypted, and Google will store and work with your Sync data if you do not opt for E2EE, so that's another possibility for a pretty big data leak. Not a problem if you are security-conscious and don't mind setting up that secondary password, but recommending it to average users on the merits of it being secure is essentially ruled out by this.


I believe Chrome has for a while been using much the same model as Firefox has recently taken up, not a new process for each tab.

AMO’s lack of malware problem is simply because it’s not so popular—making addons was, before WebExtensions, decidedly harder, and the browser’s not as popular as Chrome, so why bother targeting it when there are bigger fish you can catch more easily?


Hmm. `ps -A | grep -i chrome | wc -l` adds +1 per tab for me on Mac OSX as long as I navigate off the home. Does it cap eventually?


Yes, there's a cap. See https://www.chromium.org/developers/design-documents/process... for the detailed description, particularly the "Caveats" section.


My understanding is that it is capped at a higher figure than Firefox’s default. But I’m not a Chrome user.


No, Chrome sandboxing is still ahead of Firefox, but FF is getting closer.


I believe AMO is not doing manual reviews for WebExtensions in general, now that they're sandboxed. Maybe there are targeted manual reviews, I don't know.


tptacek mentioned a while back that while sandboxing for Firefox and Chrome are converging towards equivalence, there was still significant difference (in favour of Chrome) with regards to hardening measures.


I don't doubt that, but either browser is so secure that if you are afraid of being hacked through that, you're basically afraid of nation states. (Assuming you keep your browser up to date.) If they're interested in you, you'll have to take whole other security measures anyway. At that point I don't think using Chrome, Firefox or Edge makes a difference.


You can trust both of them to do that well. Other than that it's hard to make a comparison not reliant on anecdotes, I guess.


Question: Is there actually a good Independent browser for Windows at this time? That is, not Firefox, not Chrome, not Chrome-Opera, not Vivaldi (alias Chrome-Opera-2), and also not based on Chromium, WebKit, Blink, Gecko, Trident, or any of the major bits of one of the corporate browsers? The closest that I can see is Pale Moon, but even they are just a "better" fork of Firefox.


You're asking for a completely new browser engine to be written. A browser with a completely new browser engine isn't going to be "good" for a long time, because most webpages will just be broken on it until they've managed to implement some portion of the web standards in a semi-decent way. The upfront investment is ridiculous. Which is also why no one's done it since last millennium.

There's only one project, that I'm aware of, that's semi-seriously trying to write a new browser engine:

Mozilla is working on a completely new browser engine, called Servo, and they've been working on that since 2013 and it's still miles away from actually being usable for every-day-browsing.

It is good in other ways, really good performance and security, but if you prefer Pale Moon over Firefox, that's probably not even what you're looking for either.

Officially, it's a research project, some components from it have just recently been included into Firefox, and yeah, it's not even necessarily going to be turned into a complete browser.

Mr. pcwalton (who also replied to you) is one of the core-devs on Servo, so if you have questions about Servo, he's probably more qualified to answer those than I am.


The major issue there is that "Mozilla is developing" means that whatever browser discussed after that really is only as independent as the current Firefox, which is basically not Firefox anymore. It's "looks like Chrome, works like Chrome, branded as Firefox". So yeah, for the moment I guess Pale Moon is the best out there for me.


There are exactly four major engines, each of which has a flagship browser: Blink/Chrome, WebKit/Safari, Gecko/Firefox, and EdgeHTML/Edge.


That's fine that there are four "major" engines. I'm looking for a minor one anyway, whose dev does not need nor want to become part of a major flagship product, or who at least isn't tied to Mozilla or Google. The lack of real development in the browser field is kind of appalling.


Do you think Google, Mozilla, Apple, and Microsoft aren’t doing any “real development”? Did you miss all the stuff about Firefox Quantum? Or Microsoft Edge being released for the first time only a few years ago?


Firefox Quantum, and just about everything to do with the browser since Firefox first introduced both their rapid release cycle and Australis, have left me cold. As for the others, sure there's development, but all of it in a negative direction: away from simplicity and toward a grotesque, bloated Web built entirely on corporate marketing ideas.


Wouldn't introduction of Servo make it five? Gecko is not going away yet, IIRC it's used in some browsers, like Pale Moon. https://www.palemoon.org/


Servo is not a major browser engine. It's a small research project.

And Firefox continues to use Gecko. It has some bits and pieces from Servo in it and the Mozilla marketing team has dubbed everything "Quantum" for the Firefox 57 release to communicate that lots has changed, but it is still Gecko.


I work on Servo, I wouldn't count it here.

It doesn't support everything to make the web work; i.e. you can't use it as a daily driver (for some sites, perhaps, but not all).

We're getting there.


I really hate it when things are not for sale to the highest bidder. Everyone knows that’s the best way to go.... ;)


So here is a question. If one was to undertake the challenge of building a new browser how would you go about doing it conceptually? What would at least convince a small group of technically advanced users to adopt it?


You would probably want to build a browser on top of Chromium, like Opera does (and also what Microsoft is doing on Android), and would want to try differentiating based on user-facing features. You could also build on top of Gecko, but this doesn’t seem to be as popular. Building on WebKit probably doesn’t make sense because its corporate backer (Apple) mostly only cares about supporting macOS and iOS. Eventually, if you become big enough that you think it’d be better to be independent, you can fork Chromium.

This is basically the approach Google took with Chrome: they built on top of WebKit (which itself was originally an Apple fork of KHTML), then forked it eventually.


and it's only getting more difficult because the browser is turning into an operating system


I have been discussing the debt-based economy and Google and Mozilla recently with BrendanEich on Twitter, including a debt diagram.


   Annual salaries from 2015 (reportable compensation from IRS Form 990):

   Mitchell Baker, Chair $977,382 + $45,530
   Bob Lisborne, Director $92,000
   Mark Surman, Exec. Dir./President $170,699 + $40,602
   Jim Cook, Treasurer $934,526 + $45,530
   Angela Plohman, Secretary/VP Operations $121,322 + $30,342
   Christopher Lawrence, VP Learning $153,492 + $62,538
   An-Me Chung, Dir. Partnerships $154,946 + $72,672
   Daniel Sinker, Dir. Partnerships $123,630 + $64,215
   Hiram Paul Johnson, Marketing Lead $126,605 + $54,903
   Andrea Wood, Online Organizing/Fundraising Lead $135,048 + $46,322
   Samuel Dyson, Director Hive Chicago $114,860 + $63,549
source: https://static.mozilla.com/moco/en-US/pdf/2015_Mozilla_Found...

Perhaps the author is telling the truth. Consider the salary costs of retaining skilled "maintainers" like the above personnel.

Note I am not condemning Mozilla. They do good work.

However I am not quite sold on the idea that their purpose for existence is to "keep the web open" or whatever tagline they are going with.

For example, an "open" web would be one made for all browsers to access, not just a select few who have chosen to try to keep up with an arms race of recently-added, non-optional "features" that carry serious costs to users. The key word I wish to emphasize is "non-optional". Users are not given choice and that is most likely intentional.

Many a web developer presents the user with a small selection of "acceptable browsers", the ones that will run third party code and readily show heavy, third party advertising. Anything else is "prohibited" and must be a "bot". Users are sometimes even shamed for not using the web browser or version that the web developer wants them to use, calling the user's software "outdated" when they have almost no information about the user's software except its behavior and some easily forgeable HTTP headers. In many cases, e.g., where the user is just retrieving some information, that is just silly.

Not every website needs to show advertising. Not all information is commercial. Not all information is graphical. I am not using one of the select few targeted browsers to read HN or post this comment, but I am getting the same information as the readers who are. And for some reason(s), web developers flock to this "outdated" website. Why?

Clearly the web can be both accessible by simpler user agents and fully functional for the most bleeding-edge features of corporate-sponsored browsers at the same time. The original web standards, e.g. basic HTML, still work, very effectively. The web as a means to offer basic functionality such as HN can be "backwards compatible" with user agents that are far simpler than Firefox, Chromium, etc.

The select few browsers that many web developers target are overwhelmingly controlled by corporations and other advertisers. Netscape and the idea of corporations paying to use a browser may exist only as entries in the historical record, but I do not think we can say that browsers are truly separated from corporate funding (and influence). Browsers are still commercial, in a sense, IMO. While they might be viewed as "free software", the authors of these few browsers are well-paid by their corporate (or "non-profit") employers and the end goal is "market share", ideally a monopoly.

This is fine if the web is 100% commercial. But it did not start that way and it still isn't 100% commercial. Non-commercial uses are still alive and well.

It is not necessarily Mozilla's fault for the direction the web moves in, e.g., making content less accessible by simpler user agents. However, if their purpose is to make the web "open" or some such "non-commercial" aim, then what could they do to enable and promote use of simpler user agents (and thereby promote their acceptability among web developers)? I can think of a few things. I am sure others could as well. There is no shortage of users who are dissatisfied with the (lack of) variety of user agents that are available.

http://bitcheese.net/web_browsers_must_die

The question is whether Mozilla really wants to listen to users who might dismiss their browser as, e.g., too bloated.


The complexity of the modern Web platform is a problem. Mozilla developers feel this acutely, believe me.

The problem is that if the open Web platform does not expand to meet the needs of modern applications, then modern applications will simply be restricted to single-vendor platforms like iOS, Android and Windows, and over time the relevance of the open Web will atrophy. That is not an acceptable outcome for Mozilla.

If you can convince Web developers to build sites that work on cut-down browsers, great. I don't see any way to make that happen en masse though.

FWIW Mozilla obtaining a monopoly would be inconsistent with their mission and I don't think actual Mozilla developers are aiming for that. Partly because it's not a realistic outcome!


"The complexity of the modern Web platform is a problem. Mozilla developers feel this acutely, believe me."

I believe you. Do they want to take action to address it?

"The problem is that if the open Web platform does not expand to meet the needs of modern applications, then modern applications will simply be restricted to single-vendor platforms like iOS, Android and Windows, and over time the relevance of the open Web will atrophy. That is not an acceptable outcome for Mozilla."

I understand. Why does the "open Web" need to stay relevant? Honest question. The answer to this question is really the core issue, IMO.

"If you can convince Web developers to build sites that work on cut-down browsers, great. I don't see any way to make that happen en masse though."

Not sure where the "en masse" part comes from. That was not in the original comment.

The comment was meant to draw attention to denial of choice in user agents given to users. The illusory scarcity of browsers that will "work". (Thus web developers design for browser implementations instead of according to open web standards, which should be implementation-agnostic. Browser developers are the ones who are seemingly in control of what is and what is not a "standard" in the mind of the web developer. Too often, companies are the ones writing the "open web" standards.)

I believe in the antithesis of the "en masse" notion.

That users do not need to converge en masse around a small selection of known, complex web browsers because most sites, e.g. those that simply present information, already "work" with cut-down browsers.

I use a non-graphical user agent. Most sites work fine for me. I get what I need. And this is without the web developer even contemplating the software I am using. To the extent they are following certain open standards for HTML, it all works anyway.

On the same day as the OP was posted, it shared the HN front page with a re-post of a 2012 ACM submission by PHK.

In this ACM submission, PHK describes building Firefox on FreeBSD.

He cites something like 122 dependencies, and a requirement for some binary to run plugins.

I have built Firefox on BSD myself using pkgsrc. It takes longer than building kernels.

Is there a "Firefox Lite" where a user can opt out of various features at build time?

Why not?

"FWIW Mozilla obtaining a monopoly would be inconsistent with their mission and I don't think actual Mozilla developers are aiming for that. Partly because it's not a realistic outcome!"

Nor am I suggesting that providing options to users should result in any "en masse" behaviour by users or web developers.

It is not a realistic outcome.

The goal of the experiment might be to provide some software to users that the market share leaders will not provide. Is this realistic? Mozilla has taken risks before. Not every project it has sponsored has attracted a mass following of users.

For example, imagine a version of Firefox that optionally has no Javascript engine. This software would be less complex (and intentionally less functional). But believe me, as a text-only browser user, it would still work.

If a user complains about a major browser as "bloated", if she complains about ads and is forced to use NoScript or an ad-blocker, if she is pondering how complexity makes her browser more vulnerable to attacks, then it would be interesting if there was another option she might try, which came from a well-known source such as Mozilla. She might build Firefox without the features that lead to these problems.

What happens after that is anyone's guess. As it is with any experimental Mozilla project.

But at least one could say that an option was presented to move away from a state of increasing complexity, in addition to the option of using Mozilla's full-featured Firefox browser, which competes for market share with the other major browser vendors.

Internet commentators sometimes reference a quote from Steve Jobs, something like: users do not know what they want (until Apple gives it to them).

While this may be self-evident to developers, I believe that users do know what they do not want: program features, modifications, and behaviours they are familiar with because they have been using the software for years. Alas, rarely do developers of graphical programs give users the option to remove features. As such, we can only guess what might happen if they had that option.

A large portion of the web does "work" when using user-agents that lack the latest features. It also works with the recommended browsers. The question is whether Mozilla can acknowledge this is true and release software that takes advantage of it. Mozilla can still pursue its mission, including keeping up with the Joneses.


Thanks for the thoughtful comment.

> Do they want to take action to address it?

Within the constraints of the assumptions I've outlined, yes, and they have.

One example is the fight for asm.js/Webassembly against PNaCl+Pepper. We fought that fight in part because Google was fine with introducing a whole new platform API in the form of Pepper, and Mozilla instead wanted to minimize the additional complexity by reusing existing Web platform APIs.

> Why does the "open Web" need to stay relevant?

I want a platform that isn't controlled by any single gatekeeper, through which app/content developers can reach all users and all users can reach the apps and content they want. (A sort of corollary is that the platform should be implementable in free software.) The Web is by far the closest thing we have to that.

One problem with a niche browser is that when you have a browser with very low market share, Web developers don't test against it, and with the state of software development technology today, stuff that's not tested in a platform implementation tends to not work on that platform implementation.

Another problem with a niche browser is that by definition it impacts a small number of users, and therefore only a relatively small investment can be justified.

Fortunately there is nothing stopping you or some like-minded group from doing your experiment. Tor has already done something similar and that's gone quite well.


Our respective ideas of what constitutes complexity appear to differ. Both of your alternatives are far too complex for what I envision of less complexity. I envision a browser that browses hypertext, not a program that automatically runs third party code.

Further, I do not think of browsers in terms of "market share" nor testing for one browser or another. I guess I have done a poor job explaining the point about standards.

Testing input, e.g. a webpage, against a program, e.g. a browser, is backwards, IMO. A program should be tested against input. If the program fails given legal input, then if desired, fix the program.

The web contains plenty of legal input for a wide variety of clients. I am consuming it everyday.

This plain fact appears to be outside of Mozilla's purview. But to me, a user, this is what the "open web" means, much more so than a squabble between Mozilla developers working on Firefox and former Mozilla developers working on Chrome over some exotic feature to entice web developers.

The "open" web to me means I can retrieve information with a simple open source client I can compile, and without an overly complex web browser controlled by a corporation/organization that is by its complex nature unreasonably difficult to compile except for a select few people. That is not "open" within the meaning I subscribe to, which is more along the lines of "accessible".

The more complex the browser becomes and thus the more complex the input that web developers create for it, the less "open" the web becomes, because it becomes less accessible.

I do not expect anyone to agree with me, but I at least hope that these points make some sense.


Just to be clear, a decent (i.e. can get through the interviews) straight-out-of-school developer in the bay area can expect a salary of around $100-120k. Plus bonuses. Plus stock of some sort (e.g. RSUs at Google and the like). And of course developers with a few years experience are paid more than that.

That's in the general ballpark of all the salaries on your list except Mitchell Baker and Jim Cook.


Regarding the top salaries, I found this comment interesting: https://news.ycombinator.com/item?id=15836919


[flagged]


In that alternate reality, there would be no Rust and no Servo, which would mean there would be no Firefox Quantum (as well as no ripgrep, fd, exa, Parity, etc.)


Rust and Servo doesn't have much to do with Firefox 57/Quantum, it is still mostly Gecko inside.


Following pcwalton’s line of thought: in that alternative reality, Firefox would be doomed, because it would be stuck where it is, unable to leapfrog the competition and eternally playing catch-up, and falling steadily further and further behind.

In Firefox 57 the CSS engine is the only part that’s been replaced, but the Quantum project is a lot more than that—integrating Stylo is just the beginning. WebRender, for example, is basically about fixing rendering issues once and for all, including things like making it possible to switch tabs instantly. (There’s a video floating round of holding down Ctrl+Tab and having it render each tab completely as it rapidly cycles through them. In that alternative reality, this could never happen.)


I was present for the initial Quantum discussions. The entire idea behind Quantum was to integrate the most mature part of Research's work into Firefox on a short time frame. Without that impetus, it's pretty safe to say there wouldn't have been a Quantum.


You do not contradict my point though.


> The chairwoman takes over $1M year? Why would an open source project need anyone who would be so unethical? This is like a city councilperson paying themselves $1M/year.

The Mozilla Corporation is 100% owned by the Mozilla Foundation. The Foundation determines the pay of the chairperson. And they pay the chairperson so much because the industry pays chairpersons so much. Mitchell could find an even better paying job in no time, I'm sure.


The question is not that whether Mitchell could find a better paying job, but rather if Mozilla could find a chairperson as good as her for $100k.


$100k is low even for engineers in the Bay Area.


"Firefox could be done with 400 people" is a "fact" you just made up.


So if they had just enough people to "make Firefox", what would they do with it?

It's quite easy to see what Firefox would be with "just the browser". Take a look at some Firefox fork which packages almost identical software but with worse distribution, marketing, etc. What is the market share?


You're confusing "non-profit" with "volunteer work". Mozilla is, organizationally, no different than a for-profit venture, only with the mandate that all revenue is to be reinvested.

As such, they compete with all other software companies for talent. Sure, maybe working on an open source browser brings you satisfaction. But they could do so at Google, and get paid accordingly. I'm quite sure the compensation package for a Googler supervising 1200 programmers far exceeds $1 million.

As for how many people it takes to create and maintain a competitive browser, I'll take their experience over your wild guess. Remember the MS Excel installer team of 32 people? Yeah, big consumer software products are a beast, even when not targeting 5 operating systems.

And when Google comes around and is willing to give you $XXX million annually to set the default search engine to the one you would choose anyway, should they just turn down the money and go with Bing out of spite?

Yeah, Mozilla has had its failures. It still seems terribly unfair to shit on them like this, considering all the good they have done, and are now doing again.


Mozilla is only "independent" if you take an extremely loose interpretation of that word.

Follow the money:

https://en.wikipedia.org/wiki/Mozilla_Corporation#Affiliatio...

I only dug into this after having a mildly unproductive run-in with what-wg. Mostly Google and Mozilla guys, and not terribly open. Asking them "why" typically gets very evasive replies, if any at all! The guy Phillip at Opera was a class act though. Been using Opera ever since!


Mozilla has many flaws, lack of transparency is not one of them.

I don't even know where to start with your comment. You're linking the Mozilla Corp wikipedia page, insinuating they're "not terribly open" (and putting Google in that basket), on a subject you give almost no hints about, but hey Opera is a thing? And somehow this comment means that Mozilla is not independent?


I agree, and that's not to mention that a lot of the companies funding Mozilla would love not to pay them, but they have to because there is a viable independent company that provides a browser.

I am glad that Mozilla exists, and I was glad that they existed even when I used Chrome (pre Firefox Quantum).


Read again.

I doubted Mozilla Corp's "independence" in the first sentence. "not terribly open" in the last paragraph was in reference to the WHAT-WG group, which is "[m]ostly Google and Mozilla guys". From the posts in that group, those guys seem more like co-workers than competitors.

For anyone who believes money to be some kind of motivating thing, it isn't hard to come to the conclusion that Mozilla is Google-by-proxy.

edit: And apparently I have to have "hits" to my name to state that getting 85% of your revenue from Google puts your independence into question???


Would you rather have people in standards groups to act more like competitors instead of coworkers? Cause that seems like a really bad idea.

As for:

> For anyone who believes money to be some kind of motivating thing, it isn't hard to come to the conclusion that Mozilla is Google-by-proxy

> edit: And apparently I have to have "hits" to my name to state that getting 85% of your revenue from Google puts your independence into question???

You aren't bringing its independence into question, you are stating it's an easy conclusion that Mozilla is Google-by-proxy.

The 85% was from 2011. As of 2015, Mozilla stopped receiving cash from Google, in part due to changing their default search (which is why Google paid Mozilla). That may have changed recently with Firefox 57 as they switched the default search back, but even then Google announced a while back that they were cutting how much they were going to pay, so I doubt it will be the same 85%.


Citations for "cutting how much they were going to pay"?

Mozilla revenue is up, they're making a LOT of profit, and the Yahoo deal was supposed to be 5 years so they're not likely to be losing money by switching to Google: https://www.cnet.com/news/mozilla-revenue-jump-fuels-its-fir...


Well, in all fairness the deal with Yahoo kind of went sideways. But yeah, I would agree it's probably around equal.

As for the source, I can't seem to find a source for this. I may have been mistaken, but it was something I heard several years ago on a google related podcast.


If they paired up with a search provider other than Google, does that make them any more independent? If they only get 70%, or 50%, of their income from a search engine, does that make them independent?

A non-profit that has a taxable subsidiary and receives the bulk of its funding from the search industry does not strike me as being particularly independent.


What are you claiming now?

Are you claiming Mozilla was the tool of Google, switched to being the tool of Microsoft or Yahoo, and then back to being the tool of Google?

Or are you claiming that Mozilla is the tool of the "search industry" which is some amorphous group containing all of the above?

Google (and other companies in some countries) pay Mozilla to be the default search in Firefox. It's a business transaction and adequately explains why Mozilla gets paid. Conspiracy-theorizing claims that secret terms bind Mozilla to some hidden agenda should not be believed in the absence of evidence, and I haven't seen any. (And I'm a former Mozilla Distinguished Engineer.)


It is super, super simple. When 80%+ of your revenue depends on a search engine, and you still have payroll to make, you are by definition dependent upon it, regardless of which one it happens to be at the time. The article title is "Maintaining an Independent Browser is Expensive". I'm just calling it out.

From the tone of the replies, kicked off by a guy in the analytics industry who went so far as to google me for a comment, Upton Sinclair is screaming from the grave:

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

edit: And granted, they can currently play Google against Bing, although that isn't a terribly impressive degree of freedom.

edit edit: And where did the word "secret" enter into the conversation? It's like you depend on a search engine (or a batch of them during your break from Google) for most of your revenue, and I'm the one who has to prove that you're not influenced by that???


> From the tone of the replies, kicked off by a guy in the analytics industry who went so far as to google me for a comment, Upton Sinclair is screaming from the grave

Good lord, get a hold of yourself. I work in the gaming industry (easy mistake to make, right?) and you seem to have read "hints" as "hits" and made up a whole scenario from that... I didn't Google you (nor would I care to), I just called out a completely bullshit comment when I saw one.

The foundation of your entire argument is loose accusations and outdated information, cobbled together in a web of random words and quotes you found appropriate. You have incorrectly profiled everyone who replied to you. It is hard for anyone here to have an argument with you, or even take you seriously.


You butchered my original comment to a point where I could only assume malice. As for "hints", my bad.

In 2017 they're going back to the 800lb gorilla of search and browsers for the bulk of their operating revenue, and I'm supposed to just accept that as "independence" and back off? There is a disconnect here that I'm obviously not going to bridge, so fine, they're independent! Go write their non-profit-for-profit chimera a check and "take back the web".


The bulk of Mozilla's revenue through 2019 is most likely going to come from Verizon.

https://techcrunch.com/2017/11/14/mozilla-terminates-its-dea...

https://www.bloomberg.com/news/articles/2017-11-21/verizon-i...

I'm not aware of a renewal of the Google financial partnership. Feel free to correct me if you find information that goes beyond 2011. There certainly could be a new deal with Google there, but Mozilla is not in the negotiating position it once was, so there just as well could not be; and seeing as I'm finding zero info about it, I'm leaning towards "there isn't".

But hey, I guess if they don't make a billion dollars a year through donations and take in zero outside revenue, they're not independent.


Unlike Edge, Chrome, Opera, or Safari, which are obviously corporate assets, Mozilla is also part non-profit, and going by their PR, a defender of privacy and the open web.

When money from search, which whittles away at privacy to deliver the right ads, or money from Verizon, who isn't a friend of privacy or the open web, is what keeps the lights on, don't you think that is just a little misleading to Joe Internet who is thinking about donating his time or money to their cause?


Did you read the links? It's not like Verizon wants to pay that money. I think you're confused about how incentives work...


Already read the first article. Didn't read the bloomberg article about the buyout provision, so my fault. Retracting the Verizon complaint.


Also you didn't answer my questions.

Please spell out exactly who you think is influencing Mozilla, and to do what (that Mozilla wouldn't normally do for the sake of their mission). And please support your claims with evidence.

("X pays $Y to Mozilla" is already adequately explained by the fact that Firefox-generated search traffic is valuable to X.)


I left Mozilla nearly two years ago so you can lay off the baseless speculations about my salary, thanks.

Incidentally I explicitly defined "independent" in my blog post: "I mean a browser with its own implementation of HTML, CSS, JS, etc". Choosing an alternative definition and then ranting about it is irrelevant to what I wrote.


I'm sorry. That's not how this works.

    independent *adj*
    1. free from outside control; not depending on another's authority.
    2. not depending on another for livelihood or subsistence.
It is not disputed that they have made their contribution to the implementation babel of web "standards". It is disputed, right here, whether they are more a cog in the machinery of online advertising than a heroic champion of the open web.


I know first hand that Mozilla is a highly competent organization, but it is not an organization capable of keeping a nefarious secret :)

Leadership at Mozilla have already shown themselves willing to break with Google. And I doubt it's any secret that they've been looking for alternative revenue sources.

If you want to question Mozilla's independence, please bring something more substantive than baseless speculation.


It is no secret that for at least the past decade, the majority of Mozilla's revenue has been from search partnerships.

In what bizarro world can you derive the majority of your revenue from a thing while simultaneously declaring independence from it?


Apparently, this one, as they were doing it. What helps in thinking about it is that the 'dependence' was a business transaction: Google paid Mozilla and got something in return. Much like a startup being dependent on a single large customer yet still wanting to diversify.


Until they have actually diversified, they are by definition dependent on whichever search engine they are currently contracted to.


Yes, but you implied they couldn't strive for independence while doing so.

Furthermore, it's good to note that they have actually diversified: https://blog.mozilla.org/press/2014/11/new-search-strategy-f...


That is from 2014. They are back on the smack[0].

As long as search is the revenue center, there isn't much room[1] for diversification. If they need annual revenue of ~$400M to operate, it's either Google or Google.

edit: On that last point, I could be wrong. Most of the biggies could throw half a billion at them indefinitely if it served the right purpose.

[0] https://techcrunch.com/2017/11/14/mozilla-terminates-its-dea...

[1] https://www.netmarketshare.com/search-engine-market-share.as...


That only says they made Google the default search engine, not that they received money from them. And even if they did, that would be in addition to what they are still getting from Yahoo - the contract lasts until 2019. Mozilla simply had the option of changing the default in case of a Yahoo! acquisition by a company not in line with their values (i.e. Verizon).


Afaik, Google is no longer the default in all markets.


Maintaining an Independent <Software Project> Is Expensive


I wrote my own browser a few years back and it was quite challenging/fun trying to figure out how to do basic browser things. You can check it out here: https://paulwebb.software/SciLab/Aries

Since then, I’ve been using Opera, then Vivaldi, and now Firefox, taking notes of how certain things are done. Lately, I’ve been thinking about using Servo as my base instead of Electron and doing a whole rewrite.


This article is talking about a browser engine, not a UI. Creating an independent UI is considerably less work (although it still takes considerable effort to do a good job, of course).


Ah yeah, that's true. I was standing on the shoulders of giants.
