I worked for Mozilla for a few years, after seeing John Lilly (CEO at the time) speak. It was right after Chrome started getting popular, and a smug person in the crowd asked him how he felt about Chrome.
John's response was awesome: "This is the web that we wanted. We exist not because we want everyone to use Firefox, but because we wanted people to have a choice." Firefox was a response to a world of "best viewed in IE" badges, and it changed the browser landscape.
Now, we have options. Chrome is great, but so are Safari, Edge, Brave, Opera and Firefox. There are a lot of options out there, and they're all standards compliant. And that's thanks to Mozilla.
So, in my mind, Mozilla won. It's a non-profit, and it forced us into an open web. We got the world they wanted. Maybe the world is a bit Chrome-heavy currently, but at least it's a standards-compliant world.
I hope Mozilla sees that. I hope they take credit, and move on to what's next: privacy and net neutrality. Our privacy is under attack, and Mozilla is one of the few companies that can (and would want to) help. I know, I know. Nobody cares about privacy. Nobody cared about web standards, either, but Mozilla bundled it into an attractive package and it worked. It's time for Mozilla to declare victory, high five the Chrome team, and move on to the next big challenge.
We really need someone to fight for our privacy and neutrality. And I really believe that this could be Mozilla's swan song.
EDIT: Hey cbeard - My email is in my profile; I'd love to talk.
I definitely run into sites that only work in Chrome and not in any other browser.
In my opinion Chrome isn't doing a good job either. It's a massive energy hog and wastes more CPU than it needs to. Unlike IE, Firefox, and Safari, it comes from a company that is notorious for wanting to know everything about you.
The time to celebrate victory was a few years ago. Now it's starting to look like the new boss is the same as the old, maybe worse.
"Always after a defeat and a respite, the Shadow takes another shape and grows again." -- Gandalf
Chrome works on every platform. It hogs memory but it's fast. Chromium & V8 are open. This is the kind of thing that gave us Electron, Node.js and everything that is built on top of them. I appreciate Google for working on it.
Regarding privacy, I totally agree. Would be nice if the most popular web browser wasn't developed by the world's biggest ad company. I still think they do a decent job of isolating the two orgs. I can technically install extensions that stop much of the ad invasions.
And what Chrome is giving out is "Tier 1" search results fed only to their browser (thus search results are more accurate using Chrome)... and now Google Earth is "Chrome only", which uses WebGL, even though the latest tests show Firefox is still over 3x as fast with WebGL content as Chrome.
Except anything MIPS-based. Or Power. Or in fact anything that isn't x86 or ARM.
And it's not just a matter of compiling it for those platforms. There's a bunch of architecture-specific porting that would have to be done (e.g. you _have_ to implement a V8 backend; there is no platform-independent way to run V8 just with a C++ compiler).
Not sure about Chromium support on those platforms, but V8 support is pretty impressive.
So you're saying that Chrome is but a Sauron to IE's Morgoth? ;-)
I've run into several other complex sites that fail on Firefox. It's sad, because I've used it for years. I'm using it right now. But my default just switched to Chrome because I started having too many Firefox issues.
Once I became aware of this, I've been trying to be more diligent about thoroughly testing things on Firefox.
I wonder how many other developers are in a similar situation, where Chrome is their default browser and/or their main debugging environment. Part of the problem for me is that I find the Chrome dev tools superior, and that makes it so much easier to just forget about the rest (not that I'm justifying my behavior, btw).
I still use Firefox as default, both for developing and for general web browsing. It and my set of extensions fit my preferences too nicely and have no equivalent in Chrome. I use Chrome at work primarily for Google Hangouts / Meet, the occasional debug session, or just to have another session. (Trying to get into Chrome's Profiles feature too.) At home I just use Chromium from time to time, mostly because my computer is starting to age and I notice the performance difference for certain things.
(Also, I think a lot of people discount how good Edge's Dev Tools have gotten. There too my corporate mandated environment is mostly stuck with Windows 7 and an intentionally broken IE 11 due to Oracle and using their terrible software internally.)
Upon googling, I discovered that drop-down menus have been an issue in Chrome (even when not using VBox). I'm using Zurb Foundation for the menu JS/CSS, fwiw.
This hasn't happened to me in a while, except for occasionally government or bank sites which require "IE or Chrome". In many cases, spoofing the user agent works just fine for those. I agree it's bad, though.
The one exception I've noticed is Yubikey (U2F) support for Google services. Firefox has an add-on that provides Yubikey support, but last I checked, Google blocked access to those (I believe even if you spoof the user-agent).
spoofing the user agent works fine
By all means, politely warn a user that "your browser is not tested". It's getting to feel like a marketing driven decision, where pages just about say "our site is so powerful, we only work with the greatest browser ever, Chrome, so come back when you have it".
A local Airbnb competitor currently does this.
God I miss the old Opera....
I'm fairly conscientious about this myself since I'm working on plotting data, and the dumb client-side number crunching involved is actually pretty good at eating CPU cycles.
Most plotting libraries want to show off how smooth and incredible their animations are. What I really want to know however is: does your library keep updating the canvas at 60FPS, or does it only refresh when the data does and idle otherwise?
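What I'd want instead is something like this (a rough TypeScript sketch; applyUpdate and redrawCanvas are stand-ins for whatever the library actually does):

    let repaintScheduled = false;

    function applyUpdate(points: number[]): void {
      // stand-in: merge the new points into the chart's data model
    }

    function redrawCanvas(): void {
      // stand-in: repaint the plot once
    }

    function onData(points: number[]): void {
      applyUpdate(points);
      if (!repaintScheduled) {
        repaintScheduled = true;
        requestAnimationFrame(() => {
          repaintScheduled = false;
          redrawCanvas(); // one repaint per frame, only when data changed
        });
      }
    }

When no data arrives, nothing is scheduled and the tab idles; a burst of updates coalesces into at most one repaint per frame.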
I think some of it is also that the web has numerous things about it that are fundamentally expensive operations, going back to things like "the default table sizing algorithm reacts to the flow of the content within it, which also depends on how the table decides to format it". It's not hard to create a pure HTML page that has no interesting images or scripts or anything, but still is fundamentally slow to render. (You probably wouldn't want to write it by hand, but I've accidentally written programs that output such pages over the years.)
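For instance, a throwaway generator along these lines (a hypothetical sketch, not any of my actual programs) emits a pure-HTML page that is slow simply because of the table auto-sizing algorithm:

    // Each nesting level forces the table auto-layout algorithm to
    // reconsider the widths of everything beneath it.
    function slowPage(depth: number, cols: number): string {
      let cell = "x";
      for (let i = 0; i < depth; i++) {
        const row = Array(cols).fill("<td>" + cell + "</td>").join("");
        cell = "<table><tr>" + row + "</tr></table>";
      }
      return "<!doctype html><html><body>" + cell + "</body></html>";
    }

    console.log(slowPage(8, 4)); // ~65,000 leaf cells, no scripts or images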
I know there are all sorts of flags etc, but consider old Opera, where switching the status bar between none, simple and advanced was right there in one of the main menus. That was a good start; that stuff would be compact and super useful by now if we'd just kept going.
Sure, it doesn't explain to average users why the webpages they view might be slow, but average users don't care.
The average smoker probably doesn't want to hear smoking is unhealthy. Does that mean doctors should adjust their advice respectively? At what point does "professional" really mean nothing other than "gets money for it, like a carpenter or a thief or a drug dealer might"?
We don't even have the right to "just give people what they want without any judgement on our part", but we certainly don't have the right to ignore those with legitimate concerns just because ignorant or apathetic people are greater in number. That goes for everything, everywhere. It goes for how you are supposed to look out for little siblings when parents are away, and it goes for expert knowledge or intellect.
But what is any user supposed to do with a blinkenlight telling them what they already mostly know: that a website is bloated/slow/eating their machine slowly? If there were an alternative website, maybe they'd already be using it. If they thought of complaining to the site's owners about it, maybe they already have, or are already aware that they'd be shouting into a careless corporate void. That mostly just leaves uselessly blaming their browser for a blinkenlight that tells them something they already know and can't do anything about.
"Microsoft has lost over 300 million browser users in 2016, mostly to Chrome, tracking site shows"
Not going to take very long at this rate; it appears to be accelerating. Personally I just use Chrome because for me it's extremely stable, and when I tried Edge it was not. I do a lot of surfing, often with a lot of tabs open, and I can't remember the last time a tab crashed.
As for privacy, you could use the open source Chromium; there's a fork somewhere which has all the Google bits removed.
Chrome using less resources than Firefox and IE was their sales pitch.
Now they've become what they made fun of, while Firefox is smooth as a baby's butt.
Chrome is still a huge improvement compared to the browsers it was competing against. Chrome changed the market and now the other browsers are competing in the world Chrome created. So while Chrome might fall behind in some areas now, it's naive to say that it's become what they made fun of.
It's nice they're working on it, but why didn't they ever care before?
I've always assumed people talking about having hundreds of tabs open just don't understand how to properly use a browser. My grandmother, for example, usually has 100 or so open by the time I get a call about her having computer problems.
There's no logical reason I can come up with for doing this instead of using bookmarks.
Then throughout the course of the day I end up looking up API docs, get linked to blog posts, news articles, and YouTube videos, and read articles which themselves have relevant links to follow. Most of these I just open in a background tab to check out later in the day. These accumulate until I have time to go through and quickly review them. Those that I want to read and don't have the time currently go to Pocket. The rest get read or summarily closed out. I find bookmarks to be a terrible way to triage tabs.
This workflow works for me (and evidently others). It's faster than bookmarking. It's less prone to failure, in my experience (I've suffered bookmark corruption more than once). And a modern computer ought to handle many background tabs just fine. Moreover, if browsers aren't expected to be used in this fashion, they really should set an upper-limit on the number of tabs that can be opened.
Hopefully this gives you some perspective on alternative use cases. It sounds like your workflow works out well for you. I've tried it and couldn't get it to stick. If that means I don't know how to use a browser, so be it. At this point, there's enough of us (your grandmother included) that maybe the browser vendors should just find a way to cope with it better.
Each browser instance is tabbed completely across; I keep them open until I read the page fully, and then save it in Keep to keep forever.
By Friday I can have hundreds of tabs that I go through and clean up. Web apps are a huge pain to constantly log in.
I run Korora with 24GB RAM and an i7. Chrome is never a system hog for me, and most of the time it surprises me how well it handles my use.
I have hundreds of tabs open at a time. Instead of searching for something, then going to the Google page, clicking and waiting for it to load, in Firefox I search for what I want in the bar, it presents the tab as a result and opens it instantaneously.
In addition, with vertical tab bar extensions I can see a list of about 40-50 tabs at a time, using the additional horizontal space monitors provide (and web pages don't use) to keep an easily visible list of tabs.
I open pages that interest me, I might read them later like I did this discussion or just drop them. Add in open tickets, reference pages, the build server, youtube, etc. and the number grows over time.
> My grandmother, for example, usually has 100 or so open by the time I get a call about her having computer problems.
Maybe she should use Firefox instead of Chrome?
> There's no logical reason I can come up with for doing this instead of using bookmarks.
I did this when I started, by now I only use bookmarks for high interest pages, no point in bookmarking everything.
So has cVim, but that has no relation to any of this.
Watch out for Opera's ad block: it works through a proxy to compress data, thus they know your every move online too...
You spend too much time lurking around HN-type sites and young web developers.
That's not true. Microsoft has now gotten into the spying business, and is infamous for the Windows 10 telemetry. They're basically copying Google.
Firefox and Safari are the only ones that come from companies that aren't notorious for wanting to know everything about you. And Firefox doesn't try to get you to spend scads of money on massively overpriced but mediocre hardware that locks you into their ecosystem.
Firefox has its warts, but it's the only choice that really makes sense if you care about privacy and freedom and avoiding vendor lock-in.
Sure, error reporting feeding in to a QA database is one thing. But is there the capability to target Win10 OS ads to, say, folks with old video cards? I'd be very surprised if someone in Redmond didn't think of that.
(I highly doubt that there is any truth to this claim)
> One of the most persistent misunderstandings about Safe Browsing is the idea that the browser needs to send all visited URLs to Google in order to verify whether or not they are safe.
Firefox 3 - That was 9 years ago.
I see this as truly the case of making the pie bigger.
All devices include 2 billion Android devices. That's a bigger pie that Chrome owns now, but Firefox can target it in the future.
Are you suggesting that Chrome gathers data about you? Because unless you tick that box (which they show pretty prominently), it doesn't. Nor does it by default in most Linux distribution packages.
> Unlike IE
I don't know if you've been following, but Microsoft is now the king of knowing everything about you. They record things about their customers' computer activity which should horrify anyone. Sometimes they don't respect user selections either, even all the way up to Enterprise editions (where it is often mission-critical not to send competitive information to Microsoft by accident in a core dump), which is infuriating.
> In my opinion Chrome isn't doing a good job either. It's a massive energy hog and wastes more CPU than it needs to.
My impression is that at this point, most unique performance problems in Chrome are either an inherent cost of multi-process, a mediocre implementation choice in that model, or a performance tradeoff toward better application latency at the cost of heavy initialization. Chrome could display many pages more quickly if they ignored the GPU, but they use it across the board so that they don't have to restart into "GPU mode" when they realize there is a lot of compositing on the page. Chrome has converged toward other browsers recently; they'll now run multiple tabs in the same process as long as they share a FQDN (sites that host together, crash together). I suspect they do this to save memory.
If we're talking about runtime speed of real web apps and sites, Chrome has everyone matched or beat.
> The time to celebrate victory was a few years ago. Now it's starting to look like the new boss is the same as the old, maybe worse.
The problem with IE6 was not Microsoft, or IE6 itself. Microsoft did not win by literally forcing people to use IE6. The problem was, and probably always will be, greedy unscrupulous web developers (and their managers) who want all the cool new toys at any cost. Microsoft was doing the cool, "html5, bro!" type browser innovation that Google is doing now, and developers (and their managers) lapped it up. People forget that Microsoft brought box-model: border-box, XMLHttpRequest, favicons, <ruby>, and bi-directional text to the web. They did all this in IE5; it put IE6 ahead, and people loved it too. Microsoft did the right thing and didn't break compat for honest customers who just wanted their webpage to work, so IE5 quirks are the way of the web.
The problem is not that nobody likes the boss, the problem is that everyone likes the boss.
No, I didn't forget. But you forgot to say that MS did all that with draft specifications or even no spec at all (XMLHttpRequest), just to beat everyone to market, then refused to correct their implementation once the standard was revised and agreed by others. And they sprinkled ActiveX on top, for good measure.
> developers (and their managers) lapped it up
Disagree. Developers were the ones that pushed Mozilla and then Firefox (and then Chrome) as soon as they could.
> Microsoft did the right thing and didn't break compat
Microsoft did the right thing for their own bank account: they smashed the competition with bundling then left IE to flounder, even obliterating their dedicated team, because they had reached their objective, which was to dominate the web so that they could sell what they really cared about: ActiveX and other Windows-only technologies.
> the problem is that everyone likes the boss.
No, the problem is that people are lazy. IE won with OEM bundling on Windows. Chrome is winning with OEM bundling on Android. As long as the default is good enough, people won't switch, especially 15 years ago when downloads took a degree of effort (waiting several minutes, restarting after failure etc etc) and now on mobile where it is awkward and/or completely forbidden to switch browser. This is basically what the article says as well: they couldn't push a browser, they had to push an OS with a browser bundled. If people don't switch, developers can't build for alternative browsers, because their managers won't allow the additional time and effort.
No, the DOM put into IE was legitimately better than that in the old NN. MS won that war because their browser was BETTER, period.
They then sat on their laurels and the rest of the world passed them by, so now they're still trying to play catchup.
But at the time? No, anyone who had any experience in the DOM of IE vs NN would hands down push IE. It was just better in every way.
Where did I mention Netscape? I didn't.
IE5 is from 1999, IE6 from 2001, and they were undoubtedly better than Navigator; but the first 0.x releases of Mozilla with the new Gecko engine are from late 2000/early 2001, and were better than IE (although the suite was slow and bloated). Firefox was branched out in 2002 and took off very quickly because it was a great engine without the bloat of full Mozilla. That's why people pushed it (or rather Phoenix) right off the bat.
If you were pushing IE6 over Mozilla or Firefox in 2001/2002, you weren't paying attention. Navigator all but died in 1999.
When developers were pushing IE was when it was IE vs NN.
No. Microsoft had a non-standard box model, which was an utter pain in the ass. Regardless of their box model being more sensible than everybody else's in theory (and ultimately standardised as an option circa 2010), having to code for a single standard, documented box-model is way the fuck easier than coding for two different box models.
Also it's box-sizing not box-model.
...because in practice MSIE's layout engine was a buggy pile of shit.
The sane way of doing boxes pre-dated the "standard", which in hindsight appears as though it was specifically crafted to spite Microsoft.
It was a silly time to be a developer.
A world with Google owning a monopoly on web browsing isn't any less bad than if it were Microsoft.
Now with the web, it's much easier to make something work across all platforms, except at the bleeding edge, which is generally where you'll find those sites. Almost all the ones I've seen were tech demos of new browser tech that wasn't available everywhere yet.
That was the case with sites only working on IE 6. What did you expect?
And after some market share point, it's not about laziness either, it makes business sense to not waste time for a small percentage of users (100% reach is not always better than 90% reach -- there's this thing called "opportunity cost").
This is what we want to avoid.
A lot of companies that thought short-term like that are paying through the nose for the decision now, because they are still stuck on IE6. There is a business case for avoiding vendor lock-in, but it's not quantifiable so it gets ignored.
How many of them will remember the lessons?
Those who forget history are doomed to repeat it.
Not about not caring to test/optimize for other browsers, or using standard stuff some browser gets out faster -- which is what some companies do today with Chrome.
If the internet exploded and we had to rebuild it from the ground up, there would be no html/web, and no 3rd party search engine which attempts to reconstruct the web by viewing it as a blackbox. We would build search into DNS, since that's basically what DNS is supposed to be for, along with the monetization of search (register your site for x search keywords, pay the root DNS for additional keywords). All of Google's revenue is but a hack of a patch on a chaotically formed system.
So, back on topic, Google won't stop being good to the web, because the greatest evil of Google is that they're good to a platform which doesn't deserve it.
IE was (is) closed source
It currently has a very large relation to the open source Chromium project. But Google could change that tomorrow if they wanted to - they could also gradually move more and more to their closed source Chrome builds (as they have done with Android).
I disagree. There's a difference between having 99% of the source available and 0%.
There is some debate inside Vivaldi about making it open source and it would be easy enough to do. I'd guess that it probably doesn't make economic sense while it has such a small market share, but I don't know if that's true.
You said "Microsoft owned the whole stack" (OS, Office Suite, Browser). My response is, that Google is trying to achieve the same thing: The blurred O/S+Browser that is Chrome, and browser based software like Google Apps.
You're right that what they intend to do with said monopoly is not relevant to that specific point. The point is that both saw an advantage of some kind that made it worthwhile having control over a large portion of the software their users ran.
Where it does matter though, is that in the Microsoft monopoly, it was a monopoly of defaults and business contracts only. Nothing technically prevented someone from installing a separate browser, a separate office suite, etc., on their computer.
With a Chromebook, which Google is pushing heavily in education, what options do you have when it comes to installing an office suite? What options do you have when it comes to installing a different browser?
If your answer is "Android Apps", I suggest you read up on Google's own docs, which show that just 10% of devices support that functionality, and only 7% support it without using a beta.
It is official, Google is the new Microsoft.
Microsoft lied to the Justice Department, Microsoft intentionally broke software on other systems, Microsoft actively tried to kill open source, Microsoft tried to co-opt standardization bodies, Microsoft has bought competitors only to fire their staff, Microsoft has...
Microsoft has a plethora of criminal charges levied against it.
Google.... reads your email if sent to or from Gmail, and sometimes some of its things don't work in Firefox, and even then they try to fix it. Google open sources a bunch of things, even when there is no obvious profit motive or requirement to do so.
There is a world of difference. Google's shit doesn't smell like roses, but they are only human and not overtly evil.
EDIT - If you downvote me, please comment so I can know what part of what I said was wrong.
Servo and WebRender will completely shake up the browser landscape, and will allow web apps to match (maybe even surpass?) native mobile apps in terms of rendering performance. Unless Chrome, IE, and Safari can develop an answer to Servo and WebRender by the time those technologies are ready for prime time, I wouldn't be surprised to see "Best viewed in Firefox" badges start popping up everywhere.
Mozilla won the browser war. Firefox lost the browser fight. But there's many wars left to fight, and I hope Mozilla dives into a new one.
As technology shifts to a world where most people do not have a monitor on their home computer or a screen on their phone, what it means to be a browser will dramatically change. Certainly, we could port the current user experience into whatever we will have tomorrow, but if VR, AR, speech, AI, and ample cheap private computing power don't excite people about the future of browsers and user agency, I don't know what will.
I know we've been working on tech such as Servo for a long time, but sometimes even just being "better" isn't enough, especially when there's a large legacy gap to close. You also need to get lucky with a point where consumers are making massive changes and open to new things.
I think that time is much sooner than the "always 5-10 years away" usually quoted, and you're going to see mind-blowing things on the web in general, and supported by the browser and related services specifically. And I'm betting (at least with my current career) that Mozilla will lead the charge.
What's next isn't clear to me
Apple lost it by getting boxed into a market share corner by Android. Google lost it by losing control over Android. Android OEMs lost it by getting stuck in cutthroat competition. Microsoft lost it by being Microsoft. Users lost it by having no good choices left (either go with the golden-cage iPhone, or go with the privacy and security mess that is Android).
Now OEMs have to obey Google, because losing the Google apps and services licence (thus losing the Play store and the whole ecosystem) basically means they're dead as an Android manufacturer.
That works in China because Google is relatively weak there. It also works for Amazon, which has its own store for Fire products.
Apple was never likely to license iOS to other manufacturers, nor were they likely to have enough capacity to satisfy the whole market. I reckon they are where they always wanted to be: owning a very profitable and locked-in niche.
You mean the corner where they are the premium smartphone vendor, taking 90% share of global profits? That's a great corner to be boxed into :)
Because iOS users are a significant source of potential profit.
Apple doesn't need to maximize usage in order to control the ecosystem - they just need to maximize profit potential.
Secure messaging is also still a hot topic. Join forces with Signal or Wire or Matrix or XMPP. For example, Wire intends to open source their server code and enable federation.
Voice control is an area where an open source solution needs some weight behind it. Specifically, we could use something which does not rely on the internet. PocketSphinx is an OK foundation, but needs more work.
However I would say that among those who do know, Lineage OS has a fairly good reputation for quality. You wouldn't be targeting mass adoption with this, you'd be targeting the influencers.
The Vivaldi browser has copied the original Firefox user interface and stolen the best ideas from the Firefox extension makers, so if you want the Chromium web rendering engine with the original Firefox user interface, you are served by the Vivaldi browser. Hopefully they will become profitable and release their modifications under a free software license.
But I am assuming that a browser offering a gigantic leap in UX through native-like rendering performance will entice web app developers to recommend that browser over others, because it's nigh impossible to build a consistently-60fps, non-trivial app with native-like interactions and transitions on the web today, while Servo and WebRender aim to make 60fps on the web the norm rather than the exception.
Not that anyone cares, because MathJax...
It also won't matter for mobile, since Android will still keep the Chrome browser, and iOS will still keep Mobile Safari -- they're both made by the platform's creators.
And as for Firefox on Android, I have plenty of hope for it. I'm seeing more and more people switch to alternative browsers for speed (the most common one is Samsung's "browser", which everyone says is "super fast" but is really only a weird hack that makes scrolling smooth while breaking a few standards).
iOS is another story, but at least on Android, if they make a damn good product, people will use it.
That's at odds with almost every single sentiment I've seen regarding native vs. Web apps. Take one look at any HN thread about the two.
If that were true, then there wouldn't be a performance differential between native and Web, since Objective-C and Dalvik are slower than modern JS engines. (Look at how method call dispatch works in Objective-C!)
Besides, a lot of what shows up as "JS performance" in a profiler is actually blocking on DOM operations. With off-main-thread layout, these operations can be done in the background, resulting in improved DOM performance.
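(Off-main-thread layout is engine-internal work, but web developers can already apply the same principle by hand: keep heavy computation in a Worker and touch the DOM only on the main thread. A minimal sketch, with the file names and the #out element assumed:)

    // crunch.ts -- the Worker: heavy computation stays off the main thread
    self.onmessage = (e: MessageEvent<Float64Array>) => {
      let total = 0;
      for (const x of e.data) total += x; // stand-in for real work
      postMessage(total);
    };

    // main.ts -- the main thread just posts input and applies the result
    const worker = new Worker("crunch.js");
    worker.onmessage = (e: MessageEvent<number>) => {
      document.querySelector("#out")!.textContent = String(e.data); // DOM stays here
    };
    worker.postMessage(new Float64Array(1000000));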
If we're talking about e.g. Electron apps, the problem I see mentioned (and felt myself) is almost always the memory hogging, the GC-pauses, the battery impact and such -- not the rendering speed. Although, there is talk of getting to 60fps web apps etc.
For something like Atom, is the slow redrawing because "DOM is slow" or because "doing the calculations needed for a sizable file, with syntax highlighting regexes, compiler checks, freeing memory, etc takes lots of processing time"?
>If that were true, then there wouldn't be a performance differential between native and Web, since Objective-C and Dalvik are slower than modern JS engines. (Look at how method call dispatch works in Objective-C!)
That's not entirely true, as Objective-C dispatch was thoroughly optimized. Besides, the performance differential is also in the time to process logic (and the network latency), which you don't address. And of course, aside from rendering (which often is just "show a few forms, buttons and lists" for most apps), a part of the heavy logic in Objective-C for lots of tasks is done in C or C++ frameworks at much faster speeds than modern JS engines.
I see the opposite. VS Code feels somewhat slow, mostly because of rendering—it doesn't hit 60 FPS.
You cite GC pauses. One of the best ways to mitigate GC pauses is to move the noticeable rendering logic off the main thread so that your app doesn't freeze during GCs, which is precisely what Servo is designed to do.
> For something like Atom, is the slow redrawing because "DOM is slow" or because "doing the calculations needed for a sizable file, with syntax highlighting regexes, compiler checks, freeing memory, etc takes lots of processing time"?
The performance differential is because of many things, but regex performance and freeing memory relative to native aren't among them. JS engines' regex engines are best in class and easily exceed the performance of popular C regex libraries; this is a side effect of SunSpider and V8 including regex benchmarks. Memory deallocation in popular JS engines is faster than in native, because sweeping takes place all at once and generational GC nursery evacuation is very fast.
> That's not entirely true, as Objective-C dispatch was thoroughly optimized.
> Besides, the performance differential is also in the time to process logic (and the network latency) which you don't address.
Pure computation in most apps is not appreciably slower for the end user in JS than it is in Android or iOS. And if it is, there's always Web Assembly! We're doing lots of work to improve JS performance; it's just not all under the Servo umbrella.
> And of course, aside from rendering (which often is just "show a few forms, buttons and lists" for most apps) a part of the heavy logic in Objective-C for lots of tasks is done in C or C++ frameworks at much faster speeds than modern JS engines.
That same "heavy logic"—by which I assume you mean audio/image/video decoding, JSON/XML parsing, image filters, vector graphics work—is also done in native code in browsers. And it's those very same tasks that we're optimizing in Servo.
You make it sound like these are two orthogonal aspects. When rendering is faster, CPU usage obviously goes down. As does battery impact, since the CPU can go back to a sleep state faster.
Only insofar as it's the rendering, and not the core logic, that consumes the CPU.
Degenerate case: a page with a single text entry field, where you enter a number and it calculates e.g. the Fibonacci sequence up to that number or factors primes, etc. There's hardly any rendering, but lots of CPU.
And it's proving to be fast, safe, and the future of browsers.
First of all, I think the idea behind Servo is awesome, and I follow it. But I've been testing it on Mac OS and Windows, and it is not a runnable browser, nor fast (as expected!). CPU is often fully pegged and it's very iffy if any UI elements or page loads work. Not to say they won't get there, but it's still very, very early and buggy.
We had a similar issue with Stylo (Servo style system in gecko) recently where there were bugs in the parallelism code making us slower than gecko. Fixed, now we're faster again. We only recently started tracking performance properly, and it was caught and fixed in a few weeks.
So it's not entirely unreasonable to suggest that Mozilla's next-gen engine efforts could be first-to-market, and that everyone else might have to play catch up.
As Servo would be a mobile app itself, I think no ;)
Sucks that they don't have an iOS version, but Apple is the problem there, not Mozilla.
Adblocking is more than pure resource blocking (which is, afaik, all that Brave, Samsung, iOS et al. currently implement). In fact, resource blocking alone would only catch the smaller share of the ads I see.
I have Facebook, Instagram, Twitter, reddit, and in the past Tumblr, ad-free thanks to element-hiding capabilities that are not present in any other browser I know except Firefox mobile with uBlock Origin.
Edit: I replied to the wrong comment. I guess it's early.
I think the idea of TFA is that soon we won't have as many options, since Chrome seems to be dominating. Opera is also using Chrome's engine, ditto for Brave, so they're basically just shells. And Safari is from the same DNA, and only really relevant on mobile and OS X.
So Windows users basically have just Chrome, Firefox and Edge, and Linux users basically have just Chrome and Firefox. And even on Windows, Chrome dominates, almost to the point that IE dominated back in the day.
So where's the choice? If it's just about availability of other rendering engines, people still had choice in the "optimized for IE" days. But it's mostly about rendering engines having competing market shares, and nowadays they increasingly do not.
Plus, who will keep paying search placement money to Firefox if it gets to small single digits of use? And without that money, how will development continue?
>> We really need someone to fight for our privacy and neutrality. And I really believe that this could be Mozilla's swan song.
I deeply care about privacy. I fight for privacy. I work in information security. Every day I help my customers write code a little more securely. I educate them about implementing end to end encrypted communication systems. I am slowly migrating away from systems that don't respect privacy or can't function at scale without violating privacy.
You have made a great point, and we do need big organizations to fight for privacy too. But the "someone" also has to be you and me. We have to reject operating systems like Windows 10. We have to make Linux and open source tools the ones we want to use. Even merely quitting MacBooks, which trendy firms and developers are so fond of, matters, even if just one more person does it /today/.
We have to claw our data back. Byte by byte, we must earn it back and never accept being the product again. We must suffer the almost inconceivable inconvenience of perhaps not using Amazon for every online purchase. Amazon, Facebook, Google... they are slowly eating the world and even if they are "good" that sort of absolute domination enforces a mono-culture onto the world.
I certainly hope it is not a PR stunt, but WordPress is probably the other big player in the fight for the open web. It might actually benefit all of us if Automattic starts making a lot of noise about privacy.
And ultimately, it's not as if anyone wants any of these tech giants to completely fail (well, maybe Facebook). What we want is to not have the nature of the web changed to suit the whims of a handful of companies.
I feel like we're ten years late for this concern.
At the moment, there are only two kinds of employees at Facebook: those who care and are getting irritated each time these issues are raised on HN (see here), and the dregs who bury their heads in the sand. I bet there is someone who works there who is reading this and realizing that either they will have to change their attitude, or soon the company will turn into another Enron. We don't have Enron in our midst anymore, do we?
Once one company goes down, it is only a matter of time before the rest fall in a domino sequence, because people will start wondering about the practices of its peers. I would like to think that these companies are a little more sensible than to imagine they are somehow infallible; it's better for them to change now, before it gets to the point where they are made to.
Mozilla _is_ fighting that fight. See their posts here: https://blog.mozilla.org/netpolicy/
I disagree. Google built Chrome to protect their monopoly in search, and they have protected that monopoly well, and added another one: the browser.
I don't disparage Google for doing it; in fact both are great products that I use. But to say Mozilla did it to 'give people a choice' and that they 'won' doesn't seem right to me.
I'd be interested to see the evidence for that....
This was not just competition. This was a deliberate campaign to break the web in a way where IE would work but Netscape would not.
As I said, this is from memory, and I can't find the source. I have read a copy of the Microsoft memo, though (but you only have my word for it...)
Microsoft was much more co-operative than Netscape in the early days. It was one of Microsoft's advantages when Netscape was winning and running on pure arrogance. See How the Web Was Won, High Stakes No Prisoners and a few other books for details.
Microsoft did introduce ActiveX, which Mozilla considered supporting, and then decided not to.
One of the facts of the case is that Microsoft got as close to the standards bodies as it could, and part of its marketing was that it was making IE more standards compliant than Netscape. This is actually very common in computer history (the market leader does whatever it wants to innovate, while the losers band together around standards).
In the end, of course, it didn't matter. Microsoft out-programmed Netscape and then Netscape made several disastrous decisions that amounted to browsercide.
As I said, if you've got any evidence, I'm interested. Specifically, what did Microsoft add that was non-standard and that Netscape couldn't have added?
As far as I know, not even ActiveX qualifies. I discussed this with Mitchell Baker, and she clearly said that Mozilla could have implemented ActiveX if they had wanted to.
Sure, but if Chrome de facto controls the standards, does that really matter?
I'm much less pessimistic.
Besides a cross-platform and extensible browser, we also see the following coming out of Mozilla:
* Rust, a modern low-level programming language with cutting-edge "safety" built in at zero runtime cost, luring many systems programmers.
* Servo, tomorrow's browser engine, from scratch, in Rust.
* Thunderbird, a cross-platform desktop email client (interesting for those not trusting the cloud enough).
* MDN, everything MSDN and W3Schools wish they could be. :)
A lot will revolve around privacy and safety in the future, a space that Mozilla is very well positioned to flourish in.
Chrome is a good product. But I prefer Firefox. And seeing what is becoming of Servo, I will soon start using that. For me Firefox has won, and is not at all losing. I don't need the "most popular" browser, I need the most secure one.
And when I see what programming languages Google came up with... (Seriously? Is Go the best money can buy?) Then I think Rust shows single-handedly that Mozilla beats Google in that arena as well.
1. A programming language that hasn't yet shown "escape velocity" to go beyond D and other would-be C++ successors in traction.
2. The only major application of that language... a pre-alpha browser engine, which may or may not eventually replace the engine in a browser that is seriously declining in market share with no reversal in sight.
3. A desktop email client, from which Mozilla has repeatedly made clear their intentions to divest and move on.
Mozilla is an organization with over $400 million in annual revenue. Where that money is going baffles me.
You may be interested in https://www.rust-lang.org/en-US/friends.html.
From FY2015, the actual budget is $372 million. $227 million is applied toward investments - mutual funds, bonds, etc., presumably to ensure longevity and continuity. They paid $29 million in income taxes, $6 million in employee benefits. I can't really comprehend the rest of the audit report in any practical terms I understand, but that leaves about $100 million to try to understand and rationalize (actually probably a lot less than that).
At 1,000 employees, that's $100k per person on average, which doesn't sound so far-fetched (of course in reality it's a curve, but it suddenly doesn't look so alarming). Somewhat noteworthy, they donate about $300k/year and a full-time employee to Let's Encrypt. Which, well, I personally benefited from that, so kudos.
I think you meant this disparagingly, but such a reference is remarkable. Comprehensive discussion of a massive standard, in a well-written, usable form, is a huge accomplishment.
There are other promising applications. For example, it could be used for web technologies-based applications (e.g. hybrid mobile apps, Electron, ...).
This is astonishing. Anyone looked at their accounts?
Besides that, a few things to consider:
* Mozilla is currently doing financially well, no one ever claimed otherwise. They are putting money to the side and also diversifying their income strategies, in case their market share falls even lower and revenue from search engines drops out.
* They are still by far the smallest of the big browser vendors. Google, Microsoft and Apple could all easily invest far more money than that, if they wanted to.
* Mozilla is currently developing two browser engines in parallel, implementing a new extension model, making Firefox multi-process capable and just in general dealing with a lot of technical debt. No way to truly know what Google, Microsoft and Apple are up to, but I can hardly imagine them currently doing more than Mozilla in terms of innovation and development.
Anyway, $400 million a year, if you spend it all on developers (salaries, payroll taxes, benefits, desks, monitors, etc) gets you at most 2000 people in the US. And that's if you really stretch things, honestly.
That assumes you don't need a continuous integration infrastructure (which is actually a pretty large cost in practice). And that you don't need legal, payroll, marketing. Or any project management or management at all. And that you're not doing any documentation or anything like that.
In practice, Mozilla has a bit over 1000 employees and isn't spending all its revenue (so pays taxes on whatever it doesn't spend, and saves what's left).
The amount of funding they get is in direct correlation to how much market share they have in Firefox at the time they negotiate a search deal with one of the big search sites.
I love Rust, and advocate it among other memory safe systems programming languages, but right now it still has an uphill battle against Swift, C++17 and .NET Native, on the desktop and mobile OSes.
Adoption is growing slowly, even Microsoft has recently added a Rust library to VS Code, but it will take years to become a major systems language.
It seems Mozilla is very much aware of what will disappear when WebExtensions becomes the only extension API, and what will need to happen to replace it.
Google may focus on specific types of exploits, but they let every bad actor just walk right in the front door.
Assuming you know better than to install adware or random Chrome extensions, Chrome is the most secure browser out there.
Btw, regarding extensions, before Mozilla switched to WebExtensions, you could do nasty stuff like extension-reuse attacks.
Sure, it's removable, but most of the victims of this behavior don't even know what Chrome extensions are, much less how to remove them.
This is absolutely something Google has failed to address, and it's really not accurate to refer to Chrome as "the most secure browser" while this is so rampant and unhandled.
then you really ought to be using Chrome
Servo is an "experimental" browser engine. While Chromium is cranking out features, Servo has yet to reach version 1.0.
Thunderbird - was "discontinued" by Mozilla
MDN is a mess. The topics are all over the place and it's hard to navigate. The php.net docs are much more organized.
You can criticize Go but it has a more thriving ecosystem than Rust.
As much as I'd like them to succeed, I don't think Mozilla is doing very well right now. They churn out technologically good products, but business-wise they don't know what they're doing. And without money, they won't be able to fight big companies like Google, MS, and FB.
This is just one of many examples!
This would be a more convincing point if any of memory safety, parallel styling, parallel layout, retained mode rendering as suggested by the GPU vendors, and so forth were on Chromium's roadmap.
Servo is a nice goal and a technological feat, but it's a big business risk for Mozilla without a clear return of value.
> Servo is a nice goal and a technological feat, but it's a big business risk for Mozilla without a clear return of value.
Doing nothing is an even bigger risk.
I don't understand why "you shouldn't even try to improve" is such a popular sentiment on HN, of all places. What possible benefit can complacency bring? Why advocate it?
Disclaimer: I work on Chrome.
This is a case in which looking at the numbers without digging in more closely can lead to misleading conclusions. Yes, the Chrome Profiler reports that a lot of time is spent in JS (though ~25% of total CPU time on styling, etc. is not what I would call "paltry"). But what is that JS doing? It's usually not doing raw computation but rather doing custom layout, interacting with the DOM, etc. People do synchronous reflows (no matter how much you evangelize, people will still do it), which means that layout performance affects what looks like script time. And, due to ads, a lot of the performance cost is cross-domain iframes, which have no reason to run on the main thread (process isolation is too heavyweight to scale this far, which is why Chrome-style Site Isolation isn't a solution).
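For readers unfamiliar with the pattern, a forced synchronous reflow looks like this (a sketch; the ".item" selector is made up):

    const items = Array.from(document.querySelectorAll<HTMLElement>(".item"));

    // Thrashing: each write invalidates layout, and each read then forces
    // layout to run synchronously -- N full layouts for N elements.
    for (const el of items) {
      el.style.width = "50%";
      console.log(el.offsetHeight);
    }

    // Batched: all reads first, then all writes -- a single layout.
    const heights = items.map((el) => el.offsetHeight);
    items.forEach((el, i) => { el.style.height = heights[i] + "px"; });

In a profiler, the thrashing loop's layout cost shows up under the script that triggered it, which is part of why "JS time" is bigger than it looks.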
The real problem with browsers is that so much is synchronous. We need to make everything as responsive as possible, and the way to do that is to aggressively multithread. Unfortunately, that's very hard to do in existing browser codebases. Hence Servo.
The solutions that the Chrome team keeps proposing (Custom Layout, Custom Paint, CSS Compositing, etc.) are all targeted toward rendering, and they're essentially short-term band-aids at that. If rendering really weren't a problem, then we wouldn't be spending all this time on Google's Houdini proposals! If layout were fast, we wouldn't see people implementing layouts in JS, and therefore we wouldn't need Custom Layout. If painting were fast, then we could use SVG and CSS and not feel like Custom Paint is necessary. If the main thread weren't so bogged down all the time, then people wouldn't see the need to move a random subset of the Web platform to the compositor thread.
To be honest, I think that Houdini is largely misguided: there is a huge amount of performance left on the table that would obviate the need for Houdini if we simply chased it.
I do layout in JS because the existing layout capabilities don't cover my use cases. I create complicated visualisations in SVG, and SVG's layout capabilities are so pathetically anaemic that I have no choice but to take over and use a mixture of D3 and custom JS to arrange things.
Similarly, I don't do synchronous layouts because I'm a moron who doesn't understand performance, but because I need to measure the size of various data-bound elements and feed those dimensions back into my custom multi-pass layout algorithm.
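Roughly, the pattern is render provisionally, measure, then place using the real sizes. A stripped-down sketch (the element IDs and data are made up):

    const svg = document.querySelector<SVGSVGElement>("#chart")!;
    const SVG_NS = "http://www.w3.org/2000/svg";
    const labels = ["alpha", "beta", "gamma"];

    // Pass 1: create the data-bound text nodes so the browser can size them.
    const nodes = labels.map((text) => {
      const t = document.createElementNS(SVG_NS, "text");
      t.textContent = text;
      svg.appendChild(t);
      return t;
    });

    // Pass 2: measure -- this is the unavoidable synchronous layout.
    const widths = nodes.map((t) => t.getBBox().width);

    // Pass 3: feed the measured sizes back into the custom layout.
    let x = 0;
    nodes.forEach((t, i) => {
      t.setAttribute("x", String(x));
      t.setAttribute("y", "20");
      x += widths[i] + 8; // 8px gap between labels
    });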
AFAIK, Houdini is an actual attempt to solve both of these problems: it will let me run my custom layout algorithm in a worklet, and give it access to the font metrics and element sizes it needs to run efficiently.
The Houdini people seem to be the only web platform people who actually "get it" with respect to these kinds of problems. Everyone else seems to be either ignorant or dismissive that these problems even exist.
Of course, I agree with the rest of your points.
Chrome's profiler actually shows synchronous layouts and style recalcs forced from JS as layout/style, so it's not misleading in that way. The traces I've looked at (admittedly, not something I do terribly often) showed those to be typically ~20% or less of the time spent while content is loading.
> And, due to ads, a lot of the performance cost is cross-domain iframes, which have no reason to run on the main thread (process isolation is too heavyweight to scale this far, which is why Chrome-style Site Isolation isn't a solution).
I suppose the proof will be in the pudding, but I think process isolation could actually be a solution here. You don't have to have a separate process for every cross origin iframe. This is something desirable to do anyway for the security benefits so you might as well leverage it for performance as well. IMHO, this is the biggest problem with web perf and I don't see Servo as directly addressing it.
> If layout were fast, we wouldn't see people implementing layouts in JS
Are people re-implementing layout/paint because it's slow? Or because we didn't bake in the little detail they want into the platform through a multiyear standardisation process? I think it's naive to think developers will just stop writing JS if the browser gets x% faster.
> If the main thread weren't so bogged down all the time, then people wouldn't see the need to move a random subset of the Web platform to the compositor thread.
I disagree. Developers will always find something to fill it with. IMHO, the real goal - and the reason for the compositor thread - is to separate rendering from input and make it difficult for a page to tie the two together (or rather, easy to keep them separate). A user won't notice if the page takes an extra few hundred ms to render. She will notice if her scroll is delayed by 200ms.
I don't see Houdini as the main thrust here, there's lower hanging fruit. Practically speaking, non-blocking event handlers have had a bigger impact on user experience than anything I've seen in the last few years. I expect IntersectionObserver to have a big impact too. I think chasing the long tail of UX antipatterns, while not as sexy, is far more productive.
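(For concreteness, those two patterns look something like this; this is a sketch, and the data-src convention and updateHeader are assumed:)

    function updateHeader(): void {
      // stand-in for cheap scroll-linked work
    }

    // Non-blocking handler: `passive` tells the browser it may scroll
    // immediately instead of waiting to see if we call preventDefault().
    window.addEventListener("scroll", updateHeader, { passive: true });

    // IntersectionObserver: start loading images only near the viewport.
    const io = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src!; // kick off the real load lazily
        io.unobserve(img);          // each image only needs this once
      }
    });
    document.querySelectorAll<HTMLImageElement>("img[data-src]")
      .forEach((img) => io.observe(img));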
In any case, I'd be happy to be proven wrong here - Servo is doing some really cool stuff and no one would be sad to see things get faster; I just don't see it as the silver bullet it's often promoted to be. I think this is a great example of how having multiple rendering engines is healthy for the web. Let's all innovate independently and let the results speak for themselves :).
Yeah, it's too bad that Google's Chrome-first/Chrome-only approach to Web development these days is making that increasingly difficult to sustain.
If you can multithread iframes, then I think Amdahl's Law will start to kick in unless you improve style and layout performance. For instance, if I block the ads on washingtonpost.com, then style + layout is nearly as big as script.
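(Numerically, that's Amdahl's Law: the best-case speedup from parallelizing a fraction 1 - s of the work across n threads is

    speedup = 1 / (s + (1 - s) / n)

so if the serial style + layout share s is around half the total, as in that Washington Post trace, even infinitely many threads for script cap you at roughly a 2x speedup.)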
But in any case, you have to look at how that affects the user experience. If done properly—i.e. everything remains responsive—the user won't notice slow script very much.
> I suppose the proof will be in the pudding, but I think process isolation could actually be a solution here. You don't have to have a separate process for every cross origin iframe. This is something desirable to do anyway for the security benefits so you might as well leverage it for performance as well. IMHO, this is the biggest problem with web perf and I don't see Servo as directly addressing it.
Servo definitely does address it, because it runs all cross-origin iframes in separate threads (and has from the beginning). Even same-origin iframes get separate style/layout threads, even though they share the same DOM thread. It can also do process isolation, as ipc-channel lets us abstract over the thread/process distinction.
I think multithreading is a more scalable solution for performance than process isolation, because with process isolation you need heuristics to avoid ballooning the number of processes out of control. You could have a throttled "ad process", but then you'd have to figure out what the ads are and hope you don't mess up, or else you might hurt iframes that matter. It's a lot simpler to just put separate origins in separate threads to begin with.
> Are people re-implementing layout/paint because it's slow?
Yes, because they've been told to use CSS transforms instead of real layout because those "run on the GPU".
> I think it's naive to think developers will just stop writing JS if the browser gets x% faster.
They won't, but we can certainly help things along a lot by making it easier to just use the platform. (I think we're in agreement here.)
> IMHO, the real goal - and the reason for the compositor thread - is to separate rendering from input and make it difficult for a page to tie the two together (or rather, easy to keep them separate). A user won't notice if the page takes an extra few hundred ms to render. She will notice if her scroll is delayed by 200ms.
I agree with the general principle, but I disagree with way it's implemented in existing browsers. The compositor thread is super limited; it's a thing that handles a weird subset of CSS that you have to be careful not to fall out of or else your performance drops. This is no way to treat Web developers! It's a historical accident, too, one that stems from the fact that Core Animation was developed independently of Mobile Safari for the iPhone 2G, Mobile Safari retrofitted a subset of CSS to that API, and then everyone copied Mobile Safari.
It makes more sense to have a dedicated thread for all style and layout and another thread for all painting, eliminating the paint/composite distinction. No matter what you do, the styling/layout runs off main thread, and the painting runs in yet another thread. That's Servo/WebRender's design. It's not easily compatible with existing engines, but that's why Servo exists :)
> I don't see Houdini as the main thrust here, there's lower hanging fruit. Practically speaking, non-blocking event handlers have had a bigger impact on user experience than anything I've seen in the last few years. I expect IntersectionObserver to have a big impact too. I think chasing the long tail of UX antipatterns, while not as sexy, is far more productive.
The problem with focusing just on adding new stuff to the platform is that, while Google's evangelism operation is impressive, it's hard to get existing Web developers to move to new stuff. It's the classic Itanium vs. x86 inertia problem. To take Washington Post as an example, they layout thrash like crazy when loading, despite it being known for years that that's a massive performance problem. By contrast, by making existing patterns faster, we improve the user experience for all Web sites, old and new.
To be clear, we should both introduce new APIs (IntersectionObserver is a very important one) and work on improving performance of the old. They're not mutually exclusive.
> In any case, I'd be happy to be proven wrong here - Servo is doing some really cool stuff and no one would be sad to see things get faster; I just don't see it as the silver bullet it's often promoted to be. I think this is a great example of how having multiple rendering engines is healthy for the web.
Oh, it's hardly a silver bullet. Those don't exist. By the same token, though, new APIs and things like AMP aren't a silver bullet either ;)
> I think this is a great example of how having multiple rendering engines is healthy for the web.
I agree, which is why I disagree with the "Chrome Won" defeatist attitude of the article.
True. But Mozilla is betting on Servo, which I see as a big question mark as well. Are you sure that people will download Firefox over Chrome when Servo lands? How long before it will finally ship? 1, 2 years? Was there market research beforehand? Like I said, it's a big question mark. Can't blame me if I have doubts after Firefox OS.
> I don't understand why "you shouldn't even try to improve" is such a popular sentiment on HN, of all places. What possible benefit can complacency bring? Why advocate it?
I don't know, and I don't have anything to do with it. Personally, I think the popular opinion on HN is pro-Rust and pro-Servo.
> I'd be more than willing to hear your technical thoughts...
Wait, are you using the "if you can't talk technical, GTFO" card on me? We are straying from my original comment.
To make this perfectly clear:
I'm not against improvement. I'm against Mozilla's poor business decisions. You can see that was my point in the original comment.
No, I'm not downplaying all your hard work. But I've seen this happen before. It happened to me as well: "Oh, this will be great when it ships", at the expense of the company. I'm trying to give an alternative opinion here, contrary to the constant positive reinforcement when Rust and Servo are posted on HN. Nothing personal.
Bits have already landed, and another major piece is being worked on right now. That's the whole idea of Quantum; no need to wait till Servo is done for Firefox users to start seeing benefit.
Are you sure that people will download Firefox over Chrome when Mozilla does nothing?
I think Go has already become unavoidable for those working in cloud-like environments, thanks to Docker and Kubernetes.
Anyone working in those areas needs a working knowledge of Go, even if he/she dislikes the language.
It's definitely put Go firmly on my radar recently as something to pick up, and since I don't already know C or C++ it feels like a good starting point.
There are not a ton but there are more than zero.