
How Stylo Brought Rust and Servo to Firefox - mnemonik
http://bholley.net/blog/2017/stylo.html
======
kbd
It's gratifying to see how successfully the same organization has learned from
the debacle that was the rewrite from Netscape 4 to Mozilla in the first
place. That time, they didn't release for years, losing market share and
ceding the web to Internet Explorer for the next decade. Joel Spolsky wrote
multiple articles[1][2] pointing out their folly.

This time, their "multiple-moonshot effort" is paying off big-time because
they're doing it incrementally. Kudos!

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part

[2] https://www.joelonsoftware.com/2000/11/20/netscape-goes-bonkers/

~~~
mjw1007
Joel is making two separate claims there, though he doesn't cleanly
distinguish them.

One is that rewriting from scratch is going to give you a worse result than
incremental change from a technical point of view (the « absolutely no reason
to believe that you are going to do a better job than you did the first time »
bit).

The second is that independent of the technical merits, rewriting from scratch
will be a bad commercial decision (the « throwing away your market leadership
» bit).

We now know much more about how this turned out for his chosen example, and I
think it's clear he was entirely wrong about the first claim (which he spends
most of his time discussing). Gecko was a great technical success, and the
cross-platform XUL stuff he complains about turned out to have many advantages
(supporting plenty of innovation in addons which I don't think we'd have seen
otherwise).

It's less clear whether he's right about the second: certainly Netscape did
cede the field to IE for several years, but maybe that would have happened
anyway: Netscape 4 wasn't much of a platform to build on. I think mozilla.org
considered as a business has done better than most people would have expected
in 2000.

~~~
aidenn0
In Joel's defense, we don't know that gecko is better than X years of
incremental changes to Netscape 4.

One of the big issues with software engineering advice is that it is really
hard to find apples-to-apples comparisons for outcomes.

~~~
mjw1007
True.

I think we can say that Gecko ended up technically better than incremental
changes to Internet Explorer, which I think was starting off from a more
maintainable codebase than Netscape 4. That's hardly conclusive but it's some
evidence.

~~~
lucideer
> _we can say that Gecko ended up technically better than incremental changes
> to Internet Explorer_

> _..._

> _That's hardly conclusive but it's some evidence._

Given Internet Explorer's monopoly position, and consequent disincentive to
compete, it's not really the best comparison.

Compare to something like Opera Presto, a codebase that - while younger than
Netscape - predates Internet Explorer, which underwent incremental changes
while remaining in a minority market position. It was killed by market forces,
but I doubt anyone would contest it was a badly put together piece of software
in the end.

Konqueror is another example. It's not quite as compelling, as KHTML itself
has fared less well than its forks, Safari WebKit has never exactly been a
leader in engine advancement, and Chrome's innovations, while incremental,
were largely built on a scratch-rewrite-and-launch-all-at-once of one
component (V8). However KHTML/Webkit/Blink is still pretty much an incremental
rewrite success story.

~~~
iopq
I actually used Opera because it allowed me to stop websites from hijacking my
browser. No capturing my right click or any of this silly bullshit. The exact
UI/features I want. Opera 6-12 were good times.

------
thomastjeffery
> Stylo was the culmination of a near-decade of R&D, a multiple-moonshot
> effort to build a better browser by building a better language.

This is the most impressive, and useful aspect of all the recent work in
Firefox. Rust is an amazing language. It really brings something new to the
table with its borrow checker.

The fact that rust was created as part of a greater effort to work on a web
browser is amazing.

~~~
stingraycharles
What I wonder, and I do not mean this in a negative way, is whether this
would have happened in a more commercially oriented organisation. Mozilla
remains a foundation, and I consider Rust a fruit of their labour in itself.

To put it another way, I find it hard to justify developing Rust just for a
web browser. But if you consider it from the perspective of a foundation
developing tools for the developer community as a whole, it makes much more
sense.

~~~
overhang
Imagine if some company were to make a completely new language just for an
IDE.

~~~
signal11
Is that a reference to JetBrains and Kotlin? :-)

~~~
sjwright
Or emacs. :-)

------
haberman
I'm a huge, huge fan of Rust, Stylo, Servo, WebRender, etc. Hats off to
everyone involved.

~~~
pcwalton
Thanks for the kind words!

~~~
haberman
You're welcome! I guess while I'm at it I should mention that I am a huge fan
of your HN comments also. I learn more from them on a technical level than
probably anyone else on this site.

------
FlyingSnake
> They borrowed Apple’s C++ compiler backend, which lets Rust match C++ in
> speed without reimplementing decades of platform-specific code generation
> optimizations.

This was a pretty smart move by the Rust team, and this gave them a rock solid
platform to go cross-platform. In words of Newton, "If I have seen further it
is by standing on the shoulders of giants". Kudos team Rust, and let's hope
they eat C++'s lunch soon.

~~~
sp332
Does this just mean LLVM? Because it's weird to describe it as "Apple's",
Apple just uses it.

~~~
fulafel
It's slightly oversimplified. Apple hired Chris Lattner from academia to ramp
up work on LLVM & create Clang. (Apple always hated GNU stuff so the motive
might not have been purely technical.)

~~~
GeekyBear
There were a couple of purely technical reasons for Apple to pursue Clang.

1.) GCC was designed for command line use, not to provide integration (for
instance, debugging info) into a modern IDE.

2.) Objective-C was not a priority for those who maintained GCC, but was for
Apple.

[https://en.wikipedia.org/wiki/Clang](https://en.wikipedia.org/wiki/Clang)

~~~
fulafel
The Objective-C story is a little more complicated. Originally Apple had a
hostile fork of GCC and even initially refused to provide source code until
the FSF lawyers got involved. It's a wonder the GCC Objective-C support is as
good as it is, considering the politics.

Debugging info works similarly in all compilers (GCC, MSVC etc) - it's saved
in the compiler output and read by the tools like IDEs.

~~~
GeekyBear
>For instance, GCC uses a step called fold that is key to the overall compile
process, which has the side effect of translating the code tree into a form
that looks unlike the original source code. If an error is found during or
after the fold step, it can be difficult to translate that back into one
location in the original source.

[https://en.wikipedia.org/wiki/Clang#Design](https://en.wikipedia.org/wiki/Clang#Design)

------
VeejayRampay
It might sound very naive to say this, but I found it very cool that it was
someone from the States, an Aussie and a Spaniard working on this, open source
is something magical when you think about it. Props to everyone involved, all
those projects sound like a lot of fun for a good cause.

~~~
Manishearth
Both the stylo and servo teams are extremely distributed.

For servo we've had long periods of time where no two employees are in the
same office (it helps that most folks are remote).

In both teams we found it impossible to pick meeting times that work for
everyone so we instead scheduled two sessions for the recurring meetings, and
you can show up for one of them (the team lead / etc are usually at both so
they're scheduled such that it is possible for them to attend both). Though
for Servo we eventually stopped the meeting since we're pretty good at async
communication through all the other channels and the meeting wasn't very
useful anymore.

Also, to clarify, while that was the initial team, shortly after getting
Wikipedia working the team grew with folks from the Servo and Gecko teams.
Eventually we had folks living in the US, Paris, Spain, Australia, Japan,
Taiwan, and Turkey working on it. As well as volunteer contributors from all
over the place.

------
linkregister
I love Firefox Quantum and it has replaced Chrome as my browser at home. Its
memory consumption is far lower with the same number of tabs open.

That said, why does it perform slower than Chrome on most benchmarks? Is it
due to the Chrome team doing much more grunt work regarding parallelism and
asynchronous I/O? Or are there still features in the current Firefox build
that still call the original engine?

Does Rust have a runtime penalty as Golang does?

~~~
steveklabnik
> most benchmarks

Which benchmarks are you talking about? It depends on what those benchmarks
measure.

For example, a lot of the Quantum work was in user-perceived UI latency;
unless the benchmark is measuring that, and I imagine that's a hard thing to
measure, it's not going to show up.

> Does Rust have a runtime penalty as Golang does?

Rust has the same amount of runtime as C does: very very little.
https://github.com/rust-lang/rust/blob/master/src/libstd/rt.rs

~~~
majewsky
Looking at rt.rs:

    sys::stack_overflow::init();

I probably don't know what this function does, because my initial guess is not
very comforting. :)

~~~
rhencke
It initializes stack overflow handling. You can read about what that means
here, for Unix:

https://github.com/rust-lang/rust/blob/71340ca4e181b824bcefa887f1be60dd0b7352ce/src/libstd/sys/unix/stack_overflow.rs#L79

------
DonbunEf7
"So it’s pretty clear by now that “don’t make mistakes” is not a viable
strategy."

This is more generally known as Constant Flawless Vigilance:
https://capability.party/memes/2017/09/11/constant-flawless-vigilance.html

------
Brakenshire
One thing I've been wondering is that Stylo and Webrender can parallelize CSS
and Paint, respectively, but I haven't seen any mention in Project Quantum
(the project to integrate Servo components into Firefox/Gecko) of any
component to parallelize layout, which is probably the biggest bottleneck on
the web at the moment.

Is parallel layout something which can only be done through a full rewrite,
hence with Servo, and bringing Servo up to full web compatibility, or can this
be handled through the Project Quantum process, of hiving off components from
Servo into Firefox?

~~~
kibwen
The OP links a video from 2015 that implies that one of the advantages of
making Stylo the first Servo component in Gecko is that the next phase in the
pipeline, layout, will be able to benefit from having a well-defined interface
in place. I'm curious about this as well!

~~~
bholley
Since I gave that talk, it's become more clear to me that servo's layout
engine is a lot farther from feature-complete than the CSS engine was. So my
hunch is that the granularity of incrementalism we used for stylo may not be
workable for layout.

That said, we are absolutely going to explore opportunities for more
Rust/Servo in layout, so we just need to find the right strategy. One
incremental step I'm interested in exploring is to rewrite Gecko's frame
constructor in Rust using the servo model, but have it continue to generate
frames in the Gecko C++ format. This would give us rayon-driven parallelism in
frame construction (which is a big performance bottleneck), while being a lot
more tractable than swapping out all of layout at once. Another thing to
explore would be borrowing Servo's tech for certain subtypes of layout (i.e.
block reflow) and shipping it incrementally.

Each of these may or may not share code with Servo proper, depending on the
tradeoffs. But Servo has a lot of neat innovation in that part of the pipeline
(like bottom-up frame construction and parallel reflow) that I'm very
interested in integrating into Firefox somehow.

We're going to meet in Austin in a few weeks and discuss this. Stay tuned!

------
fulafel
Congratulations, it's really an unparalleled performance of parallel
performance.

~~~
vanderZwan
Speaking of which: does anyone know if some new optimization landed in the
beta versions a couple of days ago? Or if some bug that caused delays on
Linux got fixed?

I updated my developer version yesterday and it was as if Firefox - already
ludicrously fast compared to before - turned on the turbo booster.

Obviously, I'm not complaining ;)

~~~
wldcordeiro
The next Servo component is WebRender, and I'm not certain whether it has
been flipped on for Developer Edition, but that would certainly affect speed.

~~~
pimeys
It's only on nightly and still buggy on certain chipsets. At home I have a
Kaby Lake system and I'm using WebRender for daily browsing without problems.
At work I use a Skylake system which has trouble with some sites, such as
https://tradingview.com

If you want to read the latest info, there is a good status post from a couple
of days ago:

https://mozillagfx.wordpress.com/2017/11/27/webrender-newsletter-10/

------
agentultra
This is a great story. For large, existing code-bases _incremental_ change is
the only strategy I've seen work. Kudos to the team behind it.

------
pitaj
One thing I've noticed about Firefox, especially on mobile, is that transform
animations are pretty janky.

Does anyone know if this is being worked on? Should I submit a bug report?

~~~
pcwalton
Feel free to file a bug report with your hardware, Firefox version, and test
case, yes. These often boil down to simple bugs (hardware-specific or page-
specific) that can be quickly fixed when isolated.

The medium-term effort to revamp the graphics stack is WebRender. Note that,
like Stylo, WebRender is not just meant to achieve parity with other browsers.
It's a different architecture entirely that is more similar to what games do
than what current browsers do.

~~~
spiderfarmer
Firefox's CSS transform/animation performance is terrible on macOS. I filed a
bug report but there's no interest in solving it, unfortunately.

[https://bugzilla.mozilla.org/show_bug.cgi?id=1407536](https://bugzilla.mozilla.org/show_bug.cgi?id=1407536)

~~~
pcwalton
Those are very general conclusions to draw from one test case. The bouncing
ball test runs at 60 FPS for me on macOS; most of the time is spent in
painting, as expected. Likewise, Stripe scrolls at 60 FPS for me.

I should note that the bouncing ball test is the kind of thing that WebRender
is designed to improve—painting-intensive animations—so it's obviously untrue
that there's no interest in improving this sort of workload. When running your
bouncing ball test in Servo with master WebRender, I observed CPU usage of no
higher than 15% (as well as 60 FPS)…

------
m0th87
> For example, register allocation is a tedious process that bedeviled
> assembly programmers, whereas higher-level languages like C++ handle it
> automatically and get it right every single time.

Ideal register allocation is NP-complete, so a compiler can't get it right
every single time.

I'm not sure how good in practice modern compilers are at this, but would be
curious to know if there's some asm writers who can actually consistently
outperform them.

~~~
phkahler
Optimal register allocation has been solvable in polynomial time for more
than ten years - for some definition of optimal. IIRC it started with
programs in SSA form and has dropped that requirement more recently. Modern
GCC uses SSA form and I think LLVM might too.

~~~
jcranmer
GCC and LLVM do not retain SSA form by the time register allocation happens
(they both convert to a non-SSA low-level IR before then).

It's also worth pointing out that "optimal" in theory doesn't necessarily
correspond to optimal in practice. The hard problem of register allocation
isn't coloring the interference graph (since there's not enough registers most
of the time), it's deciding how best to spill (or split live ranges, or insert
copies, or rematerialize, or ...) the excess registers. Plus, real-world
architectures also have issues like requiring specific physical registers for
certain instructions and subregister aliasing which are hard to model.

In practice, the most important criterion tends to be to avoid spilling inside
loops. This means that rather simple heuristics are generally sufficient to
optimally achieve that criterion, and in those cases, excessive spilling
outside the loops isn't really going to show up in performance numbers. Thus
heuristics are close enough to optimal that it's not worth the compiler time
or maintenance to achieve optimality.

~~~
qznc
Yes, the fun in compilers: Even if every phase and every optimization actually
produces optimal results, the combination is probably not optimal.

One deep problem is that there is no good optimization goal. Today's CPUs are
too complex and unpredictable performance-wise.

Another problem is: Register pressure is one of the most important things to
minimize, but how can the phases before register allocation do that? They use
a surrogate, like virtual registers, and thus become heuristics even if they
solve their theoretical problem optimally.

------
bdmarian
I really like the new Fox. I've tried switching over completely but I think
it's causing some random BSODs on my Latitude E5570. The laptop does have a
second Nvidia graphics card, for which there is no driver installed. (Don't
ask :) I'm perfectly fine with the onboard Intel and I much prefer the extra
hours of battery life.)

------
vatotemking
> The teams behind revolutionary products succeed because they make strategic
> bets about which things to reinvent, and don’t waste energy rehashing stuff
> that doesn’t matter.

This needs to be emphasized more

------
Vinnl
This is a great write-up that gives me warm fuzzy feelings.

What also is interesting for me to realise, though, is that a lot of this was
happening at the same time as Mozilla was largely focused on Firefox OS, and
receiving a lot of flak for that.

It's a shame that Firefox OS failed, but it was clear that they had to try
many different things to remain relevant, and it's excellent to see that one
of those is very likely to pay off. Even though Rust might've been dismissed
for the same reasons Firefox OS was.

------
JupiterMoon
FF has crashed more times for me in the last week than in the previous year,
across multiple installs on different Linux systems. The last crash was with
a clean profile.

And then there's the disappearing dev tools - that's fun.

EDIT: I hope that there is something weird with my systems. But I fear that
the rush to push this out might have been a little hasty.

EDIT 2: Apart from the crashes the new FF has been nice. I've been able to
largely stop using Chromium for dev work - so not all is bad.

~~~
mbrubeck
You can go to "about:crashes" to get some more information about reported
crashes. If you open a crash report and click the "Bugzilla" tab, you can find
out if a bug is on file for that specific stack trace.

~~~
JupiterMoon
Cool. I'll check this out next time. Any way to report the disappearing dev
tools?

~~~
dochtman
Just file it on https://bugzilla.mozilla.org/; you can log in with your
GitHub credentials.

------
Annatar
_For example, register allocation is a tedious process that bedeviled assembly
programmers,_

Yet more propaganda. I’ve been part of the cracking and demo scene since my
early childhood. If you didn’t code in assembler you might as well not have
taken part in it at all, because small fast code was everything. None of us
ever had an issue with register allocation, nor do we face such issues today.
Not 30+ years ago, not now.

------
thomastjeffery
> the breadth of the web platform is staggering. It grew organically over
> almost three decades, has no clear limits in scope, and has lots of tricky
> observables that thwart attempts to simplify.

It would be great to create the html/css/javascript stack from scratch, or at
least make a non-backwards-compatible version that is simpler and can perform
better. HTML5 showed us that this can work.

~~~
dullgiulio
Yeah, but Firefox is already struggling while supporting all the possible
standards and more ("sorry, our site is better viewed with Google IE4... ehm,
Google Chrome").

The whole Mozilla strategy of corroding Firefox piece by piece is actually
very professional. Big backwards-incompatible transitions in technology almost
always fail.

~~~
Manishearth
> sorry, our site is better viewed with Google IE4... ehm, Google Chrome

FWIW this is usually due to folks doing performance work in only one browser
or not really testing well and slapping that label on after the fact.

Or stuff like Hangouts and Allo where they use nonstandard features.

The major things Firefox doesn't support that Chrome does are U2F (it does
support it now, but flipped off, will flip on soon I think) and web components
(support should be available next year I guess; this kinda stalled because of
lots of spec churn and Google implementing an earlier incompatible spec early
or something.)

~~~
kazagistar
I've been using a U2F plugin that works everywhere except Google, which
insists that you cannot possibly have U2F on Firefox.

------
gjem97
What parts of FF 57 are written in Rust? Just Stylo?

Edit: I don't intend for this to sound like I'm complaining, just interested.

~~~
cpeterso
Stylo is new in Firefox 57, but Mozilla has shipped other Rust code in earlier
Firefox versions:

[https://wiki.mozilla.org/Oxidation#Rust_components_in_Firefo...](https://wiki.mozilla.org/Oxidation#Rust_components_in_Firefox)

Completed:

    MP4 metadata parser (Firefox 48)
    Replace uconv with encoding-rs (Firefox 56)
    U2F HID backend (Firefox 57)

In progress:

    URL parser
    WebM demuxer
    WebRender (from Servo)
    Audio remoting for Linux
    SDP parsing in WebRTC (aiming for Firefox 59)
    Linebreaking with xi-unicode
    Optimizing WebAssembly compiler backend: cretonne

~~~
phkahler
Can anyone explain what a URL parser does and why it's so complex? I feel like
there's a whole interesting story lurking there.

~~~
steveklabnik
A URL parser takes a string with a URL in it, and returns some sort of data
structure that represents the URL.

It's complex because URLs are complex; I believe this is the correct RFC:
[https://tools.ietf.org/html/rfc3986](https://tools.ietf.org/html/rfc3986)
It's 60 pages long.

(That said, page length is only a _proxy_ for complexity, of course)

~~~
noir_lord
As someone who once tried to write code to do it to avoid pulling in a
dependency: _never_ again.

It's not just that the spec is 60 pages long, but that the actual behaviour
out in the real world is miles away from the spec. The web is a complex place
where standards are... rarely standard.

~~~
hsivonen
When writing code it's a much better idea to write according to
[https://url.spec.whatwg.org/](https://url.spec.whatwg.org/)

------
mi_lk
Off-topic, but does anyone know why in FF on Mac the pinch to zoom
functionality is disabled by default? Is there any performance concern?

~~~
nicalsilva
It's a matter of allocating time to implement the missing parts and get it to
work properly. Right now the people who could do this are working on other
things but it will get done eventually.

------
xtf
Thanks Mozilla! Going to donate.

------
wyldfire
Anecdote regarding this new FF:

I would find frequent cases where my system would stall for 10-20s (could not
caps lock toggle, pointer stopped moving). I almost always have just Chrome
and gnome-terminal open (Ubuntu 16.04). I had attributed it to either a
hardware or BIOS/firmware defect.

Now, after switching to Firefox I have gone a week without seeing those
stalls.

YMMV -- I never bothered to investigate, it could be something as simple as
slightly-less-memory-consumption from FF, or still a hardware defect that FF
doesn't happen to trigger.

~~~
throwaway613834
This sounds vaguely like what I've been experiencing on recent Chrome
versions. On Windows I've had Chrome randomly hang... initially on the
network, then after a few seconds even the UI freezes. When that happens, if I
launch Firefox and try to access the network, it hangs too. But programs that
don't try to access the network don't hang. Then after maybe (very roughly) 30
seconds, it all goes back to normal. No idea what's been going on but it seems
like you might be experiencing a same thing, and it seems like a recent bug on
Chrome versions, not a firmware issue... I'm just confused at how it affects
other programs. It didn't use to be like this just a few weeks ago.

~~~
tdb7893
I notice chrome having these issues if I'm running out of memory or if another
program is trying to read from the hard drive at the same time.

~~~
throwaway613834
I am most definitely not running out of memory or having other programs
active. I easily have like > 10 GB free RAM and it happens when nothing else
is open.

Like I was suggesting earlier -- my habits haven't changed. It's started
doing this quite recently. It wasn't like this a few weeks ago.

------
xstartup
I use Firefox/Rust every day. Thanks for one of the most interesting
languages!

------
nopit
Installed the new Firefox; had one tab running for a few days which had
allocated more than 10 GB of virtual memory. I had high hopes but I'm
sticking with Chrome.

~~~
DigitalJack
Forgive my ignorance, but does having X amount of virtual memory allocated
necessarily correspond to physical memory (and storage for that matter?)

~~~
deathanatos
No, it doesn't _necessarily_ correspond. It could be an indicator, though. RSS
would be more useful, IMO.

That said, even if the poster is correct, it isn't necessarily wrong either.
AFAICT, nothing stops JS on a page from allocating that much memory, and
"leaking" it (e.g., holding on to the JS object, maybe accidentally in a giant
list, and not making use of it). It isn't the browser's fault if JS is
actually "using" that much RAM.

