This Week In Servo 69 (servo.org)
212 points by simondelacourt on June 28, 2016 | 65 comments



> https://treeherder.allizom.org/perf.html#/graphs?timerange=2...

> a new automated page load testing performance infrastructure that he has been testing. It compares Servo daily builds against Firefox for page load on a subset of the Alexa Top 1000 sites. Check it out!

I don't understand the graphs at that link. Every color is labeled "servo" and "linux64", so which lines are Firefox (Gecko) and which are Servo?


Not to be negative, but I don't know if the benchmarks tell you much, given that Servo still has a ways to go before it renders sites properly.

I periodically test it out (most recently yesterday, in fact), and on popular sites like espn.com and cnn.com it doesn't render them properly. Meaning, it's not that the padding is a little off; parts of the page simply don't render at all.

This is not a criticism of their efforts, which are impressive, just pointing out that it might not be ready to compare in terms of performance.


You are generally correct, and this is why we don't often publish numbers.

However, to counteract this, we have made an effort to first implement the things we expect to have the biggest effect on performance.

There are also sites included in our page load suite that Servo renders quite well, and should be a fair comparison.

Another reason benchmarks are hard is that many of Servo's individual pieces are blazing fast, and we do many tasks in parallel, but end-to-end performance still needs work and tuning, which is why our initial page load numbers are below Firefox's. This is in large part because the community has concentrated on those hard parts and not spent any time on things like the network stack or disk caching.

A key point is that there is so much low hanging fruit that it is falling off the tree by accident (as seen in the suffix list PR). New contributors can make a huge impact here with minimal effort.


Those that start with "gecko." are the Gecko benchmarks; the others are Servo. I'm not sure of the difference between the "servo" and "linux64" links.

(Edited for clarity).


The yellow and green lines have "gecko" at the start of their titles in the key.


The colors actually change after each page refresh.


New colours every time you reload the page.


Hey, maybe this isn't the right place, but I don't know where else to ask this question:

I'd like to get into Rust development and maybe help out with Servo (it seems like an awesome project), but I have very limited experience with C/C++ (Arduino and openFrameworks). What is a good way/place to get comfortable with Rust?

(I am using Windows; can you even develop with Rust on Windows?)


I'm currently reading this:

https://doc.rust-lang.org/book/

It's very nice, and the people on IRC are extremely helpful.


For those reading, just remember to check out #rust on irc.mozilla.org - it's far more active than the freenode channel. Here's a list of the current rust-related channels on Mozilla's IRC server: https://www.rust-lang.org/community.html#irc-channels


I've found the Rust IRC channel to be awesome too!


Also http://rustbyexample.com/

Windows is a supported platform; MSI installers are available on https://www.rust-lang.org/


https://servo.org also has a "Contributing" section, which links to bugs tagged as "easy" for new contributors.


There's a helper page for this: https://starters.servo.org/


The "easy" bugs get picked up super fast, so it seems you need to keep a close eye on the issue tracker.


You can also ask in IRC and we'll try to think of some for you. We try to keep posting new ones, but we get half a dozen new contributors per week, so it can be hard to keep up with the awesome demand.


Yes, you can develop in Rust on Windows. There's even a Visual Studio extension for Rust: https://github.com/PistonDevelopers/VisualRust


You can develop Rust on Windows. Edit: things have changed since I last looked at a Windows build. Ignore me.


Servo compiles just fine on Windows once you set up MSYS2. Instructions here: https://github.com/servo/servo#prerequisites


Good to know.


Does anyone know what is on the Y axis in the comparison graph? Is it load time in milliseconds? Are the green and red lines for Firefox and the others for Servo? "Servo" is written in every case there, but the green and red lines have a "gecko" prefix.


I believe the Y axis shows milliseconds. Lower is better. The colors change over page reloads, but the lines at the bottom are Firefox. It is currently faster, but the Servo lines are gaining ground.


Yeah, that graph is inscrutable.


We know. It's not really meant for public consumption, and we are trying to fit into the monitoring pipelines the Firefox team has already created. This should improve over time as we get better at using that infrastructure, and we'll make sure to post readable versions when we are trying to reach out to a wider audience.


Thanks for the general fix. Appreciated! In this case, just a few words of explanation in the blog post would have been sufficient ;)


It's really just an announcement of an announcement. Or charitably, a confirmation that they don't plan on missing the June release date.

I've been checking their mailing list and bug tracker and saw nary a peep about the alpha, so it's good to hear something.


Servo is using zero-copy parallel parsing of HTML and CSS, plus parallel rendering, right? Is it trying to reduce in-going data duplication? Any insiders (who actually understand these questions ;) who can comment on this?


> zero-copy parallel parsing of HTML,

Zero-copy yes; parallel, I don't know.

We do selector matching in parallel (parsing is probably serial?), layout in parallel, and offload almost all of the rendering work onto the GPU.

Not sure what you mean by "in-going data duplication". We try to be zero-copy when we can, and share as much as possible. Rust helps here: you are free to try to share things without worrying whether they will go out of scope too early, and the compiler will tell you if your guess was wrong.
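
For instance, a minimal sketch (not Servo code) of the compiler catching a bad guess:

    fn main() {
        let shared;
        {
            let buffer = String::from("incoming bytes");
            // error[E0597]: `buffer` does not live long enough
            shared = &buffer;
        }
        println!("{}", shared);
    }

rustc rejects this at compile time instead of letting `shared` dangle at runtime.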


Thanks!

"in-going data duplication" ummm... i guess i meant ;) duplication of the incoming data (network packets->buffers->resources like html-files etc) in the sense of trying to minimize the amount of data duplicated and moved around to be used by various code-parts.

Probably the most interesting parts are those areas where even Rust can't really "help" prevent that. Thanks!


.....

grins sheepishly

As Jack mentioned in another comment (https://news.ycombinator.com/item?id=11994607), a lot of our glue is slow just because we haven't paid attention to it. Some of this is duplication of the kind you specify.

So we do do a lot of extra copying. Especially once we switched to multiprocess -- there's a ton of data being passed between processes through a copy that really should use shared memory or something. It's pretty straightforward to fix in most cases, but we just haven't gotten to it. Like many other issues we have. Servo is mostly a testbed for the smaller components, so this is to some degree okay.

My favorite example of this is that we probably have the worst cache implementation ever. Before multiprocess, we still used threads and senders a lot. So switching to multiprocess just involved replacing certain threads with processes, and using IPC instead of regular sync::mpsc senders.
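
The swap looked roughly like this (an illustrative sketch, not our actual code; the IPC side uses the servo/ipc-channel crate's API):

    use std::sync::mpsc;

    fn main() {
        // Before: an ordinary in-process channel.
        let (tx, rx) = mpsc::channel::<Vec<u8>>();
        tx.send(vec![1, 2, 3]).unwrap();
        assert_eq!(rx.recv().unwrap(), vec![1, 2, 3]);

        // After: a near drop-in replacement that serializes each message
        // across the process boundary (servo/ipc-channel crate):
        // let (tx, rx) = ipc_channel::ipc::channel::<Vec<u8>>().unwrap();
    }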

One thing that got caught in the mix was our font cache. This cache loads fonts from disk and shares them with content threads until they're no longer needed. It was a simple cache, implemented as an atomically refcounted weak pointer stored in the cache, while strong refcounted pointers are handed out to content threads.
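
In the threaded world, that pattern is roughly the following (a sketch with hypothetical names, not the real font cache):

    use std::collections::HashMap;
    use std::sync::{Arc, Weak};

    // The cache holds Weak pointers, so a font is dropped once the last
    // content thread releases its strong Arc.
    struct FontCache {
        fonts: HashMap<String, Weak<Vec<u8>>>,
    }

    impl FontCache {
        fn get_or_load(&mut self, name: &str) -> Arc<Vec<u8>> {
            if let Some(font) = self.fonts.get(name).and_then(Weak::upgrade) {
                return font; // still alive: hand out another strong pointer
            }
            let font = Arc::new(load_font_from_disk(name));
            self.fonts.insert(name.to_owned(), Arc::downgrade(&font));
            font
        }
    }

    fn load_font_from_disk(name: &str) -> Vec<u8> {
        std::fs::read(name).unwrap_or_default() // stand-in for the real loader
    }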

In the process world, refcounted pointers are duplicated across IPC with their refcounts reset to one. So this involves copying the whole font across IPC. Inefficient in itself.

Also, on Linux, our IPC currently assumes /tmp is tmpfs (it isn't in many cases), and uses it for shared memory. So, our cache, to save us from loading fonts from disk, loads a font from disk, and each time a process wants it, stores the file to disk in /tmp and has that process load it from disk. Which is unequivocally worse than what we had :)

It's not hard to fix, though. But it's stuff like this that we still have. It doesn't have to do with Rust, just with priorities. My plan here is to first get a sane IPC shared memory implementation for Linux (to be used whenever we send gobs of data between processes), and build a better caching layer over it for the font cache. (If you like OS stuff and want to help, let me know!)

However, the individual components themselves are pretty great!


Thanks for the insights!

I did not mean to criticize Servo or Rust in any way... Any serious, non-tiny project will have areas like those you pointed out. I just think Servo (and Rust as its main 'tool') is a really great endeavor on the track to multi-process/parallel/concurrent (systems) programming in the larger scheme of things ;). Any 'issues' you trip over are for sure useful lessons... which can potentially even feed back into Rust (which was the plan all along, AFAIK ;). Cheers


Oh, I didn't take it as criticism. Was just slightly amused :)

Not sure the specific situations I mentioned can be fixed by Rust (or any language); those are basically design issues.


Parsing is serial for a single source, but multiple things (CSS, HTML, multiple HTML files) can be parsed at the same time. Also, the thread running JS (which drives HTML parsing) is separate from another page's JS thread. There can be many of these per process, unlike Firefox and (I believe) Chrome.

CSS matching and layout have fine-grained parallel algorithms built on top of work stealing queues. Rendering is handled totally on the GPU (as opposed to other browsers which just offload compositing).
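
Servo's work-stealing machinery is its own, but the rayon crate illustrates the same idea; a toy sketch (the node type here is made up):

    use rayon::prelude::*;

    struct Node {
        text: String,
        children: Vec<Node>,
    }

    fn layout(node: &Node) {
        let _width = node.text.len(); // stand-in for real layout work
        // Recurse in parallel; idle worker threads steal pending subtrees.
        node.children.par_iter().for_each(layout);
    }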


> June is also when the first Rust code ships in release Firefox!

Anyone know what Rust code is shipping in Firefox?

Source: https://docs.google.com/document/d/1JMOtVkRtb-s7auoQdnX810HG...



You can follow along with the process of shipping Rust code in Firefox here:

http://wiki.mozilla.org/Oxidation


In their blog post, what do they mean by "public domain list"?



So they achieved a 25% performance increase all from better parsing and a better algorithm for this[1] list? That's unexpected indeed. I would love to see a blog post with details on that.

[1] https://publicsuffix.org/list/public_suffix_list.dat


They changed the implementation from always iterating over this ~6000-entry array: https://github.com/fduraffourg/servo/blob/8bb853f64354b2cc1b... to a HashSet that is filled only once from a text file. The domain list is also more easily updated now, via a Python script.
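
Roughly this kind of change (illustrative names, not the actual Servo code):

    use std::collections::HashSet;

    // Parse the list once into a set; every lookup is then O(1) instead of
    // a scan over ~6000 entries.
    fn build_suffix_set(list: &str) -> HashSet<&str> {
        list.lines()
            .map(str::trim)
            .filter(|line| !line.is_empty() && !line.starts_with("//"))
            .collect()
    }

    fn is_public_suffix(suffixes: &HashSet<&str>, domain: &str) -> bool {
        suffixes.contains(domain)
    }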


Given that they know the list at compile time, I wonder if they could go even faster, e.g. by using https://github.com/sfackler/rust-phf to generate a perfect hash function over the set.
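
Something like this, assuming phf's macro API; the set is built at compile time, so there's no runtime fill step at all (entries abbreviated):

    use phf::phf_set;

    static PUBLIC_SUFFIXES: phf::Set<&'static str> = phf_set! {
        "com",
        "co.uk",
        "blogspot.com",
    };

    fn is_public_suffix(domain: &str) -> bool {
        PUBLIC_SUFFIXES.contains(domain)
    }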


Why are there actual services in that list? I see all the variations of blogspot.com for example, which is definitely not an eTLD. Are services providing subdomain registration supposed to talk to Mozilla and get themselves added to it? I don't see deviantart in there.


Services which allow their users to post custom HTML and JavaScript to their own subdomains (without filtering to exclude scripts) need to go on that list to prevent, e.g., evil.blogspot.com from stealing cookies that were set on innocent.blogspot.com.


Why is that the responsibility of the browser and not the website's owner?


Nothing profound, just historical reasons.

To really show the problem, contrast how "blogspot.com" is a top-level site, one level below a TLD, while "bbc.co.uk" is also a top-level site, one level below "co.uk". The naive "count one element" rule doesn't work, or all of "co.uk" would share cookies. And it turns out there just isn't much you can do other than have a huge table. Sure, we'd probably do it differently if we had it to do all over again, but we don't.
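
A toy illustration of why the naive rule fails:

    // Naive rule: keep only the last two dot-separated labels.
    fn naive_registrable_domain(host: &str) -> &str {
        match host.match_indices('.').rev().nth(1) {
            Some((idx, _)) => &host[idx + 1..],
            None => host,
        }
    }

    fn main() {
        // Looks right for blogspot.com...
        assert_eq!(naive_registrable_domain("evil.blogspot.com"), "blogspot.com");
        // ...but wrongly lumps every .co.uk site into one cookie scope:
        assert_eq!(naive_registrable_domain("www.bbc.co.uk"), "co.uk");
    }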


You misunderstood. I fully understand why a count-based approach cannot work. I don't understand why, should I want to create a service like blogspot, I would have to have my URL added in there.


You don't have to add it there; you can make it secure anyway. The public suffix list means that, should your security get messed up, the browser prevents this attack regardless.


I don't understand either; with user-generated subdomains, I thought it was common practice to use a completely different domain for all trusted activity.


Are they doing security testing as they go? I'm really looking forward to seeing how it fares against the typical "use after free" JavaScript errors that Pwn2Own always demos. I'm not particularly a fan of Rust, but I certainly like the ideas that they are trying to incorporate in it.

Great stuff. Looking forward to see how it evolves.


I enjoyed this 2014 blog post on how they avoid use-after-frees by design, by letting SpiderMonkey have responsibility for all DOM garbage collection:

https://blog.mozilla.org/research/2014/08/26/javascript-serv...


Ooh. Nice one, mate. Thanks.


Servo doesn't have its own JavaScript engine; it uses SpiderMonkey.

Use-after-free generally isn't possible in safe Rust code.


Really looking forward to trying out the browser.html tech preview! Especially since it seems to have native support for a vertical tab layout with autohide!


Totally irrelevant, but did anybody else notice '(John) Connor (Kate) Brewster'?


It's Connor MacLeod, not John Connor ;)


Is this the browser.html that is referenced? https://github.com/browserhtml/browserhtml

`Browser.html: an experimental browser UI for desktop.`


Yes, I believe it is, especially since it mentions Servo in the README.


Yes.


No. The blog post says to expect the announcement later this week.


I've edited the title to bring it more in line with the blog post.


I'm not a Mozilla fan at all, but the Rust and Servo team certainly don't fuck around.


Out of curiosity, why don't you like Mozilla?

In my opinion they're the only ones left pushing for an open and standards-based web. Certainly not Google, with their hard push for developers to build Chrome Apps instead of web apps, and obviously Apple doesn't push for anything open, ever.


I'm a Firefox and Thunderbird user. I also don't like Mozilla, but I'm full of respect for their work, and... there is literally nothing better...


Servo would be more compelling for use in commercial software if it had an MIT or Apache license. The MPL isn't as bad as the GPL, but it still isn't as generous to the developer as MIT or Apache. With the right license, Servo would be a slam dunk for replacing CEF in commercial software that prohibits reverse engineering or tampering.


The license isn't really up for negotiation, but I'm curious what you think there is to gain with more liberal licensing? All the competing engines are either equivalently or more restrictively licensed, no?



