> a new automated page load testing performance infrastructure that he has been testing. It compares Servo daily builds against Firefox for page load on a subset of the Alexa Top 1000 sites. Check it out!
I don't understand the graphs under the link. All the colors are labeled "servo" and "linux64". So which lines are Firefox (gecko), and which are Servo?
Not to be negative, but I don't know if the benchmarks tell you much given the fact that Servo still has a ways to go before it renders sites properly.
I periodically test it out (most recently yesterday, in fact) and on popular sites like espn.com and cnn.com it doesn't render them properly. Meaning, it's not that the padding is a little off; parts of the page simply don't render at all.
This is not a criticism of their efforts, which are impressive, just pointing out that it might not be ready to compare in terms of performance.
You are generally correct, and this is why we don't often publish numbers.
However, to counteract this, we have made an effort to first implement the things we expect will have the biggest effect on our performance.
There are also sites included in our page load suite that Servo renders quite well, and should be a fair comparison.
Another reason benchmarks are hard is that many of Servo's individual pieces are blazing fast, and we do many tasks in parallel, but end-to-end performance still needs work and tuning, which is why our initial page load numbers are behind Firefox's. This is in large part due to the community concentrating on those hard parts and not spending any time on things like the network stack or disk caching.
A key point is that there is so much low hanging fruit that it is falling off the tree by accident (as seen in the suffix list PR). New contributors can make a huge impact here with minimal effort.
Hey, maybe this isn't the right place, but I wouldn't know where else to ask this question:
I'd like to get into Rust development and maybe help out with Servo (because it seems like an awesome project), but I have very limited experience with C/C++ (Arduino and openFrameworks).
What is a good way/place to get comfortable with Rust?
(I am using Windows; can you even develop in Rust on Windows?)
For those reading, just remember to check out #rust on irc.mozilla.org - it's far more active than the freenode channel. Here's a list of the current rust-related channels on Mozilla's IRC server: https://www.rust-lang.org/community.html#irc-channels
You can also ask in IRC and we'll try to think of some for you. We try to keep posting new ones, but we get half a dozen new contributors per week, so it can be hard to keep up with the awesome demand.
Does anyone know what is on the Y axis in the comparison graph? Is it load time in milliseconds? Are the green and red lines Firefox and the others Servo? "servo" is written in every case there, but green and red have a "gecko" prefix.
I believe the Y axis shows milliseconds. Lower is better. The colors change over page reloads, but the lines at the bottom are Firefox. It is faster currently, but the Servo lines are gaining ground.
We know. It's not really meant for public consumption, and we are trying to fit into the monitoring pipelines the Firefox team has already created. This should improve over time as we get better at using that infrastructure, and we'll make sure to post readable versions when we are trying to reach out to a wider audience.
Servo is using zero-copy parallel parsing of HTML and CSS, and parallel rendering, right? Is it trying to reduce in-going data duplication?
Any insiders (actually understanding these questions ;) who can comment on this?
We do selector matching in parallel (parsing is probably serial?), layout in parallel, and offload almost all of the rendering work onto the GPU.
Not sure what you mean by "in-going data duplication". We try to be zero-copy when we can, and share as much as possible. Rust helps here: you are free to try to share things without worrying about whether they will go out of scope too early, and the compiler will tell you if your guess was wrong.
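To make that concrete, here's a toy sketch (not Servo code) of sharing one buffer across threads without copying it, and of the kind of mistake the compiler catches:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // One heap allocation for the stylesheet text; cloning the Arc only bumps
    // a reference count, it never copies the underlying bytes.
    let stylesheet = Arc::new(String::from("body { margin: 0 }"));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let shared = Arc::clone(&stylesheet);
            thread::spawn(move || {
                // Every worker reads the same buffer; nothing is duplicated.
                println!("worker {} sees {} bytes", i, shared.len());
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // If we instead tried to move a plain `&stylesheet` borrow into the
    // spawned threads, the borrow checker would reject it, because the
    // threads could outlive the data. That is the "compiler tells you your
    // guess was wrong" part.
}
```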
"in-going data duplication" ummm... i guess i meant ;) duplication of the incoming data (network packets->buffers->resources like html-files etc) in the sense of trying to minimize the amount of data duplicated and moved around to be used by various code-parts.
Probably the most interesting parts are those areas where even Rust can't really "help" prevent that.
thanks
As Jack mentioned in another comment (https://news.ycombinator.com/item?id=11994607), a lot of our glue is slow just because we haven't paid attention to it. Some of this is duplication of the kind you specify.
So we do do a lot of extra copying. Especially once we switched to multiprocess -- there's a ton of data being passed between processes through a copy that really should use shared memory or something. It's pretty straightforward to fix in most cases, but we just haven't gotten to it. Like many other issues we have. Servo is mostly a testbed for the smaller components, so this is to some degree okay.
My favorite example of this is that we probably have the worst cache implementation ever. Before multiprocess, we still used threads and senders a lot, so switching to multiprocess just involved replacing certain threads with processes and using IPC instead of regular sync::mpsc senders.
One thing that got caught in the mix was our font cache. This cache loads fonts from disk and shares them with content threads till they aren't needed anymore. It was a simple cache, implemented by storing an atomically refcounted weak pointer in the cache while strong refcounted pointers are handed out to content threads.
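In sketch form, that single-process design was roughly this (hypothetical types, not the actual Servo font cache):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Weak};

// Hypothetical stand-in for loaded font bytes; not the actual Servo type.
struct FontData(Vec<u8>);

#[derive(Default)]
struct FontCache {
    entries: HashMap<String, Weak<FontData>>,
}

impl FontCache {
    // The caller (a content thread) gets a strong Arc; the cache itself keeps
    // only a Weak pointer, so the font is dropped once nobody is using it.
    fn get(&mut self, name: &str) -> Arc<FontData> {
        if let Some(weak) = self.entries.get(name) {
            if let Some(strong) = weak.upgrade() {
                return strong;
            }
        }
        let loaded = Arc::new(FontData(std::fs::read(name).unwrap_or_default()));
        self.entries.insert(name.to_string(), Arc::downgrade(&loaded));
        loaded
    }
}
```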
In the process world, refcounted pointers are duplicated across IPC with their refcounts reset to one. So this involves copying the whole font across IPC. Inefficient in itself.
Also, on Linux, our IPC currently assumes /tmp is tmpfs (it isn't in many cases), and uses it for shared memory. So, our cache, to save us from loading fonts from disk, loads a font from disk, and each time a process wants it, stores the file to disk in /tmp and has that process load it from disk. Which is unequivocally worse than what we had :)
It's not hard to fix, though. But it's stuff like this that we still have. It doesn't have to do with Rust, it just has to do with priorities. My plan here is to first get a sane IPC shared memory implementation for Linux (to be used whenever we send gobs of data across processes), and build a better caching layer over it for the font cache. (If you like OS stuff and want to help, let me know!)
However, the individual components themselves are pretty great!
I did not mean to criticize Servo or Rust in any way... Any serious, non-tiny project will have areas like those you pointed out. I just think Servo (and Rust as its main 'tool') is a really great endeavor on the track to multi-process/parallel/concurrent (systems) programming in the larger scheme of things ;). Any 'issues' you guys trip over are for sure nice lessons... which can even potentially feed back into Rust (which was the plan all along, afaik ;).
Cheers
Parsing is serial for a single source, but multiple things (CSS, HTML, multiple HTML files) can be parsed at the same time. Also, the thread running JS (which drives HTML parsing) is separate from other pages' JS threads. There can be many of these per process, unlike in Firefox and (I believe) Chrome.
CSS matching and layout have fine-grained parallel algorithms built on top of work stealing queues. Rendering is handled totally on the GPU (as opposed to other browsers which just offload compositing).
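As a rough illustration of the shape of that parallelism, here is a toy sketch using rayon's work-stealing pool; Servo's real matching and layout code is its own, far more involved implementation:

```rust
use rayon::prelude::*;

// Toy stand-in for a DOM node; this only shows the "fan out over children on
// a work-stealing pool" shape, not real selector matching.
struct Node {
    tag: String,
    children: Vec<Node>,
}

fn match_styles(node: &Node) {
    // Pretend this line is the per-node selector matching work.
    let _matches_div = node.tag == "div";

    // Children are visited in parallel; the work-stealing scheduler balances
    // an irregular tree across worker threads.
    node.children.par_iter().for_each(match_styles);
}

fn main() {
    let tree = Node {
        tag: "html".into(),
        children: vec![
            Node { tag: "div".into(), children: vec![] },
            Node { tag: "p".into(), children: vec![] },
        ],
    };
    match_styles(&tree);
}
```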
So they achieved a 25% performance increase all from better parsing and a better algorithm for this[1] list? That's unexpected indeed. I would love to see a blog post with details on that.
They changed the implementation from always iterating over this 6,000-entry array: https://github.com/fduraffourg/servo/blob/8bb853f64354b2cc1b...
to a HashSet that is filled only once, from a text file.
The domain list is also more easily updated now, via a Python script.
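A minimal sketch of that fill-once pattern, assuming the list is embedded with include_str! and initialized lazily via once_cell (the actual Servo change may be structured differently):

```rust
use once_cell::sync::Lazy;
use std::collections::HashSet;

// Hypothetical embedded copy of the public suffix list, one suffix per line;
// the Python script mentioned above would regenerate this file.
static RAW_LIST: &str = include_str!("public_suffix_list.txt");

// Built exactly once, on first use, instead of scanning a 6,000-entry array
// for every lookup.
static SUFFIXES: Lazy<HashSet<&'static str>> = Lazy::new(|| {
    RAW_LIST
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with("//"))
        .collect()
});

fn is_public_suffix(domain: &str) -> bool {
    SUFFIXES.contains(domain)
}
```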
Given that they know the list at compile time I wonder if they could do faster e.g. by using https://github.com/sfackler/rust-phf to generate a perfect hash function over the set.
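With the phf crate (using its macros feature), the compile-time set would look roughly like this; whether it actually beats a lazily built HashSet here would need measuring:

```rust
use phf::phf_set;

// A tiny illustrative subset; the real list has thousands of entries and
// would be emitted by a build script from the same text file.
static SUFFIXES: phf::Set<&'static str> = phf_set! {
    "com",
    "co.uk",
    "blogspot.com",
};

fn is_public_suffix(domain: &str) -> bool {
    // Perfect-hash lookup: no runtime set construction at all.
    SUFFIXES.contains(domain)
}
```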
Why are there actual services in that list? I see all the variations of blogspot.com for example, which is definitely not an eTLD. Are services providing subdomain registration supposed to talk to Mozilla and get themselves added to it? I don't see deviantart in there.
Services which allow their users to post custom HTML and JavaScript to their own subdomains (without filtering to exclude scripts) need to go on that list to prevent, e.g., evil.blogspot.com from stealing cookies that were set on innocent.blogspot.com.
To really show the problem, you have to contrast how "blogspot.com" is a top-level site, one level below a TLD, while bbc.co.uk is also a top-level site, one level below "co.uk". The naive "count one label" approach doesn't work, or all of "co.uk" would share cookies. And it turns out there just isn't much you can do other than have a huge table. Sure, we'd probably do it differently if we had it to do all over again, but we don't.
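Concretely, the lookup has to be table-driven: find the longest suffix present in the list, then keep exactly one more label. A toy sketch, ignoring the real list's wildcard and exception rules:

```rust
use std::collections::HashSet;

// Returns the "registrable domain": the longest public suffix found in the
// table plus exactly one more label. Cookies should not be settable for any
// broader domain than this.
fn registrable_domain(host: &str, suffixes: &HashSet<&str>) -> Option<String> {
    let labels: Vec<&str> = host.split('.').collect();
    // The earliest starting index whose tail appears in the table marks the
    // longest matching public suffix.
    let suffix_start = (0..labels.len())
        .find(|&i| suffixes.contains(labels[i..].join(".").as_str()))?;
    if suffix_start == 0 {
        // The host *is* a public suffix (e.g. "co.uk" itself): no owner.
        return None;
    }
    Some(labels[suffix_start - 1..].join("."))
}

fn main() {
    let suffixes: HashSet<&str> = ["com", "uk", "co.uk", "blogspot.com"]
        .into_iter()
        .collect();
    // Different depths, same answer: each is its own top-level site.
    assert_eq!(
        registrable_domain("www.bbc.co.uk", &suffixes).as_deref(),
        Some("bbc.co.uk")
    );
    assert_eq!(
        registrable_domain("evil.blogspot.com", &suffixes).as_deref(),
        Some("evil.blogspot.com")
    );
}
```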
You misunderstood. I fully understand why a count based approach cannot work. I don't understand why, should I want to create a service like blogspot, I would have to have my URL added in there.
You don't have to add it there; you can make it secure anyway. The public suffix list just means that, should your security get messed up, the browser prevents this anyway.
I don't understand either, with user generated subdomains I thought it was common practice to use a completely different domain for all trusted activity.
Are they doing security testing as they go? I'm really looking forward to seeing how it fares against the typical "use after free" JavaScript exploits that Pwn2Own always demos. I'm not particularly a fan of Rust, but I certainly like the ideas that they are trying to incorporate in it.
Great stuff. Looking forward to seeing how it evolves.
Really looking forward to trying out the browser.html tech preview! Especially since it seems to have native support for a vertical tab layout with autohide!
In my opinion they're the only ones left pushing for an open and standards-based web. Certainly not Google, with their hard push to get developers to build Chrome Apps instead of web apps, and Apple obviously doesn't push open anything, ever.
Servo would be more compelling for use with commercial software if it had an MIT or Apache license. The MPL isn't as bad as the GPL, but it still isn't as generous to the developer as MIT or Apache. With the right license, Servo would be a slam dunk for replacing CEF in commercial software that prohibits reversing or tampering.
The license isn't really up for negotiation, but I'm curious what you think there is to gain with more liberal licensing? All the competing engines are either equivalently or more restrictively licensed, no?