> a new automated page load testing performance infrastructure that he has been testing. It compares Servo daily builds against Firefox for page load on a subset of the Alexa Top 1000 sites. Check it out!
I don't understand the graphs behind the link. Every color is labeled "servo" and "linux64". So which one is Firefox (Gecko), and which is Servo?
I periodically test it out (most recently yesterday, in fact) and on popular sites like espn.com and cnn.com it doesn't render them properly. Meaning, it's not that the padding is a little off; parts of the page simply don't render at all.
This is not a criticism of their efforts, which are impressive, just pointing out that it might not be ready to compare in terms of performance.
However, we have made an effort to implement the things we expect will have an effect on our performance first, to counteract this.
There are also sites included in our page load suite that Servo renders quite well, and should be a fair comparison.
Another reason benchmarks are hard is that many of Servo's individual pieces are blazing fast, and we do many tasks in parallel, but end-to-end performance still needs work and tuning, which is why our initial page load numbers are below Firefox. This is in large part due to the community concentrating on those hard parts and not spending any time on things like the network stack or disk caching.
A key point is that there is so much low-hanging fruit that it is falling off the tree by accident (as seen in the suffix list PR). New contributors can make a huge impact here with minimal effort.
(Edited for clarity).
I'd like to get into Rust development and maybe help out with Servo (because it seems like an awesome project), but I have very limited experience with C/C++ (Arduino and openFrameworks).
What is a good way/place to get comfortable with Rust?
(I am using Windows, can you even develop with Rust on Windows?)
It's very nice, and the people on IRC are extremely helpful.
Windows is a supported platform; MSI installers are available at https://www.rust-lang.org/
I've been checking their mailing list and bug tracker and saw nary a peep about the alpha, so it's good to hear something.
zero copy yes, parallel I don't know
We do selector matching in parallel (parsing is probably serial?), layout in parallel, and offload almost all of the rendering work onto the GPU.
Not sure what you mean by "in-going data duplication". We try to be zero-copy when we can, and share as much as possible. Rust helps here; because you are free to try and share things without worrying if they will go out of scope too early, and the compiler will tell you if your guess was wrong.
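As a concrete (if toy) illustration of that zero-copy sharing, here is a sketch in safe Rust: one buffer behind an `Arc`, read by several threads, with no byte-level copies and no way for the buffer to be freed while a reader still holds it. The names here are made up for illustration, not Servo code:

```rust
use std::sync::Arc;
use std::thread;

// Share one buffer across threads without copying it; the Arc
// refcount guarantees the data outlives every reader, and the
// compiler rejects any attempt to share it unsafely.
fn sum_across_threads(data: Arc<Vec<u8>>, workers: usize) -> usize {
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let shared = Arc::clone(&data); // refcount bump, no byte copy
            thread::spawn(move || shared.iter().map(|&b| b as usize).sum::<usize>())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let resource = Arc::new(vec![1u8; 1000]); // "loaded" once
    let total = sum_across_threads(Arc::clone(&resource), 4);
    assert_eq!(total, 4000); // all 4 threads read the same single copy
    println!("one allocation, shared by 4 threads");
}
```

If you guess wrong about lifetimes (say, by trying to hand out a plain reference that outlives the buffer), the program simply doesn't compile.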
"in-going data duplication" ummm... i guess i meant ;) duplication of the incoming data (network packets->buffers->resources like html-files etc) in the sense of trying to minimize the amount of data duplicated and moved around to be used by various code-parts.
Probably the most interesting parts are the areas where even Rust can't really "help" prevent that.
As Jack mentioned in another comment (https://news.ycombinator.com/item?id=11994607), a lot of our glue is slow just because we haven't paid attention to it. Some of this is duplication of the kind you specify.
So we do do a lot of extra copying. Especially once we switched to multiprocess -- there's a ton of data being passed between processes through a copy that really should use shared memory or something. It's pretty straightforward to fix in most cases, but we just haven't gotten to it. Like many other issues we have. Servo is mostly a testbed for the smaller components, so this is to some degree okay.
My favorite example of this is that we probably contain the worst cache implementation ever. Before multiprocess, we still used threads and senders a lot. So switching to multiprocess just involved replacing certain threads with processes, and using IPC instead of regular sync::mpsc senders.
One thing that got caught in the mix was our font cache. This cache loads fonts from disk and shares them with content threads till they aren't needed anymore. It was a simple cache, implemented as an atomic refcounted weak pointer being stored in the cache while strong refcounted pointers are being handed out to content threads.
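A single-process sketch of that kind of cache, with hypothetical `FontCache`/`FontData` names (not Servo's actual code), might look like:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Weak};

// Hypothetical stand-ins for real font data and keys.
struct FontData(Vec<u8>);

struct FontCache {
    // The cache holds only Weak pointers, so a font is freed
    // once the last content thread drops its strong Arc.
    fonts: HashMap<String, Weak<FontData>>,
}

impl FontCache {
    fn new() -> Self {
        FontCache { fonts: HashMap::new() }
    }

    fn get(&mut self, key: &str) -> Arc<FontData> {
        if let Some(weak) = self.fonts.get(key) {
            if let Some(strong) = weak.upgrade() {
                return strong; // still alive somewhere: no disk load
            }
        }
        // Pretend to load the font from disk.
        let loaded = Arc::new(FontData(vec![0u8; 4]));
        self.fonts.insert(key.to_string(), Arc::downgrade(&loaded));
        loaded
    }
}

fn main() {
    let mut cache = FontCache::new();
    let a = cache.get("DejaVu Sans");
    let b = cache.get("DejaVu Sans");
    assert!(Arc::ptr_eq(&a, &b)); // second lookup shared, not reloaded
    drop(a);
    drop(b);
    // All strong refs gone: the weak entry no longer upgrades.
    assert!(cache.fonts["DejaVu Sans"].upgrade().is_none());
    println!("weak-pointer cache behaves as described");
}
```

Within one process this is cheap and correct; the trouble described below starts when those refcounted pointers cross a process boundary.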
In the process world, refcounted pointers are duplicated across IPC with their refcounts reset to one. So this involves copying the whole font across IPC. Inefficient in itself.
Also, on Linux, our IPC currently assumes /tmp is tmpfs (in many cases it isn't), and uses it for shared memory. So our cache, which exists to save us from loading fonts from disk, loads a font from disk and then, each time a process wants it, stores the file to disk in /tmp and has that process load it from disk. Which is unequivocally worse than what we had :)
It's not hard to fix, though. But it's stuff like this that we still have. It doesn't have to do with Rust; it just has to do with priorities. My plan here is to first get a sane IPC shared memory implementation for Linux (to be used whenever we send gobs of data across a process boundary), and build a better caching layer over it for the font cache. (If you like OS stuff and want to help, let me know!)
However, the individual components themselves are pretty great!
I did not mean to criticize Servo or Rust in any way... Any serious, non-tiny project will have areas like the ones you pointed out. I just think Servo (and Rust as its main 'tool') is a really great endeavor on the track to multiprocess/parallel/concurrent (systems) programming in the larger scheme of things ;). Any 'issues' you guys trip over are surely nice lessons... which can even potentially feed back into Rust (which was the plan all along, afaik ;)
Not sure the specific situations I mention can be fixed by Rust (or any language), those are design issues basically.
CSS matching and layout have fine-grained parallel algorithms built on top of work stealing queues. Rendering is handled totally on the GPU (as opposed to other browsers which just offload compositing).
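For a rough feel of that fine-grained parallelism, here is a toy sketch. Note that Servo's real implementation uses per-thread work-stealing deques; this simplification just shares one atomic index over the node list, which gives similar dynamic load balancing without the deque machinery:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

// A toy "process these nodes in parallel" pass. Each worker grabs
// the next unclaimed node via an atomic counter, so fast workers
// naturally pick up more nodes (crude dynamic load balancing).
fn parallel_style(nodes: &[u32], workers: usize) -> u64 {
    let next = AtomicUsize::new(0);
    let total = AtomicUsize::new(0);
    thread::scope(|s| {
        for _ in 0..workers {
            s.spawn(|| loop {
                let i = next.fetch_add(1, Ordering::Relaxed);
                if i >= nodes.len() {
                    break;
                }
                // Stand-in for matching selectors against one node.
                total.fetch_add(nodes[i] as usize, Ordering::Relaxed);
            });
        }
    });
    total.load(Ordering::Relaxed) as u64
}

fn main() {
    let nodes: Vec<u32> = (1..=100).collect();
    assert_eq!(parallel_style(&nodes, 4), 5050);
    println!("styled {} nodes across 4 workers", nodes.len());
}
```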
Anyone know what Rust code is shipping in Firefox?
To really show the problem, you have to do something like contrast how "blogspot.com" is a top-level site, one level below a TLD, but so is bbc.co.uk, one level below "co.uk". The naive "count one element" rule doesn't work, or all of "co.uk" would share cookies. And it turns out that there just isn't much you can do other than have a huge table. Sure, we'd probably do it differently if we had it to do all over again, but we don't.
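A toy sketch of why the table is needed, using a two-entry stand-in for the real Public Suffix List (which has thousands of entries):

```rust
use std::collections::HashSet;

// Find the registrable domain: the longest matching public suffix
// plus one more label. Returns None if the host *is* a suffix.
fn registrable_domain(host: &str, suffixes: &HashSet<&str>) -> Option<String> {
    let labels: Vec<&str> = host.split('.').collect();
    for i in 0..labels.len() {
        let suffix = labels[i..].join(".");
        if suffixes.contains(suffix.as_str()) {
            if i == 0 {
                return None; // host itself is a public suffix: no cookies here
            }
            return Some(labels[i - 1..].join("."));
        }
    }
    None
}

fn main() {
    // Toy table; the real one comes from publicsuffix.org.
    let suffixes: HashSet<&str> = ["com", "co.uk"].into_iter().collect();
    // blogspot.com sits one label below "com"...
    assert_eq!(registrable_domain("foo.blogspot.com", &suffixes).as_deref(),
               Some("blogspot.com"));
    // ...but bbc.co.uk sits one label below "co.uk". A naive
    // "count one element" rule would have treated "co.uk" as the
    // site and shared cookies across all of it.
    assert_eq!(registrable_domain("www.bbc.co.uk", &suffixes).as_deref(),
               Some("bbc.co.uk"));
    assert_eq!(registrable_domain("co.uk", &suffixes), None);
    println!("suffix-table matching works");
}
```

The logic itself is trivial; the huge table is the only way to know that "co.uk" is a suffix and "blogspot.com" is a site.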
Great stuff. Looking forward to seeing how it evolves.
Use after free generally isn't possible in Rust in safe code.
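A minimal illustration (toy code, not from any project) of why: ownership moves are tracked at compile time, so the classic use-after-free never makes it past the compiler.

```rust
// Ownership of the Vec moves into this function, and the Vec is
// dropped (freed) when the function returns.
fn take_ownership(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let data = vec![1, 2, 3];
    let n = take_ownership(data); // ownership moves here
    // println!("{:?}", data);    // compile error: use of moved value
    // The line above is rejected at build time, so the "freed then
    // used" bug cannot reach runtime in safe Rust.
    assert_eq!(n, 3);
    println!("borrow checker prevents the dangling access");
}
```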
`Browser.html: an experimental browser UI for desktop.`
In my opinion they're the only ones left pushing for an open & standards-based web. Certainly not Google with their hard push to make developers make Chrome Apps instead of web apps and obviously Apple doesn't push open anything, ever.