Mozilla Webrender: rendering any webpage at several hundred FPS (air.mozilla.org)
646 points by bpierre on Feb 25, 2016 | 222 comments

Video from the beginning: https://air.mozilla.org/bay-area-rust-meetup-february-2016/#...

"We've removed painting from the bottleneck in Servo"

More info about WebRender: https://github.com/servo/webrender/wiki

If you want to try it at home, you "just" need to build Servo: http://github.com/servo/servo (git clone, then ./mach build -r) and run Servo with WebRender:

./mach run -r -- -w http://wikipedia.org

(edit: removed the bit about chrome's gpu rasterization - see pcwalton's comment below)

Chrome's approach is fundamentally different: it's the main difference I was describing at the beginning of the talk, by contrasting Chrome's immediate mode approach with WebRender's retained mode. (By the way, Safari, IE 9 and up, and Firefox on Vista and up all use immediate mode GPU painting backends like Chrome does.)
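To make the immediate/retained distinction concrete, here is a toy sketch of my own (not WebRender's actual API): in a retained-mode design, layout produces a display list once, and the renderer can replay it every frame without re-running any painting code. All names below are invented for illustration.

```rust
// Hypothetical retained-mode renderer sketch. In immediate mode, painting
// commands would be re-issued from scratch every frame; here the display
// list is kept alive and replayed.

#[derive(Debug, Clone, PartialEq)]
enum DisplayItem {
    Rect { x: f32, y: f32, w: f32, h: f32, color: u32 },
    Text { x: f32, y: f32, glyphs: Vec<u16> },
}

struct Renderer {
    // Retained: the display list survives across frames.
    display_list: Vec<DisplayItem>,
}

impl Renderer {
    fn set_display_list(&mut self, list: Vec<DisplayItem>) {
        self.display_list = list; // rebuilt only when the page changes
    }

    fn render_frame(&self) -> usize {
        // Replay the retained list; a real renderer would batch these
        // into GPU draw calls. Here we just count the items drawn.
        self.display_list.len()
    }
}

fn main() {
    let mut r = Renderer { display_list: Vec::new() };
    r.set_display_list(vec![
        DisplayItem::Rect { x: 0.0, y: 0.0, w: 100.0, h: 50.0, color: 0xFF0000FF },
        DisplayItem::Text { x: 10.0, y: 20.0, glyphs: vec![42, 43] },
    ]);
    // Many frames can be rendered from the same retained display list.
    assert_eq!(r.render_frame(), 2);
    assert_eq!(r.render_frame(), 2);
    println!("ok");
}
```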

Incidentally, I don't deserve the lion's share of the credit here. Veteran game developer Glenn Watson is really the brains behind the project—my role has mostly consisted of picking off a few scattered features and optimizations to implement.

Great work guys. Will help push the web forward!

Vulkan/Android coming anytime soon?

Android is working already. As for Vulkan, I answer that at the end of the talk, but the basic story is that we're Vulkan-ready architecturally, but nobody has done the port yet.

Awesome. What's the perf like on Android? Is there an apk to try?

Have you been able to make use of hardware accelerated path rendering? Like the stuff Mark Kilgard is working on?

(I mean, "painting on the GPU" doesn't say how you use the GPU, and I've been wondering if there's anything new happening in path rendering land after seeing a video on Kilgard's work years ago)

We actually aren't doing any path rendering yet; CSS as used on the Web is all boxes and triangles, aside from glyphs. One of the neat things about WebRender is the observation that triangles are pretty much all you need for CSS.

A future extension to support SVG and glyphs on the GPU would be pretty interesting though!
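The "triangles are all you need" observation is easy to see for plain CSS boxes: an axis-aligned rectangle decomposes into exactly two triangles, which is all a GPU needs to fill it. A minimal illustration of my own (not Servo code):

```rust
// Decompose an axis-aligned CSS box into the two triangles a GPU fills.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Vertex { x: f32, y: f32 }

/// Split a rectangle at (x, y) with size (w, h) into two triangles,
/// returned as 6 vertices (3 per triangle).
fn rect_to_triangles(x: f32, y: f32, w: f32, h: f32) -> [Vertex; 6] {
    let (x1, y1) = (x + w, y + h);
    [
        // Triangle 1: top-left, top-right, bottom-left
        Vertex { x, y }, Vertex { x: x1, y }, Vertex { x, y: y1 },
        // Triangle 2: top-right, bottom-right, bottom-left
        Vertex { x: x1, y }, Vertex { x: x1, y: y1 }, Vertex { x, y: y1 },
    ]
}

fn main() {
    let tris = rect_to_triangles(0.0, 0.0, 100.0, 50.0);
    assert_eq!(tris.len(), 6);
    // Both triangles share the diagonal edge from (100, 0) to (0, 50).
    assert_eq!(tris[1], tris[3]);
    assert_eq!(tris[2], tris[5]);
    println!("ok");
}
```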

I just got through the part of the talk where you mentioned all of this :)

BTW, are you aware of the Halide language? Might be useful for writing those CPU fallbacks.


I'm actually very glad to hear that. I've been working on (read: designing, not yet implementing) a GUI framework for Rust, and was hoping to get away without an external 2D rendering library like cairo. And if CSS can be reduced to those, my comparatively limited system (no overlapping areas, only rectangular widgets, only inherited data is position and base color) should be able to handle it.

The drawing abstraction is actually my current big hurdle, and I hadn't thought of looking at how CSS is rendered yet. So, thanks for that hint!

On Linux at least, NVIDIA's drivers do support some acceleration for glyph rendering and other 2D operations through the X RENDER extension. It's not really going to help with SVG rendering, but it's great for boxes, triangles, and glyphs, as well as blitting and compositing, which covers most of what browsers do with HTML and CSS.

XRENDER basically provides a subset of OpenGL which is insufficient to describe all of CSS and not particularly tailored to modern GPUs. The extra features it has, like trapezoids and blend modes, are easy to implement in regular old OpenGL. Additionally, it's effectively Linux-only and has spotty driver support. I don't see any reason to use it.

It seems like a waste to tie new software to X when Wayland is just around the corner.

It's been 7 years since OpenVG and not much has changed. I keep hoping one of these browser rendering technologies would emerge as an open framework for path rendering on mobile, but no dice so far. The benefits are massive — an order of magnitude improvement compared to CoreGraphics in my app — but the big players seem to be making their own technology and keeping it private.

pcwalton, are you able to share the rotating 3D test you used in the talk? I'm building Servo now and would love to have a play with it.

Re the building, if you're on mac with homebrew, it should be as easy as:

brew install --HEAD servo/servo/servo


servo -w http://wikipedia.org

Keep in mind though, servo has a long way to go.

What about for using it as a headless browser with javascript and Rust manipulation of the DOM?

We have an HTTP proxy that would like to analyze HTML responses and rewrite parts of the page.

Is servo ready for that or should we use something like PhantomJS?


Side note if this is something you're interested in doing, you might want to check out the FastBoot project from the Ember camp. It's built on top of non-Ember-specific libraries that let you do server-side DOM manipulation without needing something like PhantomJS, just node.

I'm not involved with the project...just an admirer. I do know that it has a headless option in the command line invocation.

Is it a compiled bottle? I remember the last time I tried Rust I had to wait maybe half an hour to compile everything. I think that was back when Servo was compiling their own version of Rust :)

Servo now downloads a Rust snapshot. But you still have to compile everything else needed to build Servo.

Getting this error:

==> python -c import setuptools... --no-user-cfg install --prefix=/private/tmp/servo20160225-7643-fode0q/_vedir --single-version-externally-managed --record=installed.txt Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named setuptools


brew update

brew uninstall --force python

brew install python

solved it.

This is great.

Mozilla is focusing on its core competency. No mobile OS, no identity built in the browser, no distractions—no, it's just about the browser and how to make it better. What this guy showed is just impressive; well done, and please more of such achievements.

> No mobile OS, no identity built in the browser, no distractions—it's just about the browser and how to make it better

That's a rather narrow view of what the open Internet should be. Declaring failure with FirefoxOS and Persona was a blow for the Internet at large: it solidifies the status quo, which is growing more closed and proprietary by the day.

Resources are understandably finite and Mozilla can't do everything at once, but there is a still lot of good they can do adjacent to the strict confines of the browser.

What if I told you that we are releasing a successor to Personas but that accomplishes a lot more and in a different way?

You can have accounts at various different communities and have your experience instantly personalized when you arrive, without the sites knowing anything about you or being able to track you across sites. When you are ready, you can use OAuth to build up your profiles in different communities and use them to authenticate with each other. Finally, you have full control over which of your contacts in a community can see when you join another community -- that whole thing of "Your Facebook friend Frank Smith is on Instagram as Bunnyman123". With our protocol and reference implementation, as soon as you arrive on a site you see all the friends in your communities that also use that site -- if they decided to share this -- and your social graph is instantly connected everywhere you go. When a friend joins chess.com you'll get a notification that they joined, if they want you to know. Maybe they wanted to check it out anonymously.

Truly decentralized identity and contacts that works seamlessly across sites.

If you liked what you just read, tell me -- how can we best position it for those people who were sad to see Personas not take off?

> If you liked what you just read, tell me

No, that sounds terrible, sorry. Persona did one thing: authentication. I don't want to drag in a social graph along with identity auth, but YMMV.

> I don't want to drag in a social graph along with identity auth, but YMMV.

As I read it, that's completely optional, and I think since that doesn't exist yet (right?), having that option sounds great. I'm probably naive to dream of diaspora etc. and Facebook getting along, but even just smaller sites playing nice could make this very useful.

Right, it's completely optional.

Jallmann, I am not sure who you think will be doing the dragging. The community? The developer of the app they install on their local copy of the platform? The friend who wants you to know that they're on there? You?

The app developer benefits from this feature for free, just by integrating with the platform's decentralized identity system. The app or community host can turn it off, or simply not implement it (e.g. ignore your social graph). The friend could simply not grant you the permission to see them. Or you could sign up and see your friends (who wanted you to see them) but not let your friends see you. Finally, you could turn the social experience off entirely and see the containing site as if you have no friends. So it's totally optional. But there are so many stakeholders in a decentralized system that when you say "I don't want to drag my friends along", they have a choice too.

I know it's optional.

An extraneous social overlay simply isn't on my list of desirable properties for an auth system (aside from WoT type mechanisms, which is clearly not the sell here). I'm thinking SSL client certs, you're thinking Facebook Connect. Different strokes.

Anyway talking about your social login system is getting quite off topic.

> how can we best position it for those people who were sad to see Personas not take off

Can you elaborate on what "positioning" means? I kind of read it as how it's explained or "sold", but I'm not sure. And I really like all I read here, and though I'm not sure I could helpfully answer it I would love to at least understand the question fully :)

Oh, can you point to something here? Some vision document, brainstorming, back-of-napkin spec? Anything?

I'm not sure that adding another OS was the thing to do to fight the ongoing trend towards a proprietary internet.

A better browser, including a better mobile browser is much more useful than Firefox OS every could be, IMO. In fact, this is part of what makes pure OSS Android a practical environment to work in.

I am very glad that they are continuing to put a lot of effort into Firefox on mobile.

There is a lot to be said for having choice. Just because the dominant browser today is OSS (Chrome) doesn't mean Mozilla should declare victory and pack its bags with Firefox. Likewise, Android is hardly the ideal torch bearer for a Free mobile OS -- while AOSP is open-source, the development process is certainly not open, apps are not open, and that's ignoring other problems of abysmal performance, locked down hardware, and vendor crapware.

The reason that Firefox had the impact it did was because the browser is the gateway to the Internet. Firefox came of age just as the Internet was maturing as a platform, and because of that, Mozilla was able to play an important role in influencing the semblance of openness that we do have on the Internet today.

There was a similar platform shift to mobile, and Mozilla totally missed the boat. Now, please tell me, how will a FOSS (mobile) browser help open up the greater mobile ecosystem? The OS is the gateway to mobile, not the browser, and we need a better gatekeeper.

> There is a lot to be said for having choice. Just because the dominant browser today is OSS (Chrome) doesn't mean Mozilla should declare victory and pack its bags with Firefox.

Just since this is repeated so often: Chrome is not and never has been open source software in any shape or form. Chromium is OSS, but has only a tiny fraction of Chrome's market share and AFAIK nobody really knows what the differences between those two are (outside of the obvious: Flash player, pdf reader, etc.). Btw, Firefox is still the only major browser that's OSS, neither Safari nor Edge/IE are open source.

This isn't academic nitpicking either. Mozilla had to build a PDF reader from scratch (pdf.js), it couldn't just reuse what Chrome was using to display pdfs, since it wasn't open source. However everyone can now use pdf.js for the same task.

Note that Chrome's PDF reader was open sourced almost two years ago as pdfium - it was closed source prior to that because it was licensed from Foxit rather than written from scratch.

Chromium behaves so similarly to Chrome that it makes no difference to me to use one instead of the other, except for my feelings about the Google brand. It is not a terrible mystery what the differences are. If Chrome were really a proprietary product on the same order as Microsoft Windows, it would not be practically or legally possible to have Chromium at all.

Does Chromium include all the spyware features found in Chrome?

I know, but then claiming Chromium as the most popular browser would invite pedantry the other way around, "Hardly anybody uses Chromium, they use Chrome!"

And the analogy holds with AOSP versus the Android that is distributed with Google apps. In any case, comparing the development of Android to Chrom(ium), it is night and day in terms of openness.

Hyperbole does not lead to anything useful.

You might not consider Chrome to be open-source by your personal definition of the term but that's fine hair splitting: you can submit a patch to Chromium and some weeks later millions of Chrome users are running it; you're similarly free to fork chromium and make significant changes while still pulling in code from upstream. Yes, license nerds can argue about philosophical meanings but it's far from the closed-source world of IE/Trident, Opera, etc.

> Mozilla had to build a PDF reader from scratch (pdf.js), it couldn't just reuse what Chrome was using to display pdfs, since it wasn't open source

That's a single, separate component which, as comex pointed out, was licensed from a third-party vendor and nicely illustrates that Chrome is in fact open-source: the only reason it wasn't an option is because it wasn't part of the open source Chrome codebase.

You're also leaving out a key part of the pdf.js (and Shumway for Flash) story which was people at Mozilla trying to demonstrate that you could write complex renderers inside the JavaScript environment and sharply reduce the amount of exposed C/C++ code. I suspect they would have gone with pdfium had it been available but the security improvements would still have made that decision non-trivial.

> There is a lot to be said for having choice.

Yes there is: https://www.ted.com/talks/barry_schwartz_on_the_paradox_of_c...

That's a really lazy rebuttal. Having another option that would be superior along multiple dimensions leads to analysis paralysis? We're not talking about Javascript frameworks here.

Maybe Mozilla isn't a big enough company to tackle it, or it diffuses their focus, but it's important to do.

Ubuntu is still trying, so there's that, and that's great.


I am looking forward to using an 8GB RAM 8-core 64-bit ARM phone with 2560x1440 HDMI, running Ubuntu desktop as my main PC in a year or two.

Currently only the quantity of phone RAM and the display resolution are hardware issues.

Waste of effort. It's never going to work -- the gap is so wide and the network effect of apps and ecosystem is so entrenched that Ubuntu would have to do something revolutionary, versus an incomplete copy of what others have already invented and are still improving with massive (collective) resources. Add a dedicated hardware requirement and you've got the nail in the coffin.

It'll work the same way it happened in PC land.

1: Microsoft will ensure that capable hardware (both phones and peripherals) exists so that Continuum succeeds.

2: Ubuntu will leverage the hardware so that it's usable by the 1% who actually want it. 1% of a really big number is still a big number, so it'll be viable, if only barely.

Except that's not how this can work.

First Microsoft is going to fail. They already are. Eventually they'll stop producing hardware, but I'll just assume you mean Samsung or whatever.

The hardware is specific to what each phone provides (which changes per device), and usually the hardware vendor ships the drivers closed source integrated with the rest of the OS. In the PC land we had a neutral Microsoft that packaged drivers and distributed them since anyone might want to change their video card. That's not what happens with phones. So Ubuntu will most likely need a very specific phone just as they are currently doing and selling. Which means random people can't just try it out, since the phone they already have and like isn't compatible (let alone "dual boot.")

Now Ubuntu and their partners have stock that needs to get sold, and everything unsold is a loss. I can't install their OS on my iPhone, so there's no community support to be building this in an open and decentralized way. My galaxy won't be supported well because of driver issues, and since android is already open source enough there's no big momentum there to change things either.

Microsoft with all their resources and power can't manage to keep their marketshare and is around 1.7%. 1% of smartphones is huge -- you think Microsoft's billions of dollars and a known name couldn't do it, but Ubuntu can on their own hardware that you now have to buy?

I wouldn't hold my breath.

Well, we are in a similar, or maybe better, position than desktop Linux was 10 years ago. So yes, it might not work, but I would not wave the white flag yet.

I'm not sure, but I think that on Android resources are restricted in ways that give first priority to Dalvik vs other runtimes. I also suspect Chrome is given priority when it comes to integration with it.

There's also something to be said about the advantages that default browser's get in terms of market share.

Mozilla's move to making their own OS might have come too late but the reasoning behind it was both strategically and technically sound.

Android doesn't run Dalvik any more and Chrome doesn't appear to be given "priority" in any meaningful sense.

Their reasons may have seemed sound technically, but the business of building a phone ecosystem on a completely new OS was well beyond their abilities.

You seem to have ignored the strategic reason for having a platform. It's very hard to compete with a default application.

Sure it may not have been in their wheelhouse but they certainly had reason to try.

They had to build another OS because they couldn't build a better browser on iOS.

Perspectives for building a better OS running on iPhones weren't that rosy either...

The problem is Apple and Google have financial incentives to drag their feet on things like WebRTC to prevent the web from eating into app sales in their walled gardens.

VR will be a fascinating glimpse at where we stand in this power struggle. Do I have the right to distribute software to my iPhone owning friend? No. Only Apple has that right.

A web OS would bring us back to the age of decentralized distribution we once had with PCs and boxed software sales. It would offer a check on Google and Apple's attempt to own centralized control of software distribution. An escape valve for the users who Apple and Google are currently preventing from writing the software they'd like to write (like web VR).

Without an open source web OS such apps are gated by what perhaps a few hundred engineers at these two companies can imagine, implement, and push through internal politics.


Sent from my iPhone.

In what sense has Google dragged their feet on WebRTC?

It wasn't supported on Android until December.

But regardless, I didn't say "Google dragged their feet" I said "Google has an incentive to drag their feet". It's the incentive that scares me. I still trust Google to some degree, but not unconditionally.

And as a bonus they're building all this in Rust, probably the first language to seriously compete with C++ (for the things C++ is good at, like web browsers and game engines, where complexity and speed rule out most other languages).

Not sure if you have noticed but... you are agreeing with a comment that celebrates Mozilla focusing on, and only on, its core competency, and adding that they used a new language they created :-)

Browsers are already highly specialized and complex VMs. If you have the ability to build a browser and work on all parts of it, you have the ability to create new languages.

So creating a new language is not at all that far away from their core competency. They are also working in a space where they have a strong need for a good systems language, so the motivation is there as well.

I think them building it in Rust is a good thing. But Rust is a Mozilla project, so it's not as though they're doing people favors. What it certainly does is showcase the strength of the language, considering they have proven to be so badass.

That's sort of what I meant to get at - building Rust is a huge service to other people, and an important one too. Servo is showing that they're really putting the effort into optimizing it and making things work as best as possible.

> probably the first language to seriously compete with C++

How about Object Pascal? I think Object Pascal is a better C++ than C++. It's too bad it isn't more widely used. Here are a couple of interesting perspectives from people who have done some recent development with Object Pascal:

1. https://news.ycombinator.com/item?id=11005203 (see also https://github.com/whatwg/wattsi)

2. http://ziotom78.blogspot.com/2015/01/lfi-data-analysis-with-...

I think the comment isn't really about the technical merits, but more about the fact that Rust has gotten some major traction, not just in Mozilla but elsewhere.

Where else? Rust usage is quite small at the moment relative to Object Pascal and definitely to C++. The new language that's rapidly gaining marketshare is Swift, mostly because Apple's pushing it.

The most notable one I'm aware of is Dropbox, who are reportedly using Rust in production for some low-level parts of their storage platform.

Do you have a link to the reports? Python and Go seem to be used a lot at Dropbox. Blog post on their Go usage: https://blogs.dropbox.com/tech/2014/07/open-sourcing-our-go-...

I could only find two Rust repositories on their github page, both of which are library bindings forked from other projects, so that seems to indicate some Rust usage. But then again there are more Haskell repositories than Rust repositories: https://github.com/dropbox

They've talked about it on Reddit a lot. https://www.reddit.com/r/programming/comments/3w8dgn/announc...

Their Rust stuff is closed source, not open source.

>no identity built in the browser

Except a non-commercial identity provider would be wonderful. The status quo of Google/Facebook providing identity is insane. They can also use this as leverage against Mozilla politically. Hell, I get pissed when I can't google login to a site nowadays. Shame Mozilla couldn't be the ones providing this. I certainly see it as important to the web. Mozilla shouldn't be just a browser maker.

I do agree about the mobile OS and other things. I don't think every organization should have its own OS. There are way too many right now, if anything, and it's a non-trivial project.

Yeah, I was saddened by them abandoning Persona. It was a really great idea, even if there were some flaws related to how one was supposed to change usernames when shifting emails. I feel like they never gave it the effort it deserved... I never even saw a FF addon providing the browser integration they were targeting :(

> Except a non-commercial identity provider would be wonderful.

We've had that all along. It's called email. It's decentralized. There are already numerous free providers. There are already numerous commercial providers. You can even be your own provider, if you want. You can have multiple identities, and they can be as loosely or as tightly tied to your actual identity as you want. Many services and web sites will already let you use an email address, along with a password, to authenticate. What you're asking for already exists!

And we all use it all the time. But the user experience of Google login or Facebook login is manifestly much superior.

The fact that the inferior legacy solution is still around lessens the pain, but it's still a tragedy that Mozilla did not succeed to establish a competitive modern solution.

I think it is merely a matter of time until email log in is no longer ubiquitous.

I'm 100% for this. I used to use Firefox for everything. But then Chrome was released which had a much better developer experience (and still does, in my eyes). I'd rather go back to Firefox but each time I do, I find a feature or performance issue that keeps me going back to Chrome. I _really_ wish this wasn't the case.

I wonder how much of that is just familiarity. FF is my main browser, and I always feel a little hamstrung when I use Chrome's dev tools.

People obviously do great work with Chrome's dev tools, though, so I'm pretty sure it's just my familiarity with FF and my lack of familiarity with Chrome.

The mobile OS was basically Android with a UI rendered in Gecko.

I find it more troubling seeing them continually locking down the ability to customize the browser to my liking, and instead chase Chrome's design language.

Never mind that Mozilla, like much of FOSS as of late, has been conducting all kinds of "social consciousness" (or should I say "social justice"?) projects that have crap all to do with writing good code.

Projects that end up being a time and money sink for already small (compared to the various corporate entities they are competing with) orgs.

If I were the conspiratorial kind I would wonder if they had been infiltrated. But more likely they are being "infected" by a "victim" culture that has been brewing on college campuses for some time.

Cool! I run this meetup. This video was taken at the Bay Area Rust meetup [1] last week. We also had two other speakers who gave great talks:

Sean Griffin, on Diesel, a safe and extensible ORM [2]

Alan Jeffrey, on Parsell, a streaming parser written for WebAssembly [3]

[1]: http://www.meetup.com/Rust-Bay-Area/events/219697075/

[2]: http://diesel.rs

[3]: https://github.com/asajeffrey/parsell

Looks nice, we do similar videos of our meetups (example: https://www.youtube.com/watch?v=35UVffLINkc). What gear do you use?

We have our meetups at Mozilla's San Francisco office, so we piggyback on their AV equipment. Unfortunately I don't know what they use. However, there is an IRC channel irc://irc.mozilla.org/#airmozilla that I'm sure has some folks who could answer your questions.

So Parsell saves state between received chunks.

Why can't it swap execution contexts, aka stacks? In other words, use coroutines and simply yield between chunks instead.

Wouldn't that be more efficient and simpler? Is there some aspect in Rust that makes coroutines less desirable than (manually) saving and restoring state?
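For contrast, here is a toy sketch of my own (not Parsell's actual API) of the "save state between chunks" approach being asked about: the parser is an explicit state machine whose state is a plain value, so suspending between chunks is just returning from a function — no stack swap or coroutine support is needed from the language.

```rust
// Hypothetical chunked parser: parse an integer that may arrive split
// across several input chunks, saving state in the struct between calls.

#[derive(Default)]
struct IntParser {
    value: u64,     // digits accumulated so far, across chunks
    seen_any: bool, // have we seen at least one digit yet?
}

enum Step {
    NeedMoreInput,     // chunk exhausted; call feed() again with more data
    Done(u64, usize),  // parsed value, plus bytes consumed of this chunk
}

impl IntParser {
    /// Feed one chunk; parser state survives in `self` between calls.
    fn feed(&mut self, chunk: &[u8]) -> Step {
        for (i, &b) in chunk.iter().enumerate() {
            if b.is_ascii_digit() {
                self.value = self.value * 10 + (b - b'0') as u64;
                self.seen_any = true;
            } else if self.seen_any {
                return Step::Done(self.value, i);
            }
        }
        Step::NeedMoreInput // "suspend" by simply returning
    }
}

fn main() {
    let mut p = IntParser::default();
    // The number 12345 arrives split across two network chunks.
    assert!(matches!(p.feed(b"123"), Step::NeedMoreInput));
    assert!(matches!(p.feed(b"45;"), Step::Done(12345, 2)));
    println!("ok");
}
```

At the time, Rust had no stable coroutines, so explicit state machines like this were the portable way to write resumable parsers.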

Suggestion for the video: I would prefer it if the presenter's screen were fullscreen. I'm not interested in him talking or the background image.

Besides that: Thanks for these meetups and the video footage of it :)

So - is this a NEW browser? He mentions that he won't run this on firefox.

Is "Servo" their "next" browser? What does the mozilla ecosystem look like? Will this technology get folded in to firefox?

This is really cool but there's a tonne of stuff here that I've never heard of.

Can anyone explain it like I'm five? How does this fit into the big picture?

Mozilla have been developing a programming language called Rust for a while now. Two of the things it explicitly attempts to address are removing classes of potential memory bugs (like buffer overflows) and making it easier to write concurrent multicore programs. These two traits map nicely onto the browser landscape, as most devices have 2-8 cores now and memory bugs are a perennial problem.
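Both points above can be shown in a few lines of modern Rust. This is a toy illustration of my own, not Servo code; note `std::thread::scope` is a newer standard-library API (stabilized well after this talk).

```rust
use std::thread;

fn main() {
    // 1. Memory safety: indexing is bounds-checked, so an out-of-bounds
    //    access panics instead of silently overflowing a buffer.
    std::panic::set_hook(Box::new(|_| {})); // silence the expected panic message
    let v = vec![1, 2, 3];
    assert!(std::panic::catch_unwind(|| v[10]).is_err());

    // 2. Concurrency: scoped threads may borrow local data, and the
    //    compiler rejects programs where a data race would be possible.
    let data = [1u64, 2, 3, 4];
    let total: u64 = thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(2) // each thread sums one disjoint half of the slice
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });
    assert_eq!(total, 10);
    println!("ok");
}
```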

To capitalise on this, and to give Rust a ‘halo’ project that uncovers potential issues early, Mozilla kicked off a project called Servo to research the ways that Rust could be used to produce a browser rendering engine. This has let them explore technologies like multicore rendering of a single page without dealing with the practical constraints (to put it kindly) that Gecko has scattered throughout its codebase.

After Mozilla had made some progress, Samsung became interested and started contributing a reasonable amount of code. I’m not sure if this is still happening, but you can see their interest given their predilection for 8-core mobile CPUs.

Mozilla currently have no plans to replace Gecko with Servo outright, but they are folding Servo components back into Gecko when they’re portable and have obvious benefits. Gecko’s build chain can now handle Rust and I believe the URL parser is a port of Servo code (though I’m not sure it has landed yet). I think I recall a Mozilla engineer saying that Servo had progressed beyond a research project now, and it is pretty good at rendering reasonably complex sites, but they’ve still got a long way to go before it could be a drop-in replacement for Gecko.

This is a great summary. Some very minor details that you didn't cover:

Rust and Servo were basically announced to the world at the same time, as "the Servo project." It was really important that we build Servo at the same time as Rust, to make sure that the language was actually useful for building a real, practical thing. The experience of building Servo helped Rust's development immeasurably. Furthermore, it also helped Rust be built in Rust itself. There's a danger when you build a bootstrapped compiler: you might make a language that's great for writing compilers, but not much else. Servo made sure that wasn't true.

https://bugzilla.mozilla.org/show_bug.cgi?id=1135640 is the overall tracking bug for moving Rust stuff into Firefox. Firefox 45 on Mac and Linux will contain a teeny bit of Rust, and it just got turned on for Windows a few days ago. It's an MP4 metadata parser. The URL parser has a patch, but it hasn't landed yet.

Gecko is also importing bits of Servo's CSS system: https://bugzil.la/stylo

It should be mentioned that even though they started out as one project the Rust language team has done a great job at not letting the development of the language be influenced by the needs of the Servo project.

This is not at all true. The needs of Servo have had a huge impact on the development of the Rust language. The Rust team prioritizes feedback from Rust users, the largest of which (larger than rustc itself) is currently Servo.

I believe what hobofan is implying is that the Rust developers have resisted bending the language solely to the needs of Servo, despite there originally being almost a 50/50 overlap between the Rust and Servo developers (nowadays the division is more clear, with pcwalton working full-time on Servo and brson working full-time on Rust).

The most notable example is the proposal variously known as "virtual structs" or "inheritance", which had a brief prototype implementation in Rust (circa 2013) solely to support Servo's needs (specifically regarding representing the DOM), but which met a wall of opposition from the community. Such a feature is still likely to appear sometime in the future (after all, if Servo needs it, then it's likely that others do too), but the Rust developers are taking their time to explore solutions that integrate the most cleanly with the rest of the language.

It is true in some sense, we don't just implement things Servo wants. If Rust was beholden to Servo, we would have put inheritance in the language over a year ago, for example. That said, the eventual work for "something inheritance like" is absolutely predicated on Servo's needs. But we put it on the roadmap and weight it like anything else.

That said, we do take their needs into account rather highly, just like any project with a large upstream user.

Just like we wouldn't want Rust to be useful for only compilers, "only compilers and web rendering engines" isn't much better.

Is there a plan to test the security benefits of writing Servo in Rust? Or, in other words, at what point will it be both practical and realistic for Servo to be tested in something like Pwn2Own?

Well, Pwn2Own is usually done with production-ready browsers, so I would assume Servo isn't eligible until it supports the whole web platform.

I'm not sure, as I'm on the Rust team, and not Servo; maybe someone who works on Servo directly knows better than I what plans are here.

This was an excellent summary explanation; thank you.

Externally, Firefox has become a brand. Servo is comparable to Gecko or WebKit (it is a browser engine), although it is new, not finished, and uses novel techniques.

What follows is speculation on my part. I am not a Mozilla employee.

Mozilla might, in about five years, offer a new product for desktops which features Servo, SpiderMonkey (the JS engine currently used in Firefox), and an as-yet-undefined chrome (the UI bits around the rendered page). It may be a mobile product instead, or a product for a different type of hardware, although the current thinking seems to be to target it for desktops.

Whatever that product is, it may still be called Firefox or a derivation of that brand (as Firefox OS was), although it might have a completely new name (it worked for Microsoft Edge, after all).

(Updated for clarity.)

  > Mozilla might, in about five years, 
Since some of us in this thread work for Mozilla and on Rust/Servo, I would like to point out that this number is your own speculation and nothing official, timeline wise. Nothing the matter with that! Just don't want people to get confused.

It's a research project to build a browser written in Rust. A few components will be merged into Firefox, but Servo itself probably won't be the "next" browser.



(Servo team member here)

So. There's a programming language called Rust[1] which Mozilla pushed for and developed (it's now developed by a larger community, though Mozilla is still involved). Among other things, it guarantees freedom from memory safety issues and data races. Both are important here.

Freedom from memory safety issues gets rid of a whole class of security bugs (common in browsers) in one fell swoop. Lack of data races lets one write concurrent code without worrying; without needing to program defensively everywhere. "Fearless Concurrency" is what we call this colloquially, after a blog post with the same name[2].

Fearless Concurrency lets us try ambitious things like making layout parallel. Layout in browsers is the step that takes all the nodes on the page with their hierarchy (the DOM) and computed CSS style, and figures out what goes where. All browsers currently do this in a single threaded way.

Servo[3] is a research project by Mozilla. It's a browser engine[4] which does layout (amongst other things) in parallel. It's written in Rust to make this possible (and to avoid security bugs due to memory safety issues). Webrender is a (successful!) sub-experiment to see if we can be more like game engines and do as much on the GPU as possible.
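As a toy illustration of what "fearless concurrency" buys you (this is not Servo's layout code, just a sketch using modern Rust's scoped threads, where the compiler statically rules out data races on the shared slice):

```rust
use std::thread;

// Sum a slice on two threads. `thread::scope` guarantees both threads
// finish before the borrow of `data` ends, and the borrow checker rejects
// any attempt to mutate `data` while the threads are reading it.
fn parallel_sum(data: &[u32]) -> u32 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        let a = s.spawn(|| left.iter().sum::<u32>());
        let b = s.spawn(|| right.iter().sum::<u32>());
        a.join().unwrap() + b.join().unwrap()
    })
}

fn main() {
    assert_eq!(parallel_sum(&[1, 2, 3, 4, 5, 6, 7, 8]), 36);
}
```

Parallel layout is the same idea writ large: many subtrees of the page processed on different cores, with the type system guaranteeing they can't stomp on each other.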

Here are a bunch of (non mutually exclusive) ways it can go from here:

- Bits and pieces of Servo's code get uplifted into Firefox/Gecko (Gecko is the browser engine behind Firefox). This is already happening[5] (integrating webrender into Firefox is something being discussed, too). This means that Firefox can get some wins without having to wait for Servo to be completed.

- Servo gets its own UI and is shipped as a new browser. A promising UI being worked on is browser.html[6].

- Servo becomes an embedding thingy to be used like a webview.

- Servo replaces Gecko for Firefox for Android: Whilst replacing Gecko in Firefox Desktop is rather hard (the two are tightly intertwined), this is less of a problem in the Android app. It's possible that Servo would have its own app that's either released under a new name or branded as the new Firefox for Android.

- Servo replaces Gecko within Firefox: Hard (not impossible). Probably won't happen.

[1]: http://rust-lang.org

[2]: http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.ht...

[3]: http://servo.org

[4]: This is not the same thing as a browser. A browser engine handles the displaying and interaction of a given webpage, but doesn't bother itself with concerns like a URL bar, bookmarks, history, a proper UI, etc.

[5]: https://wiki.mozilla.org/Oxidation

[6]: https://github.com/browserhtml/browser.html

Oh how I wish Mozilla would just slap basic browser chrome on and ship the thing. Firefox used to be a fast browser, nowadays after a couple hours it takes 30s to open a new window on my 32GB machine and 10s to register a link click and start loading the page.

I am in my peer group the last Firefox guy standing, but not for much longer, I fear.

I compile Servo twice per week or so to check out the progress. It's really not ready to go as a daily browser. You can load pages, they're recognizable but generally not fully correct, the form input fields are rudimentary, and it crashes every time I go back to the previous page.

There's been one proof-of-concept basic chrome [1], and it wouldn't be THAT difficult to build a more robust one, but I'd be much more interested in seeing it running in Electron so you could run Atom or VS Code, since those domains are much more limited and rendering perf is a fairly significant limiting factor in those apps.

[1] https://github.com/glennw/servo-shell

P.S. I've never had perf issues with Firefox like you're talking about. You might want to try cleaning out your profile and if it's not that, you might have addons doing something screwy.

https://github.com/browserhtml/browser.html works fine as a basic chrome too.

Could you expand on "form input fields are rudimentary"? (I know they are, I'm wondering what part of this you're referring to). Is it the styling, lack of selection support, lack of HTML5 input fields, bugs, or something else? (I might be looking into improving these later)

The "go back" crash is something that can be fixed easily; I think we know why it happens but haven't fixed it yet (I forgot the reason though).

> Could you expand on "form input fields are rudimentary"?

All of the above except the lack of HTML5 fields. :) I don't mean to be negative, just to give reasons it's not as simple as "just slap chrome on it". I really should start contributing.

Yeah, no I get it :)

Thanks for the feedback.

I have done the profile cleaning routine (I think that's something like the defragmentation runs of old...) and disabled all add-ons except for uBlock, Reddit Enhancement Suite and Greasemonkey (for a total of 3 GM scripts).

I don't understand what's up either, but I have it both on my Windows work machine with a 4+4 core desktop CPU and 32GB RAM and my MacBook with 16GB. After a couple of hours, I can type in a text field and see the text lagging. On a 2.4GHz machine.

I do have lots of tabs open, but since they only load on activation I doubt that's the reason. Maybe it's the Web Telegram app. It's totally frustrating that I am not even able to find the culprit if there's one since not even the dev tools have something like `ps -aux` to show what the tabs are consuming.

Granted, it's gotten better with v.45, but still I just hope something useful comes out of Servo, and soon.

Try going to about:memory to see if some tab is taking up a lot of memory. I've got this CI system where if I leave it open overnight it takes a gig or two and makes the browser unusable (Firefox and Chrome; it's a problem with DOM elements). If your GM scripts retain DOM elements for a long time (and the tabs aren't closed) that might be an issue too.

We'd love it if you submitted some profiling data: https://developer.mozilla.org/en-US/docs/Mozilla/Performance...

Is there a way to do it on the bog-standard Firefox? I can't use a nightly on my company's work machine.

Have you tried running without any extensions after having reset Firefox? The only time I've seen that where it wasn't a dying hard drive or a malware-infested/corporate IT-mismanaged system was when someone had some particularly poorly-optimized extensions installed.

I was curious and ran a couple of tests on this 2010 MacBook Air. Normal launch time according to about:healthreport is 2.8 seconds, and testing with something like `open -a Firefox https://www.google.com` when Firefox was not already running takes 4-5 seconds to finish rendering. When it is already running, it's under a second from start to finish.

On my Windows machine, firefox.exe has been running since 14th of February, and I'd estimate it takes less than 2 seconds between Ctrl+N and registering me typing in the default home page search box. Of course, Ctrl+T is much faster.

My extensions include Classic Theme Restorer, Cookie Controller, FireGestures, RefControl, Status-4-Evar, Tab Mix Plus, Tree Style Tab, uBlock Origin and FoxyProxy, amongst several others - this isn't a lightweight installation.

Chrome isn't competitive with Firefox + Tree Style Tabs, so there's little risk of me switching any time soon.

Servo's not production-ready yet. "A good browser chrome" is not the blocker for this; browser.html is already a good enough "basic browser chrome".

I tried using Servo with browser.html and found it to be unusably slow. Is this browser.html's fault or Servo's? I'm thinking it's Servo; it's too slow to just be slow JavaScript, and it feels like my GPU isn't being used. On a MacBook.

Did you use a release or a debug build? The perf difference in Rust between those is quite staggering.


Please file a bug with your system specs and specifically what you were trying then. We'll try to get it triaged ASAP.

And link it here. I tried to build browser.html+servo today on macOS and the browser.html UI feels terribly slow (while servo launched on its own feels fast).

Thanks for the report! Landing a fix for the issue (-b/glutin) right now—I had it queued but never actually got around to landing it.

Thanks, where can I run benchmarks (either the ones in your video or others) so I can compare it against Firefox? It still doesn't "feel" as smooth as Firefox, but then again that could be unrelated to WebRender.

I think it's not really ready for informal benchmarks like that. There are lots of random issues. Feel free to try various pages and file bugs, though.

Don't forget to use the -w switch to enable WebRender.

If that doesn't help, please file a bug. Getting to optimum performance with diverse hardware configurations is a slog, unfortunately, and any and all performance/hardware data is a great help :)

I'm using `gulp servo` which appears to use that switch: https://github.com/browserhtml/browserhtml/blob/master/gulpf... . Used the binary in the target/release folder... not sure what I could have done wrong.

Their roadmap also specifies how some parts of Servo might get incorporated into Firefox: https://github.com/servo/servo/wiki/Roadmap.

I watched this the other day, and I've been mulling over the observation that onboard graphics are getting more and more die-space.

On mobile, where discrete graphics are a rarity and power is a scarce resource, it makes sense to offload as much work onto the GPU as possible. Light it up, do the work as quickly as possible, then go back to sleep.

However I started wondering about what can be done for high-end desktops. I just bought an i7-6700K, and of course it's paired with a very powerful discrete GPU.

So as I understand it, effectively this means about 1/3 of that 6700K is just dark all the time. If there's any GPU work to be done it will all be offloaded to the PCIe bus, not the onboard graphics.

How can we make use of that silicon in scenarios like that? Is there any advantages an onboard GPU has that can be leveraged over a discrete GPU? (Apart from power efficiency; which in my opinion is a non-issue when thinking about PCs with 800W+ power supplies.)


P.S: Of course all this would be moot if I had just bought the 5820K. Live and learn.

> How can we make use of that silicon in scenarios like that?

Well, in theory you could run them simultaneously. Use one of them for compute and one of them for display. Of course, this may well be quite a stretch in practice; GPU compute in any consumer software is completely nonexistent, to a first approximation. But it's worth thinking about, as there has been a steady stream of research coming out for years about various useful things that you can do with GPU compute.

Using two GPUs for rendering (or rendering + compute) is a lot easier to do with DX12 and Vulkan - there are some benchmarks/game engines out there that can do AFR across paired GeForce and Radeon cards, for example, and it actually performs really well. So with Vulkan you could probably utilize both an Intel onboard GPU and a discrete GPU simultaneously. The question is how much leverage you can get out of that, but I bet people could come up with good uses (rasterizing font glyphs or doing some computational work, perhaps)

Exactly. As an example: when using OpenCL, both the IGPU and the discrete GPU will show up as compute devices. Currently if an application uses the GPU at all, it would often pick the one it considers the most powerful. But even today you could theoretically run benchmarks and then use the IGPU if that is sufficient, or use both at the same time.

When the iGPU is not in use, the CPU is able to stay in its thermal budget and sustain turbo speed for longer.

For some workloads though you're not maxing out the CPU or its thermal budget, so mixed GPU/CPU could still be a win.

There is a piece of software called VirtuMVP that attempts to use both the iGPU and the discrete one.

>which in my opinion is a non-issue when thinking about PCs with 800W+ power supplies

My electricity bill isn't a non-issue ;-) Also power efficiency is good from an ecological standpoint.

Texture uploads to an onboard GPU are faster, IIRC.

Especially when it's already in the right format in the same address space. Then no upload is needed at all.

Except that GPUs don't usually read from linear "raster order" images or are very slow doing so. So even in a shared memory CPU/GPU setup, there is a texture "upload" which organizes the pixel data to a GPU cache-friendly block format.
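That reordering can be sketched roughly like this (a hypothetical 4x4 block swizzle for illustration; real drivers use vendor-specific tile layouts):

```rust
// Reorder a w x h single-channel image from linear row-major order into
// 4x4 tiles, the kind of cache-friendly layout GPUs prefer. Sketch only;
// w and h are assumed to be multiples of 4.
fn tile_4x4(src: &[u8], w: usize, h: usize) -> Vec<u8> {
    assert!(w % 4 == 0 && h % 4 == 0 && src.len() == w * h);
    let mut dst = Vec::with_capacity(src.len());
    for tile_y in (0..h).step_by(4) {
        for tile_x in (0..w).step_by(4) {
            for y in 0..4 {
                for x in 0..4 {
                    dst.push(src[(tile_y + y) * w + (tile_x + x)]);
                }
            }
        }
    }
    dst
}

fn main() {
    // An 8x4 image: the first 16 output bytes are the left 4x4 tile.
    let src: Vec<u8> = (0..32).collect();
    let tiled = tile_4x4(&src, 8, 4);
    assert_eq!(&tiled[..4], &[0, 1, 2, 3]);
    assert_eq!(&tiled[4..8], &[8, 9, 10, 11]);
}
```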

What is die-space? / What does die-space mean?

It's covered in the video too (with visuals that illustrate what your parent is talking about), but the "die" is the slab of silicon that the CPU itself is on. https://en.m.wikipedia.org/wiki/Die_(integrated_circuit)

With WebRender and asm.js/WebGL, the last couple of years have been great for rendering in a browser, and it has all been led by Mozilla. Keep it up.

One of these days we will effectively figure out why serious organizations can't seem to do even basic audio leveling on prerecorded videos that they intend for public consumption.

Seriously. It's 10 minutes of work, at the outside. Normalize the audio to an average of -3 dB so that people can actually understand what's being said without having to jack up their volume beyond all reason and then turn it back down for the next thing they have to listen to, or get their ears blasted by the next notification sound that comes along.
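Mechanically, that normalization is just measuring the track's level and applying one gain factor; a sketch (not any particular tool's implementation):

```rust
// Gain factor that brings a measured level (linear, e.g. RMS or peak,
// in 0..=1) up or down to `target_db` dBFS. Illustrative only.
fn normalize_gain(measured: f32, target_db: f32) -> f32 {
    let target = 10f32.powf(target_db / 20.0);
    target / measured
}

fn main() {
    // A track averaging a linear level of 0.1 (-20 dBFS) needs roughly
    // 7.08x gain to reach -3 dBFS.
    let g = normalize_gain(0.1, -3.0);
    assert!((g - 7.079).abs() < 0.01);
}
```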

There's just no excuse for it.

I've been searching for a music player app that does volume normalization for years, without luck. I would like to be able to listen to normalized classical music, for example, because the volume varies a lot in this kind of audio from quiet passages to loud ones. When listening in car or on the phone while outside it becomes hard to hear the quiet passages because of all the surrounding noise, so, normally, I'd raise the volume, but then comes the loud part and my ears bleed. Normalized audio also plays nice with listening to music on the feeble phone speakers. I kind of find it nostalgic to listen on such tiny speakers, like the transistor radios from a few decades ago.

What I am talking about is raising the volume of the quiet parts, not making all the parts of the track louder. I would listen at normal volume but be able to hear at a sane volume throughout the track. I think this can be achieved offline with the compand effect on sox.
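A minimal per-sample sketch of that kind of dynamic range compression (a hard-knee downward compressor; sox's compand is considerably more sophisticated, with attack/decay envelopes):

```rust
// Reduce amplitude above `threshold` by `ratio`; combined with makeup
// gain this effectively raises quiet passages relative to loud ones.
// Simplified: real compressors smooth the gain change over time.
fn compress(sample: f32, threshold: f32, ratio: f32) -> f32 {
    let level = sample.abs();
    if level <= threshold {
        sample
    } else {
        sample.signum() * (threshold + (level - threshold) / ratio)
    }
}

fn main() {
    // Quiet samples pass through; a loud 0.8 sample is squashed to 0.575.
    assert_eq!(compress(0.3, 0.5, 4.0), 0.3);
    assert!((compress(0.8, 0.5, 4.0) - 0.575).abs() < 1e-6);
}
```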

Another listening experience improvement could be to compare the relative volume of noise outside (using the mic) with the volume of the music being played and slightly adjust the volume to keep it above the background noise.

In conclusion, it is necessary to consider the fact that listening on the phone and in the car happens in noisy environments, and quiet passages are almost drowned out if the user doesn't compensate. Why force the user to mess with the volume every few minutes when it can be done automatically?

Maybe a brave developer will champion this idea and release a music player or even better, an app to normalize over the whole OS so we could benefit from it while using other apps like YouTube.

As the sibling comment pointed out, you are most likely looking for audio compression ("dynamic range compression") instead of normalization.

I know many people want such a thing, though I wouldn't touch it myself (I like my music to have some dynamic range). But just FYI, there already exist music players that have compression plugins like rockbox and vlc, you might want to check them out.

this is usually not called normalization, it's more commonly called dynamic range compression, or just 'compression', fwiw.

Watched the video yesterday and it's incredibly impressive work. I was actually a bit surprised this hadn't been done more often and had assumed (incorrectly) that it had.

Nonetheless, I imagine we're going to see more projects like this coming out. It's going to be great to see what people can do when they can effectively use CSS to push high framerates, and it's awesome that we'll be able to achieve 60+ FPS animation in the browser with ease.

Why can mozilla not put these videos on youtube :-/ air is horrible. Can't watch without downloading.

I'm happy that they try to push the open web forward instead of data silos like YouTube, Facebook or Twitter. And for that you don't only need a great browser but also content in different places to counter the monopolies. Only this way can you find the bugs and fix them.

> I'm happy that they try to push the open web forward instead of data silos like YouTube, Facebook or Twitter. And for that you don't only need a great browser but also content in different places to counter the monopolies. Only this way can you find the bugs and fix them.

They could mirror it to YouTube and keep their own thing as a backup. If anything, I trust content on YouTube to survive the times better than Mozilla's own service. There have been too many videos going offline because they were on tiny services.

> If anything I trust content on youtube to survive the times better than Mozilla's own service.

I don't. There have been too many videos I embedded on my own website that went offline because YouTube took them down for various reasons.

And my MySpace content is gone, the content from 2004 in my blog is still served without problems. Example: http://jeenaparadies.net/weblog/2004/apr/ersteintrag

Does your browser not support <video>?

> Does your browser not support <video>?

It does. But the server is not responding fast enough for me, neither on my home ISP nor on my phone's. So my assumption is that they either don't have enough bandwidth or have bad peering with my country.

There's also a link underneath the video to download it.

> Can't watch without downloading.

YouTube has quite a few bugs in Firefox. I actually prefer air, what's wrong with it?

YouTube works fine in Firefox. I use it all the time. It works well in mobile firefox as well.

I was affected by this bug for example: https://news.ycombinator.com/item?id=10877810

That bug was YouTube's fault.

That's why I said that YouTube has quite a few bugs in Firefox ;)

Outside of places like the US? Crappy streaming, for starters.

Hm ... works fine here in Germany.

Agreed. YouTube is terrible on my FF compared to air.

Youtube lets me watch at 1.5 to 2x speed.

So does all HTML5 video, it just doesn't show the controls.

Here's my bookmarklet (doesn't work on embedded videos in iframes, I assume because of same-origin-policy)


> So does all HTML5 video

When there's the bandwidth to serve it, air often fails to load fast enough for 1x.

YouTube isn't the fastest here in Germany either. For example the Telekom (Germany's largest ISP) has bandwidth troubles with YouTube from time to time (because of some backbone bottleneck). I could only find a German blog post about this, but I have definitely experienced this myself: http://stadt-bremerhaven.de/telekom-aeussert-sich-zu-youtube...


Tessellation shaders could get around the vertex shader limitation on weird clips

Would vulkan/dx12 reduce much given the AZDO style techniques used?

Having the GPU do work queueing might be cool as well--have it deal with the AABB tree

Very cool stuff

> Tessellation shaders could get around the vertex shader limitation on weird clips

Indeed! It's issue #61, filed in October 2015 :) https://github.com/servo/webrender/issues/61

Is there reasonably-priced hardware that can do several hundreds of FPS? Preferably something with a built-in battery and Wi-Fi or bluetooth? I'd really love to build a volumetric 3D display by spinning the thing. I tried spinning my phone but the hardware screen refresh rate was way too low to do anything impressive even if processing power wasn't the bottleneck.

There are a few gaming monitors that can do 200fps. Maybe even 240. I don't think any mobile devices go above 60. Maybe that'll change as mobile VR picks up.

The only such monitor I know of is the Acer Predator Z35, which is advertised as 144Hz but can usually be overclocked to 200Hz. The only "240Hz" LCDs currently available interpolate from lower input refresh rates. I'd love to see a true 240Hz monitor, or even higher.

I believe this is a problem of the signaling as well. Even the new Thunderbolt 3's display mode is still stuck with DisplayPort 1.2 signaling.

If this were built into WebKit + PhoneGap/Cordova, does that mean we could write mobile apps with web/CSS that are faster than the best-optimized native ones?

What would the limitations of an HTML/CSS mobile app be at that point, compared to the best native apps?

WebKit is something else; Servo will not be built into WebKit. As for the rest of your question: if Apple allowed alternate JS engines (they don't), then a web renderer like Servo would be very fast and competitive with native applications.

I don't follow this. I understood the parent to be asking about the ability to embed Servo as the front-end of the app. In that case, Apple is OK with it (the only limitation left is the JIT).

I would love a way to have a fast HTML renderer for app development; currently it looks like the only game in town is http://sciter.com

iOS does not allow alternative rendering engines. Firefox and Chrome on iPhone are just reskinned versions of the inbuilt browser.

Yeah, but that's for the browser the user uses. What about distributing it embedded inside an app?

I read somewhere that React Native/PhoneGap-style JS apps are already allowed in the App Store.

Supposedly, the authors can also dynamically update the GUI/JS codebase on the client without going through the App Store update process.

Dude, all that stuff runs on WebKit. This thread is about Servo. At present, something like Servo cannot be embedded in an iOS app. Yeah, yeah, React Native or anything else can use web views and downloaded JavaScript or whatever you want to call it. You can't have Servo and its 100 FPS, though, which is what this discussion is about.

I believe that your parent was asking if WebRender could be used in WebKit, not Servo.

The obvious answer is no. This is a video about heroics by the Mozilla team to make WebRender and Servo very fast. I bet $5 that the people trying to make WebKit very fast are taking an entirely different approach. There are probably zero whole people in the world who want to work on porting the Mozilla approach to WebKit.

How are fonts rendered on the GPU here? Do they have a new font rendering engine?

"Glyphs—rasterized on CPU with Quartz or FreeType (for now)"[0]

0: http://pcwalton.github.io/slides/webrender-talk-022016/

The computer industry really needs to figure out and standardize on a good GPU font renderer soon. It's really limiting the utilization of system resources.

This is awesome and incredibly useful.


Let's see if I can get even more excited: would it be possible to render SVG in the same fashion? I guess SVG can be considered a scene graph as well, maybe even more so than CSS.

I would think the Cordova camp is crying with happy tears.

Interesting approach with designing a web engine like a game engine.

If you think well, it is actually making sense, at least for the CSS part.

I'm guessing you're ESL and you meant:

If you think hard it actually makes sense, at least for the CSS part.

Otherwise it reads as if in your brain you're saying "well, it is actually making sense," but then you'd need a consequence for the if.

Wonder if this will help WebVR run smoother

It would be nice if there were an APK with a slim interface for Android. I'd like to try this out on my phone.

http://servo-rust.s3.amazonaws.com/nightly/servo.apk exists, however webrender probably won't work on it since it needs a flag to be passed down, and I think webrender doesn't completely work on GLES yet (though it should in the future)

You can now pass flags to Servo for Android just like on desktop:

  ./mach run --android -- -w https://news.ycombinator.com/
Last time I tried it, WebRender wouldn't render anything on my Android device, but that was a few weeks ago.

Yeah, but that won't work with the APK :)

Hopefully, in the final version, they will limit the max FPS if it increases battery life (because I can't tell the difference between 60 FPS and 300 FPS). But I did notice some jerking in his last demo so I do wonder if some garbage collection or something else limits FPS once in a while...

Auto-play video.

Does a transcript (and perhaps slides) of this video exist? I avoid streaming on this particular connection.

I tried this out, and the benchmark page performance is pretty bad on Servo without the new painter, and great with it.

Interestingly, the same benchmarks seem pretty fast on mac chrome for me, so I wonder if I'm missing something about how effective the new technique is?

Does anyone know where the benchmark pages live? Can I run them on Servo myself?

I have a couple of bug fixes pending for these, though, so they may not work well right now. (In particular the border corner tessellation code is still disabled in master due to bugs, I think.)

Apologies—the WebRender work has been moving fast and things are a bit disorganized.

You can try them out in your own browser easily by visiting them from rawgit: https://cdn.rawgit.com/pcwalton/webrender-demos/master/moire...

I'm curious how Webrender is able to render native controls (buttons, text inputs, etc.) via this method - I suppose they have access to APIs not exposed in e.g. WebGL that make this trivial?

We currently don't use native controls. The form elements are CSS styled and look the same on all platforms.

But it would probably be possible to get the toolkit to render the control to a texture or something and move on.

Not really, unfortunately. Native toolkits aren't designed for this. As an example, I looked into using HIToolbox (the API Apple uses) for this on the Mac, but it was so obviously unsupported and likely to break at any time that I quickly decided against it.

This is super cool! I can imagine loading a web page in the next few years and hearing my GPU fan kicking into turbo and wondering what damn website is sucking up all my GPU cycles.

Are there any plans to eventually merge this into Firefox?

There is a plan (called Oxidation) to merge components written in Rust into Firefox, but I doubt the rendering will be one of them, at least any time soon.


Will this work with d3? d3 is pretty slow with a lot of elements, and it would be quite nice if we could get gaming performance in the browser.

I don't recall if d3 uses SVG or canvas off the top of my head, I haven't done much more than toy with it. If it uses SVG, then yes, this could speed up d3 greatly.

It uses SVG, so it will speed it up a lot. Unfortunately for things like a force layout in a graph, the main bottleneck will be JavaScript calculating the positions.

WebRender doesn't accelerate SVG. It's certainly a potential future area to work on, though.

You're right, it doesn't at the moment. But the whole point behind WebRender is vector graphics are much better suited to GPU rendering - there's no reason SVG couldn't be targeted eventually.

EDIT: Realizing you are Patrick himself, of course you know this...

Sorry I haven't gotten an opportunity to watch the video yet, so maybe the video addresses this, but if I may please ask... why would you want to render a webpage at several hundred FPS? Unless FPS is a multiple of your monitor's refresh rate, (i.e. vsync) it won't look good. You could use the extra time to do other background tasks instead of burning more CPU than you need to just to say you can do 1500 fps...

> why would you want to render a webpage at several hundred FPS?

You wouldn't (and the demo shows 60 fps); the point is that rendering absolutely, definitely, most certainly isn't a bottleneck anymore, which gives more time for

> other background tasks

or other foreground tasks like the page itself (and the ability to increase its complexity since you don't need to budget for rendering anymore), or just letting the CPU go to sleep (could help whatever browser ends up with servo tech finally compete with Safari when it comes to energy consumption).

It also gets you in a good place for VR (2 displays and 90Hz minimum IIRC from a Carmack note?) or for ever-larger displays (though that ties into the complexity thing, displays also get finer and denser on mobiles)

> Unless FPS is a multiple of your monitor's refresh rate, (i.e. vsync) it won't look good.

Well, you could do that with several hundred FPS anyway: 300 FPS is a multiple of 60Hz, as is 1540 FPS for 140Hz.

Thank you for the clarification - makes sense.

The video shows off several web pages with complex CSS animations that run at a reasonable FPS in Servo, but run at less than 1 FPS on current browsers. These pages were created specifically for this purpose, but that's the general thrust of it: it's not so much that you _want_ your pages rendering at hundreds of FPS, but that you can get more complex things done in a reasonable FPS. It also proves that Rust code can be _fast_, which is always something that is important for systems languages.

I know much less about this aspect, but I also understand, vaguely, that there are battery savings here as well? When you can get things done quickly, you can idle the CPU again, saving power. I believe that's the thrust of it.

If you can render at several hundreds of FPS, then you certainly can render at 60FPS or whatever you need. Rendering is not always silky smooth on websites using lots of CSS; this lets you render more complicated things with very little effort at 60FPS. Of course, you won't actually be rendering at 100FPS.

(It also moves computation off the CPU, so the CPU can be used for better things)

In practice, you want more than 60 FPS for painting because restyling is frequently slow. (Also, display list construction is sometimes slow, but there's active work to dramatically improve that.)

The FPS is obtained by seeing how much GPU time was spent on a frame. They're not actually going to refresh the image on screen that fast. Higher FPS means that the GPU is not being taxed and more complex rendering is possible, or you have more time to do other tasks.
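The relationship between measured GPU frame time and the quoted FPS numbers is simple arithmetic; a small sketch (all figures here are made up for illustration, not measurements from WebRender):

```rust
// If a frame takes `gpu_ms` milliseconds of GPU work, the engine *could*
// produce 1000 / gpu_ms frames per second, even though the screen only
// refreshes at, say, 60 Hz. The ratio against the refresh rate is the
// headroom left over for more complex pages or other tasks.
fn effective_fps(gpu_ms: f64) -> f64 {
    1000.0 / gpu_ms
}

fn headroom(gpu_ms: f64, refresh_hz: f64) -> f64 {
    effective_fps(gpu_ms) / refresh_hz
}

fn main() {
    let gpu_ms = 2.5; // hypothetical GPU time per frame
    assert_eq!(effective_fps(gpu_ms), 400.0);
    println!(
        "{} FPS, {:.1}x headroom at 60 Hz",
        effective_fps(gpu_ms),
        headroom(gpu_ms, 60.0)
    );
}
```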

Can this be used to build a graphics toolkit for UIs? It sounds like it would be a great fit for something similar to QML.

The concepts? Sure. It already has been, to some extent at least.

For instance, the JavaFX UI toolkit is heavily multi-threaded and uses a dialect of CSS 2, with support for blurs, box shadows, etc. The app thread renders the scene graph to some sort of intermediate form (I've never looked at what exactly, but I think it's similar to WebRender's form, i.e. a series of styled, absolutely positioned boxes and text regions), then a render thread translates that into Direct3D or OpenGL calls. The render thread may itself use subthreads that handle rasterisation.
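That app-thread/render-thread split can be sketched with a Rust channel (the `DisplayItem` type and the two-item frame are invented for illustration; this is not JavaFX's or WebRender's actual representation):

```rust
use std::sync::mpsc;
use std::thread;

// A toy "display item": a styled, absolutely positioned box, roughly the
// intermediate form the comment describes.
#[derive(Debug)]
struct DisplayItem {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
    color: [u8; 4],
}

// App-thread side: build this frame's display list.
fn build_display_list() -> Vec<DisplayItem> {
    vec![
        DisplayItem { x: 0.0, y: 0.0, w: 800.0, h: 600.0, color: [255, 255, 255, 255] },
        DisplayItem { x: 10.0, y: 10.0, w: 100.0, h: 40.0, color: [0, 0, 255, 255] },
    ]
}

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<DisplayItem>>();

    // App thread hands off the retained display list...
    let app = thread::spawn(move || {
        tx.send(build_display_list()).unwrap();
    });

    // ...and the render thread (here, the main thread) consumes it and
    // would translate it into Direct3D/OpenGL calls.
    let frame = rx.recv().unwrap();
    app.join().unwrap();
    println!("rendered {} items", frame.len());
}
```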

You can read about it here:


How energy friendly is it, though? Will it drain a laptop quickly?

Servo's multi-threaded architecture is actually more energy friendly than existing engines for the same workloads: distributing the same amount of work over more cores means each core does less, so it can run at lower clock speeds and thus draw less current.
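The intuition follows from the usual dynamic-power approximation P ≈ C·V²·f, where voltage can drop along with frequency. The voltages and frequencies below are illustrative, not measurements of any real chip:

```rust
// Dynamic CMOS power: P = C * V^2 * f. Because V scales down roughly with
// f, two half-speed cores can finish the same work while drawing less
// power than one full-speed core.
fn dynamic_power(capacitance: f64, volts: f64, freq_ghz: f64) -> f64 {
    capacitance * volts * volts * freq_ghz
}

fn main() {
    let c = 1.0; // arbitrary units
    let one_fast = dynamic_power(c, 1.2, 3.0);       // one core: 3 GHz at 1.2 V
    let two_slow = 2.0 * dynamic_power(c, 0.9, 1.5); // two cores: 1.5 GHz at 0.9 V
    assert!(two_slow < one_fast);
    println!("1 fast core: {:.2}, 2 slow cores: {:.2}", one_fast, two_slow);
}
```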

wow, this is stellar!

Wow! Something from Mozilla that is not an awkward GUI change, a stillborn mobile OS or otherwise useless.

Pretty epic stuff, great work. How would this address raytracing? Granted, you're not going to do raytracing in CSS, but as web pages evolve into web apps, which evolve into web games, raytracing is going to be common in the future. He said it should be able to handle WebGL as well, or at least not be slower (although I'm assuming that is a separate compositing layer that runs as normal?). So would there be no gains? From my understanding, raytracing doesn't map onto this kind of triangle-rasterization approach.

There's already a lot of knowledge and technology around using the GPU for high performance 3D rendering. Doing it in the browser (with WebGL) isn't that different. The breakthrough of WebRender is (mostly) to efficiently use the GPU for 2D path rendering.

It's actually not so much 2D path rendering as CSS. NVIDIA made the breakthrough in 2D path rendering with NV_path_rendering (which is a great paper, and I cite it in the talk). The way I see it, WebRender takes NVIDIA's work a step further by observing that you don't actually need 2D path rendering to render most CSS: you just need a good blitter, a good box clipper, and some specialized shaders.
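A "good box clipper" at its core is just axis-aligned rectangle intersection; a minimal sketch of the idea (not WebRender's actual code):

```rust
// CSS boxes clipped against their containers never need general path
// clipping: intersecting two axis-aligned rectangles is enough.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Rect {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

fn clip(a: Rect, b: Rect) -> Option<Rect> {
    let x0 = a.x.max(b.x);
    let y0 = a.y.max(b.y);
    let x1 = (a.x + a.w).min(b.x + b.w);
    let y1 = (a.y + a.h).min(b.y + b.h);
    if x1 > x0 && y1 > y0 {
        Some(Rect { x: x0, y: y0, w: x1 - x0, h: y1 - y0 })
    } else {
        None // box is entirely clipped out
    }
}

fn main() {
    let container = Rect { x: 0.0, y: 0.0, w: 100.0, h: 100.0 };
    let child = Rect { x: 50.0, y: 50.0, w: 100.0, h: 100.0 };
    assert_eq!(
        clip(child, container),
        Some(Rect { x: 50.0, y: 50.0, w: 50.0, h: 50.0 })
    );
    assert!(clip(Rect { x: 200.0, y: 0.0, w: 10.0, h: 10.0 }, container).is_none());
    println!("ok");
}
```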

Speaking of your specialized shaders: how do you do the rounded rectangle corners? There was a quick mention about it in the talk but no details. Some alpha blending tricks or coverage sample masking (gl_SampleMask)?

A link to the shader source would be great!

Dead simple: we use a shader to draw corners to an atlas and then alpha mask when we go to paint the page. Doing this allows us to avoid state changes when many border corners are involved.
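A CPU-side sketch of the kind of coverage such a corner shader might write into the atlas (hard-edged here for simplicity; a real shader would antialias the edge, and this is an illustration, not WebRender's shader):

```rust
// For a top-left rounded corner tile of size `radius` x `radius`, alpha is
// 1 inside the quarter circle centered at (radius, radius) and 0 outside.
// Painting then samples this atlas tile as an alpha mask.
fn corner_alpha(x: f32, y: f32, radius: f32) -> f32 {
    let dx = x - radius;
    let dy = y - radius;
    if dx * dx + dy * dy <= radius * radius {
        1.0
    } else {
        0.0
    }
}

fn main() {
    let r = 8.0;
    assert_eq!(corner_alpha(r, r, r), 1.0);     // center of curvature: opaque
    assert_eq!(corner_alpha(0.0, 0.0, r), 0.0); // outer corner tip: masked out
    println!("ok");
}
```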
