Mozilla's Servo Engine Now Capable of Rendering GitHub (phoronix.com)
251 points by kungfudoi on Aug 19, 2015 | 82 comments



Also see Ars Technica rendered in Servo: https://twitter.com/pcwalton/status/631961638304804864

Lately I've been working on knocking down layout bugs that affect the most popular sites. Please feel free to try it out and file GitHub issues, especially if you can minimize test cases! You're likely to see various degrees of brokenness on most sites, but the core CSS 2.1/CSS3 layout is pretty solid at this point; there's just a long tail of bugs and corner cases we've got to nail down. That's why finding and isolating the bugs that break the major Web sites is so important!


Interesting -- as a web developer I really, really want Servo to become a big success. I wonder at what point Mozilla thinks I should start giving my sites a once-over in Servo before putting them on the web?

Why not -- I'll start doing this, since it can't do any harm, and I can create some test cases should I find some weirdness.

It's really easy to build a copy of Servo, after all:

    https://github.com/servo/servo
The instructions there generally just work.
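
For reference, the steps in the README boil down to something like this (a rough sketch from memory -- defer to the README itself, and note that --dev produces an unoptimized build):

    git clone https://github.com/servo/servo
    cd servo
    ./mach build --dev
    ./mach run tests/html/about-mozilla.html

mach is the same Python build wrapper Firefox uses; it takes care of fetching the pinned Rust compiler snapshot for you.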


Don't contort your Web sites to work around brokenness in Servo; just file bugs. Rendering bugs are our bugs, not yours :)


I wonder if there was an idea of a "strict mode" at any point? In making sure every site works, you'll surely have to render invalid code "correctly", I assume.

For example, I'd be willing to develop in your browser if it meant I couldn't break the rules without the browser caring. Of course HTML validators exist, and browsers display invalid CSS in their inspectors, but I'm asking for something that would expect valid code: throw an error in my face if I don't close an HTML tag or use an invalid CSS value, at any point.

Likewise, if some JavaScript resulted in my page getting an invalid layout, the browser could yell at me about it. Maybe there'd even be a small performance improvement if my code conformed to every layout rule.


XHTML worked like that; HTML5 has instead moved towards specifying the quirks that are used to deal with poor conformance. That said, if you look at a webdev console, things that cause poor performance (à la document.write) do tend to be noted.


Yes! A nice idea, that... As has been previously mentioned, XHTML failed because there was already a version of the same thing with tonnes of automatic error handling built in. Which one will be more popular: a language that always seems to complain and display an error, or one that corrects your mistakes for you 99% of the time?


And I get build failures on OS X. :(

I just wanted to play with it anyway.


Please file bugs!


I have to say, as a C++ developer, I'm really impressed with Rust's progress in the last year. Creating a browser rendering engine is a complicated task -- it's a very strong proof that the language is practical. Rust also seems to have a friendly, engaging community, and leaders (e.g. pcwalton) that are fair-minded and that invite and inform rather than scare away. The docs are nicely presented and informative. The language feels fresh and modern. The build and library discovery story is 10x better than what C++ offers.

In short, there's a lot to make you want to move over. Still, a) I have a ~90K LOC code base in C++/Obj-C that I'm not going to abandon any time soon, and b) after having gotten burned (in Haskell) by the "more purity or strictness (in the conventional, not evaluation, sense) is always better" argument, I'm waiting to be convinced that a more picky compiler actually helps rather than gets in the way. As mentioned above, projects like this certainly make one more confident that it does help.


I've gone all-in for Rust for any side projects. I still get paid to write C++, but I choose to use Rust. I'd say that I'm up to around 20K LOC at this point and the sheer peace of mind that I get is worth almost any amount of extra arguing with the compiler when it just works the first time I run it -- which happens a surprising proportion of the time.

Improvements which would make me really happy:

* Faster builds. This is still a pain point.

* "Non-lexical borrows" [1]

* Trapped under ICE [2]

I know that all of the above are being actively worked on, so I'm waiting patiently. [3]

[1] Borrows are currently tied to lexical scopes (blocks, etc.). This makes certain safe borrowing patterns illegal, requiring slightly-to-highly awkward workarounds; a sketch of the problem follows below.

[2] The compiler crashes too often (ICE). I was recently stuck with a non-compiling code base and facing the prospect of systematically commenting out code to find the offending piece. Fortunately, this one was fixed quickly. There are many others remaining, though. (It is getting better as Rust gets more use.)

[3] This is a lie. Patience is rarely one of my virtues.
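
To make [1] concrete, here's a toy example of mine (not from any official docs) showing the classic pattern that trips the lexical borrow checker:

    use std::collections::HashMap;

    fn append(map: &mut HashMap<String, Vec<i32>>, key: &str, value: i32) {
        // What you'd like to write -- rejected under lexical borrows,
        // because the borrow from get_mut() is held for the whole match,
        // so the None arm isn't allowed to mutate `map`:
        //
        //     match map.get_mut(key) {
        //         Some(values) => values.push(value),
        //         None => { map.insert(key.to_string(), vec![value]); }
        //     }
        //
        // The awkward workaround: an extra lookup.
        if map.contains_key(key) {
            map.get_mut(key).unwrap().push(value);
        } else {
            map.insert(key.to_string(), vec![value]);
        }
    }

(For this particular case the entry API also works; the point is that the obvious version is perfectly safe yet rejected, which non-lexical borrows would fix.)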


> Faster builds.

This is a pain point for us, too, on Servo. There's fantastic active work going on for incremental builds, which will solve our most pressing needs (long turn-around on debug changes).

Also, if you're on Linux, make sure you're using gold instead of the system ld. That change made a huge difference for Servo.


The ICE issue is what turned me off Rust.

I went all-in for side projects like you (this is how I typically learn a language), but after two months or so, I was spending 30-60 minutes a day trying to figure out the cause of compiler crashes, and then tweak my code so it didn't trigger them.

I've heard the ICEs have gotten much better in 1.2 (I was working in 1.0), but I haven't picked it up again.


To you or anyone else who reads this, if you find an ICE, please report it or :+1: a previous report. It can be hard to tell which ones hit a lot of people, and which ones don't.


Would it be possible to add an optional Crashpad-like script, one that allows you to review the data submitted, in hex/ASCII, before sending it off?


We've been discussing it, not just for crashes but even for compiler errors. But it's all in the 'wild dreaming' phase; there are privacy concerns, and you'd want such a thing to be VERY opt-in. But it's a cool idea.


1.2 is so much better with ICEs than 1.0.


[1] should get fixed post-MIR (the new "middle IR" that's in progress right now)

[2] --- let's hope this improves :)


Abandoning ~90k LOC certainly seems like a bad idea, but since Rust can basically pretend to be C, you could just peel off a certain part of your app, and have a hybrid codebase. Your C++ would just link to the Rust like any other C library. This is the approach Firefox is taking.
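
A minimal sketch of what that looks like (made-up function, not from any real codebase): Rust exposes a C ABI with #[no_mangle], and the C++ side declares it extern "C" and links the resulting library like any other:

    // lib.rs -- built with crate-type ["staticlib"] so C/C++ can link it
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }

And on the C++ side:

    extern "C" int32_t add(int32_t a, int32_t b); // link against the .a

No runtime, no garbage collector -- which is why the boundary is this thin.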


Firefox's C parts are getting incrementally rewritten in Rust?


Slowly. There are two patches right now: using Rust for URL parsing, and an MP4 parser. They're not built by default, but you can set an option at compile time to use them. There's some more work to be done before they'll start appearing in regular builds.

See here for more:

  * https://developer.mozilla.org/en-US/Firefox/Building_Firefox_with_Rust_code
  * https://bugzilla.mozilla.org/show_bug.cgi?id=1175322
  * https://bugzilla.mozilla.org/show_bug.cgi?id=1151899
(and Firefox is mostly C++, not C)


Addendum: The URL parser is a replacement (it's the same parser we use in Servo), but the MP4 thingy adds new functionality.


Yeah, this is very appealing -- ideally migrating new features/submodules rather than rewriting.


"after having gotten burned (in Haskell) by the "more purity or strictness (in the conventional, not evaluation, sense) is always better" argument ... I'm waiting to be convinced that having a more picky compiler actually helps rather than getting in the way"

@seertaak, can you quickly explain how the strictness became a problem?


Could you describe in more detail how you got burned? (I have never done anything big in Haskell.)


It was a while ago, and I tried using it at work -- for simple tasks, tasks that would have taken tens of minutes in either Java or Python. With Haskell they would invariably take much, much longer. The result would look really beautiful at the end, but the truth is that the extra time it took couldn't be justified.

Specific problems arose when extending a program to do "just one more thing", as is commonly the case for more "scripty" tasks. This often forced severe refactoring: what had been a simple function or an IO/ST action suddenly had to involve another monad, and it would all become a bit of a mess. In general I find combining monads in Haskell pretty unpleasant, especially when you consider that this kind of thing is so trivial in other languages that you don't even think about it. To put it shortly, I would spend too many brain cycles wrestling with monads, strictness, and Haskelly concepts, and too few on my actual problem.

But hey, that was some time ago and YMMV.


> It was a while ago, and I tried using it at work -- for simple tasks, tasks that would have taken tens of minutes in either Java or Python. With Haskell they would invariably take much, much longer. The result would look really beautiful at the end, but the truth is that the extra time it took couldn't be justified.

The productivity curve for Haskell is exactly the opposite of something like Python's. You spend a lot of time at the beginning writing all your fancy types, monads, etc. Then later, when you have thousands of lines of code, you can make non-trivial changes with the ease with which you could write a tiny one-off Python script.


I also think it's important to tell people that there is a learning curve to that, too, even after the learning curve of Haskell itself. It takes some time to figure out how to write the idiomatic Haskell code that factors correctly to make those changes.

It's a fantastic exercise to get to that point in Haskell, and the skills carry over in surprising ways to conventional languages too, so I unequivocally recommend it to anyone with at least, oh, say, 4 years of experience in "conventional" programming, to get you to the next level of programming skill. But make no mistake, it is quite the learning curve. I think this is a positive thing, though... the reason why it is such a learning curve is precisely that you are learning, in exactly the way you really aren't learning anything when you pick up your fifth object-oriented imperative language.


Do you have any references on that? Books, papers, or even blog posts?


Start with Learn You a Haskell, IMHO. Best starting explanation of that sort of stuff.

Then you actually have to use it for a while.


If you're interested in checking out the progress, the build process[0] is really easy.

Here's the current page: http://i.imgur.com/NmRNaRz.png

If you want to poke around more, servo shell[1] lets you switch starting pages without having to re-run from the command line.

[0] https://github.com/servo/servo

[1] https://github.com/glennw/servo-shell


Not only is it pretty easy to build, it's also pretty easy to contribute to!

We have a bunch of easy issues (https://github.com/servo/servo/labels/E-easy) and are willing to mentor people on them.


This is really a great idea. Thanks for mentioning this!


Huh, the image seems to have a different colour than HN has in my browser (Firefox). I wonder what causes that; usually GPU colour corrections don't show up in screenshots. Compression?


Users can set their own color on HN. For example my account has the header in my favorite shade of green.


To clarify, you need to have a certain number of points, which I understand is sometimes adjusted to reflect the growth of HN.


You need about 250 points; I couldn't change the color 2 days ago when I had 227 and now that I'm at 267 I can.


omg, I'm so close, 237...help...please...anyone? =p


How? I can't see it in the options.


In your profile settings, it's the "topcolor" setting.


You also need a certain number of points.



Really curious to hear more explanation of this comment from Patrick: "Printing is pretty incompatible with parallelism though." Is he saying that Servo as a parallel engine won't be able to handle printing?


Patrick refers to parallel layout here: in a text with many paragraphs for example, the contents of each paragraph can be laid out in parallel with other paragraphs. Then you only need to sum up the heights of earlier paragraphs to find the vertical position of each.

When printing though, you don’t know where to start page 10 until you’ve done the layout of earlier pages. There may still be some parallelism opportunity, but less than on screen.
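
A toy illustration of that two-phase structure (just the shape of the idea, nothing like Servo's actual code): compute each paragraph's height in parallel, then do a cheap sequential pass to assign vertical positions.

    use std::thread;

    // Hypothetical stand-in for real layout work: derive a height from
    // the character count (~40 chars per line, 18px per line).
    fn paragraph_height(text: &'static str) -> f32 {
        (text.len() as f32 / 40.0).ceil() * 18.0
    }

    fn main() {
        let paragraphs = ["First paragraph.",
                          "A much longer second paragraph that wraps onto several lines.",
                          "Fin."];
        // Phase 1: heights are independent, so compute them in parallel.
        let handles: Vec<_> = paragraphs
            .iter()
            .map(|&p| thread::spawn(move || paragraph_height(p)))
            .collect();
        let heights: Vec<f32> =
            handles.into_iter().map(|h| h.join().unwrap()).collect();
        // Phase 2: a sequential prefix sum gives each paragraph its y offset.
        let mut y = 0.0;
        for (i, h) in heights.iter().enumerate() {
            println!("paragraph {} starts at y = {}", i, y);
            y += h;
        }
    }

Printing breaks the independence in phase 1: whether content lands on page 9 or page 10 depends on everything laid out before it.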


I'm assuming what they would do is render the website using multithreading, then generate a PDF or something and print that.

But basically, printing APIs are not multithreaded across all OSes.


Most likely due to the way printing APIs work across OSes.


It's likely possible to work around that limitation. For example, Servo could render to a binary image and then send that to the printer.


That's not the problem. We could easily run our parallel screen layout and then just use Skia's (or the local system's) PDF backend. The trouble is that this doesn't implement the printing spec. As Simon mentioned above, the problem is the various CSS rules that control pagination.


That's not the limitation — the limitation is the effects of pagination on layout, especially given control of where page breaks occur and restrictions on widows and orphans.
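
Concretely, these are the kinds of CSS 2.1 paged-media rules in play (the example values here are mine):

    @media print {
      h2 { page-break-before: always; } /* each section starts a new page */
      p  { orphans: 2; widows: 2; }     /* min lines at a page's bottom/top */
      ul { page-break-inside: avoid; }  /* keep the whole list on one page */
    }

Honoring these means an element's position can depend on break decisions made pages earlier, which is exactly what serializes the layout.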


Using a raster image is a recipe for slowness, especially at common printer resolutions of 300dpi+. It also breaks "Save as PDF" from the print dialog. Far better to communicate with the print driver via GDI, PostScript, PCL, PDF, etc., using vector graphics and embedded fonts.


What kind of parallelism is expected here? Is the issue that you can't render layout to PostScript or PDF in a well-parallelized way?


Is multithreaded rendering really the next big browser breakthrough? If so, are Google and Apple also working on similar projects?


I remember hearing that the Blink folks were looking into it, but that in general it is really hard to do the kind of stuff that Servo does because CeePlusPlus.


And Servo is built from the ground up. Putting (decent) threading into Blink/WebKit would be a huge undertaking.


Early reports look really promising. It's hard to tell when Servo doesn't implement the full web platform, but even with a substandard network stack, Servo is _fast_.


I remember Brendan Eich saying that you can bet on it. But he included Microsoft in that statement, and given all the work on forking Trident into EdgeHTML, I'm pretty sure they weren't.

Apple and Google could very well have secret projects going on for such a thing though.


I don't think it will feel like a breakthrough. On the desktop the network is the bottleneck, not the rendering. However, it might save energy on mobile multicore devices. That would be nice, because on my smartphone the browser really drains the battery.


> On the desktop the network is the bottleneck, not the rendering.

That's pretty oversimplified. For lots of workloads—e.g. interacting with widgets on the page, composing emails/etc, scrolling, opening pop-out menus, interacting with Google Maps—the rendering (broadly speaking) is the bottleneck.

In fact, I'm of the opinion that improving layout and graphics performance is the single biggest thing we can do to make Web apps not feel slow compared to their native counterparts. People still (often quite rightly) feel that the Web is slower than native (however "native" is defined). The difference is unlikely to be network-related, and it's unlikely to be JavaScript either for most apps—JS performance may not be at C++-level yet (though asm.js will get there in time), but it's certainly on par with Objective-C and Java/Dalvik. The problem is mostly styling, layout, and graphics in my view: how to get from the DOM to rendered pixels as quickly as possible. Those are, not coincidentally, what Servo has focused on.


I disagree. I don't usually notice rendering performance problems; I only notice JavaScript performance. But that's because I have 1,000 tabs open, and each tab is running 100 scripts, thanks to modern websites' desire to be dynamic and 'animating' all the fucking time.

The way to solve this is NOT to make JavaScript faster. The way to solve this is also NOT to force websites to run fewer scripts at the same time.

The way to solve this is to virtualize all the tabs. In other words, tabs that are not visible should be unloaded (and let the user right-click a tab and choose "Keep This Tab Loaded"). This saves on CPU and RAM usage.

In summary, rendering performance is measured in milliseconds, compared with running 10,000 scripts, which is measured in minutes.


So does that mean the experiment to use it in B2G has a more utilitarian purpose than just stretching the limits of the engine?


The rendering is a bottleneck that's less tolerated than network latency. For example in apps users tolerate a loading spinner more than they tolerate 300ms of latency when clicking on buttons or when scrolling. It's because of what users have been trained to expect. A loading spinner is considered normal, whereas a small barely-perceptible latency is interpreted as broken behavior when doing actions that in the minds of users should have an immediate response. Of course, for simple websites the rendering time is totally irrelevant. But nowadays we're running apps in our browser. And for example I've seen my browser brought to its knees by pages displaying charts.


This isn't true anymore. There are plenty of sites that are just sluggish even after they've loaded. Netflix, for instance, or any modern "content" site. It's really quite intolerable, especially as there's no new functionality in these sites. It's not like my experience on Netflix has improved in a decade (player UI aside, which still lags behind any real playback program).


> However, it might save energy on mobile multicore devices.

Multicore devices shut down cores to save energy. How would multithreaded rendering save energy?


Processor power use scales with frequency as well. See [0], specifically the 'Low Memory and Power Devices' section, which shows the power savings of running 4 cores at a lower frequency versus 1 core at max frequency. I think they eventually decided that running at max frequency on all cores and then sleeping wound up being most efficient, but that's from memory and could be completely incorrect.

[0] http://blogs.s-osg.org/servo-adapting-c-to-work-on-the-web/


Getting work done faster lets you put the CPU to sleep faster.


In theory, more parallelism can give you a) more work done in the same time, b) same work done in less time, or c) same work done in the same time using less energy due to lower frequency.
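
Back-of-envelope version of (c), under the usual simplifying assumptions that dynamic power goes as P ≈ C·V²·f and that voltage scales roughly with frequency (so P ∝ f³): one core at frequency f burns power ∝ f³, while four cores at f/4 burn 4·(f/4)³ = f³/16 in total for the same aggregate throughput -- the same work in the same time at roughly 1/16 the power. Real chips are messier (leakage, shared uncore, imperfect voltage scaling), but that's the intuition.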


I still see security as the best feature of a new Firefox written in Rust.


Eventual security, perhaps, but that's a loooong stretch goal. The fact of the matter is that even Servo, which is written from scratch and lacks all the UI goodies, is only partly Rust. It'll be a long time before the supporting libraries can be rewritten in Rust, and I reckon a rewrite of all these things is likely to introduce more new human-error security flaws than it eliminates pointer/memory-related ones. In the (very) long run, though, I agree.


I don't think that's true for layout/graphics at least. Generally, the worst that can go wrong for incorrect layout, beyond memory safety, is that a box ends up in the wrong place, or the engine harmlessly panics with a crashed tab. There can be spoofing issues with boxes being drawn in the wrong place, but those are minor compared to RCE.


What percentage of critical vulnerabilities in Firefox were due to memory safety issues? Is it changing dramatically?


> What percentage of critical vulnerabilities in Firefox were due to memory safety issues?

The vast majority of them.

(I haven't done the exact analysis, but I have performed similar ones and the fact that memory safety issues dominated was clear. Note also that I am not claiming that solving memory safety issues is a panacea. "Merely" that we're defending against the vast majority of critical security vulnerabilities.)


I think it's less of a big breakthrough and more of a way to validate that Rust is the Right Thing for certain hard problems.


Nice. I wish the blog were updated more frequently: http://blog.servo.org/


This is entirely my fault. I was the one writing This Week In Servo blog posts, and I had a rather busy internship over the summer which didn't leave me much time to write the blog posts. And now I have a rather busy semester -- I probably could carve time out to start writing again, but right now I'm too swamped to do it. I'm glad you enjoyed the posts, though!

Following http://twitter.com/ServoDev can keep you up to date for now.


Thanks for the write-ups that you did. It's so much more tempting to contribute to a project when there are active public summaries of the work being done. Sometimes, I admit, I don't even bother submitting my pull requests on projects with 50 unmerged PRs and no activity. When I see evidence of progress and organization, like those blog posts, it makes me want to contribute even if I don't necessarily have a personal itch to scratch.

I'd love to see more posts soon, if you or someone else ever gets the time!


I also enjoyed reading This Week in Rust! Thanks to your efforts I've found many great projects; it's inspiring.


[deleted]


The downvotes are because there was no content in your comment. "So much win!" can be expressed by clicking the upvote arrow instead.


Mozilla really needs to step up its game and start innovating. A technically "superior" browser won't gain market share against Google, Microsoft, and Apple; those three have powerful platforms to promote their browsers, and what does Mozilla have? Firefox's usage is declining every day at a fast pace.

I'm amazed to see how many average-Joe users have Chrome installed with no help from anyone.

Edit: I want to add that I wasn't taking a stab at Servo, which is a truly awesome project on its own, no matter what's going on with Mozilla.


This is a truly bizarre comment. Can you elaborate as to how Servo is not "innovating"?


With the "innovating" part I was talking about Mozilla in general, not Servo. Servo is innovating in the browser space, no doubt about that, but will that really be enough for Mozilla to compete with other browsers that have big platforms supporting them? I'm talking from a business angle here.


The only charitable interpretations of this comment I can figure out are either "Mozilla shouldn't bother improving its core technology because it's doomed anyway" or "Mozilla should build a 'native' OS stack to push its browser", which are gratuitous negativity and off-topic/silly, respectively.


It's important to remember that Servo is just a research project at this point. A lot of hopes are behind it for the future, but in terms of engineering resources it's not been any sort of significant drain on Mozilla's other efforts.


Mozilla has tried to enter the OS space with Firefox OS. Unfortunately, that hasn't worked out so well for them so far.

Entering a mature market with a new product often winds up being a fruitless effort. I think they'd have a better shot with new categories like smartwatches, smart mini MP3 players, and those smart flip phones/sliders. Android has had major issues entering those categories for quite some time, which leaves a good opening for competition.

Another thing they could do is build their own phone. Make it the best phone they possibly can and support it for 5-10 years, with the only revisions making it smaller and cheaper but otherwise the same (i.e. the same architecture, miniaturized). Even if it sees no uptake during those 5-10 years, Mozilla will have built up an unmatched reputation for supporting its devices. So when the successor is announced...





