I would say that since the late '90s, not a single new engine has been built from scratch. Everything we have now is a development based on KHTML, Gecko, Presto, or Trident.
By now, HTML and the standards surrounding it have become so big that starting from a blank slate is next to impossible. The existing engines have it much easier, as they were allowed to grow with the spec, but a new engine has to do everything from scratch.
This makes it even more impressive to see how far along Servo already is (yes, I know, there's probably some overlap with existing Mozilla code, but still).
Huge congrats to everyone involved. We need unafraid visionaries like you people to actually move the web forward.
One important mitigating factor is that newer standards often generalize older ones. For example, originally all the list elements had special-case handling (and still might in other engines), but now it's possible to do all of that in the user agent style sheet, thanks to new additions to CSS that generalized these things.
It saves an enormous amount of work to implement the web at a particular point in time as opposed to starting at the beginning and keeping up with the changes until that same point in time. Not only that, but all the failed branches that were already explored mean that a new engine avoids quite a bit of technical debt.
It's still quite hard. Here's a nice example of a layout bug on HN we thought would be simple to fix for the release, but turned ugly fast: https://github.com/servo/servo/issues/11821
It's a two-sided coin: HTML now defines a lot of the basic web platform parts in enough detail that you can just implement the spec and have a web browser that works, whereas in the '90s (and most of the '00s) browsers spent huge amounts of resources on reverse-engineering each other. Pretty much the only way to be web compatible then was to start with something already mostly web compatible (consider that Apple's first two or three years of work on WebKit were spent pretty much just fixing web compat bugs!).
Don't forget the amazing work being done at http://testthewebforward.org/ with the Web Platform Tests and your own work on the CSSWG tests! Beyond the specs, having a cross-browser set of tests that have especially good coverage on newer specs has been a huge leg-up for Servo!
It not only helps us ensure interop for those features that are tested, but (puts on former dev manager hat) it really helps quantify how many work items are left before a feature is complete. I especially notice it in areas like CSS 2.1 where the tests are less comprehensive and it's harder to quantify what our progress/status is.
These tests are super useful. They don't just test for the spec, they also test for known behavior, which makes it possible to find spec bugs.
Plus they come with a very good framework into which we can easily add our own tests, instead of writing our own HTTP testing server and whatnot.
I wonder if it's worthwhile to spend some time working on the 2.1 test suite to try and improve coverage at some point. Probably? It's unlikely to find many bugs in mature implementations, though, which is why so much effort has been spent elsewhere.
For too long it was considered OK to have a spec with either no tests, or tests that focused on the wrong thing ("is it possible to implement this spec" vs "do implementations behave in compatible ways"). It is still considered too OK for browser vendors to write their own tests for a feature and not share them with the rest of the community. However, pressure from other platforms is changing attitudes here as it becomes increasingly apparent that without a stronger degree of interoperability, developers will vote with their feet and target platforms with fewer bugs to trip over.
1) There still seem to be regressions in pretty fundamental stuff when new features are added. They're usually caught before things reach "release" trains, but I see it in bug trackers all the time.
2) Other layout engine managers I've talked with who want to do a layout system rewrite are terrified of anything that doesn't look like a large-scale refactoring because of this issue. I've had the "how are you testing and measuring progress / how do you know you're done with layout?" conversation with most of our engine peers at this point :-)
Honestly, from what I see, the amount you do is relatively minuscule compared with what Opera was doing ten years ago, and Opera had far more market share, which is normally the best way to avoid web compat issues in the first place.
Even things as simple as line endings are implemented wrong (in HTTP too), and that can cause security consequences as proxies end up reading headers differently than the user agents. I've seen this live on the public Internet.
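To make that concrete, here's a toy sketch (not any real proxy's or browser's code; everything here is invented for illustration) of how a strict parser and a lenient one can disagree about the very same bytes when one accepts a bare LF as a header terminator and the other requires CRLF. That kind of mismatch between a proxy and a user agent is exactly what smuggling tricks exploit.

    // Strict: header lines end only on CRLF.
    fn headers_strict(raw: &str) -> Vec<&str> {
        raw.split("\r\n").filter(|l| !l.is_empty()).collect()
    }

    // Lenient: also accepts a bare LF, trimming any stray CR.
    fn headers_lenient(raw: &str) -> Vec<&str> {
        raw.split(|c| c == '\n')
            .map(|l| l.trim_end_matches('\r'))
            .filter(|l| !l.is_empty())
            .collect()
    }

    fn main() {
        let raw = "Host: example.com\nX-Injected: 1\r\n";
        // The strict parser sees one header line; the lenient one sees two.
        assert_eq!(headers_strict(raw).len(), 1);
        assert_eq!(headers_lenient(raw).len(), 2);
    }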
SIP and the IETF to some extent encourage this with Postel's Law, telling implementors they should guess what the intention of the message is. We need less of VB's On Error Resume Next and more panic-abort type functionality. Look at this insane document: https://tools.ietf.org/html/rfc4475 "SIP Torture Tests". The authors gleefully come up with ridiculous yet legal permutations of messages that are allowed under their arcane rules. The fact that this exists should send the opposite message: simplify your damn protocol. And this is only at the parsing level!
I know people say that HTML could never have had strictness because it'd have been too hard, but I don't buy that. One common issue was getting nesting wrong, like <b><i></b></i>, or leaving tags unclosed. By removing the (silly, honestly) name from the ending tag and just using </>, a whole class of errors is removed. Add in a browser that just fails to render and explains exactly why, and people would quickly stop publishing broken pages.
I don't think it's as clear-cut as that. I think the main thing is error handling must be defined: it doesn't matter whether it's panic-abort or whether it's defined how to deal with any stream of bytes. Don't let implementers choose what they should do.
RFC 7230, for example, sometimes makes error handling a SHOULD return 400 Bad Request and sometimes a MUST return 400 Bad Request. But that only applies to servers and proxies parsing requests, which are really the relatively well-implemented part of HTTP. (The vast majority of requests come from browsers, which are always syntactically valid; responses can come from arbitrary CGI scripts, and you get all kinds of syntactic nonsense there.) Sadly, there are comparatively few normative requirements when it comes to parsing responses, even ones which are needed for backwards compatibility (at what point do you conclude you're just talking to an HTTP/0.9 server?).
So yeah, "implementation deviations" is usually the issue, but not really English ambiguity, at least with web specs. I put my pedant hat on when looking at proposed changes to web specs, and I believe the rest of the community does too :)
That said, while I certainly have a fondness for formal specifications, I also always end up feeling like they have a lot of downsides. They're often very verbose and hard to skim, and they're also another language that anyone implementing the specification has to learn prior to implementing it (or they can just wing it, but then your formal specification is worth nothing!).
The wisdom from the old days was to use relatively simple formal languages that were easy to train people on and use. B, Z, and TLA had good results and lasted over time. The other thing was to write the English spec, the formal spec, and the code side by side, with each team looking for inconsistencies. The developers usually picked up the basics of the technique rather quickly, but at least one specialist was always necessary to make the first use go smoothly.
So, that's still my recommendation. A subset of those, especially focusing on states and contracts, applied in the same fashion against English specs and clear code, should take less work than going all Isabelle, NuPRL, or PVS on a problem. The second application is also always easier, and reuse helps where possible, as with code.
So, there are some difficulties and training involved, but it's probably easier than some of the frameworks, etc., that IT people learn today.
From when I've considered trying to formalise parts of the platform before, one of the things that gets hit very quickly is the fact that most of the web platform is defined in terms of sequences of Unicode code points, and ultimately any formalisation is going to have to deal with that (be it some complex parser—like HTML's—or some simple validation that input matches a regular grammar). It has always seemed to me that to get much advantage you need to be able to programmatically assert statements about it (including, most obviously, that pre-conditions are satisfied at all call-sites), but once you've gone from an alphabet of 256 bytes to 1,114,112 code points you end up with such massive state explosion that even many symbolic tools start to struggle quickly.
Now, maybe I've tried using the wrong tools, or maybe I'm structuring things wrong, but it'd be nice to have some way to do this.
https://gngr.info/ comes to mind
The main site is down due to domain issues. Googling will also give you a blog and a GitHub repo. Their Cobra engine was Java, IIRC. All that code might make a nice head start even if this project is dead or goes that way.
Compared to KHTML, WebKit and Blink were pretty much five or more whole new projects on top of it. And even the latest IE is hugely rewritten compared to Trident.
> EdgeHTML is a proprietary layout engine developed for Edge. It is a fork of Trident that has removed all legacy code of older versions of Internet Explorer and rewritten the majority of its source code with web standards and interoperability with other modern browsers in mind.
I think coldtea is focusing on the highlighted part. A rewrite like that might change so many behavioral aspects, especially user- or developer-visible ones, that it can be considered a new engine rather than a mere clean-up of a behaviorally identical old one. It certainly reuses some code and techniques, but the differences might be major.
Rewriting function by function will give you largely the same program using largely the same algorithms, but possibly with more or fewer bugs, or cleaned-up code.
Rewriting component by component will allow new ways those components are implemented, but the interface between components will likely be the same.
Rewriting from scratch could yield anything (servo could be classified as a way to rewrite gecko from scratch, I believe).
All of these could be said to have had the majority of their source code rewritten, but I would only really consider the last one to be "new" without knowing further details. Of course, the devil is in the details...
For a simplistic example, say someone provides a "rewrite" of grep. Maybe the options all look and sound different, so it may appear to be significantly different. But if the core of the matching algorithm and its capabilities are largely unchanged, is it that much of a rewrite? To the outside user it may superficially appear so, but someone comparing the source from before and after may have an entirely different opinion, and even that may change depending on which aspect you focus on.
As applied to Trident/Edge, we may have a case where the person speaking was involved in a project to rewrite one major component of the browser, and so is speaking in that context. Maybe that component is responsible for about 60% of the code and functionality, but it still relies on quite a bit of largely unchanged additional libraries. It's very subjective whether you think that qualifies as a rewrite of the project, and depends quite a bit on what was not rewritten, and what you think of that code.
Let's look at the grep example again. If it did the same thing but with a new interface, it would still be the same thing, because the behavior is otherwise the same. Now, let's say you changed what the pattern matching itself produced, so that you no longer got the same results from the same text: it matched totally different patterns unless you changed the text you were inputting to get the same results again. And the interface was different. Is that still grep?
If he's talking about one component, maybe there's nothing new underneath. He said the engine (rendering) would be more standards compliant and compatible with other browsers. That's basically what Opera and Mozilla did many years ago while IE didn't. If you blocked out the name, nobody using the browser would think they were using the same app once they saw rendering differences in those areas totally distorting the presentation, or once they had to redo their websites to display correctly. I remember reading many complaints from web developers about making stuff work with several different engines whose rendering disagreed.
So, if the Edge engine is different enough to cause that, then it seems like a different engine at the behavioral level, given that the same input leads to different output that fails acceptance. Nobody would think it was the same app unless the UI told them.
I mean, I still think sameness or newness is an open issue. Do we judge it by function and component, as you added? Or by interface and behavioral spec, as I was looking at it? One might also look at data formats. I think it will be best to just compare in various technical ways to be accurate. The feature-driven development field probably has some insight into this. However, users will look at them as different, at least from one version to the next, if they can't do what they used them for or compatibility breaks. The two Python versions are possibly a good example.
That sort of depends on how you define "behavior". More specifically, if it matches something equivalent to but different from regular expressions (even if it's just a character transliteration to regular expression syntax), does the behavior change? Is that still grep? Maybe, if grep decides that "the new grep2 has an easier-to-learn matching language". It's not like we haven't seen that before.
> I mean, I still think sameness or newness is an open issue.
Sure! In the vast majority of cases it probably is an open issue, because different people have different criteria. I was just pointing out that the wording from MS doesn't necessarily imply a rewrite as many people might define it, because we have very little context to go on in the statement, and different people have different ideas about what the browser is. The rendering engine? The UI? The JS VM? All of them combined?
I can totally see someone responsible for the rendering engine saying "we rewrote most of the source code" and the JS VM guys scratching their heads wondering what the hell they're talking about, because they definitely didn't rewrite most of their code, and they think their component is a significant portion of the browser...
Your point totally applies if we're talking about the How instead of the What.
It goes without saying, but as this is the very first nightly (not by any means a full-fledged release), expect severe bugs, crashes, and missing functionality. Many of your favorite sites will be broken. Don't expect to use this as your everyday browser. We'd love feedback on what issues folks hit the most, so we can prioritize—especially if you're a Web developer!
As I understand it, this is shipping with WebRender enabled, right? How far out is WebRender 2 from making it into the nightly?
A very general question: what's the high-level plan for Servo regarding Firefox? I see a lot of requests/issues dealing with stuff that doesn't seem related to a layout engine. Is the plan to have browser.html as a temporary incubator for Servo and eventually migrate it to replace FF's layout engine, or is it to "grow" an entire new browser around this new engine?
There are efforts to make various components work in Firefox. The largest effort is "Stylo", which replaces Firefox's selector matching with ours. I'm not sure what the future is for Firefox using WebRender or our layout, though I suspect it depends on how well Stylo works.
The plan seems to be to let Servo continue to evolve as a testbed for new ideas (like WebRender), and to share components with Firefox whenever they're ready. An independent Servo product is not likely in the near future, since there's a lot of work left to make it fully web compatible. In the far future... well, we can't tell :)
Admittedly, android-specific efforts have slowed down in the past few months, but I believe it is still a goal :)
I mean, Servo-as-a-product isn't even planned yet, and may never be, so by the time that question is relevant the iOS rules may have changed, too :)
As for Firefox, later on this year we should see the integration of Servo's style engine ("Stylo") into Gecko, and going forward we should expect to see the two codebases share even more components. However, wholesale replacement of Gecko with Servo in Firefox isn't on the table at the moment.
Well, it depends on some privileged APIs, of course. But we've tried to keep that API surface minimal.
In addition, the browser needs to act as a sort of broker between sites and the OS. Storing cookies and session data, showing notifications or dialogs, and entering fullscreen mode are just a few examples of this. (Note that most of these things aren't even implemented yet. Browsers are complex applications in their own right, even after accounting for the browser engine.)
We are deliberately trying to keep this to the bare minimum, and will likely try to find some way to standardize the few things we do, if possible.
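To give a sense of the kind of surface that means, here's a purely hypothetical sketch; this is not Servo's actual embedding API, and all the names are invented:

    // Hypothetical "embedder as broker" interface between the engine and the
    // host application / OS. Invented for illustration only.
    trait EmbedderServices {
        // Persist a cookie on behalf of a site.
        fn store_cookie(&mut self, origin: &str, cookie: &str);
        // Ask the host application to show a notification.
        fn show_notification(&self, title: &str, body: &str);
        // Ask the host/OS for fullscreen; returns whether it was granted.
        fn request_fullscreen(&mut self) -> bool;
    }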
Servo's style engine is parallelized and scales nearly linearly with the number of cores, and style time is a non-trivial portion of the total rendering time.
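Very roughly, the idea is that computing the style for one element is largely independent of computing it for another, so the work maps onto a data-parallel iterator. This is a heavily simplified sketch, not Servo's actual code; the types and the rayon dependency here are my own stand-ins:

    use rayon::prelude::*; // assumes the rayon crate as a dependency

    struct Element;       // stand-in for a DOM element
    struct ComputedStyle; // stand-in for one element's resolved style

    // Selector matching + cascade for a single element (omitted here).
    fn compute_style(_el: &Element) -> ComputedStyle {
        ComputedStyle
    }

    // A parallel map over the elements scales with the number of cores.
    fn style_pass(elements: &[Element]) -> Vec<ComputedStyle> {
        elements.par_iter().map(compute_style).collect()
    }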
Can't wait to try it out. When will the Windows build be available? Days? Weeks?
Servo works fine, and we have an installer, but when run from the installer there are some weird things we weren't able to track down in time.
The Windows port didn't ship yesterday because of technical glitches post-install. Servo works fine, but after installation it had some strange behavior we couldn't track down in time. It should come in a few days.
Fixed that for you :P
PS: It is considered bad etiquette to ignore the issues of a Windows user and simply "recommend Linux" in a tech community like this, especially when a Windows build has already been promised. You don't know the reasons why someone is using Windows (e.g., their employer's policy).
Download Linux. Download Servo. Run Linux. Run Servo. Profit.
I've less and less love for MS (though Visual Studio...), but even with having to install crappy hacks to fix Win8, it's _still_ a smoother end-user experience than Linux desktop.
With Win10's Linux layer, assuming there's an accelerated X server for Windows, I can't think I'll keep even a VM around much. (If I can turn off Win10's spyware.)
Jokes aside, I stopped fighting desktop environments and embraced Ubuntu's Unity some years ago; in the end I learned to appreciate it, and nowadays I feel more comfortable with Ubuntu than with OS X.
I still prefer Windows over Ubuntu, but because I have added several utilities to my workflow like Everything Search Engine or WinSplit Revolution (now defunct), not because of Windows' own merits.
I just wish there were a way to have OS X's hotkeys in Windows, since CMD + <anything> is more comfortable to press than Ctrl + <anything>. I also miss the behavior of CMD + Tab / CMD + ` and the ubiquity of CMD + ,
This is not the place to discuss operating systems, and people expect you to respect their preference.
Linux has its technical merits, and because of them it is the most used OS in servers, supercomputers and niches like web development, but it also has its weaknesses, and because of them it isn't the most used OS in homes or corporations.
OTOH, there already exists a Linux distribution that is more popular than Windows; it's called Android, and you've probably heard of it. It is a disaster in comparison with Windows: it gets locked down, bloated, and abandoned by phone manufacturers; phones get declared obsolete ridiculously fast and people are left without even security updates; you don't even have the ability to "format" a phone unless someone finds a vulnerability to gain root access.
People blinded by Linux usually overlook the disaster behind Android because they're too accustomed to focusing their attention on the issues of Windows, but no OS is perfect, and people will choose the OS that works better for them; you must learn to accept and respect their decision.
Either way, I'm already impressed by how zippy it is :D
There's a more serious error in the URL parsing that caused this, due to some weird stuff Google is doing with punycode. I haven't looked into that yet. But YouTube no longer crashes -- so you can now enjoy window-shopping on YouTube!
Note that YouTube won't work on Servo right now, since video/flash doesn't work :)
I really don't mean this in a snarky way: wasn't this kind of thing supposed to be obviated by the switch to Rust? It often seems that every other post about Rust on HN says that if it compiles, it works.
Your compiler doesn't know what you're trying to do, so it can't prevent all bugs.
That leaves crashes, and... well, it may also depend on what you mean by a crash. The UI becomes unresponsive? That might have absolutely nothing to do with memory issues or even code written in Rust. Maybe your Rust code is fine, but you're trying to open and write to the same file from two places, which you don't realise because the user's file system is case insensitive and you've only tested on a case sensitive one, etc.
It's also multi-threaded and while languages can significantly help improve the safety of multi-threaded code, I don't think anything can stop you from creating a deadlock or putting your system into an inconsistent state.
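For example, this is a contrived but perfectly safe Rust program (nothing to do with Servo) that can deadlock, because the two threads take the same pair of locks in opposite orders:

    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::Duration;

    fn main() {
        let a = Arc::new(Mutex::new(0));
        let b = Arc::new(Mutex::new(0));

        let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
        let t1 = thread::spawn(move || {
            let _ga = a1.lock().unwrap();
            thread::sleep(Duration::from_millis(50));
            let _gb = b1.lock().unwrap(); // waits for B while holding A
        });
        let t2 = thread::spawn(move || {
            let _gb = b.lock().unwrap();
            thread::sleep(Duration::from_millis(50));
            let _ga = a.lock().unwrap(); // waits for A while holding B -- deadlock
        });

        t1.join().unwrap();
        t2.join().unwrap();
    }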
And finally, they may not even particularly expect those things, but putting out early nightly builds to a wider audience and expecting nothing to go wrong would be cavalier. A typical warning on many early releases is "don't blame me if this destroys all your files".
That sounds too precise to be just an example :) We've all been there!
Checking filenames and their cases is now on my mental list of "but it works on my machine" causes.
This is just a hypothetical example; I bet image loading is one of the things Servo does fine. It's just to give you an idea of when Rust might panic.
Earlier in this thread, someone did indeed find a bug in our image loading: https://news.ycombinator.com/item?id=12014108
We were basically panicking when an invalid URL was given to us. There was a comment there noting this -- which means that this was probably written when it wasn't so important to handle all the cases, and more important to handle some cases so that we can test out various ideas. We're still sort of in that stage, and you may see other comments like this throughout the code :)
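For illustration, the difference is roughly this. It's a sketch using the rust-url crate (which Servo builds on), not the actual Servo code in question, and the load function is made up:

    use url::Url; // assumes the rust-url crate as a dependency

    fn load(input: &str) {
        // Prototype style: fine for testing out ideas, panics on an invalid URL.
        // let url = Url::parse(input).unwrap();

        // Handling all the cases: deal with the Err instead of panicking.
        match Url::parse(input) {
            Ok(url) => println!("loading {}", url),
            Err(e) => eprintln!("not a valid URL ({}): {}", input, e),
        }
    }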
I'm always careful not to say that "if it compiles, it works"--no language can guarantee that (well, except those that prove your program correct). Instead I like to say "if it compiles, it will fail for a not-stupid reason". :)
When talking about the usage of unsafe code in Rust, we generally ignore encapsulated unsafe code in libstd, since it has the same safety guarantees as the compiler itself. All unsafe code encapsulated in a safe API has these guarantees, but you trust libstd more :)
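A tiny hand-rolled example of what "unsafe encapsulated in a safe API" means (nothing Servo-specific, just an illustration):

    // Callers can never trigger undefined behavior through this function:
    // the bounds check guards the unchecked access inside.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: we just checked that index 0 exists.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }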
Please file bugs for layout being off -- we know that this sometimes happens, but it's good to have test cases. As for embeds, that's not a priority right now, but we should eventually get to them.
P.S. The app icon is the best part by far!
I noticed that the Servo nightly build is already about as big as Chrome or Firefox. As it's not yet a complete browser, I expected it to have a lighter executable. Do you know what led to this? Is it a matter of build artifacts or optimization? Is Servo going to grow even larger while achieving feature parity with other engines?
It looks like Servo could be a great alternative to Chromium Content -- one of the things that keeps me from (ab)using Electron/NW/CEF is the massive initial overhead, and a WebRender-based UI with a lower price of admission would change the whole scenario.
Probably not apples to apples. I don't believe optimizing for executable size is very high on their priority list.
Happy to see a nightly already, I must read the source at some point :)
Speaking of which, do you have any systematic benchmark testing set-up for comparing various performance aspects? If so, what are the numbers?
It's not really ready for public consumption, just a dashboard for us to tell how we are doing.
Looking forward to seeing this evolve.
So there's some extra copying in the IPC, and no speculative parsing, which can slow things down, among other factors. Can be fixed, but priorities :)
The thread that handles all the resources (http://, chrome://, file://, etc):
And here's the file that handles HTTP requests:
I have some more involved projects planned for the network stack that aren't yet ready to be picked up though. I suggest working on some bugs above, getting familiar, and then asking in IRC for something in the network stack.
First thing I noticed almost immediately after launching is that everything breaks when dragging the window across monitors (that have different densities) - OS X. I dragged from my Macbook's display to an external monitor with a lower DPI and everything went huge and most of the UI got cut off. Dragging back to the Macbook display does not resolve the issue and instead introduces some weird redraw flickering with garbage data.
EDIT: Reported - https://github.com/servo/servo/issues/12009
Edit: Additionally, it doesn't yet support native keybindings like Command left-right or Alt left-right.
As for autocomplete, a few people have reported issues. In our testing we did not run into this. Right now we have an issue filed and a reproducible test case to tackle it:
Feel free to report more browser.html issues here:
We'll do our best to get them resolved promptly.
Anyway, what it is doing right now is seeing "http:/" and thinking <Ok, I can just add /sitethatobviouslyhasnorelevanceyet.com and everything will be fine!>. But while it was thinking, I already started typing the second "/" and didn't notice so I got in "lo" before I saw the result. At that point what's in the field is "http:///sitethatobviouslyhasnorelevanceyet.comlo". The issues here are serious enough but it's not entirely surprising you can't reproduce them well.
Are embedded devices one of the goals for Servo? Does Servo have any embedded-specific design goals? I'm interested to know which parts of Servo target embedded use. I can see that the page says Android builds are coming soon; I guess that explains it.
Yes, this is it.
Or is it really about embedding a browser inside your application? You can do that today with the Qt framework, which lets you embed a browser inside your application. But Qt itself moved from Qt WebKit to Qt WebEngine, so "embed a browser in your application" sounds old to me.
"Embed a browser in another application" is perfectly clear. Servo is designed so that it can be linked with other applications, and displayed within windows that the host application controls.
WebKit is also embeddable in this way, but other browser engines are not as mature in this area. There was talk of Servo using the same API that WebKit uses for embedding, probably to help developers transition from WebKit to Servo:
"The proposed plan is to use WebKit's API as a template or copy it completely."
However we do have ports for mobile hardware and some embedded hardware, and we think Servo has a lot to offer there. We've done some initial research into battery life stuff that looked really promising, and of course getting parallel speedups is much more noticeable on slower hardware than on desktops.
It's great to finally see a browser UI built from the ground up on top of standard web technologies and with first-class support for vertical tabs to boot!
Some really minor bugs that occurred while using it a bit (OS X 10.11):
- Scrolling when already at the end of the page sometimes makes a relatively big instant step (noticed this, for instance, on the Hacker News front page)
- The tabs aren't cropped in tab view: http://imgur.com/Q1Bxibo
- Tabs don't remember how far you have scrolled when you switch tabs
- After opening the address bar and clicking on the website again there's a short flicker.
See also https://github.com/servo/servo/labels/A-security
Is there a reason the OSX binary needs to be signed?
We don't do anything special.
Edit: HTTPS up: https://servo-builds.s3.amazonaws.com/index.html
(This comment written in Servo :P )
In the latest Sierra beta I've heard you can't disable the check without an involved workaround.
It's just OSX users being lazy ;)
Can it be signed by any old cert, or does it need to be part of the trust chain?
Here's the relevant tracking issue for the same problem in Rust.
Outside of that, Gatekeeper applies to executables fetched from the Web, can be disabled (harder on Sierra), and is easy enough to bypass--and you only have to do it once per executable. If signed, the certificate does need to be trusted to count--otherwise, it'd just be a fancy checksum.
Windows 10 has a similar feature.
It's unfortunate that a LetsEncrypt style project can't be done for code signing, due to malware/admin overhead.
Gatekeeper is nice in some important ways but seeing as it isn't free, I'd also be quite happy with just a SHA1 or similar signature to verify a binary with.
Retrofitting multithreaded, off-main-thread restyle/layout/painting onto an existing engine is an enormous undertaking, almost on the level of rewriting from scratch. This is much of the reason why Servo's rendering is from scratch, in fact.
The gist is that Servo runs most subsystems on separate threads (including sandboxed and cross origin iframes for example), and so each tab or page is not fighting over the same JS engine's time. On top of that, painting/compositing is combined in WebRender and is parallel as well.
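As a toy illustration of the "separate threads per page" idea (invented names, nothing like Servo's real architecture): each page gets its own script thread with a message channel, so one busy page can't starve the others.

    use std::sync::mpsc;
    use std::thread;

    enum ScriptMsg {
        RunTask(String),
        Exit,
    }

    // Spawn a dedicated script thread for one page/tab and hand back its channel.
    fn spawn_page_thread(name: &str) -> mpsc::Sender<ScriptMsg> {
        let (tx, rx) = mpsc::channel();
        let name = name.to_string();
        thread::spawn(move || {
            for msg in rx {
                match msg {
                    ScriptMsg::RunTask(task) => println!("[{}] running {}", name, task),
                    ScriptMsg::Exit => break,
                }
            }
        });
        tx
    }

    fn main() {
        let tab1 = spawn_page_thread("tab-1");
        let tab2 = spawn_page_thread("tab-2");
        tab1.send(ScriptMsg::RunTask("timer callback".into())).unwrap();
        tab2.send(ScriptMsg::RunTask("event handler".into())).unwrap();
        tab1.send(ScriptMsg::Exit).unwrap();
        tab2.send(ScriptMsg::Exit).unwrap();
    }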
ERROR:js::rust: Error at https://assets-cdn.github.com/assets/frameworks-6fe6a7604ec4...: history is not defined
Note that I don't think Firefox has overscroll at all, except maybe on mobile.
Half jokingly, all you need now is an emscripten target, so it can be run inside a browser, and anyone can try it out without needing to install an app :)
The thing I love about the programming community is that even though that's clearly meant as a joke, someone'll make it happen, just to see if they can do it :)
I am on Ubuntu 14.04, by the way.
There are discussions underway about how to bring in other tools, and how to get stuff like this into Servo or browser.html itself.
We do detect Retina on Mac and have system DPI awareness on Windows, so once we figure out what to do on Linux we can probably add it pretty easily.
thread 'main' panicked at 'Failed to create window.: OsError("GL context creation failed")', ../src/libcore/result.rs:785
note: Run with `RUST_BACKTRACE=1` for a backtrace.
welcome to servo, then it changes to a white window
Two things though:
1. When I opened the tab sidebar, clicking on a tab closed it.
2. I cannot reproduce this, but Servo hung when I closed the window: it stayed open, but did not rerender on resize. Using Gentoo with Xorg, running dwm as the window manager.
Or will this require some major rewrite?
Running ./servo gives me this error.
"./servo: error while loading shared libraries: libEGL.so.1: cannot open shared object file: No such file or directory"
I guess it should be an easy fix, but instructions to copy/paste would really help. If someone has an answer, it would be great if you could post it here.
Edit 1: I think the problem is that I am trying to run it on Ubuntu 14.04 Server. Also, I am using Xming and PuTTY X11 forwarding. I installed libEGL using 'sudo apt-get install libegl1-mesa-dev'. Now I am getting a new error.
Xlib: extension "XFree86-VidModeExtension" missing on display "localhost:12.0".
thread 'main' panicked at 'Failed to create window.: NoAvailablePixelFormat', ../src/libcore/result.rs:785
note: Run with `RUST_BACKTRACE=1` for a backtrace.
So I ran with RUST_BACKTRACE=1 and this is the stack trace.
Xlib: extension "XFree86-VidModeExtension" missing on display "localhost:12.0".
thread 'main' panicked at 'Failed to create window.: NoAvailablePixelFormat', ../src/libcore/result.rs:785
1: 0x55868358b56f - std::sys::backtrace::tracing::imp::write::h6528da8103c51ab9
2: 0x55868359319b - std::panicking::default_hook::_$u7b$$u7b$closure$u7d$$u7d$::hbe741a5cc3c49508
3: 0x558683592dff - std::panicking::default_hook::he0146e6a74621cb4
4: 0x558683471857 - util::panicking::initiate_panic_hook::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::hc61828c8a9f6df01
5: 0x558683578dcc - std::panicking::rust_panic_with_hook::h983af77c1a2e581b
6: 0x5586835933e1 - std::panicking::begin_panic::he426e15a3766089a
7: 0x55868357a63a - std::panicking::begin_panic_fmt::hdddb415186c241e7
8: 0x55868359337e - rust_begin_unwind
9: 0x5586835c997f - core::panicking::panic_fmt::hf4e16cb7f0d41a25
10: 0x558681766ff0 - core::result::unwrap_failed::hd132ba2b7d75f379
11: 0x558681765df4 - glutin_app::window::Window::new::h750be0f1f60f008a
12: 0x558681764df0 - glutin_app::create_window::hae774cdda1762e66
13: 0x55868170df45 - servo::main::h6924e5658370f6b5
14: 0x558683592778 - std::panicking::try::call::h852b0d5f2eec25e4
15: 0x55868359d5db - __rust_try
16: 0x55868359d57e - __rust_maybe_catch_panic
17: 0x55868359221e - std::rt::lang_start::hfe4efe1fc39e4a30
18: 0x7fcc0d82eec4 - __libc_start_main
19: 0x55868170cf09 - <unknown>
20: 0x0 - <unknown>
That is on Fedora 23, and what's really strange to me, is that I compiled Servo about a week ago myself and that build works.
This is a dynamic library dependency, so you need it even if not building.
sudo apt-get install libegl1-mesa on Ubuntu, probably
We have some issues with choosing fonts IIRC.
Faced 2 issues, including one crash, and have submitted tickets. Somebody already responded to the GitHub issues -- that was quick.
I believe Servo does work on a Pi, but probably