Remote Code Execution in Firefox beyond Memory Corruptions (frederik-braun.com)
106 points by tannhaeuser 16 days ago | 23 comments



> Both [these bugs] were fixed in Firefox 56, which was released in the fall of 2017.

Might be worth clarifying this a little higher up on the page.


I wish more websites would include a publishing date right at the top (like some newspapers). Too many of them don't seem to care whether the information is outdated, as long as it brings in visitors.


> It was the year 1997, and people thought XML was a great idea. In fact, it was so much better than its warty and unparseable predecessor HTML. While XHTML was the clear winner and successor for great web applications, ...

While this is just the opener for discussing XXE-style and injection attacks in XUL, I'm not sure it isn't satire. HTML neither was nor is unparseable, nor have XHTML and XML replaced it on the web. XML was just a proper subset of SGML, doing away with element-specific markup declarations/parsing rules such as those required for `img` and other elements in HTML, and also with tag inference/omission (likewise required for parsing HTML).


It might look like satire, but that's how things looked in 1997 (and as late as 2005 or so, maybe even later).

HTML in the wild was a mess, and required a lot of error recovery and empirical quirks. Browser vendors were all on board with the idea of rebooting it as well-defined XHTML, with XML structure and HTML semantics. There was quite a bit of work towards that goal.

But the actual content creators did not care, and vendors relented. HTML in the wild is not quite as bad as it was and is much more easily parseable as well.


> HTML in the wild was a mess, and required a lot of error recovery and empirical quirks

> HTML in the wild is not quite as bad as it was and is much more easily parseable as well

I beg to differ with the "HTML in the wild" concept and the need for heuristics. HTML, including HTML5, was and is easily parseable using SGML, the markup meta-language on which HTML versions up to HTML 4 were specified. My DTD for HTML 5 (W3C's current HTML 5.2) can parse up to 97.31% of the test suite normatively referenced by the HTML spec (the rest falls into a grey area of constructs meant to keep test automation happy, since HTML parsing is designed never to fail but to always produce something in a predictable way) [1].

[1]: http://sgmljs.net/docs/sgml-html-tutorial.html (reported on the slides reached via the "TALK" link)


"the HTML spec" is definitely not the same as "HTML in the wild", so your comment has relatively little to do with what the parent comment is saying. Real web pages contain all sorts of sins (or at least did, back in the 90's) such as unclosed tags and missing elements that are "required", and browsers were expected to parse those pages because other browsers did. As a simple example, <b><i>foo</b></i> is not valid HTML 4 (or even SGML) because the tags are not strictly nested, but it would be suicide to make a browser that rejects pages containing that.


"it would be suicide to make a browser that rejects pages containing that" Nonsense, browser should have never supported it. We can't decide after the fact yes indeed.


The idea that HTML is hard to parse isn't exactly controversial; the HTML spec has a long description of the exact rules you're required to implement to parse HTML, and it's a bunch of special cases and error handling. Certainly I wouldn't call it "heuristics" any more, since everything is well specified, but equally certainly the rules are derived from heuristics that browsers implemented and are now required to maintain for compatibility.

I'm having difficulty unpacking your claim about a DTD "parsing" HTML5, but I find it hard to believe that a primarily DTD-based parser would get all the rules right around, say, foster-parenting, or handling of -- in script elements, or pseudo-entities, or any of the other weird quirks that are specific to HTML and not shared with SGML languages. Perhaps you can link the code and details about the claimed test suite pass rate?


I'm enjoying finally meeting someone who challenges my claims ;) I haven't published my test scripts yet (they're not very exciting and just extract test data and add a DOCTYPE anyway), but for the time being you can follow the instructions in the tutorial I linked to see whether the cases you're concerned about are covered (the tutorial also explains how to deal with other common HTML parsing issues).

Nb: "<!--" in script element content isn't a problem for parsing as such and was just used to stop browsers rendering JavaScript code as text during the introduction of JavaScript back in 1995 or so (see also [1] on the consequences of treating script content as CDATA or #PCDATA). Predefined HTML entities are also fully supported by the WebSGML revision of SGML. I don't know about "foster-parenting", but SGML has very specific rules for tag inference to accommodate elements not allowed in a given context, so I'm assuming you're referring to that (also described in detail in the linked paper below).

[1]: http://sgmljs.net/docs/html5.html#script-data


Don't use .innerHTML! Not just for security, but also for convenience. Use document.createElement and node.appendChild; this is much more powerful, as you can abstract views and components into pure functions.
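A minimal sketch of that style, with purely illustrative names:

    // A "view" as a pure function: data in, detached DOM node out.
    function userCard(user) {
      const root = document.createElement('div');
      const name = document.createElement('strong');
      name.textContent = user.name; // plain text, never parsed as markup
      root.appendChild(name);
      return root;
    }

    // The caller decides where the node ends up in the document.
    document.body.appendChild(userCard({ name: 'Ada' }));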


.innerHTML is great for wiping parts of the DOM (or the whole thing). But yeah, it should definitely not be used with anything other than an empty string.

Example:

    document.body.innerHTML = '';


Don't. Use

    .textContent = "";


Why?


Because that way you don't have any .innerHTML in your code, which makes it easier to analyze and requires fewer eslint exceptions, etc.

And it has the same end result.

Also, technically, this should be faster, since that empty string is not put through the HTML parser (though browsers might already optimize for this special case).


Sounds like premature optimization to me. An empty string will take constant time. I'm much more concerned with communicating intent, and .textContent is just confusing.


Not an optimization, like I said, but a way to avoid having innerHTML in your code. innerHTML isn't that much better at communicating intent, either. That's what comments are for.


innerHTML might be faster (that was true a few years ago, at least).


That hasn't been true for many years — see e.g. https://segdeha.com/experiments/innerhtml/ for the Firefox 3 / Safari 4 era — and when it was, it was only in the case where you were blasting out huge amounts of changes at a time. For anything less than a complete rewrite of a large DOM hierarchy it was almost always faster just to update the elements in question using the DOM.
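As a rough sketch of the difference (the element ids, markup, and the renderWholeTable helper here are hypothetical):

    // Rebuild: serialize everything to a string and re-parse it,
    // discarding and recreating every node in the table.
    const table = document.getElementById('report');
    table.innerHTML = renderWholeTable(data); // hypothetical helper

    // Targeted update: touch only the cell that actually changed.
    const cell = document.querySelector('#report tr[data-id="42"] td.total');
    cell.textContent = String(data[42].total);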

As an example, there was a lot of cargo cult performance discussion around React in the first few years. A coworker was really excited about it but the first time we did a benchmark for code which updated a report table, React was 5 orders of magnitude slower because it used innerHTML instead of the DOM and even with keyed updates that was doing tons of extra work.


That depends. This conversation is not so relevant now, because JavaScript is fast and the DOM is close to native-code speed. On really old hardware, or if you rewind the clock 12 years, things were much slower and the performance differences were readily noticeable.

When the environment is slow, the ultimate bottleneck for the DOM is the document object. It is the only entry and exit point to the webpage, and with JavaScript being single threaded you can only execute one instruction at a time. To make things confusing, DOM operations performed outside JavaScript execute much faster than JavaScript instructions, but DOM operations issued from JavaScript are slower than plain JavaScript. This is because every time you exit the language (and this applies to any language) to perform some external operation, there is a small overhead that takes time. Any DOM operation, including .innerHTML, is such an external operation. None of that has any bearing on the visual page render, which has its own performance perils.

Because of that, it was/is faster to use the standard DOM methods, createElement or appendChild, when you only have to execute those instructions a few times for a given piece of logic. This is because .innerHTML does several things: it removes existing DOM nodes (if any), parses a string, and inserts new DOM nodes. That normally applies to inserting a paragraph, changing an attribute, or some other singular piece of content. When the thing to change or insert comprises many DOM nodes, such as a table, it was dramatically faster to use .innerHTML, because then most of the work occurs outside of JavaScript. Keep in mind there is overhead for each and every instruction that leaves the language, and .innerHTML leaves the language only once. Also keep in mind the page DOM is bottlenecked on the document object, so instructions crossing into the DOM could only do so one at a time, in order. That per-exit overhead adds up and eventually outweighs the slower .innerHTML path, especially since the .innerHTML work happens as native code in a faster language.
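Roughly, these are the two patterns being weighed (a sketch; the `rows` array and the container element are assumed):

    // Many language exits: one createElement/appendChild call per node.
    const table = document.createElement('table');
    for (const row of rows) {
      const tr = document.createElement('tr');
      for (const value of row) {
        const td = document.createElement('td');
        td.textContent = value;
        tr.appendChild(td);
      }
      table.appendChild(tr);
    }
    document.getElementById('container').appendChild(table);

    // One language exit: build a string in JavaScript, then hand it to the
    // native parser in a single .innerHTML assignment. (Only safe if the
    // values are trusted or escaped.)
    document.getElementById('container').innerHTML =
      '<table>' +
      rows.map(row =>
        '<tr>' + row.map(v => '<td>' + v + '</td>').join('') + '</tr>'
      ).join('') +
      '</table>';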

There were compromise solutions, such as document fragments. Almost nobody did this, though. A document fragment allows constructing a DOM subtree without touching the document object, so there is no bottleneck until that subtree is inserted into the page. This was the fastest way to construct large, complex structures, but it imposed more work on the developer, it wasn't widely supported (even though it was standard), and the API was less clear/familiar.
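And the document-fragment variant of the same idea (again a sketch, reusing the assumed `rows` data and a hypothetical #report table):

    // Build the rows off-document; nothing touches the live page yet.
    const fragment = document.createDocumentFragment();
    for (const row of rows) {
      const tr = document.createElement('tr');
      for (const value of row) {
        const td = document.createElement('td');
        td.textContent = value;
        tr.appendChild(td);
      }
      fragment.appendChild(tr);
    }
    // A single insertion moves every row into the page at once.
    document.querySelector('#report tbody').appendChild(fragment);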


Wouldn't worry about that. No matter which method you use, it's the actual DOM part of showing the result that is slow. I have a thing that builds a >4K-row HTML table from localStorage data one row at a time; it takes 200-300ms to build (timed with performance.now()) regardless of whether I cloneNode/appendChild or build a string and shove it into innerHTML, but then about 2 seconds for the browser to actually update the DOM and show the user the actual table.

It might be/might have been faster, but it's also a massive footgun.
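One way to separate the two costs in a measurement like the one above (a sketch; buildTable is hypothetical, and the double requestAnimationFrame is a common trick to get a timestamp from roughly after the next paint):

    const t0 = performance.now();
    buildTable(rows); // hypothetical: cloneNode/appendChild or an innerHTML string
    console.log('build took', performance.now() - t0, 'ms');

    // Layout and paint happen after the script yields; waiting two frames
    // gives a rough timestamp for when the table is actually visible.
    requestAnimationFrame(() => {
      requestAnimationFrame(() => {
        console.log('visible after roughly', performance.now() - t0, 'ms');
      });
    });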


What was the piece of Firefox that was being retired when many add-ons stopped working? Wasn't that XUL?


That was just the first step. Firefox first had to disallow XUL/legacy extensions before the APIs those extensions depended on could be removed.



