
Remote Code Execution in Firefox beyond Memory Corruptions - tannhaeuser
https://frederik-braun.com/firefox-ui-xss-leading-to-rce.html
======
carlsborg
> Both [these bugs] were fixed in Firefox 56, which was released in the fall
> of 2017.

Might be worth clarifying this a little higher up on the page.

~~~
OrgNet
I wish more websites would include a publishing date right at the top (like
some newspapers). I suspect they don't care whether the information is outdated
as long as it brings in visitors. Too many websites leave the date off.

------
tannhaeuser
> _It was the year 1997, and people thought XML was a great idea. In fact, it
> was so much better than its warty and unparseable predecessor HTML. While
> XHTML was the clear winner and successor for great web applications, ..._

While this is just the opener for discussing XXE-style and injection attacks
in XUL, I'm not sure it isn't satire. HTML neither was nor is unparseable, nor
have XHTML and XML replaced it on the web. XML was just a proper subset of
SGML, doing away with element-specific markup declarations/parsing rules such
as those required for `img` and other elements in HTML, and also with tag
inference/omission (likewise required for parsing HTML).
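As a quick illustration of what tag inference/omission means in practice,
here's a sketch using the standard DOMParser API (the markup is just an
example, not from the article):

    // The same markup parsed as HTML (tag inference) vs. as XML (strict).
    const src = '<table><tr><td>cell';  // nothing closed, no tbody

    const asHtml = new DOMParser().parseFromString(src, 'text/html');
    console.log(asHtml.body.innerHTML);
    // The HTML parser infers the omitted tags, roughly:
    // <table><tbody><tr><td>cell</td></tr></tbody></table>

    const asXml = new DOMParser().parseFromString(src, 'text/xml');
    console.log(asXml.getElementsByTagName('parsererror').length > 0);
    // true -- XML has no inference rules, so this is simply not well-formed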

~~~
beagle3
It might look like satire, but that's how it looked in 1997 (and still did as
late as 2005 or so, maybe even later).

HTML in the wild was a mess and required a lot of error recovery and
empirical quirks. Browser vendors were all on board with the idea of rebooting
it as well-defined XHTML, which has XML structure and HTML semantics. There was
quite a bit of work towards that goal.

But the actual content creators did not care, and vendors relented. HTML in
the wild is not quite as bad as it was and is much more easily parseable as
well.

~~~
tannhaeuser
> _HTML in the wild was a mess, and required a lot of error recovery and
> empirical quirks_

> _HTML in the wild is not quite as bad as it was and is much more easily
> parseable as well_

I beg to differ with the "HTML in the wild" concept and the supposed need for
heuristics. HTML, including HTML5, was and is easily parseable using SGML, the
markup meta-language in which HTML versions up to HTML 4 were specified. My
DTD for HTML 5 (W3C's current HTML 5.2) can parse up to 97.31% of the test
suite normatively referenced by the HTML spec (the rest falling into a grey
area of constructs meant to keep test automation happy, i.e. designed never to
fail outright but to still produce _something_ in a predictable way) [1].

[1]: http://sgmljs.net/docs/sgml-html-tutorial.html (reported on the slides
reached via the "TALK" link)

~~~
quietbritishjim
"the HTML spec" is definitely not the same as "HTML in the wild", so your
comment has relatively little to do with what the parent comment is saying.
Real web pages contain all sorts of sins (or at least did, back in the 90's)
such as unclosed tags and missing elements that are "required", and browsers
were expected to parse those pages because other browsers did. As a simple
example, <b><i>foo</b></i> is not valid HTML 4 (or even SGML) because the tags
are not strictly nested, but it would be suicide to make a browser that
rejects pages containing that.
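You can see the recovery behaviour directly; a sketch using the standard
DOMParser API (run in any browser console):

    // How an HTML parser recovers from the mis-nested tags in the example above.
    const doc = new DOMParser().parseFromString('<b><i>foo</b></i>', 'text/html');
    console.log(doc.body.innerHTML);
    // Current engines typically re-nest this as: <b><i>foo</i></b>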

~~~
The_rationalist
"it would be suicide to make a browser that rejects pages containing that"
Nonsense, browser should have never supported it. We can't decide after the
fact yes indeed.

------
z3t4
Don't use .innerHTML! Not just for security, but also for convenience. Use
document.createElement and node.appendChild; this is much more powerful, as
you can abstract views and components into pure functions.
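For example, a minimal sketch of that pure-function style (the names
`listItem` and `todoList` are just illustrative):

    // Each function takes data and returns a DOM node; no innerHTML involved.
    function listItem(text) {
      const li = document.createElement('li');
      // A text node, so any markup in `text` is rendered literally, never parsed.
      li.appendChild(document.createTextNode(text));
      return li;
    }

    function todoList(items) {
      const ul = document.createElement('ul');
      for (const item of items) {
        ul.appendChild(listItem(item));
      }
      return ul;
    }

    // Usage: compose and attach the result once.
    document.body.appendChild(todoList(['buy milk', '<img src=x onerror=alert(1)>']));
    // The second item shows up as literal text, which is the security benefit.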

~~~
paulrouget
innerHTML might be faster (that was true a few years ago, at least).

~~~
acdha
That hasn't been true for many years — see e.g.
[https://segdeha.com/experiments/innerhtml/](https://segdeha.com/experiments/innerhtml/)
for the Firefox 3 / Safari 4 era — and when it was, it was only in the case
where you were blasting out huge amounts of changes at a time. For anything
less than a complete rewrite of a large DOM hierarchy it was almost always
faster just to update the elements in question using the DOM.
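Roughly the difference in question, as a sketch (the `#report` id and function
names are illustrative, not from any particular codebase):

    const table = document.querySelector('#report');  // assumed existing <table id="report">

    // Targeted DOM update: only the affected text node changes.
    function updateCell(row, col, value) {
      table.rows[row].cells[col].textContent = value;
    }

    // innerHTML rebuild: the browser re-parses the markup and replaces every
    // existing node in the table, even when only one cell actually changed.
    function rebuildTable(rowsOfData) {
      table.innerHTML = rowsOfData
        .map(cells => '<tr>' + cells.map(c => '<td>' + c + '</td>').join('') + '</tr>')
        .join('');
    }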

As an example, there was a lot of cargo-cult performance discussion around
React in its first few years. A coworker was really excited about it, but the
first time we benchmarked code that updated a report table, React was 5 orders
of magnitude slower because it used innerHTML instead of the DOM, and even
with keyed updates it was doing tons of extra work.

------
mar77i
What was the piece in Firefox that was being retired when many add-ons no
longer worked? Wasn't that XUL?

~~~
bugmen0t
That was just the first step. Firefox first had to disallow XUL/legacy
extensions before it could remove the APIs those extensions used to depend on.

