
The Tangled Web: A Guide to Securing Modern Web Applications (2011) - dsr12
http://lcamtuf.coredump.cx/tangled/
======
tptacek
This is a great book and is one of the three books we'd send to all candidates
at Matasano. A thing to be aware of: it's probably the best all-around browser
security book, but it's not the best _application security_ book, since it
spends much more time on browsers and browser technology than it does on the
serverside issues (like SQL databases and authorization systems) that dominate
web appsec.

At Matasano, we gave The Web Application Hacker's Handbook to candidates to
cover that gap.

~~~
security-guy
This book and 'The Web Application Hacker's Handbook' were both published in
2011. Do you feel they're dated in any way?

~~~
Brakenshire
The trend towards client-side web apps has happened since then; presumably
that will have changed the situation to some extent.

~~~
f-
(I'm the author of the book in question.)

I think it happened quite a bit earlier (perhaps 2005 -> 2010), at least when
you look at some of the "prime" web properties. Gmail or Google Docs in 2010
were already pretty close to what we have today. Hard to believe, but
XMLHttpRequest actually dates back to 1999! JSON isn't much younger.

I don't think the models of web development have changed dramatically since
the publication of TTW. There are some other, more incremental changes that
aren't reflected in the current edition - there are two examples in my other
comment here (Service Workers, parser harmonization, etc.) - but by and large,
the content should still be relevant.

~~~
security-guy
so it's safe to say there won't be an updated edition due for release any time
soon =)

~~~
f-
I'm not working on one right now and have not talked to the publisher about
it. So not in the next couple of months. In the longer haul - maybe probably?

------
rwmj
I read this book quite recently. As others have said, it's a great book.
However it also had the unintended consequence of putting me off both client
and server web development for life. I'm fairly sure I would never be able to
write something that was secure. There's far too much to get right, too many
things that could go wrong, and I'm quite sure the knowledge you need expands
almost daily.

~~~
yeukhon
Yeah I agree we are not doing well.

A number of things:

* framework - use a popular, battle-tested framework. Avoid the shiny new framework from the HN front page for production. If you want to experiment with it and eventually replace the old framework, wait until its community is strong enough.

* frontend - mostly HTML/CSS/JS. We carry too much legacy baggage: years of hacks and workarounds in the specs and in implementations. Make smart use of CSP.

* backend - quite stable these days if you first take care of your server. However, sanitizing inputs and escaping outputs is both context-sensitive and challenging. Don't forget about sandboxing, authorization and state eviction, and key management (API keys, user keys, server keys, automation keys such as Jenkins credentials).

* package management - some vendors and SaaS offerings (GitHub, and I think GitLab too?) provide alerts and reports on software dependency updates, so we are doing slightly better in this area.

The entire stack of an application is not as simple as putting a pipe system
together, indeed.
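
The "context-sensitive" point above is worth a small sketch: the same untrusted string needs a different transform depending on whether it lands in HTML text or in a URL. The helper names below are mine, not from any framework, and a real app should lean on a vetted templating/escaping library rather than hand-rolled code:

```javascript
// Output escaping is context-sensitive: the right transform depends
// on where the untrusted string ends up. Illustrative helpers only.

// Context 1: HTML text content - escape the five significant characters.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, c => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[c]));
}

// Context 2: URL query parameter - percent-encoding, a different transform.
function escapeQueryParam(s) {
  return encodeURIComponent(s);
}

const input = '<script>alert("x")</script>';
console.log(escapeHtml(input));       // safe for an HTML text node
console.log(escapeQueryParam(input)); // safe inside a query string
// Applying one transform where the other belongs is itself a classic bug.
```
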

~~~
marcoperaza
I agree about using a battle-tested framework. But there are also risks in
using a feature-rich, complex framework that you don't have a deep
understanding of.
Those frameworks ease development by making decisions and doing things _for
you_ , but also by hiding things _from you_. Your code and its assumptions,
combined with some piece of ‘magic’ deep in the framework with its own
assumptions that you were not aware of, could very well give birth to a
security problem.

I would say to use frameworks to your advantage, but remember to look under
the hood. There’s no substitute for understanding the entire stack and knowing
what your tools are doing.

------
1690v
I finally got around to reading lcamtuf's other book, Silence on the Wire
(2005), this week.

If you have even a small amount of interest in passive recon, it is excellent:
[http://lcamtuf.coredump.cx/silence.shtml](http://lcamtuf.coredump.cx/silence.shtml)

~~~
lossolo
I second that. I read this book around 11 years ago and I remember it was
very good.

------
Arcanum-XIII
Just finished this book and it made me realize that my team was having endless
discussions about the wrong topic - mostly crypto security. Simple, basic
protections against content forgery were not on the radar, and input
sanitization is still thought to be the responsibility of whatever framework
we use... at least now we know where and what to look for.

------
geofft
I read this book close to when it came out—what do I need to know about what's
changed since then?

~~~
f-
(I'm the author of the book.)

Some things have changed for the better. For example, there's been a push to
harmonize the behavior of some of the core parsers and APIs. Say, we now have
less variability in how HTML is handled across different browsers.

On the flip side, there are also several new APIs and JS features that have
some scary security implications. Service Workers come to mind.

The near-complete demises of Java and Flash in the browser are the two other
major changes since 2011.

Either way, the book should still give you a very robust understanding of the
fundamentals (and a mental framework to evaluate the dangers of some of the
new stuff).

~~~
twiss
Service Workers can be used to _increase_ security as well, though. You can
use them to intercept and check requests and responses against any criteria
you like. You can reimplement (variations on) CSP or SRI in them [1], or
implement signed web apps [2]. There are probably other possibilities.

[1]: [https://frederik-braun.com/sw-sri-challenge.html](https://frederik-braun.com/sw-sri-challenge.html)

[2]: [http://blog.airbornos.com/post/2017/08/03/Transparent-Web-Apps-using-Service-Worker](http://blog.airbornos.com/post/2017/08/03/Transparent-Web-Apps-using-Service-Worker)
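
As a concrete sketch of the interception idea (the pinned path and digest below are placeholders I made up, not taken from either linked post; the digest is just the SHA-256 of empty input), a Service Worker can hash each pinned resource and refuse to deliver it on a mismatch:

```javascript
// Sketch: an SRI-like integrity check inside a Service Worker.
// PINNED maps resource paths to expected SHA-256 digests (hex).
const PINNED = {
  '/app.js': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
};

async function sha256Hex(buf) {
  const digest = await crypto.subtle.digest('SHA-256', buf);
  return [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, '0')).join('');
}

// Register the fetch handler only when running inside a worker context.
if (typeof self !== 'undefined' && 'addEventListener' in self) {
  self.addEventListener('fetch', event => {
    const path = new URL(event.request.url).pathname;
    if (!(path in PINNED)) return; // unpinned: default browser handling
    event.respondWith((async () => {
      const res = await fetch(event.request);
      const body = await res.clone().arrayBuffer();
      if (await sha256Hex(body) !== PINNED[path]) {
        // Tampered or unexpected content: don't hand it to the page.
        return new Response('Integrity check failed', { status: 403 });
      }
      return res;
    })());
  });
}
```

Note that, as the next comment points out, this only helps if the Service Worker itself can be trusted.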

~~~
jimktrains2
Last time I discussed this with someone, it appeared impossible for the
service worker to handle the update to itself, and as such the service worker
itself was not signed. I feel like this defeats the entire idea.

~~~
twiss
There is a way to verify the new SW file when it updates and warn the user
(but not prevent the update), but for now it's quite hacky.

- There's an "updatefound" event with which you can listen for updates to the
Service Worker

- You can request the new SW file from browser cache with:
fetch('/serviceworker.js', {cache: 'only-if-cached', credentials: 'same-origin', mode: 'same-origin'})

You would hope that that's all you need (just register for updatefound in the
SW file, and when it updates, check the new one; if verification fails, warn
the user). Unfortunately, _the new SW can kill the old SW_ with skipWaiting().
I opened a bug to try and fix that [1].

In the meantime, you could instead listen for the updatefound event _in the
web application_. However, the example fetch() above would go through the new
SW, which is no good, but...

- You can bypass the SW from the web app, but only by basically relying on a
bug in the spec: if you send a request from an <iframe srcdoc="..."> (or
data/object url) it doesn't go through the SW. That will probably be fixed,
but a less hacky mechanism will probably replace it [2].

There are some other remaining issues (some of which are discussed in [1]),
but given the above, the probability of detecting a malicious Service Worker
is very high.

[1]:
[https://github.com/w3c/ServiceWorker/issues/1208](https://github.com/w3c/ServiceWorker/issues/1208)

[2]:
[https://github.com/w3c/ServiceWorker/issues/1026](https://github.com/w3c/ServiceWorker/issues/1026)
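
Put together, the page-side version of this approach might look roughly like the sketch below. verifySignature is a hypothetical placeholder (a real check would validate a detached PGP or WebCrypto signature shipped with the SW source), and, per the caveats above, the fetch() may itself be intercepted by the new SW unless the srcdoc-iframe workaround is used:

```javascript
// Hypothetical placeholder verifier: a real implementation would check
// a detached signature over the Service Worker source.
async function verifySignature(source) {
  return source.includes('/* signed */'); // stand-in check only
}

// Watch for SW updates from the page and verify the new file.
// As discussed above, this can only warn the user, not block the update.
async function watchForServiceWorkerUpdates() {
  const reg = await navigator.serviceWorker.register('/serviceworker.js');
  reg.addEventListener('updatefound', async () => {
    // 'only-if-cached' + mode 'same-origin' reads the bytes the browser
    // already fetched for the update, without hitting the network again.
    const res = await fetch('/serviceworker.js', {
      cache: 'only-if-cached',
      credentials: 'same-origin',
      mode: 'same-origin'
    });
    const source = await res.text();
    if (!(await verifySignature(source))) {
      alert('Warning: the new Service Worker failed verification!');
    }
  });
}
```
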

------
yread
I wonder how this section would look now; has anything changed?:

 _To add insult to injury, the Internet Assigned Numbers Authority added a
fair number of top-level domains in recent years (for example, .int and .biz),
and it is contemplating a proposal to allow arbitrary generic top-level domain
registrations. If it comes to this, cookies will probably have to be
redesigned from scratch._

~~~
f-
The $100k+ price tag of top-level domains is prohibitive enough to discourage
abuse, and it allows applications to be individually reviewed so that anything
even remotely problematic or questionable is rejected. In other words, life
goes on at this point.

If they ever decide to relax the rules, it may become a bigger deal.

------
m_fayer
A thorough overview like this would be just the thing for me right now. But is
this book still applicable/comprehensive 7 years later? I imagine that a lot
of today's issues revolve around OIDC/SAML/etc., which this book is too old to
cover.

------
jbaviat
This book has a clever approach to explaining what's at stake in web app
security, in particular the browser security model. That's how I would have
structured my own book if I had written one on web app security.

------
marknadal
This is a really polarizing issue for a lot of people, so I'll be strong on my
opinion (and it looks? like? I can't see what the author's opinion is? unless
I buy the book?), which is:

It seems like the current "everybody is doing it this way" thinking is about
having strong server-side security. But I believe that fundamentally leaves
the browser open as a loophole.

Yet the common criticism against end-to-end encryption in the browser (even
with the native Web Crypto API) is that JS-land can never be treated seriously
(and that you probably shouldn't trust it).

BUT I feel like that leads to not even trying, and adding anything is better
than nothing. So I/we/others are trying (for instance, see this HackerNoon
article on "How to Build a P2P end-to-end encrypted Twitter":
[http://hackernoon.com/so-you-want-to-build-a-p2p-twitter-with-e2e-encryption-f90505b2ff8](http://hackernoon.com/so-you-want-to-build-a-p2p-twitter-with-e2e-encryption-f90505b2ff8)).
Because my philosophy is that no matter how insecure JavaScript may be, it is
a whole lot less insecure than openly trusting a 3rd party corporation
(Google, Facebook, whoever) with your data and then hoping they keep it
secure -- that just seems like ignorance.

Yet, I doubt people will come around to this for a long time (they'll be
pushed into it because of Bitcoin and other forces), but it doesn't seem
popular amongst the "this is how we've always done it" way of thinking.

Shouldn't we at least push in that direction? The ideal: hardware wallets that
trusted, open-source, verified-and-audited web browsers expose pass-through JS
APIs to. Then JS doesn't handle the secrets, they won't be in browser memory,
and even if the OS is compromised you'd need an attack on the order of Intel's
Spectre/Meltdown before the hardware wallet could be cracked.

Admittedly, I feel naive even hoping/thinking/believing that such security
could/would/will exist given how many things could go wrong. But again, I
still feel like that is better than central servers storing all our data
maybe-encrypted.

~~~
tasn
Shameless plug:

I recently wrote a browser extension that verifies (using PGP) the integrity
of the page, and thanks to subresource integrity also the integrity of
external scripts and CSS, making it possible to trust that the JavaScript
application you are running is what the developers intended you to be running.
This makes E2E encryption in the browser much safer. Just to give a counter to
your "JS-land can never be treated seriously" statement. Link:
[https://github.com/tasn/webext-signed-pages/](https://github.com/tasn/webext-signed-pages/)

It's used by EteSync to secure its web client.

~~~
twiss
Shameless plug #2: I've been using Service Workers to achieve something
similar, without using extensions:
[http://blog.airbornos.com/post/2017/08/03/Transparent-Web-Apps-using-Service-Worker](http://blog.airbornos.com/post/2017/08/03/Transparent-Web-Apps-using-Service-Worker)

~~~
marknadal
YESSSSSSSS!!!! To both of you. This is what I want and makes me super happy
there are people like you. Can you guys shoot me an email mark@gunDB.io
because this is critical!

Note: I do believe JS ought to be taken seriously, but I can't ever seriously
tell that to people because nobody does. But I hope that because of projects
like yours people will stop having arguments against it. Great job, let's chat.

------
tombrossman
This is purely an advertisement promoting the sale of a new book, correct? It
looks like an interesting and useful book written by someone well-known to HN
readers, but it is just a web page announcing a book for sale. Serious
question - Is this OK for story submissions now?

I nearly flagged it but I think I'll hold off and see if linking to ads is the
new norm here, because I haven't noticed that before.

Edit: whoops, should have read this before posting: _"Please don't complain
that a submission is inappropriate. If a story is spam or off-topic, flag it.
Don't feed egregious comments by replying; flag them instead. If you flag
something, please don't also comment that you did."_ Apologies, I'll leave my
original comment for the curious but won't ask this question again.

~~~
unethical_ban
It's an odd thing to balance: many posts on HN are blogs, which are just one
person (hopefully with some credibility) who has something to say. Why then
can't people effectively use HN as a blog, subject to some submission limit?

Likewise, if an intelligent person writes a useful book or service (Show HN),
what is inappropriate about sharing it?

~~~
jnordwick
A book "review" by the author himself is basically an advertisement.

I flagged it.

~~~
packetslave
but... it wasn't submitted by the author, and it's not even pretending to be a
review. It's a "here's info about a book I wrote, with a bunch of links to
other peoples' reviews" page on the author's personal site.

~~~
jnordwick
Still biased. Do you expect to find balanced, informative links, or simply
ones that pump the book? Somebody submitted a link to an ad for a book; it
makes no difference who submitted it.

