
New Standards for Preventing Browser Hijacking - tbiehn
There’s a long history of attackers compromising popular web destinations and exploiting those resources for monetary gain – the user population is sometimes more valuable than the data on the compromised service. Placing ‘exploit kits’ on compromised pages and hitting their users with browser exploits or downloads continues to be popular, as does ad fraud. Cryptocurrency mining is yet another vector for attackers to monetize their hacks.

The problem is this: how does a browser verify that the code it has received from a website is the same code the organization released to production?

Secure HTTPS communication – over TLS and SSL – stops some attackers, such as those carrying out man-in-the-middle attacks from malicious wireless access points or your local network. However, there’s a much larger attack surface upstream of HTTPS protections: load balancers, upstream enterprise components (which have exploded with the microservices trend), and the application servers themselves. Any of these components is a pivot point for attackers looking to exploit customers, because each can introduce code before the connection is protected over the wire with HTTPS.

What we could use is an addition to web standards, one that allows organizations to produce verifiable client code and browsers to validate that code. Long term, the dynamic aspects of JavaScript and modern development practices – manipulation of the Document Object Model at runtime via WebSockets, Web Messages, state in LocalStorage, and Ajax calls – mean these efforts need to rely on a subset of JavaScript that is safe from unintended injection attacks. Quick and dirty: think GPG signatures.

Is there any work being done in this domain?
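A quick-and-dirty sketch of the GPG-signatures idea, assuming a hypothetical release manifest that pins the digest of each shipped script (a real scheme would also sign the manifest itself and define how browsers obtain it):

```python
import hashlib

# Hypothetical release manifest: script path -> pinned SHA-256 digest,
# published by the organization at build time. Names and contents are
# invented for illustration.
RELEASE_MANIFEST = {
    "app.js": hashlib.sha256(b"console.log('hello');").hexdigest(),
}

def verify_script(path: str, served_bytes: bytes) -> bool:
    """Accept served code only if it matches the released code byte-for-byte."""
    pinned = RELEASE_MANIFEST.get(path)
    if pinned is None:
        return False  # unknown script: refuse to execute
    return hashlib.sha256(served_bytes).hexdigest() == pinned
```

Code injected anywhere upstream of TLS termination changes the digest and fails the check; the open question posed above is how a browser would obtain and trust the manifest in the first place.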
======
Eridrus
How many websites do you know that have implemented Subresource Integrity or
CSP? These technologies are a pain in the ass and don't provide enough benefit
to be worth it.
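For reference on the pain involved: an SRI integrity value is just a base64-encoded digest that a build step has to emit and keep in sync with every script. A sketch of that step in Python (the script contents and filename are made up):

```python
import base64
import hashlib

def sri_integrity(script_bytes: bytes, alg: str = "sha384") -> str:
    """Compute a Subresource Integrity token, e.g. 'sha384-...', for a script."""
    digest = hashlib.new(alg, script_bytes).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

token = sri_integrity(b"alert(1);")
print(f'<script src="app.js" integrity="{token}" crossorigin="anonymous"></script>')
```

Every time the script changes, the token must be regenerated and redeployed in the referencing page, which is a large part of why adoption has lagged.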

If you're interested in pursuing something like this, I think you really need
to find a problem big and important enough that your average developer agrees
the cost is worth it. Being able to use insecure networks safely turned out to
be important enough that we're making strides toward TLS everywhere, though
with a lot of struggle. What problem is as important as that, enough for
people to start thinking of doing this much work? And is there any way to
scope it down to be less work?

Maybe you could convince people that we need to sign ads so that it's not so
trivial to deploy exploit kits inside ads, but we're largely solving that by
making exploits more expensive, rather than solving ad integrity, so the gain
on this is smaller than it would have been a decade ago.

~~~
tbiehn
This is exactly my point - the available controls are not sufficient (and are
immature besides). A control that lets you create trusted client code lets you
comprehensively cover the risk of code introduced after build time. This would
obviously be difficult to deploy, but it isn't without precedent: most clients
(that aren't web) are already protected by signatures at build time, and orgs
where that really matters (Signal, GPG, etc.) go as far as signing on
air-gapped computers, with ceremonies and deterministic build processes.
Preventing the implanting of client code has been deemed so important that
even when updates go out over TLS, their signatures are double-checked by the
recipient. I do not see how our use of the web browser as a client becomes
more trivial over time; it is clear that the exposed capabilities, and the
desired use cases, will eventually mandate assemblies verifiable in this
manner.

------
tbiehn
\+ extra long text:

To date, the strongest technologies that can be deployed against these attacks
are insufficient. Some are on the right path: SubResource Integrity (SRI)
promises to help organizations manage the risk of third-party JavaScript
includes, or code injected by load balancers, and Google's Caja project shows
some promise in producing the security assurances that a verification scheme
would rely on. But the industry has yet to comprehensively focus on verifiable
bill-of-materials protocols for code delivered to web browsers. Such protocols
would enable the class of applications that depends on client integrity; for
example, end-to-end encrypted chat is only secure from these attacks if a
specific version of a web application can be identified, verified, and tested
by trusted experts, and only that version allowed to execute.
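One way to read that last point is as an allowlist of expert-audited versions: a bundle may run only if its digest matches a reviewed release. A minimal sketch, with invented version numbers and contents:

```python
import hashlib

# Hypothetical allowlist of expert-audited releases: version -> bundle digest.
# (Versions and bundle contents here are invented for illustration.)
AUDITED_RELEASES = {
    "1.4.2": hashlib.sha256(b"bundle contents for 1.4.2").hexdigest(),
}

def allowed_to_execute(version: str, bundle: bytes) -> bool:
    """Run a bundle only if it is an audited version with a matching digest."""
    expected = AUDITED_RELEASES.get(version)
    return expected is not None and hashlib.sha256(bundle).hexdigest() == expected
```

An unaudited version, or an audited version with tampered contents, is refused outright rather than executed and inspected after the fact.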

------
Tomcrook
Thank you for sharing this post. In this post, I learned about preventing
browser hijacking. You can protect your site from hijacking by following some
tips, like: update your OS and your browser software, use your antivirus
software's "Realtime Protection" feature, etc.

