Hacker News | neotek's comments

Your slippery slope is ludicrous, like when homophobes insist gay marriage will lead to people being allowed to marry their pets.

-----


What evidence do you have that Google is trying to block torrent sites specifically? It would seem to me, given the way the safe browsing report is worded, that it isn't the torrent sites themselves that are considered malicious, it's the content being delivered via third parties (i.e., ad networks) that Google's flagging.

You're basically implying a conspiracy by Google to police the internet by limiting access to sites that enable piracy, but I don't understand why you think they'd choose this route rather than just de-indexing torrent sites completely, which they could easily and justifiably do.

-----


AdSense delivers a veritable shitload of ads with big blue download buttons.

-----


Pretty sure it'll be because of the semi-shady ad networks that service torrent sites and various other filehosts. The networks themselves aren't shady, per se, but they don't have the resources of Google to adequately police all of the ads that run through their networks. As a result, they end up serving malicious ads that link to dodgy installers or even straight up 0-day browser exploits.

-----


Google's own ads serve plenty of dodgy ads; search "$popular_software download", for instance.

-----


Absolutely, but those ads aren't usually pointing at outright malicious software, just shitty adware like download accelerators or whatever. Google's algorithms and manual approval processes have been pretty good at filtering out anything explicitly malicious, at least in my experience.

-----


Even malware makes it into AdSense, and if I remember correctly it took a few hours after reporting an ad for it to be removed. It's been a while since I've filed these reports, though.

-----


Could you give a real example of a $popular_software search that shows a dodgy ad?

-----


Searching for Firefox on Google always returned a paid ad for a Firefox download with 10 separate bundleware offers included.

-----


They recently changed the rules for download sites doing that - https://news.ycombinator.com/item?id=9502026

-----


I don't think that sort of ad is permitted anymore.

Here's what I see when I do that search: http://i.imgur.com/505yzNW.png

-----


Google Search recently put a lot of time into this area and as of a few months ago, you should rarely see these malicious ads and results.

-----


Then why block/replace the top-level page instead of the embedded content, like an ad blocker would?

-----


Because a site that is willing to host these kinds of ads may use more than one ad service, and Google may not have detected all the bad ad services.

The fault here lies squarely with the websites. They choose to work with scum. Let them fix the mess instead of blaming Google.

-----


> and Google may not have detected all the bad ad services.

You're basically arguing that false positives in an external content-blocking service are better than false negatives.

I strongly disagree with that kind of notion. It reeks of nannying.

-----


I think it's more of a game-theoretic "principal-agent-problem avoiding" solution. If system X.Y.Z is having a problem, the way to allow the most degrees of freedom in the way it gets fixed (and therefore, usually, the way to get it fixed most efficiently) is to put pressure on component X. X will put pressure on X.Y, who will in turn put pressure on X.Y.Z. But if the system can also be fixed by, say, getting X to find a new X.Y, that's good too!

An example of this: auto insurance. When you get in an accident and want money, you don't sue the other guy; you sue your own insurance company, who sues the other guy's insurance company, who in turn sues them. If, somewhere during that propagation, an alternative is found (e.g. the two insurance companies agree that it was a no-fault collision under arbitration and settle for some amount), then you end up achieving the same effect while putting less stress on the system as a whole.

-----


>The fault here lies squarely with the websites. They choose to work with scum.

It isn't much of a choice, since they're not allowed to use AdSense or any of the other major ad networks.

-----


I have a browser. I tell the browser to go to a site. The browser refuses to go to the site. This is clearly an issue with the browser.

-----


The interstitial for blocked content is implemented on a per-page level. You can test this yourself by creating a test page with an iframe to http://ianfette.org.

Suppose that Chrome just tried to replace the malicious embedded content. What happens when that embedded content is styled with CSS properties to make it hidden from the user?

-----


They could overlay the message or have an information bar or whatever. UI design is not an argument against more measured blocking.

I don't see a technical necessity to block a whole top-level page when the security model of a browser consists of many different origins.

-----


Erring on the side of caution.

If a page has included known exploits, it might also include unknown exploits.

-----


This is the advertising equivalent of clickbait - make a video about an ad which maybe ran a handful of times, if it even aired at all, in the hopes of that video going viral and stimulating press coverage.

Pizza Hut recently did something similar with their pizza box projector[1], a fun little idea which ultimately probably only directly reached a few hundred customers but which millions of people around the world read an article about.

[1] http://www.businessinsider.com.au/pizza-hut-box-turns-into-m...

-----


I'd put money on it being an abuse of YouTube's Content ID.

It's a common YouTube scam - find a video with a lot of views that's rising quickly, download it and reupload it to your own account and submit it to Content ID. YouTube will automatically scan its library for copies of "your" video, and gives you the option to either take the copies down, or monetise them via ads.

Because of the DMCA's stupid counter notification process, it takes YouTube two weeks before they'll release the copyright claim, by which time the video is no longer viral and the original content creator has missed out on the bulk of the video's revenue.

The thing is, YouTube isn't really to blame for all of this, it's the idiotic way the DMCA is written and applied. As a service provider, YouTube is obliged to immediately respond to DMCA claims, regardless of how spurious a claim might be, or risk losing its protection under the DMCA's safe harbour provisions.

If the person against whom the DMCA takedown was lodged wants to challenge its validity, they have to send a counter notice, which starts a two week timer. If the person who submitted the takedown doesn't start actual legal action by the end of that two week period, YouTube is allowed to reinstate the content... at which point the DMCA troll can submit another takedown request, starting the whole Kafkaesque process again.

-----


...YouTube isn't really to blame for all of this...

In the case of ContentID, which bypasses the DMCA, yes, they are to blame, along with all the media companies that demanded its creation.

-----


ContentID doesn't bypass the DMCA. It is neither required to take advantage of the DMCA safe harbor with regard to suits by copyright owners, nor compliant with the DMCA safe harbor provisions for suits by users hosting media, but it doesn't bypass anything required by the DMCA.

People sometimes confuse the things which trigger DMCA safe harbor provisions with mandates, but they aren't; the only reason to be guided by them is to take advantage of the safe harbor attached to them. Real businesses often want better relations with big-money content suppliers than the minimum required to avoid copyright liability, and can already structure their relations with end users so that they would have no liability for any takedown in any case. So exceeding what is necessary for the safe harbor on the content-owner side, while not concerning oneself with the safe harbor on the end-user side, is perfectly rational, and doesn't bypass anything.

It might underscore why the DMCA isn't as balanced as it seems on the surface but instead radically tilted in favor of content owners, since only one of the superficially parallel safe harbor provisions is even relevant to most hosts.

-----


> ContentID doesn't bypass DMCA

AFAIK the ContentID claims mechanism merely mimics the DMCA mechanism – ContentID infringement claims are not actually DMCA notices.

That means that false-positive ContentID claims basically carry no legal consequences, unlike false-positive DMCA claims (perjury).

That means you can throw automated infringement notices at users knowing that X% of them will be false positives. That wouldn't work with DMCA.

-----


> AFAIK ContentID claims mechanism merely mimics DMCA mechanism

It doesn't even do that, nor is it intended to. This isn't bypassing DMCA, though.

> That means you can throw automated infringement notices at users knowing that X% of them will be false positives. That wouldn't work with DMCA.

Sure, but DMCA isn't even relevant. DMCA notice provisions are a requirement for content owners to bypass the DMCA liability shield for third-party hosts in filing infringement claims -- they have to file notice in accordance with the DMCA, and then if the content host doesn't act within the parameters of the safe harbor, they can pursue whatever infringement action they would, absent the DMCA liability shield, have had against the content host.

DMCA notice requirements do not protect users from infringement claims, they protect content hosts. (And counter-notice requirements protect content hosts from liability claims from users stemming from take downs based on the DMCA notices.)

There are no DMCA provisions that exist to protect users posting allegedly infringing content.

-----


Are people sending DMCA notices for the copies? They have the right to, and if you take down the troll's copies (whether or not they used the DMCA process themselves), nobody can benefit from them. Still not perfect, but why run the scam if you can't benefit from it?

-----


> It's a common YouTube scam - find a video with a lot of views that's rising quickly, download it and reupload it to your own account and submit it to Content ID. YouTube will automatically scan its library for copies of "your" video, and gives you the option to either take the copies down, or monetise them via ads.

> Because of the DMCA's stupid counter notification process, it takes YouTube two weeks before they'll release the copyright claim, by which time the video is no longer viral and the original content creator has missed out on the bulk of the video's revenue.

What I don't understand is why someone uploading videos like that isn't being sued for straightforward copyright infringement.

The measures under the DMCA (and similar measures elsewhere, such as under the EUCD in Europe) were supposed to protect YouTube, the hosting service, in cases like this. Otherwise, the host is vulnerable to fallout from illegal acts committed by others and of which it has no knowledge. It's a similar argument to the common carrier principle in other communications channels.

Now, you can certainly debate whether the protection is too generous. For example, under this sort of scheme it is possible to build a business that facilitates and encourages copyright infringement and generates huge revenues as a result of that business model, yet wash your hands of it by claiming to just be the innocent third party host. This is still about the hosting service, though.

As far as I know, none of the DMCA-style laws protect the original uploader who actively and in this sort of case knowingly shares someone else's content in breach of copyright. If they're doing that with the kind of content that picks up millions of views and consequently denying advertising revenues and marketing effects to the legitimate creator/rightsholder, why isn't that grounds for a regular copyright infringement suit and, in a jurisdiction like the US, seeking statutory damages that make it worth pursuing one?

-----


I feel it's a little disingenuous for the author not to mention up front that the fake page can't be interacted with, because it completely changes the severity of the vulnerability.

-----


It doesn't change anything. As some people used to say, "security is binary; you either are secure or you are not".

While it's useful for phishing pages to be interactive, it's not strictly necessary:

----

"Your paypal account is locked, because we suspect it to be hacked. To unlock it, please call our tech support (phone number 1-234-56789) and tell them your paypal password to prove your identity (and CVV of all the credit cards pretty please)."

-----


It absolutely changes things; there's a marked difference in severity between encouraging someone to call a number or respond in some other way to written instructions, and capturing their login details on a page they've been trained to trust (i.e., HTTPS with a green lock).

I'm certainly not saying there's no issue here - your example perfectly demonstrates a realistic and dangerous use case - I'm merely pointing out that omitting such an important aspect of the vulnerability in the repo readme is disingenuous and materially changes the severity of the issue. To be honest, the omission actually smacks a little of clickbait.

-----


It's a proof of concept; I don't see how it changes anything once you've got the browser displaying an https:// site you control while it pretends to be something else.

Fixing everything else is trivial.

-----


> As some people used to say, "security is binary; you either are secure or you are not".

I think anyone who used to say that was just wrong. "We" (the software community in general) have a much more sophisticated understanding of security than we used to, and a realisation that absolute security is virtually impossible. Security is always a continuum of risk management.

Bruce Schneier writes:

> Security is a trade-off. This is something I have written about extensively, and is a notion critical to understanding the psychology of security. There's no such thing as absolute security, and any gain in security always involves some sort of trade-off.

He's not writing specifically about digital security in that quote, but since he's foremost an expert on digital security, it's safe to say his opinions on security in general apply to digital security too.

It's a good essay, worth reading.

https://www.schneier.com/essays/archives/2008/01/the_psychol...

Here's another Schneier quote, this one about digital security specifically:

> That is why security experts aren't surprised by the Sony story. We know people who do penetration testing for a living—real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker—and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren't sufficiently skilled, good security may protect you completely.

https://www.schneier.com/essays/archives/2014/12/sony_made_i...

-----


> As some people used to say, "security is binary; you either are secure or you are not".

Disregarding all other context: I disagree strongly with this statement. Practical security is about risk management, and is never binary unless you've proven the absence of bugs (and backdoors) through your whole stack down to the transistors (unlikely). The practical exploitability of any particular bug is certainly relevant.

-----


> Your paypal account is locked, because we suspect it to be hacked. To unlock it, please call our tech support...

This is why it makes it a lot less severe.

-----


Isn't this a little like suggesting that it's a vulnerability that someone can have a large <img> tag with a GIF of Facebook's login page on it?

-----


If Facebook allows you to put an arbitrarily large img tag onto any of the pages under its domain (so internal navigation could be hidden under it), then yes, it is a valid attack vector.

-----


> As some people used to say, "security is binary; you either are secure or you are not".

I'm sorry, which people used to say that? I'm not aware of any serious security professionals who hold that opinion. "Security" is and has always been a complicated spectrum of interactions and requirements.

-----


I don't remember the exact quote, and might have used one out of place.

I am thinking about it in binary terms because it helps prevent the security-through-obscurity trap many seem to fall into.

My point is: whether a particular bug has security impact seems pretty binary. And dismissing one because "hey, nobody will think of/work hard enough/have enough money/etc. to make use of it" isn't a very bright idea.

See third-party content under a legitimate URL? The system is insecure. Period.

-----


One thing you might be thinking of is "there should be one mode, and it should be secure".

http://iang.org/ssl/h3_there_is_only_one_mode_and_it_is_secu...

This isn't exactly the same as what you said, but I think it's an argument in favor of your point of view.

Another analogous thing is that academic cryptographers will regard an algorithm or protocol as broken if an adversary can gain a significant advantage (probability of distinguishing things that are supposed to be indistinguishable, reduction in work factor, etc.), even if the resulting work factor to mount the attack is still enormous. For example, if there were an attack that could break AES in 2¹⁰⁰ operations, AES would be considered broken even though we believe there is no one who can perform 2¹⁰⁰ operations, because it no longer provides the designed or advertised security margin.
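To put that margin in perspective, here's a rough back-of-the-envelope sketch (the 2^100 attack is hypothetical, as in the example above, not a real result against AES):

```python
# A hypothetical attack breaking AES-128 in 2^100 operations would still
# count as "broken" in the academic sense, because it falls short of the
# advertised 2^128 brute-force work factor - even though 2^100 operations
# remain far beyond anyone's reach.
design_strength = 2**128  # brute-force work factor AES-128 advertises
attack_cost = 2**100      # hypothetical attack's work factor

speedup = design_strength // attack_cost
print(speedup == 2**28)   # the attack is 2^28 (~268 million) times
                          # cheaper than brute force, yet still infeasible
```

The point being that "broken" measures distance from the claimed security margin, not practical exploitability.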

-----


Security is not binary at all. Your computer is not 100% secure as you should know yet you are still plugging it into the Internet. Aren't you?

-----


My iPhone regularly reacts to words that sound like "Hey Siri" when I'm listening to audiobooks using the built-in speaker.

-----


Wow, surprising. You'd think they would have that filtered out, just like they have to when taking a call on speaker phone.

-----


This is pure speculation and I'm nowhere near qualified to speak with any accuracy, but perhaps it's the case that the processing requirements would reduce battery life too quickly. Although, having said that, "Hey Siri" only works when the phone is plugged into a power source, unless you use a jailbreak tweak to turn it on all the time.

-----


>The client paid for ADP access and has every right to use whatever means they deem necessary to input or output data from the ADP system.

Surely ADP has a superseding right to determine how information enters and leaves their system? I'm not suggesting that's good or bad, just pointing out that it's their system, being a customer doesn't give you carte blanche to do whatever you like.

Hell, look at what happened to Aaron Swartz.

-----


> Surely ADP has a superseding right to determine how information enters and leaves their system?

No. They do not own this information.

-----


What happened to Aaron is a good reason to give them the finger.

-----


Very much looking forward to reading about this on Daring Fireball.

-----
