Mozilla hit with takedown request for anti-paywall addons (github.com)
311 points by AndrewDucker 9 days ago | 387 comments





I am the author of that add-on... what would you do in my shoes?

  - provide an XPI (which will only benefit the few, since the side-loading process is made more awkward at every point release)?
  - fight it? If so, on what grounds, and how?
  - something else?

Firstly, thanks for the useful add-on!

> - provide an XPI (which will only benefit the few, since the side-loading process is made more awkward at every point release)?

You could get your add-on signed as an "unlisted"[0] add-on and upload it to GitHub as a "release" (for which you can upload binary files[1]). This would make the add-on less easily discoverable (than it had been), but at least all Firefox users could install it without much of a hassle. OTOH if the add-on becomes popular again, take-down requests might be sent to you or GitHub.

[0] By unlisted I mean not publicly distributed via AMO; there's no restriction on distribution via other channels. See https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Dis...

[1] https://help.github.com/articles/distributing-large-binaries...


They make it clear that it's not the listing of the addon on AMO that makes it subject to their policies (and local laws)... it's the signing itself.

So yeah... I can provide an unsigned copy of my add-ons to users... but unsigned add-ons are by design awkward to side-load :/


Rally all your users to bug Mozilla about unbreaking side-loading. If add-ons really aren't supposed to be a walled garden, as they claim, they should be willing to open a discussion with the community about how to open things up so we can avoid legal bullying like this.

It's not gonna happen. Firefox would be dead by now if it still allowed bad or incompetent actors to make Firefox unusable (as in slow, unstable, and, by the way, insecure), because users wouldn't put up with that for long. They'd just use another browser. This may be okay with you for now because you know how to protect yourself against malware, but in the long run Mozilla needs market share to stay relevant and be able to compete with the richest companies on this planet.

That's like saying Windows is dead in the water because it gives the user the ability to install applications not approved by Microsoft.

Well, it's actually been a big problem for Windows, and installing unsigned software on Windows is harder than it used to be. Also, luckily for Microsoft, installing another OS is harder than using another browser. (I say using, not installing, because there's always another browser already installed on most OSes.)

I'm not sure the "big problem" of Windows stemmed from the possibility of installing third-party apps, rather than from poor security defaults that let those apps mangle everything without asking for permission, and poor architectural decisions that let some programs install themselves (a.k.a. viruses). Nowadays Windows is much better about this stuff than it used to be, despite the fact that (contrary to your statement) installing unsigned software is still as easy as pushing buttons.

To be fair, Linux and friends aren't much better in that regard. Permission management for locally running programs is poor by default in all major operating systems. SELinux and AppArmor improve on the situation for Linux, but they're usually not the default unless you're using Fedora, and even there, they might need some extra configuration.

I'm looking forward to capability-based systems with microkernels, since they really improve on the situation, but I think it'll take a while until we get some major ones. Maybe Google's Fuchsia will establish itself soon enough, who knows? (We'll also have to see what can be done about hardware security, since any software mitigation could potentially be rendered useless by insecure hardware.)


> To be fair, Linux and friends aren't much better in that regard.

Linux gets there a different way. The standard way to install applications is from the package manager and essentially all of the applications in there are trustworthy (because they're all open source and if they did anything seriously user-hostile, someone would fork it and that version would be the one in the package manager). Meanwhile the package managers do actually add nearly everything that isn't user-hostile, so the need to install anything from another source, while still possible, is rare enough that most people never have to do it.

And binaries downloaded via web browser don't even have the execute bit set by default. You can still do it, but you have to know how, and the people who know enough to know how to do it generally know enough to be suspicious when doing it.
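That default is easy to observe directly; a small Python sketch (the file contents are illustrative, standing in for a downloaded binary):

```python
import os
import stat
import tempfile

# A freshly created file (standing in for a browser download) does not
# carry the execute bit, so it can't be run directly.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("#!/bin/sh\necho ok\n")

mode = os.stat(path).st_mode
print(bool(mode & stat.S_IXUSR))   # False: not executable yet

# The user has to deliberately mark it executable before it will run.
os.chmod(path, mode | stat.S_IXUSR)
print(bool(os.stat(path).st_mode & stat.S_IXUSR))   # True

os.remove(path)
```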

And even then, things typically run as the user rather than root/administrator, so they can't alter the system or anything other than that one user's (presumably backed up) home directory.

Having even more granular permission would be even better, but restricting the harm to one home directory of one user and only in cases of users who are at the same time knowledgeable and stupid has already handled basically the entire problem.


Package managers are exactly the centralized "app store" model, and there have been plenty of periods where users have had to add third party repositories to get basic functionality (DeCSS, video codecs, ZFS, etc).

The difference is that Linux repositories are less noteworthy to attack, have a more distributed culture not ruled by capitulating lawyers, and can actually jurisdiction shop.

The ultimate problem is a lack of a cross-software security model of the OSs, Linux included. User-based isolation is cool and all, but orthogonal to the modern world [0]. For decades we've been continually looking for better ways of isolating local apps, while also rejecting centralized control. We keep looking, while centralization keeps ratcheting.

I'm still hopeful that a well done capability (handle-based) system would go a long way, but not fully solve it. Unfortunately that means shedding off POSIX/LSB rather than duplicating the entire monolithic OS environment for every security context.

[0] Where say even a local LAN IP address is security-critical information!


> Package managers are exactly the centralized "app store" model, and there have been plenty of periods where users have had to add third party repositories to get basic functionality (DeCSS, video codecs, ZFS, etc).

And they still work precisely because the user can do that.

> For decades we've been continually looking for better ways of isolating local apps, while also rejecting centralized control.

The main issue is that there is so little real benefit in it. It's possible to isolate e.g. LibreOffice so that it can't access anything it shouldn't, but in general the authors can be trusted not to be doing anything nefarious to begin with, so what you're really doing is limiting the damage in the event of compromise. In which case you're still pretty screwed, because it inherently needs access to your documents, so the focus has not surprisingly been on preventing compromise rather than mitigating after the fact.

> We keep looking, while centralization keeps ratcheting.

The motivations behind the rise in centralization are authoritarian and pecuniary rather than any legitimate security concern. Trying to prevent it by finding alternative ways to improve security is like trying to repeal the DMCA anti-circumvention rules by finding alternative ways to reduce piracy. They'll never be satisfied because their motivations aren't the stated ones.


> in general the authors can be trusted not to be doing anything nefarious to begin with

It's a dream. Kept alive by a very diligent community, but let's not kid ourselves that it's a very strong assumption.

> In which case you're still pretty screwed, because it inherently needs access to your documents

Access to the specific document you presently want to edit (say through an OS-provided file select dialog or equivalent cli) is much different than unfettered access to all of your files.
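That distinction, handing a program the one file the user chose rather than ambient access to everything, can be sketched as follows (the names are illustrative, not a real OS API):

```python
import io

# Capability-style sketch: the trusted side (an OS file picker, say) opens
# the file the user selected and hands the program only that handle. The
# untrusted program can touch nothing else.

def edit_document(handle) -> str:
    # Operates only on the handle it was explicitly given.
    return handle.read().upper()

# Trusted file-picker side: open the chosen "file", pass the handle along.
chosen = io.StringIO("quarterly report")   # stands in for the picked file
print(edit_document(chosen))               # QUARTERLY REPORT
```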

> The motivations behind the rise in centralization are authoritarian and pecuniary rather than any legitimate security concern

People do choose, say, Apple products precisely because of the curated Disneyland App Store. Centralizers certainly have their selfish motives, but people are driven into their arms looking for safety. Browsers are used for software distribution precisely because they're a sandbox; there's much less fear of the unknown than running a random exe.


> Access to the specific document you presently want to edit (say through an OS-provided file select dialog or equivalent cli) is much different than unfettered access to all of your files.

This is one thing I like about Android. You can avoid requesting file permissions for your app by using native intents for file access. I think it's the same for other stuff, like email. As much as I loathe Google's control over Android, I think this is a good thing.


> And they still work precisely because the user can do that.

And you can install an anti-paywall extension in Firefox.

The moment you concede the Linux "well you can work around it" argument is the moment you have to stop arguing on pure absolute principle and start arguing about practicality and the relative difficulty of workarounds.


So, if hypothetically Microsoft were facing more competition and people were leaving due to malware, would you support them making the same move, where they only allow Microsoft-approved applications unless you flip some flag in the bootloader, for example?

And of course the next step would be DMCA takedowns for applications in the appstore. Everyone's computer would effectively be under US jurisdiction.


A significant part of FF's userbase decided in its favor exactly because of the choice of third-party addons.

How significant? Firefox's market share was falling rapidly. (It is still declining, though not as dramatically.)

Their market share is lower because at its peak, Chrome didn't exist.

The rise of Chrome is responsible for all of Mozilla's lost share. But major factors causing Chrome to gain share are being the default on the most popular mobile platform (Android) and being heavily promoted on google.com for many years.

Making it harder to install addons (and breaking all the old ones) is one of the things contributing to Mozilla losing share to Chrome. People used to use Firefox over Chrome because of all the great addons, which they then broke, leaving users with less reason not to use Chrome (which was significantly faster until Firefox Quantum). In other words, the causation is exactly the opposite of what you're suggesting.


The reason I switched to Chrome way back when from Firefox is that they changed the UI and that broke some of the add-ons I used. At that point I thought "well, the browser looks like Chrome now, it has as much customizability as Chrome, but its JS engine is slower, so I might as well be using Chrome." At the time I was playing a browser-based game, where the speed of JS mattered a lot to me.

If all the add-ons had kept working on FF, then I probably wouldn't have switched browsers.


I used to default to Firefox for work. Then they killed the old addons, which broke a major part of my workflow (FireFTP's "open a file and as you edit it it automatically re-uploads" feature). So there was a lot less keeping me stuck to it. Recently, and this is past Quantum, performance on some things (Google Apps) became abysmal (now, this could well be Google being half-assed or deliberately malicious about testing) but that was enough that I gave up on Firefox for work.

I haven't used Chrome in a while, but I was under the impression sideloading was completely blocked in Chrome nowadays. So the fact that Mozilla allows sideloading at all puts them ahead in that regard.

Nope — sideloading is still easy in Chrome.

> The rise of Chrome is responsible for all of Mozilla's lost share. But major factors causing Chrome to gain share are being the default on the most popular mobile platform (Android) and being heavily promoted on google.com for many years.

Not giving Chrome any technical and UX credit as a major factor in Firefox's loss of market share is disingenuous, IMO.


Indeed. The lack of Chrome-like profiles keeps me off Firefox.

I want to have a profile for my work, and a personal profile. Chrome provides that, and Firefox, last time I checked, required multiple hoops to enable and also blocked me from having more than one open.

I would even contribute funding to that if Firefox had a "Fund this feature".


Do you need completely separate profiles? If not, containers might be enough for your workflow: https://addons.mozilla.org/es/firefox/addon/multi-account-co...

I'm looking at the "containers" feature, but it isn't very self-explanatory. So with this, I couldn't have two installs of Bitwarden, one for personal and one for work?

Does FF have a feature crowdfunding site? I would totally donate $100 a month until the bounty is high enough that someone would actually do it.

I can't offer you a number, but it's significant enough that Google offers to autocomplete "firefox quantum addons" with "stopped working", and the relevant discussions left a visible footprint on the net. I'm not sure how the second sentence is related to the question. If you're implying that addons dragged it down, that is certainly incorrect.

Yes, Quantum changed the API => it made a bunch of addons dead => lots of worried and upset users appeared discussing it. I have nothing against Quantum; I accept it was necessary to go for a new model, but it clearly shows addons are important to a significant share of users.

You said "exactly because of choice of third-party addons" and that's what I'm questioning. I'm not questioning whether addons in general are important for a significant share of users.

Could you clarify your position please: do you think that "Significant part of FF's userbase decided in its favor exactly because of choice of third-party addons" is incorrect?

If by third-party you mean unsigned, then yes, I think that's incorrect.

What you want to have is a trustworthy place where >99% of users can get >99% of software. You can have that -- look at Linux package managers -- without preventing side loading. And in fact that's necessary to prevent despotism.

The people operating the repository want people to use it. They don't want them getting things from untrustworthy places. But if they can just prohibit that, they can be tyrants -- refusing beneficial software that the user wants because the monopoly provider has a conflict of interest or is being coerced by someone else.

By contrast, if the user can load the beneficial application themselves then the repository has the incentive to prevent that from happening (and thereby discourage users from doing that in general) by carrying it themselves. And the fact that there can be competing repositories means that the one that carries the most beneficial software and the least user-hostile software can be the one that wins in the marketplace. But not if the vendor locks everyone else out and becomes an abusive monopolist.


I think Linux is quite a bit different. Firefox shares its space with applications that have the same privilege, making sideloading into Firefox much easier, especially on Windows. And then Linux is also niche enough as a consumer OS that it's just not as attractive for malware in the first place.

> I think Linux is quite a bit different. Firefox shares its space with applications that have the same privilege, making sideloading into Firefox much easier, especially on Windows.

All the more reason to have a supported way to do it. If they require the malware to replace the Firefox binary with a different one, what happens when it does? The user ends up with twelve pieces of malware instead of one because the original malware author didn't bother to support browser updates properly and the user ends up with a browser full of publicly known security vulnerabilities.

> And then Linux is also niche enough as a consumer OS that it's just not as attractive for malware in the first place.

People have been claiming that as the reason there was so much more malware on Windows for decades. Then Windows adopted some of the same types of measures as Linux and the amount of Windows malware fell off considerably.


Quantum changed the API model, not whether you could install unsigned addons, so that's a different discussion.

It's still falling even after the Quantum push. It's dipped under 5% total market share and under 9% on desktop. Losing a percentage point every several months.

You are deeply misinformed, and to be blunt, people like you are killing Firefox. Somehow Firefox managed to go for over a decade without blocking sideloading of extensions.

And no, Mozilla should not be thinking in terms of "market share". There is no market! Mozilla is a non-profit. They should not be sharing the same paradigm as Google. And it is this vanity-driven pursuit of "brand prestige" and "market share" that has ruined things.

It makes business sense for Google's platform for them to do things the way they do. It makes absolutely no sense for Mozilla to emulate Google. The very fact that Mozilla can just afford to make unique software without the constraints of the market, supported by donors, is a strength, not a weakness. They should be leveraging it instead of trying to "stay relevant" and compete with Google on Google's terms.

And this whole incident is a disgrace. If it wasn't for Mozilla getting between users and their software, then this takedown would be nearly irrelevant, as it would be that much easier for users to install it. It would probably even disincentivize take-downs like this, because they would be so futile.


> because users wouldn't keep up with that for too long. They'd just use another browser.

Users will leave in droves if...Firefox allows you to install third-party extensions if you choose to do so? I don't get it.


This is a really bad argument that can be used to justify any anti user feature whatsoever.

No? First of all, I'm saying that allowing sideloading is actually hostile to the majority of users, and I'm also saying this threatened the very survival of Mozilla, so this needs to be part of the consideration.

I'm sorry to be rude, but I honestly believe that the only thing that will save Mozilla is for people like you to take your terrible ideas elsewhere.

You must see that your ideas are not working. Have you been able to attract new users, or even prevent existing users from leaving? No? So then how can you be so convinced that you are right?

Firefox is software, not an experience. Winning over people that don't care is not an accomplishment, even if you could do it. Trying to be better than Google at nannying your lower functioning users from hurting themselves is futile, and punishes everyone else.

You're feverishly stripping away everything that made Firefox superior. Driving away developers who have made unique extensions that cannot be replicated elsewhere.

And for what? What is your end game? Have you seen the Chrome store? It's awful! The majority of extensions are very low quality and often misleading. And they are unresponsive and have to employ strange hacks to do things that Firefox extensions did very simply (like taking a screenshot, or interacting with other extensions).

And what is your ideal outcome? To have Firefox be considered a good browser by Google's standards? How about by the users' standards?

You are trying to slowly turn emacs into a cheaper Microsoft Word. And patting yourselves on the back for doing it.


> I'm also saying this threatened the very survival of Mozilla

You are saying it, but not making any argument or offering any evidence for what you're saying.


We have telemetry data on performance, user retention, installed addons and how these things correlate.

> We have telemetry data on performance, user retention, installed addons and how these things correlate.

Then why is that not itself an obvious solution? Use that data to provide a default-on blacklist of known-malicious addons. Then you have a blacklist rather than a whitelist, so it can justifiably be more difficult for the user to override it and the user has no motivation to do so for a malicious addon they didn't intentionally install to begin with.

It also gives you the opportunity to allow the user to specify a blacklist provider, so that if the original vendor ever gets compromised because they're forced to operate in an oppressive jurisdiction, they can farm out the low-resource task of hosting the blacklist to someone who isn't compromised, and the user (or the distribution packaging the browser) can make that determination for themselves.


As I recall, because people were already programmatically generating endless numbers of malware extension IDs to avoid blacklisting.

Blacklisting depends on a limited supply of whatever you're trying to control. UUIDs are not limited.

More generally, if you come up with an obviously superior solution to a problem that someone else has claimed is important and difficult, and has spent considerable resources addressing, perhaps it would be more constructive to investigate or ask questions to test your understanding of the situation before assuming and asserting that the other people are doing it all wrong?

I'm not saying Mozilla has the best possible solution here. I don't know what that would be. I do know (I work for Mozilla, though not in this area) that the step was taken to address real, and very active, threats to security, privacy, and stability.


> As I recall, because people were already programmatically generating endless numbers of malware extension IDs to avoid blacklisting.

Which is a good reason not to use the extension ID as the sole method to enforce the blacklist, and is why anti-malware software generally uses a signature-based approach.
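A minimal sketch of that signature-based idea: block on a digest of the package contents, so regenerating the extension ID doesn't evade the list. The single blacklist entry below is made up for illustration (it's just the SHA-256 of an empty file, standing in for a known-bad package digest):

```python
import hashlib

# Content-based (signature) blacklisting rather than ID-based: hash the
# extension package bytes. The entry below is the SHA-256 of empty input,
# used here purely as a placeholder for a known-bad digest.
blacklist = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocked(package_bytes: bytes) -> bool:
    return hashlib.sha256(package_bytes).hexdigest() in blacklist

print(is_blocked(b""))        # True: matches the listed digest
print(is_blocked(b"benign"))  # False
```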

> More generally, if you come up with an obviously superior solution to a problem that someone else has claimed is important and difficult, and has spent considerable resources addressing, perhaps it would be more constructive to investigate or ask questions to test your understanding of the situation before assuming and asserting that the other people are doing it all wrong?

You're implying this is a novel problem that hasn't been widely studied by everyone and that they're not actually doing it all wrong.

> I do know (I work for Mozilla, though not in this area) that the step was taken to address real, and very active, threats to security, privacy, and stability.

There are many ways to solve a problem by trading it for a set of different problems. Centralized authoritarian control is exactly that. And those things can be popular in the same way anything that externalizes costs and internalizes benefits can be -- because that type of thing seems attractive to the people not directly paying the cost of it. Until the victims (in this case the developers and users whose apps you prohibit) devise a way to protect themselves. Then you end up in an antagonistic relationship with your users and addon developers. What's the cost of that?

Meanwhile there are other solutions that don't do that.


Dredging telemetry data has a nasty way of letting you find precisely what you are looking for. There is no way to prove from passive monitoring of telemetry that rogue addons were causing people to abandon Firefox, nor that this measure stops people from abandoning Firefox (or even stops rogue addons; after all, it's trivial to recompile Firefox and replace a binary. If this becomes a problem, is Firefox going closed-source next, for the users of course?). But if you can show two numbers that seem to have a correlation (like pirates and global warming), you can advance any agenda you care to.

Not to mention this "telemetry" process is entirely opaque, and is used to justify decisions like this in a very ex cathedra sort of way, where you either accept the conclusions they have drawn from data you can't see, or be treated like your opinion is irrelevant.

It's a very hostile way to run a supposedly "Open Source" project.


> It's a very hostile way to run a supposedly "Open Source" project.

It's under a GPL-compatible license. Fork it and do what you like.

This is a pretty FUD-y way to FUD your FUD.


It absolutely can, for example:

"It's not gonna happen. Firefox would be dead by now if it still allowed users to customize their webbrowser and make Firefox unusable (as in slow, unstable, and btw insecure), because users wouldn't keep up with that for too long. They'd just use another browser. This may be okay with you for now because you know how to protect yourself against misconfiguration, but in the long run Mozilla needs market share to stay relevant and be able to compete with the richest companies on this planet."

Substitute in any positive feature that puts the user in control of their web browsing experience. Power over something means the power to break it too; using that as a reason to remove functionality is stupid.

Doubly so when you pile on the "omg if they do this mozilla is doomed" hysterics.


Sure, you can always claim that some random lockdown is vital. The thing is, not every assertion is the same. Yours could be nonsense while mine could still be true.

If you claim something like "John is a carpenter because John is a human and carpenters are humans" and someone points out that your logic doesn't follow (by applying it to some other known non-carpenter human), you can then claim that this doesn't prove that John isn't a carpenter, but the point is that the original claim utterly fails to prove that he is.

Where did I assert something similar to your example? It seems like a false analogy. The criticism wasn't even that my logic was flawed but that the same argument could somehow be made to justify any user-hostile action.

If the problem, in your view, is that I didn't /prove/ how big a problem sideloading was, then yes, I didn't even attempt to do that. There's a separate subthread on that question.


> Where did I assert something similar to your example? It seems like a false analogy. The criticism wasn't even that my logic was flawed but that the same argument could somehow be made to justify any user-hostile action.

That's why your logic is flawed. If a construct can be used to "prove" various known-false statements then it has no value in proving anything.

> If the problem in your point of view is that I didn't /prove/ how big of a problem sideloading was then yes, I didn't even attempt to do that. There's a separate subthread on that question.

There is no proof of it there either. The most popular platforms (Android on mobile and Windows on desktop) allow the user to load their own applications, showing not only that it can't be a major cause of failure in the market but that it seems to be a characteristic of the player with the most share.


> If a construct can be used to "prove" various known-false statements then it has no value in proving anything.

Yes, if... But it can't. That criticism of my statement didn't make sense.

> There is no proof of it there either. The most popular platforms [...]

I meant this: https://news.ycombinator.com/item?id=18583257


> Yes, if... But it can't. That criticism of my statement didn't make sense.

It applies directly. Anything that gives the user a choice can make the user experience worse for a user who makes the wrong one. But it also makes it better for a user who makes the right one in a way different from what the developer would have had to use as a default -- because sometimes something is right for 70% of the users, so it should be the default, but the other 30% are better off with something else. Taking away the choice makes the 30% worse off to benefit the 10% of the 70% that would have chosen incorrectly for themselves. That is not a net improvement.

> I meant this: https://news.ycombinator.com/item?id=18583257

I'll reply there, but note that you haven't addressed my point -- other platforms survive and indeed have the largest share without prohibiting users from installing software, even when competing directly against others that do.


Nobody is "prohibited" from doing so. You want a Firefox that lets you do whatever you want, you can easily use an unbranded or pre-release build, or even roll your own or use someone else's lightly-tweaked fork. Nobody owes you an officially supported product that's for a wide audience and is also a complete free for all. Even Ubuntu and other Linux distros require you to opt into third-party software channels. You just don't like the specific choice that Mozilla is giving you, but it's still very easily there.

I think users should direct their attention towards Mozilla. They're the ones who decided that it is appropriate to take the extension down, and they're in the best position to fight it (compared to individual extension developers). There's a chance that this decision was made by a low-level reviewer and does not represent the official stance of Mozilla.

Also, anything that gets (or forces) the actors behind this to issue a takedown request against GitHub is good, because then there's a chance GitHub will tell them to pound sand (sending a strong signal to others that unconditional compliance with such demands may not be the only, or best, course of action) or actively fight it.


As I mentioned in a previous discussion on this [0], I would investigate removal of those websites from Google by quoting Google's terms of service:

"Cloaking refers to the practice of presenting different content or URLs to human users and search engines. Cloaking is considered a violation of Google’s Webmaster Guidelines because it provides our users with different results than they expected.

Some examples of cloaking include:

... Inserting text or keywords into a page only when the User-agent requesting the page is a search engine, not a human visitor." [1]

That you're able to see the same view shown to Google isn't exactly "bypassing" anything.

[0] https://news.ycombinator.com/item?id=18299374

[1] https://support.google.com/webmasters/answer/66355?hl=en
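Mechanically, "presenting yourself as the crawler" is nothing more exotic than changing one request header; a hedged Python sketch (the URL and function name are illustrative, and the UA string is Google's published crawler string):

```python
from urllib.request import Request

# The only thing that changes is the User-Agent header that a cloaking
# site inspects when deciding which version of the page to serve.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def as_googlebot(url: str) -> Request:
    # Build a request that a cloaking site would treat as a crawler visit.
    return Request(url, headers={"User-Agent": GOOGLEBOT_UA})

req = as_googlebot("https://example.com/article")
print(req.get_header("User-agent"))   # the Googlebot UA string
```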


Rename the plugin and change the description. The message from Mozilla states that the problem is the intent of the plugin. The technological measures it takes are not illegal per se, but become illegal when used to circumvent paywalls (of course, IANAL).

If you present this as a plug-in that allows you to view websites as the Google bot views them, for educational and debugging purposes, there is no problem. You can give the fact that it won’t see the paywall as an example. It’s actually useful for that purpose: you are not lying. It’s just that most people will install the plugin for its ‘side effects’. Their use of it will still be illegal, but the intent will not be illegal. Cf. Firearms, crypto, drugs, ...

(I say this as someone that pays for various journalistic sources and I encourage everyone to pay for at least their three favorite sources)


The intent of the plugin should not be illegal; that is what this battle is ultimately about, and this takedown action is essentially baseless bullying via the legal system. Each party to a transaction has their own desires, and what these companies are ultimately trying to do is make you conform to their internal business desires post facto.

In any transaction there is a demarcation point where interests meet and then part. Businesses have gotten used to this idea of decommodifying [0] their products rather than competing - spamming restrictive clickwrap "licenses" etc. So much so that we view common sense rights like "first sale doctrine" as a friendly exception rather than the bedrock norm. A website telling you how you must/mustn't display the page you've retrieved is equivalent to a retail store demanding a share of your business's profits if you use their products commercially. We can envision such a scheme being cooked up with "terms of sale" and blah blah, but people would rightfully not stand for it - markets and society simply cannot function with such top-down control.

The only difference here is that civilization needs to re-figure these things out for the digital world, especially as frivolous overenforcement appears to be much easier.

[0] Adding unnecessary complexity/restrictions to make their market less efficient. See also: net neutrality.


You know, I consider myself fairly liberal on technical issues, and I agree with you in the general sense, but I don't agree in this specific instance.

This isn't a situation like Aaron Swartz, where the companies in question are restricting access to publicly-funded research. The newspapers are privately-funded entities that conduct their own investigative journalism.

This isn't a situation like Weev, where there was no access control to the data and he was just probing exposed endpoints.

This isn't a "right of first sale" issue. It could be if the publishers were trying to restrict access to page content after you paid for an account, but that's not what's happening here.

I agree with the GP that the issue is the framing. It's the difference between selling a mask, and selling a mask that's advertised to let you rob a bank without being caught.


My argument isn't based on liberating content, but that it's simply not one's responsibility to implement someone else's business rules.

The problem is that the publishers are still supplying their content for free, while then trying to attach arbitrary post-facto terms. It would be straightforward to just not send the article to someone they don't want to view it, but yet this is not what they have chosen to do.


The terms aren't post-facto, they're a priori. You can access the content if you paid for an account or if you're Googlebot. Impersonating Googlebot doesn't change the terms of the access.

Note the description from the add-on page "This extension will mangle your browser's requests to maximize the chances of bypassing paywalls". This is where there's a meaningful difference from the Weev situation.

You can also choose to not read the content if you don't feel it's worth the money.


> Impersonating Googlebot doesn't change the terms of the access.

But I wouldn't be "impersonating" the Google bot, I would just be using its user-agent string. That does not make me the Google bot. If a foolish publisher chooses to interpret my usage of that string as me being Google, that is not my responsibility.

Just like with the Weev situation, this is another case of someone trying to shift blame for their crummy security.

A user-agent string is NOT authentication. It is merely supposed to provide a hint to the webserver.

I once saw a website that didn't have ads and let you view their content for free if the width was less than 700px. I guess they figured driving away more mobile users was worth it, or something. So, was I being criminally minded when I resized my browser to view their content instead of jumping through their hoops and giving them my email address? At a certain point, it is not my responsibility to hold their hand and play along with their pretend restrictions.
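To make that concrete: the User-Agent is just another header the client fills in. A minimal sketch using only the Python standard library (the URL is a placeholder; nothing is actually sent):

```python
from urllib.request import Request

# The canonical Googlebot UA string, as published by Google.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# Build a request claiming to be Googlebot. The URL is a
# placeholder and the request is never actually sent here.
req = Request("https://example.com/article",
              headers={"User-Agent": GOOGLEBOT_UA})

# The server would see whatever string the client chose to send.
print(req.get_header("User-agent"))
```

Any HTTP client (curl's `-A` flag, a devtools override, a WebExtension rewriting headers) can do the same, which is why the string can only ever be a hint.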


The protocol is the transaction demarcation point - anything else is sophistry. Let's try out an analogous argument with physical goods:

Customer goes to Home Depot, buys a bunch of $5 aviators. Sells them on Amazon for $50. The distributor sues the customer-reseller for violating their contract to not commercially resell, which undermines their expensive offerings. Customer points out they never entered into any such contract at Home Depot. The distributor claims Home Depot is a mechanical conduit that only facilitates a larger relationship, and the transaction actually took place between the customer and distributor, with the "standard" terms having been available on request.

The fundamental truth is that the interests of any buyer and seller are only aligned at the exact time of sale - afterwards they diverge. What has happened is that pervasive information technology has sellers attempting to retain some kind of interest in things after they've sold them, with the "transaction" never really ending (an oxymoron). Copyright is one of the hooks by which they're purporting to do this (and at the very least it needs to be reformed so a seller of "digital goods" cannot destroy your access to a copy you've paid for).

As I said before, society cannot function this way - non-pure ownership can work for big tickets (eg real estate), but centralization and every item having a history simply doesn't scale. And as I also said, there is a very simple remedy for publishers - stop sending the article to people they do not want to view it!

(Your argument also implies that it is illegal to suggest someone visit a URL with lynx to avoid a javascript popup)


I don't find that analogy analogous.

Alice goes to Home Depot. Aviators cost $50, but they're on sale for $5 if your name is Bob. Alice tells the cashier that her name is Bob and shows an (obviously) fake ID. The cashier isn't paid enough to care, so Alice gets the glasses for $5.

I agree with you that anything Alice does after this point is her own business. I could accept an argument that Home Depot should've paid the cashier more to help prevent the issue. But I don't think that makes Alice's actions at the point of sale less fraudulent.

> And as I also said, there is a very simple remedy for publishers - stop sending the article to people they do not want to view it!

Which the publishers do. They don't send the article unless they think you're Googlebot.

> Your argument also implies that it is illegal to suggest someone visit a URL with lynx to avoid a javascript popup

My issue is with intentionally changing the user agent header to impersonate a well-known crawler that has been given access to the site under specific terms.

I don't have an issue with using other browsers or javascript blockers or ad blockers. There's a difference between spoofing a client -> server request and blocking a client -> server request.


> Which the publishers do. They don't send the article unless they think you're Googlebot.

I've never had trouble viewing an article in incognito mode, or sometimes even stopping a load before javascript runs, so I've been under the impression that it's mainly javascript trickery. My argument here was with that assumption, so perhaps it's moot.

> Alice tells the cashier that her name is Bob and shows an (obviously) fake ID. The cashier isn't paid enough to care

I'd say that the proper example is that the cashier doesn't check ID and isn't even expected to by corporate. Companies aren't even strictly against this type of soft-fraud word-of-mouth trick, as it just further helps their price discrimination and customer mindshare. Whether you get one free birthday dessert per year or five, you're still buying meals.

The elephant in the room is that Google could easily setup a system whereby Googlebot got secure access to the articles. I think they haven't done this because of an idea that the same pages should be served to users. So who is defrauding whom?

Philosophically I'd assert that the Internet philosophy is directly opposed to the very concept of fraud - "identity" does not scale, especially across jurisdictions. We've finally got a way to formalize and mechanically execute contracts purely between private parties, so why cling to a heavyweight idea of post-facto enforcement based on nebulous ambient natural-language rules, especially as an overarching foundation? When you put a quarter into a "claw game" and it drops your prize before the chute you don't sue for "fraud" - you just stop putting quarters in.


> I've never had trouble viewing an article in incognito mode, or sometimes even hitting stopping a load before javascript runs, so I've been under the impression that it's mainly javascript trickery. My argument here was with that assumption, so perhaps it's moot.

Typically publishers will use a combination of cookies and javascript to attempt to enforce a soft limit of X articles per month for normal users. Publishers also use UA sniffing to give unrestricted access to Googlebot and other crawlers.

Circumventing the former by blocking cookies or javascript definitely should be legal, because the client should be control of their local computer. Spoofing requests to take advantage of the latter (which is what this specific extension is doing) isn't okay IMO.
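As an illustration of that split (all names and thresholds here are invented, not any publisher's real code), the metering logic might look like:

```python
# Hypothetical sketch of a "soft" paywall: crawlers are exempted
# by User-Agent sniffing, ordinary readers are metered by a count
# the client itself stores (typically a cookie).
FREE_ARTICLES_PER_MONTH = 5
CRAWLER_UA_MARKERS = ("Googlebot", "bingbot")

def should_paywall(user_agent: str, articles_read: int) -> bool:
    # Crawlers get unrestricted access so the content stays indexed...
    if any(marker in user_agent for marker in CRAWLER_UA_MARKERS):
        return False
    # ...while readers are limited by a trivially resettable counter.
    return articles_read >= FREE_ARTICLES_PER_MONTH
```

Both inputs - the User-Agent header and the cookie - are controlled by the client, so either branch of the check can be defeated locally; that asymmetry is what the thread is arguing about.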

> The elephant in the room is that Google could easily setup a system whereby Googlebot got secure access to the articles. I think they haven't done this because of an idea that the same pages should be served to users. So who is defrauding whom?

Something other users have suggested[1] in response to this article. Maybe that will happen in the future, but I think it's a whataboutism with regards to this specific incident.

Google explicitly supports paywalled content[2], so I don't think it's fair to say that publishers are defrauding Google.

[1] https://news.ycombinator.com/item?id=18582893

[2] https://news.ycombinator.com/item?id=18584014


I don't find your analogy analogous either. Not only did you equate physical property to IP, but even if you accept that equivalence for the purpose of argument, your analogy seems like it would be better if...

Home Depot gives (more like leases, when taking into account that copyright doesn't allow transitive redistribution... no first sale doctrine for intangible/unfixed IP) the aviators to Bob for free, knowing that Bob will take a bunch, set up a table outside that looks like he's giving away free sunglasses, but when you pick up a pair a Home Depot employee comes out, snatches them away from you, and starts pressuring you to get you to buy a pair.

I'd classify that as a persuasion pattern, which is partly why this is so polarizing (bad pun, sorry...). You had the aviators in your hand, or in Google's case, you read a snippet they had indexed. You can practically taste it, metaphorically speaking, and then comes the hard sell. Either you pay them for the thing because you were so close to getting it and aren't going to let a few dollars stop you, or you get angry because you had the thing you thought you were about to get snatched away from you. The seller is counting on that, by grabbing for the product, you're committed to getting it even if you discover it actually costs money.


> Not only did you equate physical property to IP

Tracking the ownership of the aviators is a non sequitur, as the issue occurs at the point of transaction, not before or after.

> [Bob] set[s] up a table outside that looks like he's giving away free sunglasses, but when you pick up a pair a Home Depot employee comes out, snatches them away from you, and starts pressuring you to get you to buy a pair

> You had the aviators in your hand, or in Google's case, you read a snippet they had indexed. You can practically taste it, metaphorically speaking, and then comes the hard sell

I don't think that just because a publisher makes content free to one user for one use case it obligates them to make content free to all users for all use cases.

Does a studio including snippets of a movie in a trailer mean that the movie has to be distributed for free?

I would argue Google's actions are more akin to an advertising or PR firm drumming up interest by showing bits of the article to consumers.

If I had to condense my thoughts into a couple of bullet points:

1) There is a colorable difference between ad blockers and what this extension is doing

2) I think it is unfair to characterize disapproval of this specific extension as an attack on ad blockers in general

3) Publishers have the right to give samples of their content to search providers without them being obligated to make the content free for everyone


Very well done analogy and thought experiment!

Your retail store analogy is a red herring. A browser plugin that “maximizes the chances of bypassing paywalls” is attempting not to be party to a transaction, because if it succeeds, the user has expressly not agreed to the seller’s terms.

Instead your position appears analogous to arguing that altering a driver’s license to gain free admission to a cinema by misrepresenting oneself as entitled to senior citizen terms of entry is justifiable despite being prohibited by law in that jurisdiction.

It’s clear from this and other posts that you have an articulate, principled view on many issues. So why aren’t you addressing the underlying economic issue? Publishers, like any business, need to earn revenue. If technological barriers to accessing intellectual property — and the legal protection thereof — are not valid (your claim of “frivolous overenforcement”), whose economic rights supersede the content producers’? And why?


> whose economic rights supersede the content producers?

The owner of the computer they're insisting implements their business logic. It's trivial to simply not send the article to someone they do not want to view it.

If the extension moves on towards sharing an account, P2P distribution, etc., you would have a point. But as it is, its only action is to interpret the content in a different way than the publisher desires.

Let's say an abutter of a drive-in theater sets up their own seats and starts selling tickets. The intent to "see a movie for cheap" would only become relevant if coupled with some action that is actually illegal.


This is the best idea here. What's funny is that the news sites in their typical technical incompetence are actually way far behind in this battle. Spoofing user agent does not work anymore on most 'techy' sites who have quality software engineering. Analyzing user agent is preschool tier blocking these days. If the news sites want to get serious they need to use browser fingerprinting to detect human behavior and only accept Googlebot user agents from Google IPs. They are basically complaining about something that they are too technically incompetent to solve themselves.
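For what it's worth, Google documents exactly this verification: a reverse DNS lookup on the connecting IP, then a forward lookup to confirm. A rough sketch (`verify_googlebot` needs real network/DNS access to run; error handling is minimal):

```python
import socket

def hostname_is_google(hostname: str) -> bool:
    # Genuine Googlebot hosts resolve to names under
    # googlebot.com or google.com, per Google's documentation.
    return hostname.rstrip(".").endswith((".googlebot.com", ".google.com"))

def verify_googlebot(ip: str) -> bool:
    # Step 1: reverse DNS on the connecting IP.
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False
    if not hostname_is_google(hostname):
        return False
    # Step 2: forward-confirm - the claimed hostname must resolve
    # back to the same IP, otherwise the PTR record could be forged.
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except OSError:
        return False
    return ip in addrs
```

Against that check, spoofing the User-Agent string alone does nothing, since the decision keys off the network address rather than a client-chosen header.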

Your first paragraph reads a bit crazy and I imagine that's why you're being downvoted, but as you're expanding your idea in the second paragraph I think you might have a point.

Thanks, reread and adjusted it a bit. Better like this?

What about re-releasing it as a bookmarklet? Slightly worse UX but not blockable by a first party

The most effective trick it uses requires the ability to send custom HTTP headers... that wouldn't work as a bookmarklet.

> provide an XPI (that will only benefit the few since the side-loading process is made awkwarder at every point release) ?

This add-on is the first XPI I have ever loaded. I didn't find it awkward to do; it is simply:

Tools -> Add-Ons -> Cog Icon -> Install add-on from file...

The only thing I found unintuitive was the cog icon.


Have you restarted Firefox since you did that? Can't test right now, but I'm pretty sure it will be uninstalled, by design.

I've just tested it, still working.

Mind you I'm using Firefox 61 (current version is 63). I switched off auto-updates when it kept changing my user interface and trashing the extensions I need to use to make the web half-usable (e.g. Adblock Plus, Bypass Paywalls, Disable Javascript, Font Contrast Fix, Stylish).

I'm guessing Firefox 63 doesn't let you run Bypass Paywalls, then? I suspect it fucks up other stuff, though I'm not trying it to find out.

If Firefox wants to increase its market share, a good start would be not making changes that are actively hostile to the user.


Perhaps ask Mozilla if they could have it not be displayed in the legal jurisdiction where it's accused of being illegal, but keep it displayed in all other jurisdictions?

The backwards laws of one jurisdiction shouldn't enable take downs across all of them.


I'd just move on. To be honest sites with those types of paywalls should not be indexed. The loophole you are taking advantage of here is a bait and switch by these sites. They want the search traffic but don't want public access. Most of us have already adapted, however, and avoid these sites or pay for them.

Your plugin title blatantly describes that you're avoiding paying for something they are charging for so even though it may not be illegal it's not something I'd waste energy fighting for.


How is that bait and switch? Their content, their rules.

It's like saying some people get Nike shoes for free in exchange for a review. And when I ask for it, or even take it, they have a problem with it.

I think with physical goods we have an innate understanding of what constitutes theft. Just because the distribution cost is zero doesn't make digital goods a free-for-all.


It is a strained metaphor. What I was trying to say is that clicking on a search result and getting a paywall instead of the content alluded to on the search page is not expected behavior to the end user.

Do you think Google might start removing or lowering the priorities of sites like this in search results?

They should, if it's even possible. They knowingly allow web crawlers to index their sites.

Both DMCA and EU directive 2001/29/EC prohibit circumventing technological measures that "effectively control access".

Checking user agent doesn't meet the requirement to be considered an effective access control technological measure.


Attorney here! (Not providing legal counsel, though.)

It’s a common misconception that the word “effective,” as used in the respective legislation, was intended to mean “successful” or “always works.” Rather, it’s always been interpreted to mean, “is designed or intended to have the effect of.”

The fact that the measure is not always successful or is breakable does not make the law no longer applicable; that would be an absurd result. The law isn’t interpreted literally when the result would be absurd; this is doctrine that goes back to at least 1892 and even further back to English common law. See, e.g., Holy Trinity Church v. United States, 143 U.S. 457 (1892).


Ah, but in this case we're provided a definition:

> a technological measure “effectively controls access to a work” if the measure, in the ordinary course of its operation, requires the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work.

17 U.S. Code § 1201 (a) (3) (B)


Correct; thanks for providing the definition.

With this definition, does your opinion change? I mean, this user agent isn't secret information - this provides weaker access control than not advertising a public URL.

> Checking user agent doesn't meet the requirement to be considered an effective access control technological measure.

I highly doubt that. Keep in mind that age restrictions which make you hit a single button to circumvent them are much worse in keeping the wrong users out, and afaik legally they're perfectly fine.

"Effective access control" means they have to do something which may work to some degree and doesn't have to be a burden financially or technically.


Ah, but in this case we're provided a definition:

> a technological measure “effectively controls access to a work” if the measure, in the ordinary course of its operation, requires the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work.

17 U.S. Code § 1201 (a) (3) (B)


Would you be making the same argument if we were talking about your bank's website?

IMO effective access control works both ways. You can't send ALL of the content back and then have the access control happen on the client machine. What if I just used curl and viewed the raw HTML to read the article?


Ok, maybe it's not legal in US, but the rest of the world shouldn't be deprived of it!

I would consult with an attorney to determine legal options for an adequate defense and the expected expenses. A consult is not a contract, and you can change your mind if you are unwilling to take the risk of a lawsuit.

I suspect the takedown notice is a DMCA takedown based upon a flawed assumption about the law. The hard part is arguing the technical merits of the case before non-technical people. While the takedown notice is probably in error, they could still make a good argument around bypassing their security controls.

You could appeal to the EFF or ACLU. If they are willing to take your case it will be pro bono.


I'm not sure if this has already been suggested, but if it hasn't been, then maybe you could market/describe your add-on as one that prevents sneaky websites from tracking you.

And instead of calling it an anti-paywall add-on, call it an anti-tracking add-on.

If you do that though, you may be asked to explain the list of websites for which the add-on activates. I guess you'll need to have some sort of "valid" explanation for that.


How about stopping development? The fact that you can break the paywall (easily) doesn't give you the right to do it. Current laws (in that country) state that it's illegal, so why do you insist on doing it?

Paywalls aren’t enshrined by any law. You’re misinformed.

The web is public, despite SV’s best attempts to subvert and exploit that. If you don’t want someone accessing information, don’t publish it as a website that anyone can access.


Does that apply to online banking as well?

If it's legitimate for a bank to hide your data behind a username and password, how is a journalism-provider any different?


Does your bank allow your account to be indexed by google?

No.

So is that your fundamental issue with a paywall? Anything that's available to Google (and Bing, DDG, etc) should also be available to you at no cost?

Restating that from the other perspective: if the information isn't universally available for no cost, it cannot be looked up via a search engine?


Uh yeah. I'm assuming you know how HTTP works, but if not: _basically_ you send a request to GET content, and the server decides what (or whether) to return. If a user shouldn't be able to see the content because they haven't logged in, then it's up to the server to decide.

It's crazy to send them the content but tell them not to read it... back to your example, would you expect your bank to do that? Here's all the account details and transactions, but oops, that's not your account. I'm guessing no, you'd hold your bank to a high technical standard.

To be clear, if newspapers/journalists want to work out some special agreement with Google (or other partner/agreed-upon indexers) so their requests are authenticated and only they have access to the content - I think that is a better solution than paywalls and sending the article while saying "don't read this please".


I agree: it's a crazy strategy. And you're correct: if my bank sent my details to someone with a half-baked attempt to prevent them accessing the data they were given, I'd be getting a new bank.

But regardless of how crazy this scheme is, I don't think it justifies taking advantage of that craziness to unwrap such content.

I think it's reasonable to question the approach of banning the plugin too: the problem is the users' choice to use the plugin, not making it available. But ... when there's no justifiable use for the plugin, and the author clearly intends it to be used to view unauthorized content ... I can see that it's an attractive strategy to just ban it.


because there's no username and passwords for paywalls...

The ones I'm thinking of (eg. NYTimes, WaPo, WSJ, etc) are all username/password.

What kind of paywall are you thinking of?


How do you think this add on was working? do you think it was brute forcing the password of all the sites you were thinking of?

I'm less concerned with the implementation than I am with the principle.

In principle, I have reservations about exposing content to search engines but then requiring payment to read it. Especially if it's non-trivial to filter out the sources that require payment.

But a plugin which works around an attempt to restrict visibility of content to those who've paid for it ... I think the intent here is wrong.

I think it's ok to have information that's only accessible to a restricted set of viewers.

It's not that it's not possible. It's not that the implementations aren't dumb. It's that the principle of "if I want it, and I can do it, then it's ok" doesn't really hold up, IMO.


They just bypass the paywalls by pretending they are google, there's no username or password involved.

Okay. So how about respecting the right of those who use paywalls to get money for the content they create?

Just because something is on the internet doesn't give you the holy right of getting it for free.


> Just because something is on the internet doesn't give you the holy right of getting it for free.

Correct. Others have the holy right to charge, and I have the holy right to try getting around it.


Yeah. That's the point. I won't go into the subject of discussing if that is legal or not, but for sure it's not ethical.

That said, you're free to try to get around easy "protections", Mozilla is free to take down your methods for doing that.


You could make moral arguments for and against the companies seeking these protections and even Mozilla itself.

Ultimately businesses will always ruthlessly try to make more money, and software will ruthlessly seek a more efficient user experience.

Often these objectives clash. Spotify is the obvious example that seemed to offer a solution in the music space. But we have yet to discover such a solution in online publishing.


Bypassing paywalls is a "more efficient user experience"? Wow... that's a whole new level...

It pretty easily meets the definition of efficiency- it allows users to do more, in less time, and utilizing less resources.

How about paying? That will let you do more, in less time, and won't require an extra browser plugin, which means it will use even fewer resources than using a plugin to bypass the paywall.

Money = resources

> Just because something is on the internet doesn't give you the holy right of getting it for free.

Why not?


The same reason why I can't just take your lawnmower, even if it's visible in your backyard.

A better comparison is dropping my lawnmower off in your backyard, teasing you with access, and then complaining when you touch it.

You cannot reasonably expect to protect or restrict content with a flawed understanding of the medium in which that content is conveyed.

If you don’t want me accessing it, don’t put it on the web.


Your lawnmower (your website) is in your backyard (your servers). If I go to your backyard (your servers) and get/use your lawnmower (your website), I'm 1) trespassing on private property (the paywall) and 2) using something that I'm not allowed to use (your lawnmower; your website that requires me to pay for the content).

No matter how easy it is for me to go into your backyard (bypass your paywall), it's still an offense.


If I can download your content by simply changing my user-agent identifier, you don't have any security. In this context the backyard is the local computer and web browser. The lawnmower is the content in question, which has already been deposited there. If you don't want the user to access your content, then don't drop it into their backyard. The user isn't trespassing by accessing content left on their property.

More simply: if you don't want the user to have it, then don't give it to them.


In my opinion, the problem is that content owners/producers want a double standard. They want people to pay for access, but they also want their content indexed so people can find it. So if this tool makes the site treat me as an indexer, then so what?

Have you ever been in a conversation where someone talks about something and you said “hey I read this cool article on that, let me send it to you.” If so, guess what - you were the search engine for that conversation. Should you then have access to view the non-paywalled content?

So yeah, I have no issue with this add-on. If they didn’t want the double standard - free access for some and not others - it is easily possible and fully within their control to prevent add-ons like these (think of any admin site or service where you have to log in before seeing or doing anything).

Content producers have a choice and they’re choosing to be bullies. I have no moral or ethical qualms when it comes to dealing with bullies or double standards.

Just my two cents.


> The fact that you can break the paywall (easily) doesn't give you the right to do it.

I would argue, from both legal and technology perspectives, the ability to easily break a paywall provides you the right to do so.


Why? "The ability to do X gives you the right to do so" is not a moral argument in the slightest. In fact, it is inherently anti-moral, because the entire point of moral codes is "just because you can do X doesn't mean you should do X". I mean, if you can't do something, there's no point in prohibiting it, because it can't be done.

Some of us are immoral, or at the very least amoral. Not everything has to be based on morals.

Talking about "the right to do so" is either a legal or a moral judgement, and the parent said this in the context of both a legal and a non-legal (and therefore moral) perspective.

No. One could argue for a right while accepting that there isn't an inherent moral value attached to it.

This isn't really about morality, because if it were there would be a harm. Nobody is being harmed here. This is about technology, specifically content on the web. In technology the ability to do something absolutely is the right to do so unless you have already agreed otherwise.

If you really REALLY want to make this about morality then I would argue restricting access to available content is the greater moral offense, because it is an inherent violation of liberty.


If you make software for a living, you’re likely earning income in large part due to the fact that not just anyone can copy your software for free.

Writing news is work of perhaps a different subject, but it can’t be denied that like software development, a significant amount of real work goes into that. How can you justify that the authors shouldn’t be able to reasonably expect to have control over their work and expect to be paid for it, if their consumers deem it valuable?


> Writing news is work of perhaps a different subject

Don't care, because that is not the subject of this issue, no matter how much you wish it were. The subject is whether or not a paywall is adequate for restricting access to content on a public medium. If that restriction is so easily bypassed, it is clearly ineffective and inadequate. If you really cared about your business and revenue you would solve this problem instead of complaining about failed value assessments.

> control over their work

Journalists don't have control over their work. Publishers do. This is the same failed argument the music and movie industries peddled from around 1998-2008. Instead of wasting your energy on distribution control spend it on what you are good at and return superior value to your work.


> In technology the ability to do something absolutely is the right to do so unless you have already agreed otherwise.

That's just plain wrong. There's nothing special about technology that means you get to throw all your existing moral principles out the window. The ability to technically do something does not confer the right to do it, either legally or morally.

Maybe if I draw an analogy to patents it might make sense? It seems most people in technology agree that taking something unpatentable and adding "do it on a computer" shouldn't make it patentable. Morality is no different. Taking something immoral, like taking someone else's intellectual property without paying for it, and saying "but do it on a computer" doesn't suddenly make it moral. Taking someone else's IP isn't immoral because you're depriving them of the paper it's printed on (that part is just petty theft), it's immoral because you're depriving them of the right to control the distribution of their IP and the right to get compensated for the work they did in exchange for giving you the right to use the IP.


Which moral principle is in violation? You are serving content and then complaining about people reading it. If you are so interested in protecting your IP then why are you giving it to me anonymously? In no way have I altered your means of distribution.

If I choose to serve my IP to Google in full for free, that doesn't mean I'm choosing to serve it to everybody in full for free. Just because you can trick me into thinking you're Google doesn't give you the moral or the legal right to my content.

Actually it completely does. You don’t own my computer, software, or its settings. In accessing your content I am committing no act of fraud or theft, since the user agent string is not a qualified identifier.

Are you directly giving it to Google via encrypted tunnel? I am not redefining how any technology works or making any alteration to your distribution or your service. Yet, you are still giving it to me free and anonymous.

Some people think taxes are immoral, but that doesn’t give them the right or ability to redefine the tax code. HTTP is not immoral.

The real issue here is you are lost and confused because you don’t know how to secure your product. Your negligence in accounting for your business, financial model, and IP is not my lack of morality. Fix your technology.


What makes you think that the technical ability to do something confers either the legal or moral right to do that? Just because it's technically possible for you to trick me into thinking you're Google does not mean it's ok for you to do that. You tricked me into giving you something I thought I was giving to someone else; that doesn't mean you have permission to do this, that just means you tricked me. Neither legality nor socially-accepted morality recognizes that successfully tricking someone confers the right to the results of the trickery. In fact, both are pretty explicit about how this is wrong.

> What makes you think that the technical ability to do something confers either the legal or moral right to do that?

Because I am operating squarely within the specification, designs, and intentions of the technology. When this is not the case there are two possible outcomes: a defect or missing security.

The absence of security or warning thereupon is an implied invitation. This is why, at least in the US, a warning must clearly be stated before trespassing can be enforced. In this case it isn't even remotely confused with trespassing because you are giving me the prize simply by my asking for it without any regard for who or where I am.

We can invent all kinds of fictions as to why this is right or wrong. More important is whether I am violating security (there is none to violate) or violating technology (clearly I am using HTTP properly). Ultimately this distills down to one question:

If you don't want me to access a given content then why are you giving it to me anonymously?

The answer to that question determines why the use of a technology qualifies the usage behavior.

> Just because it's technically possible for you to trick me into thinking you're Google does not mean it's ok for you to do that.

Why is that not okay?

> You tricked me into giving you something I thought I was giving to someone else

I did no such thing, because you never asked who I was. Would you give your car keys to a complete stranger assuming they are a valet without any consideration as to whether they are who they claim? If in this case the car is stolen what is your expected recourse?

> that doesn't mean you have permission to do this

It is perfectly legal to modify my user agent identifier for any reason at any time. It is my computer, my software, and my settings.

> that just means you tricked me

How could I have tricked you? You never bothered to ask. This is how this technology qualifies its use in this way. You made a faulty assumption and are shifting the blame for that assumption. You can continue to be upset about this, but you will continue to hand me the content that you wish to protect without any legal recourse or restitution.

If this were my business I would abandon this irrational commitment to a failed idea and either secure my financial model or just open it up.


> Because I am operating squarely within the specification, designs, and intentions of the technology.

No you aren't. The intent is "Give Google the complete text, put up a paywall for everyone else". The fact that the technical implementation does not perfectly express this intent does not mean you can pretend the intent is different.

> The absence of security or warning thereupon is an implied invitation.

Nope.

> This is why, at least in the US, a warning must clearly be stated before trespassing can be enforced.

This is wildly incorrect when it comes to computers. The CFAA does not require any sort of warning in order to determine that a user has overstepped their authority.

This isn't even true in the physical world. Many states require notice that you're trespassing before you're criminally liable, but "many states" does not equal all states. And even in the states where this is true, the absence of a notice is not at all an "implied invitation", it just means you're not criminally liable if you weren't informed that you didn't have permission to be there.

> If you don't want me to access a given content then why are you giving it to me anonymously?

Because you tricked me.

The fact that you fooled me into thinking you're someone else does not give you the legal or moral right to the results. Neither legality nor morality is a game, where if you can just figure out the right loopholes you can get away scot-free. That's not how it works. Not in the real world, and not in computers either. The word that describes what you're doing is "fraud".

Just because it's easy to trick me does not make it right to do so. Your entire argument boils down to "if you didn't want to be tricked, you should have tried harder". That's not a moral argument. The ease of tricking me does not in any way affect whether it's ok.

As an analogy, let's say I'm a baker, and to celebrate my dear mother Alice's birthday, I decide to give away a slice of cake today to anybody named Alice that comes into my shop. I'm not a suspicious soul by nature, so if someone walks up and says "Hi I'm Alice" I'm inclined to believe them. I'm not advertising this anywhere, there's no sign saying "If you introduce yourself as Alice you get free cake", it's just something I'm doing. If you hear about this (say, you have a friend named Alice and she tells you about the free cake she got), do you think you're morally justified in walking into my shop and saying "Hi I'm Alice"? Sure, the damages are pretty small, but you're still lying to get something you know you're not entitled to, with no justification other than you want it.


I think you are aware but irrationally refusing to accept that HTTP is public and anonymous by default. That is the only defense a person would need to destroy you in court in a counter-suit. Why would you leave your business so openly exposed to this risk?

>> The absence of security or warning thereupon is an implied invitation.
>
> Nope.

You should consult an attorney. There is ample case law on this. This is how the military learned, the hard way, to impose warning banners.

> This is wildly incorrect when it comes to computers.

It still applies. There is legal precedent. CFAA does not apply in this case. In order for it to apply I, as a user, would have to knowingly overstep my authorized privilege level, which is commonly referred to as privilege escalation. This is explained in the Wikipedia article for CFAA.

> the absence of a notice is not at all an "implied invitation"

It is in the case where entry is anonymous and public, such as a store. HTTP is anonymous and public.

> Because you tricked me.

How could I have tricked you if HTTP is anonymous and public and I was never asked to identify myself? I am an unknown stranger like every other requestor.

> The fact that you fooled me

I did no such thing. You never asked who I am. Had you asked, I would have told you, and then you could decide whether to grant me access to the content. Instead you guessed incorrectly. Your bad judgement is not my fraud, because HTTP is public and anonymous.

> Neither legality nor morality is a game

I keep trying to point this out to you but you would rather irrationally maintain your commitment to a failed idea and leave your business fully exposed. I did cyber security work for the military for about a decade, so I am fully aware of what the risks, technology, and laws are. If you were to serve me with a DMCA take down notice for anti-paywall software I would take you to court and I anticipate I would likely win. You really don't want that to happen. Why are you exposing yourself in this way?

> Just because it's easy to trick me does not make it right to do so.

Stupid is not a legal defense. Listen to the absurdity of this when rephrased to say the exact same thing: I am totally surprised that nice looking stranger drove away in my car after I voluntarily gave him the keys. Just who does he think he is? I know this will be all cleared up once the police find my car with the signed title in the glove box proving ownership. It was wrong for him to look like a banker. The imposter tricked me. How morally repugnant.

> Your entire argument boils down to "if you didn't want to be tricked, you should have tried harder".

That understates the absent-mindedness of your position, but yes. It may or may not be moral, but it is quite often the more legally valid point of view, particularly when there is a provable expectation of knowledge and risks.

> do you think you're morally justified in walking into my shop and saying "Hi I'm Alice"?

Yes, because free cake is great (since you're offering), and I think you will run out of cake. If I were the baker I would attempt to validate the person by asking for a phone number, address, or to see their ID. Stores already do this, which is why your scenario is absurd. If this scenario were a real thing it could backfire by resulting in a potential discrimination claim.

eridius 6 days ago [flagged]

> I think you are aware but irrationally refusing to accept that HTTP is public and anonymous by default.

That has literally no bearing whatsoever on the morality of pretending to be someone else in order to get access to IP that you don't have the right to access. This is the second time you're calling me irrational, and it's not acceptable.


I am not pretending to be anybody. I am anonymous. This is how HTTP works. That is why you should either be asking your users who they are or stop pretending to care that it is important.

Rationality - https://en.wikipedia.org/wiki/Rationality

I used the word irrational in the context (both times) of commitment. This is an accepted description in behavioral health literature and does not address you as a person. It is also not polite to accuse people of trickery (fraud) when there is no evidence of such.


Please don't do tedious tit-for-tats on HN.

> Please don't do tedious tit-for-tats on HN.

Please don't paint both parties to an argument with the same brush. I was on my best behavior for this argument and did not engage in a "tit-for-tat". I do not appreciate being officially reprimanded and having my comments flagged when I did nothing wrong.

I'm not expecting a response (heck, I'm expecting this comment to be flagged for arguing with a moderator), I'm just hoping you'll pause for a moment and realize that heavy-handed moderation is counterproductive; if I'm punished for being on my best behavior, what incentive is there for me to care about my behavior at all?


Whenever we get one of these snippy exchanges going far to the right on the page, circulating uselessly and boring everybody else, both parties are to blame for keeping it going.

Any time you find yourself writing things like "This is the second time you're calling me irrational, and it's not acceptable," you've passed the kind of discussion we want on HN. Even if it's true that the other person was behaving as badly or worse.

https://news.ycombinator.com/newsguidelines.html


The point of that comment was to signal that the discussion was over. I agree that the discussion had passed a line; that's precisely why I wrote the comment, and why I ignored the reply. I was self-moderating. It seems both ironic and counter-productive for a moderator to yell at me for ending the discussion, with the explanation that the discussion should be ended.

But really, it's not the fact that this comment was flagged that bothers me (though being accused of "tit-for-tat" behavior still rankles). What really gets me is my previous comment (https://news.ycombinator.com/item?id=18604446) is flagged as well, for no reason I can see.

As before, I'm not actually expecting a reply. I know you want this whole discussion to be over, and I'm fine with that. I just want you to spend a few seconds thinking about what I'm saying here.


Ok, I've unflagged that comment.

It's far from clear that draw-on-top paywalls qualify for the protection offered by either the DMCA or the CFAA. Even if they did, the mechanisms by which this specific extension works are indistinguishable from actions users take for many other reasons. The legal notice is a gamble that the extension author doesn't want to spend the money to find out the hard way.

Stop enabling the destruction of journalism?

That's an entirely serious suggestion. I'm not arguing that what you're doing is illegal. I have all sorts of qualms with copyright and its enforcement myself. If I were "copyright czar", movies would get three years, and newspapers maybe a week.

But if we wish for copyright-holders to respect the rights of internet users, it would seem to be a winning proposition to maybe not actively work against their attempts to find a compromise.

And a "soft paywall" that allows you to read a dozen or so articles per month without subscription seems to be just that: a compromise, and a reasonable one at that.

What do you expect to be the long-term outcome of bypassing such mechanisms? I have trouble thinking of anything but "soft paywalls" turning into "hard paywalls". Then, we'll be left with the maybe one or two publications we subscribe to. How can such an outcome be in anyone's interest?

I know you can bypass these schemes. Yes, they are laughable. I know ads are sometimes annoying. I know large swaths of the press have earned your scorn because they, like, use the wrong JS framework. Or something.

But I still don't understand this attitude that appears to go even further than just "I want it for free" to an almost gleeful appreciation of vandalism destroying the foundations of democratic societies.


I have a few responses to this, but the most basic is: please don't tell people what to do on their computers.

If your business model is not supportable except by other people willfully hiding information on the client computer, your business model is not good.

"Avoiding a paywall that's only half implemented" is not "the destruction of journalism."


You're willing to wager an awful lot on the proposition that there is another business model, even though both you as well as thousands of now-bankrupt publishers have failed to find it.

Also: your own business model of "living" is just held up by the completely artificial rule of others not taking everything you own. Just because "it's on the internet" doesn't make "might is right" a true proposition.


> You're willing to wager an awful lot on the proposition that there is another business model,

No, for a business model not to work doesn't mean that there has to be another business model that will allow you to run exactly the same business. If I need to fish with dynamite to make fishing in a particular area profitable, and dynamite becomes illegal, it's no one's responsibility to come up with another way I can fish.


I don't have a business model based on living as a general rule, or elsewise some people owe me some money :)

And no, that's not a fair analogy; it's me saying that if you tell me something, I am not bound by your instructions to forget it.

You simply have to use something as simple as a lock and key and I will be forever stopped from knowing the thing (in reality).

All news orgs have this option; they simply want to have their cake and eat it too: working with popular search engines while also charging users for information that they effectively already have.

If you want to operate a private service go for it, nobody is going to complain about it, and any unauthorized access will be met with scorn (at the least.)

edit: also, I am not willing to wager on failed business models all day - clinging to things that don't work isn't a way towards success.


> Stop enabling the destruction of journalism?

My two cents is that advertisement is what is killing journalism.

YouTube, for example, can show advertisements for well known companies in videos about Anti-vaccination, far-right conspiracies, etc. without consequences.

Why is that? Because all of that happens in the privacy of your own computer. Any newspaper that publicly printed such bullshit in its pages would usually be dead; the public would react to it.

What is different? Facebook, YouTube, etc. are personalized. You are shown what you are interested in without public accountability. Niche radical content gets a lot of views for its own controversial nature. Views and money.

Who wants to investigate, hire good writers and expend the money that it takes to write a good article when you can hire some one without ethics for a fraction of the price and get as many or more views as radicalization grows?

YouTube, Facebook and others say that they are not responsible for the content they offer. I think that should be true for things like comments. But for the monetized content they are 100% responsible for incentivizing that radicalization and killing good journalism in the process.


> YouTube, for example, can show advertisements for well known companies in videos about Anti-vaccination, far-right conspiracies, etc. without consequences.

It is a myth that there is any problem with ads being shown on controversial videos. YouTube demonetizing select videos is nothing more than a way to strangle independent media and help gentrify their platform.


Multiple news orgs have stated in public that they don't care about paywall bypassers, because they put up the paywall to convert only the most amenable readers. This is very similar to the paper distribution model, where an extreme minority of subscribers represented the majority of revenue due to extreme discounts and other subsidies given to most readers.

That's true, but it's also on the assumption that the paywall adds just enough friction and reminds readers "you know, if you like this, eventually you might think about paying for it." Some will. Some won't.

Automating the process by pretending to be Google means the user won't even see that friction or messaging. It's clever, but it breaks the premise.


The friction is installing the addon.

Then these websites should block search engine crawlers from accessing their paywalled content, which basically moves the content off the clear web and onto the deep web (which is not the dark web; there's a difference).

If they can't accept that compromise and not have these paywalled articles indexed, then the problem is on them.


They can even have a summary or introductory paragraph indexed while designing a paywall that way. What they can't have is search engines able to read the whole article and not everyone else.

I think this is fair, and so do the search engines. Google calls doing otherwise "cloaking" and says they penalize the ranking of sites that do it. Perhaps they're not doing so effectively enough.
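To make the non-cloaking design above concrete, here is a hypothetical server-side sketch (not any publisher's actual code): the response depends only on subscription status, never on the user agent, so crawlers and anonymous readers see the same indexable preview and spoofing Googlebot gains nothing.

```python
def render_article(article, is_subscriber):
    """Serve the same preview to crawlers and anonymous readers alike,
    so the indexed copy matches what a click-through actually shows
    (i.e. no cloaking). The full text goes only to subscribers."""
    if is_subscriber:
        return article["body"]
    # Only the introductory paragraph is visible to everyone,
    # including search engine crawlers.
    preview = article["body"].split("\n\n")[0]
    return preview + "\n\n[Subscribe to continue reading]"

article = {"body": "Lead paragraph.\n\nThe rest of the story."}
print(render_article(article, is_subscriber=False))
```

Note there is deliberately no user-agent branch anywhere; that absence is the whole point of the design.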


[dead]


> Journalism has never been one of the foundation of democratic societies.

So I'm assuming the news of what the people you voted for are doing gets to you via... diffusion?

> . The united states and democracy itself existed far before journalist.

You should sue your school for the permanent damage they did. Both in terms of grammar as well as history.

In any case: if journalism is so useless, why are you jumping through hoops to read it?


I’ve responded elsewhere in this thread but I’d like to discuss the double standard being applied.

Have you ever been in a conversation where someone talks about something and you said “hey I read this cool article on that, let me send it to you?” If so, guess what - you were the search engine for that conversation. Should you then have access to view the non-paywalled content?


Change is hard. You can fight it or you can embrace it.

Sometimes embracing change requires letting go.


You do realize this argument is so vacuous, it would apply to someone torching your house, right?

Simply because the words could be applied to another situation doesn’t mean no thought was put into them. So no, I do not realize that the argument is vacuous.

I’ve simply stated that you can either fight change or embrace it.

In this context I mean that many web users do not want to view their content with advertisements or be tracked through advertising. Attempting to force them to is to push against user behavior (fighting change).

I don’t believe users will change their behavior if you try and force them to.

I believe that by embracing the users behaviour we can learn how to make it work.

I think that might require letting go of our predetermined notions of what “journalism” is and how it should be funded. I think we need to be more open to reviewing why people paid for journalism in the first place and how the need it fulfilled is met today.


Harmful changes can and should be fought. Journalism plays a vital role in democracies. Journalism is expensive. Someone needs to pay for it.

“Harmful changes should be fought” - Harmful is a pretty context dependent word in this situation.

“Journalism plays a vital role in democracies” - what role is that? (I’m not disagreeing, I simply want to know your opinion.)

“Journalism is expensive” - does it need to be? Why is that?

“Someone needs to pay for it” - This seems quite vacuous. If the thing is valuable then someone will likely pay for it (if they can be convinced of the value and it’s realized).

It’s not readers who pay for it now though, is it? It’s advertising companies, since they buy the ads. We’re not asking people to pay for it; we’re asking them to be visually and audibly distracted into buying products and services in return for journalism.

Perhaps this is not a fair deal for many anymore?


This extension just seems to strip tracking data and pretend to be a Google bot. It baffles me that this is somehow concerning enough to be taken down. And anyway, isn't making exemptions for Google's robots sort-of against their policy?

https://support.google.com/webmasters/answer/66355
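For illustration, the technique described above (this is a sketch of the general idea, not the add-on's actual source) amounts to rewriting a handful of request headers before the request is sent: drop tracking headers and replace the User-Agent with a crawler identity. The specific UA string and header names below are illustrative assumptions.

```python
# Illustrative Googlebot UA string; sites typically key off the
# "Googlebot" token rather than the exact version.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

# Headers commonly used for tracking or metering (hypothetical list).
TRACKING_HEADERS = {"cookie", "referer", "x-forwarded-for"}

def spoof_headers(headers):
    """Return a copy of `headers` with tracking data stripped and
    the User-Agent replaced by a crawler identity."""
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in TRACKING_HEADERS}
    cleaned["User-Agent"] = GOOGLEBOT_UA
    return cleaned

original = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:64.0) Firefox/64.0",
    "Cookie": "metered_count=11",  # a soft paywall's article counter
    "Referer": "https://example-paper.com/front-page",
}
print(spoof_headers(original))
```

In a real WebExtension the same rewrite would run inside a request listener rather than as a standalone function, but the transformation itself is this simple, which is why the takedown seems disproportionate to many commenters.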


That's exactly what I thought too; not even "sort-of", it seems "without a doubt" against that policy. It seems really tricky for Google to enforce, though. Wouldn't they need to send out bots crawling without the GoogleBot agent? That also seems wrong, somehow.

If I recall correctly, this rule is why googling for an article and then clicking on it from Google frequently works.

The rule is something like "a user clicking from Google can't see something (meaningfully) different than what the Googlebot sees". So you can paywall direct traffic or links from Reddit or internal links, but that first one from Google is supposed to work.


It does seem wrong. I wonder if they could double crawl and penalize sites that throw up a paywall.

They could add the ability to report sites that do this. That way there would be fewer sites to double crawl.

There are exceptions for paywalls.

Is there another use case for cloaking?

You are not allowed to circumvent paywalls, just like you are not allowed to circumvent locks, no matter how easy they are to circumvent. As far as I have always understood, you may publish a locksmithing guide, even an “entering your home when locked out” guide, but not a “how to commit burglary” guide.

It seems that there is a Canadian case that confirms the validity of paywalls as a "technical protection measure". While in that case the circumvention was social (obtaining a copy from another subscriber), I doubt circumventing it via user agent spoofing would fare better.

https://tcllp.ca/2016/05/the-copyright-act-and-paywalls/


A paywall is not a lock. It is code that someone else wants to ship to my computer to run. It's up to me to decide what parts of it I run or not.

No, it’s not. The jurisprudence is pretty clear.

the jurisprudence of whom? In my own jurisprudence paywalls are viruses.

The US of course. Look at the list of sites the plugin works for: none of them cares if you ignore their paywall. The likelihood that you would subscribe is negligible. It’s American readers they are after. And for them the jurisprudence is clear. No deliberate subversion of access controls allowed.

If that's the case Mozilla has no business banning the addon for everyone...

Mozilla is subject to US law. Mozilla has no obligation to enable certain addons only for visitors from certain countries.

Folks, you can downvote me, but that doesn’t make what I’m saying false. Look at the law cited in TFA.

The publishers want it both ways. They need to publish their full article content for search crawlers to read but don’t want humans to see it. The idea that this is a form of DRM protected by the DMCA is laughable in my opinion, but IANAL.

They mostly just want to survive...

But in this case, they specifically want to allow you to read maybe a dozen articles per month for free, but also (see above) to eat.


Criminals also just want to survive. That doesn't make their immoral business model legitimate.

Difference is, while we do want the publishers to survive as publishers, we do not want the criminals to survive as criminals.

But do we really want the publishers to survive?

Not using immoral methods (like advertising/tracking) to generate income.

This article is about paywalls, which tend to generate income from subscriptions.

Criminals don't perform a useful service for society. Journalists do. Journalism is flawed, but flawed journalism is better than no journalism.

>but flawed journalism is better than no journalism.

I would disagree. Flawed journalism means things like mixing lies into news stories, mixing opinion pieces into news, etc. That kind of stuff can do more harm than good.


A little information is a dangerous thing.

---

Sufficiently flawed journalism is perhaps one of the most severe problems in our society. People are fed half-truths, and at times simply false information, and then go on to repeat it and base their world views upon it as if it were fact. And in the era of social media, false and sensational views spread incredibly rapidly while any 'corrections' are basically dead on arrival.

To give a very recent example of this, you probably read about a shocking new study showing that the oceans are actually heating up 60% faster than we thought. It turns out the study was mathematically flawed, and its actual numbers are not particularly different from what other studies have shown [1][2][3]. The error was detected just hours after publication - apparently not quickly enough to stop the shocking headlines. This is not an implied comment on climate change, but rather just a very recent example of this phenomenon of people being misled without their views ultimately being corrected, one that happens to involve climate change.

The sites that rely on 'teasing' and then paywalling are some of the worst offenders in the state of journalism today, and in my opinion the world, let alone society, would be in a much better place without them.

[1] - https://www.sandiegouniontribune.com/news/environment/sd-me-...

[2] - https://www.latimes.com/science/sciencenow/la-sci-sn-oceans-...

[3] - https://judithcurry.com/2018/11/06/a-major-problem-with-the-...


We can all agree that rewriting science department press releases is a bad idea and that journalists often get it wrong. But I wasn't referring to shoddy journalism, naked propaganda, click bait, or low-quality churnalism. My point was that no journalist or publication is perfect, but some are good enough, and what they do is expensive, necessary, and deserving of support.

That wasn't the point I was making at all.

What I was saying is that journalism, in its current state, often runs sensational stories that later end up being false or otherwise unsupported. That report I mentioned was not about the media rewriting a science publication -- the publication itself was published with errors in it. The media just accurately repeated those falsehoods. The problem is that, due to the nature of social media, the sensational stories get let's say a million hits. Later on they either update the story to more accurately reflect the truth or run a retraction. These 'corrections', by contrast get maybe a hundred hits. So you have people who have wildly broken worldviews largely because they take the media at face value.

The following quote from Thomas Jefferson is dated 1807, but it's now more relevant than ever simply because mass media is resulting in mass misinformation: "Nothing can now be believed which is seen in a newspaper. Truth itself becomes suspicious by being put into that polluted vehicle. The real extent of this state of misinformation is known only to those who are in situations to confront facts within their knowledge with the lies of the day. I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors. He who reads nothing will still learn the great facts, and the details are all false."


You make a good case for state-supported journalism, not for individual respect of paywalls.

That's assuming that laws are the perfect arbiter of what is useful and what is not. Some useful and moral activities are or used to be criminal acts.

The purpose of the law is not to determine what is useful and what isn’t. People can decide that for themselves.

How is that remotely relevant to the topic under discussion? I mean, it's true that laws are sometimes unethical or mistaken. But that has nothing to do with whether journalism is useful and worth protecting.

What thing, that was objectively and indisputably your property, has the Wall Street Journal robbed you of at gunpoint?

Many modern day "journalists" have robbed me of journalism.

Not all criminals steal others' property; they're more like drug dealers that try to get you hooked with a couple of free hits^.

^ Not something actual drug dealers ever do.


In western societies, nobody commits crimes to literally survive. And, if they do, it is generally accepted as an excuse.

(c.f. the bishop of Cologne, Germany, regarding the theft of basic foodstuffs during the famine immediately after WW2)

A wish to survive is also an accepted excuse in many other cases of behaviour that would otherwise be considered criminal. You can even kill someone and get out of it with a credible claim of self-defence.


Then implement technological measures to make sure that this is the case. At the very least, don't just make data freely available to the user and expect them not to read it, just because you visually hid it! And if you're serving non-paywall pages to Google bots, that's kind of against Google's policy, isn't it?

Not any more. There were changes.

Their business model has been broken for years. It’s not incumbent on me to help them survive.

Yes, but considering you seem to be keen on reading their content: have you considered that their survival may, just possibly, be in your interest?

That's a great discussion to be had anywhere but on an HN thread about how those same organizations are trying to create a precedent for limiting the kind of software you can choose to run on your own computer.

These organizations are making the issue harder and harder for themselves by consistently arguing for insane concessions from society and other companies after they were caught napping during the biggest technological revolution in their business since the printing press. We should definitely have discussions on how to have a healthy press in the digital age, but that should never include putting up with bullshit like this.


If he felt that way, he would probably have a subscription. I will reiterate what several other posters have said: it's up to publications like WSJ to ensure their content isn't available to unauthorized users. If they're sending content out, they should not be upset that it's being consumed.

I've read stormfront threads, too. Additionally, it just seems incoherent to me to pay for things that I read; I pay for things that I want other people to read.

>They need to publish their full article content for search crawlers to read

At first, I thought it might be possible to solve this by getting search engines to standardize on a vector format that they could accept for protected content. So the crawler sees a 300-dimensional vector that effectively gives a semantic summary of the document.

But then I thought content providers could achieve a substantially-similar effect by just serving their documents to search engines in scrambled (e.g. alphabetized) form. They could still provide normal headlines to get them clicks.

BUT THEN I thought it would be a really cool and bizarre problem to circumvent this by attempting to devise a method for finding the most-probable original document given its alphabetized version.


Anyone should still be able to consume the web as if they were a search engine.

Could publishers index their own content and expose that through a common querying API that anybody could plug into ?

So Google could still deliver ranking and publishers put their content behind their walls.


This breaks the social contract of the web. If you publish on the web (over HTTP) using HTML (with or without css+js) then you respect the concept of a User-Agent that renders the content on behalf of the user. If you wish to get around this then create another file format and protocol and application/plugin to do so — one that enforces the DRM for the owner. Don’t take agency away from User-Agent.

Precisely this. Website operators who wish to establish effective paywalls with publicly indexed content need a way to cryptographically authenticate web crawlers. That means working with search engines to establish a standard for registering and authenticating authorized crawlers.

If Google isn't interested in such a standard (because it reduces the quality of open search results) then that's the publishers' problem, not Google's or users'.
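Such a standard partly exists already: Google documents verifying a claimed Googlebot by reverse DNS lookup on the requesting IP, followed by a forward lookup to confirm. A sketch of that check (the resolver functions are passed in as parameters so the logic can be exercised without live DNS; in production you would pass `socket.gethostbyaddr` and `socket.gethostbyname`, and the sample IP below is illustrative):

```python
def is_verified_googlebot(ip, reverse_dns, forward_dns):
    """Verify a crawler claiming to be Googlebot.

    1. Reverse-resolve the IP; the hostname must end in
       googlebot.com or google.com.
    2. Forward-resolve that hostname; it must map back to the same IP.
    """
    try:
        hostname = reverse_dns(ip)
    except (OSError, KeyError):  # socket calls raise OSError; dict stubs raise KeyError
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return forward_dns(hostname) == ip
    except (OSError, KeyError):
        return False

# Stub resolvers standing in for socket.gethostbyaddr / gethostbyname:
fake_ptr = {"66.249.66.1": "crawl-66-249-66-1.googlebot.com"}
fake_a = {"crawl-66-249-66-1.googlebot.com": "66.249.66.1"}

print(is_verified_googlebot("66.249.66.1",
                            fake_ptr.__getitem__,
                            fake_a.__getitem__))  # True
print(is_verified_googlebot("203.0.113.9",
                            fake_ptr.__getitem__,
                            fake_a.__getitem__))  # False: unknown IP
```

The point is that this check is done server-side on the publisher's end; no browser extension can fake a reverse DNS record, so a user-agent string alone stops being enough.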


Alternatively, the add-ons break the social contract, whereby content creators request payment for their content.

If you visit stuff on the web, you respect the concept of a social contract between you and the provider. Don’t try to circumvent their wishes.


HTTP is public by default and SMTP is private by default. The laws respecting this in the US are pretty solid, especially for email. In the US, privacy generally trumps public access, which allows email to retain its private-by-default status even if served over HTTP (web mail).

That said, anybody attempting to make parts of content non-public over HTTP without appropriate security restrictions voids the protections of the law. The remedy for this is to ensure the necessary security is in place so that non-public content is limited to account and session, which then puts part of the burden on the end user to ensure they restrict the credentials that provide access.

Content, not already covered, served over HTTP without the necessary security in place is not private.

There is a clause in the DMCA that makes it illegal to circumvent security controls. The stipulation there is that the security controls in question must be adequate and reasonable. Dropping a CSS modal over some content is not a valid security control, and thus has no legal protection. Of course, everything regarding the DMCA is open to argument at legal expense.


Attorney here! (Not providing legal counsel, though.)

It’s a common misconception that the word “effective,” as used in the respective legislation, was intended to mean “successful” or “unbreakable.” Rather, it’s always been interpreted to mean “is designed to have the effect of.”

The fact that the measure is not always successful or is breakable does not make the law no longer applicable; that would be an absurd result. The law isn’t interpreted literally when the result would be absurd; this is doctrine that goes back to at least 1892 and even further back to English common law. See, e.g., Holy Trinity Church v. United States, 143 U.S. 457 (1892).

Engineers too often assume that the text of the law means what they think it means. This is one of those cases where a lack of legal education serves them poorly and leads them to incorrect conclusions.


Quick question. If one group intends a grossly inadequate effort to be a layer of security and another party ignores that effort, who is in error? It could be argued that if the supposed security measure is so poorly conceived, the bypassing party presumed it to be an error and thus used their software knowledge to mitigate the defect. On one hand there is an intentional barrier to restrict access to content, while on the other hand the intention is to serve content, and so the bypassing party assists that service intention.

I am of the bias that a space of technology cannot be summarily redefined by a single group interest merely to compensate for their financial insecurity.


To make a long story short, the law to date has always been on the side of the property owner/rights holder insofar as these sorts of questions are concerned. You can’t presume that just because the door is swinging open in the breeze that it constitutes an invitation to enter or that you can fix the defect on someone else’s behalf without repercussions. Context matters in the law, just as it does in the real world.

> If you visit stuff on the web, you respect the concept of a social contract between you and the provide.

If I send a site an http GET request, and that site responds by giving me some data, then that site has given me implied consent to look at the data. If they don't want me to look at it, they shouldn't have given me it.


^ THIS! I don’t see how the situation is in any way more complicated. Real-world analogies can get pretty squirrelly but it would be quite literally like the following.

Man standing on the sidewalk outside a store yells into the store at the shopkeeper “hey there, can I have some free fish?”

Shopkeeper yells back “sure!” and tosses a fish to the man on the sidewalk.

Add more yelling back and forth for TLS handshake.

At this point, the man on the sidewalk can do whatever he wants with the fish. It was freely given.


Off topic: isn't this how user privacy works too though? People complain about companies like Google and Facebook "stealing their data," but it's their browsers sending out the data. If users don't want Google to have the data then they shouldn't send it to Google.

This could have implications for Google.

Basically this example, if accepted in law, would mean that if you want your data to be private then Google would be expected to respect that, even if you sent them that data in the clear.

It's a ridiculous outcome.


Yes it is, and that's why several browsers have been working hard to protect user privacy without impacting convenience too much.

I walk into a shop, the shop keeper holds the door open for me. I place an article in my pocket and leave without paying.

Presumably, there's implied consent for me to do that? Yes?


> I place an article in my pocket and leave without paying.

The analogy would be if the shopkeeper (i.e. web server) picks up an article and puts it in your pocket.


"We're sending data to your computer but you mustn't look! Promise!"

I'm not sure how the addon worked, if there's some trickery going on, but I'm long waiting for a lawsuit against ad blockers that could get heated.


This has nothing to do with ad blockers, or indeed advertising at all. The way it works is they set the user-agent to one used by search engines. The website thinks you are Google, so they give you the content for free because they want Google to index their proprietary content.
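Mechanically, that's just one HTTP header. A minimal sketch of constructing such a request (the user-agent string is Google's published Googlebot token; the URL is a placeholder and nothing is fetched here):

```python
import urllib.request

# Googlebot's published user-agent string; sites typically match on
# the "Googlebot" token rather than the exact full string.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

req = urllib.request.Request(
    "https://example.com/article",  # placeholder URL, not actually fetched
    headers={"User-Agent": GOOGLEBOT_UA},
)
print(req.get_header("User-agent"))
```

Any site that decides "Googlebot gets the full article" based on this header alone will serve the full article to anyone who sends it.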

How long before Google switches to digitally signed crawling requests?

> Alternatively, the add-ons break the social contract, whereby content creators request payment for their content.

That's what access control is for. If you don't secure your content, you don't get to whine that someone is reading without paying.


> Alternatively, the add-ons break the social contract, whereby content creators request payment for their content.

The site is free to not emit any content before payment is assured. The reason these add-ons are possible is that these sites are trying to have their cake and eat it too. They want to implement the paywall in the user agent so they can still get their content into Google. At the same time, some of them are trying to argue for payments from Google for linking to their content. The situation is a mess, but it's not about social contracts at all.


Anybody who tries to show me ads clearly respects neither my time nor my money, so why would I respect theirs? Anyway, getting angry at ad-blocking is pretty silly - it's as if someone threw a pamphlet at me while I was talking to them, and then yelled at me because I didn't read it.

These are competing and potentially incompatible social contracts. As a hacker type who's been on the web a good while, I prefer the former: the client decides how to render content, and the server should not send content it doesn't want rendered.

This doesn't necessarily prohibit the use of paywalls: the paywalled site just has to be designed so that it only sends content to clients that have provided evidence of payment rather than relying on the content being rendered in a way that prevents the reader from reading all of it.

Much as I'm for standards-compliant HTML rendering, I don't want any specific rendering encoded into law.


Sure...except that the page content is loaded before any payment is made or confirmed. Websites probably want this to happen so that some user agents will not be blocked by the paywall e.g. search engine crawlers.

Basically it sounds like some publishers want to rewrite the rules of the web to neatly and conveniently serve their interests.


Why aren't they using proper access control, then?

It's obviously not a "request" if they're making legal threats. And it's not payment they're demanding, it's the ability to track anybody who visits their site, whether or not you agree to be tracked. And it's not part of any social contract I agreed to; the (very short) history of the web is an ongoing negotiation. Paywalls were invented this decade. This plugin is part of that negotiation.

Browsers already have a mechanism for authorizing access to content. Many mechanisms, really. If a company chooses to use an unreliable mechanism, I don't think we are morally obligated to roll over and do what they want.


This is mostly about "soft" paywalls, allowing you to read maybe a dozen articles per month without paying.

There is currently no way I know of allowing this functionality that is meaningfully "reliable". At least not without erecting new barriers such as mandatory registration (and verification).

So you, and everyone else, seem to be demanding publishers just switch to "absolute paywalls". You will then have strictly less access than before.

How is that supposed to be better?


I'm not demanding publishers switch to anything. Personally, I like the soft paywall system. I also think people who bypass it were never going to pay money, so I think publishers freaking out about people opening up an incognito window are wasting their time.

But if publisher do want absolute control over who views their content, then they should make people log in. I think that would be a mistake, but it's their mistake to make. I'm just opposed to give them legal control over what I do with my computer and any data their servers freely give me.


Prohibiting paywall bypass software violates the principle that the client dictates how web pages are rendered, or requires carving out a special exception just for the publishing industry.

Viewed under a magnifying glass, it seems like a good thing to ensure the publishing industry can use soft paywalls, but taking a broader view, I think that breaks the web. I don't think soft paywalls are worth breaking the web, and even more broadly, governments should only regulate what users can do with data willingly sent to their computers in very exceptional circumstances.


This seems to be a classic example of an argument in which the ends don't justify the means (the means being allowing the lawsuits, the ends being keeping publishers from switching to "absolute paywalls").

It may mean that without a harsh litigious method, publishers do not offer "soft paywalls". It's more likely they will continue to do so to avoid being shut out of the market, AND we won't have waves of frivolous, harsh lawsuits bankrupting independent developers.

The alternative you're proposing is, in effect, to be held to ransom by content providers. Your argument could equally be made for any form of DRM or appeasement (without root, how do we know you are not recording this; without your location, how do we know you are within our licensing area; without direct retinal scans, how do we know who is really watching; etc.).


Ideally, it would hasten society’s realization that the flow of information should not be subjected to capitalism.

I agree but unfortunately in reality information costs money to discover.

> Paywalls were invented this decade.

Compuserve paywalled all of their content way back in the 1980s.


Sorry, I specifically mean the "soft" paywall, which is the kind at issue here. If these publishers had required people to log in, no browser extension could bypass that.

> Paywalls were invented this decade.

Paywalls were used in the adult industry even before the 2000s.


They can request it. I don’t necessarily agree to it.

Would you make the same point for an extension that allows users to easily participate in DDOS attacks?

Edit: the link, for the willfully dense, is that some protocol specification does not imply its users’ acceptance of any and all (ab)uses possible. It’s completely different only in that you like one but not the other.


That's completely and utterly unrelated to what the guy said. He is talking about user-agents and the way you interpret the data you receive from a http(s) request. You're talking about DDOS attacks. There is no logical link here.

Uh... yes? It's not very relevant, but I guess if someone was making some kind of hacktivist extension that did something like that, I wouldn't just take it down from the marketplace.

There’s also a social contract about respect for the law. It’s arguably much older. It involves aspects like “following the spirit of the law”. Also “technical ability is not the same as permission”. It’s why the old shopkeeper’s waning eyesight is no license for you to take from the produce displayed outside.

(This is mostly to show that I, too, can argue with invented “contracts”).


If you’re selling something called wood, and I use a saw to cut it, I expect it to work because I paid for wood. That’s the implied contract. If the person who sells the wood puts some foreign substance in it to circumvent my saw I should not get in trouble for finding another way to cut it.

I pay for my internet connection and I expect others who make use of the same network to follow its protocols. I expect HTTP to work as HTTP. If a bad network node chooses to circumvent how it works and I find a way around their circumvention then I should not be punished. On the other hand, if I republish and break copyright it is an entirely different matter.


If I notice the shopkeeper gives discounts to people wearing hats, is it wrong for me to come back wearing a hat for the discount? If I notice the WSJ sends different content to different user-agents, is it wrong of me to change my user-agent?

Broad baseless claims are pedantic.

> There’s also a social contract about respect for the law.

Which law, and as it applies in which context?


There's also a social contract with government, such that laws are supposed to be thoughtfully written to achieve societal optimums, and rebalanced when that's not the case.

In practice, American legislators have to raise large amounts of money just to stay in office. So it's unsurprising that they listen mainly to the concerns of people with money: https://www.vox.com/2014/4/18/5624310/martin-gilens-testing-...

We are in an age where information distribution is approximately free. And where the costs of producing information-based products has declined drastically. Paywalls are not any sort of law; they're an attempt by information producers to have their cake and eat it too. They want the abundance of the new, computer-driven world. But they want artificial scarcity, so that they can charge like they did in 1970. My sympathy is with journalists here. But that doesn't mean I have to roll over and let them break the web.

The laws in question here, the DMCA and the CFAA, are both too new and too old to be reflexively bowed to with "OMG spirit of the law!" Too new because they are obviously part of society's attempts to deal with technology, and are equally obviously driven by the interests of people who have piles of money and are trying to protect those piles. And too old because both, but especially the CFAA which was written in 1984, reflect a very early understanding of what computers are, what they're for, and how society should treat them.

As far as I'm concerned, this is an ongoing negotiation. We need to find ways for journalists to get paid. We need to find ways to have a much more informed populace. We need to preserve as much as possible of the freedom computers and networks have given to every individual.

If we want paywalls to be a part of law, then we should have a national debate about whether publishers deserve a special protection so they make all of their content available, let it be downloaded to any computer in the world, and then have that computer enforce any restrictions they think are good ones. We are not obliged to just concede that debate right at the beginning just because you think reading an infinitely replicable article already on your computer is exactly the same thing as stealing a unique piece of fruit from a blind man's store.


> technical ability is not the same as permission

Exactly. 99% of the arguments here are essentially equivalent to "since Marriott/SPG are horrible at cybersecurity, I am ENTITLED to use stolen credit cards from their leaks."
