- provide an XPI (that will only benefit the few, since the side-loading process is made more awkward at every point release)?
- fight it? If so, on what grounds and how?
- something else?
> - provide an XPI (that will only benefit the few, since the side-loading process is made more awkward at every point release)?
You could get your add-on signed as an "unlisted" add-on and upload it to GitHub as a "release" (for which you can upload binary files). This would make the add-on less easily discoverable (than it had been), but at least all Firefox users could install it without much of a hassle. OTOH if the add-on becomes popular again, take-down requests might be sent to you or GitHub.
 By unlisted I mean not publicly distributed via AMO; there's no restriction on distribution via other channels. See https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Dis...
So yeah... I can provide an unsigned copy of my add-ons to users... but unsigned add-ons are by design awkward to side-load :/
I'm looking forward to capability-based systems with microkernels, since they really improve on the situation, but I think it'll take a while until we get some major ones. Maybe Google's Fuchsia will establish itself soon enough, who knows? (We'll also have to see what can be done about hardware security, since any software mitigation could potentially be rendered useless by insecure hardware.)
Linux gets there a different way. The standard way to install applications is from the package manager and essentially all of the applications in there are trustworthy (because they're all open source and if they did anything seriously user-hostile, someone would fork it and that version would be the one in the package manager). Meanwhile the package managers do actually add nearly everything that isn't user-hostile, so the need to install anything from another source, while still possible, is rare enough that most people never have to do it.
And binaries downloaded via web browser don't even have the execute bit set by default. You can still do it, but you have to know how, and the people who know enough to know how to do it generally know enough to be suspicious when doing it.
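To make the permission mechanics concrete, here's a small sketch (using a temp file to stand in for a freshly downloaded binary, since the point is just the permission bits):

```python
import os
import stat
import tempfile

# Simulate a freshly downloaded file: it is created without the
# execute bit, so the shell refuses to run it directly.
fd, path = tempfile.mkstemp()
os.close(fd)

mode = os.stat(path).st_mode
print(bool(mode & stat.S_IXUSR))  # False: no execute bit yet

# The user has to mark it executable explicitly -- the equivalent
# of running `chmod +x file` in a terminal.
os.chmod(path, mode | stat.S_IXUSR)
print(bool(os.stat(path).st_mode & stat.S_IXUSR))  # True

os.remove(path)
```

That extra explicit step is the speed bump: you have to know it exists before you can take it.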
And even then, things typically run as the user rather than root/administrator, so they can't alter the system or anything other than that one user's (presumably backed up) home directory.
Having even more granular permissions would be even better, but restricting the harm to one home directory of one user, and only in cases of users who are at the same time knowledgeable and stupid, has already handled basically the entire problem.
The difference is that Linux repositories are less noteworthy to attack, have a more distributed culture not ruled by capitulating lawyers, and can actually jurisdiction shop.
The ultimate problem is the lack of a cross-software security model in the OSs, Linux included. User-based isolation is cool and all, but orthogonal to the modern world. For decades we've been continually looking for better ways of isolating local apps, while also rejecting centralized control. We keep looking, while centralization keeps ratcheting.
I'm still hopeful that a well done capability (handle-based) system would go a long way, but not fully solve it. Unfortunately that means shedding POSIX/LSB rather than duplicating the entire monolithic OS environment for every security context.
 Where say even a local LAN IP address is security-critical information!
And they still work precisely because the user can do that.
> For decades we've been continually looking for better ways of isolating local apps, while also rejecting centralized control.
The main issue is that there is so little real benefit in it. It's possible to isolate e.g. LibreOffice so that it can't access anything it shouldn't, but in general the authors can be trusted not to be doing anything nefarious to begin with, so what you're really doing is limiting the damage in the event of compromise. In which case you're still pretty screwed, because it inherently needs access to your documents, so the focus has not surprisingly been on preventing compromise rather than mitigating after the fact.
> We keep looking, while centralization keeps ratcheting.
The motivations behind the rise in centralization are authoritarian and pecuniary rather than any legitimate security concern. Trying to prevent it by finding alternative ways to improve security is like trying to repeal the DMCA anti-circumvention rules by finding alternative ways to reduce piracy. They'll never be satisfied because their motivations aren't the stated ones.
It's a dream. Kept alive by a very diligent community, but let's not kid ourselves that it's a very strong assumption.
> In which case you're still pretty screwed, because it inherently needs access to your documents
Access to the specific document you presently want to edit (say through an OS-provided file select dialog or equivalent cli) is much different than unfettered access to all of your files.
> The motivations behind the rise in centralization are authoritarian and pecuniary rather than any legitimate security concern
People do choose say Apple products precisely because of the curated Disneyland App store. Centralizers certainly have their selfish motives, but people are driven into their arms looking for safety. Browsers are used for software distribution precisely because they're a sandbox - there's much less fear of the unknown than running a random exe.
This is one thing I like about Android. You can avoid requesting file permissions for your app by using native intents for file access. I think it's the same for other stuff, like email. As much as I loathe Google's control over Android, I think this is a good thing.
And you can install an anti-paywall extension in Firefox.
The moment you concede the Linux "well you can work around it" argument is the moment you have to stop arguing on pure absolute principle and start arguing about practicality and the relative difficulty of workarounds.
And of course the next step would be DMCA takedowns for applications in the appstore. Everyone's computer would effectively be under US jurisdiction.
The rise of Chrome is responsible for all of Mozilla's lost share. But major factors causing Chrome to gain share are being the default on the most popular mobile platform (Android) and being heavily promoted on google.com for many years.
Making it harder to install addons (and breaking all the old ones) is one of the things contributing to Mozilla losing share to Chrome. People used to use Firefox over Chrome because of all the great addons, which they then broke, leaving users with less reason not to use Chrome (which was significantly faster until Firefox Quantum). In other words, the causation is exactly the opposite of what you're suggesting.
If all the add-ons had kept working on FF, then I probably wouldn't have switched browsers.
Not giving any technical and ux credit to chrome for also being a major factor for firefox's loss of market share is disingenuous IMO.
I want to have a profile for my work, and a personal profile. Chrome provides that, and Firefox, last time I checked, required multiple hoops to enable and also blocked me from having more than one open.
I would even contribute funding to that if Firefox had a "Fund this feature".
The people operating the repository want people to use it. They don't want them getting things from untrustworthy places. But if they can just prohibit that, they can be tyrants -- refusing beneficial software that the user wants because the monopoly provider has a conflict of interest or is being coerced by someone else.
By contrast, if the user can load the beneficial application themselves then the repository has the incentive to prevent that from happening (and thereby discourage users from doing that in general) by carrying it themselves. And the fact that there can be competing repositories means that the one that carries the most beneficial software and the least user-hostile software can be the one that wins in the marketplace. But not if the vendor locks everyone else out and becomes an abusive monopolist.
All the more reason to have a supported way to do it. If they require the malware to replace the Firefox binary with a different one, what happens when it does? The user ends up with twelve pieces of malware instead of one because the original malware author didn't bother to support browser updates properly and the user ends up with a browser full of publicly known security vulnerabilities.
> And then Linux is also niche enough as a consumer OS that it's just not as attractive for malware in the first place.
People have been claiming that as the reason there was so much more malware on Windows for decades. Then Windows adopted some of the same types of measures as Linux and the amount of Windows malware fell off considerably.
And no, Mozilla should not be thinking in terms of "market share". There is no market! Mozilla is a non-profit. They should not be sharing the same paradigm as Google. And it is this vanity-driven pursuit of "brand prestige" and "market share" that has ruined things.
It makes business sense for Google's platform for them to do things the way they do. It makes absolutely no sense for Mozilla to emulate Google. The very fact that Mozilla can just afford to make unique software without the constraints of the market, supported by donors, is a strength, not a weakness. They should be leveraging it instead of trying to "stay relevant" and compete with Google on Google's terms.
And this whole incident is a disgrace. If it wasn't for Mozilla getting between users and their software, then this takedown would be nearly irrelevant, as it would be that much easier for users to install it. It would probably even disincentivize take-downs like this, because they would be so futile.
Users will leave in droves if...Firefox allows you to install third-party extensions if you choose to do so? I don't get it.
You are saying it, but not making any argument or offering any evidence for what you're saying.
Then why is that not itself an obvious solution? Use that data to provide a default-on blacklist of known-malicious addons. Then you have a blacklist rather than a whitelist, so it can justifiably be more difficult for the user to override it and the user has no motivation to do so for a malicious addon they didn't intentionally install to begin with.
It also gives you the opportunity to allow the user to specify a blacklist provider, so that if the original vendor ever gets compromised because they're forced to operate in an oppressive jurisdiction, they can farm out the low-resource task of hosting the blacklist to someone who isn't compromised, and the user (or the distribution packaging the browser) can make that determination for themselves.
Blacklisting depends on a limited supply of whatever you're trying to control. UUIDs are not limited.
More generally, if you come up with an obviously superior solution to a problem that someone else has claimed is important and difficult, and has spent considerable resources addressing, perhaps it would be more constructive to investigate or ask questions to test your understanding of the situation before assuming and asserting that the other people are doing it all wrong?
I'm not saying mozilla has the best possible solution here. I don't know what that would be. I do know (I work for mozilla though not in this area) that the step was taken to address real, and very active, threats to security, privacy, and stability.
Which is a good reason not to use the extension ID as the sole method to enforce the blacklist, and is why anti-malware software generally uses a signature-based approach.
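As a sketch of that difference, here's the simplest form of a content-signature blocklist check: matching file digests rather than extension IDs. (The digest, the blocklist, and the helper are hypothetical, for illustration only; this is not Mozilla's actual mechanism.)

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-malicious XPI files.
# A malicious author can mint unlimited fresh extension IDs, but any
# byte-for-byte copy of a known-bad file still hashes to the same digest.
BLOCKLIST = {
    # This happens to be the SHA-256 of the empty byte string,
    # used here purely as a placeholder entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocklisted(xpi_bytes: bytes) -> bool:
    return hashlib.sha256(xpi_bytes).hexdigest() in BLOCKLIST

print(is_blocklisted(b""))        # True: digest is on the blocklist
print(is_blocklisted(b"benign"))  # False
```

Real anti-malware signatures are more sophisticated (fuzzy hashes, byte patterns) precisely because trivial repacking defeats exact-hash matching, but the principle is the same: key on content, not on a self-assigned identifier.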
> More generally, if you come up with an obviously superior solution to a problem that someone else has claimed is important and difficult, and has spent considerable resources addressing, perhaps it would be more constructive to investigate or ask questions to test your understanding of the situation before assuming and asserting that the other people are doing it all wrong?
You're implying this is a novel problem that hasn't already been widely studied, and that they're not actually doing it all wrong.
> I do know (I work for mozilla though not in this area) that the step was taken to address real, and very active, threats to security, privacy, and stability.
There are many ways to solve a problem by trading it for a set of different problems. Centralized authoritarian control is exactly that. And those things can be popular in the same way anything that externalizes costs and internalizes benefits can be -- because that type of thing seems attractive to the people not directly paying the cost of it. Until the victims (in this case the developers and users whose apps you prohibit) devise a way to protect themselves. Then you end up in an antagonistic relationship with your users and addon developers. What's the cost of that?
Meanwhile there are other solutions that don't do that.
Not to mention this "telemetry" process is entirely opaque, and used to justify decisions like this in a very Ex Cathedra sorta way, where you either accept the conclusions they have made from data you can't see, or be treated like your opinion is irrelevant.
It's a very hostile way to run a supposedly "Open Source" project.
It's under a GPL-compatible license. Fork it and do what you like.
This is a pretty FUD-y way to FUD your FUD.
You must see that your ideas are not working. Have you been able to attract new users, or even prevent existing users from leaving? No? So then how can you be so convinced that you are right?
Firefox is software, not an experience. Winning over people that don't care is not an accomplishment, even if you could do it. Trying to be better than Google at nannying your lower functioning users from hurting themselves is futile, and punishes everyone else.
You're feverishly stripping away everything that made Firefox superior. Driving away developers who have made unique extensions that cannot be replicated elsewhere.
And for what? What is your end game? Have you seen the Chrome store? It's awful! The majority of extensions are very low quality and often misleading. And they are unresponsive and have to employ strange hacks to do things that Firefox extensions did very simply (like taking a screenshot, or interacting with other extensions).
And what is your ideal outcome? To have Firefox be considered a good browser by Google's standards? How about by the users' standards?
You are trying to slowly turn emacs into a cheaper Microsoft Word. And patting yourselves on the back for doing it.
"It's not gonna happen. Firefox would be dead by now if it still allowed users to customize their webbrowser and make Firefox unusable (as in slow, unstable, and btw insecure), because users wouldn't keep up with that for too long. They'd just use another browser. This may be okay with you for now because you know how to protect yourself against misconfiguration, but in the long run Mozilla needs market share to stay relevant and be able to compete with the richest companies on this planet."
Substitute in any positive feature that puts the user in control of their web browsing experience. Power over something means the power to break it too; using that as a reason to remove functionality is stupid.
Doubly so when you pile on the "omg if they do this mozilla is doomed" hysterics.
If the problem in your point of view is that I didn't /prove/ how big of a problem sideloading was then yes, I didn't even attempt to do that. There's a separate subthread on that question.
That's why your logic is flawed. If a construct can be used to "prove" various known-false statements then it has no value in proving anything.
> If the problem in your point of view is that I didn't /prove/ how big of a problem sideloading was then yes, I didn't even attempt to do that. There's a separate subthread on that question.
There is no proof of it there either. The most popular platforms (Android on mobile and Windows on desktop) allow the user to load their own applications, showing not only that it can't be a major cause of failure in the market but that it seems to be a characteristic of the player with the most share.
Yes, if... But it can't. That criticism of my statement didn't make sense.
> There is no proof of it there either. The most popular platforms [...]
I meant this: https://news.ycombinator.com/item?id=18583257
It applies directly. Anything that gives the user a choice can make the user experience worse for a user who makes the wrong one. But it also makes it better for a user who makes the right one in a way different from what the developer would have had to use as a default -- because sometimes something is right for 70% of the users, so it should be the default, but the other 30% are better off with something else. Taking away the choice makes the 30% worse off to benefit the 10% of the 70% that would have chosen incorrectly for themselves. That is not a relative advantage.
> I meant this: https://news.ycombinator.com/item?id=18583257
I'll reply there, but note that you haven't addressed my point -- other platforms survive and indeed have the largest share without prohibiting users from installing software, even when competing directly against others that do.
Also, anything that gets (or forces) the actors behind this to issue a takedown request against GitHub is good, because then there's a chance GitHub will tell them to pound sand (sending a strong signal to others that unconditional compliance with such demands may not be the only, or best, course of action) or actively fight it.
"Cloaking refers to the practice of presenting different content or URLs to human users and search engines. Cloaking is considered a violation of Google’s Webmaster Guidelines because it provides our users with different results than they expected.
Some examples of cloaking include:
Inserting text or keywords into a page only when the User-agent requesting the page is a search engine, not a human visitor." 
That you're able to provide the same view shown to google isn't exactly 'bypassing' anything.
If you present this as a plug-in that allows you to view websites as the Google bot views them, for educational and debugging purposes, there is no problem. You can give the fact that it won’t see the paywall as an example. It’s actually useful for that purpose: you are not lying. It’s just that most people will install the plugin for its ‘side effects’. Their use of it will still be illegal, but the intent will not be illegal. Cf. Firearms, crypto, drugs, ...
(I say this as someone that pays for various journalistic sources and I encourage everyone to pay for at least their three favorite sources)
In any transaction there is a demarcation point where interests meet and then part. Businesses have gotten used to this idea of decommodifying their products rather than competing - spamming restrictive clickwrap "licenses" etc. So much so that we view common sense rights like "first sale doctrine" as a friendly exception rather than the bedrock norm. A website telling you how you must/mustn't display the page you've retrieved is equivalent to a retail store demanding a share of your business's profits if you use their products commercially. We can envision such a scheme being cooked up with "terms of sale" and blah blah, but people would rightfully not stand for it - markets and society simply cannot function with such top-down control.
The only difference here is that civilization needs to re-figure these things out for the digital world, especially as frivolous overenforcement appears to be much easier.
 Adding unnecessary complexity/restrictions to make their market less efficient. See also: net neutrality.
This isn't a situation like Aaron Swartz, where the companies in question are restricting access to publicly-funded research. The newspapers are privately-funded entities that conduct their own investigative journalism.
This isn't a situation like Weev, where there was no access control to the data and he was just probing exposed endpoints.
This isn't a "right of first sale" issue. It could be if the publishers were trying to restrict access to page content after you paid for an account, but that's not what's happening here.
I agree with the GP that the issue is the framing. It's the difference between selling a mask, and selling a mask that's advertised to let you rob a bank without being caught.
The problem is that the publishers are still supplying their content for free, while then trying to attach arbitrary post-facto terms. It would be straightforward to just not send the article to someone they don't want to view it, but yet this is not what they have chosen to do.
Note the description from the add-on page "This extension will mangle your browser's requests to maximize the chances of bypassing paywalls". This is where there's a meaningful difference from the Weev situation.
You can also choose to not read the content if you don't feel it's worth the money.
But I wouldn't be "impersonating" the Google bot, I would just be using its user-agent string. That does not make me the Google bot. If a foolish publisher chooses to interpret my usage of that string as me being Google, that is not my responsibility.
Just like with the Weev situation, this is another case of someone trying to shift blame for their crummy security.
A user-agent string is NOT authentication. It is merely supposed to provide a hint to the webserver.
I once saw a website that didn't have ads and let you view their content for free if the width was less than 700px. I guess they figured they were losing more mobile users than it was worth. So, was I being criminally minded when I resized my browser to view their content instead of jumping through their hoops and giving them my email address? At a certain point, it is not my responsibility to hold their hand and play along with their pretend restrictions.
Customer goes to Home Depot, buys a bunch of $5 aviators. Sells them on Amazon for $50. The distributor sues the customer-reseller for violating their contract to not commercially resell, which undermines their expensive offerings. Customer points out they never entered into any such contract at Home Depot. The distributor claims Home Depot is a mechanical conduit that only facilitates a larger relationship, and the transaction actually took place between the customer and distributor, with the "standard" terms having been available on request.
The fundamental truth is that the interests of any buyer and seller are only aligned at the exact time of sale - afterwards they diverge. What has happened is that pervasive information technology has got sellers attempting to retain some kind of interest in things after they've sold them, with the "transaction" never really ending (an oxymoron). Copyright is one of the hooks by which they're purporting to do this (which at the very least needs to be reformed so that a seller of "digital goods" cannot destroy your access to a copy you've paid for).
As I said before, society cannot function this way - non-pure ownership can work for big tickets (eg real estate), but centralization and every item having a history simply doesn't scale. And as I also said, there is a very simple remedy for publishers - stop sending the article to people they do not want to view it!
Alice goes to Home Depot. Aviators cost $50, but they're on sale for $5 if your name is Bob. Alice tells the cashier that her name is Bob and shows an (obviously) fake ID. The cashier isn't paid enough to care, so Alice gets the glasses for $5.
I agree with you that anything Alice does after this point is her own business. I could accept an argument that Home Depot should've paid the cashier more to help prevent the issue. But I don't think that makes Alice's actions at the point of sale less fraudulent.
> And as I also said, there is a very simple remedy for publishers - stop sending the article to people they do not want to view it!
Which the publishers do. They don't send the article unless they think you're Googlebot.
My issue is with intentionally changing the user agent header to impersonate a well-known crawler that has been given access to the site under specific terms.
> Alice tells the cashier that her name is Bob and shows an (obviously) fake ID. The cashier isn't paid enough to care
I'd say that the proper example is that the cashier doesn't check ID and isn't even expected to by corporate. Companies aren't even strictly against this type of soft-fraud word-of-mouth trick, as it just further helps their price discrimination and customer mindshare. Whether you get one free birthday dessert per year or five, you're still buying meals.
The elephant in the room is that Google could easily setup a system whereby Googlebot got secure access to the articles. I think they haven't done this because of an idea that the same pages should be served to users. So who is defrauding whom?
Philosophically I'd assert that the Internet philosophy is directly opposed to the very concept of fraud - "identity" does not scale, especially across jurisdictions. We've finally got a way to formalize and mechanically execute contracts purely between private parties, so why cling to a heavyweight idea of post-facto enforcement based on nebulous ambient natural-language rules, especially as an overarching foundation? When you put a quarter into a "claw game" and it drops your prize before the chute you don't sue for "fraud" - you just stop putting quarters in.
> The elephant in the room is that Google could easily setup a system whereby Googlebot got secure access to the articles. I think they haven't done this because of an idea that the same pages should be served to users. So who is defrauding whom?
Something other users have suggested in response to this article. Maybe that will happen in the future, but I think it's a whataboutism with regards to this specific incident.
Google explicitly supports paywalled content, so I don't think it's fair to say that publishers are defrauding Google.
Home Depot gives (more like leases, when taking into account that copyright doesn't allow transitive redistribution... no first sale doctrine for intangible/unfixed IP) the aviators to Bob for free, knowing that Bob will take a bunch, set up a table outside that looks like he's giving away free sunglasses, but when you pick up a pair a Home Depot employee comes out, snatches them away from you, and starts pressuring you to get you to buy a pair.
I'd classify that as a persuasion pattern, which is partly why this is so polarizing (bad pun, sorry...). You had the aviators in your hand, or in Google's case, you read a snippet they had indexed. You can practically taste it, metaphorically speaking, and then comes the hard sell. Either you pay them for the thing because you were so close to getting it and aren't going to let a few dollars stop you, or you get angry because you had the thing you thought you were about to get snatched away from you. The seller is counting on that, by grabbing for the product, you're committed to getting it even if you discover it actually costs money.
Tracking the ownership of the aviators is a non sequitur, as the issue occurs at the point of transaction, not before or after.
> [Bob] set[s] up a table outside that looks like he's giving away free sunglasses, but when you pick up a pair a Home Depot employee comes out, snatches them away from you, and starts pressuring you to get you to buy a pair
> You had the aviators in your hand, or in Google's case, you read a snippet they had indexed. You can practically taste it, metaphorically speaking, and then comes the hard sell
I don't think that just because a publisher makes content free to one user for one use case it obligates them to make content free to all users for all use cases.
Does a studio including snippets of a movie in a trailer mean that the movie has to be distributed for free?
I would argue Google's actions are more akin to an advertising or PR firm drumming up interest by showing bits of the article to consumers.
If I had to condense my thoughts into a couple of bullet points:
1) There is a colorable difference between ad blockers and what this extension is doing
2) I think it is unfair to characterize disapproval of this specific extension as an attack on ad blockers in general
3) Publishers have the right to give samples of their content to search providers without them being obligated to make the content free for everyone
Instead, your position appears analogous to arguing that altering a driver’s license to gain free admission to a cinema by misrepresenting oneself as entitled to senior citizen terms of entry is justifiable despite being prohibited by law in that jurisdiction.
It’s clear from this and other posts that you have articulate, principled views on many issues. So why aren’t you addressing the underlying economic issue? Publishers, like any business, need to earn revenue. If technological barriers to accessing intellectual property — and the legal protection thereof — are not valid (your claim of “frivolous overenforcement”), whose economic rights supersede the content producers’? And why?
The owner of the computer that they're insisting implement their business logic. It's trivial to simply not send the article to someone they do not want to view it.
If the extension moves on towards sharing an account, P2P distribution, etc, you would have a point. But as it is, its only action is to interpret the content in a different way than the publisher desires.
Let's say an abutter of a drive-in theater sets up their own seats and starts selling tickets. The intent to "see a movie for cheap" would only become relevant if coupled with some action that is actually illegal.
This add-on is the first XPI I have ever loaded. I didn't find it awkward to do; it is simply:
Tools -> Add-Ons -> Cog Icon -> Install add-on from file...
The only thing I found unintuitive was the cog icon.
I'm guessing Firefox 63 doesn't let you run Bypass Paywalls, then? I suspect it fucks up other stuff, though I'm not trying it to find out.
If Firefox wants to increase their market share, a good start would be not making changes that are actively hostile to the user.
The backwards laws of one jurisdiction shouldn't enable take downs across all of them.
Your plugin title blatantly describes that you're avoiding paying for something they are charging for, so even though it may not be illegal, it's not something I'd waste energy fighting for.
It's like saying some people get Nike shoes for free in exchange for a review. And when I ask for it, or even take it, they have a problem with it.
I think with physical goods we have an innate understanding of what constitutes theft. Just because the distribution cost is zero doesn't make digital goods a free-for-all.
Checking user agent doesn't meet the requirement to be considered an effective access control technological measure.
It’s a common misconception that the word “effective,” as used in the respective legislation, was intended to mean “successful” or “unbreakable.” Rather, it’s always been interpreted to mean, “is designed or intended to have the effect of.”
The fact that the measure is not always successful or is breakable does not make the law no longer applicable; that would be an absurd result. The law isn’t interpreted literally when the result would be absurd; this is doctrine that goes back to at least 1892 and even further back to English common law. See, e.g., Holy Trinity Church v. United States, 143 U.S. 457 (1892).
> a technological measure “effectively controls access to a work” if the measure, in the ordinary course of its operation, requires the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work.
17 U.S. Code § 1201 (a) (3) (B)
I highly doubt that. Keep in mind that age restrictions which only make you click a single button to get past them are far worse at keeping the wrong users out, and AFAIK they're perfectly fine legally.
"Effective access control" means they have to do something which may work to some degree and doesn't have to be a burden financially or technically.
IMO effective access control works both ways: you can't send all of the content back and then have the access control happen on the client machine.
What if I just used curl and viewed the raw HTML to read the article?
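That curl scenario can be sketched concretely. The snippet below is a toy illustration, not part of the add-on: the URL is a placeholder, and the User-Agent string is simply the one Googlebot publicly advertises. Whether a server returns the full article to such a request is entirely up to the server; nothing here defeats real server-side access control.

```python
# Sketch of the "just use curl" idea in Python: build a GET request that
# identifies itself as a search crawler. The URL below is a placeholder.
import urllib.request

GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def build_request(url, user_agent=GOOGLEBOT_UA):
    """Build a request whose User-Agent claims to be a crawler."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

req = build_request("https://example.com/article")
# urllib normalizes header names, hence "User-agent" here:
print(req.get_header("User-agent"))
```

A paywall that keys only on this header is trivially sidestepped by anyone with curl (`curl -A "Googlebot/2.1" …` does the same thing), which is the point several commenters make about it not being a meaningful access control.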
I suspect the takedown notice is a DMCA takedown based upon a flawed assumption about the law. The hard part is arguing the technical merits of the case before non-technical people. While the takedown notice is probably in error, they could still make a good argument around bypassing their security controls.
You could appeal to the EFF or ACLU. If they are willing to take your case it will be pro bono.
And instead of calling it an anti-paywall add-on, call it an anti-tracking add-on.
If you do that though, you may be asked to explain the list of websites for which the add-on activates. I guess you'll need to have some sort of "valid" explanation for that.
the web is public, despite SV’s best attempts to subvert and exploit that. If you don’t want someone accessing information don’t publish it as a website that anyone can access.
If it's legitimate for a bank to hide your data behind a username and password, how is a journalism-provider any different?
So is that your fundamental issue with a paywall? Anything that's available to Google (and Bing, DDG, etc.) should also be available to you at no cost?
Restating that from the other perspective: if the information isn't universally available for no cost, it cannot be looked up via a search engine?
It's crazy to send them the content but tell them not to read it. Back to your example: would you expect your bank to do that? Here are all the account details and transactions, but oops, that's not your account. I'm guessing no; you'd hold your bank to a high technical standard.
To be clear, if newspapers/journalists want to work out some special agreement with Google (or partner/agreed-upon indexers) so their requests are authenticated and only they have access to the content, I think that is a better solution than paywalls and sending the article while saying "don't read this please".
But regardless of how crazy this scheme is, I don't think it justifies taking advantage of that craziness to unwrap such content.
I think it's reasonable to question the approach of banning the plugin too: the problem is the users' choice to use the plugin, not making it available. But ... when there's no justifiable use for the plugin, and the author clearly intends it to be used to view unauthorized content ... I can see that it's an attractive strategy to just ban it.
What kind of paywall are you thinking of?
In principle, I have reservations about exposing content to search engines but then requiring payment to read it. Especially if it's non-trivial to filter out the sources that require payment.
But a plugin which works around an attempt to restrict visibility of content to those who've paid for it ... I think the intent here is wrong.
I think it's ok to have information that's only accessible to a restricted set of viewers.
It's not that it's not possible. It's not that the implementations aren't dumb. It's that the principle of "if I want it, and I can do it, then it's ok" doesn't really hold up, IMO.
Just because something is on the internet doesn't give you the holy right to get it for free.
Correct. Others have the holy right to charge, and I have the holy right to try getting around it.
That said, you're free to try to get around easy "protections", Mozilla is free to take down your methods for doing that.
Ultimately businesses will always ruthlessly try to make more money, and software will ruthlessly seek a more efficient user experience.
Often these objectives clash. Spotify is the obvious example that seemed to offer a solution in the music space, but we have yet to discover such a solution in online publishing.
You cannot reasonably expect to protect or restrict content with a flawed understanding of the medium in which that content is conveyed.
If you don’t want me accessing it, don’t put it on the web.
No matter how easy is it for me to go into your backyard (bypass your paywall), it's still an offense.
More simply: if you don't want the user to have it, then don't give it to them.
Have you ever been in a conversation where someone talks about something and you said “hey I read this cool article on that, let me send it to you.” If so, guess what - you were the search engine for that conversation. Should you then have access to view the non-paywalled content?
So yeah, I have no issue with this add-on. If they didn’t want the double standard - allowing free access for some and not others - it is easily possible and fully within their control to prevent add-ons like these (think of any admin site or service where you have to log in before seeing or doing anything).
Content producers have a choice and they’re choosing to be bullies. I have no moral or ethical qualms when it comes to dealing with bullies or double standards.
Just my two cents.
I would argue, from both legal and technology perspectives, the ability to easily break a paywall provides you the right to do so.
If you really REALLY want to make this about morality then I would argue restricting access to available content is the greater moral offense, because it is an inherent violation of liberty.
Writing news is work of perhaps a different subject, but it can’t be denied that like software development, a significant amount of real work goes into that. How can you justify that the authors shouldn’t be able to reasonably expect to have control over their work and expect to be paid for it, if their consumers deem it valuable?
I don't care, because that is not the subject of this issue, no matter how much you wish it were. The subject is whether or not a paywall is adequate for restricting access to content on a public medium. If that restriction is so easily bypassed, it is clearly ineffective and inadequate. If you really cared about your business and revenue you would solve this problem instead of complaining about failed value assessments.
> control over their work
Journalists don't have control over their work. Publishers do. This is the same failed argument the music and movie industries peddled from around 1998-2008. Instead of wasting your energy on distribution control spend it on what you are good at and return superior value to your work.
That's just plain wrong. There's nothing special about technology that means you get to throw all your existing moral principles out the window. The ability to technically do something does not confer the right to do it, either legally or morally.
Maybe if I draw an analogy to patents it might make sense? It seems most people in technology agree that taking something unpatentable and adding "do it on a computer" shouldn't make it patentable. Morality is no different. Taking something immoral, like taking someone else's intellectual property without paying for it, and saying "but do it on a computer" doesn't suddenly make it moral. Taking someone else's IP isn't immoral because you're depriving them of the paper it's printed on (that part is just petty theft), it's immoral because you're depriving them of the right to control the distribution of their IP and the right to get compensated for the work they did in exchange for giving you the right to use the IP.
Are you directly giving it to Google via encrypted tunnel? I am not redefining how any technology works or making any alteration to your distribution or your service. Yet, you are still giving it to me free and anonymous.
Some people think taxes are immoral, but that doesn’t give them the right or ability to redefine the tax code. HTTP is not immoral.
The real issue here is you are lost and confused because you don’t know how to secure your product. Your negligence in accounting for your business, financial model, and IP is not my lack of morality. Fix your technology.
Because I am operating squarely within the specification, designs, and intentions of the technology. When this is not the case there are two possible outcomes: a defect or missing security.
The absence of security or warning thereupon is an implied invitation. This is why, at least in the US, a warning must clearly be stated before trespassing can be enforced. In this case it isn't even remotely confused with trespassing because you are giving me the prize simply by my asking for it without any regard for who or where I am.
We can invent all kinds of fictions as to why this is right or wrong. More important is whether I am violating security (there is none to violate) or violating technology (clearly I am using HTTP properly). Ultimately this distills down to one question:
If you don't want me to access a given content then why are you giving it to me anonymously?
The answer to that question determines why the use of a technology qualifies the usage behavior.
> Just because it's technically possible for you to trick me into thinking you're Google does not mean it's ok for you to do that.
Why is that not okay?
> You tricked me into giving you something I thought I was giving to someone else
I did no such thing, because you never asked who I was. Would you give your car keys to a complete stranger assuming they are a valet without any consideration as to whether they are who they claim? If in this case the car is stolen what is your expected recourse?
> that doesn't mean you have permission to do this
It is perfectly legal to modify my user agent identifier for any reason at any time. It is my computer, my software, and my settings.
> that just means you tricked me
How could I have tricked you? You never bothered to ask. This is how this technology qualifies its use in this way. You made a faulty assumption and are shifting the blame for that assumption. You can continue to be upset about this, but you will continue to hand me the content that you wish to protect without any legal recourse or restitution.
If this were my business I would abandon this irrational commitment to a failed idea and either secure my financial model or just open it up.
No you aren't. The intent is "Give Google the complete text, put up a paywall for everyone else". The fact that the technical implementation does not perfectly express this intent does not mean you can pretend the intent is different.
> The absence of security or warning thereupon is an implied invitation.
> This is why, at least in the US, a warning must clearly be stated before trespassing can be enforced.
This is wildly incorrect when it comes to computers. The CFAA does not require any sort of warning in order to determine that a user has overstepped their authority.
This isn't even true in the physical world. Many states require notice that you're trespassing before you're criminally liable, but "many states" does not equal all states. And even in the states where this is true, the absence of a notice is not at all an "implied invitation", it just means you're not criminally liable if you weren't informed that you didn't have permission to be there.
> If you don't want me to access a given content then why are you giving it to me anonymously?
Because you tricked me.
The fact that you fooled me into thinking you're someone else does not give you the legal or moral right to the results. Neither legality nor morality is a game, where if you can just figure out the right loopholes you can get away scot-free. That's not how it works. Not in the real world, and not in computers either. The word that describes what you're doing is "fraud".
Just because it's easy to trick me does not make it right to do so. Your entire argument boils down to "if you didn't want to be tricked, you should have tried harder". That's not a moral argument. The ease of tricking me does not in any way affect whether it's ok.
As an analogy, let's say I'm a baker, and to celebrate my dear mother Alice's birthday, I decide to give away a slice of cake today to anybody named Alice that comes into my shop. I'm not a suspicious soul by nature, so if someone walks up and says "Hi I'm Alice" I'm inclined to believe them. I'm not advertising this anywhere, there's no sign saying "If you introduce yourself as Alice you get free cake", it's just something I'm doing. If you hear about this (say, you have a friend named Alice and she tells you about the free cake she got), do you think you're morally justified in walking into my shop and saying "Hi I'm Alice"? Sure, the damages are pretty small, but you're still lying to get something you know you're not entitled to, with no justification other than you want it.
>> The absence of security or warning thereupon is an implied invitation.
You should consult an attorney. There is ample case law on this. This is how the military learned, the hard way, to impose warning banners.
> This is wildly incorrect when it comes to computers.
It still applies. There is legal precedent. CFAA does not apply in this case. In order for it to apply I, as a user, would have to knowingly overstep my authorized privilege level, which is commonly referred to as privilege escalation. This is explained in the Wikipedia article for CFAA.
> the absence of a notice is not at all an "implied invitation"
It is in the case where entry is anonymous and public, such as a store. HTTP is anonymous and public.
> Because you tricked me.
How could I have tricked you if HTTP is anonymous and public and was never asked to identify myself? I am an unknown stranger like every other requestor.
> The fact that you fooled me
I did no such thing. You never asked who I am. Had you asked, I would have told you, and then you could decide whether to grant me access to the content. Instead you guessed incorrectly. Your bad judgement is not my fraud, because HTTP is public and anonymous.
> Neither legality nor morality is a game
I keep trying to point this out to you but you would rather irrationally maintain your commitment to a failed idea and leave your business fully exposed. I did cyber security work for the military for about a decade, so I am fully aware of what the risks, technology, and laws are. If you were to serve me with a DMCA take down notice for anti-paywall software I would take you to court and I anticipate I would likely win. You really don't want that to happen. Why are you exposing yourself in this way?
> Just because it's easy to trick me does not make it right to do so.
Stupid is not a legal defense. Listen to the absurdity of this when rephrased to say the exact same thing: I am totally surprised that nice looking stranger drove away in my car after I voluntarily gave him the keys. Just who does he think he is? I know this will be all cleared up once the police find my car with the signed title in the glove box proving ownership. It was wrong for him to look like a banker. The imposter tricked me. How morally repugnant.
> Your entire argument boils down to "if you didn't want to be tricked, you should have tried harder".
That understates the absent-mindedness of your position, but yes. It may or may not be moral, but it is quite often the more legally valid point of view, particularly when there is a provable expectation of knowledge and risks.
> do you think you're morally justified in walking into my shop and saying "Hi I'm Alice"?
Yes, because free cake is great (since you're offering), and I think you will run out of cake. If I were the baker I would attempt to validate the person by asking for a phone number, address, or to see their ID. Stores already do this, which is why your scenario is absurd. If this scenario were a real thing it could backfire by resulting in a potential discrimination claim.
That has literally no bearing whatsoever on the morality of pretending to be someone else in order to get access to IP that you don't have the right to access. This is the second time you're calling me irrational, and it's not acceptable.
Rationality - https://en.wikipedia.org/wiki/Rationality
I used the word irrational in the context (both times) of commitment. This is an accepted description in the behavioral-health literature and does not address you as a person. It is also not polite to accuse people of trickery (fraud) when there is no evidence of such.
I'm not expecting a response (heck, I'm expecting this comment to be flagged for arguing with a moderator), I'm just hoping you'll pause for a moment and realize that heavy-handed moderation is counterproductive; if I'm punished for being on my best behavior, what incentive is there for me to care about my behavior at all?
Any time you find yourself writing things like "This is the second time you're calling me irrational, and it's not acceptable," you've passed the kind of discussion we want on HN. Even if it's true that the other person was behaving as badly or worse.
But really, it's not the fact that this comment was flagged that bothers me (though being accused of "tit-for-tat" behavior still rankles). What really gets me is my previous comment (https://news.ycombinator.com/item?id=18604446) is flagged as well, for no reason I can see.
As before, I'm not actually expecting a reply. I know you want this whole discussion to be over, and I'm fine with that. I just want you to spend a few seconds thinking about what I'm saying here.
That's an entirely serious suggestion. I'm not arguing that what you're doing is illegal. I have all sorts of qualms with copyright and its enforcement myself. If I were "copyright czar", movies would get three years, and newspapers maybe a week.
But if we wish for copyright-holders to respect the rights of internet users, it would seem to be a winning proposition to maybe not actively work against their attempts to find a compromise.
And a "soft paywall" that allows you to read a dozen or so articles per month without a subscription seems to be just that: a compromise, and a reasonable one at that.
What do you expect to be the long-term outcome of bypassing such mechanisms? I have trouble thinking of anything but "soft paywalls" turning into "hard paywalls". Then we'll be left with maybe the one or two publications we subscribe to. How can such an outcome be in anyone's interest?
I know you can bypass these schemes. Yes, they are laughable. I know ads are sometimes annoying. I know large swaths of the press have earned your scorn because they, like, use the wrong JS framework. Or something.
But I still don't understand this attitude that appears to go even further than just "I want it for free" to an almost gleeful appreciation of vandalism destroying the foundations of democratic societies.
If your business model is not supportable except by other people willfully hiding information on the client computer, your business model is not good.
"Avoiding a paywall that's only half implemented" is not "the destruction of journalism."
Also: your own business model of "living" is just held up by the completely artificial rule of others not taking everything you own. Just because "it's on the internet" doesn't make "might is right" a true proposition.
No, for a business model not to work doesn't mean that there has to be another business model that will allow you to run exactly the same business. If I need to fish with dynamite to make fishing in a particular area profitable, and dynamite becomes illegal, it's no one's responsibility to come up with another way I can fish.
And no, that's not a fair analogy; it's me saying that if you tell me something, I am not beholden to your instructions to forget it.
You only have to use something as simple as a lock and key, and I will be forever stopped from knowing the thing (in reality).
All news orgs have this option; they simply want to have their cake and eat it too: working with popular search engines while also charging users for information those engines effectively already have.
If you want to operate a private service go for it, nobody is going to complain about it, and any unauthorized access will be met with scorn (at the least.)
edit: also, I am not going to prop up failed business models all day - clinging to things that don't work isn't a path to success.
My two cents is that advertisement is what is killing journalism.
YouTube, for example, can show advertisements for well known companies in videos about Anti-vaccination, far-right conspiracies, etc. without consequences.
Why is that? Because all of that happens in the privacy of your own computer. Usually, any newspaper that had publicly printed such bullshit in its pages would be dead. The public would react to it.
What is different? Facebook, YouTube, etc. are personalized. You are shown what you are interested in without public accountability. Niche radical content gets a lot of views for its own controversial nature. Views and money.
Who wants to investigate, hire good writers and expend the money that it takes to write a good article when you can hire some one without ethics for a fraction of the price and get as many or more views as radicalization grows?
YouTube, Facebook and others say that they are not responsible for the content they offer. I think that should be true for things like comments. But for the monetized content, they are 100% responsible for incentivizing that radicalization and killing good journalism in the process.
It is a myth that there is any problem with ads being shown on controversial videos. YouTube demonetizing select videos is nothing more than a way to strangle independent media and help gentrify their platform.
Automating the process by pretending to be Google means the user won't even see that friction or messaging. It's clever, but it breaks the premise.
If they can't accept that compromise and not have these paywalled articles indexed, then the problem is on them.
I think this is fair, and so do the search engines. Google calls doing otherwise "cloaking" and says they penalize the ranking of sites that do it. Perhaps they're not doing so effectively enough.
So I'm assuming the news of what the people you voted for get to you via... diffusion?
> . The united states and democracy itself existed far before journalist.
You should sue your school for the permanent damage they did. Both in terms of grammar as well as history.
In any case: if journalism is so useless, why are you jumping through hoops to read it?
Sometimes embracing change requires letting go.
I’ve simply stated that you can either fight change or embrace it.
In this context I mean that many web users do not want to view their content with advertisements or be tracked through advertising. Attempting to force them to is to push against user behavior (fighting change).
I don’t believe users will change their behavior if you try and force them to.
I believe that by embracing the users behaviour we can learn how to make it work.
I think that might require letting go of our predetermined notions of what “journalism” is and how it should be funded. I think we need to be more open to reviewing why people paid for journalism in the first place and how the need it fulfilled is met today.
“Journalism plays a vital role in democracies” - what role is that? (I’m not disagreeing, I simply want to know your opinion).
“Journalism is expensive” - does it need to be? Why is that?
“Someone needs to pay for it” - This seems quite vacuous. If the thing is valuable then someone will likely pay for it (if they can be convinced of the value and it’s realized).
It’s not readers who pay for it now though is it? It’s advertising companies since they buy the ads. We’re not asking people to pay for it we’re asking them to be visually and audibly distracted into buying products and services in return for journalism.
Perhaps this is not a fair deal for many anymore?
The rule is something like "a user clicking from Google can't see something (meaningfully) different than what the Googlebot sees". So you can paywall direct traffic or links from Reddit or internal links, but that first one from Google is supposed to work.
But in this case, they specifically want to allow you read maybe a dozen articles per month for free, but also (see above) to eat.
I would disagree. Flawed journalism means things like mixing in lies in their news stories, mixing in opinion pieces into news etc. That kind of stuff can do more harm than good.
Sufficiently flawed journalism is perhaps one of the most severe problems in our society. People are fed half-truths, and at times simply false information, and then go on to repeat it and base their world views upon it as if it were fact. And in the era of social media, false and sensational views spread incredibly rapidly while any 'corrections' are basically dead on arrival.
To give a very recent example: you probably read about a shocking new study showing that the oceans are actually heating up 60% faster than we thought. It turns out the study was mathematically flawed, and its actual numbers are not particularly different from what other studies have shown. And the error was detected just hours after publication - apparently not quickly enough to stop the shocking headlines. This is not an implied comment on climate change, but rather just a very recent example of this phenomenon of people being misled without having their views ultimately corrected, which happens to involve climate change.
The sites that rely on 'teasing' and then paywalling are some of the worst offenders in the state of journalism today, and in my opinion the world, let alone society, would be in a much better place without them.
 - https://www.sandiegouniontribune.com/news/environment/sd-me-...
 - https://www.latimes.com/science/sciencenow/la-sci-sn-oceans-...
 - https://judithcurry.com/2018/11/06/a-major-problem-with-the-...
What I was saying is that journalism, in its current state, often runs sensational stories that later end up being false or otherwise unsupported. That report I mentioned was not about the media rewriting a science publication -- the publication itself was published with errors in it. The media just accurately repeated those falsehoods. The problem is that, due to the nature of social media, the sensational stories get let's say a million hits. Later on they either update the story to more accurately reflect the truth or run a retraction. These 'corrections', by contrast get maybe a hundred hits. So you have people who have wildly broken worldviews largely because they take the media at face value.
The following quote from Thomas Jefferson is dated 1807, but it's now more relevant than ever simply because mass media is resulting in mass misinformation: "Nothing can now be believed which is seen in a newspaper. Truth itself becomes suspicious by being put into that polluted vehicle. The real extent of this state of misinformation is known only to those who are in situations to confront facts within their knowledge with the lies of the day. I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors. He who reads nothing will still learn the great facts, and the details are all false."
^ Not something actual drug dealers ever do.
(cf. the bishop of Cologne, Germany, regarding the theft of basic foodstuffs during the famine immediately after WW2)
A wish to survive is also an accepted excuse in many other cases of behaviour that would otherwise be considered criminal. You can even kill someone and get out of it with a credible claim of self-defence.
These organizations are making the issue harder and harder for themselves by consistently arguing for insane concessions from society and other companies after they were caught napping during the biggest technological revolution in their business since the printing press. We should definitely have discussions on how to have a healthy press in the digital age, but that should never include putting up with bullshit like this.
At first, I thought it might be possible to solve this by getting search engines to standardize on a vector format that they could accept for protected content. So the crawler sees a 300-dimensional vector that effectively gives a semantic summary of the document.
But then I thought content providers could achieve a substantially-similar effect by just serving their documents to search engines in scrambled (e.g. alphabetized) form. They could still provide normal headlines to get them clicks.
BUT THEN I thought it would be a really cool and bizarre problem to circumvent this by attempting to devise a method for finding the most-probable original document given its alphabetized version.
So Google could still deliver ranking and publishers put their content behind their walls.
If Google isn't interested in such a standard (because it reduces the quality of open search results) then that's the publishers' problem, not Google's or users'.
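The scrambling half of that idea is trivial to sketch. This is a toy illustration of alphabetized serving, not anything any publisher actually does: a crawler could still index the terms, but a reader gets no usable prose.

```python
# Toy illustration of the "scrambled serving" idea from the comment above:
# alphabetize each document's words so the vocabulary survives for
# indexing while the word order (the readable prose) is destroyed.
def alphabetize(text):
    """Return the words of `text`, lowercased and sorted alphabetically."""
    return " ".join(sorted(text.lower().split()))

print(alphabetize("the quick brown fox jumps over the lazy dog"))
# → "brown dog fox jumps lazy over quick the the"
```

The "BUT THEN" step in the comment, recovering the most probable original ordering from the sorted bag of words, is exactly the kind of problem a language model could attack, which is why scrambling is at best an obfuscation, not a protection.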
If you visit stuff on the web, you respect the concept of a social contract between you and the provider. Don’t try to circumvent their wishes.
That said, anybody attempting to make parts of their content non-public over HTTP without appropriate security restrictions voids the protections of the law. The remedy for this is to ensure the necessary security is in place so that non-public content is limited to an account and session, which then puts part of the burden on the end user to restrict the credentials that provide access.
Content, where not already covered, served over HTTP without the necessary security in place is not private.
There is a clause in the DMCA that makes it illegal to circumvent security controls. The stipulation there is that the security controls in question must be adequate and reasonable. Dropping a CSS modal over some content is not a valid security control, and thus has no legal protection. Of course, everything regarding the DMCA is open to argument at legal expense.
It’s a common misconception that the word “effective,” as used in the respective legislation, was intended to mean “successful” or “unbreakable.” Rather, it’s always been interpreted to mean “is designed to have the effect of.”
Engineers too often assume that the text of the law means what they think it means. This is one of those cases where a lack of legal education serves them poorly and leads them to incorrect conclusions.
I am of the bias that a space of technology cannot be summarily redefined by a single group interest merely to compensate for their financial insecurity.
If I send a site an http GET request, and that site responds by giving me some data, then that site has given me implied consent to look at the data. If they don't want me to look at it, they shouldn't have given me it.
Man standing on the sidewalk outside a store yells into the store at the shopkeeper “hey there, can I have some free fish?”
Shopkeeper yells back “sure!” and tosses a fish to the man on the sidewalk.
Add more yelling back and forth for TLS handshake.
At this point, the man on the sidewalk can do whatever he wants with the fish. It was freely given.
Basically this example, if accepted in law, would mean that if you want your data to be private then Google would be expected to respect that, even if you sent them that data in the clear.
It's a ridiculous outcome.
Presumably, there's implied consent for me to do that, yes?
The analogy would be if the shopkeeper (i.e. web server) picks up an article and puts it in your pocket.
I'm not sure how the add-on worked, or whether there's some trickery going on, but I've long been waiting for a lawsuit against ad blockers that could get heated.
That's what access control is for. If you don't secure your content, you don't get to whine that someone is reading without paying.
The site is free to not emit any content before payment is assured. The reason these add-ons are possible is that these sites are trying to have their cake and eat it too. They want to implement the paywall in the user agent so they can still get their content into Google. At the same time, some of them are trying to argue for payments from Google for linking to that content. The situation is a mess, but it's not about social contracts at all.
This doesn't necessarily prohibit the use of paywalls: the paywalled site just has to be designed so that it only sends content to clients that have provided evidence of payment rather than relying on the content being rendered in a way that prevents the reader from reading all of it.
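A server designed that way can be sketched in a few lines. The token scheme and names here are hypothetical, just to show the check happening server-side before any content is emitted:

```python
from http.server import BaseHTTPRequestHandler

# Hypothetical set of tokens issued to paying subscribers.
PAID_TOKENS = {"token-issued-after-payment"}

class PaywallHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        token = auth.removeprefix("Bearer ")
        if token in PAID_TOKENS:
            status, body = 200, b"The full article text."
        else:
            # No evidence of payment: the full text never leaves the server.
            status, body = 402, b"Subscribe to read this article."
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Nothing to hide in CSS, and nothing for an add-on to unhide: the unpaid response simply never contains the article.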
Much as I'm for standards-compliant HTML rendering, I don't want any specific rendering encoded into law.
Basically it sounds like some publishers want to rewrite the rules of the web to neatly and conveniently serve their interests.
Browsers already have a mechanism for authorizing access to content. Many mechanisms, really. If a company chooses to use an unreliable mechanism, I don't think we are morally obligated to roll over and do what they want.
There is currently no way I know of allowing this functionality that is meaningfully "reliable". At least not without erecting new barriers such as mandatory registration (and verification).
So you, and everyone else, seem to be demanding that publishers just switch to "absolute paywalls." You will then have strictly less access than before.
How is that supposed to be better?
But if publishers do want absolute control over who views their content, then they should make people log in. I think that would be a mistake, but it's their mistake to make. I'm just opposed to giving them legal control over what I do with my computer and any data their servers freely give me.
Viewed under a magnifying glass, it seems like a good thing to ensure the publishing industry can use soft paywalls, but taking a broader view, I think that breaks the web. I don't think soft paywalls are worth breaking the web, and even more broadly, governments should only regulate what users can do with data willingly sent to their computers in very exceptional circumstances.
It may mean that without a harsh litigious remedy, publishers stop offering "soft paywalls." More likely, they will continue to do so in order not to be shut out of the market, AND we won't have waves of frivolous lawsuits bankrupting independent developers.
The alternative you're proposing is, in effect, to be held to ransom by content providers. Your argument could equally be made for any form of DRM or appeasement (without root, how do we know you are not recording this; without your location, how do we know you are within our licensing area; without direct retinal scans, how do we know who is really watching; etc.).
Compuserve paywalled all of their content way back in the 1980s.
Paywalls were used in the adult industry even before the 2000s.
Edit: the link, for the willfully dense, is that some protocol specification does not imply its users’ acceptance of any and all (ab)uses possible. It’s completely different only in that you like one but not the other.
(This is mostly to show that I, too, can argue with invented “contracts”).
I pay for my internet connection and I expect others who make use of the same network to follow its protocols. I expect HTTP to work as HTTP. If a bad network node chooses to circumvent how it works and I find a way around their circumvention then I should not be punished. On the other hand, if I republish and break copyright it is an entirely different matter.
> There’s also a social contract about respect for the law.
Which law, and as it applies in which context?
In practice, American legislators have to raise large amounts of money just to stay in office. So it's unsurprising that they listen mainly to the concerns of people with money: https://www.vox.com/2014/4/18/5624310/martin-gilens-testing-...
We are in an age where information distribution is approximately free. And where the costs of producing information-based products has declined drastically. Paywalls are not any sort of law; they're an attempt by information producers to have their cake and eat it too. They want the abundance of the new, computer-driven world. But they want artificial scarcity, so that they can charge like they did in 1970. My sympathy is with journalists here. But that doesn't mean I have to roll over and let them break the web.
The laws in question here, the DMCA and the CFAA, are both too new and too old to be reflexively bowed to with "OMG spirit of the law!" Too new because they are obviously part of society's attempts to deal with technology, and are equally obviously driven by the interests of people who have piles of money and are trying to protect those piles. And too old because both, but especially the CFAA which was written in 1984, reflect a very early understanding of what computers are, what they're for, and how society should treat them.
As far as I'm concerned, this is an ongoing negotiation. We need to find ways for journalists to get paid. We need to find ways to have a much more informed populace. We need to preserve as much as possible of the freedom computers and networks have given to every individual.
If we want paywalls to be a part of law, then we should have a national debate about whether publishers deserve a special protection so they make all of their content available, let it be downloaded to any computer in the world, and then have that computer enforce any restrictions they think are good ones. We are not obliged to just concede that debate right at the beginning just because you think reading an infinitely replicable article already on your computer is exactly the same thing as stealing a unique piece of fruit from a blind man's store.
Exactly. 99% of the arguments here are essentially equivalent to "since Marriott/SPG are horrible at cyber security, I am ENTITLED to use stolen credit cards from their leaks."