Four lines of code for The New York Times’ paywall (niemanlab.org)
62 points by atularora on March 21, 2011 | 24 comments


Am I the only one who assumes the NY Times considered the futility of trying to stop every single 2000-bit hack, and is instead trying to change the way intelligent consumers of news and culture with disposable income approach the Times, and perhaps quality content generally, online?

I think they've done a fair job of trying to handhold users through the transition. I expect the success or failure of the paywall not to depend on the fact of the paywall itself (obviously), but on how easy and frictionless they'll make it to pay. If they can make it graceful enough, I will happily pay for content I know there are loopholes for accessing. If it's a clusterfuck, as with other DRM schemes, I'll either seek out the leaks or give up.

I'm hoping they've adopted something like this approach, a slightly more rigorous version of the NPR pledge model, and are relying on the goodwill and intelligence of their readership and not its stupidity.


I assumed they want to see how people will react (they are in essence A/B testing Canada vs the USA), and that they're hoping people will end up using the service more or less as intended.

It's an interesting experiment. I read more than 20 articles from NYTimes.com a month, but I don't need more than a few. If the blogosphere starts pointing to the BBC instead, I'll end up reading that.

I should probably mention that it's my code (if we can call it that) we're talking about. The traffic is killing my blog now, but I made it a few hours ago out of curiosity, to see if I could hack that (it was 20 minutes all in).


The javascript: http://toys.euri.ca/nyt.js (which is really 3 lines).
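For the curious, the actual script is at the URL above. A sketch of what a paywall-hiding script like this might do (the element ids `overlay` and `gatewayCreative` are assumptions for illustration, not necessarily what nyt.js targets, and it's written as a function over a passed-in document so the idea is clear):

```javascript
// Hypothetical sketch: remove the paywall overlay elements and
// restore page scrolling. Ids here are assumptions, not the real ones.
function hidePaywall(doc) {
  var removed = 0;
  ['overlay', 'gatewayCreative'].forEach(function (id) {
    var el = doc.getElementById(id);
    if (el && el.parentNode) {
      el.parentNode.removeChild(el);   // drop the blocking element
      removed++;
    }
  });
  // The overlay typically disables scrolling on <body>; re-enable it.
  doc.body.style.overflow = 'auto';
  return removed;
}
```

The point is only that the wall lives entirely in client-side markup, so a few DOM calls undo it.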

The NYT does have a really hard problem to solve though. How do you securely police the "20 free articles a week" system, without requiring even a free account to read the content, when you can't trust cookies, can't put validation on the page itself, and IP addresses change all the time?

They are trying to come up with a modern pay solution to journalism and for that I give them props. It's not going to be an easy road though.


How do you securely police the "20 free articles a week" system, without requiring even a free account to read the content, when you can't trust cookies, can't put validation on the page itself, and IP addresses change all the time?

You assume that the number of people with the ability and desire to circumvent your defenses is a small fraction of your total audience, and ignore them.


In other words, exactly the same way you deal with AdBlock on your ad-supported website.


This is, at least obliquely, the subject of an honours thesis I am working on. I'd like to say more but this margin is too narrow to contain the complete proof.


You can trust cookies if you disallow anonymous access and force everyone to sign in to an account, free or otherwise, before reading anything.
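A minimal sketch of what that buys you (this is an assumption about the approach, not the NYT's actual system): with mandatory sign-in, the free-article counter can live server-side, keyed by account, where the client can't reset it.

```javascript
// Hypothetical server-side counter, keyed by account id.
var FREE_ARTICLES = 20;
var counts = {};   // account id -> articles read this period

function canRead(accountId) {
  var used = counts[accountId] || 0;
  if (used >= FREE_ARTICLES) return false;  // paywall kicks in
  counts[accountId] = used + 1;
  return true;
}
```

Nothing in the browser can tamper with `counts`, which is the whole point of forcing sign-in.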


Then there's nothing to stop people from just making a new free account each time they use up their 20 free stories.


Yup, and it could be 1 line if we were playing code golf.

I'm loading that code dynamically from the bookmarklet because I assume that they'll change the ids of the divs at some point.
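The thread doesn't show the bookmarklet's text, but a loader bookmarklet of that shape is conventionally built something like this (a reconstruction, not the actual bookmarklet):

```javascript
// Hypothetical reconstruction of a script-loading bookmarklet:
// the bookmarklet itself stays tiny, and the real logic lives at a URL
// that can be updated when the target page changes its div ids.
var scriptUrl = 'http://toys.euri.ca/nyt.js';
var bookmarklet =
  'javascript:(function(){' +
  "var s=document.createElement('script');" +
  "s.src='" + scriptUrl + "';" +
  'document.body.appendChild(s);' +
  '})();';
```

The indirection is the point: re-host a fixed nyt.js and every saved bookmarklet keeps working.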


Indeed. A simple work-around if you're using Chrome: 1) right-click a NYTimes article, 2) open it in incognito mode.


EverCookie?


I just realized that since the NYT paywall is copy protection, View Source on the NYT is now illegal.


> ... is now illegal.

In a certain country, you may want to add.

Some people are arguing whether the country in question is more of a representative republic or a democracy (guess that's enough snarkiness for one short post).


Although I'm pretty sure that country isn't Canada (where I am and where they're blocking us after 20 articles).


It's actually a pretty long list of countries that have signed that treaty.


Yes, he forgot to add that the imperialistic country in question tries hard to push its draconian ideas about copyright to the rest of the world. Spread the hurt...


And if this fails, "referer: http://google.com/search?q=totallyfake" works just as well.


It was clearly designed to be easy to hack, so they are not afraid of that. What they've really done is set a premium price on their content. Whether people pay it or not, everyone will now feel like the NYT is very high quality. That seems to be the intent.


Has anybody done the math to figure out how many people they'll need to sign up for this thing in order to break even?

They seem to have spent $40M building it and dropped their stock price 3% on the first day it was live. That's a lot of $40 annual subscriptions just to cover the cost of putting it in place.
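The back-of-the-envelope version, using only the figures quoted above and ignoring ad revenue, churn, and running costs:

```javascript
// Break-even on the reported build cost alone, at $40/year per sub.
var buildCost = 40e6;    // reported $40M build cost
var annualSub = 40;      // $40/year subscription, figure from above
var subscriberYearsToBreakEven = buildCost / annualSub;  // 1,000,000
```

A million subscriber-years just to recoup the build, before any ongoing costs.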


They'd set up a method of preventing right-click save of images not that long ago, replacing the image's referencing URL with a small blank block. It was easily overcome by viewing source, but it still marked the start of their effort to protect the content a little more.


Messy Javascript code, indeed. Nice work-around.


Curious as to what this was.


I'm assuming you're referring to the site being non-responsive? Try searching Google for "cache:url" :)


Assuming it's my blog that we're referring to, I mirrored the relevant bits on the slice that hosts the static files: http://toys.euri.ca/



