How we broke PHP, hacked Pornhub and earned $20k (evonide.com)
327 points by KngFant on July 23, 2016 | 104 comments

The takeaway:

    You should never use user input on unserialize. Assuming that 
    using an up-to-date PHP version is enough to protect 
    unserialize in such scenarios is a bad idea. Avoid it or use 
    less complex serialization methods like JSON.

Actually, the takeaway is not that "you should never use user input on unserialize." It is that you should NEVER TRUST USER INPUT. This rule is as old as computing itself, and trusting user input has always been the beginning of a security vulnerability. You need user input, you will use user input, but you must understand how it's used, and filter and strip away everything that is not needed.
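The same pitfall isn't PHP-specific. As a rough Python analogue (illustrative, not from the article), pickle plays the role of unserialize, and JSON is the less complex alternative the article recommends:

```python
import json
import pickle

# UNSAFE: pickle, like PHP's unserialize(), can instantiate arbitrary
# objects and run attacker-controlled code via __reduce__ during loading.
def load_profile_unsafe(raw: bytes):
    return pickle.loads(raw)  # never do this with user-supplied bytes

# SAFER: JSON can only yield plain dicts, lists, strings, and numbers,
# and we still check the shape before trusting it.
def load_profile(raw: str) -> dict:
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    return data
```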


I think a lot of people are reading this and thinking that this advice is too broad, packing so much into so few words.

It's the same caliber as "don't trust strangers" and "be responsible for your actions". There is undeniable truth in it, you cannot go wrong following it, and it's difficult to argue against. But I think that's what makes it counterproductive and basically devoid of meaning.

Effectively, if the rule is as old as computing but still has to be voiced, it means it's not a simple rule to begin with.

As you put it, "you must understand how it's used and filter, strip everything that is not needed away". This basically means that every time you have user input, ideally you'd have to audit all the libraries and frameworks accessing the data to check how they use it, and filter accordingly. Pass a construct to a JSON library? First check what the JSON library does with it, and sanitize your input against everything that could be harmful to the library. This solution is voiced in one sentence, but would mean hours or days of library auditing in a real-world scenario.

TL;DR: you can't cover for everything, you have to choose your battles. Knowing which libraries have known vulnerabilities is valuable info.

That's not actionable. You have to deal with user input at some point.

What if my language can't deal with strings properly? Maybe strpos has a buffer overflow on a carefully crafted input. Would I be wrong for using it?

Never trust user input doesn't mean never use user input; you need to use it carefully -- often that means restricting to acceptable values and lengths, (appropriate!) escaping, and passing along the fact that it's user input to other functions (e.g. SQL placeholders). When functions do arcane things and don't let you pass along that it's user-provided, that's a red flag.

Note that a cookie that you set, and are now getting back IS user-input, unless you do something to validate that it's actually the value you set. (HMAC is a good start)
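A minimal sketch of that HMAC validation in Python (the secret and cookie format here are hypothetical):

```python
import hmac
import hashlib
from typing import Optional

SECRET = b"server-side secret, never sent to the client"  # hypothetical key

def sign_cookie(value: str) -> str:
    # Append an HMAC tag so we can later verify the cookie really is
    # the value we set, not something the user made up.
    tag = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{tag}"

def verify_cookie(cookie: str) -> Optional[str]:
    value, _, tag = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return value if hmac.compare_digest(tag, expected) else None
```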

If your language can't deal with strings properly, I strongly suggest you not expose it to strings provided by users. If you do expose it to strings from users, at least you should sandbox your application as much as possible.

As a mentor, I advised someone doing their master's in information security. My student did their dissertation on input scrubbing. We did quite extensive research on the subject, and we found that a simple AWK program doing regular expression matching on the input, before passing it on to conventional scrubbers inside languages like PHP, virtually eliminated attack vectors. For three months we tried our very best to craft some SQL code to get past the AWK regex, and we couldn't. Lesson learned.
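The dissertation's actual AWK program isn't shown; here's a rough sketch of the idea (in Python rather than AWK, with a deny-list that is only my guess at what such a pre-filter might match):

```python
import re

# Hypothetical deny-list in the spirit of the pre-filter described above:
# reject input containing common SQL-injection building blocks before it
# ever reaches the application's own scrubbers.
SQL_SUSPECT = re.compile(
    r"(--|;|/\*|\*/|\bunion\b|\bselect\b|\bdrop\b|\binsert\b|'|\")",
    re.IGNORECASE,
)

def prefilter(line: str) -> bool:
    """Return True if the input looks safe enough to pass on."""
    return SQL_SUSPECT.search(line) is None
```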

> Lesson learned.

Without meaning to be sarcastic (particularly because I found your post interesting), what lesson learned? A casual perusal of your post suggests the lesson "one can't craft SQL code to get by an AWK regex", but, of course, "what I can't do no-one can" is a bad lesson to learn in security.

The lesson we learned is that sometimes getting back to the roots (AWK) and using simple methods (regex) can be extremely effective. You are of course right that "what I can't do no-one can" is a bad thing.

IMO the real lesson is to define a problem simply enough that you can apply a simple solution. This is not a given, and usually needs serious design and project management skills.

Otherwise even your simple solution would drown in "can you support multi-byte characters? Do you handle non-Unicode stuff? What if it leaks in your layers of code before reaching your AWK filter?" and other problems that abound in most mildly complex projects.

> your lesson is to get to define a problem simply enough that you can apply a simple solution.

Hear hear! So true. The problem is that making complex things simple is extremely difficult.

It is actionable, sort of, but it takes a lot of careful thinking about what you can safely do with data you do not trust.

E.g. you need to assume that every function you pass that data to will be subjected to malicious or accidentally broken input. I originally wrote "unless/until you have sanitised the content", but really, when building applications taking user data, just assume that you're dealing with malicious or accidentally broken input everywhere unless you have proven otherwise.

This goes from the trivial: Range check numbers; check length of strings, and verify any other constraints (encoding, character set limitations) that you may later depend on.
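A sketch of that trivial end, assuming a hypothetical username field (length bounds plus an explicit character whitelist):

```python
def validate_username(raw: str) -> str:
    # Length bounds first, then an explicit character whitelist
    # (ASCII letters, digits, underscore), enforced before the
    # value goes anywhere else in the application.
    if not (3 <= len(raw) <= 32):
        raise ValueError("username length out of range")
    if not raw.isascii() or not raw.replace("_", "").isalnum():
        raise ValueError("username contains disallowed characters")
    return raw
```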

To the very complex: Going to re-use end-user HTML? May sound simple, but you basically need an HTML parser with explicit white-listing of tags and attributes, and if you allow CSS you need to parse and white-list CSS attributes too (biggest risks: the chance of executing malicious JS in the context of another logged-in user, including in your admin interface; the chance of causing unintended side effects if you allow triggering HTTP requests - as a minimum, even assuming nobody is still careless enough to trigger side effects on GET requests, and assuming that's all they are able to trigger, it has privacy impacts, including the chance of leaking details about your admin systems or any third-party systems you pass the HTML on to).
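A bare-bones sketch of that white-listing approach using Python's stdlib parser (the tag list is illustrative, and attributes are simply dropped; a real sanitizer would also whitelist attributes and CSS per tag, as noted above):

```python
from html.parser import HTMLParser
from html import escape

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}  # illustrative whitelist

class Sanitizer(HTMLParser):
    """Rebuild HTML keeping only whitelisted tags; everything else is
    emitted as escaped text. All attributes are dropped in this sketch."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html_in: str) -> str:
    s = Sanitizer()
    s.feed(html_in)
    s.close()
    return "".join(s.out)
```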

In general it means you have to understand all the ways the type of data you allow can go from being an innocuous, inert sequence of bytes to triggering effects that may be under the control of a potentially malicious user; and if you don't know, then the format needs to be assumed not to be inert when passed to any given piece of code.

E.g., to take a much simpler example than HTML: consider passing arbitrary XML to an XML parser in order to validate it against a schema as a way to sanitise it. Could be a smart thing to do. Except that, even assuming the schema is strict enough, a malicious XML document passed to a parser that's not explicitly configured otherwise may be able to make HTTP requests with a source IP on your internal network (by specifying a suitable URL for the doctype).

It doesn't need to be malicious either - I've seen plenty of systems have throughput fall through the floor because someone didn't handle this case and suddenly got a bunch of XML documents with a doctype URL that took ages waiting for requests to a downed nameserver for some third-party domain to time out.

In this case you'd also better be sure you don't have any services that are "protected" only by being behind a firewall and that allow side effects via GET requests (there's a good reason never to allow side effects via GET requests, and never to allow unauthenticated services even behind your firewall, on the assumption that somewhere, sometime, you will slip in this area and allow a user-supplied URL to be retrieved from an internal IP, given the multitude of formats that can include URLs).

And yes, if there's a risk of strpos having a buffer overflow, you are now SOL if you haven't validated your input in a way that prevents it, and while that's an unlikely case, it is an important illustration of the overall point:

All third-party data is unsafe until proven safe in the context of the code it will be passed to.

As a wider point, you should consider not only your own immediate usage, but whether or not a given piece of data may ever be passed on to a third party API etc., as whether or not you consider their own security lapses to be their problem, it can also harm you.

As a corollary, you should assume any data coming from a trusted partner is as unsafe as data passed to you from a known hacker.

It's with data as with unprotected sex: when you take data from someone, you're exchanging data not just with them, but with everyone with access to their systems and anyone they exchange data with.

Don't assume they're being safe - it takes just a single slip-up in their data handling before what you might think are "safe" data fields provided by your partner are actually unvalidated content provided by a malicious user. You may think you know the source of the data when taking a feed from a trusted partner, but you don't - not really.

To the extent that you should not just treat individual fields as supplied by potential malicious users. You should treat their entire supplied data feed as supplied by a potentially malicious user. As for why, consider the equivalent of SQL injection applied to whatever format your partner is passing you. Or they may have been hacked.

The TL;DR boils down to pretty much the comment you replied to. Anything longer, including the above needs to come with a big, huge caveat: It's NOT complete.

You can write books about the ways data-validation can go wrong and things to look for, and what I've written above just scrapes the surface in a few very unsatisfactory ways (except, hopefully, by terrifying you). You need to always approach it assuming the worst.

    It's with data as with unprotected sex: when you take data from someone, you're exchanging data not just with them, but with everyone with access to their systems and anyone they exchange data with.
I'll start calling airgapped systems abstinence-only networking.

As we all know abstinence-only doesn't work, so maybe there are stronger parallels here than at first glance. ;)

Well, it works if you actually practice it...

In both cases, it's much easier said than done.

Even better would be to not trust what one's application is returning back, and scrub the output in addition to scrubbing the input.


That is not clear at all and pretty useless. What does it mean? I should not accept any user input at all?

> strip everything that is not needed

That does not always work. What if I have a comment form that should accept any characters?

>What does it mean? I should not accept any user input at all?

No, it means you should never assume that user data is safe, or even sane. Assume, rather, that everything every user is sending you is malicious, all the time, and write your code accordingly.

>. What if I have a comment form that should accept any characters?

First, you probably shouldn't, because your database and HTML should be using explicit character encodings, so a comment form that accepts anything doesn't make a lot of sense. How are you expecting to deal with "any characters"? What happens when they paste in a binary blob, or javascript code?

Secondly, assuming you want to do that, you still shouldn't trust the data. Add it to the database using parameterized queries, escape it when rendering, never mix it in to javascript variables and never serialize it into a format designed to unserialize executable objects.
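A compact Python/SQLite sketch of those two habits -- parameterize on the way in, escape on the way out (the schema is hypothetical):

```python
import sqlite3
from html import escape

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

def add_comment(body: str) -> None:
    # Parameterized query: the driver keeps the data out of the SQL
    # text, so quotes in the comment can't change the statement.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (body,))

def render_comments() -> str:
    rows = conn.execute("SELECT body FROM comments").fetchall()
    # Escape at output time: the stored text stays raw, the HTML
    # stays inert no matter what characters the comment contains.
    return "".join(f"<p>{escape(body)}</p>" for (body,) in rows)
```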

It's not an unreasonable burden to expect web developers to at least be aware and code defensively. Especially with PHP.

> it means you should never assume that user data is safe, or even sane

I'm curious if Haskell's purity helps developers focus on this issue and therefore makes it easier to mitigate. Given that all user input/state already has to be handled carefully (for ex: with monads). It will be obvious in the codebase which parts need to be zero'd in on for possible attack vectors.

Haskell's web frameworks help, but it's nothing to do with purity. In fact any web framework can do this, you segregate user-supplied data and ensure it can never be supplied to an untrusted function without explicit cleaning.

Perl and Ruby have included this as a 'tainted' flag, many functions cannot be called with a tainted string.
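The taint idea can be approximated in any language with a marker type. A Python sketch (all names here are hypothetical; Perl's actual taint mode is built into the interpreter):

```python
class Tainted(str):
    """Marker type for user-supplied strings, in the spirit of Perl's
    taint flag. Sensitive sinks refuse Tainted values until cleaned."""

def untaint(value: Tainted, allowed: str) -> str:
    # "Cleaning" here is a whitelist check; on success we return a
    # plain str, which sinks will accept.
    if all(c in allowed for c in value):
        return str(value)
    raise ValueError("input failed whitelist check")

def run_query(fragment: str) -> str:
    # Illustrative sink: refuses tainted strings outright.
    if isinstance(fragment, Tainted):
        raise TypeError("refusing tainted string in SQL sink")
    return f"SELECT * FROM t WHERE name = '{fragment}'"
```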

>I'm curious if Haskell's purity helps developers focus on this issue and therefore makes it easier to mitigate

No, haskell's type system does, not its purity.

>Given that all user input/state already has to be handled carefully (for ex: with monads)

What? Monads are not some mythical beast, there is nothing "handled carefully" about it. A monad is just a general interface.

I'm talking about clear separation of state via Monads making it easier to focus on the riskier input. For example when you are doing code reviews. That is about the developer nothing to do with some inherent functionality of Monads. Not sure what it being a "general interface" has to do with that. Maybe I wasn't clear.

>Not sure what it being a "general interface" has to do with that

I was explaining what monads are. You have read some of the weird misconceptions about haskell and monads and are now repeating them.

How does purity not help the developer focus in on areas where user input might negatively affect the codebase or system?

I used monads only as an example (in brackets) but it's hardly the only form where purity creates clear divisions in the codebase. So, once again, the specific functionality of monads has nothing to do with it. I'm speaking about the coding style that Haskell promotes making detecting bugs easier during code reviews.

Having done security reviews of code I would have loved to have that type of distinction exist in a codebase compared to the usual OOP mess (with PHP or Ruby for example, which is typically the only paid work available in infosec) where state/user input is leaking all over the place. Compared to clearly defined paths containing IO - which are separated from pure functions which don't have unexpected side effects and require far less scrutiny.

I'm answering my own question here but I was hoping someone who has more experience than me at conducting security-centric code reviews would chime in to provide their perspective.

I don't know how to be any more clear. Monads are not an example, that is the point. It is like saying in python "Given that all user input/state already has to be handled carefully (for ex: with dictionaries)." It is nonsense.

I'm not being a smartaleck, but "you shouldn't be writing code" with your attitude/approach. "NEVER trust user input" is an important security mantra to learn all on its own, like "wipe your butt/wash your hands" is in another context.

The guy is making a valid point on Hacker(!) News. People writing comments on HN (especially to summarize a takeaway from a longer-form article) are not required to accurately recapitulate entire dossiers of how to process input. It is completely valid to say "you should never trust user input". Somebody who is looking to make that "actionable" or "clear and pretty useful" can very, very easily google the phrase and will turn up a lot of useful answers and information.

This is what is meant by the idea that the simplicity of the iPhone UI and/or automated IDEs has created a generation of helplessness and entitlement.

The good advice remains good advice: you should never trust user input. If you can't turn that into sound advice from Hacker News, your options become limited: nobody should trust the code you write, you shouldn't write code, or you shouldn't read Hacker News for advice.

But the idea that people need to write what you personally need to hear or they shouldn't write comments? That's nuts. Could I have written a more useful comment to you and to the community? I'll tell you this, I did think about it, and this is my best shot at what I thought you and the community could benefit from!

There used to be a guy on usenet news who posted all sorts of stuff, and had the name of his company in his .sig line, and he included the phrase "these ARE the opinions of my company" instead of that boring old boilerplate "none of the opinions I express are..."

   "NEVER trust user input" is an important security
   mantra to learn all on its own, like "wipe your
   butt/wash your hands" is in another context.
Even the contexts are not so different. DNA is an information carrier, life is an information system, hygiene and the immune system are information security mechanisms.

Though I am not sure who the user is in this analogy.

You're confusing "trust" with "use," which appears to be the cause of your apparent bewilderment. I could be wrong, however.

Suppose you were handling snakes. Some snakes are not poisonous, and don't need to be handled with the care you'd handle, say, a black mamba with. However, you are being advised to treat every snake you encounter as though it were the most poisonous snake known, and apply every care that you normally apply to snakes that you know are poisonous.

Will you handle the snakes? Yes, you're a snake handler, remember? But you handle all of them like they are deadly, even the ones you "know" to not be deadly.

On a side note; snakes are venomous, not poisonous. Your point is well taken though - and I've come closer than I'd like to a few tiger snakes in my area.

User input ought to be treated with the same kind of respect, although with user input the option of giving it a wide berth isn't always as practical.

Even JSON isn't great. It's still a hash-collision DoS attack vector.


Dude, that's horrific. I figured some secure coders would've at least implemented a better JSON one by now, since it's relatively simple. Or are they already available, but devs often rely on these broken ones?

Hash functions designed for hash tables are generally not hard to find collisions for, so there's not much that can be done. You could shoehorn in a secure hash function, but that would hurt performance.

Having per-table random seeds and a properly designed hash function prevents remotely forced collisions. This is the typical defense against hashtable DoS.

It has to be done properly, of course -- for example, a poorly designed hash function could have characteristic collisions for many different seeds.
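A toy illustration of per-table seeding (FNV-1a with a mixed-in seed; as the comments here point out, naive seed mixing like this can still be broken by a determined attacker, which is why purpose-built designs like SipHash exist):

```python
import random

def seeded_fnv1a(data: bytes, seed: int) -> int:
    """FNV-1a variant with a per-table random seed mixed into the
    initial state, so bucket positions differ between table instances."""
    h = (2166136261 ^ seed) & 0xFFFFFFFF
    for b in data:
        h = ((h ^ b) * 16777619) & 0xFFFFFFFF
    return h

class Table:
    def __init__(self, buckets: int = 64):
        self.seed = random.getrandbits(32)  # fresh seed per table
        self.buckets = buckets

    def bucket_of(self, key: bytes) -> int:
        return seeded_fnv1a(key, self.seed) % self.buckets
```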

Please be aware that many common hash functions are easy to generate collisions for independent of the seed. This includes both MurmurHash and CityHash. These aren't poorly-designed hash functions: they have excellent distribution characteristics. But the way they use their seed wasn't designed to resist this kind of attack.

Rather than "poorly-designed", I should have said "cryptographically insecure".

Most hash functions are engineered for speed and collision resistance, in that order. Trading collision resistance for speed is worthwhile for many workloads, since it barely affects the average case.

> Most hash functions are engineered for speed and collision resistance, in that order.

I disagree with that assessment. I think most hash functions are designed for speed and good distribution of outputs given normal inputs. But designing to resist collisions against someone trying to deliberately create them is a different thing entirely.

> Trading collision resistance for speed is worthwhile for many workloads, since it barely affects the average case.

I think that is far from established. SipHash is marketed under this premise, but from what I have heard it is significantly slower, particularly for short inputs.

Yes, speed for normal inputs is what's generally desired.

A faster hash function can yield better overall performance than having fewer collisions. Here's some empirical evidence for this: https://www.strchr.com/hash_functions

The speed is correlated only weakly with the number of collisions-- and using the modern x86 CRC32 instruction yields the best results.

I suggest somebody test SpookyHash-128 similarly to those to see how it performs. Designed to be collision resistant and fast.


Cuz I either forgot it or never heard of it. Also looks to be the one the Perl comments are referencing. It has good performance and published cryptanalysis. That's awesome! Looks to be a good default for this sort of thing. Thanks for the link. Definitely going in my bookmarks & probably future apps :)

You can randomize some parameters of the hashing function when you detect too many collisions, which makes a DoS much harder.

The Perl interpreter has been doing that for ages.

I think that most major languages have fixed this for years.

Python in 2012: http://bugs.python.org/issue13703

I was thinking more along the lines of (a) does JSON handling need hash tables, and (b) if so, do they have to be open addressing or something else vulnerable to DoS? A "no" on either can lead to an implementation with better DoS resistance.

For B) All hash table implementations are vulnerable to this kind of attack. However, some kinds of chaining are less vulnerable. In particular, balanced binary tree chaining would make the worst-case lookup (and insertion?) time complexity O(log n), which is a significant improvement over probing or linked list chaining. The tricks mentioned in the other comments above also improve things, by making such an event less likely.

As for A), no. No, JSON handling doesn't need hashtables. Since your app will only look at certain values in the JSON, you can simply ignore all the other values, and dump the values in an object/struct of your choosing. It wouldn't even be all that hard to write, provided you know how to write a parser...
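In Python, for example, json's object_pairs_hook hands you the key/value pairs before any dict is built, so attacker-chosen keys are only compared against a fixed set, never inserted into a growing hash table (sketch; the field names are hypothetical, and nested objects would need per-level handling):

```python
import json
from typing import NamedTuple

class Comment(NamedTuple):
    author: str
    body: str

def parse_comment(raw: str) -> Comment:
    # object_pairs_hook receives the pairs as a list; we pick out the
    # two fields we care about and ignore everything else, without
    # ever building a dict from attacker-controlled keys.
    def pick(pairs):
        author = body = ""
        for k, v in pairs:
            if k == "author":
                author = v
            elif k == "body":
                body = v
        return Comment(author, body)
    return json.loads(raw, object_pairs_hook=pick)
```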

> Since your app will only look at certain values in the JSON, you can simply ignore all the other values, and dump the values in an object/struct of your choosing. It wouldn't even be all that hard to write, provided you know how to write a parser...

That's what I was thinking. Never seen verified parsers or generators requiring these things. So I figured it was an unnecessary requirement bringing in its own security issue.

A large part of the problem is that people want to be able to "just" de-serialize a chunk of JSON into a suitable generic structure they can then "just" de-reference as a suitable tree of dictionaries and arrays. It'd be fantastic to see easier-to-use patterns to discourage resorting to this, but it's very hard to beat for simplicity.

Or you could use tries.

Reminds me of a post that mocked how PHP 7 "improves" the security of the unserialize function, and the people on Reddit at /r/PHP defending it...

[1] https://www.reddit.com/r/PHP/comments/3j88v4/something_about...

An actionable takeaway is to disable eval in php.ini. Not always practical, but I doubt everyone needs it.

OT: Is there a site that curates these kinds of interestingly detailed hacks? Like Dan Luu does for debugging stories? (https://github.com/danluu/debugging-stories)

Interestingly, I've noticed just about every well upvoted story on /r/netsec will hit the HN front page 2-3 days later.

Arbitrage opportunity there: submit them to HN yourself.

I've been guilty of doing that before. Also, from top r/reverseengineering, though that subreddit is a little less traveled so doesn't get as many comments, so it's interesting to read what HNers think.

Some sort of "hack news", maybe. :-)

That moment when the company you work at is on the front page of Hacker News xD

Seriously, please do an AMA. As a developer, I am very curious about how it feels like working for a company like that :)

Some other Pornhubbers did one 2.5 years ago: https://www.reddit.com/r/IAmA/comments/1un3wn/we_are_the_por...

In terms of working at the company, aside from the adult content on people's screen, discussions of adult content, etc it is honestly no different than working at any large company.

I feel like you and I know each other ;)


This is an elaborate hack and a very detailed writeup. Thanks for sharing.

> Using a locally compiled version of PHP we scanned for good candidates for stack pivoting gadgets

Surprised that worked. Guess they got lucky, and either got the compiler and optimization flags the same as the PHP binary used, or the release process creates highly similar builds.

They mention that PH had a custom compiled PHP and that's why they couldn't get the address of the function they wanted to call for evaluating code.

My understanding is that ROP gadgets are a separate issue. Basically you want to find a function that compiles to assembly instructions resembling the ones you need to move the stack pointer to your desired location. Testing this locally shouldn't be a problem, because functions across builds will compile to the same assembly instructions (even if their headers have different load addresses).

Again, that's my understanding - I have a very vague grasp of this stuff.

Really good write-up. Some people are really smart; I wouldn't ever be able to do that kind of stuff, even after programming for years.

As well as being good, they'll also be very experienced. What you're seeing in that post is specialised knowledge, likely built up over many years. We can't all know everything, as much as we'd like to!

I have some questions about two things in the exploit code that puzzled me:

  my $php_code = 'eval(\'
     header("X-Accel-Buffering: no");
     header("Content-Encoding: none");
     header("Connection: close");
     echo file_get_contents("/etc/passwd");
1. They seem to be using PHP to code the exploit (solely based on the $ before the variable name), but I've never seen the 'my' keyword before. What exactly is this language?

2. If I understand the exploit correctly, they got remote code execution by finding the pointer to 'zend_eval_string' and then feeding the above code into it. Doesn't that mean the use of 'eval' in the code that is being executed is unnecessary?

>i've never seen the 'my' keyword before, what exactly is this language?

It's Perl: http://perldoc.perl.org/functions/my.html

Looks like perl, seeing the `my`.

Appears to be experiencing the hug of death. May be quite slow

Yes, that's sad. There is more discussion about this here: https://www.reddit.com/r/netsec/comments/4u86a4/how_we_broke...

I guess the site is served using PHP.

That's funny, because in my experience PHP is one of the fastest languages.

FWIW, it places well across TechEmpower benchmarks:


Raw PHP is reasonably fast. The performance issue comes with loading source files on every request: http://talks.php.net/show/froscon08

This means that there is a conflict between performance and having a well structured object oriented framework.

Demand loaded classes and byte code help a lot with that: http://www.yiiframework.com/performance/

Best would be a model where a persistent process handles multiple requests concurrently, but that is not normal for PHP. So you need to make sure that all the libraries you are using are not leaking, have a nice db connection pool, and write a PHP framework that handles concurrency. Might as well use a better language at that point :-)

> This means that there is a conflict between performance and having a well structured object oriented framework.

What do you need a 'well structured object oriented framework' for? You're going to build up a huge object graph in memory, to output some HTML, and then throw away all the objects at the end of the request. Nobody is going to see your beautiful object tree, so don't bother. A blog entry page should be super simple.

header, title, content, comments, recent comments, footer.

Header and footer are dead simple echos of the boilerplate, maybe replace in the html title or something. Read the title and content from disk[1]. Have another data file for all your articles for the index page.

I prefer not to have comments on my blog, but if you must, you can put them in a database; limit to something like 100 or 1000 comments per article (because really) and limit threading, and it's going to be pretty quick to query them (make sure your webserver is doing reads from a database in the same metro area, if not on the same box).

Recent comments is across all blog entries; I would probably add a index on the time in the comments table and just select 2 from there; you could union that into the earlier comments query if you don't want to make two round trips to the database.

You don't need to do this with concurrency, each page load has barely anything to wait for, so more threads doesn't help throughput. Run enough php workers (php-fpm, or apache children if you're using apache_mod_php) to keep your cpu busy, and you're golden.

[1] There's four articles on this blog -- it doesn't need a database. PS run php as a user that can't write anywhere on the disk, and push the blog entries and the summary datafile with another user.

Edit to add: If you skip comments (or outsource to disqus or some other comments w/ javascript platform), you can make the whole site just static html, and leave PHP at home. OTOH, these guys are running Wordpress, because they like frequent security updates?

> The performance issue comes with loading source files on every request

PHP ships with an opcode cache built-in (and at least on every distro I've seen, enabled by default) since PHP 5.5 that keeps the compiled bytecode in shared memory

Must be a funny comparison group you have there.

I'm certainly positive it's faster than Ruby, Python and Java.

Ruby and python aren't exactly known to be fast.

In my experience php is fast enough until you start generating lots of garbage. It seems it wasn't really designed to garbage collect at all, but to rely on the per-request cleanup.

> until you start generating lots of garbage

We should really stop pretending that the garbage collector is the problem with languages. The collector isn't the problem; your garbage is the problem.

[Not that I'm a proponent of PHP, though it does make popping shells far more fun.]

The code becomes kind of weird when you have to second guess the language.

Now, the approach PHP takes is fine for the original vision of the language. It even works great.

But languages designed for long running processes often have some sort of mechanism for dealing with that situation explicitly. Like the `NSAutoreleasePool` in Objective C. In C++ you might build your own custom slab allocator.

I'd say the garbage collector is one of the problems with PHP. Then again, if you run into it, PHP might not be the right tool for that particular job.

It depends on the code base and use case. For the vast majority of coders, the time/cost savings will be in the usability of the language itself rather than the hardware required to run the code.

Sure, tell that to Reddit.

I'm very curious how you went from "the vast majority of coders" to "tell that to Reddit".


From a legal perspective how do companies and hackerone create a binding exemption from laws used to prosecute hackers?

Pornhub have active bug bounties. In general you have to sign up and abide by the rules, which generally say how far you can take an exploit, i.e. prove it works but don't fuck with the actual data just to show you can. Your exploit would show that you could, and that's what they want you to do.

In the US the law is against unauthorized access. If a company agrees to let people try to hack their stuff, then the access is authorized and legal.

To what extent? What if you do something by accident that ends up messing up their stuff? Just the first example I can think of: you figure out a way to reboot an instance, which lets you exploit a race condition in some auth code, and you don't realize that the instance you're killing is critical for some other function (let's say billing), and you end up causing some real monetary damage, even though you had no idea.

Are there any legal precedents for this?

It's in the agreement between you and the company. Usually it says that if you cause a side effect like that, you are at fault.

> binding exemption

Two words -- honor code. Rock the boat and you will find yourself in an unpleasant situation, so instead everybody does good work and nobody asks too many questions.

Honor codes for stuff that traditionally involves corporations going after individuals with criminal charges? I feel that's a bit of a crazy proposition.

When people say "honor code" around me, it usually means, "Do something honorable, even though it's against your self interest."

For both white hats and Pornhub, the legal/authorized bounty system is in their interest. White hats are making less money than some black hats, but they're not constantly terrified of being prosecuted under intense anti-hacking laws. Pornhub is spending a lot less than they would if they were hacked by black hats. Both parties win.

Most people would probably expect Pornhub to be more honorable than e.g. AT&T...

Crazy indeed, but it happens to be the case.


If they prosecute a white hat there will only be black hats left. It's not a legal perspective but it keeps the honour code working.

ehhhhh, a real answer here dictates whether I sign up to hackerone on TOR and request bitcoin payouts, or if I do it on clearnet, fill out 1099s with my real/entity name, and link my bank accounts

So does Pornhub's bug bounty program include some number of years of free paid membership along with financial bounties? Kind of a "treat us right and we'll let you treat yourself right" kind of thing?

If you know enough to be able to pull off hacks like this, you surely know enough to get more of this kind of thing than you'd have time to watch in 10 lifetimes. I doubt that such a "reward" would be very appealing to those who participate in the bug bounty program, and it'd honestly be a sleazy business proposition, which would harm the professionalism of operating a successful bug bounty program. And yes, I'm completely serious.

I'm glad you were serious, because frankly I wasn't. I'm sure there's more than enough "free" porn out there for just about anyone's taste, so I doubt that free accounts would be a significant incentive anyway.

But meh, sometimes jokes fall flat.

Too bad they didn't just go ahead and:

> Dump the complete database of pornhub.com including all sensitive user information.

And of course leak the data to expose everyone that participates in this nasty business. It is such a sad thing that people are even proud to work at companies like this where humans are not worth more than a big dick or boobs.

And then you get around and say that child porn is so horrible. No, all porn is horrible and destroys our families and integrity. How can there be any dignity left if these things are held to be something good?

Righteous much? The most insidious prison one could ever be put into is the arbitrary restriction in one's own mind. Luckily for the rest of us, we are in the 21st century and most people are educated enough to not believe in nonsense like witches any more. Pornography is heading in the same direction, people are finally starting to realize that sexuality is part of one's natural self. When you consider that each of us was born with a dick or tits, there is nothing wrong by accepting that and getting turned on by appreciating the aesthetics of it. It has been done since antiquity and even before that, and then we regressed into the dark ages of morality. At the very least, it is biology. Science? Yes please!

