Hacker News
Teen Becomes First Hacker to Earn $1M Through Bug Bounties (digit.fyi)
583 points by ohjeez on Mar 2, 2019 | 164 comments

Pro tip if you are a startup and want free security advice: just sign up for all the bounty sites, and for every single bounty tell the submitter that it is a duplicate bug and pay them nothing. Then hot-patch it immediately, and when they get suspicious, tell them that their bug report had absolutely nothing to do with the timing of your patch. I know there are companies that do this because I have had it happen twice. There needs to be a bug bounty site with some sort of bug escrow to prevent this behavior.

Edit: pardon the tone, I understand that these types of problems are very very hard to solve because they aren't purely technical and involve humans.

To be on the other side of this: we do receive unsolicited but welcome bug and security reports. Some are legit, and we pay bounties even though we don't have an official policy and are an early-stage startup. Others are just automated reports that people copy and paste. Those are uninteresting, but the people sending them still think they deserve money, and often push for it more aggressively than the legitimate reporters do.

Can you elaborate on the automated reports a bit more? What makes them uninteresting?

I don't run a bug bounty but I do sit on a security@ inbox. I don't believe I've ever seen a report I would want to pay out on even if I could, but if you discount blatant spam (often peddling EV certificates), I've received reports asking about bounties for:

- nginx version disclosed in headers

- "Feature-Policy" header missing

- DNSSEC not set up on zone

- Domain not in HSTS preload list

Responding to this sort of thing with "not a vulnerability" is intensely difficult because of the potential for a PR backlash about "poor security" from people who just don't know better, particularly when the company is definitely not a tech company.

If I remember correctly, VLC regularly receives bug bounty reports declaring the availability of their source code to be a security vulnerability...

And for software like VLC that has a special place in our hearts ('K-Lite Codec Pack' [1] was my special friend until the moment I discovered VLC many, many years ago), there is, or there should be, a firewall rule that says Enabled-Block-Any-Any-In&Out.

[1]: https://www.codecguide.com/download_kl.htm

PS: remember those days when we were hunting for codecs for our Windows players, back in the Win95/98/2000/XP era?

I remember back in the day I used to field security scan reports for a client I consulted for, and the number of things I had to mark as "not actually a problem" because all they did was check Apache header versions was staggering. This is slightly better and slightly worse than your situation at the same time. Slightly better in that they were looking for versions with actual known exploits, but slightly worse in that they had no way to deal with distros that backported security fixes, like RHEL (which was the distro in question). "Yes, I'm aware that the version of Apache you are noting contains a vulnerability. No, it's not actually a problem or exploitable, since I already applied the patched update. Just like last week. And the week before that."

If you were to do some of these (like removing the nginx version), do you think they'd start demanding payment?

Just silence the version header advertisement for the web servers. Make them actually test for vulnerabilities, not just check the version.
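For nginx specifically, suppressing the version advertisement is a one-line change; the `server_tokens` directive controls whether the version string appears in the `Server` header and on error pages (a sketch of the relevant fragment; your config layout may differ):

```nginx
# nginx.conf -- stop advertising the exact nginx version
http {
    server_tokens off;  # sends "Server: nginx" instead of "Server: nginx/1.14.0"
}
```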

Examples of "vulnerability" reports I've received:

- Dump of CVEs for "Web App X" or "Server X", even though literally zero of them apply to the version that I'm currently running.

- Dumps of port scans with warnings like "Running SSH on port 22 is not recommended" and "Server accepts HTTP. Always use HTTPS".

I assume there are tools that generate these reports because the reports use decent English but the accompanying emails are written in very broken English.

I miss the days when nessus was good enough to justify being cool.

Then again my favorite bug back in the day was veritas backup acting as a reverse shell. I only learned of that by running nessus.

What's the justification for running a host that responds to HTTP and doesn't immediately upgrade to HTTPS?

I'm having a hard time imagining a scenario where I manage a web server that is accessible to anonymous people running pen scanners on it that has a justifiable reason for broadcasting port 80.

No, that's the point: the generation script recognizes that the server issues an HTTP-compliant response (which 301 Moved Permanently is) on port 80 and dumbly generates that false positive, not understanding that the only responses on port 80 are upgrades to HTTPS.
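The check the scanner should be doing can be sketched in a few lines; `is_https_upgrade` is a hypothetical helper for illustration, not part of any real scanner:

```python
from urllib.parse import urlparse

def is_https_upgrade(status: int, headers: dict) -> bool:
    """True if the port-80 response merely redirects the client to HTTPS."""
    if status not in (301, 302, 307, 308):
        return False
    location = headers.get("Location", "")
    return urlparse(location).scheme == "https"

# A naive scanner flags this server for "accepting HTTP"...
response = (301, {"Location": "https://example.com/"})
# ...but the only thing port 80 ever serves is the upgrade redirect.
print(is_https_upgrade(*response))  # True
```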

Oh, that makes sense. That does sound annoying.

If you connect remote communities with poor bandwidth, HTTP allows a shared cache behind the bandwidth bottleneck. And other caching scenarios.

Could you elaborate on this? I'm curious as to how a setup like this would work in practice. Many people in my family live in rural areas so the topic of restricted bandwidth/poor connection quality is of great interest to me.


“But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students. Because Wikipedia wouldn’t cache. Google wouldn’t cache. Meyerweb wouldn’t cache. Almost nothing would cache. Why? HTTPS.”

Thanks for the excellent link, discussed on HN a while ago [1]. For those who think an SSL-stripping proxy would solve it, please remember that this would degrade security for requests that really do have to be encrypted.

[1] https://news.ycombinator.com/item?id=17707187

I remember https://sectools.org/tag/vuln-scanners/ for one. There are other software tools that run from your PC and can 'target'/scan a single IP or a range of IPs and return some generic results. I won't get into the Metasploit discussion/area.

Imho (and sorry to intervene), it is uninteresting because each of us can and DOES run these tools and get the same reports, and since we do care enough, these are low-hanging fruits that we have all assessed on day 1 and either addressed or ignored for a valid reason (e.g. on a client site, someone was making noise about a vuln on a system that was a standalone server, disconnected from any network). I understand that security requires that all layers be secure, but we need to use sense and logic before we start yelling 'fire!! fire!!'.

I don't doubt your lived experience, but for real companies, the economics of ruthlessly withdrawing bounties don't make sense; bounties just don't cost enough money to be worth picking fights over.

There are some patterns where I've seen people not get paid just on general principle; for instance, people find systemic issues and, rather than disclosing the root cause, try to claim bounties for every instance of the flaw (you'll get paid, but not for every instance). It's possible that naive development teams sometimes get this confused, and, for example, consider "all XSS" to be a single systemic bug.

>> if you are a startup

> for real companies

I wouldn't consider those entirely equivalent sets. I imagine plenty of startups probably don't fall under the criteria you would consider "real companies", or at least not in the beginning before people have a chance to mature into their roles or flunk out of them.

> the economics of ruthlessly withdrawing bounties don't make sense

The economics of something and how people try to justify it or let their own egos get in the way often don't match. I mean, I still have to kick myself sometimes because while I work at a small company, agonizing over a couple hundred dollars a month in service fee differences is not a good way to spend my time given my hourly rate and the time a more expensive option might save if it does what it says. Ingrained thinking can be hard to overcome.

I've had this happen for pretty large companies. In one case their security team later gave a talk about bugs they'd discovered that included a diagram I'd sent in my description of an issue, which I found annoying.

These days I generally just sit on issues. The work involved in putting together a bulletproof report that can be understood by whoever reads the security alias (could be a security engineer, could be a PHB, could be /dev/null...) is just too high to do for free.

Large companies sometimes do unethical things just because one person or a group of people at them thinks it is a good idea, unrelated to any measurable economic benefit.

I've never had this happen, but what I have had in the past is people saying "This isn't a vulnerability", after which I told them I would go public with an easy PoC that anyone could reproduce.

Example: http://writecodeeveryday.github.io/projects/badqr/

I literally had to twist their arm to get it patched... since it was something meant to 'reduce friction' which allowed you to steal someone's Bitcoin.

At the time, the PoC would have netted me $40 for every person I scammed; today it would be $400 profit, and that tool would generate a QR code telling people there's free Bitcoin at Coinbase, so I bet someone would have used it.

Edit: I told my boss if they didn't do shit about it, I would put that QR code with 'Social Engineering' into Facebook ads since it had just started and see how much money I made out of it.

I had a fun exchange with Amazon AWS a couple of years back. They don't have a bug bounty (still don't, I think), but their response was that they would fix it but not publicly recognize it, because "the cloud is always secure"? Go figure.

This is hyperbolic nonsense. Having worked at AWS, I've never encountered a business that is more serious about their security position.

It's not; I still have the email exchange from a couple of years back. I thought of posting it somewhere because it was so odd, but I don't have a blog and I am not interested in publicity.

Amazon still doesn't offer a bug bounty program, to my knowledge. It's also the only cloud provider that, my active security researcher friends tell me, attempts to regulate them with some weird pen-test authorization requirements that are very foreign to the industry standards of other cloud providers.

I'm just on the sidelines watching, but there is a difference in how transparent AWS vs. GCP vs. Azure are when it comes to security. GCP > Azure > AWS

> pen test authorization requirements

Yes, we don't want people to publicize when we fuck up so we'd rather just NDA them to death when they tell us about bugs.

Edit: If you don't accept, we just use the hacking laws in the US to silence you.

Well, you’re not entitled to conduct attacks on them at all, so why shouldn’t the terms be up to them?

This sounds.. awful. I'm sure there are reasons, but hiding information this way makes you seem incompetent and unsure of yourself (you as Amazon, not you personally) in my eyes.

Edit: I assume you are speaking as employee of Amazon of course, which is not necessarily true.

There’s no way that would be their reason.

It's just reporting, no payouts.

It's more likely that low-hanging-fruit bugs have either been found before or found internally by the company, just not yet fixed. Anything that tools like Burp identify is at risk of having been found already by someone else. It's a tough competition where, in the grand scheme of things, the winner is the company.

  just tell the submitter that it is a duplicate
Or simply close it immediately with a nonsensical message unrelated to the problem report and then immediately change the code. (This happened to me last month.)

Well I do freelance work at a client which paid out about 20k in bounties in the last few weeks.

10k was for a bug that had actually been found by the internal test team on a Friday, after a new release on Wednesday. Over the weekend, however, a bounty hunter/pen-tester discovered the same thing...

There was some internal discussion about paying out this bounty (certainly because an internal ticket already existed with an extensive discussion), but eventually it was decided not to quibble and not to get a reputation for screwing over bounty hunters/pen-testers, especially because this was a guy they had already worked with before; they had actually informed him and a few others specifically about the new release that Wednesday.

They did inform the guy that the internal testing had already found this, but since it was still open on the public-facing service at the time he reported it, they would pay him.

I don't doubt this has happened, but it's also just as likely that they did know about it and some dumbass product owner decided it wasn't a high priority until someone external reported it. My company has hundreds of similar (not just security) issues just lying around.

I found a pretty serious bug in a major service provider’s 2fa practices. The first time I reported it, they told me I was wrong. The second time I reported it, they actually tried to reproduce it and had an “omgwtf” moment.

They closed it with severity 8.8 on hackerone but the bounty wasn’t very high given how serious it was. There’s not really any sorta process for selling your bugs elsewhere though, you know?

>Edit: pardon the tone, I understand that these types of problems are very very hard to solve because they aren't purely technical and involve humans.

Vuln escrow is a trivial problem to solve: just publish timestamped hashes of reports. Anything else is simply inexcusable.
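A minimal sketch of that escrow idea: the bounty platform publishes a hash of each report plus its arrival time the moment it is submitted, so later "duplicate" claims can be checked against the public log. The names here are illustrative, not any real platform's API:

```python
import hashlib
from datetime import datetime, timezone

def log_report(public_log: list, report_text: str) -> dict:
    """Append a timestamped hash of a report to a publicly visible log."""
    entry = {
        "sha256": hashlib.sha256(report_text.encode()).hexdigest(),
        "received": datetime.now(timezone.utc).isoformat(),
    }
    public_log.append(entry)
    return entry

log = []
entry = log_report(log, "XSS in /search via unescaped q parameter")
# If the vendor later claims "duplicate", the alleged earlier report's hash
# must already appear earlier in the published log -- or the claim fails.
print(entry["sha256"])
```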

When this actually happens, we link the duplicate reports together as proof.

Blockchain technology will solve this trust issue automagically!

Maybe you could call them out on Twitter? Or maybe in the future you could submit most of the bug but hold back something critical until they acknowledge it?

What a shitty move. Bug bounty sites should ban companies that engage in this exploitative behavior repeatedly.

Sounds like an actual use case for staking - bug bounties on the blockchain!

Hopefully, probably sarcastic, but just in case you're not: who puts the data on the blockchain?

The company posting the bounty. Third party verifies the bug. Why sarcastic?

Why doesn’t the third party publish the data themselves then?

The third party doesn't know about the vulnerability. Company C posts bug bounty B in contract. Researcher X discovers vulnerability. Validator Y confirms the vulnerability and X gets paid (1-f)B where f is validator fee.

OK, so why doesn't Y hold the money as well, given that they're in the position of deciding whether or not X gets it?

Lol, TL;DR: if you are a startup, break every law, agreement, and regulation you can.

I mean you're shooting yourself in the foot HARD if you do this because then people start actually hacking you instead of reporting it.

There is actually a very simple solution to this: publish a Merkle tree of submitted bug reports.

How would that work? Surely the company could make a fake duplicate and show it to you in a Merkle tree as "proof"? They literally make up every node in the Merkle tree after all.

They could publish a hash of the whole existing tree regularly.

Exactly. After every submission in fact.

Well no, they’d have to publish all reports which most companies don’t like doing because it makes them look very bad.

They wouldn't have to, by the nature of how a Merkle tree works.

Could you elaborate precisely how this scheme works to stop a company claiming reports are duplicates when they’re not?

If it was a duplicate, they would be able to show that the hash of the duplicate report was already in the tree. For more information you might want to read about Merkle proofs, for example here: https://www.quora.com/Cryptography-How-does-a-Merkle-proof-a...
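For the curious, here is a toy sketch of that scheme (illustrative code, not production crypto): the company periodically publishes only the root hash; proving a report was already in the tree means revealing that one report plus a logarithmic number of sibling hashes, not the whole set of reports.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Root hash of a list of leaf hashes (duplicating the last on odd levels)."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes needed to recompute the root from leaves[index]."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sibling], i % 2 == 0))  # (hash, leaf-was-on-the-left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf: bytes, proof: list, root: bytes) -> bool:
    acc = leaf
    for sibling, leaf_was_left in proof:
        acc = h(acc + sibling) if leaf_was_left else h(sibling + acc)
    return acc == root

reports = [b"report: XSS in search", b"report: IDOR on /invoices", b"report: SSRF"]
leaves = [h(r) for r in reports]
root = merkle_root(leaves)       # this root hash is all the company publishes
proof = merkle_proof(leaves, 1)  # proves the IDOR report was in the tree
print(verify(h(reports[1]), proof, root))  # True
```

A "duplicate" claim then amounts to producing such a proof against a root published before the second report arrived.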

I wish I had a knack for this type of work. That's quite a bit of cash. I do feel I am a competent software engineer, but understanding data structures and algorithms doesn't necessarily correlate to one's ability to identify security vulnerabilities.

> understanding data structures and algorithms doesn't necessarily correlate to one's ability to identify security vulnerabilities.

No, but it does suggest that you're likely capable of learning security work. Just like your data structure and algorithm knowledge didn't come for free, nobody is born knowing how to find security problems. You need to work for it.

What's a way to learn security work? Genuinely curious.

One of the best introductions to the field is going through overthewire’s bandit vulnerability games. https://overthewire.org/wargames/bandit/

They have 30+ levels where you ssh into a server and attempt to find some type of vulnerability. They start out very easy and get tough quick. It’s very eye opening to see the types of exploits that exist.

They also have a set of challenges aimed at serverside web security. http://overthewire.org/wargames/natas/ I went through the web challenges last year and they helped a ton in my web dev roles.

> One of the best introductions to the field is going through overthewire’s bandit vulnerability games.

Never knew about these. I'm visually impaired, so a text-based system like this appeals. Thanks!

> One of the best introductions to the field is going through overthewire’s bandit vulnerability games.

Out of curiosity I visited your first link and played the first dozen+ levels. It's just been bash-fu and occasional man reading/googling. Judging by the subsequent level instructions I went through, there didn't seem to be much more in there. I'm like, if you really want to learn more about shell commands, there are man pages. Admittedly, a game is arguably a good way to tutor a lazy reader. Still, did I miss anything else in there by not finishing the game?

Bandit is just the beginner intro-to-shell series meant for pure beginners to Unix; you didn't miss anything. All of the other games on there are actual wargames for learning about security.

In addition to other methods, never underestimate the importance of breaking stuff for yourself. Teenagers are often very good at this (partly because they have the free time and they don't care about the consequences).

Besides that, lower-level knowledge of how computer systems work is always worth studying up on.

I followed a couple of courses at the VU University Amsterdam. I'll tell you what I've learned about security; it gives you a couple of terms to type into a search engine, at least.

There are 3 courses they give for it:

1) Computer & Network Security

2) Binary and Malware Analysis

3) Hardware Security

The lower-level it gets, the better they are at it. Each course costs 1200 euros for non-EU students. I recommend it.

I learned about (in random order):

- Rowhammer (I hope memory vendors will fix this)

- Cache attacks (I hope Intel will fix this)

- Stack smashing

- String buffer trickery in C

- Spoofing IPs

- DNS cache poisoning

- Using machine learning to fingerprint things

- Dictionary attacks for password cracking

- Portscanning

- Cold boot attacks

- Spotting vulnerabilities in C code

- Reverse engineering binaries with IDA Pro and knowing x86 and x64 assembly

- Taint analysis

- Instrumenting binaries with PIN

- Using SMT solvers to crack passwords in binaries
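Of the topics above, the dictionary attack is the easiest to demo with nothing but the standard library (a toy sketch with a made-up wordlist; real cracking uses tools like hashcat and far larger lists):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Return the word whose SHA-256 matches target_hash, or None."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

# An unsalted hash "leaked" from some hypothetical database:
leaked = hashlib.sha256(b"hunter2").hexdigest()
guesses = ["password", "letmein", "hunter2", "qwerty"]
print(dictionary_attack(leaked, guesses))  # hunter2
```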

Is it remote and in English?

No, you have to be there.

Apart from the standard methods mentioned by other commenters, you can also play CTFs (and read past CTF writeups). The problems in CTFs are not necessarily reflective of the vulnerabilities in modern software, but it's a fun way to learn a lot of the tools and mindset involved in real security research.

I'm not a security expert by any means, but based on learning other niche-ish areas of technology, the following probably exist for security:

* Books


* Lecture notes, slides, and assignments from university courses

* Subreddits, Quora topics, etc

* Prominent community members you can follow on Twitter

In my experience, security has far fewer of those resources. Most of the information seems to be shared through word of mouth, conference presentations, and blog posts.

Much of the information is also just RTFM. I don't think it's a stretch to say that security is a lifestyle: if I read the documentation of an API, more often than not I'll wonder whether something in it can be abused. Or when trying to register for health insurance, the password field required special characters, so I set my password generator to include them, after which the form broke, and so I investigated and found that I could inject scripts there. It's just stuff I come across when I'm not even trying.

Word of mouth, chat groups where things are shared, conferences, blog posts... yes, those are resources. But it's also just a whole lot of curiosity and poking at systems.

And experience. Knowing what to look for, even if it's just the HTML source of a web page, is rather important in the first steps of breaking the system. How do you learn what to look for? Well, certainly there are blog posts and even playgrounds with virtual systems and components one can have a go at.

Actual security expert^W dude here. On the topic of breaking/hacking things, I never read a book, followed a MOOC, or studied university course material, and I doubt most of my peers do that either. Which is not to say I don't use those resources, but they're for other topics like software development, system administration, or non-fiction books like Predictably Irrational[1].

The only security-relevant subreddit I'm subscribed to is /r/netsec, and sometimes interesting things come by, but I don't use it a lot. HN is more useful for (context about / following) big security events than netsec. Perhaps /r/sysadmin is also fair to mention, but that's more to see what's hot in sysadmin world (and get their perspective on breaking security news) than to learn about security.

Instead of Quora, I use the IT Security StackExchange site[2]: answering questions makes me dive into topics just a little deeper than what I already knew, and I always come out knowing a few more useful details. The site has some really hardcore security people who will typically find any mistakes in your answers as well. I'd recommend that site a lot for learning, whether through asking or answering questions (though with answering, perhaps it's more to deepen knowledge than to get into the field), or even just for getting correct answers to security questions like "What are the optimal WPA3 settings for a home router?"

So then, how does one get into security? Most people I know just started breaking things and noticed that others usually found it useful if we told them about it. After a while you'll have seen most of the common issues. Add to that some more structured materials like the OWASP top 10 and similar resources, and now I feel fairly confident that my reports are not just a haphazard collection of what I came across in previous years, but that I can actually give a reasonably complete assessment of the security of a system.

I don't know why the security field doesn't have as many structured resources as other fields. Maybe the field is just too small compared to how fast it moves? Or maybe security people are, y'know, as breakers of other people's systems, as hackers, as those who outsmart the people who made the system... maybe we want to be different and not follow norms by studying the normal way? And most of us just started doing it for fun before it became a profession, so few people would use the resources even if they were there? I'm just speculating.

[1] https://en.wikipedia.org/wiki/Predictably_Irrational

[2] https://security.stackexchange.com

Ironically, it's being curious on how to break something or how something that is for "A" can be used for "XYZ."

Though what you hear about more often are misconfigurations -- which are valid, but that's more about execution vs. truly finding something wrong.

I learned a tremendous amount from Root-Me [1], it has a strong community and is _often_ updated with new challenges.

1: https://www.root-me.org/?lang=en

no one mentioned it but the secret sauce is to have a "thing" for breaking and exploiting things. whether it's good or not is yours to decide. but here's my experience...

talented hackers, white, gray, black, whatever, excel at breaking and exploiting things. and people. and i have always struggled with that...

in college i took a computer security class. my class had the team ranked #1 in Maryland (US) among the up-and-coming groups of hackers. what i saw them do is always, and i mean always, think of ways to break things. i mean, well, it's broken a tad bit, adding this or that will fix it? no! i saw them exploit every little tiny thing!

i wanted to "fix" things. i wanted to be a "good" programmer. i was like: "oh they didn't do this, what should they have done to make more secure?". a good hacker was like "oh they didn't do this, what can i do to exploit it?"

i hope my little experience conveys to you how they think, or at least what i saw first hand while taking that class. for me it's hard; i wanted to fix things. they wanted to break things. i didn't fail my class. i wasn't good at it either...

but i admire them and i am still amazed by what these people can pull off.

Beware survivorship bias, I wonder what the average hourly rate of bug hunters is?

Trail of Bits has a nice summary[0] on that (they're discussing this[1] book).

> As productive as the top 1% are, their earnings are equally depressing. The top seven participants in the Facebook data set averaged 0.87 bugs per month, earning an average yearly salary of $34,255; slightly less than what a pest control worker makes in Mississippi.


[0] https://blog.trailofbits.com/2019/01/14/on-bounties-and-boff...

[1] https://mitpress.mit.edu/books/new-solutions-cybersecurity

The Trail of Bits piece kinda ignores the amount of time invested however: https://www.techrepublic.com/article/bug-bounty-programs-eve...

I feel like there are probably a lot of easy targets available. Look at the network requests between the client and server and see if there is anything that looks like it's not validated. If you see IDs, try changing them and see what happens. Quite often it seems that the back end just trusts whatever the client sends, especially if it's a mobile app or an SPA, because new devs seem to think the API is only visible to them.

From reading some of these hacks on people's blogs, it seems like quite often they just man-in-the-middle a mobile app and find out the API provides way more info than should be shown to the user, and the UI hides it.

That is indeed a lot of money.

Not sure why there are not more people doing it. I thought about it as well for years but still don't do it.

This is 1MM over 3-4 years, right? $330k is good money, but it's also in the ballpark for gifted vulnerability researchers in SFBA.

But no one is going to pay $330k to a 17 year old with no experience

Well, they did, right? So that doesn't seem true.

This is the usual salary vs. contractor thing.

You can almost always make more as a contractor because you're shifting risk from the company onto your LLC. They pay for a la carte results instead of paying for an employee who could hypothetically deliver results.

They did because he found a niche where you get paid for results, and there's no pre-vetting.

There's basically no job like this. Any form of pre-vetting, even just a face-to-face, would have excluded him.

By "no one" crapbone meant no single employer. And no single employer paid Santiago the money. It was paid by a group of companies each paying bounties for different bugs.

And a pretty good income for a 13/14-year-old

The person in the article started at 16 and is now 19.

Even so, it's an amazingly good income for a teenager. Live with your parents, and you're a millionaire before you're 20. Live anywhere outside Silicon Valley, and it's still a fantastic income for anyone.

330K USD in San Francisco is much, much less than 330K USD in Buenos Aires

People don't even conceive of the difference: in Buenos Aires you can rent a great house in a great neighborhood for 800 USD per month; in San Francisco you get a shared room where 3 other people live for that much, IF even that. In SF you spend at least 5 dollars going anywhere and back using public transport; in Buenos Aires, $2 is more than enough to go to the opposite side of the city and back.

I know it is that way, but I don't understand it. It always seems to me like it just indicates that the exchange rate is wrong: clearly I can buy more stuff if I convert my money to pesos and spend them there, so the peso is just undervalued relative to the rate we get per euro.

Could someone recommend a website or blog post that explains this? (Or is it a simple enough explanation to fit in an HN comment without going hugely off topic?)

It's related to Purchasing Power Parity [1] and a good example of that is the Big Mac Index [2]. Basically, even if you adjust for exchange rate, the same amount of currency can buy 2 apples in one country and 4 in another. This should not be possible in a globalized market because of the Law Of One Price [3]. However, that only really applies in the long term, for buyers with perfect information (i.e. full knowledge of all price/quantity options), and for goods that are tradable. Land is not tradable internationally. You can't just move 1000 sq ft. from Argentina to US. Same with labor e.g. people who speak a specific language or perform a specific skill. Add to that local taxes, transportation, and energy costs and you can see why the same apple costs more in a different place.

Gas stations next to each other but divided by a state line in the US have different prices. Taco Bell sells the same burrito for different prices. The same factors apply internationally too, nothing to do with exchange rate.

Hope this was as ELI5 as necessary for HN-level discussion.

[1] https://en.wikipedia.org/wiki/Purchasing_power_parity

[2] https://en.wikipedia.org/wiki/Big_Mac_Index

[3] https://en.wikipedia.org/wiki/Law_of_one_price
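The Big Mac index boils down to one division. With hypothetical prices (not the Economist's actual figures), the implied PPP rate versus the market rate shows whether a currency looks "cheap":

```python
# Hypothetical prices, for illustration only.
price_us_usd = 5.50     # Big Mac price in the US, in USD
price_arg_peso = 120.0  # Big Mac price in Argentina, in pesos
market_rate = 40.0      # pesos per USD (hypothetical market exchange rate)

# The rate that would equalize Big Mac prices in both countries:
implied_ppp = price_arg_peso / price_us_usd
# If the implied rate is below the market rate, a dollar buys more Big Macs
# in Argentina than at home -- the peso looks undervalued in burger terms.
undervaluation = (implied_ppp - market_rate) / market_rate
print(f"implied PPP: {implied_ppp:.2f} pesos/USD, {undervaluation:.0%} vs market")
```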

> Gas stations next to each other but divided by a state line in the US have different prices

Hell, gas stations divided by a street have different prices. In one case I saw, the one you could see from the freeway was +$0.50 per gallon compared to the one you couldn't see from the freeway.

Interestingly enough, I have seen gas stations very close to each other with very different prices. This was close to an airport, so I assume the idea is to rip off tourists who don't have time to search for the cheapest fuel before returning their rental cars. At least that was the only explanation I could find that didn't involve conspiracy theories (Catania, Sicily, Italy).

So why haven't you done that, one of the times in the past you've thought this?

All the reasons you haven't are why everyone else doesn't either, so that's why it's the proper exchange rate.

What makes you attribute that to exchange rate when you can do the same thing with different parts of the US? $800 for a nice house in a nice part of town isn't that far off from prices where I live (Cleveland).

In the case of housing and transport, it's easy to see why prices can be very different in different parts of the world: cheaper housing basically just means that not as many people (relative to the number of houses available) are willing and able to spend a lot of money to live in that location; transport prices probably differ for a bunch of reasons, but the main point is that there's no reason they should converge, as transport within city X is not a possible substitute for transport within city Y.

For other goods, you should be surprised if the price discrepancy is one that you really could exploit for significant profit (after accounting for shipping, import/export restrictions and taxes, and so on) -- but otherwise, I don't think it's a very strange phenomenon. Prices will always be set somewhere in the overlapping region between the cost of production (and distribution, and taxes, minus any subsidies, etc.) and the amount that customers are willing and able to pay. Both of those amounts can vary pretty dramatically from place to place.

Rental housing is a market and just like other markets it can be modelled using the economics described by the demand curve.

The demand curve maps the relationship between supply, demand and price.

It says that for high demand and low supply the price is high, for low demand and high supply the price is low and over time supply, demand and price will find an equilibrium.

So if you consider the housing market, on the supply side you are looking at a constrained resource (i.e. it is constrained by the land available to build).

The demand side will be driven by the numbers of people looking to rent and that will be driven by many other factors like work prospects, quality of life, crime rates etc etc.

So for places like SF there will be great demand for that limited housing which means the price (i.e. the rent) goes up.
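As a toy illustration of that equilibrium (linear curves with made-up numbers, nothing like real housing data):

```python
# Toy linear supply/demand model: find the price where quantity
# demanded equals quantity supplied. All numbers are invented.
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    # demand: q = demand_intercept - demand_slope * p  (falls as price rises)
    # supply: q = supply_intercept + supply_slope * p  (rises with price)
    p = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    q = demand_intercept - demand_slope * p
    return p, q

# High demand against a fixed (perfectly inelastic) housing stock:
# supply_slope=0 means no new units appear no matter the price, so
# the price alone has to do all the adjusting -- the SF situation.
p, q = equilibrium(demand_intercept=1000, demand_slope=0.2,
                   supply_intercept=500, supply_slope=0.0)
# -> p = 2500.0, q = 500.0
```

Doubling the demand intercept in that sketch doubles-plus the price while the quantity stays pinned at 500, which is the "constrained land" point in one line of arithmetic.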

you could probably still make that money working remotely with a sfba company tho

True, but consider that 70% of rentals in SF are rent controlled. It takes a couple emails from FB groups or Craigslist to find a good living situation for way below market rate.

Case in point: my friend that worked at Waymo paid $900/mo for a room in a house in Lower Haight...

$900/mo isn't necessarily below market rate. I lived by central park in manhattan for $1000/month, and I slept on the loft above my clotheshangers because it was basically a closet.

This is for sure below market...it’s been rent controlled for almost 10 years. Huge room in central Lower Haight. That place should go for at least $5500 between 3 people in a 3 bed/2ba.

From some Googling, seems like a typical software developer salary in Buenos Aires is around $10K USD/year. So this is dramatically more, ~30x that.

But you also have the freedom to live in lower cost of living areas, making the money go further.

SFBA income tax, state and federal, would leave about 55% of that, then, so, as usual, California is expensive.

Income taxes are more like 30% of that. It’s expensive, but not that expensive. Unless you’re counting rent in that, but even then I think half is pushing it.

Actually 39.04% according to this website. Still, 30% is closer to correct than 55%.


$330k per annum?

I wonder if anyone has "cobra effect"ed the bug bounty world yet.. whereby they leave vulnerabilities in their code in order to obtain a bug bounty.

From 23 years ago!

How did you find such a specific reference lol

Someone showed me this strip ~10 years ago when the place I was working briefly instituted a similarly counterproductive incentive policy. I just Googled "Dilbert code me a minivan"

I have never once heard of a bug bounty being paid to a former employee, let alone to the same person who wrote the code. It strikes me as something that is likely to do damage to one's reputation far out of proportion to the few thousand dollars one might hope to gain.

I received a bug bounty from Mozilla after leaving, in a browser component I previously worked on. I didn't write the vulnerable code though.

On the other hand, Google refused to pay me a bug bounty for a bug I found in the same component, in part because I used to work on it when I was at Mozilla, even though I didn't write the vulnerable code.

I'm very happy for the kid and like the idea that these programs are available but does this incentivise companies to effectively outsource their bug finding?

From a purely fiscal point of view, why hire expensive full time staff to go digging when you can just throw a few shekels at stuff as it comes up?

This is similar to Katie Moussouris's argument from the article:

> Moussouris, who created the bug bounty at Microsoft, warned that if badly implemented such programmes could see talent leaving organisations in favour of pursuing bug bounties, and thus damage the talent pipeline.

I've seen her argue this on Twitter before - the argument IIRC is that bug bounties should always pay less than getting a job helping the blue team / writing secure code in the first place, otherwise the incentives are all wrong. It's great that you know about bugs, but it would be better not to have them. And, also, there's a bit of a prisoner's dilemma involved in that you don't want to let the rest of the industry drive up the expected payouts of bug bounties beyond the expected salaries of secure developers, but you also don't want to lose out on vulnerability reports either.

Step one: Work at a company with a bug bounty program

Step two: Introduce subtle vulnerabilities

Step three: Claim bug bounty under a pseudonym (or just get someone else to claim it)

That's actually a great spin on the old concept of subversion. I wonder if anyone is doing it. It should be easier for C apps, where someone could plausibly claim they didn't know about a specific kind of undefined behavior.

No serious company makes that choice, which misunderstands what companies use bug bounties for.

Google, for example, pays out bug bounties regularly, but their main security expense is wages/etc of security professionals, and that is probably in the neighborhood of a billion dollars a year.

Because a bug bounty does not guarantee that people actually look into your code. Of course some white hats will invest time upfront, but there is no guarantee, and the absence of paid bounties is not evidence of the absence of bugs.

Also, a bug bounty usually limits the scope a lot more than a typical pentest does, i.e. no testing of infrastructure security, internal networks etc..

Lastly, if your bug bounty is high enough to make highly skilled people spend time finding your bugs, it's probably cheaper to just hire some security folks yourself and prevent excessive payouts (by preventing bugs).

Of course all of this does not stop some C-levels from using a bug bounty as replacement, but the issue is not as clear cut and especially the last point should even make sense to non-technical people.

So what? Nearly every advance in technology and management makes some job position irrelevant one way or another. If we want to tackle the ultimate consequence of leaving people without a living wage, then we should tackle that problem directly instead of complaining when companies try to use cheaper alternatives to solve their problems.

Why not both? The bounties are great for finding things your security team might not think about or might consider low-pri, but having a dedicated team is important to make sure you've got the core use cases covered.

Believe me- these programs don’t even scratch the surface of the amount of security vulns companies have hiding in their code base. Having a good bug bounty program, internal product security team and a continuous third party audit process are all parts of having a good security posture.

When your company is as massive as something like google and you have nation states, groups, and individuals literally trying to hack you every few minutes, it’s definitely more financially viable to have a full time security team

Outsourcing bug finding is only a retroactive solution, not a proactive one

Only the biggest companies can really afford to have the scale and skill available from the vast range of people working for bug bounty money - and as one of the other posters mentioned - you still have to have internal staff to confirm and patch the bugs. It's almost like the best side of outsourcing, where the outsourced talent is driven to do their best work because otherwise they'll never get paid.

Then again, I can imagine some teams would get utterly spammed with inane, wrong or non-bounty-able reports, which could be an issue.

You still need security engineers to operate the program.

This is great, especially since somebody with his skills could probably earn much more working for shady "security" companies.

I wonder if he's come up with some automated tooling to find them, seems like this might be the best way to monetize if so.

I found this, describing his specialty:

"Lopez specializes in the identification of Insecure Direct Object Reference flaws also known as IDOR vulnerabilities."

Then this, explaining IDOR: https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...

It certainly sounds like the sort of thing you could automate to a pretty big scale.

I don't do web apps. So, all I could do is DuckDuckGo for some terms that might get results. Found a claim about using automated, open-source scanners to find IDOR and a lot of other stuff here:


They do note that IDOR poses some difficulties, with it needing heuristics that might produce a high rate of false positives. The tools are in the references section toward the bottom. Try them out.

Do you have any suggestions on how? I do not doubt it can be automated, but it is one of the few vulnerability types where I do not have an intuitive understanding of how it should be done.

It seems hard to automatically understand the difference between an IDOR vulnerability in an HR system (from your link), salary.php?employee=EMP-00000 where you can change the ID to another employee's, and article.php?id=123 on a newspaper site.

Would it have to understand the difference? You could do pretty well with a crawler that detects such fields (by checking a simple increment, say) that then spits out URL/field combinations. Then you just need to scan through those and follow up on the ones that look like security holes.

You could focus on links that aren't in Google's cache, or links that match some numerical pattern in a set cookie, etc. Cookies are probably a whole thing on their own in this space too.
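A minimal sketch of that increment-and-compare idea in Python. The endpoint names are hypothetical, and the `fetch` callable is injected (e.g. a urllib or requests session carrying one user's cookies); a real scanner would also diff response bodies and verify the second object actually belongs to another user before flagging anything:

```python
import urllib.parse

def bump_numeric_params(url, delta=1):
    """Return a copy of the URL with every purely numeric query value
    incremented by `delta`; non-numeric values (like EMP-00000) are
    left alone, which is exactly where this heuristic falls short."""
    parts = urllib.parse.urlsplit(url)
    params = urllib.parse.parse_qsl(parts.query)
    bumped = [
        (k, str(int(v) + delta)) if v.isdigit() else (k, v)
        for k, v in params
    ]
    return urllib.parse.urlunsplit(
        parts._replace(query=urllib.parse.urlencode(bumped))
    )

def looks_like_idor(fetch, url):
    """Very rough filter: bump the ID and see if we still get a 200
    with the same session. `fetch` must return an object exposing a
    `.status` attribute (as urllib's HTTPResponse does)."""
    return fetch(bump_numeric_params(url)).status == 200
```

So `bump_numeric_params("https://x.test/salary.php?employee=100")` yields `https://x.test/salary.php?employee=101`, while the EMP-00000-style IDs from the parent comment pass through untouched, which is one reason the heuristic needs human follow-up.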

I definitely do have respect to this guy named Santiago Lopez, while I'm literally twice as old as him.

What does the second part of that sentence have to do with the first?

Mx Armamut is perhaps not a native English speaker. Let us suppose that the first part of the sentence is a statement, and the second part is the rationale - then, by way of conjunction, a native speaker would probably choose something like "because", or similar.

I expect there are languages where a word that translates neatly into "while" would be most appropriate, while actually meaning something more like "because". It's been a while since the last time I had to speak any foreign, but I remember stuff like this being very common - a large part of the reason I refuse to do it any more.

I’d say ‘while’ is perfectly fine for a native speaker in place of ‘because’. You could leave ‘while’ out of the sentence, the comma alone implies ‘because’. The reason for the statement is implied by the two clauses sitting side by side, many connective words would do equally well, right? The only thing that suggests a non-native speaker to me is ‘respect to’, which is still sometimes correct. Respect to charitable interpretations, and respect to multi-lingual people.

Yes =) both @tom_ and @mattigames are right. I'm not a native English speaker, pardon my English usage. What I mean is, this guy's achievement shows his self-motivation, dedication and knowledge level at a relatively young age. (Maybe for some, he is old enough.)

He had half the time to learn compared to many older people who also live off bug bounties?

This is very cool, and props to the kid.

But damn, I wonder what Srinivasa Ramanujan or Norbert Wiener would have focused on if they were 13 now.

And maybe that's just whataboutism.

Breaking SSL worldwide probably.

I'm still waiting on YC to set up a bug bounty program after having two verified reports :-)

I would love to see a breakdown of which company paid the most :)

Shopify, Uber used to be at the top of the list.

not saying in this case, but i’ve heard it can often be more effective to group bug reports into more or less accounts. obviously privacy is a huge concern in a bug bounty program and i find it absurd how much the vendors charge small companies

Just imagine the amount of cash if he went all black :p

I like the picture at the beginning of some CLI novice trying to git push his home directory

That reminds me of my idea to create "tech" stock imagery that isn't a joke

So the opposite of what the Hacker Dojo did around 2012/2013: https://slate.com/technology/2013/02/hacker-photos-how-hacke...

Somehow, I think most stock images are crafted from a position or vantage point of objectively stupid cluelessness. No matter the discipline depicted in stock photos and clip art, it always conveys a total detachment of context, campy corny sentiment and general lack of expertise.

Not so much by accident, but by virtue of catering to a clientele disinterested in accuracy, and with the awareness that all participants (both buyers and sellers) are motivated to reduce costs at all levels of the creative process.

If you'd like I can whip up a GUI interface in Visual Basic. You can use that to track an IP address.

I like the random nucleotide sequence in the background. super relevant.

The terminal could have opacity at 75% and then have some bioinformatics stuff in the background in another application (browser, whatever).

I want to believe.

Who doesn't `git status` their home directory sometimes

Not me, but I have :wq aliased to exit in bash.
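For anyone who wants the same muscle-memory safety net, a .bashrc fragment along these lines should work (bash permits colons in alias names, unlike strict POSIX sh):

```shell
# In ~/.bashrc: let vim-style exits also leave the shell.
alias :q='exit'
alias :wq='exit'
```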

I do that...and still get embarrassed even though no one is watching.

Personally, I call that a "Brain Fart".

well, the username is "dau" which in good old German computer slang stands for Dümmster Anzunehmender User -> dumbest assumable user.

The double `ls` hits home :D (more like `git s` nowadays, also <esc>:w on repeat).

Woo! We need more capable bug bounty hunters. So many reports are very very very bad.
