

$9000 bounty paid for Python bugs - butwhy
https://hackerone.com/reports/55017

======
dsacco
As a security engineer, I'm really happy to see news like this enter the
mainstream more and more on HN. These bounties are well deserved.

For those of you who would like to try and earn bounties like these, I
recommend the same books I always do:

1. _The Art of Software Security Assessment_

2. _Gray Hat Python_

3. _The Web Application Hacker's Handbook_

This is your ethical hacker starter kit. The first two are good for
foundational knowledge and will show you how to find the bugs worth something.
The third book is specialized for web applications, which is still great but
not quite as lucrative.

You will also want to check out CTFs, _Cryptography Engineering_ and the
Matasano Crypto Challenges.

If you're looking to join a top tier security firm, Matasano is great for
those who like offices and Accuvant (my employer) is great for those who like
working fully remotely.

~~~
exDM69
These books are probably great but if you look at some of the best/worst bugs
found in the past few years, there's one tool that seems to come up over and
over again: afl-fuzz.

I reckon that the easiest way to get started with bug hunting might be to just
set up afl-fuzzing on some trivial code that has lots of potential for going
wrong. Stuff like JSON/HTML/HTTP parsing in C is a great candidate for finding
integer overflow or buffer overrun bugs (some of the bugs in this list are
exactly that). Throw some CPU time at the fuzzing, and pretty soon you should
have a handful of repro cases.

The nice thing about these is that the bugs could be very trivial to fix but
have enormous security consequences.

If someone wants a suggestion for a project to try some fuzzing against, the
new h2o/libh2o web server and its HTTP parser component (picohttpparser) look
very well written but not very well tested (only a handful of hand-written,
hard-coded test cases). I'm pretty sure there's one or more potentially
disastrous bugs in it.

edit: almost all bugs in this list seem to be integer overflow bugs. That
hints that these issues were found using a static analysis tool like
ClangAnalyzer or Coverity.

~~~
dsacco
Please don't start this way. Michal Zalewski didn't write afl so it would be
used without deep technical knowledge. Despite how powerful it is, it's
actually _not_ for beginners at all.

As a general rule to start in security work, you will want to know how to
program in at least one language competently, then move on to theoretical
understanding of vulnerabilities, then finding them painstakingly by hand, and
finally by automation.

It's perfectly fine to use afl (or other similar tools), but understand that
using an automated tool without first knowing how to do things by hand and
grokking the theory will stunt your growth and make it hard to progress very
far.

Pop open a weakened VM and exploit a few buffer overflow bugs the old-
fashioned way before you use a fuzzer. With code review, learn the nuances of
each language before you automate your audit. You shouldn't have to rely on an
automated tool to find out that register_globals is enabled in PHP (as one
example).

~~~
dalke
I realize there's a philosophy that it's best to start from the lowest levels
and work one's way up. It's one I can empathize with, in spirit, though I
disagree.

The same logic means that people shouldn't use static code analysis tools, or
valgrind, or even debuggers, until they acquire deep technical knowledge. Yet
I think all of these tools help reinforce the principles.

_If_ someone starts with the fixed and unwavering goal of security analysis
in mind, then perhaps I can agree with you. If however someone is only curious
about security analysis, and finds that spending a year to "grok the theory"
is a high barrier, then even clumsy use of semi-automated tools may provide
more concrete incentive to learn the underlying skills.

While I don't believe you are correct, another question is: how many white
hats do we end up with? Even if it takes 2 years to learn security skills by
using automation, and only 1 year without automation, if after 10 years there
are 500 following your path, while 10,000 following my path, then that's a net
gain for the good side, yes? (I assume that there is such a thing as "good
enough", and that it's relatively stable. Obviously if there is only a market
for 500, and your 500 are always better than my 10,000 then that changes the
dynamics.)

Finally, it's also good to have even the script kiddies on the side of good
than the side of lolz.

~~~
dsacco
_> > The same logic means that people shouldn't use static code analysis
tools, or valgrind, or even debuggers, until they acquire deep technical
knowledge. While I think all of these tools help reinforce the principles._

This probably sounds controversial, but I agree. I don't think you should use
Valgrind until you understand how Valgrind works. This doesn't mean early C
programmers shouldn't use Valgrind - you can read the documentation and theory
behind Valgrind in a day. But definitely _do_ that. I think maybe "deep
technical knowledge" wasn't the right term for me to use. A better term would
be "technical understanding" - know how it works, and know how to find the
different classes of bugs it can find, but you do not need to be capable of
writing the tool yourself.

Now let me clarify this, and my earlier point about afl - I think you _should_
use them, and generously, and pretty much always once you know what you're
doing. But if you use them without understanding the fundamentals, you will
get caught up in false positives/negatives. Always use it as a supplement, not
a crutch.

However, I agree with what you're saying about someone's level of dedication
to security analysis. Using afl is better than not using afl, so if you're not
a dedicated security guy, then you're definitely right that someone should use
it.

I would still caution anyone that using an automated tool without fully
understanding how it works will lead to an incomplete picture of the
application's security posture.

~~~
tptacek
This idea of people not using tools until they understand how they work (or in
the Matasano shorthand, until they can implement them, in "build your own
light saber" fashion) is I think very true of professional testers, but not
true for developers.

If your full-time job is going to be about developing a radar for assumptions
that developers make and a deviousness about breaking those assumptions, tools
--- and most especially opaque tools --- totally hamstring you. We both know
plenty of testers that develop a kind of blindness from relying on tools too
much.

But I'm guessing most of HN doesn't really understand the mindset that's
involved in pentesting full-time. I certainly wouldn't tell a developer that
they should avoid valgrind.

------
taspeotis

        $9000 bounty paid for Python bug (hackerone.com)
    

There's 10 of them, so $900 paid out for each one. It's a job well done
discovering and disclosing them, and the payout is generous nonetheless, but
the title is wrong.

~~~
butwhy
Well the issue focused on one key point (integer overflow).

As for your speculation that it is $900 per bug, that is wrong, too. The
minimum payout per bug is $1500.

I don't really care about the semantics, so you'll have to deal with the
title.

~~~
stingraycharles
An easy fix would be to rename "bug" to "bugs", then everyone should be happy.

------
inglesp
Where's the $9000 come from?

~~~
tanishalfelven
Software companies these days pay researchers bounties for finding security
bugs. Read more here: [http://blog.codinghorror.com](http://blog.codinghorror.com)

