
Ask HN: How do security researchers know where to look for vulnerabilities? - vedanshbhartia
Also, in a relatively open-ended project, like Google Zero, how do people there decide upon a course of research?
======
patio11
This is a deep topic. Here's an answer to it, which might not be everyone's
go-to answer:

Consider the case where you know either the software or the system which is
the target. Consider also the case where your goal is "get a reliably
exploitable vulnerability while coming in under budget" as opposed to e.g.
"enumerate a sufficiently broad swathe of vulnerabilities such that a
stakeholder is pleased with your diligence." You will probably prioritize
vulnerability classes which, in your experience and/or that of the industry,
are numerous and high-severity. You will probably also prioritize "the joints"
of the system, because (if you've done software development professionally)
you know that handoffs between scripts, teams, servers, processes, etc etc are
never as well-implemented or well-tested as something which is deeply within a
particular set of borders.

Thomas has posted a list of features which are high-probability candidates for
game-over vulnerabilities several times on HN; see second half of this
comment:
[https://news.ycombinator.com/item?id=7936921](https://news.ycombinator.com/item?id=7936921)

Research prioritization in a wide open space is research prioritization in a
wide open space, and is broadly a hard problem. Broadly speaking you look for
leverage: how does the set of things you are capable of causing get to a
disproportionate (potential) impact? If I were hypothetically in Google Zero,
I'd be looking primarily for widely deployed software which sits in poorly-
understood places, operates at least part of the time on user-supplied data,
and is colocated with terrifyingly sensitive systems. Bonus points if that
software is boring and so hasn't had anyone take a serious look at it in a
while, like e.g. png rendering libraries or request parsing libraries.

------
guessmyname
_Disclaimer:_ I have been working as a security researcher for 5+ years _(web
and mobile apps)_.

I personally use tools that report all the activity that a piece of software
produces, most of which is TCP or UDP connections. Among the tools that I use
are Wireshark, Burp Suite and mitmproxy. I also keep myself updated on
techniques used by other researchers in other areas by collaborating on
different forums and at networking events. I also have several hundred
honeypots distributed around the planet, powered by OSSEC, to collect and
analyze bad traffic.

Less than three years ago I switched to doing vulnerability research on mobile
and desktop software, and a whole new world opened up in front of me. Debugging
HTTP connections is one of the most common tasks, and there are plenty of tools
available out there; I have a small set of tools for network sniffing and
analysis. Going deeper into the software, I frequently do black-box
penetration testing _(which basically means I don't have access to the source
code)_, so tools like IDA, Hopper Disassembler, Binary Ninja and Cutter come
first in my toolset.

> How do people there [Google Zero] decide upon a course of research?

• Someone tips you some information about suspicious activity,

• You are using the software as a regular user and notice something weird,

• You are curious about how a piece of software works, and diving into it
reveals secrets,

• One of your honeypots and/or network sniffers alerts you about unwanted
connections,

• The author of that software requests you to do some penetration testing for
an audit,

• Someone found a small problem in the software and you dive in to try to
find more,

• And more commonly, you are bored and want to pass the time doing something
more boring :D

~~~
paulriddle
I feel like black box assessment is highly inferior to white box or whatever
it is called, when you have access to source code. Huge waste of time for both
company and security specialist. It is only acceptable for ongoing bug
bounties.

Am I wrong? I'm not in the field, so I don't really know. I have lots of
questions. Is it common for security consultancies to do only white box
reviews or this wouldn't be a good decision business-wise? Is it common to
charge for fixes to found vulnerabilities during an audit? What if the flaw is
in open source library?

~~~
sheldorr
Security consultancies will usually do whatever the client asks for, or try
to cater exactly to their needs. This may result in either a white box or a
black box engagement.

Usually the test will be done at a fixed price, with a fixed scope (what they
are/aren't allowed to test). The result of this will usually be a report
detailing the vulns, along with recommended fixes/remediations and sometimes
a 'post-fix test' to check whether the company has successfully remediated
the issues.

White box testing tends to look at the system/application from an inside-out
perspective, whereas black box is an outside-in view. The benefit of white box
is a very thorough assessment of the system, but this will be time-consuming
and expensive. Black box, on the other hand, can simulate the likely attacks
from an adversary and can sometimes be relatively quick, depending on the
system's attack surface.

Hope this helps.

------
pjc50
There's never just one cockroach.

Likewise, once a technique has been successfully used to exploit one piece of
software, there's a lot of mileage in just trying that technique against
everything else.

There's also the "try everything" option of fuzzing; the default tool for this
is "afl-fuzz", which is automated once you've set up the target in a suitable
configuration.

Generally there are three strategies:

\- try to get some executable machine code in from outside and run it (buffer
overrun, use-after-free etc)

\- look at the set of files and data considered "trusted" and put something
untrustworthy in there (XSS, DLL injection, /tmp exploits)

\- attack the hardware (JTAG, power analysis, key exfiltration)
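To make the fuzzing idea concrete, here's a toy mutation fuzzer against a hypothetical parser (the parser, its length-trusting bug, and the byte-flip mutator are all invented for illustration; real-world fuzzing would use afl-fuzz or similar with coverage feedback):

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: parses a length-prefixed record."""
    if len(data) < 1:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    # Bug: trusts the declared length without checking the buffer size.
    if len(payload) != length:
        raise IndexError("declared length exceeds buffer")
    return length

def mutate(seed: bytes) -> bytes:
    """Overwrite one random byte in a copy of the seed input."""
    buf = bytearray(seed)
    pos = random.randrange(len(buf))
    buf[pos] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 1000) -> list:
    """Feed mutated inputs to the target and record which ones crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parse_record(sample)
        except Exception as exc:
            crashes.append((sample, type(exc).__name__))
    return crashes

random.seed(0)
found = fuzz(b"\x03abc")  # crashes whenever the length byte exceeds the payload
```

The point is the loop shape, not the mutator: tools like afl-fuzz add instrumentation-guided input selection on top of exactly this mutate/run/observe cycle.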

------
tuxxy
I like to use GitHub to find vulnerabilities.

I mostly do cryptographic engineering, so when hunting for issues, I search
for things that are usually problematic. For example, search for something
like "XOR encrypt" and you might find someone doing something they shouldn't.
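To illustrate why "XOR encrypt" is such a productive search term, here's a hypothetical sketch (key and message invented) of how a single known-plaintext guess breaks repeating-key XOR:

```python
def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """The kind of 'encryption' you hope not to find in production:
    a short key XORed repeatedly over the message."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

key = b"s3cr"
ciphertext = xor_encrypt(b"Content-Type: application/json", key)

# An attacker who guesses any key-length run of plaintext recovers the
# key directly, because XOR is its own inverse.
known_prefix = b"Cont"  # common protocol text is easy to guess
recovered_key = bytes(c ^ p for c, p in zip(ciphertext, known_prefix))
plaintext = xor_encrypt(ciphertext, recovered_key)  # decrypting = re-encrypting
```

A repo that rolls its own `xor_encrypt` like this is usually worth a closer look at everything else it does with keys and randomness.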

You can also try to find problematic implementations of standards by searching
for those standards and trying to find comments or similar code. You might
find some interesting stuff by searching "ECIES" or "NIST SP 800".

If your goal is to begin research, typically you'd find a problem, exploit
technique, or vulnerability class that interests you. Then you start looking
for places where you might be able to see how people defend against it (if at
all). This is when you start finding issues pretty quickly, since you develop
custom heuristics for the code you examine.

Best tip from me would be to get to know some standards and see if they are
being implemented correctly.

------
h000per
They look everywhere. I think it's a case of survivorship bias, where we only
see when they succeed (via published bugs). We don't hear about the thousands
of times that they failed to find anything.

~~~
Tepix
You're right about one thing (they look everywhere), though I'm afraid the
cases where they don't find anything are in the minority.

~~~
irundebian
They probably always find something, but mostly nothing very interesting or
severe.

------
jt3
Disclaimer: I’m a security engineering consultant, focusing on code review.
I’ve been writing software for 15+ years.

Industry knowledge and following trends are useful. Following CVEs reveals
problem areas in software. Some industries or entities may not devote much
time to security review, leading to buggy code. Unfortunately, some see
security only as an expense.

Looking for vulns in locations where others have not or are unlikely to look,
due to effort or domain knowledge requirements, can be very fruitful.

Directed fuzzing can yield great results. Any sort of parser in a lower-level
language like C or C++ is a good target. Spend manual review time on areas
that are unlikely to be reached by the fuzzer. Keep in mind fuzzers aren't a
silver bullet, though, and won't catch everything.

Running static analyzers or grepping for common errors can often find quick
hits.
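A minimal sketch of the "grep for common errors" approach, assuming the target is C source; the pattern list here is illustrative, not exhaustive:

```python
import re

# Function calls that frequently turn into quick hits in C code review.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets|system)\s*\(")

def scan_source(source: str) -> list:
    """Return (line number, line) pairs that mention risky C functions."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if RISKY_CALLS.search(line):
            hits.append((lineno, line.strip()))
    return hits

sample = """
int main(int argc, char **argv) {
    char buf[16];
    strcpy(buf, argv[1]);   /* classic overflow */
    snprintf(buf, sizeof buf, "ok");
    return 0;
}
"""
hits = scan_source(sample)  # flags the strcpy, not the bounded snprintf
```

Real static analyzers do far more (data flow, inter-procedural analysis), but even this level of triage surfaces candidates for manual review quickly.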

Complex specifications often have many errors when implemented. I’ve heard a
few stories of RCE vulns due to buggy X.509 parsers.

Developing a threat model is helpful to find high impact vulns.

Knowledge is also key. Understanding components at the unit and integration
level is a must.

After doing security reviews for a while, you develop an intuition of where to
look. Every once in a while though, you bump into a SQL injection on a login
page, so don’t overlook the simple things.

------
iuguy
One of the easiest ways to find bugs is to look at CVE releases in software
and identify similar software.

For example, if there's a bug in libfoo's ASN.1 structure parsing, then
chances are that any implementation of the same structure parsing is going to
have similar or identical bugs. It might not be the same field, but this
certainly tends to do well as a strategy for finding bugs in libraries, file
format bugs and complex network services.

I can't speak for Google Zero, but from the people I know there, they tend to
look at a broad area of interest, research it painstakingly and then drill
down deep while the bugs drop out. A good example of this is James Forshaw's
work on Windows kernel bugs, which started as looking into the Windows file
structure and alternate data streams and has slowly morphed over time into
walking through Windows' local attack surface.

Again, people I know who have spent far too much time looking for bugs in
specific pieces of software tend to take the deep dive approach as it yields
more interesting bugs. The broad at-scale reimplementation approach finds
bugs, but they're not as interesting.

------
ben_bai

        "Throw a rock, you gonna hit something"
        --- Ted Unangst, about OpenSSL
    

I guess it's true for most software. One bug per 1,000 LoC is pretty standard.

------
mpeg
I haven't done security research for a long time, but the way I did it was to
pick a specific category of vulnerability (say, SQL injection) and try to
exploit it everywhere you can possibly think of, where there is a potential
for input.

Even nowadays, I regularly see people leave their systems open to these super
basic input validation vulnerabilities, because they only think about doing
things right on the surface area. But then they'll have some batch process
that analyses log files as a one-off script, and it's vulnerable if a user
sends a malicious HTTP header or something like that.
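The log-analysis scenario might look something like this hypothetical sketch (the table and function names are invented), contrasting the injectable one-off-script style with the parameterized fix:

```python
import sqlite3

def store_user_agents(log_lines, unsafe=False):
    """Insert the User-Agent field of each log line into a stats table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE agents (ua TEXT)")
    for line in log_lines:
        ua = line.split("\t", 1)[1]  # header value: attacker-controlled
        if unsafe:
            # The one-off-script version: string-built SQL, injectable.
            conn.executescript(f"INSERT INTO agents VALUES ('{ua}')")
        else:
            # Parameterized version: the header stays data, never code.
            conn.execute("INSERT INTO agents VALUES (?)", (ua,))
    return [row[0] for row in conn.execute("SELECT ua FROM agents")]

logs = ["GET /\tMozilla/5.0", "GET /\t'); DROP TABLE agents; --"]
stored = store_user_agents(logs)  # safe path: both rows stored verbatim
# store_user_agents(logs, unsafe=True) lets the 'header' execute as SQL
# and drop the table, so the final SELECT fails.
```

The web frontend may escape everything correctly while the offline script trusts the very same bytes; that mismatch is exactly the "joint" worth probing.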

Another way would be to try and think how a particular thing was written and
figure out ways you can break it. I found plenty of buffer overflow vulns in
custom TCP servers this way, but you can also find less serious things that
let you do things you're not supposed to.

For example, an ecommerce business that would let you add an optional service
charge allowed negative numbers (to deduct money from the order).

Another online shop had test item ids with negative prices in the database.
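A minimal sketch of the check whose absence produced those two bugs (the function and parameter names are hypothetical):

```python
def apply_service_charge(order_total_cents: int, charge_cents: int) -> int:
    """Add an optional service charge to an order total.

    Without the range check, a 'charge' of -5000 is a $50 discount:
    exactly the negative-amount bug described above.
    """
    if charge_cents < 0:
        raise ValueError("service charge must be non-negative")
    return order_total_cents + charge_cents
```

The same discipline applies anywhere money or quantities cross a trust boundary: validate sign and range server-side, never only in the UI.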

------
lowpro
I'm still an intern in the industry, but I have worked for a few places. You
normally start by poking around/recon, looking at where things get complicated
and where systems are connected, because those are often the breaking points.

During recon, if you can find out what tech is being used, you can see whether
it's outdated or where vulnerabilities were found in the past. If you're
doing penetration testing/vulnerability assessment, you're not inventing new
exploits, just using what's already out there and tweaking it. Research on new
exploits is rarer as a job, I think.

See [0] for the steps of pentesting and OWASP [1] for everything regarding
security.

[0] [https://www.cybrary.it/2015/05/summarizing-the-five-
phases-o...](https://www.cybrary.it/2015/05/summarizing-the-five-phases-of-
penetration-testing/)

[1]
[https://www.owasp.org/index.php/Main_Page](https://www.owasp.org/index.php/Main_Page)

Also there is a big security community on twitter where you can see
researchers tweet about a lot of the stuff they're working on right now.

------
corndoge
I recommend "The Art of Software Security Assessment" (Dowd, McDonald, Schuh)
if you want an exhaustive book on the subject.

~~~
wglb
And in addition, A Bug Hunter's Diary:
[https://nostarch.com/bughunter](https://nostarch.com/bughunter)

------
chrisrohlf
I used to teach a training course on the subject (all the course material is
now free at [https://github.com/struct/mms](https://github.com/struct/mms)).
I had a section on 'where to look' for vulnerabilities. I started this section
off with a scenario: "You've checked out the Chrome tree. Where do you start?
OK, you want to find JavaScript interpreter bugs. What's the first piece of
code in Chrome that sees untrusted JavaScript?" It's a trick question. The
answer, of course, is the networking code or the TLS decryption code. But
you'd never go looking there for JavaScript interpreter vulnerabilities. The
point of the thought exercise is to introduce the concept of manual taint
analysis: understanding how to analyze what code paths and data structures
your untrusted inputs could influence or control, and then going from there.
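As a toy illustration of that taint-analysis mindset (the `Tainted` wrapper, the sink, and the sanitizer are all invented here; real manual taint analysis is done by reading code, not by running it):

```python
class Tainted(str):
    """A string marked as attacker-controlled."""

def render_sql(query_template: str, value: str) -> str:
    """Hypothetical sink: any Tainted value reaching it is a finding."""
    if isinstance(value, Tainted):
        raise RuntimeError(f"untrusted data reached SQL sink: {value!r}")
    return query_template.replace("?", value)

def sanitize(value: str) -> str:
    """Crude allow-list; returns a plain (untainted) str."""
    return "".join(ch for ch in value if ch.isalnum())

user_input = Tainted("alice'; DROP TABLE users; --")
safe_value = sanitize(user_input)  # str, no longer Tainted
query = render_sql("SELECT * FROM users WHERE name = '?'", safe_value)
# render_sql("SELECT ?", user_input) would raise: taint reached a sink.
```

When auditing, you play the role of this tracker in your head: mark every source of untrusted input, then trace each path until it hits a sink or a sanitizer.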

------
czuczorgergo
I deal with these problems every day at my workplace, CCLab, in Hungary.

It is really a hard question, because it depends on several variables. I would
say the most important part is information gathering, if you would like to
find a vulnerability in a system. There are several cases where you know that
a vulnerability is present just by checking the version numbers.

If you would like to find vulnerabilities in bigger systems, you should always
search for older, unmaintained functions that might be present. This is how a
security researcher managed to find a critical vulnerability in a Google
service that was probably used by nobody. If I am given the task of searching
for vulnerabilities in a standalone system, I always search for the functions
that are not crucial for the system to work, because they tend to be less
tested.

If you have the possibility to upload files, then you can find a vulnerability
with almost 100 percent certainty, so I would recommend spending a significant
amount of time testing that. Another good indicator of potential
vulnerabilities is when user input is reflected in any manner. If you have
access to the list of components used, always check whether they have any
known vulnerabilities; this can be really helpful input if the vulnerable
features of that component are used by the system. If you can turn on options
that make the system act in a different way, you should always test them,
because most automated scanners will miss vulnerabilities that are only
present in certain cases.
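As a hypothetical sketch of why file uploads pay off so often, here is the directory-escape check that upload handlers frequently get wrong (the path and function name are invented):

```python
import posixpath

UPLOAD_DIR = "/srv/app/uploads"  # hypothetical upload root

def resolve_upload_path(filename: str) -> str:
    """Map an uploaded filename to a path, rejecting directory escapes."""
    # Normalize first, then confirm the result still sits under UPLOAD_DIR.
    candidate = posixpath.normpath(posixpath.join(UPLOAD_DIR, filename))
    if not candidate.startswith(UPLOAD_DIR + "/"):
        raise ValueError(f"rejected suspicious filename: {filename!r}")
    return candidate
```

Handlers that skip the normalize-then-verify step accept names like `../../etc/cron.d/job`, which is why upload features deserve that disproportionate share of testing time (along with content-type and size checks, not shown here).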

------
dpeck
After a while you just get a feel for it.

One of the best heuristics I had when approaching a system for the first time
was to start poking at non-core features that were probably bolted on late:
things like management portals (whether web or console), user customization
settings, anywhere arbitrary files can be fed into the system. Those areas
were usually very fruitful, and what I learned there helped me understand and
contextualize later research and discoveries in the core components.

------
perlgeek
One approach is to think of a class of bug (like SQL injection, command
injection, XSRF, ...), and look for that class of bugs in a large number of
software products.

Another is to focus on components that are of high impact because they are
used everywhere: standard UNIX tools, compilers, shells, OpenSSL and friends,
BIOS, CPUs, common network controllers, disk firmware etc. and analyze them
for anything you can think of, run a fuzzer on them etc.

------
irundebian
Since code audits are mostly conducted as an art rather than a science with
rigorous methodologies, I assume most vulnerability locations are just
derived from experience and "poking around" work.

If you look at public penetration testing reports [3], you'll see that most
have no section on methodology, so it's reasonable to assume there are no
true common standards or bodies of knowledge for finding security
vulnerabilities.

For some application security fields like web application security there are
at least some semi-rigorous catalogs [1,2] which can help you to conduct more
comprehensive code audits or security tests/audits.

As already mentioned, there are tools which can help you conduct more
professional and thorough code audits: static security source-code analyzers,
or dynamic analysis tools (e.g. Valgrind for memory-related bugs, or afl as
an example of a fuzzing tool). These tools focus on implementation bugs;
design weaknesses still have to be evaluated manually.

In my opinion the discipline of software security assessment hasn't grown up
yet, but there is definitely research going on to improve the situation; see
e.g. [4] for a research example on finding bugs statically.

[1] OWASP Testing Guide v4:
[https://www.owasp.org/index.php/OWASP_Testing_Project](https://www.owasp.org/index.php/OWASP_Testing_Project)

[2] OWASP Application Security Verification Standard (ASVS):
[https://www.owasp.org/index.php/Category:OWASP_Application_S...](https://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project)

[3] [https://github.com/juliocesarfort/public-pentesting-
reports](https://github.com/juliocesarfort/public-pentesting-reports)

[4] Modeling and Discovering Vulnerabilities with Code Property Graphs:
[https://www.sec.cs.tu-bs.de/pubs/2014-ieeesp.pdf](https://www.sec.cs.tu-
bs.de/pubs/2014-ieeesp.pdf)

------
borplk
Maybe they have specific targets or categories of software in mind.

Or maybe the individuals focus on a specific theme or pattern.

They can often use the experience if they follow a similar pattern.

For example someone might focus on password manager applications.

If they find one family of weaknesses they can often go test other similar
applications to see if they have made similar mistakes.

------
marcusfrex
If a security researcher comes from a developer and/or system administrator
background, then they use their experience and habits to guess where to look
first.

That's why it is always suggested to start from point zero to be a solid
security researcher.

------
sevmardi
Not in the field, but I assume they have a list of things they go through,
and if they encounter something strange they dig deeper.

------
wepple
Complex or notoriously difficult-to-get-right code, accessible from the other
side of a security boundary.

------
elyrly
Recon, recon, recon.

        "Give me six hours to chop down a tree and I will spend the first
        four sharpening the axe."
        --- Abraham Lincoln

