
Announcing cross_fuzz, a potential 0-day in circulation, and more - mattyb
http://lcamtuf.blogspot.com/2011/01/announcing-crossfuzz-potential-0-day-in.html
======
ximeng
Microsoft is not only not acknowledging security problems, but is asking that
they not be disclosed after several months of inaction.

Search engine hits to this guy's site indicate that these problems are being
independently discovered by people based in China.

<http://lcamtuf.coredump.cx/cross_fuzz/known_vuln.txt>

There are bugs in all the other browsers too, although with better responses it seems.
Interesting problems to solve here, both technically and socially.

\---

This guy's blog is great, read more of it! Some recent articles:

<http://lcamtuf.coredump.cx/electronics/> \- geek's guide to electronics for
programmers who don't know this stuff

<http://lcamtuf.coredump.cx/word/> \- cool physical project - threat level
indicator

\---

Author Wikipedia page:

<http://en.wikipedia.org/wiki/Micha%C5%82_Zalewski>

~~~
robin_reala
I’d also highly recommend his 'Silence on the Wire' book (
<http://lcamtuf.coredump.cx/silence.shtml> ); it’s a really readable full-
stack overview of potential security problems.

In fact, I’m going to dig my copy out and read it again.

~~~
m0nastic
He's also the author of Skipfish (a fairly new web scanner), which I've been
making a lot of use of the past few months.

It's still fairly beta, but already I find it to be more useful than
WebInspect in many cases.

~~~
tptacek
That's a pretty horrible indictment of WebInspect, because Skipfish virtually
never finds anything for us (we ban scanners on our teams, but I like Zalewski
and tend to run Skipfish just for kicks).

~~~
viraptor
Any specific reasons you don't use them? It's a pretty interesting view from a
security guy, really... most of the security people I've heard from before say
something like "it doesn't hurt to leave it running while we do our stuff
manually - sometimes it works".

Also, did you mean fuzzers in general or only web scanners?

~~~
tptacek
Just scanners. Everyone uses Burp's fuzzer. And in the rare cases we end up
doing network pentests, we will use Nessus.

Scanners make testers stupider. Even if you are conscientious about using them
responsibly†, they still work to turn off the part of your brain that thinks
about the kinds of flaws they do a good job of detecting. If you say you'll
only run them at the end of an engagement to see if there's anything you
missed, now you're working with a safety net.

† _And we've worked on plenty of projects that had previous runs from teams
that_ weren't _responsible about scanners, with predictably horrific results._

~~~
m0nastic
We recently partnered with another assessment firm to handle overflow
(something not one person in our group was happy about, but we spent the
entirety of 2010 overworked), and in the past couple weeks I've had two issues
come to me where this firm submitted a clean report with no findings.

They asked me to take a quick look at the applications (as it's a somewhat
rare occurrence to not have a single finding), and I immediately turned up a
bunch of issues. Upon further inspection, this company is just running
WebInspect, apparently without any actual validation or manual testing.

I so need to get out of our industry.

~~~
tptacek
Or move to a better company.

I tried running away from security in 1998 and found that there's charlatanism
anywhere you go. Try being a baker; no, wait, there are well-marketed
charlatan bakers, too!

There is at least one other company that has publicly banned web scanners. My
take is, you should work for/retain companies that refuse to use scanners,
and, when possible, avoid using companies that mention using them.

A friend of mine, who I'd have loved to work with, ended up at a security
company whose sales pitch to him included "we buy our consultants whatever
tools they want, so you can have WebInspect _and_ AppScan". I was unable to
convince him that this was a "run, don't walk" red flag.

Don't get me started on paid overtime.

~~~
m0nastic
_Or move to a better company. I tried running away from security in 1998 and
found that there's charlatanism anywhere you go. Try being a baker; no, wait,
there are well-marketed charlatan bakers, too!_

I figure it's a new year and maybe I'll finally man up and find someplace more
fulfilling.

My issue is that the pattern I find tends to be:

  1. Go work with a bunch of really smart, awesome folks doing cool work.

  2. Group is successful, gets bought by a large monolith.

  3. It gets shitty, everyone leaves, starts over somewhere new.

  4. Repeat steps 1-3.

I am literally on the fourth iteration of this process.

 _A friend of mine, who I'd have loved to work with, ended up at a security
company whose sales pitch to him included "we buy our consultants whatever
tools they want, so you can have WebInspect and AppScan". I was unable to
convince him that this was a "run, don't walk" red flag._

That's a bit like liking both "country" and "western".

 _Don't get me started on paid overtime._

In general I've never complained too much about hours. I've always found that
this type of work is cyclical as far as how busy you are on any particular
test.

As long as I can remember, it always went that 4th quarter was crunch time
(with companies having to spend their budget before the end of the year) so
you'd have to bust your ass for a few months, but then things would lull a bit
in January (as budgets aren't allocated yet). For some reason, this past year
the entire year was like that 4th quarter sprint.

~~~
tptacek
There's a common industry practice of double- and triple-booking people on
projects, to the point where schedules can only possibly be kept if people
regularly work 10-12 hour days (but, of course, bill 16-32 hours each day).
Consultants are kept happy --- indeed, many are thrilled with the arrangement
--- because the consultancies pay subcontractor-rate overtime.

This is predictably disastrous for clients.

It blows my mind, really. When you get a contract to assess an application for
security flaws, the client presumes you are going to find the stuff that needs
to get found; they trust that once you're done and they fix your findings, the
app is safe to deploy. Overbooking consultants is like being an auto shop that
fucks up brake jobs or an electrician that leaves bare sparking wires in the
basement.

That's the thing about hours.

~~~
m0nastic
We're in complete agreement.

I hate being double and triple booked because I worry that I'm going to miss
something that I wouldn't otherwise. I mind slightly less if the projects are
of dissimilar types (like an app test and a network test), but I feel like
it's just an accident waiting to happen.

I worked 16 hour days for the majority of 2010 (worked mind you, not billed)
with a huge portion of those on opposing testing schedules (so I'd do one test
in the daytime, another at night, and then try and sleep for a few hours).

And we don't get overtime. We have a utilization target (420 hours per
quarter); if we hit it, we get a bonus. I don't think anyone has missed it
in two years (as pretty much everyone is working at least 600 hours per
quarter).

I'm hoping we fill some req's (we had around 30 open last year for testers,
but the larger organization had a hiring freeze).

~~~
tptacek
Yeah, I don't say this often because I feel weird talking about people who are
effectively competitors, but, your company is broken and you should quit.
There are better places to work. Drop me an email sometime; even if you're
geographically (or socially ;) precluded from working for Matasano, I can put
you in touch with lots of other people.

------
rphlx
Sadly you don't need a fuzzer to crash Adobe Flash (at least on x86_64 Linux).
A few hours browsing top-25 websites normally does the trick.

There is a big reason Chrome sandboxes plugins, and it's named "Adobe".

------
sabat
Although MS' reaction does appear to be irresponsible, a browser crash is
hardly the worst security issue I can imagine. If that's all this guy is
finding -- it's all he mentions in his post -- then this sounds more like
security for security's sake than anything practical.

~~~
viraptor
It's not all that he's reporting; he's just assuming some hacking knowledge, I
guess. Every time you see a crash caused by jumping to some unknown address,
there's a pretty good chance that the crash is exploitable - but you can't
tell that easily without going through the source / poking around the binary.

Basically, in many cases jumps into non-code areas mean that some buffer
overflowed (or some pointer got corrupted) and overwrote the stack frame's
return address. As long as you can control what was overwritten, you're likely
to be able to point at data you supplied yourself (back at the stack, with
some text you control). If that condition is true, you can provide new code
for execution straight from HTML - and that means the crash is exploitable.

Then again, even if you can't see how that specific crash is exploitable, that
doesn't mean it isn't. For some time double-free crashes were considered just
bugs; then someone found out you can manipulate the pointers in possible later
reallocations. What I'm trying to say is that every crash caused by user-
supplied data should be looked at from the "might be exploitable in the
future" perspective.

~~~
rphlx
> Every time you see a crash which is caused by jumping to some unknown
> address, there's a pretty good chance that the crash is exploitable

No. It's not 2001. Modern MS (and UNIX) operating systems and compilers use
NX, stack guarding, and address randomization to make stack/heap overflows
pretty difficult to exploit. Not impossible, but statistically unlikely.
Run-of-the-mill C programming errors in a web browser are hardly automatic
remote sploits now.

> every crash caused by user supplied data should be looked at from the "might
> be exploitable in the future" perspective.

OK, I agree with this, but on principle, not because there's a high chance
it's remotely exploitable.

