
Interview with an Adware Author (2009) - ANTSANTS
http://philosecurity.org/2009/01/12/interview-with-an-adware-author
======
mattknox
(Possibly) interesting aftermath to this article:

I gave this interview a long time ago to a completely unknown security
blogger. (I think her blog had like 10 subscribers at the time). Quite a while
later, she published it, and she had gained at least one more subscriber in
the meantime, and that was Bruce Schneier. He reblogged it on his blog, and
from there it took off on slashdot, reddit, news.yc, etc..

The article linked to my site, which in turn had my cell phone number on it. I
figured that I would get a massive avalanche of death threats, etc., but
interestingly, I was only contacted a few times by email, and those were all
positive things (offers for consulting gigs that I didn't take, a few
conference talks that I did and that ultimately led to me joining twitter a
while later, requests for advice).

It was striking to me at the time that the collective reaction of the world
was so positive. I had feared that I would be stuck in an adware-developer
ghetto forever. We often talk about the fact that Silicon Valley has succeeded
in part due to its stance on failure, and I sense an echo of that in how it
shook out for me.

Feel free to ask me anything. I figure it's the ongoing part of my penance. :)

~~~
codezero
I didn't catch this when it was first posted, I'm glad it showed up here.

Possibly you didn't get negative responses because you were sincere and
introspective. Most people can understand the idea that when in need of money,
you slowly make concessions and compromises that you wouldn't otherwise make.

I remember when I was at Red Hat, one of the engineers had found a way to
clear a worm off Red Hat servers using our auto-update tool, but we weren't
allowed to push it because of the possible unintended consequences.

I'd be really curious to hear whether you had to face any of those kinds of
things. Were there any technical catastrophes from doing so much low-level
wrangling? It seems like one false move could drop half your nodes; were
there any fail-safes?

Also, unrelated (and apologies for the Quora link), here's a really cool
answer from someone who used to spam people:
[http://qr.ae/Ga85e](http://qr.ae/Ga85e) It's very different from your
experience in that they were non-technical and in a poor region, but still
interesting to see global perspectives on similar work.

~~~
mattknox
I think you're right. If people talk honestly about why they did things and
weren't obviously trying to do ill from the start, it's pretty easy to
empathize.

I read that the participants in the Milgram experiments were _way_
disproportionately likely to be conscientious objectors later in life, and that
they attributed this to having been in the experiments. I hope that I've been
immunized like they were.

We were crazy careful about screwing up people's machines, because anything
that we did that made it seem to malfunction would likely result in them
reinstalling the OS over us, and while we had some ideas about how to persist
across a reinstall, it was a few bridges too far.

I certainly can't say that we avoided all catastrophe, but I'm not aware of us
ever causing one. We had pretty good abstractions: the stack-shuffling code
was fully encapsulated so that it wasn't just littered about our normal code.

We also tried pretty hard to avoid the really dangerous stuff. It sounds crazy
to put arbitrary code in some random process, but if you know that it doesn't
leak, and it never interacts with other threads in the process, it's not
really _that_ risky.

One thing that probably also helped is that we had so much feedback from the
individual ad clients. So we would know pretty fast if something started
happening.

~~~
codezero
Really interesting. I've watched controlled environments of less than 50 nodes
go completely haywire from bad code, so it's quite a feat to push code into 4
million questionable environments while at the same time dodging the virus
soup from competitors.

~~~
mattknox
I think it's harder to safely make changes to development machines, because the
coupling between components is greater. If you want to change, e.g., libxml or
something like that, lots of processes that you don't know about might be
affected, and all hell can break loose.

By contrast, I was generally nuking random userland processes, which no
process (or user) would mourn or miss. I think that is a lot safer. There were
cases where we would touch something important, like the CreateRemoteThread
stuff, but that was a relatively small amount of our code, which rarely
changed, and again, it had very little interaction with anyone else's code.
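For context, the "CreateRemoteThread stuff" he mentions is the classic Windows DLL-injection pattern, widely documented since the late '90s. The sketch below is only an illustration of that general pattern, not the author's actual code; the PID and DLL path are hypothetical inputs, and error handling is minimal:

```c
/*
 * Illustrative sketch of the classic CreateRemoteThread DLL-injection
 * pattern on Windows. Not the interviewee's code -- just the well-known
 * shape of the technique being discussed.
 */
#include <windows.h>
#include <string.h>

static BOOL inject_dll(DWORD pid, const char *dll_path) {
    HANDLE proc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION |
                              PROCESS_VM_WRITE, FALSE, pid);
    if (!proc) return FALSE;

    /* Copy the DLL path string into the target process's address space. */
    size_t len = strlen(dll_path) + 1;
    LPVOID remote = VirtualAllocEx(proc, NULL, len, MEM_COMMIT, PAGE_READWRITE);
    if (!remote || !WriteProcessMemory(proc, remote, dll_path, len, NULL)) {
        CloseHandle(proc);
        return FALSE;
    }

    /* kernel32.dll loads at the same base address in every process (at least
       in the pre-ASLR era this article dates from), so LoadLibraryA's address
       in our process is also valid in the target. */
    LPTHREAD_START_ROUTINE load_library = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");

    /* The remote thread executes LoadLibraryA(remote) inside the target,
       pulling our DLL into its address space. */
    HANDLE thread = CreateRemoteThread(proc, NULL, 0, load_library,
                                       remote, 0, NULL);
    if (thread) {
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
    }
    CloseHandle(proc);
    return thread != NULL;
}
```

His point about risk maps directly onto this shape: the injected thread runs on its own stack, so as long as the loaded code frees what it allocates and never touches the host process's other threads or shared state, the host mostly can't tell it's there.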

It's also possible that we _did_ create a lot of havoc, but I didn't know. I
think that's less likely because I think we would have noticed the loss of
revenue, but it's possible.

------
meowface
>I said, “I know enough C that I could kick the virus off the machines,” and I
did. They said “Wow, that was really cool. Why don’t you do that again?” Then
I started kicking off other viruses, and they said, “That’s pretty cool that
you kicked all the viruses off. Why don’t you kick the competitors off, too?”
>It was funny. It really showed me the power of gradualism. It’s hard to get
people to do something bad all in one big jump, but if you can cut it up into
small enough pieces, you can get people to do almost anything.

This rings incredibly true to me, especially in my field.

edit:

>The good news is that I’ve been on the other side of those automated script
things. Their capability is incredibly dangerous, but the actuality tends not
to be.

>It would have been fairly trivial for me to go spelunking for people’s
credit card information or whatever. I had four million nodes. I could have
done it without anybody at the company even noticing. I was the guy writing
Scheme, so I could have just put a text file somewhere and then made it go
away, and there wouldn’t even have been an executable lying around.

>But I didn’t. To do that, by definition you have to be willing to become a
criminal, and that’s a little bit rare. So I’m not too worried about that.

I also think this is a good point. A lot of services out there could
potentially ruin your life (or at least make it difficult) if there's a rogue
employee who targets you (or targets a lot of people). An ISP employee looking
at web logs, a Google employee reading all your emails... The same goes for
the NSA, just the power that can be abused is even larger.

I think it will never be possible to completely prevent humans from accessing
these kinds of things, so I hope that companies (and the NSA) are instituting
controls and checks, including anomaly detection, to find rogue, malicious, or
snooping employees.

~~~
mattknox
It seems to me that very few organizations (*) have effective countermeasures
against bad behavior, but very little seems to happen anyway.

(*) Google had that guy who stalked people
([http://gawker.com/5637234/gcreep-google-engineer-stalked-teens-spied-on-chats](http://gawker.com/5637234/gcreep-google-engineer-stalked-teens-spied-on-chats))
and I've read that intelligence circles have a whole category called LOVINT
for people spying on crushes. That said, they have hundreds of thousands of
employees between them, and the incidence rate is probably really low.

------
frozenport
I find an incongruity in these two statements.

>> It would have been fairly trivial for me to go spelunking for people’s
credit card information or whatever... But I didn’t. To do that, by definition
you have to be willing to become a criminal, and that’s a little bit rare. So
I’m not too worried about that.

>> It was funny. It really showed me the power of gradualism. It’s hard to get
people to do something bad all in one big jump, but if you can cut it up into
small enough pieces, you can get people to do almost anything.

~~~
mattknox
Yeah, I can absolutely see what you mean, but I don't think there is as much
incongruity as you think.

Had I been working for the Russian mob or whatever, that is, an organization
that is comfortable with being explicitly criminal, I don't know and fear what
would have happened. However, most organizations are not comfortable with
telling employees to break the law, most of the time. In particular, it seems
to me to be really rare in the private sector; I don't have any real
experience with government.

------
cocksparrer
I like how he used TinyScheme. s7 is also fun to hack around with as a
small-footprint Scheme interpreter.

~~~
mattknox
It seems like s7 is a descendant of TinyScheme, and it probably wasn't
available at the time. Very cool, though. :)

