
Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? - zdw
https://www.usenix.org/conference/usenixsecurity18/presentation/mickens
======
smacktoward
I love Mickens' work, and think this is overall a great presentation, but I
feel like it misses (or maybe just doesn't fully explore) an important point.

Start with the Internet of Things example. He chalks up the abysmal security
record of IoT devices to two factors: it keeps IoT devices cheap, and IoT
vendors don't understand history. And there's a lot of truth in both these
assertions! But they are both just expressing facets of a deeper, more
fundamental reason: IoT devices aren't secure because _their customers don't
demand security._

This deeper problem completely explains why the two higher-level problems he
observes exist. Making your product secure makes it more expensive and slower
to come to market than just leaving it wide open, and the IoT vendors know
their customers care about cost and availability and don't care about
security. So they do the rational (in the _homo economicus_ sense of the term)
thing and optimize for things their customers are actually willing to pay for.

The same causality can be observed in the ML world. Mickens asks why people
are hooking ML systems whose operation isn't fully understood to important
things like financial decisionmaking and criminal justice systems. The answer
is that _the customers demand it._ ML is trendy and buzzworthy, so if you're a
vendor of (say) financial systems, and you can find some way to incorporate ML
into your offerings with a straight face, now you have an attractive new
checkbox on the feature list your salespeople dangle in front of potential
customers. And once the effectiveness of having that box checked becomes
clear, you kind of _have_ to do it, even if you know it'll be ineffective or
even worse, or risk losing business to a competitor with fewer scruples.

All of which is to say that what we see playing out in both these scenarios
isn't really the vendors' fault. They are instead classic examples of market
failure. People end up buying shoddy products because spotting their
shoddiness requires technical expertise they don't have; responsible vendors
who try not to make shoddy products lose sales to irresponsible vendors who
don't; eventually all the responsible vendors are out of business and the only
products available to buy are shoddy ones. There are lessons to learn from
this, but they're economic rather than technological.

~~~
zaptheimpaler
This is like saying doctors should push cheap drugs that may or may not make
your testicles explode because _customers don't demand non-testicle-exploding
drugs_.

We trust doctors to take into account all the nuances of medicine that laymen
have never even heard of, and give us good advice. Because not everyone can be
an expert on everything.

It's the same with software. We can't expect everyone to be an expert; it's up
to our industry to act responsibly.

Sure, it's "market failure" insofar as duping uninformed people is a good way
to make a quick buck, but the deeper issue is moral failure, a failure to take
responsibility.

~~~
rossdavidh
...and we don't just rely on drug makers, for example, to be moral and take
responsibility. We have government agencies that _require_ strict testing of
their safety and effectiveness. If we left it up to the market, we would get
inferior results. The problem is, we have no FDA equivalent for tech security.

~~~
acct1771
That's a great point made with a pretty suspect example. The FDA is very
subject to regulatory capture.

~~~
rossdavidh
Regulatory capture is certainly an important problem, but the pre-FDA record
strongly suggests that the FDA we have is much better than not having one. I'm
not suggesting that regulatory capture isn't a real problem, just that the
current situation in tech security (no FDA equivalent) is worse.

------
CharlesW
"Using case studies involving machine learning and other hastily-executed
figments of Silicon Valley's imagination, I will explain why computer security
(and larger notions of ethical computing) are difficult to achieve if
developers insist on literally not questioning anything that they do since
even brief introspection would reduce the frequency of git commits."

For anyone who hasn't heard a James Mickens talk, do yourself a favor!

~~~
326543
He's the guy who wrote the Slow Winter. He's hilarious.

[https://www.usenix.org/system/files/1309_14-17_mickens.pdf](https://www.usenix.org/system/files/1309_14-17_mickens.pdf)

~~~
smolder
It'd be more hilarious if he didn't reference real problems with no obvious
solutions short of a painful dismantling of our heavily exploited societal
constructs.

------
Avshalom
James Mickens is pure gold

[https://mickens.seas.harvard.edu/wisdom-james-mickens](https://mickens.seas.harvard.edu/wisdom-james-mickens)

~~~
wyldfire
This guy looks like the one who wrote those satire magazine-style articles.

~~~
stouset
He _is_ that guy. Mickens is a legend.

~~~
tedunangst
But how can we be sure? Maybe there are two people with the same name who look
exactly alike with the same writing style. If we put them all on the
blockchain, do they have the same hash?

------
jboggan
"I'm not saying that machine learning is the portal to a demon universe, I'm
just saying that some doors are best left unopened."

------
avhwl
This is an entertaining and important talk. Technology is not value-neutral,
and insistence that it is is a larger meta-security issue in and of itself.

------
KZeillmann
Regarding the "we don't know how this stuff works" point, doesn't the FDA
approve a ton of drugs where we don't know the exact mechanism of how it
works? Do we need to know exactly and precisely how something works to know
_that_ it works?

~~~
a1369209993
The overwhelming majority of medical treatments don't have intelligent humans
actively trying to maliciously sabotage them. Many drugs can be made
horrendously lethal or otherwise dangerous with little effort (often just by
significantly increasing the dosage), but we don't need to care very much
because it's not possible to silently and untraceably apply that effort from
arbitrarily far away, and there usually isn't anything to gain from doing so
even if it were possible.

~~~
emodendroket
Is there any kind of sabotage other than malicious sabotage?

~~~
dredmorbius
Ignorant, unintentional, self, incidental.

Stupidity is probably the biggest threat. Dietrich Bonhoeffer:

[https://religiousgrounds.wordpress.com/2016/05/11/bonhoeffer-on-stupidity-entire-quote/](https://religiousgrounds.wordpress.com/2016/05/11/bonhoeffer-on-stupidity-entire-quote/)

~~~
mehrdadn
I think the parent meant that "sabotage" is "deliberate destruction" by
definition.

------
badrabbit
Seems like a well-argued opinion, but I disagree. He's thinking too much in
absolutes, while in practice people care about relative security.

Computer security has gotten a lot better; many organizations have achieved a
security posture they are comfortable with. I think he's focusing strictly on
application security; in reality you care about maintaining C.I.A.
(confidentiality, integrity, and availability) for the data.

I don't care if the entire software stack is riddled with vulnerabilities and
the CPU has unfixable vulnerabilities, so long as that does not result in
attackers (as defined by my threat model) compromising the confidentiality,
integrity, and availability of data I consider valuable.

The software might get exploited, but there are post-exploit controls; those
may get bypassed, but attacker-facing machines would ideally not store valuable
data. The attackers can move laterally, but there are detection and prevention
measures for that. I mean, both in life and in computer security, one shouldn't
expect absolute security; achieving an acceptable security posture should be
enough.

I'm not prepared to handle 10 guys mugging me as I walk home, but that isn't my
goal. My goal would be to defend myself against one or two attackers of the
same weight class as myself.

There is a reason so much security appears bad: it's easier to clean up a
breach of security, or to just ignore it, than to implement an SDLC and have
independent security staff. In the end, security improves only if it's cheaper
to do so.

~~~
diafygi
I disagree with your disagree.

Say we all lived 50 years ago and worked in ergonomics engineering instead of
software engineering. People were fairly comfortable doing non-stressful work,
which I guess was better than being pulled into meat grinders of The Jungle.

However, there was this new science indicating a new problem: repetitive
stress injuries. Over the next 20-ish years, we learned that these injuries
caused a ton of harm, so we started legislating protections against these
types of stresses, which resulted in increased productivity.

Now switch to today. What makes the lack of software security best practices
so different from repetitive stress injuries 50 years ago?

Software engineering feels like it will follow the same path as every other
engineering discipline. First, we'll feel like we're gods. Then, we'll suffer
losses. Finally, we'll be regulated.

Remember, every regulation is written in blood. Software will be no different.

~~~
tokyodude
An 8-year-old can write software and post it to GitHub. An 8-year-old can't
build a house or car from scratch (two things whose construction is regulated).

My point being: software is harder, and possibly impossible, to regulate. Is
all open source going to be banned unless it's been written by licensed,
certified programmers and gone through review by an appointed inspector? That
seems untenable.

~~~
badrabbit
Writing software can't be regulated, but use of unaudited software can,
especially for commercial use.

------
Yhippa
Here's the YouTube link to his talk:
[https://youtu.be/ajGX7odA87k](https://youtu.be/ajGX7odA87k). This made me
laugh more than it probably should.

------
the_greyd
I'll post IMO the most interesting slide of the talk.

---

The Assumptions of Technological Manifest Destiny:

1) Technology is VALUE-NEUTRAL, and will therefore automatically lead to good
outcomes for everyone.

2) Thus, new kinds of technology should be deployed as quickly as possible,
even if we lack a general idea of how the technology works, or what the
societal impact will be.

3) History is generally uninteresting, because the past has nothing to teach
us.

---

How relevant is this, with the Cambridge Analytica scandal and now Google's
censored search engine in China. How about self-driving cars?
Cryptocurrencies?

------
Anderkent
Sidestepping comedy for a bit, there are a lot of inscrutable systems that we
connect to 'things that matter' all the time. The financial systems themselves
are pretty damn inscrutable. Corporations are very often inscrutable.

~~~
tptacek
The word Mickens uses is "interpretable", which financial infrastructure is,
and ML models are not.

~~~
Anderkent
Financial IT is interpretable, maybe, but is the financial system itself? You
have a lot of agents taking actions that you can't really interpret from
outside, unless you say something vacuous like "this person made this trade
because they thought it was good", at which point you may as well say "this AI
model made this decision because it thought it was good".

~~~
tptacek
What does "the financial system" mean? Obviously there's a level you can
address with that term where, just like with ML, interpretability becomes an
open question. But on most every level that is genuinely comparable to the
role an ML model plays in a software product, finance is plenty interpretable.

------
j45
Speaking generally, and not about this post - Keynote speakers often aren't
technical (enough), but speak about topics that have technical underpinnings.

Take, for example, dangerous management consultants who speak all over the
place about AI, disruption, innovation, and digital transformation, but don't
know technology, which is the underpinning of all the things they're speaking
about.

~~~
tjr225
I get this feeling from even some of the biggest conventions there are...
_cough_ I felt fairly fricken disenfranchised during a recent convention for a
popular containerization solution...

~~~
j45
Good point. "Technologists in management /leadership" groups need to form
everywhere to get the right people speaking about topics they understand.

It's ironic that there is an imposter syndrome among competent people, and
incompetent people have no issue being imposters.

------
tialaramex
He says at one point "Patrick Thistle", and I grant this seems like it makes
sense, but the name of that particular association football club is in fact
"Partick Thistle". Partick is a real place, albeit not one which today would
obviously be in need of a professional soccer team, and it isn't where they
play.

------
throwaway_badai
A few years ago, I was working in a company that was trying to build an
innovative NLP system, or in more honest words, to do a chatbot that doesn’t
suck. Spoiler alert: we failed.

There were a lot of things wrong with how this company was run and the product
we were building, but I won’t go into details except to say that there were a
lot of intelligent people forced to do silly things by a clueless
micromanaging boss.

Anyway, one of the problems with chatbots is the one of prior knowledge.
Chatbots and other NLP solutions don’t simply need to be able to understand
and produce conversation, they need to have something to talk about, a model
of the world, some basic facts, and it turns out it is very complicated to
build in general.

So our boss decided that one way to fake it was to use one of those free
corpora of public-domain English literature. Let’s just make our system “read”
a lot of text, and in some way it will gain prior knowledge. So if it reads
“the Sun was high in the sky”, it would understand that the Sun is something
that has a position and that one of its possible positions is “high in the
sky”. So if someone ever asked the chatbot “where can the Sun be?” it could
answer “The Sun can be high in the sky”. It was all pattern matching, nothing
very smart about it, just something to fake some parts of the conversation and
avoid having too many “I don’t know”s.

Of course, it was literature, including fiction. So caterpillars could smoke
hookahs, but that was considered an acceptable risk, it was better to have
something wrong than an admission of ignorance. In some way don’t humans also
repeat stuff without understanding them?

It kinda worked. If you asked “What do people eat?” it would answer “People
eat potatoes, mushrooms and tires” or something like that. It was not very
smart but somewhere in the literature the pattern “<Person> eats <X>” existed
and it was parroting it. If you asked “What do children eat?” it would answer
“Children eat carrots, rocks and cupcakes”. It was a bit silly but nice. But
then we asked “Who eat children?” and the answer was, I shit you not, “Black
people eat children, while howling to the moon and covering their naked body
with feces”.

Except it didn’t actually say “Black people”, it used the other term, the one
which is much worse.

The sudden realization that we had created an AI, but an incredibly racist
one, did not make us abandon the approach. We just found the guilty piece of
text in the corpus and expunged it. Then it just said “Companies eat
children”. Depending on your politics, you may consider that better.

To be fair, it was not really machine learning, but the story shows what can
happen if you don’t control your input, either because it comes from the evil
internet or because it is a large dataset that is too big to reasonably
sanitize and was not built for this purpose.

~~~
manquer
Nice anecdote, thanks for sharing. You assert that it was not really ML; I
think it is. It may not be true AI, but pattern matching/recognition is the
core part of ML. ML is just a stochastic and statistical approach to pattern
matching; the hype around ML has kind of distorted expectations of the field.

You don't have to really control the input. It is not difficult to automate
the sanitization by building a feedback loop of abuse reports that deletes
patterns from the corpus. If you cannot release before significant cleanup,
you could either use something like Mechanical Turk or crowd-sourced paid
users to test the system extensively, or be more thorough: generate millions
of possible questions and the answers for them and run content moderation
tools on them, human-assisted or otherwise, or build a filter layer into your
chatbot itself. None of these approaches, of course, gives you a guarantee
something won't go wrong; they give you a reasonable probability it won't.
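A minimal sketch of the filter-layer idea, assuming a blocklist fed by abuse
reports (all names and answers here are hypothetical illustrations, not a real
moderation system):

```python
# Abuse reports feed a blocklist that suppresses matching answers
# before they reach the user.
blocklist = set()

def report_abuse(answer):
    """An abuse report adds the offending answer to the blocklist."""
    blocklist.add(answer.lower())

def filtered_answer(raw_answer, fallback="I don't know."):
    """Suppress any answer previously reported as abusive."""
    if raw_answer.lower() in blocklist:
        return fallback
    return raw_answer

report_abuse("Companies eat children")
print(filtered_answer("Companies eat children"))  # -> I don't know.
print(filtered_answer("Children eat carrots"))    # -> Children eat carrots
```

A real system would match patterns rather than exact strings, but the loop is
the same: report, block, fall back to an admission of ignorance.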

------
silverlake
I’d like to see James Mickens vs Yoram Bauman in a comedy roast battle.

------
carlosdp
I love some good James Mickens content. I highly encourage people to look up
his other work, especially his talks.

------
bogomipz
This was comic genius. It was also equally insightful. What a wonderful
speaker and a wonderful talk. Did anyone else catch the Bob Ross painting
references during the graphic of the number 4? That had me in stitches.

Thank you for posting this. This made my day.

~~~
octosphere
Noticed this too. I recall one person in the audience laughing uproariously at
it - probably the only person that got the reference.

------
bogomipz
In the section of the talk "how do we pick the weights of the neural net", the
speaker states:

"the error then is going to be difference between what the classification of
the neural net outputs and what the classification or the oracle will be."

Could someone say what is an "oracle" in this context?

He says this at 10:31 in the talk.

~~~
dgacmu
The oracle = magic box that always gives the correct answer.
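In code terms, the oracle is just whatever supplies the ground-truth labels
(e.g. human-labeled data), and the error the speaker describes is the gap
between it and the model's output. A hypothetical sketch (the labels and
function name are made up for illustration):

```python
def classification_error(model_outputs, oracle_labels):
    """Fraction of examples where the model disagrees with the oracle,
    i.e. with the source of ground-truth answers."""
    wrong = sum(1 for pred, truth in zip(model_outputs, oracle_labels)
                if pred != truth)
    return wrong / len(oracle_labels)

# Hypothetical model predictions vs. oracle labels:
preds = ["cat", "dog", "cat", "bird"]
truth = ["cat", "dog", "bird", "bird"]
print(classification_error(preds, truth))  # -> 0.25
```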

~~~
bogomipz
Thank you. Googling offered very little help as it just returned results
regarding Oracle the company and their ML service offerings. Cheers.

------
ivthreadp110
Because so many people/companies think about security as a secondary thing,
and in most places improving security requires only some really simple
changes.

------
Animats
Fixing security is quite possible.

Install a backdoor, go to jail for "exceeding authorized access".

Fail to fix a security bug, get sued for negligence.

Make it public policy that license contracts cannot override those
responsibilities.

~~~
alasdair_
>Make it public policy that license contracts cannot override those
responsibilities.

This would be a disaster for open source. Who wants to write software for free
if you can get sued for a bug?

~~~
zbentley
I think it's implicit in that proposal that the amount of software available
would massively decrease. That's not necessarily a bad thing.

~~~
toast_coder
I think that's a ridiculous statement. Should we also limit how many books are
written and who can write them?

What is the difference?

~~~
shawnz
If you buy a book and it turns out to be trash, is that negligence on the part
of the author? Is your safety at risk because of it?

You could maybe argue that this is true for textbooks, but not much else.

------
stcredzero
How about a law where someone who finds an exploit can claim a bounty against
the company selling the product or publishing the website? This doesn't
address the machine learning part, but it does address IoT and security
generally.

------
choonway
That's because the metrics keep changing.

------
TearsInTheRain
Oh boy Mickens must really hate blockchains and uncensorable platforms like
a̶s̶s̶a̶s̶s̶i̶n̶a̶t̶i̶o̶n̶ prediction markets

~~~
darzu
"Blockchains Are a Bad Idea: More Specifically, Blockchains Are a Very Bad
Idea."

[https://www.youtube.com/watch?v=15RTC22Z2xI](https://www.youtube.com/watch?v=15RTC22Z2xI)

~~~
TearsInTheRain
eh that was a pretty bad lecture. Most of the technical problems he talked
about are easily solvable.

------
jrochkind1
This is an amazing lecture!

------
egberts
Nobody in the security industry wants to make the security industry go away:
it's that sad.

------
ppierald
Makes me think of the recent XKCD on voting machine software:
[https://xkcd.com/2030/](https://xkcd.com/2030/)

------
oihoaihsfoiahsf
"Would you want Kingsley doing these things?"

Depends on how mission critical <thing> is and how accurate Kingsley is?

