
Interview with Christian Szegedy, Discoverer of AI Adversarial Examples [video] - ayw
https://scale.ai/interviews/christian-szegedy
======
bdowling
> Today, he's working on formal reasoning and dreams of creating an _automated
> software engineer_.

How does that work and should we be worried? The article doesn’t touch on this
topic beyond the introduction.

(Note: I am aware of using genetic algorithms to evolve code to produce a
solution to some specific problem. "Automated software engineer" seems to
contemplate something more sophisticated than that.)
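(For readers unfamiliar with the idea, here is a toy sketch of what "evolving code toward a solution" can look like in its simplest form: a (1+1) evolutionary loop that mutates a candidate string and keeps any change that scores at least as well against a fitness function. This is only an illustration of the general technique, not anything from the interview; the target string and helper names are made up.)

```python
import random

def evolve(target, alphabet, seed=0, max_gens=20000):
    """Toy (1+1) evolutionary algorithm: mutate one character per
    generation and keep the child if its fitness does not decrease.
    Fitness here is simply the number of characters matching `target`."""
    rng = random.Random(seed)
    parent = [rng.choice(alphabet) for _ in target]
    fitness = lambda cand: sum(a == b for a, b in zip(cand, target))
    best = fitness(parent)
    for gen in range(max_gens):
        if best == len(target):
            return "".join(parent), gen  # perfect match found
        child = parent[:]
        # mutate a single random position
        child[rng.randrange(len(child))] = rng.choice(alphabet)
        f = fitness(child)
        if f >= best:  # accept improvements and neutral moves
            parent, best = child, f
    return "".join(parent), max_gens

# Evolve toward a tiny "program" using only characters it contains:
target = "print('hi')"
result, gens = evolve(target, sorted(set(target)), seed=1)
```

Real genetic programming evolves syntax trees with crossover and a behavioral fitness function (e.g. passing tests), which is far harder than this string-matching toy, and still a long way from an "automated software engineer."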

~~~
symplee
The first generation of automated software engineers will write one bug per
100 lines of code. This will allow them to blend in while working remotely and
provide job security for existing human developers.

The second gen senior automated software engineers will write one bug per
1,000 lines but will become too expensive to maintain as they age.

All efficiency gains will therefore be rolled back to the naive first gen's
level in order to free up capital for the third gen automated angel investor.

------
hodgesrm
This is a good interview. I was a little disappointed that it did not dig more
deeply into adversarial examples. The compensation was that Szegedy argued
that human-like reasoning may be based on relatively simple mechanisms that we
just need to find. He cited AlphaGo as a solution that turned out to be much
simpler than expected.

