
You mean the government wastes money?

edit: The DARPA site you linked above calls it a "fully automated computer security challenge". They are not claiming it's AI.




> You mean the government wastes money?

That's an ignorant cheap shot. Speculative, risky research is costly because lots of things don't work.

> The DARPA site you linked above calls it a "fully automated computer security challenge". They are not claiming it's AI.

Show me a definition of AI that would include papers published at NIPS/ICML or even AAAI and wouldn't include the techniques used in the DARPA Cyber Grand Challenge.

AI is just a label; it isn't something magical.


That's actually quite easy to do. The techniques used by the AI community (statistical estimation, numerical optimization, etc) are quite different from the techniques used to do things like the Cyber Grand Challenge (heavy use of SAT and other decision procedures for logical theories, lattice-based program analysis, formal notions of program semantics, etc). If you ask an AI researcher how knowledge of omega-complete partial orders can help automatically win programming contests, you'll get a blank look.
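
To make that concrete, here's a toy sketch of the decision-procedure style, using Z3's Python bindings (the `z3-solver` package). The handle() routine being "analyzed" and its 64-byte buffer are invented for illustration; the point is that you encode the path condition and the unsafe condition as logic and ask the solver for a witness:

  from z3 import BitVec, Solver, sat

  n = BitVec("n", 32)  # symbolic 32-bit input, e.g. a length field read off the wire

  # Imagined program under analysis:
  #   void handle(unsigned n) {
  #       char buf[64];
  #       if (n % 7 == 3)
  #           memcpy(buf, src, n * 4);   /* overflow if n * 4 > 64 */
  #   }
  s = Solver()
  s.add(n % 7 == 3)   # path condition: the memcpy branch is taken
  s.add(n * 4 > 64)   # unsafe condition: the copy exceeds the buffer

  if s.check() == sat:
      print("crashing input: n =", s.model()[n].as_long())
  else:
      print("branch proven safe")

A real Cyber Grand Challenge system wraps a lot of machinery (symbolic execution, fuzzing, input crafting) around queries like this, but that solver call is the core move, and it's a very different toolbox from gradient descent.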

I'm actually kinda miffed that OpenAI's press release seems to think automatically writing/exploiting programs is an AI problem, and targeted the AI community in their proposals (as opposed to the programming languages community). I'm a program synthesis researcher, and know how to do major aspects of #2 and #3. I know a lot of people who are already working on them (with some quite impressive results, I might add). And none of us are machine learning people.

The important thing is: the Cyber Grand Challenge is funding a dozen teams to do exactly what OpenAI is hiring for. Call it AI or not, it's being done. You might look at the proposal to automatically exploit systems and call it sci-fi, but I look at it and think "Sure, that sounds doable."

Today's program analysis and synthesis technology allows for tools far beyond anything programmers see today. I'm excited to be part of a generation of researchers trying to turn it into the programming revolution we've been waiting for.


I'm agreeing with you(!?). I don't think any of this is sci-fi (well, maybe the detection one is a bit out there).

I don't think OpenAI would object to proposals from outside the AI/ML field. After all, people are doing deep-learning-based SAT solvers as class projects now, e.g. https://cs224d.stanford.edu/reports/BunzBenedikt.pdf
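
As a toy illustration of the general shape of that idea (not what that particular report does): take a plain DPLL-style backtracking SAT search and make the branching heuristic a pluggable function. Everything below is invented for the sketch; a real project would swap a trained model in for score().

  def score(var, clauses):
      # Stand-in for a learned model: just count the variable's occurrences.
      return sum(1 for c in clauses for lit in c if abs(lit) == var)

  def solve(clauses, assignment=None):
      # Clauses are lists of nonzero ints; a negative int is a negated variable.
      assignment = assignment or {}
      simplified = []
      for clause in clauses:
          if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
              continue  # clause already satisfied under the assignment
          rest = [lit for lit in clause if abs(lit) not in assignment]
          if not rest:
              return None  # every literal falsified: conflict, backtrack
          simplified.append(rest)
      if not simplified:
          return assignment  # all clauses satisfied
      # Branch on whichever variable the (pluggable) heuristic scores highest.
      var = max({abs(lit) for c in simplified for lit in c},
                key=lambda v: score(v, simplified))
      for value in (True, False):
          result = solve(simplified, {**assignment, var: value})
          if result is not None:
              return result
      return None

  # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
  print(solve([[1, 2], [-1, 3], [-2, -3]]))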


Yep. I talked to Dario, one of the authors of this press release. He's definitely interested in both kinds of approaches.


@Darmani Can you think of any ways traditional program synthesis techniques could be combined with machine learning to perform #2? Assume your system has access to a large number of practice problems/solutions to train with.


"Traditional" program synthesis to me means the kind of stuff Dijkstra was doing 50 years ago, which works quite a bit differently than a lot of the constraint-solving based stuff that has really become hot in the last decade. But, answering what you actually meant to ask:

Oh certainly. DARPA's "MUSE" program (which I'm partially funded by) is $40 million into incorporating big data techniques into program analysis and synthesis. There are systems like FlashFill and Prophet which develop a statistical model of what human-written programs tend to look like, and use that to help prioritize the search space. There are also components in the problem other than the actual synthesis part, namely the natural language part. Fan Long and Tao Lei have a paper where they automatically read the "Input" section of programming contest problems and write an input generator. It's classic NLP, except for the part where they try running it on a test case (simple, but makes a big difference).
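
To give a feel for that FlashFill/Prophet idea, here's a crude sketch: enumerate compositions in a tiny string-transformation DSL, visiting candidates best-first under a prior over which primitives humans tend to reach for. The DSL, the priors, and the example task are all made up; the real systems learn their models from corpora of code and search far richer program spaces.

  import heapq
  import itertools

  # Invented mini-DSL: (name, function, prior probability of a human using it).
  PRIMS = [
      ("lower",  str.lower,         0.30),
      ("upper",  str.upper,         0.25),
      ("strip",  str.strip,         0.25),
      ("first3", lambda s: s[:3],   0.15),
      ("rev",    lambda s: s[::-1], 0.05),
  ]

  def synthesize(examples, max_len=3):
      # Best-first search: most probable composition of primitives first.
      counter = itertools.count()          # tie-breaker so the heap never compares programs
      heap = [(-1.0, next(counter), [])]   # (negated probability, tiebreak, program)
      while heap:
          neg_prob, _, prog = heapq.heappop(heap)
          outputs = []
          for inp, _ in examples:
              for _, fn, _ in prog:
                  inp = fn(inp)
              outputs.append(inp)
          if outputs == [out for _, out in examples]:
              return [name for name, _, _ in prog], -neg_prob
          if len(prog) < max_len:
              for prim in PRIMS:
                  heapq.heappush(
                      heap, (neg_prob * prim[2], next(counter), prog + [prim]))
      return None

  # Find a program mapping "  Hello  " -> "hello" and " WORLD " -> "world".
  print(synthesize([("  Hello  ", "hello"), (" WORLD ", "world")]))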

The reverse is also happening, with people incorporating synthesis into machine learning. The paper by Kevin Ellis and Armando Solar-Lezama (my advisor) is a recent example.

I do get touchy when people label this kind of work "machine learning" and seem oblivious to the fact that an entire separate field exists and has most of the answers to these kinds of problems. Those examples are really both logic-based synthesizers that use a bit of machine learning inside, as opposed to "machine learning" systems.


I suspect OpenAI is coming from the Neural Turing Machine etc. approach. But that doesn't preclude other approaches.

Also, NLP is at the very least closely aligned with "AI" research, both traditionally and looking at current trends.

> I do get touchy when people label this kind of work "machine learning"

Don't ;) (Seriously - it's just a label. Embrace the attention)


You're right that it is an opportunity for attention. What seems to actually happen is that a bunch of people who really don't know what they're doing get a bunch of publicity, recruits, and potentially funding and tech-transfer, while we're sitting here with working systems running in production, not getting much. If you look at AI papers that try to touch programs, they have a tendency to not even cite work from the PL community that does the exact same thing but better. It's kinda like how if you Google "fitness," you're guaranteed to get really bad advice for all your results -- the people who actually know about fitness have lost the PR battle.

In short, you can say "it's just a label," but that's not a reason not to fight the battle over words.



