edit: The DARPA site you linked above calls it a "fully automated computer security challenge". They are not claiming it's AI.
That's an ignorant cheap shot. Speculative, risky research is costly because lots of things don't work.
The DARPA site you linked above calls it a "fully automated computer security challenge". They are not claiming it's AI.
Show me a definition of AI that would include papers published at NIPS/ICML or even AAAI and wouldn't include the same techniques as are used in the DARPA Cyber Grand Challenge.
AI is just a label, it isn't something magical.
I'm actually kinda miffed that OpenAI's press release seems to think automatically writing/exploiting programs is an AI problem, and targeted the AI community in their proposals (as opposed to the programming languages community). I'm a program synthesis researcher, and know how to do major aspects of #2 and #3. I know a lot of people who are already working on them (with some quite impressive results, I might add). And none of us are machine learning people.
The important thing is: the Cyber Grand Challenge is funding a dozen teams to do exactly what OpenAI is hiring for. Call it AI or not, it's being done. You might look at the proposal to automatically exploit systems and call it sci-fi, but I look at it and think "Sure, that sounds doable."
Today's program analysis and synthesis technology allows for tools far beyond anything programmers see today. I'm excited to be part of a generation of researchers trying to turn it into the programming revolution we've been waiting for.
I don't think OpenAI would object to proposals from outside the AI/ML field. After all, people are doing Deep Learning based SAT solvers as class projects now, e.g. https://cs224d.stanford.edu/reports/BunzBenedikt.pdf
Oh certainly. DARPA's "MUSE" program (which I'm partially funded by) is $40 million into incorporating big data techniques into program analysis and synthesis. There are systems like FlashFill and Prophet which develop a statistical model of what human-written programs tend to look like, and use that to help prioritize the search space. There are also components in the problem other than the actual synthesis part, namely the natural language part. Fan Long and Tao Lei have a paper where they automatically read the "Input" section of programming contest problems and write an input generator. It's classic NLP, except for the part where they try running it on a test case (simple, but makes a big difference).
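To make the "statistical model prioritizes the search" idea concrete, here's a minimal, hypothetical sketch of enumerative synthesis over a toy expression grammar. The hand-written `score` function stands in for a model learned from human code (systems like Prophet learn such priors from corpora); all names, weights, and the grammar itself are invented for illustration.

```python
import itertools

def expand(depth):
    """Enumerate expression trees over {x, small constants, +, *} up to a depth."""
    if depth == 0:
        return [("x",), ("const", 0), ("const", 1), ("const", 2)]
    subs = expand(depth - 1)
    exprs = list(subs)  # shallower expressions remain candidates
    for a, b in itertools.product(subs, repeat=2):
        exprs.append(("+", a, b))
        exprs.append(("*", a, b))
    return exprs

def evaluate(e, x):
    """Interpret an expression tree on input x."""
    if e[0] == "x":
        return x
    if e[0] == "const":
        return e[1]
    left, right = evaluate(e[1], x), evaluate(e[2], x)
    return left + right if e[0] == "+" else left * right

def score(e):
    """Invented 'prior': pretend humans write '+' more than '*' and favor
    small constants. A real system would learn these weights from code."""
    if e[0] == "x":
        return 1.0
    if e[0] == "const":
        return 0.5 / (1 + e[1])
    op_weight = 0.6 if e[0] == "+" else 0.4
    return op_weight * score(e[1]) * score(e[2])

def synthesize(examples, depth=2):
    """Return the highest-prior expression consistent with all input/output pairs."""
    for e in sorted(expand(depth), key=score, reverse=True):
        if all(evaluate(e, x) == y for x, y in examples):
            return e
    return None

# Search for an expression matching f(x) = 2x + 1 on a few examples;
# the prior decides which consistent candidate is checked (and returned) first.
print(synthesize([(0, 1), (1, 3), (2, 5)]))
```

The point of the prior is purely to reorder the search: every consistent program would eventually be found by brute force, but a model of what human-written code tends to look like lets the synthesizer try plausible candidates first.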
The reverse also is happening, with people incorporating synthesis into machine-learning. The paper by Kevin Ellis and Armando Solar-Lezama (my advisor) is a recent example.
I do get touchy when people label this kind of work "machine learning" and seem oblivious to the fact that an entire separate field exists and has most of the answers to these kinds of problems. Those examples are really both logic-based synthesizers that use a bit of machine learning inside, as opposed to "machine learning" systems.
Also, NLP is at the very least closely aligned with "AI" research, both traditionally and looking at current trends.
I do get touchy when people label this kind of work "machine learning"
Don't ;) (Seriously - it's just a label. Embrace the attention)
In short, you can say "it's just a label," but that's no reason not to fight the battle over words.