"Traditional" program synthesis to me means the kind of stuff Dijkstra was doing 50 years ago, which works quite a bit differently than a lot of the constraint-solving based stuff that has really become hot in the last decade. But, answering what you actually meant to ask:

Oh, certainly. DARPA's MUSE program (which I'm partially funded by) is $40 million into incorporating big-data techniques into program analysis and synthesis. There are systems like FlashFill and Prophet, which build a statistical model of what human-written programs tend to look like and use it to help prioritize the search space. There are also components of the problem other than the actual synthesis part, namely the natural-language part. Fan Long and Tao Lei have a paper where they automatically read the "Input" section of programming-contest problems and write an input generator. It's classic NLP, except for the part where they try running the result on a test case (simple, but it makes a big difference).
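
To make the "prioritize the search space" idea concrete, here is a toy sketch of programming-by-example synthesis over a tiny string DSL. The hand-picked per-operation weights are stand-ins for the statistical model of likely programs that such systems learn; this is an illustration of the flavor, not anyone's actual code:

    from itertools import product

    # Toy DSL: a program is a short sequence of string operations.
    # Each operation carries a made-up "prior" weight standing in for the
    # statistical model of likely programs that a real system would learn.
    OPS = {
        "strip":   (lambda s: s.strip(), 0.30),
        "lower":   (lambda s: s.lower(), 0.25),
        "upper":   (lambda s: s.upper(), 0.20),
        "first3":  (lambda s: s[:3],     0.15),
        "reverse": (lambda s: s[::-1],   0.10),
    }

    def run(program, s):
        for op in program:
            s = OPS[op][0](s)
        return s

    def score(program):
        # Product of per-operation priors; higher means "looks more like
        # something a human would write" in this toy setting.
        p = 1.0
        for op in program:
            p *= OPS[op][1]
        return p

    def synthesize(examples, max_len=2):
        # Enumerate all programs up to max_len, keep those consistent with
        # every input/output example, and return the highest-prior one.
        consistent = [prog
                      for length in range(1, max_len + 1)
                      for prog in product(OPS, repeat=length)
                      if all(run(prog, i) == o for i, o in examples)]
        return max(consistent, key=score) if consistent else None

    # Two examples are enough to force a strip + lowercase program here.
    print(synthesize([("  Hello ", "hello"), (" WORLD", "world")]))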

The reverse is also happening, with people incorporating synthesis into machine learning. The paper by Kevin Ellis and Armando Solar-Lezama (my advisor) is a recent example.

I do get touchy when people label this kind of work "machine learning" and seem oblivious to the fact that an entire separate field exists and has most of the answers to these kinds of problems. Both of those examples are really logic-based synthesizers that use a bit of machine learning inside, as opposed to "machine learning" systems.
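
For contrast, here is a similarly toy sketch of the logic-based side: a program template whose unknown constants an SMT solver fills in so that every input/output example is satisfied (this assumes the z3-solver Python package; again, it only illustrates the style, not any of the systems mentioned):

    # Requires the z3-solver package: pip install z3-solver
    from z3 import Ints, Solver, sat

    def synthesize_affine(examples):
        # Program template f(x) = a*x + b with unknown integer constants a, b.
        # The SMT solver picks a and b so that every example is satisfied;
        # a learned model would only rank or guide this kind of search.
        a, b = Ints("a b")
        solver = Solver()
        for x, y in examples:
            solver.add(a * x + b == y)
        if solver.check() == sat:
            model = solver.model()
            return model[a].as_long(), model[b].as_long()
        return None

    # Examples consistent with f(x) = 3*x + 1.
    print(synthesize_affine([(0, 1), (2, 7), (5, 16)]))  # -> (3, 1)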




I suspect OpenAI is coming from the Neural Turing Machine, etc., approach. But that doesn't preclude other approaches.

Also, NLP is at the very least closely aligned with "AI" research, both traditionally and looking at current trends.

> I do get touchy when people label this kind of work "machine learning"

Don't ;) (Seriously - it's just a label. Embrace the attention)


You're right that it is an opportunity for attention. What actually seems to happen is that a bunch of people who really don't know what they're doing get lots of publicity, recruits, and potentially funding and tech transfer, while we're sitting here with working systems running in production, not getting much. If you look at AI papers that try to touch programs, they have a tendency not to even cite work from the PL community that does the exact same thing, but better. It's kinda like how if you Google "fitness," you're guaranteed to get really bad advice in all your results -- the people who actually know about fitness have lost the PR battle.

In short, you can say "it's just a label," but that's not a reason not to fight the battle over words.



